http://www.newton.ac.uk/programmes/MOS/seminars/2011022414001.html

# MOS
## Seminar
### On the monodromy of the Hitchin connection.
Pauly, C (Montpellier 2)
Thursday 24 February 2011, 14:00-15:00
Seminar Room 1, Newton Institute
#### Abstract
In this talk I will show that the monodromy representation of the projective Hitchin connection on the sheaf of generalized theta functions on the moduli space of vector bundles over a curve has an element of infinite order in its image. I will explain the link with conformal blocks.
https://link.springer.com/article/10.1007%2Fs10584-018-2275-2

Climatic Change, Volume 150, Issue 3–4, pp 181–194
# The fate of Lake Baikal: how climate change may alter deep ventilation in the largest lake on Earth
• Sebastiano Piccolroaz
• Marco Toffolon
Open Access
Article
## Abstract
Lake Baikal is the oldest, deepest, and most voluminous freshwater lake on Earth. Despite its enormous depth, large volumes of cold, oxygenated surface water episodically (almost twice a year) sink to the bottom of the lake due to thermobaric instability, with consequent effects on the ecology of the whole lake. A minimal one-dimensional model is used to investigate how changes in the main external forcings (i.e., wind and lake surface temperature) may affect this deep ventilation mechanism. The effect of climate change is evaluated considering the IPCC RCP8.5 scenario and some idealized scenarios, and is quantified by (i) estimating the mean annual downwelling volume and temperature and (ii) analyzing vertical temperature and dissolved oxygen profiles. The results suggest that the strongest impact is produced by alterations of the wind forcing, while deep ventilation is resistant to rising lake surface temperature: lake warming can shift the seasons when deep ventilation occurs, but does not dramatically modify their duration. Overall, the results show that Lake Baikal is sensitive to climate change, to an extent that the ecosystem and water quality of this unique lacustrine system may undergo profound disturbances.
## 1 Introduction
Lake Baikal is the lake of records: it is the oldest (25 million years), deepest (maximum depth 1,642 m), and most voluminous (23,615 km³) freshwater lake on Earth (Fig. 1). It contains an outstanding variety of endemic species that adapted over thousands of years to singular conditions (e.g., great depth, several months of ice cover, high water clarity, low nutrient concentrations) (Bondarenko et al. 2006; Moore et al. 2009). This exceptional endemism earned the lake its designation as a UNESCO World Heritage site in 1996.
Lake Baikal is also a large freshwater system hosting a broad range of fascinating physical phenomena, which capture the attention of the scientific community. Owing to its great depth and climatic conditions, it is a fine example of a thermobarically stratified lake (e.g., Boehrer and Schultze 2008), where thermobaricity is the combined dependence of water density (ρ) on temperature (T) and pressure (P) (e.g., McDougall 1987). The thermobaric effect causes the temperature of maximum density Tρmax (about 4 °C at atmospheric pressure) to decrease with depth, and temperature profiles crossing the Tρmax line to have a maximum at the intersection (Eklund 1963, 1965). Accordingly, in Lake Baikal a weak (10⁻⁴–10⁻⁵ °C/m) direct stratification, with temperatures warmer than Tρmax and ranging from ~ 3.50 °C to ~ 3.35 °C, is permanently present below ~ 250 m, while the upper layers are either directly stratified (surface water warmer than Tρmax) or inversely stratified (surface water colder than Tρmax, or ice cover), depending on the season.
Previous studies (Vereshchagin 1936; Weiss et al. 1991; Shimaraev et al. 1993) demonstrated that the mixing regime of the lake is intimately related to thermobaricity. Lake Baikal, in fact, is renowned for the occurrence of periodic large-scale recirculation triggered by thermobaric instability, causing renewal of hypolimnetic water by mixing and replacement with surface water. During inverse stratification, unstable conditions occur when relatively cold surface water (T < Tρmax) is moved beneath a threshold depth called the compensation depth (hc, i.e., the depth where the sinking water and the local water have the same density). Beyond that depth, the sinking water is heavier than the local water and sinks towards the bottom of the lake, stopping where its density equals that of the local water (Fig. 2b) or when it reaches the bottom (Fig. 2c). Conversely, if the surface water is displaced to a depth shallower than hc, it is lighter than the local water and rises back to the surface due to buoyancy (Fig. 2a).
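The sink-or-rise criterion can be illustrated with a highly simplified numerical sketch (this is not the authors' model). The linear decrease of the temperature of maximum density with depth (here ~0.0021 °C/m) and the quadratic equation of state are idealizations, and the coefficient `a` is an arbitrary illustrative value:

```python
def t_md(depth_m):
    """Approximate temperature of maximum density (degC) at a given depth
    (idealized linear decrease, ~0.0021 degC per meter)."""
    return 3.98 - 0.0021 * depth_m

def density_anomaly(temp_c, depth_m, a=8.0e-3):
    """Idealized density anomaly (kg/m^3) relative to the maximum density
    at this depth; quadratic in (T - T_md), coefficient 'a' illustrative."""
    return -a * (temp_c - t_md(depth_m)) ** 2

def parcel_sinks(parcel_temp, local_temp, depth_m):
    """True if a parcel displaced to depth_m is denser than the local water,
    i.e., it has been pushed below the compensation depth and keeps sinking."""
    return density_anomaly(parcel_temp, depth_m) > density_anomaly(local_temp, depth_m)

# A 3.0 degC surface parcel pushed to 400 m, where local water (~3.45 degC)
# is farther from the local T_md, is denser than its surroundings:
# parcel_sinks(3.0, 3.45, 400)  -> True
```

With these assumed numbers, t_md(250) falls in the ~3.3–3.6 °C range, consistent with the deep temperatures quoted above.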
Recent studies suggested (Wüest et al. 2005; Boehrer and Schultze 2008) and successively demonstrated (Schmid et al. 2008; Tsimitri et al. 2015) that the primary cause of thermobaric instability in Lake Baikal is coastal downwelling due to Ekman transport. The phenomenon can take place twice a year: in late spring (June, after the melting of the ice cover) and early winter (December/January, before the surface freezes). In these periods, the lake is stably but weakly inversely stratified in its upper part (Fig. 2b, c), and sufficiently strong winds may overcome buoyancy forces, triggering thermobaric instability. Wind forcing and lake stratification are therefore the key controls of the phenomenon, while the steep shores and elongated shape of the lake promote the occurrence of coastal downwelling (see also Toffolon (2013) for a theoretical analysis).
Since deep recirculation determines the replacement and mixing of deep water with surface water rich in dissolved gasses, some authors refer to this phenomenon as deep ventilation (e.g., Shimaraev et al. 1993; Moore et al. 2009), a term typically used by oceanographers (e.g., Khatiwala et al. 2012). Relevant ecological implications result from deep ventilation, among which the most evident is the high oxygen content (up to 80% of saturation; Weiss et al. 1991) along the entire water column, which allows aquatic fauna to exist even at great depths (Chapelle and Peck 1999). Deep mixing influences the recycling of carbon and nutrients, playing a key role in lakes’ biological productivity (e.g., Dokulil 2014). Therefore, any changes in the current environmental conditions able to affect deep ventilation are likely to have significant implications for the equilibrium of the lake, possibly threatening its unique ecosystem.
Relatively intense deep-mixing activity in Lake Baikal was observed and monitored especially in the South Basin (Wüest et al. 2005; Schmid et al. 2008; Tsimitri et al. 2015), where the downwelling volume was estimated to range between 10 and 100 km³ per year (Wüest et al. 2005; Schmid et al. 2008; Shimaraev et al. 2011; Piccolroaz and Toffolon 2013). For this reason, and since the South Basin (maximum depth 1461 m, volume 6360 km³) is the region where most data are available, here we focused on the response of deep ventilation to climate change in this portion of the lake. To this aim, we used a simplified one-dimensional model (Piccolroaz and Toffolon 2013) and considered scenarios constructed by changing the surface boundary conditions, namely wind energy and lake surface temperature. Results obtained under these scenarios are compared to current conditions to investigate the role played by the main external forcings on deep ventilation and to quantify the impact that climate change is likely to have on deep-water oxygenation.
Although the analysis is specific for Lake Baikal, it contributes to the general understanding of how deep thermobaric convection in lakes can be affected by climate change. In fact, thermobaric stratification is a trait common to several deep temperate lakes (Boehrer and Schultze 2008) among which the most famous example is Crater Lake (USA, McManus et al. 1993; Crawford and Collier 1997, 2007; Wood et al. 2016), although other lakes can be found in Japan (Boehrer et al. 2008), Norway (Strøm 1945; Boehrer et al. 2013), and Canada (Johnson 1964; Laval et al. 2012).
## 2 Methods
### 2.1 Description of the model
The study is carried out using the minimal one-dimensional numerical model presented in Piccolroaz and Toffolon (2013), specifically developed to investigate deep thermobaric convection in Lake Baikal but applicable to other deep lakes. The use of a more complex three-dimensional model would be unfeasible due to data scarcity for model validation and the high computational cost of long-term climate change simulations. An exhaustive presentation of the model is provided in the Supplementary Material (we also refer the interested reader to Piccolroaz and Toffolon (2013) for all technical details about model implementation and calibration); here we summarize the main features. The model solves the following reaction-diffusion equation for the generic tracer C (here dissolved oxygen, DO) in addition to the temperature T:
$$\frac{\partial C}{\partial t}=\frac{1}{S}\frac{\partial }{\partial z}\left(S\ {D}_z\frac{\partial C}{\partial z}\right)+R$$
(1)
where t is the time, z is the vertical direction (positive downward), S is the horizontal surface at a fixed depth, R is the source/sink term, and Dz is vertical diffusivity.
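The structure of Eq. (1) can be illustrated with a minimal explicit finite-volume step on a vertical column with depth-varying horizontal area S(z). This is a sketch of the equation, not the authors' discretization:

```python
import numpy as np

def diffuse_step(c, S, Dz, dz, dt, R=0.0):
    """One explicit step of dC/dt = (1/S) d/dz (S * Dz * dC/dz) + R,
    with no-flux conditions at the top and bottom of the column."""
    S_f = 0.5 * (S[1:] + S[:-1])            # horizontal area at cell interfaces
    D_f = 0.5 * (Dz[1:] + Dz[:-1])          # diffusivity at cell interfaces
    flux = -S_f * D_f * np.diff(c) / dz     # diffusive flux across interfaces
    dcdt = np.zeros_like(c)
    dcdt[:-1] -= flux / (S[:-1] * dz)       # flux leaving each upper cell
    dcdt[1:] += flux / (S[1:] * dz)         # flux entering each lower cell
    return c + dt * (dcdt + R)
```

Because the divergence is area-weighted, the column total of S·C·dz is conserved when R = 0, mirroring the conservative form of Eq. (1).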
Vertical transport is simulated using two Lagrangian-based algorithms, which are at the core of the model: one for simulating wind-driven convective transport (based on wind speed W and duration Δtwind, see Eqs. (2) and (3) below) and the other for simulating buoyancy-driven stabilization of unstable regions of the water column. Consistent with this Lagrangian approach, the water column is discretized dividing the domain into n sub-volumes having the same individual volume, which allows for an efficient and easy handling of vertical mixing. In fact, in both Lagrangian-based algorithms, the water column is simply resorted by exchanging the position of the sub-volumes affected either by wind or by buoyancy forces. While reordering the water column, mixing between each pair of exchanged volumes is accounted for through a mixing coefficient. The implementation of a similar Lagrangian algorithm proved to be successful also in different types of problem, such as the formation of double-diffusive small-scale structures in lakes (Toffolon et al. 2015).
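The buoyancy-driven stabilization step described above can be sketched as follows. This is a schematic, simplified version: equal-volume parcels are resorted until the column is stable, and each pairwise exchange partially mixes the two parcels through a mixing coefficient whose value here is illustrative, not the calibrated one:

```python
def stabilize(density, mix=0.1):
    """Resort a top-to-bottom list of parcel densities into a stable
    (non-decreasing) order, mixing each swapped pair by a fraction 'mix'."""
    d = list(density)
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(d) - 1):
            if d[i] > d[i + 1]:               # denser parcel above a lighter one
                hi, lo = d[i], d[i + 1]
                mean = 0.5 * (hi + lo)
                # exchange positions, nudging each value toward the pair mean
                d[i] = lo + mix * (mean - lo)
                d[i + 1] = hi + mix * (mean - hi)
                swapped = True
    return d
```

With mix = 0 the routine reduces to a pure resorting; any mix in (0, 1) conserves the column-mean density while damping the exchanged contrast, which is the role of the mixing coefficient in the text.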
The algorithm for wind-driven convective transport is based on the following key quantities:
$${e}_w=\xi {C}_D^{0.5}W$$
(2)
$${V}_{\mathrm{down}}=\eta {C}_D{W}^2\Delta {t}_{\mathrm{wind}}$$
(3)
where ew is the energy per unit volume provided by the wind, Vdown is the surface volume of water moved by the wind forcing, CD is the wind drag coefficient, and ξ and η are the main calibration parameters of the model. Equations (2) and (3) rely on wind speed W and duration Δtwind data only. The specific energy provided by the wind is compared to the amount of energy needed to move Vdown downwards against the buoyancy forces, thus assessing whether the energy input is sufficient to move Vdown below the compensation depth hc and trigger thermobaric instability (see Fig. 2 for a schematic). In all cases, the arrival depth of the sinking volume Vdown is determined by progressively moving it downwards as long as the energy provided by the wind is sufficient to balance the change in potential energy resulting from the consecutive switches of position between sub-volumes.
Equations (2) and (3) are derived from physical principles with reference to the Ekman transport produced by along-shore winds in elongated lakes, consistent with the primary cause of deep thermobaric convection in Lake Baikal (Schmid et al. 2008; Tsimitri et al. 2015). However, their validity extends to the most general case, in that the energy flux from the wind to the lake (Ew = ew Vdown) is proportional to the third power of the wind speed, in accordance with basic laws of physics (e.g., Imboden and Wüest 1995). All other phenomenological details are implicitly accounted for by the calibration coefficients ξ and η.
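Equations (2) and (3) transcribe directly into code; the numerical values of the drag coefficient and of the calibration parameters ξ and η below are placeholders, not the calibrated values of Piccolroaz and Toffolon (2013):

```python
C_D = 1.3e-3   # wind drag coefficient (illustrative value)
XI = 1.0       # calibration parameter xi (placeholder)
ETA = 1.0      # calibration parameter eta (placeholder)

def wind_energy_per_volume(w):
    """Eq. (2): e_w = xi * C_D**0.5 * W (energy per unit volume)."""
    return XI * C_D**0.5 * w

def downwelling_volume(w, dt_wind):
    """Eq. (3): V_down = eta * C_D * W**2 * dt_wind."""
    return ETA * C_D * w**2 * dt_wind

def wind_energy_flux(w, dt_wind):
    """E_w = e_w * V_down, proportional to W**3 as noted in the text."""
    return wind_energy_per_volume(w) * downwelling_volume(w, dt_wind)

# Doubling the wind speed increases the energy flux roughly eightfold
# (cubic scaling), regardless of the placeholder parameter values.
```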
The same model was successfully applied also to Crater Lake (Wood et al. 2016), where mechanisms other than Ekman transport drive deep thermobaric convection due to the different shape, dimension, and climate characteristics (Crawford and Collier 1997, 2007). In their work, Wood et al. (2016) compared the performance of the proposed model to that of the well-documented one-dimensional lake model DYRESM (Imerito 2014). The inadequacy of the latter model to properly simulate deep ventilation clearly showed the need to explicitly account for thermobaricity in one-dimensional models of these types of lakes. Based on these considerations and on the previous results obtained by Piccolroaz and Toffolon (2013), the proposed model is a suitable one-dimensional option for Lake Baikal.
In the present study, the domain was discretized into 159 sub-volumes having a volume of 40 km3 and thickness ranging from 5 to 66 m, i.e., the same setup used in Piccolroaz and Toffolon (2013), and the model parameters were the same as in this previous study.
### 2.2 External forcing and boundary conditions
In order to address the lack of measurements available for Lake Baikal, the model was designed to require few input variables: wind speed as external forcing and lake surface temperature (Tsurf) as upper boundary condition. This feature makes the model particularly parsimonious, and thus attractive for cases where data are scarce and the use of more complex (e.g., three-dimensional) models is not possible, or is questionable due to the lack of information needed to validate the results.
Observational probabilistic distributions of wind speed were built based on the wind atlas contained in Rzheplinsky and Sorokina (1977), which collected wind data from fixed stations at the coast, on islands, and from ships over 10 years of ice-free seasons (May–December, 1959–1968). This dataset is more representative of the conditions at the lake surface than data recorded at weather stations installed around the lake. In fact, the thermal inertia of the massive water volume of Lake Baikal leads to the onset of local atmospheric pressure gradients able to generate winds that are substantially different in intensity and direction compared to the surrounding region (Shimaraev et al. 1994). Owing to the strong seasonality of the wind forcing over the lake, two distinct cumulative frequency functions of wind speed were derived (Fig. 3a): one for the warm season (May–September) and the other for the cold season (October–December).
A probabilistic distribution of Tsurf was extracted from a dataset of vertical temperature profiles covering the period 2000–2008 (courtesy of Prof. Wüest, Eawag, Switzerland), collected at a mooring station in the South Basin (see Schmid et al. (2008) for details). The 9-year series of Tsurf was constructed based on the data of the uppermost thermistor, whose position changed slightly from year to year: mean, minimum, and maximum depth being 17 m, 9 m, and 30 m, respectively. The annual evolution of Tsurf averaged over the 9-year measurement period is shown in Fig. 3b, and the corresponding standard deviation is shown in Fig. 3c.
In order to provide a robust description of deep ventilation statistics, we ran long-term simulations covering a 1000-year period with stationary climate conditions. We used a stochastic approach to determine the sequence of wind and Tsurf conditions to impose as boundary conditions, combining the use of the probabilistic distributions described above and of the ECMWF ERA-40 reanalysis dataset. The ECMWF ERA-40 reanalysis dataset contains wind speed and air temperature for the period 1957–2002 with a temporal resolution of 6 h and provides a realistic chronological sequence of meteorological events (otherwise difficult to reconstruct), although not fully representative of the actual meteorological conditions at the lake due to its coarse resolution (~ 125 km). These data were statistically downscaled to local scale using the quantile-mapping approach (Panofsky and Brier 1968) and the in situ, observational statistical distribution of W and Tsurf discussed above (see Piccolroaz and Toffolon (2013) for details). Note that in the case of Tsurf, the downscaling procedure used air temperature as predictor, which is reasonable due to the strong air–water temperature correlation (see, e.g., Piccolroaz et al. 2013; Toffolon et al. 2014a). In this way, a 1000-year long simulation was generated by randomly extracting a sequence of 1000 years from the ECMWF ERA-40 dataset.
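The quantile-mapping step can be sketched minimally: each coarse-model value is replaced by the observed value having the same non-exceedance probability in the reference samples. The reference arrays below are illustrative, not the actual wind or temperature data:

```python
import numpy as np

def quantile_map(model_values, model_ref, obs_ref):
    """Map model values onto the observed distribution by matching
    empirical quantiles of the two reference samples."""
    model_ref = np.sort(np.asarray(model_ref))
    # non-exceedance probability of each value in the model reference sample
    p = np.searchsorted(model_ref, model_values, side="right") / model_ref.size
    p = np.clip(p, 0.0, 1.0)
    # evaluate the observed empirical quantile function at those probabilities
    return np.quantile(np.asarray(obs_ref), p)

# e.g., if the observed distribution is twice as wide as the model one,
# a mid-range model value maps to roughly twice its raw value:
corrected = quantile_map([50.0], model_ref=np.arange(100.0),
                         obs_ref=2.0 * np.arange(100.0))
```

In the paper's setting, `model_ref` would come from the ERA-40 series and `obs_ref` from the in situ distributions of W and Tsurf described above.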
### 2.3 Synthetic climate change scenarios
We prepared a suite of synthetic climate change scenarios varying the probabilistic distributions of W and Tsurf. The scenarios were kept simple to easily assess the effects that changes in one or both variables are expected to have on deep ventilation. This choice was also motivated by the lack of future climate studies in the Lake Baikal region. This is particularly true for wind speed, for which ad hoc future projections do not exist. We therefore introduced two idealized yet reasonable scenarios by simply assuming that either the May–September or the October–December probabilistic distribution of wind speed (Fig. 3a) holds for the entire year, thus defining a calm wind (CW) and a strong wind (SW) scenario, respectively.
We used the CMIP5 multi-model mean projections under the IPCC AR5 RCP8.5 high emission scenario for the grid cells covering the South Basin of Lake Baikal to predict the corresponding increase in Tsurf in the period 2041–2050, through the air2water model (Piccolroaz et al. 2013; Piccolroaz 2016). The air2water model is a simple but mechanistically based tool to predict lake surface temperature based on air temperature only, which was shown to be effective for use in climate change studies (Piccolroaz et al. 2018). The model can be classified as a hybrid model (Toffolon and Piccolroaz 2015) combining a physically based equation with a stochastic calibration of model parameters. The details of the model and of its application are provided in the Supplementary Material. Based on the air2water results, we defined the global warming (GW) scenario for Tsurf shown in Fig. 3b, where the curves depict the mean annual cycle. The standard deviation of Tsurf is assumed to be the same for all scenarios and kept equal to that of measurements (Fig. 3c).
The increase in Tsurf expected in 2041–2050 relative to current (2000–2008) conditions is shown in Fig. 3d (scenario GW). The largest warming of Tsurf is expected in the warm season (August–October), with mean and maximum increases of ~ 1.9 °C and ~ 4 °C, respectively. This is consistent with observations covering the last century, which registered the strongest warming of water temperature (measured at 25 m depth) in fall (Hampton et al. 2008). Moreover, the warm season is the one characterized by the highest variability (see the large standard deviation of the Tsurf measurements in Fig. 3c), and thus the most responsive to changes in the external forcing. The amplified warming of Tsurf in this period compared to that of air temperature (~ 1.4 °C on average over the year, Fig. 3d) is essentially due to the earlier onset of strong thermal stratification, consistent with what has been reported for other deep lakes (see, e.g., Piccolroaz et al. 2015; Woolway and Merchant 2017). Conversely, due to the huge heat capacity of the lake’s surface-mixed layer when thermal stratification is weak, Tsurf is expected to undergo minor changes during the rest of the year, including the period favorable for deep ventilation, i.e., when the lake is weakly inversely stratified and Tsurf is close to Tρmax (see Fig. 2). Wind (CW and SW) and global warming (GW) scenarios were combined to produce the scenarios GW-CW and GW-SW.
We defined an additional warming scenario (GW*), which is not realistic but useful to evaluate the effect that a possible shortening of the deep ventilation season due to Tsurf warming would have on the renewal of deep layers. To this end, we described the warming through a trapezoidal-shaped function, with steep sides coinciding with the deep ventilation seasons. To allow a comparison with the more realistic GW scenario, the GW* scenario was constructed to have the same mean annual Tsurf warming (i.e., 0.70 °C, about 50% of the mean air temperature warming).
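An illustrative construction of such a GW*-style curve (not the authors' exact function) is a piecewise-linear annual cycle with steep flanks near the two downwelling windows, rescaled so that the mean annual warming matches the 0.70 °C of the GW scenario; the breakpoint days are assumed values for illustration:

```python
import numpy as np

def trapezoid_warming(days=np.arange(365.0), mean_target=0.70):
    """Trapezoidal annual cycle of Tsurf warming (degC): no warming around
    the two downwelling windows (early June, late December), a plateau in
    between, rescaled to a prescribed mean annual warming."""
    xp = [0.0, 160.0, 185.0, 320.0, 345.0, 364.0]   # assumed breakpoints (day of year)
    fp = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
    w = np.interp(days, xp, fp)
    return w * mean_target / w.mean()   # rescale to the target annual mean
```

By construction the curve integrates to the same mean annual warming as GW while concentrating it away from (and steepening it around) the ventilation seasons, which is the point of the GW* experiment.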
## 3 Results
A set of 1000-year simulations was run under the different climate scenarios listed in Table 1, including the current-condition (control) scenario. In all simulations, the first 100 years were used as a warm-up period, during which the initial (current) conditions of W and Tsurf were gradually modified to match those prescribed by the different scenarios.
Table 1 Climate change scenarios and their effect on deep ventilation: mean downwelling temperature Tdown, mean annual downwelling volume Vdown, and mean DO concentration of downwelling water DOdown, evaluated for downwelling events deeper than 1300 m. Statistics exclude the first 100-year warm-up period. All variables are presented as mean ± standard deviation

| Scenario | Description | Tdown [°C] | Vdown [km³] | DOdown [mg/l] | Spring downwelling | Winter downwelling |
|---|---|---|---|---|---|---|
| Control | Current condition* | 3.28 ± 0.06 | 88 ± 71 | 11.56 ± 0.50 | 3 Jun ± 5 | 20 Dec ± 6 |
| CW | Calm wind | 3.42 ± 0.07 | 65 ± 44 | 11.23 ± 0.58 | 5 Jun ± 5 | 17 Dec ± 6 |
| SW | Strong wind | 3.06 ± 0.06 | 102 ± 82 | 11.82 ± 0.43 | 30 May ± 4 | 24 Dec ± 5 |
| GW | Global warming | 3.30 ± 0.06 | 87 ± 70 | 11.52 ± 0.48 | 31 May ± 4 | 24 Dec ± 6 |
| GW-CW | Global warming + calm wind | 3.41 ± 0.07 | 61 ± 43 | 11.19 ± 0.57 | 1 Jun ± 5 | 23 Dec ± 7 |
| GW-SW | Global warming + strong wind | 3.05 ± 0.07 | 100 ± 81 | 11.81 ± 0.44 | 27 May ± 4 | 28 Dec ± 7 |
| GW* | Global warming (synthetic) | 3.52 ± 0.07 | 67 ± 56 | 11.28 ± 0.47 | 30 May ± 3 | 24 Dec ± 3 |

*The slight differences in Tdown and Vdown between the values presented here and those reported in Piccolroaz and Toffolon (2013) are due to the different randomly generated 1000-year simulation; all calibration parameters are unaltered
Figure 4a shows the comparison between temperature profiles on February 15 obtained for the different scenarios (all profiles are averaged over the entire simulation, excluding the first 100-year warm-up period). Observed profiles (period 2000–2008) are also shown for comparison. The figure clearly shows that changes in the wind dynamics have strong impacts: deep-mixing activity is markedly enhanced under the SW scenario, causing a progressive cooling of hypolimnetic waters, while the opposite (warming of deep waters) occurs under the CW scenario due to reduced downwelling volumes. On the contrary, the increase of Tsurf expected in scenario GW, taken alone or in combination with the wind scenarios (GW-CW and GW-SW), does not have a relevant effect on the vertical water temperature profile, which undergoes only a slight warming compared to its counterparts (the control, CW, and SW scenarios, respectively).
The same conclusions emerge from the analysis of the downwelling parameters listed in Table 1, where robust statistical estimates were possible thanks to the availability of long-term simulation results. In particular, the mean annual downwelling temperature Tdown and downwelling volume Vdown, both evaluated for downwelling events deeper than 1300 m, were taken as significant quantities characterizing deep ventilation in the lake. The different contributions of warming Tsurf and changing wind forcing are evident: relative to the control scenario, GW does not alter Vdown, while Vdown decreases by 26% under CW and increases by 16% under SW. Additionally, Tdown does not undergo significant changes under the GW scenario, while it increases by 4% under CW and decreases by 7% under SW. We note that such a relatively small difference in Tdown is actually significant considering the small temperature gradients in the hypolimnion of Lake Baikal. The differences between the statistics of GW-CW and GW-SW and those of their counterparts CW and SW are not significant, confirming the minor impact of the GW scenario.
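As a sanity check, the percentage changes quoted above follow directly from the mean values in Table 1 (control: Vdown = 88 km³, Tdown = 3.28 °C); a short illustrative script:

```python
# Verify the quoted percentage changes against the Table 1 means.
v = {"control": 88, "CW": 65, "SW": 102}          # Vdown [km3]
t = {"control": 3.28, "CW": 3.42, "SW": 3.06}     # Tdown [degC]

def pct(new, ref):
    """Percentage change of 'new' relative to 'ref', rounded to integer."""
    return round(100.0 * (new - ref) / ref)

assert pct(v["CW"], v["control"]) == -26   # CW: Vdown decreases by 26%
assert pct(v["SW"], v["control"]) == 16    # SW: Vdown increases by 16%
assert pct(t["CW"], t["control"]) == 4     # CW: Tdown increases by 4%
assert pct(t["SW"], t["control"]) == -7    # SW: Tdown decreases by 7%
```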
The influence of climate change on deep ventilation is also visible in the deep oxygen concentration, which ultimately affects the peculiar ecosystem of Lake Baikal. DO profiles simulated under the different scenarios are shown in Fig. 4b, in comparison with available DO measurements and the simulated profile under the control scenario. Unlike temperature, the change in DO in the hypolimnion results from the combination of two effects: the change in Vdown according to the results summarized in Table 1 and the dependence of DO saturation conditions (i.e., the upper boundary condition for DO) on Tsurf. The latter factor influences deep oxygenation by altering both the diffusive flux of DO from the atmosphere to the lake throughout the year and the DO concentration of the downwelling water according to its temperature (i.e., Tdown, see Table 1). As for temperature (Fig. 4a), the CW and SW scenarios show a stronger effect than GW alone (compared to the control scenario, the mean DO concentration below 1300 m depth changes by − 0.6 mg/l and + 0.5 mg/l in the first two cases, and by − 0.1 mg/l in the last).
The secondary importance of Tsurf warming for deep temperature and DO profiles is attributable to the fact that only a small temporal shift in the deep ventilation period is expected for GW compared to the control scenario, while its duration does not undergo significant changes (Table 1). In fact, Tsurf warming is likely to affect deep ventilation only if the periods when Tsurf is close to Tρmax are significantly shortened or extended. It is therefore interesting to also analyze the synthetic (and unrealistic) GW* scenario, specifically constructed to evaluate this effect. Results suggest that, in response to an almost halved downwelling period (Table 1) caused by a marked steepening of Tsurf (Fig. 3b, d), the warming effect would become relevant and clearly visible in the temperature profiles and, owing to the combination of effects discussed above, even more so in the DO profiles. Relative to the control scenario, Vdown decreases by 24%, Tdown increases by 7%, and the mean DO concentration below 1300 m depth decreases by 0.6 mg/l.
## 4 Discussion
The analysis of deep-water renewal described in the previous sections indicates a complex but predictable response of Lake Baikal. The results show that the main mechanisms affecting deep ventilation under a changing climate are well identifiable and that the role played by the main external variables can be assessed and quantified. Overall, changes in the wind forcing have been shown to alter deep ventilation significantly more than rising lake surface water temperature, the latter factor playing a secondary role even under a severe climate change scenario (the GW scenario was constructed from the most severe IPCC AR5 emission scenario, i.e., RCP8.5). In fact, it is not just the intensity of Tsurf warming that dictates the impact on deep ventilation, but more importantly its timing. This emerged clearly from comparing the response of the lake under two Tsurf scenarios characterized by the same mean annual warming but two different warming distributions during the year (Fig. 3d): realistic in one case (GW) and artificial in the other (GW*). In the first case (GW), the main Tsurf warming occurs far from the downwelling periods, and the duration of the periods favorable to downwelling is not significantly altered, resulting in a negligible impact on deep ventilation. By contrast, in the second case, the warming scenario was deliberately shaped to shorten the favorable periods, thus producing a marked reduction of thermobaric-driven deep ventilation. Such a completely unrealistic scenario (GW*) can be seen as an artificial experiment demonstrating the importance of the duration of the ventilation periods.
The high resistance of deep thermobaric convection to global warming is an important result for Lake Baikal, but can be reasonably extended to other deep thermobarically stratified lakes. This complements the conclusions of Boehrer et al. (2008, 2013) that deep-water temperature in thermobarically stratified lakes is much less susceptible to changes in surface temperature than in lakes where deep-water temperature is chiefly controlled by winter thermal conditions at the surface. Recent studies have shown that global warming is expected to inhibit the intensity and duration of deep mixing in many of these lakes, with consequent deep-water warming and deoxygenation. Some examples are lakes Tahoe (USA, Sahoo et al. 2013), Iseo (Italy, Valerio et al. 2015), Geneva (Switzerland/France, Schwefel et al. 2016), and Garda (Italy, Salmaso et al. 2017). Another evocative example is Crater Lake, for which Wood et al. (2016) showed that, under the RCP8.5 scenario, the lake will undergo a thermal regime shift progressively transitioning from dimictic to warm monomictic or to oligomictic, decreasing the frequency of episodic deep downwelling events. This is not likely to occur in Lake Baikal, for which projected air temperature in winter will still remain well below 0 °C even under the RCP8.5 scenario, at least in the near future.
A catastrophic shift occurred in Lake Tanganyika, the second deepest lake in the world, where the warming of lake surface temperature combined with the weakening of winds progressively reduced the mixing depth, causing a significant decline in oxygen concentrations and in primary productivity rates (O’Reilly et al. 2003). Despite the inherent differences between the two lakes (Lake Tanganyika is a tropical, meromictic lake), we can speculate that a GW-CW scenario would likely affect primary productivity in Lake Baikal as well, where the internal recycling of nutrients from deep to shallow depths was estimated to be of the same order of magnitude as all external inputs (Müller et al. 2005; Moore et al. 2009).
We remark that the present analysis was based on idealized (yet realistic) scenarios of wind forcing. A more complex behavior could be expected if, besides the seasonal distribution of wind speed, the temporal distribution of wind events were also affected by climate change. However, the definition of reliable wind speed scenarios is typically associated with large uncertainties (Chen et al. 2012; Carvalho et al. 2017) and is likely hampered in Lake Baikal, where local pressure gradients and strong lake–atmosphere interactions may challenge the use of climate models. Contrasting scenarios have been formulated for the lake area: Shimaraev et al. (1994) suggested that warming will likely generate greater wind activity, while Potemkina et al. (2018) observed that winds have weakened over recent decades. Further efforts are therefore required in this direction, given the importance of winds in controlling stratification and mixing dynamics in deep lakes, as clearly demonstrated in some recent works (e.g., Austin and Allen 2011; Butcher et al. 2015; Valerio et al. 2015; Wood et al. 2016), including the present one.
Although the results certainly depend on the choice of climate change scenario, we propose here a meaningful description of the fundamental response of Lake Baikal to changing external forcing. These results respond to a major research need raised by Moore et al. (2009), concerning the importance of understanding how deep ventilation and mixing processes are likely to change in the future, due to their major effect on the oxygenation of deep layers and the transport of nutrients from deep to shallow depths. In fact, previous climate change studies on Lake Baikal were mainly aimed at assessing the consequences for the lake ecosystem through the use of multivariate relationships between biotic and abiotic (e.g., physical, chemical, climatic) variables, but without the support of any hydrodynamic model of the lake (see, e.g., Mackay et al. 2006; Hampton et al. 2008).
## 5 Conclusions
In the present work, we presented the first detailed sensitivity analysis of the response of Lake Baikal to climate change, showing that this lake is sensitive to climate change to an extent that its ecosystem and water quality may undergo profound disturbances. We showed that deep ventilation is sensitive to changes in wind speed, but resistant to changes in lake surface water temperature. This suggests that improving the definition of future wind scenarios is a main research need and that more attention should be paid to properly including these scenarios in climate change studies of deep lakes, an aspect that is often overlooked. The self-resistance of thermobaric instability to global warming is inherent in the system, as it is ensured by the large thermal inertia of the lake during the downwelling periods (when the lake is nearly homothermal), which mitigates changes in lake surface water temperature in these periods of the year. This is a trait that can reasonably be extended to other deep thermobarically stratified lakes, and it introduces an interesting difference relative to monomictic or oligomictic lakes in warmer climates which, on the contrary, are particularly sensitive to a warming climate. Further research is needed to assess how changes in climate drivers are expected to affect the lake ecosystem, directly and through their influence on the lake's physical processes. To this aim, we believe that additional efforts should be put into deeper collaboration between physicists and biologists, trying to overcome the current fragmentation of the limnological community into specialized fields with limited interaction among each other (Lewis 1995; Salmaso and Mosello 2010; Toffolon et al. 2014b).
## Notes
### Acknowledgments
The authors are grateful to Alfred Wüest and his research group at Eawag (Switzerland) for providing the temperature data and for fruitful discussion during the initial stage of this research. The authors thank two anonymous reviewers and Mathew Wells for their comments and suggestions, which helped to improve the manuscript.
The IPCC AR5 RCP8.5 projections of air temperature and historical observations at the Irkutsk meteorological station were downloaded from http://climexp.knmi.nl and the ECMWF ERA-40 reanalysis data set from the ECMWF data server (http://data-portal.ecmwf.int, thanks to Samuel Somot and Clotilde Dubois, CNRM-Météo France for technical support). The air2water model is available at https://github.com/spiccolroaz/.
## Supplementary material
10584_2018_2275_MOESM1_ESM.docx (903 kb)
ESM 1 (DOCX 902 kb)
## References
1. Austin JA, Allen J (2011) Sensitivity of summer Lake Superior thermal structure to meteorological forcing. Limnol Oceanogr 56:1141–1154.
2. Boehrer B, Schultze M (2008) Stratification of lakes. Rev Geophys 46:RG2005.
3. Boehrer B, Fukuyama R, Chikita K (2008) Stratification of very deep, thermally stratified lakes. Geophys Res Lett 35:L16405.
4. Boehrer B, Golmen L, Løvik JE et al (2013) Thermobaric stratification in very deep Norwegian freshwater lakes. J Great Lakes Res 39:690–695.
5. Bondarenko NA, Tuji A, Nakanishi M (2006) A comparison of phytoplankton communities between the ancient lakes Biwa and Baikal. Hydrobiologia 568:25–29
6. Butcher JB, Nover D, Johnson TE, Clark CM (2015) Sensitivity of lake thermal and mixing dynamics to climate change. Clim Chang 129:295–305.
7. Carvalho D et al (2017) Potential impacts of climate change on European wind energy resource under the CMIP5 future climate projections. Renew Energy 101:29–40.
8. Chapelle G, Peck LS (1999) Polar gigantism dictated by oxygen availability. Nature 399:114–115.
9. Chen L, Pryor SC, Li D (2012) Assessing the performance of intergovernmental panel on climate change AR5 climate models in simulating and projecting wind speeds over China. 117:D24102.
10. Crawford GB, Collier RW (1997) Observations of a deep-mixing event in Crater Lake, Oregon. Limnol Oceanogr 42:299–306.
11. Crawford GB, Collier RW (2007) Long-term observations of deepwater renewal in Crater Lake, Oregon. Hydrobiologia 574:47.
12. Dokulil MT (2014) Impact of climate warming on European inland waters. Inland Waters 4:27–40.
13. Eklund H (1963) Fresh water: temperature of maximum density calculated from compressibility. Science 142:1457–1458.
14. Eklund H (1965) Stability of lakes near the temperature of maximum density. Science 149:632–633.
15. Hampton SE, Izmest’eva LR, Moore MV et al (2008) Sixty years of environmental change in the world’s largest freshwater Lake - Lake Baikal, Siberia. Glob Chang Biol 14:1947–1958.
16. Imboden DM, Wüest A (1995) Mixing mechanisms in lakes. In: Lerman A, Imboden DM, Gat JR (eds) Physics and chemistry of lakes. Springer, Berlin, pp 83–138
17. Imerito A (2014) Dynamic reservoir simulation model DYRESM v4—v4.0 science manual. University of Western Australia, Centre for Water Research, Perth, 42 p
18. Johnson L (1964) Temperature regime of deep lakes. Science 144(3624):1336–1337.
19. Khatiwala S, Primeau F, Holzer M (2012) Ventilation of the deep ocean constrained with tracer observations and implications for radiocarbon estimates of ideal mean age. Earth Planet Sci Lett 325-326:116–125.
20. Laval BE, Vagle S, Potts D et al (2012) The joint effects of riverine, thermal, and wind forcing on a temperate fjord lake: Quesnel Lake, Canada. J Great Lakes Res 38:540–549.
21. Lewis WM (1995) Limnology, as seen by limnologists. J Contemp Water Res Educ 98:4–8
22. Mackay AW, Ryves DB, Morely DW et al (2006) Assessing the vulnerability of endemic diatom species in Lake Baikal to predicted future climate change: a multivariate approach. Glob Chang Biol 12:2297–2315.
23. McDougall TJ (1987) Thermobaricity, cabbeling, and water-mass conversion. J Geophys Res 93:5448–5464.
24. McManus J, Collier RW, Dymond J (1993) Mixing processes in Crater Lake, Oregon. J Geophys Res 98(C10):18295–18307.
25. Moore MV, Hampton SE, Izmest’eva LR et al (2009) Climate change and the world’s “sacred sea” - Lake Baikal, Siberia. BioScience 59:405–417.
26. Müller B, Maerki M, Schmid M et al (2005) Internal carbon and nutrient cycling in Lake Baikal: sedimentation, upwelling, and early diagenesis. Glob Planet Chang 46:101–124.
27. O’Reilly CM, Alin SR, Plisnier PD et al (2003) Climate change decreases aquatic ecosystem productivity of Lake Tanganyika, Africa. Nature 424:766–768.
28. Panofsky HA, Brier GW (1968) Some applications of statistics to meteorology. Penn State University, College of Earth and Mineral Sciences, University Park
29. Piccolroaz S (2016) Prediction of lake surface temperature using the air2water model: guidelines, challenges, and future perspectives. Adv Oceanogr Limnol 7:36–50.
30. Piccolroaz S, Toffolon M (2013) Deep water renewal in Lake Baikal: a model for long-term analyses. J Geophys Res 118:6717–6733.
31. Piccolroaz S, Toffolon M, Majone B (2013) A simple lumped model to convert air temperature into surface water temperature in lakes. Hydrol Earth Syst Sci 7:3323–3338.
32. Piccolroaz S, Toffolon M, Majone B (2015) The role of stratification on lakes’ thermal response: the case of Lake Superior. Water Resour Res 51:7878–7894.
33. Piccolroaz S, Healey NC, Lenters JD et al (2018) On the predictability of lake surface temperature using air temperature in a changing climate: a case study for Lake Tahoe (USA). Limnol Oceanogr 63:243–261.
34. Potemkina TG, Potemkin VL, Fedotov AL (2018) Climatic factors as risks of recent ecological changes in the shallow zone of Lake Baikal. Russ Geol Geophys 59:556–565.
35. Rzheplinsky G, Sorokina A (1977) Atlas of wave and wind action in Lake Baikal. Gidrometeoizdat [in Russian]
36. Sahoo GB, Schladow SG, Reuter JE et al (2013) The response of Lake Tahoe to climate change. Clim Chang 116:71–95.
37. Salmaso N, Mosello R (2010) Limnological research in the deep southern subalpine lakes: synthesis, directions and perspectives. Adv Oceanogr Limnol 1:29–66.
38. Salmaso N, Boscaini A, Capelli C, Cerasino L (2017) Ongoing ecological shifts in a large lake are driven by climate change and eutrophication: evidences from a three decade study in Lake Garda. Hydrobiol.
39. Schmid M, Budnev NM, Granin NG et al (2008) Lake Baikal deepwater renewal mystery solved. Geophys Res Lett 35:1–5.
40. Schwefel R, Gaudard A, Wüest A, Bouffard D (2016) Effects of climate change on deepwater oxygen and winter mixing in a deep lake (Lake Geneva): comparing observational findings and modeling. Water Resour Res 52:8811–8826.
41. Shimaraev MN, Granin NG, Zhdanov AA (1993) Deep ventilation of Lake Baikal waters due to spring thermal bars. Limnol Oceanogr 38:1068–1072.
42. Shimaraev MN, Verbolov VI, Granin NG, Sherstyankin PP (1994) Physical limnology of Lake Baikal: a review. Baikal International Center for Ecological Research, Irkutsk-Okayama
43. Shimaraev MN, Gnatovskii RY, Blinov VV, Ivanov VG (2011) Renewal of deep waters of Lake Baikal revisited. Dokl Earth Sci 438:652–655.
44. Strøm KM (1945) The temperature of maximum density in fresh waters. Geofys Publikasjoner Norske Videnskaps-Akad, Oslo 16(8):3–14
45. Toffolon M (2013) Ekman circulation and downwelling in narrow lakes. Adv Water Resour 53:76–86.
46. Toffolon M, Piccolroaz S (2015) A hybrid model for river water temperature as a function of air temperature and discharge. Environ Res Lett 10:114011.
47. Toffolon M, Piccolroaz S, Majone B et al (2014a) Prediction of surface temperature in lakes with different morphology using air temperature. Limnol Oceanogr 59:2182–2202.
48. Toffolon M, Piccolroaz S, Bouffard D (2014b) Crossing the boundaries of physical limnology. Eos 95:403.
49. Toffolon M, Wüest A, Sommer T (2015) Minimal model for double diffusion and its application to Kivu, Nyos and Powell Lake. J Geophys Res 120:6202–6224.
50. Tsimitri C, Rockel B, Wüest A et al (2015) Drivers of deep-water renewal events observed over 13 years in the South Basin of Lake Baikal. J Geophys Res 120:1508–1526.
51. Valerio G, Pilotti M, Barontini S, Leoni B (2015) Sensitivity of the multiannual thermal dynamics of a deep pre-alpine lake to climatic change. Hydrol Process 29:767–779.
52. Vereshchagin YuG (1936) In the Jubilee volume for semi-centenary of academician V.I. Vernadskii’s scientific and educational work. Akad. Nauk SSSR, Moscow, Part 2, pp 1207–1230 [in Russian]
53. Weiss RF, Carmack EC, Koropalov VM (1991) Deep-water renewal and biological production in Lake Baikal. Nature 349:665–669.
54. Wood TM, Wherry SA, Piccolroaz S, Girdner SF (2016) Simulation of deep ventilation in Crater Lake, Oregon, 1951–2099. U.S. Geological Survey Scientific Investigations Report 2016–5046, 43 p.
55. Woolway RI, Merchant CJ (2017) Amplified surface temperature response of cold, deep lakes to inter-annual air temperature variability. Sci Rep 7:4130.
56. Wüest A, Ravens T, Granin N et al (2005) Cold intrusions in Lake Baikal: direct observational evidence for deep-water renewal. Limnol Oceanogr 50:184–196. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8079478740692139, "perplexity": 4041.0638258742733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202523.0/warc/CC-MAIN-20190321112407-20190321134407-00492.warc.gz"} |
http://slideplayer.com/slide/677229/ | # Limitations of Quantum Advice and One-Way Communication Scott Aaronson UC Berkeley IAS Useful?
## Presentation on theme: "Limitations of Quantum Advice and One-Way Communication Scott Aaronson UC Berkeley IAS Useful?"— Presentation transcript:
Limitations of Quantum Advice and One-Way Communication Scott Aaronson UC Berkeley IAS Useful?
What Are Quantum States? To many quantum computing skeptics, they're exponentially long vectors, and therefore a bad description of Nature. Yet a classical probability distribution over {0,1}^n also takes 2^Ω(n) bits to specify! Sure, but each sample is only n bits… [spectrum from "distributions over n-bit strings" to "2^n-bit strings"] We give complexity-theoretic evidence that quantum states lie to the left end of this spectrum. Supplements information-theoretic evidence (e.g. Holevo)
Quantum Advice BQP/qpoly: Class of languages decidable by polynomial-size, bounded-error quantum circuits, given a polynomial-size quantum advice state |ψ_n⟩ that depends only on the input length n. Nielsen & Chuang: We know that many systems in Nature prefer to sit in highly entangled states of many systems; might it be possible to exploit this preference to obtain extra computational power?
Example (Watrous) For each n, fix a group G_n and subgroup H_n ≤ G_n (|G_n| ≈ 2^n, but group operations are polytime). Given an element x ∈ G_n as input, is x ∈ H_n? Solvable in BQP/qpoly using the advice state |H_n⟩, a uniform superposition over H_n. Idea: check whether ⟨H_n|xH_n⟩ is 1 or 0. Not known to be in BQP/poly
Maybe BQP/qpoly even contains NP! Obvious Challenge: Prove an oracle separation between BQP/poly and BQP/qpoly. Buhrman: Hey Scott, why not try for an unrelativized separation? After all, if quantum states are like 2^n-bit classical strings, then maybe BQP/qpoly ⊇ NEEEEE/poly!
Result #1 BQP/qpoly ⊆ PP/poly. Proof based on new communication result: Given f:{0,1}^n × {0,1}^m → {0,1} (partial or total), D^1(f) = O(m Q^1(f) log Q^1(f)). D^1(f) = deterministic 1-way communication complexity of f; Q^1(f) = bounded-error quantum 1-way complexity. Corollary: Can't show BQP/poly ≠ BQP/qpoly without also showing PP ⊄ P/poly
Result #2 NP^A ⊄ BQP^A/qpoly for some oracle A (actually, a random oracle). Proof based on new Direct Product Theorem for quantum search (N items, K of them marked): With few (o(√N)) quantum queries, the probability of finding all K marked items is 2^−Ω(K). Fixes a wrong result of Klauck
Result #3 (Won't say any more about this one) Ambainis: Suppose Alice has x,y ∈ F_p and Bob has a,b ∈ F_p. They want to know whether y = ax+b (Alice's point, Bob's line). 1-way quantum communication complexity? Theorem: Alice must send Ω(log p) qubits to Bob. Invented new trace distance method to show this. Previously, even the randomized complexity was unknown
The Almost As Good As New Lemma: Suppose a 2-outcome measurement of a mixed state ρ yields outcome 0 w.p. 1−ε and outcome 1 w.p. ε. Then after the measurement, we can recover a state ρ̃ close to ρ (within trace distance √ε).
D^1(f) = O(m Q^1(f) log Q^1(f)) for all f : {0,1}^n × {0,1}^m → {0,1}. [Alice holds x; Bob holds y_1, y_2, … and wants f(x,y).] Alice can decrease the error probability to 1/Q^1(f)^10 by sending K = O(Q^1(f) log Q^1(f)) qubits. Bob can then compute f(x,y) for Q^1(f)^2 values of y simultaneously, with probability 0.9. With no communication, he can still do that with probability 0.9/2^K, by guessing ρ_x = I (the maximally mixed state).
Alice's Classical Message: Bob, let p_0(y) be the probability you'd guess f(x,y)=1 using the maximally mixed state I in place of ρ_x. Then y_1 is the lexicographically first y for which |p_0(y) − f(x,y)| ≥ ½. Now let ρ_1 be the reduced state assuming you guessed f(x,y_1) correctly. Let p_1(y) be the probability you'd guess f(x,y)=1 using ρ_1 in place of ρ_x. Then y_2 is the first y after y_1 for which |p_1(y) − f(x,y)| ≥ ½.
Clearly Alice's message lets Bob compute f(x,y) for any y in his range. Claim: Alice never has to send more than K y_i's, so her total message length is O(mK). Suppose not. Then Bob would succeed on y_1, …, y_{K+1} simultaneously with probability less than 1/2^{K+1}. But we already know he succeeds with probability 0.9/2^K, contradiction
BQP/qpoly ⊆ PP/poly: Alice is the advisor; Bob is the PP algorithm. Suppose the quantum advice has p(n) qubits. Then the classical advice consists of K = O(p(n) log p(n)) inputs x_1,…,x_K ∈ {0,1}^n, on which the algorithm would make the wrong guess using the maximally mixed state in place of the advice (as before). Adleman, DeMarrais, Huang: In PP, we can decide which of two sequences of measurement outcomes has greater probability. Improves earlier result: BQP/qpoly ⊆ EXP/poly
NP^A ⊄ BQP^A/qpoly. Oracle: A(x)=1 iff x ∈ S, where S ⊆ {0,1}^n is chosen uniformly at random subject to |S| = 2^{n/10}. Language: (y,z) ∈ L^A iff there exists an x ∈ S between y and z lexicographically (clearly L^A ∈ NP^A). Claim: If L^A ∈ BQP^A/qpoly, then using boosted advice, we can find all 2^{n/10} elements of S w.h.p. using 2^{n/10} poly(n) quantum queries. Now replace the advice by the maximally mixed state. The success probability becomes 2^{−O(poly(n))}
Direct Product Theorem. Goal: Show that with o(2^{n/2}) quantum queries, the probability of finding all 2^{n/10} marked items must be doubly exponentially small in n. Beals et al: If a quantum algorithm makes T queries to X ∈ {0,1}^N, then the probability it accepts a random X with |X|=k is a univariate polynomial p(k) of degree 2T. INTUITIVELY PLAUSIBLE [plot: p(k) ∈ [0,1] for k = 0, 1, 2, …, N]
Have the algorithm accept iff it finds all |S| = 2^{n/10} marked items. Then (1) p(k)=0 for all k ∈ {0,…,|S|−1}; (2) p(|S|) = 2^{−O(poly(n))}; (3) p(k) ∈ [0,1] for all k ∈ {0,…,2^n}. [plot: p(k) over k = 0,…,2^n, with the jump at k = |S|] Theorem: Given the above, … (Improved by Klauck et al.)
Idea: Let … Then V.A. Markov (younger brother of A.A. Markov) showed in 1892 that … provided −1 ≤ p(x) ≤ 2 for all 0 ≤ x ≤ 2^n. On the other hand, one can show by induction on m that r^(m) ≥ 2^{−O(poly(n))}/m!
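The Markov-brothers machinery can be illustrated numerically. Below is a small pure-Python sketch of the classical first-derivative case on [−1,1], where max|p′| ≤ d²·max|p| for degree-d polynomials and the Chebyshev polynomial T_d attains the bound at the endpoint. This is only an illustration of the 1892 inequality, not the exact bound used on the slide:

```python
def cheb_T_and_dT(d, x):
    """Evaluate T_d(x) and T_d'(x) via T_{n+1} = 2x*T_n - T_{n-1} and its derivative."""
    if d == 0:
        return 1.0, 0.0
    t_nm1, t_n = 1.0, x      # T_0, T_1
    d_nm1, d_n = 0.0, 1.0    # T_0', T_1'
    for _ in range(d - 1):
        t_np1 = 2 * x * t_n - t_nm1
        d_np1 = 2 * t_n + 2 * x * d_n - d_nm1   # differentiate the recurrence
        t_nm1, t_n = t_n, t_np1
        d_nm1, d_n = d_n, d_np1
    return t_n, d_n

d = 7
val, der = cheb_T_and_dT(d, 1.0)
print(val, der)  # 1.0 49.0 -- T_d(1) = 1 while T_d'(1) = d^2, so Markov's bound is tight
```

Since |T_d| ≤ 1 on the whole interval yet its derivative reaches d² at the edge, low-degree polynomials simply cannot both stay bounded and change too fast, which is the lever the counting argument pulls on.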
Open Questions Can we show BQP/poly ≠ BQP/qpoly relative to an oracle? What about SZK vs. BQP/qpoly? Are randomized and quantum 1-way communication complexities polynomially related for all total Boolean functions? (No asymptotic gap is known)
https://www.physicsforums.com/threads/anatomy-physiology.316824/ | # Anatomy & Physiology
• #1
I'm a first-year Anatomy & Physiology student and this one really got to me. I have been in Chemistry for about two weeks now and really need some help with this one.
Questions is: Which of these is the pH of an acid solution?
A. pH 7.1
B. pH 7.0
C. pH 12.4
D. pH 6.9
E. pH 8.3
No Relevant equations
The attempt at a solution was to guess after reading about chemical bonding, elements and compounds, but when I got to pH it was a whole new subject; with the K+, Cl−, KCl material I have no idea. Please help me.
• #2
You are on the Calculus forum, but in answer to your question acidic solutions have a pH of < 7.
• #3
Mark44
Mentor
Take a look at this wikipedia article, particularly the section titled Applications: http://en.wikipedia.org/wiki/PH.
Freshly distilled water, which is neither acidic nor alkaline, has a pH of 7.0. You should notice that one of your five choices is unlike the others, and is therefore the one you want.
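Since pH is defined as pH = −log₁₀[H⁺], the "acid means pH < 7" rule can be illustrated in a couple of lines of Python (the concentration values below are illustrative, not from your textbook):

```python
import math

def ph(h_plus):
    """pH from hydrogen-ion concentration [H+] in mol/L."""
    return -math.log10(h_plus)

print(ph(1e-7))      # ~7.0: neutral (pure water)
print(ph(10**-6.9))  # ~6.9: more H+ than neutral, so acidic -- that's choice D
```

More H⁺ means a bigger number inside the log, hence a smaller pH; that is why exactly one of the five choices sits below 7.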
• #4
Thank you, Mark so if I have any pH questions I can go to wikipedia.org
• #5
You are on the Calculus forum, but in answer to your question acidic solutions have a pH of < 7.
Thank you so much I was going crazy for a little.
• #6
Take a look at this wikipedia article, particularly the section titled Applications: http://en.wikipedia.org/wiki/PH.
Freshly distilled water, which is neither acidic nor alkaline, has a pH of 7.0. You should notice that one of your five choices is unlike the others, and is therefore the one you want.
Re: Anatomy & Physiology
Thank you, Mark so if I have any pH questions I can go to wikipedia.org | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8656024932861328, "perplexity": 1810.853882134752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358956.39/warc/CC-MAIN-20210227114444-20210227144444-00302.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/35755-distribution-density-expected.html | # Math Help - Distribution to density to expected
1. ## Distribution to density to expected
A random variable X has the cumulative distribution function
$F(x) = 0$ for $x<1$
$F(x) = \frac{x^2 - 2x + 2}{2}$ for $1 \leq x < 2$
$F(x) = 1$ for $x \geq 2$
Calculate the variance of X.
The answer specifies that the density function is
$f(x) = 0.5$ if x=1
$f(x) = x-1$ if 1<x<2
$f(x) = 0$ otherwise
Then
$E(X) = 0.5 + \int_1^2 x(x-1) dx$
$E(X^2) = 0.5 + \int_1^2 x^2(x-1) dx$
I got the f(x) = x-1 part, and I got how to calculate the variance after you have the expected values, but I'm lost on other questions.
My questions are:
Where do we get $f(x) = 0.5$ if x=1? F(1) = 0.5, but I can't figure out why f(1) would equal 0.5.
What is the rule for putting parts of the stepwise density function into the expected value equations? I don't know what the rule is called so I don't know how to review it. We're adding the slope at a single point to the slope over a big area, which is something I can't quite work out visually.
2. Originally Posted by Boris B
A random variable X has the cumulative distribution function
$F(x) = 0$ for $x<1$
$F(x) = \frac{x^2 - 2x + 2}{2}$ for $1 \leq x < 2$
$F(x) = 1$ for $x \geq 2$
Calculate the variance of X.
The answer specifies that the density function is
$f(x) = 0.5$ if x=1
$f(x) = x-1$ if 1<x<2
$f(x) = 0$ otherwise
Then
$E(X) = 0.5 + \int_1^2 x(x-1) dx$
$E(X^2) = 0.5 + \int_1^2 x^2(x-1) dx$
I got the f(x) = x-1 part, and I got how to calculate the variance after you have the expected values, but I'm lost on other questions.
My questions are:
Where do we get $f(x) = 0.5$ if x=1? F(1) = 0.5, but I can't figure out why f(1) would equal 0.5.
What is the rule for putting parts of the stepwise density function into the expected value equations? I don't know what the rule is called so I don't know how to review it. We're adding the slope at a single point to the slope over a big area, which is something I can't quite work out visually.
$F(x) = 0$ for $x<1$ implies that $\Pr(X < 1) = 0$.
$F(x) = \frac{x^2 - 2x + 2}{2}$ for $1 \leq x < 2$ implies that $\Pr(X \leq 1) = \frac{1^2 - 2(1) + 2}{2} = \frac{1}{2}$.
It follows that $\Pr(X = 1) = \Pr(X \leq 1) - \Pr(X < 1) = \frac{1}{2}$.
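With that point mass in hand, the expected values and variance from the first post can be checked with exact arithmetic. A minimal sketch using Python's standard-library fractions module (the antiderivatives are worked out by hand; the final value 5/36 is my own computation, not stated above):

```python
from fractions import Fraction as F

# Antiderivatives on (1,2):  integral of x(x-1) = x^3/3 - x^2/2,
#                            integral of x^2(x-1) = x^4/4 - x^3/3
def I1(x): return F(x)**3 / 3 - F(x)**2 / 2
def I2(x): return F(x)**4 / 4 - F(x)**3 / 3

EX  = F(1, 2) * 1 + (I1(2) - I1(1))    # point mass 1/2 at x=1 plus continuous part
EX2 = F(1, 2) * 1 + (I2(2) - I2(1))
var = EX2 - EX**2
print(EX, EX2, var)  # 4/3 23/12 5/36
```

So E(X) = 4/3, E(X²) = 23/12, and Var(X) = 23/12 − 16/9 = 5/36.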
3. Originally Posted by Boris B
A random variable X has the cumulative distribution function
$F(x) = 0$ for $x<1$
$F(x) = \frac{x^2 - 2x + 2}{2}$ for $1 \leq x < 2$
$F(x) = 1$ for $x \geq 2$
Calculate the variance of X.
The answer specifies that the density function is
$f(x) = 0.5$ if x=1
$f(x) = x-1$ if 1<x<2
$f(x) = 0$ otherwise
Then
$E(X) = 0.5 + \int_1^2 x(x-1) dx$
$E(X^2) = 0.5 + \int_1^2 x^2(x-1) dx$
I got the f(x) = x-1 part, and I got how to calculate the variance after you have the expected values, but I'm lost on other questions.
My questions are:
Where do we get $f(x) = 0.5$ if x=1? F(1) = 0.5, but I can't figure out why f(1) would equal 0.5.
The $1/2$ comes from the jump discontinuity in the cumulative distribution $F(x)$ at $x=1$, indicating a probability mass of $1/2$ at that point.
Another way of thinking about this is to think of the density as a generalised function; then we may represent the density of a piecewise continuous cumulative distribution as the sum of a continuous function and delta functionals at the discontinuities, of amplitude equal to the size of the jumps.
RonL
4. Originally Posted by Boris B
A random variable X has the cumulative distribution function
$F(x) = 0$ for $x<1$
$F(x) = \frac{x^2 - 2x + 2}{2}$ for $1 \leq x < 2$
$F(x) = 1$ for $x \geq 2$
Calculate the variance of X.
The answer specifies that the density function is
$f(x) = 0.5$ if x=1
$f(x) = x-1$ if 1<x<2
$f(x) = 0$ otherwise
If this last is the given answer for the "density" then it is wrong, as, as given, it is not a density since:
$\int_{-\infty}^{\infty} f(x)~dx=1/2$
The density is the generalised function:
$f(x)=g(x)+(1/2)\delta(x-1)$
where
$g(x) = x-1, \ \ 1<x<2$
$g(x) = 0, \ \ \mbox{otherwise}.$
RonL
5. Originally Posted by Boris B
What is the rule for putting parts of the stepwise density function into the expected value equations? I don't know what the rule is called so I don't know how to review it. We're adding the slope at a single point to the slope over a big area, which is something I can't quite work out visually.
If you use the generalised function approach you have a perfectly normal looking equation for the expected value. The problem arises if you represent the distribution as mixed discrete/continuous, when you have to work with the two components separately.
RonL | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 60, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9748625755310059, "perplexity": 353.4832830102772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://collegephysicsanswers.com/openstax-solutions/500-times-105-textrm-kg-rocket-accelerating-straight-its-engines-produce-1250 | Question
A $5.00 \times 10^5 \textrm{ kg}$ rocket is accelerating straight up. Its engines produce $1.250 \times 10^7 \textrm{ N}$ of thrust, and air resistance is $4.50 \times 10^6 \textrm{ N}$. What is the rocket’s acceleration? Explicitly show how you follow the steps in the Problem-Solving Strategy for Newton’s laws of motion.
Question by OpenStax is licensed under CC BY 4.0.
Final Answer
$6.20 \textrm{ m/s}^2$
Solution Video
# OpenStax College Physics Solution, Chapter 4, Problem 23 (Problems & Exercises) (1:36)
Video Transcript
This is College Physics Answers with Shaun Dychko. One of the best ways to prepare any of your solutions is to draw a picture. So that's what I've done here. This is especially important for forces or dynamics questions because you need to draw the free body diagram to show all the forces involved. So with this rocket we have a thrust force upwards of 1.25 times ten to the seven Newtons and then downwards we have gravity and then we also have the air resistance which is directed opposite to the direction of motion and so that will be downwards as well because the rocket is moving up. So the air friction down is 4.5 times ten to the six Newtons. We're told that the mass of the rocket is 5 times ten to the five kilograms and so we've drawn a free body diagram and written down everything that we know. Then we can proceed to the algebra which is that the net force is the up force, up forces minus the down forces, and so we have up as the thrust force and then minus each of the down forces, air and gravity. That net force equals mass times acceleration. Gravity is mg. So we'll divide both sides by m here and then substitute for fg and we get this line here, acceleration is thrust, minus air resistance, minus mg weight, divided by the mass, m. We substitute each of those numbers in here and we end up with 6.2 meters per second squared as the acceleration of the rocket. 
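As a cross-check of the transcript's arithmetic, here is the same calculation as a few lines of Python (g = 9.80 m/s² is assumed, matching the value that reproduces the textbook answer):

```python
m = 5.00e5        # rocket mass, kg
thrust = 1.250e7  # engine thrust, N (up)
drag = 4.50e6     # air resistance, N (down)
g = 9.80          # gravitational acceleration, m/s^2

# Newton's second law along the vertical: F_net = thrust - drag - m*g = m*a
a = (thrust - drag - m * g) / m
print(round(a, 2))  # 6.2
```

The weight term m·g = 4.90×10⁶ N is the same size as the drag here, which is why the net force, 3.1×10⁶ N, is so much smaller than the thrust.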
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9213443398475647, "perplexity": 471.7602653268274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823712.21/warc/CC-MAIN-20181212022517-20181212044017-00392.warc.gz"} |
https://www.studyadda.com/sample-papers/rrbs-assistant-loco-pilot-and-technician-cbt-stage-i-sample-paper-28_q58/1364/413512 | • # question_answer Soft copy is an intangible output, so then what is a hard copy? A) The physical parts of the computer B) The printed parts of the computer C) The printed output D) The physical output device
Solution:
A soft copy is an electronic copy of some type of data, such as a file viewed on a computer's display or transmitted as an e-mail attachment. Such material, when printed, is referred to as a hard copy.
http://mathhelpforum.com/discrete-math/49724-big-o-notation-problems.html | 1. the Big-O notation problems
hello folks.
Can you help me solve these problems of big O notation and moreover, can you explain for me the easiest way to solve Big O notation problems? I mean, how do we begin to solve and what is the general concept?
2. The question in your textbook is not quite rigorously written: it is necessary to specify at the neighbourhood of which point you define the big-O notation. For instance, it seems clear in the present case that it should be $\underset{x\to\infty}{O}(x^n)$.
That said, $f(x)=\underset{x\to\infty}{O}(x^n)$ means no more than $|f(x)|\leq C x^n$ for some $C$ and for large enough $x$. I'll neglect the subscript under the O in the following (but write it on your paper anyway!).
Because $O(x^n)+O(x^n)=O(x^n)$, what you have to do is only to look at the largest term (when $x$ tends to $+\infty$). You determine which one is larger by using usual comparisons between polynomial terms, and between polynomial and logarithmic terms. For instance, for a), the dominating term is $x^3\log x$ (it is larger than $x^3$, which dominates $x^2$). This term is not $O(x^3)$ because $x^3\log x \leq Cx^3$ is not compatible with $\log x\to+\infty$. On the other hand $\log x=O(x)$, so that $x^3\log x=O(x^4)$. As a summary, $2x^2+x^3\log x=O(x^2)+x^3 O(x)=O(x^2)+O(x^4)=O(x^4)$ and it is not $O(x^3)$ since $x^{-3}(2x^2+x^3\log x)=2x^{-1}+\log x\to +\infty$.
For b), remember $(\log x)^4=O(x)$.
For c), you can use $\frac{1}{x^4+1}=O(x^{-4})$ (or look at the limit as $x$ tends to $+\infty$).
For d), cf. a) and c).
3. Originally Posted by Laurent
The question in your textbook is not quite rigorously written: it is necessary to specify at the neighbourhood of which point you define the big-O notation. For instance, it seems clear in the present case that it should be $\underset{x\to\infty}{O}(x^n)$.
There is no need for the $x \to \infty$ notation; it is implicit in the usual definition of Big-O notation.
RonL
4. Originally Posted by narbe
hello folks.
Can you help me solve these problems of big O notation and moreover, can you explain for me the easiest way to solve Big O notation problems? I mean, how do we begin to solve and what is the general concept?
For a description of Big-O notation see the Wikipedia article.
RonL
5. 7 new problems
Hello my friends,
I added 7 new problems regarding Big-O notation. I need to know, how you solve them. I want to learn the way and the logic to solve them. Thank you for helping me
6. Landau symbol
Originally Posted by narbe
hello folks.
Can you help me solve these problems of big O notation and moreover, can you explain for me the easiest way to solve Big O notation problems? I mean, how do we begin to solve and what is the general concept?
http://mathhelpforum.com/calculus/139816-minimum-point-f-x-y.html | # Thread: Minimum point of f(x,y)
1. ## Minimum point of f(x,y)
I have the following function: $f(x,y)=x^4+2x^3y+x^2y^2+y^4$ and I know that there is a critical point at $(0,0)$. I have been asked to determined the type of critical point. The second derivatives give no information, and I'm not supposed to consider higher derivatives.
"Obviously" it is a minimum point, so I thought it would be enough to show that $f(x,y)\geq 0$ near $(0,0)$.
By introducing polar coordinates I got:
$f(r\cos(\theta),r\sin(\theta))=...=r^4\left[1+2\cos^3(\theta)\sin(\theta)-\cos^2(\theta)\sin^2(\theta)\right]$.
Since $r^4\geq 0$, it should be enough to show that $1+2\cos^3(\theta)\sin(\theta)-\cos^2(\theta)\sin^2(\theta)\geq 0$, but I seem unable to do it. Does this seem like the right method, and if so, does anyone know how to go on from here?
Another thought I had was to compare with $(x+y)^4$ or $(x-y)^4$, but that didn't get me anywhere.
Aliquantus
2. Originally Posted by Aliquantus
I have the following function: $f(x,y)=x^4+2x^3y+x^2y^2+y^4$ and I know that there is a critical point at $(0,0)$. I have been asked to determined the type of critical point. The second derivatives give no information, and I'm not supposed to consider higher derivatives.
"Obviously" it is a minimum point, so I thought it would be enough to show that $f(x,y)\geq 0$ near $(0,0)$.
By introducing polar coordinates I got:
$f(r\cos(\theta),r\sin(\theta))=...=r^4\left[1+2\cos^3(\theta)\sin(\theta)-\cos^2(\theta)\sin^2(\theta)\right]$.
Since $r^4\geq 0$, it should be enough to show that $1+2\cos^3(\theta)\sin(\theta)-\cos^2(\theta)\sin^2(\theta)\geq 0$, but I seem unable to do it. Does this seem like the right method, and if so, does anyone know how to go on from here?
Another thought I had was to compare with $(x+y)^4$ or $(x-y)^4$, but that didn't get me anywhere.
Aliquantus
Note the following
$x^4 + 2x^3y+x^2y^2+y^4 = x^2\left(x^2+2xy+y^2\right) + y^4 = x^2\left(x+y\right)^2+y^4$.
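A quick numeric sanity check of this factorization (a sketch, not part of the thread): sampling $f$ on a grid near the origin confirms it is non-negative, consistent with $f = x^2(x+y)^2 + y^4$:

```python
# f(x, y) = x^4 + 2x^3 y + x^2 y^2 + y^4 = x^2 (x + y)^2 + y^4 >= 0
def f(x, y):
    return x**4 + 2 * x**3 * y + x**2 * y**2 + y**4

# sample a (2n+1) x (2n+1) grid over [-0.1, 0.1]^2
n = 50
vals = [f(-0.1 + 0.2 * i / n, -0.1 + 0.2 * j / n)
        for i in range(n + 1) for j in range(n + 1)]
print(min(vals))  # stays >= 0, so (0, 0) is a minimum
```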
3. Of course, thanks a lot!
http://use.perl.org/use.perl.org/_jplindstrom/journal/12546.html | NOTE: use Perl; is on undef hiatus. You can read content, but you can't post it. More info will be forthcoming forthcomingly.
## All the Perl that's Practical to Extract and Report
#### jplindstrom (594)
### Journal of jplindstrom (594)
Sunday June 01, 2003
12:52 PM
### Perl Oasis 0.3 released
[ #12546 ]
New version of Perl Oasis available.
New stuff: Scintilla source viewer and a couple of new supported editors: gVim, EditPlus and PFE in addition to the original UltraEdit.
The Scintilla control acted up when I tried to build a freestanding PerlApp application. Scintilla uses a SciLexer.dll file which is located in the site/lib/auto/... directory tree in the Perl installation. But adding it to the application in that relative location didn't work, nor did adding it in the current directory.
It's strange that Scintilla or Perl (I don't know which sub system does this) can't find the dll when located in the exact same place as it is in my Perl installation.
I ended up placing it outside the application, in the same dir as Oasis.exe itself. Not even adding the location to \$ENV{PATH} worked. Oh, well.
• #### Did you try to auto-extract the DLL?(Score:2, Informative)
Did you try to bind it like this:
``` --bind SciLexer.dll[file=site\lib\auto\...\SciLexer.dll,extract] ```
(with the correct full /site/lib/auto path of course)?
• #### Yups(Score:1)
Yes, I made sure the extracted file was next to the Scintilla dll in site/lib/auto/Win32/Scintilla, just as it was in the Perl installation. I really think that should have worked.
I also put it in the current directory, still no dice.
I even tried setting the PATH (since there is no LD_LIBRARY_PATH on Windows) to different locations where it was, but nooo.
Maybe I made some other mistake, that's certainly possible.
• #### Re:Yups(Score:1)
The extracted file should be in the main extraction directory (which automatically gets added to the PATH etc.), and not to site/lib/auto/Win32/Scintilla directory, which is where is normally lives. The --bind option quoted in my previous reply should have done this correctly.
Could you run PerlApp again, and send me both the commandline used and the STDOUT/STDERR output via email?
• #### Re:Yups(Score:1)
Thanks, I'll try that when I find the time.
You should note that I use PerlApp 2 so the exact syntax isn't the same.
• #### Re:Yups(Score:1)
Ah, ok. I don't think PerlApp 2 supports automatic extraction of bound files. You'll have to manually write it out to a directory on the PATH in order to make it work I guess. In that case it is probably easier to just ship the DLL separately with the executable. The syntax I gave earlier requires PerlApp 4.1 or later I believe.
http://community.boredofstudies.org/13/mathematics-extension-1/383209/please-help-me-find-domain.html | Could someone please explain how to find the domain and range of: 1/square root (x^2+4). Without graphing. Thank
The domain is all reals; x can take any real value.
How do you work this out? Well, you start by asking yourself "are there any points or regions at which the function is undefined?" Since the function is a rational function (i.e. a fraction), an obvious place to start is by asking yourself for what values of x will the denominator be zero (and cause a problem)?
But a quick glance at the denominator should tell you that the denominator can never be zero for any value of x; specifically, the denominator is always 2 or higher.
Now, for any other possible points at which the function becomes undefined, note that there is a square root. Since we're considering only real numbers, recall that we cannot find the square root of a negative number. So you should now ask yourself for what values of x is x^2 + 4 < 0 ? The answer is none. So the square root doesn't cause any problems in the function.
So there are no values of x such that the function will become undefined, and therefore the domain of the function is all reals.
Hope this helps.
Oh ok I get it thanks so much. I’m not sure how to find the range though.
The Range would be
$0 < y \leq \frac{1}{2}$
You should think about what the lowest value you could have in your denominator, in that case it would be $\sqrt{4} = 2$
That lowest value would give you the highest value in the function (the maximum)
Now think about the largest possible value in the denominator, that would be infinity since x could always increase without bound.
This means the values of the function get arbitrarily close to 0 but never actually reach it, so the range does not include 0.
The reason it can't be negative is that a square root (of a non-negative number) is never negative, so the denominator is positive and the whole fraction is positive.
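The claimed range $0 < y \leq \frac{1}{2}$ can be spot-checked numerically (a quick sketch, not from the thread):

```python
import math

# y = 1 / sqrt(x^2 + 4): maximum 1/2 at x = 0, approaching 0 as |x| grows
def f(x):
    return 1.0 / math.sqrt(x * x + 4.0)

values = [f(k * 0.5) for k in range(-200, 201)]  # x from -100 to 100
print(max(values))  # 0.5, attained at x = 0
print(min(values))  # small but strictly positive
```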
Ok thanks I understand how that works now, but then how would I find the range of 1/ squareroot(x^2+5x+6)? because there’s a 5x too
Originally Posted by happy10
Ok thanks I understand how that works now, but then how would I find the range of 1/ squareroot(x^2+5x+6)? because there’s a 5x too
$y>0$
Since
$y \rightarrow \infty \ \ \text{as} \ \ x \rightarrow -2^{+} \ \ \text{or} \ \ x \rightarrow -3^{-}$
http://www-old.newton.ac.uk/programmes/MOS/seminars/2011031710009.html | # MOS
## Seminar
### Surfaces and bounded cohomology
Iozzi, A (ETH Zürich)
Thursday 17 March 2011, 10:00-11:00
Satellite
#### Abstract
We introduce the notion of causal representation of a surface group and relate it to that of maximal representation and of tight homomorphism. When the target is SL(2,R) we show that these are hyperbolizations. In the process we define and study the bounded fundamental class of a compact surface (with or without boundary) and establish a result characterizing it among all bounded classes. We relate this to the winding number of Chillingsworth and to work of Calegari on stable commutator length.
https://www.qalaxia.com/questions/Pythagoras-Theory-Of-A-Right-Angled-Triangle-States-That | Mahesh Godavarti
If you build squares on each side of the right triangle, then the area of the square built on the hypotenuse is the sum of the areas of the squares built on the other two sides. That is, if a, b, and c denote the lengths of the two sides and the hypotenuse, respectively, then c^2 = a^2 + b^2. Image source: https://www.pinterest.com/pin/225391156321293954/
https://www.mathway.com/examples/trigonometry/algebra-concepts-and-expressions-review/factoring-a-sum-of-cubes?id=80 | # Trigonometry Examples
Rewrite as .
Since both terms are perfect cubes, factor using the sum of cubes formula, where and .
Simplify.
Multiply by .
One to any power is one.
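For reference, the sum of cubes identity these steps apply (the specific terms of this worked example did not survive extraction) is:

```latex
a^3 + b^3 = (a + b)\left(a^2 - ab + b^2\right)
```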
http://www.physicsforums.com/showthread.php?p=2192692 | # Solution to diffusion equation - different input
by Dave007
Tags: diffusion, equation, input, solution
P: 4 Hi, I have seen the solution to the diffusion equation written as C=(N/sqrt(4 Pi D t))exp(-x^2/(4Dt)). However, as I understand it, this is for an instant input of N material. I want to express the concentration of substance at a point x away from the source for an arbitrary input signal. Is there any nice way to do this please? Thanks, Dave
Sci Advisor PF Gold P: 1,776 The solution you give can be translated by replacing x with x-a to give a distribution centered around x=a. You can also superpose many solutions since the diffusion equation is linear.
P: 4 Thanks jambaugh. I've actually modified the equation to include an advective part by making x = (x+vt). However, what I want to do is change N to be a function of time. This may require a different equation because as soon as I do that, it doesn't satisfy the diffusion equation anymore I don't think. But I am a bit confused by it all. Essentially what I want is this: I have an ion transient through channels in a cell and I want to represent that transient at a point 'x' away from those channels. I thought the diffusion (advective-diffusion) equation is perfect. However, the solution I found is only valid for an initial injection of substance. I want to continually be injecting substance at a varying rate. Is there any solution you know of that would enable me to do this? Thanks!
PF Gold
P: 1,776
Solution to diffusion equation - different input
Quote by Dave007 Thanks jambaugh. I've actually modified the equation to include an advective part by making x = (x+vt). However, what I want to do is change N to be a function of time. This may require a different equation because as soon as I do that, it doesn't satisfy the diffusion equation anymore I don't think. But I am a bit confused by it all. Essentially what I want is this: I have an ion transient through channels in a cell and I want to represent that transient at a point 'x' away from those channels. I thought the diffusion (advective-diffusion) equation is perfect. However, the solution I found is only valid for an initial injection of substance. I want to continually be injecting substance at a varying rate. Is there any solution you know of that would enable me to do this? Thanks!
Introducing (uniform constant velocity) advection should be equivalent to choosing a moving coordinate system. This will alter the differential equation but it is an equivalent problem in that the solutions to the regular diffusion equation, once the velocity transform is applied, will be solutions to the advective diffusion equation.
If your velocity is a function of position and/or time then things are going to get nasty and I'm not sure there are simple methods. You may need to execute a Finite Elements Model to numerically solve the equation. Look around the web for numerical packages which may work.
As far as continuously adding substance you are now talking about an inhomogeneous diffusion equation:
du/dt = D d^2u/dx^2 + f(x,t)
where u(x,t) be the concentration of substance at a given time and position and f is the source term.
There's much literature on solving the diffusion (heat) equation and a great deal of it is online. Look into the Green's function approach and/or solutions via Fourier transforms.
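For the inhomogeneous equation above, a minimal explicit finite-difference (FTCS) sketch is one way to handle an arbitrary, time-varying source numerically. This is my own illustration, not from the thread; the source term is a hypothetical sinusoidally modulated injection concentrated near x = 0:

```python
import numpy as np

# du/dt = D d^2u/dx^2 + f(x, t), solved with forward-time centered-space
D = 1.0e-2            # diffusion coefficient (arbitrary units)
L, nx = 1.0, 101      # domain [-L/2, L/2] with nx grid points
x = np.linspace(-L / 2, L / 2, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D  # within the FTCS stability bound dt <= dx^2 / (2D)

def source(x, t):
    # assumed example source: injection rate varying sinusoidally in time,
    # concentrated near x = 0
    return (1.0 + np.sin(2 * np.pi * t)) * np.exp(-(x / 0.05)**2)

u = np.zeros(nx)  # initial concentration
t = 0.0
for _ in range(2000):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0          # ignore wrap-around at the edges
    u = u + dt * (D * lap + source(x, t))
    u[0] = u[-1] = 0.0              # absorbing boundaries: u = 0 at the ends
    t += dt

print(u.max())  # concentration builds up and peaks near the source
```

For serious work the thread's advice stands: use an established PDE/FEM package rather than a hand-rolled scheme.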
P: 4 As far as I can see, though, the solutions all depend on knowing what f(x,t) is. In my case, I can't express it as a function. Is there any easy way to solve this? Thanks for all your help!!
P: 23,577
You may want to consult electrochemist working with voltammetric methods. They deal with similar problems all the time. IMHO simple answer to the question
Quote by Dave007 Is there any easy way to solve this?
is NO.
P: 4 Any electrochemist who works with voltammetric methods in the house? PLEASE?
P: 333
Quote by Dave007 Thanks jambaugh. ... I thought the diffusion (advective-diffusion) equation is perfect. However, the solution I found is only valid for an initial injection of substance. I want to continually be injecting substance at a varying rate. Is there any solution you know of that would enable me to do this? Thanks!
Postulates of the heat/diffusion equation are only approximations to the physical situation. But they are good approximations. This is what I have recently learned.
https://phys.libretexts.org/Courses/Joliet_Junior_College/Physics_201_-_Fall_2019v2/Book%3A_Custom_Physics_textbook_for_JJC/12%3A_Temperature_and_Kinetic_Theory/12.06%3A_The_Kinetic_Theory_of_Gases/The_Kinetic_Theory_of_Gases_Introduction_(Exercises)
# The Kinetic Theory of Gases Introduction (Exercises)
## Conceptual Questions
### 2.1 Molecular Model of an Ideal Gas
1. Two $$\displaystyle H_2$$ molecules can react with one $$\displaystyle O_2$$ molecule to produce two $$\displaystyle H_2O$$ molecules. How many moles of hydrogen molecules are needed to react with one mole of oxygen molecules?
2. Under what circumstances would you expect a gas to behave significantly differently than predicted by the ideal gas law?
3. A constant-volume gas thermometer contains a fixed amount of gas. What property of the gas is measured to indicate its temperature?
4. Inflate a balloon at room temperature. Leave the inflated balloon in the refrigerator overnight. What happens to the balloon, and why?
5. In the last chapter, free convection was explained as the result of buoyant forces on hot fluids. Explain the upward motion of air in flames based on the ideal gas law.
### 2.2 Pressure, Temperature, and RMS Speed
6. How is momentum related to the pressure exerted by a gas? Explain on the molecular level, considering the behavior of molecules.
7. If one kind of molecule has double the radius of another and eight times the mass, how do their mean free paths under the same conditions compare? How do their mean free times compare?
8. What is the average velocity of the air molecules in the room where you are right now?
9. Why do the atmospheres of Jupiter, Saturn, Uranus, and Neptune, which are much more massive and farther from the Sun than Earth is, contain large amounts of hydrogen and helium?
10. Statistical mechanics says that in a gas maintained at a constant temperature through thermal contact with a bigger system (a “reservoir”) at that temperature, the fluctuations in internal energy are typically a fraction $$\displaystyle 1/\sqrt{N}$$ of the internal energy. As a fraction of the total internal energy of a mole of gas, how big are the fluctuations in the internal energy? Are we justified in ignoring them?
11. Which is more dangerous, a closet where tanks of nitrogen are stored, or one where tanks of carbon dioxide are stored?
### 2.3 Heat Capacity and Equipartition of Energy
12. Experimentally it appears that many polyatomic molecules’ vibrational degrees of freedom can contribute to some extent to their energy at room temperature. Would you expect that fact to increase or decrease their heat capacity from the value R? Explain.
13. One might think that the internal energy of diatomic gases is given by $$\displaystyle E_{int}=5RT/2$$. Do diatomic gases near room temperature have more or less internal energy than that? Hint: Their internal energy includes the total energy added in raising the temperature from the boiling point (very low) to room temperature.
14. You mix 5 moles of $$\displaystyle H_2$$ at 300 K with 5 moles of He at 360 K in a perfectly insulated calorimeter. Is the final temperature higher or lower than 330 K?
### 2.4 Distribution of Molecular Speeds
15. One cylinder contains helium gas and another contains krypton gas at the same temperature. Mark each of these statements true, false, or impossible to determine from the given information.
(a) The rms speeds of atoms in the two gases are the same.
(b) The average kinetic energies of atoms in the two gases are the same.
(c) The internal energies of 1 mole of gas in each cylinder are the same.
(d) The pressures in the two cylinders are the same.
16. Repeat the previous question if one gas is still helium but the other is changed to fluorine, $$\displaystyle F_2$$.
17. An ideal gas is at a temperature of 300 K. To double the average speed of its molecules, what does the temperature need to be changed to?
## Problems
### 2.1 Molecular Model of an Ideal Gas
18. The gauge pressure in your car tires is $$\displaystyle 2.50×10^5N/m^2$$ at a temperature of 35.0°C when you drive it onto a ship in Los Angeles to be sent to Alaska. What is their gauge pressure on a night in Alaska when their temperature has dropped to −40.0°C? Assume the tires have not gained or lost any air.
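Problems like 18 reduce to the constant-volume relation $$\displaystyle P_1/T_1 = P_2/T_2$$ applied to absolute pressures and temperatures. Here is a quick sketch of the computation (atmospheric pressure is assumed to be $$\displaystyle 1.013×10^5N/m^2$$; the problem leaves it implicit):

```python
# Constant-volume ideal gas law check for problem 18
P_atm = 1.013e5              # assumed atmospheric pressure, N/m^2
P1 = 2.50e5 + P_atm          # absolute pressure in Los Angeles, N/m^2
T1 = 35.0 + 273.15           # K
T2 = -40.0 + 273.15          # K

P2 = P1 * T2 / T1            # P/T constant at fixed V and n
gauge2 = P2 - P_atm
print(gauge2)                # roughly 1.6e5 N/m^2
```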
19. Suppose a gas-filled incandescent light bulb is manufactured so that the gas inside the bulb is at atmospheric pressure when the bulb has a temperature of 20.0°C.
(a) Find the gauge pressure inside such a bulb when it is hot, assuming its average temperature is 60.0°C (an approximation) and neglecting any change in volume due to thermal expansion or gas leaks.
(b) The actual final pressure for the light bulb will be less than calculated in part (a) because the glass bulb will expand. Is this effect significant?
20. People buying food in sealed bags at high elevations often notice that the bags are puffed up because the air inside has expanded. A bag of pretzels was packed at a pressure of 1.00 atm and a temperature of 22.0°C.When opened at a summer picnic in Santa Fe, New Mexico, at a temperature of 32.0°C, the volume of the air in the bag is 1.38 times its original volume. What is the pressure of the air?
21. How many moles are there in
(a) 0.0500 g of $$\displaystyle N_2$$ gas (M=28.0g/mol)?
(b) 10.0 g of $$\displaystyle CO_2$$ gas (M=44.0g/mol)?
(c) How many molecules are present in each case?
22. A cubic container of volume 2.00 L holds 0.500 mol of nitrogen gas at a temperature of 25.0°C. What is the net force due to the nitrogen on one wall of the container? Compare that force to the sample’s weight.
23. Calculate the number of moles in the 2.00-L volume of air in the lungs of the average person. Note that the air is at 37.0°C (body temperature) and that the total volume in the lungs is several times the amount inhaled in a typical breath as given in Example 2.2.
24. An airplane passenger has $$\displaystyle 100cm^3$$ of air in his stomach just before the plane takes off from a sea-level airport. What volume will the air have at cruising altitude if cabin pressure drops to $$\displaystyle 7.50×10^4N/m^2$$?
25. A company advertises that it delivers helium at a gauge pressure of $$\displaystyle 1.72×10^7Pa$$ in a cylinder of volume 43.8 L. How many balloons can be inflated to a volume of 4.00 L with that amount of helium? Assume the pressure inside the balloons is $$\displaystyle 1.01×10^5Pa$$ and the temperature in the cylinder and the balloons is 25.0°C.
26. According to http://hyperphysics.phy-astr.gsu.edu.../venusenv.html, the atmosphere of Venus is approximately 96.5% $$\displaystyle CO_2$$ and 3.5% $$\displaystyle N_2$$ by volume. On the surface, where the temperature is about 750 K and the pressure is about 90 atm, what is the density of the atmosphere?
27. An expensive vacuum system can achieve a pressure as low as $$\displaystyle 1.00×10^{−7}N/m^2$$ at 20.0°C. How many molecules are there in a cubic centimeter at this pressure and temperature?
28. The number density N/V of gas molecules at a certain location in the space above our planet is about $$\displaystyle 1.00×10^{11}m^{−3}$$, and the pressure is $$\displaystyle 2.75×10^{−10}N/m^2$$ in this space. What is the temperature there?
29. A bicycle tire contains 2.00 L of gas at an absolute pressure of $$\displaystyle 7.00×10^5N/m^2$$ and a temperature of 18.0°C. What will its pressure be if you let out an amount of air that has a volume of $$\displaystyle 100cm^3$$ at atmospheric pressure? Assume tire temperature and volume remain constant.
30. In a common demonstration, a bottle is heated and stoppered with a hard-boiled egg that’s a little bigger than the bottle’s neck. When the bottle is cooled, the pressure difference between inside and outside forces the egg into the bottle. Suppose the bottle has a volume of 0.500 L and the temperature inside it is raised to 80.0°C while the pressure remains constant at 1.00 atm because the bottle is open.
(a) How many moles of air are inside?
(b) Now the egg is put in place, sealing the bottle. What is the gauge pressure inside after the air cools back to the ambient temperature of 25°C but before the egg is forced into the bottle?
31. A high-pressure gas cylinder contains 50.0 L of toxic gas at a pressure of $$\displaystyle 1.40×10^7N/m^2$$ and a temperature of 25.0°C. The cylinder is cooled to dry ice temperature (−78.5°C) to reduce the leak rate and pressure so that it can be safely repaired.
(a) What is the final pressure in the tank, assuming a negligible amount of gas leaks while being cooled and that there is no phase change?
(b) What is the final pressure if one-tenth of the gas escapes?
(c) To what temperature must the tank be cooled to reduce the pressure to 1.00 atm (assuming the gas does not change phase and that there is no leakage during cooling)?
(d) Does cooling the tank as in part (c) appear to be a practical solution?
32. Find the number of moles in 2.00 L of gas at 35.0°C and under $$\displaystyle 7.41×10^7N/m^2$$ of pressure.
33. Calculate the depth to which Avogadro’s number of table tennis balls would cover Earth. Each ball has a diameter of 3.75 cm. Assume the space between balls adds an extra 25.0% to their volume and assume they are not crushed by their own weight.
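A numeric sketch of problem 33 (not part of the original problem set). Avogadro's number and Earth's mean radius (6.371×10⁶ m) are assumed values not given in the problem:

```python
import math

# Problem 33 sketch: total volume of N_A balls (plus 25% for gaps)
# spread uniformly over Earth's surface area.
N_A = 6.022e23
R_EARTH = 6.371e6                     # m, assumed mean radius of Earth

r_ball = 0.0375 / 2                   # m, from the 3.75 cm diameter
v_ball = (4 / 3) * math.pi * r_ball**3
v_total = N_A * v_ball * 1.25         # +25.0% for the space between balls
area = 4 * math.pi * R_EARTH**2       # Earth's surface area
depth = v_total / area
print(f"depth ≈ {depth/1000:.1f} km") # on the order of 40 km
```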
34. (a) What is the gauge pressure in a 25.0°C car tire containing 3.60 mol of gas in a 30.0-L volume?
(b) What will its gauge pressure be if you add 1.00 L of gas originally at atmospheric pressure and 25.0°C ? Assume the temperature remains at 25.0°C and the volume remains constant.
### 2.2 Pressure, Temperature, and RMS Speed
In the problems in this section, assume all gases are ideal.
35. A person hits a tennis ball with a mass of 0.058 kg against a wall. The average component of the ball’s velocity perpendicular to the wall is 11 m/s, and the ball hits the wall every 2.1 s on average, rebounding with the opposite perpendicular velocity component.
(a) What is the average force exerted on the wall?
(b) If the part of the wall the person hits has an area of $$\displaystyle 3.0m^2$$, what is the average pressure on that area?
36. A person is in a closed room (a racquetball court) with $$\displaystyle V=453m^3$$ hitting a ball (m=42.0g) around at random without any pauses. The average kinetic energy of the ball is 2.30 J.
(a) What is the average value of $$\displaystyle v^2_x$$? Does it matter which direction you take to be x?
(b) Applying the methods of this chapter, find the average pressure on the walls.
(c) Aside from the presence of only one “molecule” in this problem, what is the main assumption in Pressure, Temperature, and RMS Speed that does not apply here?
37. Five bicyclists are riding at the following speeds: 5.4 m/s, 5.7 m/s, 5.8 m/s, 6.0 m/s, and 6.5 m/s. (a) What is their average speed? (b) What is their rms speed?
38. Some incandescent light bulbs are filled with argon gas. What is $$\displaystyle v_{rms}$$ for argon atoms near the filament, assuming their temperature is 2500 K?
39. Typical molecular speeds ($$\displaystyle v_{rms}$$) are large, even at low temperatures. What is $$\displaystyle v_{rms}$$ for helium atoms at 5.00 K, less than one degree above helium’s liquefaction temperature?
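A numeric sketch for problem 39 (not part of the original problem set), using textbook values of the Boltzmann constant and Avogadro's number:

```python
import math

# Problem 39 sketch: v_rms = sqrt(3 k_B T / m) for a single atom of mass m,
# where m is the molar mass divided by Avogadro's number.
k_B = 1.381e-23          # J/K
N_A = 6.022e23           # 1/mol

def v_rms(molar_mass_kg, T):
    m = molar_mass_kg / N_A
    return math.sqrt(3 * k_B * T / m)

v = v_rms(4.00e-3, 5.00)   # helium, M = 4.00 g/mol, T = 5.00 K
print(f"{v:.0f} m/s")      # still well over 100 m/s, even near liquefaction
```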
40. (a) What is the average kinetic energy in joules of hydrogen atoms on the 5500°C surface of the Sun?
(b) What is the average kinetic energy of helium atoms in a region of the solar corona where the temperature is $$\displaystyle 6.00×10^5K$$ ?
41. What is the ratio of the average translational kinetic energy of a nitrogen molecule at a temperature of 300 K to the gravitational potential energy of a nitrogen-molecule−Earth system at the ceiling of a 3-m-tall room with respect to the same system with the molecule at the floor?
42. What is the total translational kinetic energy of the air molecules in a room of volume $$\displaystyle 23m^3$$ if the pressure is $$\displaystyle 9.5×10^4Pa$$ (the room is at fairly high elevation) and the temperature is 21°C? Is any item of data unnecessary for the solution?
43. The product of the pressure and volume of a sample of hydrogen gas at 0.00°C is 80.0 J.
(a) How many moles of hydrogen are present?
(b) What is the average translational kinetic energy of the hydrogen molecules?
(c) What is the value of the product of pressure and volume at 200°C?
44. What is the gauge pressure inside a tank of $$\displaystyle 4.86×10^4mol$$ of compressed nitrogen with a volume of $$\displaystyle 6.56m^3$$ if the rms speed is 514 m/s?
45. If the rms speed of oxygen molecules inside a refrigerator of volume $$\displaystyle 22.0ft.^3$$ is 465 m/s, what is the partial pressure of the oxygen? There are 5.71 moles of oxygen in the refrigerator, and the molar mass of oxygen is 32.0 g/mol.
46. The escape velocity of any object from Earth is 11.1 km/s. At what temperature would oxygen molecules (molar mass is equal to 32.0 g/mol) have root-mean-square velocity $$\displaystyle v_{rms}$$ equal to Earth’s escape velocity of 11.1 km/s?
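A sketch of problem 46 (not part of the original problem set). Setting $$v_{rms}$$ equal to the escape speed and solving for T gives $$T = mv^2/3k_B$$:

```python
import math

# Problem 46 sketch: solve sqrt(3 k_B T / m) = v_esc for T.
k_B = 1.381e-23          # J/K
N_A = 6.022e23           # 1/mol

m_O2 = 32.0e-3 / N_A     # kg per O2 molecule
v_esc = 11.1e3           # m/s, the value given in the problem
T = m_O2 * v_esc**2 / (3 * k_B)
print(f"T ≈ {T:.3g} K")  # of order 1.6e5 K, far hotter than the atmosphere,
                         # which is why Earth retains its oxygen
```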
47. The escape velocity from the Moon is much smaller than that from the Earth, only 2.38 km/s. At what temperature would hydrogen molecules (molar mass is equal to 2.016 g/mol) have a root-mean-square velocity $$\displaystyle v_{rms}$$ equal to the Moon’s escape velocity?
48. Nuclear fusion, the energy source of the Sun, hydrogen bombs, and fusion reactors, occurs much more readily when the average kinetic energy of the atoms is high—that is, at high temperatures. Suppose you want the atoms in your fusion experiment to have average kinetic energies of $$\displaystyle 6.40×10^{−14}J$$. What temperature is needed?
49. Suppose that the typical speed ($$\displaystyle v_{rms}$$) of carbon dioxide molecules (molar mass is 44.0 g/mol) in a flame is found to be 1350 m/s. What temperature does this indicate?
50. (a) Hydrogen molecules (molar mass is equal to 2.016 g/mol) have $$\displaystyle v_{rms}$$ equal to 193 m/s. What is the temperature? (b) Much of the gas near the Sun is atomic hydrogen (H rather than $$\displaystyle H_2$$). Its temperature would have to be $$\displaystyle 1.5×10^7K$$ for the rms speed $$\displaystyle v_{rms}$$ to equal the escape velocity from the Sun. What is that velocity?
51. There are two important isotopes of uranium, $$\displaystyle ^{235}U$$ and $$\displaystyle ^{238}U$$; these isotopes are nearly identical chemically but have different atomic masses. Only $$\displaystyle ^{235}U$$ is very useful in nuclear reactors. Separating the isotopes is called uranium enrichment (and is often in the news as of this writing, because of concerns that some countries are enriching uranium with the goal of making nuclear weapons.) One of the techniques for enrichment, gas diffusion, is based on the different molecular speeds of uranium hexafluoride gas, $$\displaystyle UF_6$$.
(a) The molar masses of $$\displaystyle ^{235}UF_6$$ and $$\displaystyle ^{238}UF_6$$ are 349.0 g/mol and 352.0 g/mol, respectively. What is the ratio of their typical speeds $$\displaystyle v_{rms}$$?
(b) At what temperature would their typical speeds differ by 1.00 m/s?
(c) Do your answers in this problem imply that this technique may be difficult?
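A numeric sketch of problem 51 (not part of the original problem set). Since $$v_{rms}=\sqrt{3RT/M}$$, the ratio in (a) depends only on the molar masses, and (b) solves $$\sqrt{3RT}\,(1/\sqrt{M_{235}}-1/\sqrt{M_{238}})=1\text{ m/s}$$:

```python
import math

# Problem 51 sketch, using the hexafluoride molar masses given in (a).
R = 8.314                       # J/(mol*K)
M235, M238 = 0.3490, 0.3520     # kg/mol

ratio = math.sqrt(M238 / M235)                    # (a) v_235 / v_238
diff = 1 / math.sqrt(M235) - 1 / math.sqrt(M238)  # coefficient of sqrt(3RT)
T = (1.0 / diff) ** 2 / (3 * R)                   # (b) T for a 1 m/s gap
print(f"ratio = {ratio:.5f}, T ≈ {T:.0f} K")
# The ratio is barely above 1, which is why gas diffusion needs many stages.
```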
52. The partial pressure of carbon dioxide in the lungs is about 470 Pa when the total pressure in the lungs is 1.0 atm. What percentage of the air molecules in the lungs is carbon dioxide? Compare your result to the percentage of carbon dioxide in the atmosphere, about 0.033%.
53. Dry air consists of approximately 78% nitrogen, 21% oxygen, and 1% argon by mole, with trace amounts of other gases. A tank of compressed dry air has a volume of 1.76 cubic feet at a gauge pressure of 2200 pounds per square inch and a temperature of 293 K. How much oxygen does it contain in moles?
54. (a) Using data from the previous problem, find the mass of nitrogen, oxygen, and argon in 1 mol of dry air. The molar mass of $$\displaystyle N_2$$ is 28.0 g/mol, that of $$\displaystyle O_2$$ is 32.0 g/mol, and that of argon is 39.9 g/mol.
(b) Dry air is mixed with pentane ($$\displaystyle C_5H_{12}$$, molar mass 72.2 g/mol), an important constituent of gasoline, in an air-fuel ratio of 15:1 by mass (roughly typical for car engines). Find the partial pressure of pentane in this mixture at an overall pressure of 1.00 atm.
55. (a) Given that air is 21% oxygen, find the minimum atmospheric pressure that gives a relatively safe partial pressure of oxygen of 0.16 atm.
(b) What is the minimum pressure that gives a partial pressure of oxygen above the quickly fatal level of 0.06 atm?
(c) The air pressure at the summit of Mount Everest (8848 m) is 0.334 atm. Why have a few people climbed it without oxygen, while some who have tried, even though they had trained at high elevation, had to turn back?
56. (a) If the partial pressure of water vapor is 8.05 torr, what is the dew point? (760 torr = 1 atm = 101,325 Pa)
(b) On a warm day when the air temperature is 35°C and the dew point is 25°C, what are the partial pressure of the water in the air and the relative humidity?
### 2.3 Heat Capacity and Equipartition of Energy
57. To give a helium atom nonzero angular momentum requires about 21.2 eV of energy (that is, 21.2 eV is the difference between the energies of the lowest-energy or ground state and the lowest-energy state with angular momentum). The electron-volt or eV is defined as $$\displaystyle 1.60×10^{−19}J.$$ Find the temperature T where this amount of energy equals $$\displaystyle k_BT/2$$. Does this explain why we can ignore the rotational energy of helium for most purposes? (The results for other monatomic gases, and for diatomic gases rotating around the axis connecting the two atoms, have comparable orders of magnitude.)
58. (a) How much heat must be added to raise the temperature of 1.5 mol of air from 25.0°C to 33.0°C at constant volume? Assume air is completely diatomic.
(b) Repeat the problem for the same number of moles of xenon, Xe.
59. A sealed, rigid container of 0.560 mol of an unknown ideal gas at a temperature of 30.0°C is cooled to −40.0°C. In the process, 980 J of heat are removed from the gas. Is the gas monatomic, diatomic, or polyatomic?
60. A sample of neon gas (Ne, molar mass M=20.2g/mol) at a temperature of 13.0°C is put into a steel container of mass 47.2 g that’s at a temperature of −40.0°C. The final temperature is −28.0°C. (No heat is exchanged with the surroundings, and you can neglect any change in the volume of the container.) What is the mass of the sample of neon?
61. A steel container of mass 135 g contains 24.0 g of ammonia, $$\displaystyle NH_3$$, which has a molar mass of 17.0 g/mol. The container and gas are in equilibrium at 12.0°C. How much heat has to be removed to reach a temperature of −20.0°C? Ignore the change in volume of the steel.
62. A sealed room has a volume of $$\displaystyle 24m^3$$. It’s filled with air, which may be assumed to be diatomic, at a temperature of 24°C and a pressure of $$\displaystyle 9.83×10^4Pa$$. A 1.00-kg block of ice at its melting point is placed in the room. Assume the walls of the room transfer no heat. What is the equilibrium temperature?
63. Heliox, a mixture of helium and oxygen, is sometimes given to hospital patients who have trouble breathing, because the low mass of helium makes it easier to breathe than air. Suppose helium at 25°C is mixed with oxygen at 35°C to make a mixture that is 70% helium by mole. What is the final temperature? Ignore any heat flow to or from the surroundings, and assume the final volume is the sum of the initial volumes.
64. Professional divers sometimes use heliox, consisting of 79% helium and 21% oxygen by mole. Suppose a perfectly rigid scuba tank with a volume of 11 L contains heliox at an absolute pressure of $$\displaystyle 2.1×10^7Pa$$ at a temperature of 31°C.
(a) How many moles of helium and how many moles of oxygen are in the tank?
(b) The diver goes down to a point where the sea temperature is 27°C while using a negligible amount of the mixture. As the gas in the tank reaches this new temperature, how much heat is removed from it?
65. In car racing, one advantage of mixing liquid nitrous oxide ($$\displaystyle N_2O$$) with air is that the boiling of the “nitrous” absorbs latent heat of vaporization and thus cools the air and ultimately the fuel-air mixture, allowing more fuel-air mixture to go into each cylinder. As a very rough look at this process, suppose 1.0 mol of nitrous oxide gas at its boiling point, −88°C, is mixed with 4.0 mol of air (assumed diatomic) at 30°C. What is the final temperature of the mixture? Use the measured heat capacity of $$\displaystyle N_2O$$ at 25°C, which is 30.4J/mol°C. (The primary advantage of nitrous oxide is that it consists of 1/3 oxygen, which is more than air contains, so it supplies more oxygen to burn the fuel. Another advantage is that its decomposition into nitrogen and oxygen releases energy in the cylinder.)
### 2.4 Distribution of Molecular Speeds
66. In a sample of hydrogen sulfide (M=34.1g/mol) at a temperature of $$\displaystyle 3.00×10^2K$$, estimate the ratio of the number of molecules that have speeds very close to $$\displaystyle v_{rms}$$ to the number that have speeds very close to $$\displaystyle 2v_{rms}$$.
67. Using the approximation $$\displaystyle ∫^{v_1+Δv}_{v_1}f(v)dv≈f(v_1)Δv$$ for small Δv, estimate the fraction of nitrogen molecules at a temperature of $$\displaystyle 3.00×10^2K$$ that have speeds between 290 m/s and 291 m/s.
68. Using the method of the preceding problem, estimate the fraction of nitric oxide (NO) molecules at a temperature of 250 K that have energies between $$\displaystyle 3.45×10^{−21}J$$ and $$\displaystyle 3.50×10^{−21}J$$.
69. By counting squares in the following figure, estimate the fraction of argon atoms at T=300K that have speeds between 600 m/s and 800 m/s. The curve is correctly normalized. The value of a square is its length as measured on the x-axis times its height as measured on the y-axis, with the units given on those axes.
70. Using a numerical integration method such as Simpson’s rule, find the fraction of molecules in a sample of oxygen gas at a temperature of 250 K that have speeds between 100 m/s and 150 m/s. The molar mass of oxygen ($$\displaystyle O_2$$) is 32.0 g/mol. A precision to two significant digits is enough.
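Problem 70 explicitly calls for numerical integration, so here is one possible sketch (not part of the original problem set) using a plain composite Simpson's rule with no external libraries, and the standard Maxwell-Boltzmann speed distribution:

```python
import math

# Problem 70 sketch: integrate the Maxwell-Boltzmann speed distribution
# f(v) = (4/sqrt(pi)) a^(3/2) v^2 exp(-a v^2), with a = m/(2 k_B T),
# over 100..150 m/s using composite Simpson's rule.
k_B = 1.381e-23
N_A = 6.022e23

def maxwell(v, M, T):
    m = M / N_A
    a = m / (2 * k_B * T)
    return 4 / math.sqrt(math.pi) * a**1.5 * v**2 * math.exp(-a * v**2)

def simpson(f, lo, hi, n=100):     # n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

frac = simpson(lambda v: maxwell(v, 32.0e-3, 250.0), 100.0, 150.0)
print(f"fraction ≈ {frac:.3f}")    # a few percent of the molecules
```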
71. Find (a) the most probable speed,
(b) the average speed, and
(c) the rms speed for nitrogen molecules at 295 K.
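A sketch of problem 71 (not part of the original problem set), using the molar forms of the three characteristic speeds with R and M; the 28.0 g/mol molar mass for $$N_2$$ is the one given in problem 21:

```python
import math

# Problem 71 sketch: the three characteristic speeds of the
# Maxwell-Boltzmann distribution for N2 at 295 K.
R = 8.314                # J/(mol*K)
M = 28.0e-3              # kg/mol for N2
T = 295.0

v_p   = math.sqrt(2 * R * T / M)               # most probable speed
v_avg = math.sqrt(8 * R * T / (math.pi * M))   # average speed
v_rms = math.sqrt(3 * R * T / M)               # rms speed
print(f"v_p ≈ {v_p:.0f}, v_avg ≈ {v_avg:.0f}, v_rms ≈ {v_rms:.0f} m/s")
# The ordering v_p < v_avg < v_rms always holds for this distribution.
```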
72. Repeat the preceding problem for nitrogen molecules at 2950 K.
73. At what temperature is the average speed of carbon dioxide molecules (M=44.0g/mol) 510 m/s?
74. The most probable speed for molecules of a gas at 296 K is 263 m/s. What is the molar mass of the gas? (You might like to figure out what the gas is likely to be.)
75. (a) At what temperature do oxygen molecules have the same average speed as helium atoms (M=4.00g/mol) have at 300 K?
(b) What is the answer to the same question about most probable speeds?
(c) What is the answer to the same question about rms speeds?
76. (a) In the deep space between galaxies, the density of molecules (which are mostly single atoms) can be as low as $$\displaystyle 10^6atoms/m^3$$, and the temperature is a frigid 2.7 K. What is the pressure?
(b) What volume (in $$\displaystyle m^3$$) is occupied by 1 mol of gas?
(c) If this volume is a cube, what is the length of its sides in kilometers?
77. (a) Find the density in SI units of air at a pressure of 1.00 atm and a temperature of 20°C, assuming that air is 78% $$\displaystyle N_2$$, 21% $$\displaystyle O_2$$, and 1% Ar.
(b) Find the density of the atmosphere on Venus, assuming that it’s 96% $$\displaystyle CO_2$$ and 4% $$\displaystyle N_2$$, with a temperature of 737 K and a pressure of 92.0 atm.
78. The air inside a hot-air balloon has a temperature of 370 K and a pressure of 101.3 kPa, the same as that of the air outside. Using the composition of air as 78% $$\displaystyle N_2$$, 21% $$\displaystyle O_2$$, and 1% Ar, find the density of the air inside the balloon.
79. When an air bubble rises from the bottom to the top of a freshwater lake, its volume increases by 80%. If the temperatures at the bottom and the top of the lake are 4.0 and 10 °C, respectively, how deep is the lake?
80. (a) Use the ideal gas equation to estimate the temperature at which 1.00 kg of steam (molar mass M=18.0g/mol) at a pressure of $$\displaystyle 1.50×10^6Pa$$ occupies a volume of $$\displaystyle 0.220m^3$$.
(b) The van der Waals constants for water are $$\displaystyle a=0.5537Pa⋅m^6/mol^2$$ and $$\displaystyle b=3.049×10^{−5}m^3/mol$$. Use the Van der Waals equation of state to estimate the temperature under the same conditions.
(c) The actual temperature is 779 K. Which estimate is better?
81. One process for decaffeinating coffee uses carbon dioxide (M=44.0g/mol) at a molar density of about $$\displaystyle 14,600mol/m^3$$ and a temperature of about 60°C.
(a) Is $$\displaystyle CO_2$$ a solid, liquid, gas, or supercritical fluid under those conditions?
(b) The van der Waals constants for carbon dioxide are $$\displaystyle a=0.3658Pa⋅m^6/mol^2$$ and $$\displaystyle b=4.286×10^{−5}m^3/mol$$. Using the van der Waals equation, estimate the pressure of $$\displaystyle CO_2$$ at that temperature and density.
82. On a winter day when the air temperature is 0°C, the relative humidity is 50%. Outside air comes inside and is heated to a room temperature of 20°C. What is the relative humidity of the air inside the room? (Does this problem show why inside air is so dry in winter?)
83. On a warm day when the air temperature is 30°C, a metal can is slowly cooled by adding bits of ice to liquid water in it. Condensation first appears when the can reaches 15°C. What is the relative humidity of the air?
84. (a) People often think of humid air as “heavy.” Compare the densities of air with 0% relative humidity and 100% relative humidity when both are at 1 atm and 30°C. Assume that the dry air is an ideal gas composed of molecules with a molar mass of 29.0 g/mol and the moist air is the same gas mixed with water vapor.
(b) As discussed in the chapter on the applications of Newton’s laws, the air resistance felt by projectiles such as baseballs and golf balls is approximately $$\displaystyle F_D=CρAv^2/2$$, where ρ is the mass density of the air, A is the cross-sectional area of the projectile, and C is the projectile’s drag coefficient. For a fixed air pressure, describe qualitatively how the range of a projectile changes with the relative humidity.
(c) When a thunderstorm is coming, usually the humidity is high and the air pressure is low. Do those conditions give an advantage or disadvantage to home-run hitters?
85. The mean free path for helium at a certain temperature and pressure is $$\displaystyle 2.10×10^{−7}m$$. The radius of a helium atom can be taken as $$\displaystyle 1.10×10^{−11}m$$. What is the measure of the density of helium under those conditions
(a) in molecules per cubic meter and
(b) in moles per cubic meter?
86. The mean free path for methane at a temperature of 269 K and a pressure of $$\displaystyle 1.11×10^5Pa$$ is $$\displaystyle 4.81×10^{−8}m$$. Find the effective radius r of the methane molecule.
87. In the chapter on fluid mechanics, Bernoulli’s equation for the flow of incompressible fluids was explained in terms of changes affecting a small volume dV of fluid. Such volumes are a fundamental idea in the study of the flow of compressible fluids such as gases as well. For the equations of hydrodynamics to apply, the mean free path must be much less than the linear size of such a volume, $$\displaystyle a≈dV^{1/3}$$. For air in the stratosphere at a temperature of 220 K and a pressure of 5.8 kPa, how big should a be for it to be 100 times the mean free path? Take the effective radius of air molecules to be $$\displaystyle 1.88×10^{−11}m$$, which is roughly correct for $$\displaystyle N_2$$.
88. Find the total number of collisions between molecules in 1.00 s in 1.00 L of nitrogen gas at standard temperature and pressure (0°C, 1.00 atm). Use $$\displaystyle 1.88×10^{−10}m$$ as the effective radius of a nitrogen molecule. (The number of collisions per second is the reciprocal of the collision time.) Keep in mind that each collision involves two molecules, so if one molecule collides once in a certain period of time, the collision of the molecule it hit cannot be counted.
89. (a) Estimate the specific heat capacity of sodium from the Law of Dulong and Petit. The molar mass of sodium is 23.0 g/mol.
(b) What is the percent error of your estimate from the known value, 1230J/kg⋅°C?
90. A sealed, perfectly insulated container contains 0.630 mol of air at 20.0°C and an iron stirring bar of mass 40.0 g. The stirring bar is magnetically driven to a kinetic energy of 50.0 J and allowed to slow down by air resistance. What is the equilibrium temperature?
91. Find the ratio $$\displaystyle f(v_p)/f(v_{rms})$$ for hydrogen gas (M=2.02g/mol) at a temperature of 77.0 K.
92. Unreasonable results. (a) Find the temperature of 0.360 kg of water, modeled as an ideal gas, at a pressure of $$\displaystyle 1.01×10^5Pa$$ if it has a volume of $$\displaystyle 0.615m^3$$.
93. Unreasonable results. (a) Find the average speed of hydrogen sulfide, $$\displaystyle H_2S$$, molecules at a temperature of 250 K. Its molar mass is 34.1 g/mol.
(b) The result isn’t very unreasonable, but why is it less reliable than those for, say, neon or nitrogen?
## Challenge Problems
94. An airtight dispenser for drinking water is 25cm×10cm in horizontal dimensions and 20 cm tall. It has a tap of negligible volume that opens at the level of the bottom of the dispenser. Initially, it contains water to a level 3.0 cm from the top and air at the ambient pressure, 1.00 atm, from there to the top. When the tap is opened, water will flow out until the gauge pressure at the bottom of the dispenser, and thus at the opening of the tap, is 0. What volume of water flows out? Assume the temperature is constant, the dispenser is perfectly rigid, and the water has a constant density of $$\displaystyle 1000kg/m^3$$.
95. Eight bumper cars, each with a mass of 322 kg, are running in a room 21.0 m long and 13.0 m wide. They have no drivers, so they just bounce around on their own. The rms speed of the cars is 2.50 m/s. Repeating the arguments of Pressure, Temperature, and RMS Speed, find the average force per unit length (analogous to pressure) that the cars exert on the walls.
96. Verify that $$\displaystyle v_p=\sqrt{\frac{2k_BT}{m}}$$.
97. Verify the normalization equation $$\displaystyle ∫^∞_0f(v)dv=1$$. In doing the integral, first make the substitution $$\displaystyle u=\sqrt{\frac{m}{2k_BT}}v=\frac{v}{v_p}$$. This “scaling” transformation gives you all features of the answer except for the integral, which is a dimensionless numerical factor. You’ll need the formula $$\displaystyle ∫^∞_0x^2e^{−x^2}dx=\frac{\sqrt{π}}{4}$$ to find the numerical factor and verify the normalization.
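A sketch of the substitution in problem 97 (not part of the original problem set), assuming the standard Maxwell-Boltzmann form of f(v):

```latex
\int_0^\infty f(v)\,dv
  = \frac{4}{\sqrt{\pi}} \left(\frac{m}{2k_B T}\right)^{3/2}
    \int_0^\infty v^2 e^{-mv^2/2k_B T}\,dv
  \stackrel{u \,=\, v/v_p}{=}
    \frac{4}{\sqrt{\pi}} \int_0^\infty u^2 e^{-u^2}\,du
  = \frac{4}{\sqrt{\pi}} \cdot \frac{\sqrt{\pi}}{4} = 1 .
```

As promised in the problem, the substitution strips out all the physical constants, leaving only the dimensionless factor supplied by the given formula.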
98. Verify that $$\displaystyle \bar{v}=\sqrt{\frac{8}{π}\frac{k_BT}{m}}$$. Make the same scaling transformation as in the preceding problem.
99. Verify that $$\displaystyle v_{rms}=\sqrt{\bar{v^2}}=\sqrt{\frac{3k_BT}{m}}$$.
https://www.lessonplanet.com/teachers/printing-the-letter-s-english-language-arts-pre-k-1st

# Printing the Letter S
In this printing the letter S worksheet, students practice their printing skills as they trace the uppercase and lowercase letters on the worksheet.
http://mathhelpforum.com/statistics/53084-probability-tree-diagram-problem.html

# Math Help - [Probability] – A tree diagram problem
1. ## [Probability] – A tree diagram problem
A certain statistician’s breakfast consists of either some cereal or toast (but not both) to eat and one drink from a choice of fruit juice, tea or coffee. If he has cereal to eat, the probability that he chooses coffee is 3/5 and the probability he chooses tea is 3/10. If he has toast to eat, the probability he chooses coffee is 2/5 and the probability he chooses tea is 1/5.
Given that he has cereal with probability ¾.
(a) find the probability that on any particular day he has
(i) fruit juice (ii) cereal & coffee.
(b) Find his most popular breakfast combination.
I draw a tree diagram like this.
Is it right? If wrong then how could I draw this?
Then now what to do?
Attached Thumbnails
2. Originally Posted by geton
A certain statistician’s breakfast consists of either some cereal or toast (but not both) to eat and one drink from a choice of fruit juice, tea or coffee. If he has cereal to eat, the probability that he chooses coffee is 3/5 and the probability he chooses tea is 3/10. If he has toast to eat, the probability he chooses coffee is 2/5 and the probability he chooses tea is 1/5.
Given that he has cereal with probability ¾.
(a) find the probability that on any particular day he has
(i) fruit juice (ii) cereal & coffee.
(b) Find his most popular breakfast combination.
I draw a tree diagram like this.
Is it right? If wrong then how could I draw this?
Then now what to do?
Your tree diagram is correct. Where are you stuck using it? For (b), use it to find the combination that has the largest probability.
3. Originally Posted by mr fantastic
Your tree diagram is correct. Where are you stuck using it? For (b), use it to find the combination that has the largest probability.
Thanks, I got it.
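Not from the thread itself: a short script that walks the same tree diagram numerically, using exact fractions so the branch probabilities can be checked against a hand calculation.

```python
from fractions import Fraction as F

# Tree-diagram check for the breakfast problem above.
p_cereal = F(3, 4)
p_toast = 1 - p_cereal
# Drink probabilities conditional on the food choice; the juice branch is
# whatever probability remains after coffee and tea.
drinks_given_cereal = {"coffee": F(3, 5), "tea": F(3, 10)}
drinks_given_toast  = {"coffee": F(2, 5), "tea": F(1, 5)}
for d in (drinks_given_cereal, drinks_given_toast):
    d["juice"] = 1 - d["coffee"] - d["tea"]

# Joint probability of every food + drink combination.
joint = {("cereal", k): p_cereal * v for k, v in drinks_given_cereal.items()}
joint.update({("toast", k): p_toast * v for k, v in drinks_given_toast.items()})

p_juice = joint[("cereal", "juice")] + joint[("toast", "juice")]
best = max(joint, key=joint.get)
print(p_juice, joint[("cereal", "coffee")], best)
# 7/40 9/20 ('cereal', 'coffee')
```

So (a)(i) is 7/40, (a)(ii) is 9/20, and (b) the most popular combination is cereal with coffee.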
https://orbit.dtu.dk/en/publications/micromechanical-models-of-metallic-sponges-with-hollow-struts

# Micromechanical Models of Metallic Sponges with Hollow Struts
Thomas Daxner, Robert Bitsche, Helmut J. Böhm
Research output: Contribution to journal › Journal article › Research › peer-review
## Abstract
Coating of a precursor structure, which is subsequently removed by chemical or thermal treatment, is a technology for producing cellular materials with interesting properties, for example in the form of metallic sponges with hollow struts. In this paper idealized models for determining the effective elastic properties of such materials are presented. The chosen models for the structures are space-filling, periodically repeating unit cell models based on idealized models of wet foams, which were generated with the program ‘Surface Evolver’. The underlying topology is that of a Weaire-Phelan structure. The geometry of the micro-structures can be described by two principal parameters, viz. the volume fraction of solid material in the precursor structures, which determines the shape of the final structures, and the thickness of the metallic coating, which defines their apparent density. The influence of these two parameters on the macro-mechanical behavior is investigated. The elastic properties of the micro-structures are described by three independent elastic constants owing to overall cubic material symmetry. The dependence of the effective Young’s modulus on the direction of uniaxial loading is investigated, and the elastic anisotropy of the structures is evaluated.
Original language: English
Journal: Materials Science Forum
Volume: 539-543
Pages: 1857-1862
ISSN: 0255-5476
DOI: https://doi.org/10.4028/www.scientific.net/MSF.539-543.1857
Publication status: Published - 2007
Externally published: Yes
https://www.physicsforums.com/threads/mean-value-theorem-mvt-to-prove-the-inequality.267642/ | # Mean value theorem(mvt) to prove the inequality
1. Oct 28, 2008
### Khayyam89
1. The problem statement, all variables and given/known data
Essentially, the question asks to use the mean value theorem (MVT) to prove the inequality $|\sin a - \sin b| \leq |a - b|$ for all $a$ and $b$.
3. The attempt at a solution
I do not have a graphing calculator nor can I use one for this problem, so I need to prove the inequality analytically. What I did was to look at the MVT hypotheses: the function must be continuous on the closed interval and differentiable on the open interval $(a,b)$. However, the problem I am having is that I am getting thrown off by the absolute values and the fact that I've never used the MVT on inequalities. I know the absolute value of the sine will look like a sequence of upside-down cups with vertical tangents between them. Hints most appreciated.
2. Oct 28, 2008
### dirk_mec1
The mean value theorem states that in an interval [a,b]:
$$f'(c) = \frac{ \sin b - \sin a}{b-a} = \cos(c)$$
Now put absolute value signs there and make use of $$|\cos(x)| \leq 1$$
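For completeness, combining the two hints finishes the proof:

```latex
% MVT applied to f(x) = \sin x on [a, b] gives some c between a and b with
\sin b - \sin a = \cos(c)\,(b - a)
\;\Longrightarrow\;
|\sin b - \sin a| = |\cos(c)|\,|b - a| \le |b - a|,
\quad \text{since } |\cos(c)| \le 1 .
```

(For $a = b$ the inequality holds trivially, so the MVT only needs to cover $a \neq b$.)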
http://mathoverflow.net/questions/42557?sort=oldest | ## Computing the hopf invariant (without integration or homology, as in Milnor) of the hopf map
In exercise 15 of Milnor's Topology from a Differentiable Viewpoint, one is asked to compute the Hopf invariant of the Hopf map. The way one is supposed to do this is to compute the linking number of two of the fibres, but Milnor doesn't define the linking number in terms of an integral. He says to compute it as the degree of the map $\frac{x-y}{||x-y||}$ from the product of two compact oriented boundaryless manifolds embedded in $\mathbf{R}^{k+1}$ to the sphere of dimension $k$ where the sum of the dimension of the manifolds is $k$.
I'm aware of other ways to compute the Hopf invariant by using deRham cohomology (see Bott and Tu, for instance), but I'm curious how one is actually supposed to do it by hand. Is there a particularly concrete way to compute the linking number without using this other machinery? Most of the other exercises in the book have cute little solutions, but is that true of this problem?
(Not homework!!)
The degree of a map $f : M \to N$ provided $M$ and $N$ are compact, orientable and of the same dimension is given by an integral. Think about $\int_M f^* \omega$, provided $\int_N \omega = 1$. – Ryan Budney Oct 17 2010 at 23:45
Sure, but integration is not covered in the book, and all of the other exercises only use material covered in the book. – Harry Gindi Oct 17 2010 at 23:59
This is not research level. I voted to close. – Andy Putman Oct 18 2010 at 0:35
People have asked problems from Atiyah-MacDonald here before, and this is certainly more research-level than those. – Harry Gindi Oct 18 2010 at 1:08
Atiyah-MacDonald and Milnor's "Topology from the Differentiable Viewpoint" are at similar levels (1st year grad), though Milnor is maybe a little easier. However, AM contains a couple of exercises that are notoriously difficult (even for experts) and thus are borderline appropriate. Milnor does not, and what you asked is absolutely standard 1st year graduate topology. – Andy Putman Oct 18 2010 at 3:26
If you have the Hopf link embedded in some standard way in $\mathbb{R}^3$, you can see the linking number as given by the degree of a map $S^1 \times S^1 \to S^2$ in a number of ways. For instance, the pre-image of the north pole in $S^2$ consists of pairs of points stacked vertically above each other, i.e., crossings between the two components in the knot diagram given by projection to the $xy$ plane. (Crossings will correspond to preimages of the north pole or south pole, depending on your conventions.) For the standard diagram for the Hopf link, there's only one crossing that counts. The hard part from this point of view is getting the orientation right (is the Hopf invariant $-1$ or $+1$?), but that can be done with care and attention.
what's a standard way to embed the hopf link in $\mathbf{R}^3$? – Harry Gindi Oct 18 2010 at 0:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8250935673713684, "perplexity": 219.7797803120189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383218/warc/CC-MAIN-20130516092623-00053-ip-10-60-113-184.ec2.internal.warc.gz"} |
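One standard embedding takes two round unit circles in orthogonal planes, each passing through the other's center. As an illustrative numerical cross-check (using the Gauss linking integral, i.e. exactly the integral formulation the original question tries to avoid), a plain-Python Riemann sum over this embedding returns linking number ±1, with the sign depending on orientation conventions:

```python
from math import cos, sin, pi

def gauss_linking_number(n=200):
    """Riemann sum for Lk = (1/4pi) * double integral of
    det[g1'(s), g2'(t), g1(s) - g2(t)] / |g1(s) - g2(t)|^3 ds dt
    over a standard Hopf-link embedding."""
    h = 2 * pi / n
    total = 0.0
    for i in range(n):
        s = i * h
        p1 = (cos(s), sin(s), 0.0)            # unit circle in the xy-plane
        d1 = (-sin(s), cos(s), 0.0)
        for j in range(n):
            t = j * h
            p2 = (1.0 + cos(t), 0.0, sin(t))  # unit circle in the xz-plane through (0,0,0)
            d2 = (-sin(t), 0.0, cos(t))
            r = (p1[0] - p2[0], p1[1] - p2[1], p1[2] - p2[2])
            # triple product det[d1, d2, r]
            cx = d1[1] * d2[2] - d1[2] * d2[1]
            cy = d1[2] * d2[0] - d1[0] * d2[2]
            cz = d1[0] * d2[1] - d1[1] * d2[0]
            num = cx * r[0] + cy * r[1] + cz * r[2]
            den = (r[0] ** 2 + r[1] ** 2 + r[2] ** 2) ** 1.5
            total += num / den
    return total * h * h / (4 * pi)

lk = gauss_linking_number()
print(round(abs(lk)))   # 1
```

Since both curves are smooth, disjoint, and periodic, the trapezoid-type sum converges very quickly, so a modest grid already gives the integer answer.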
http://mathhelpforum.com/differential-geometry/141874-zeros-sum-two-matrix-valued-polynomials-print.html | # zeros of sum of two (matrix-valued!) polynomials
• Apr 28th 2010, 05:05 AM
choovuck
zeros of sum of two (matrix-valued!) polynomials
hi,
I have a nontrivial problem. Let f(z) and g(z) be two (possibly monic) matrix-valued polynomials (i.e. of the form $\sum_{j=0}^n A_j z^j$ with $A_j$ being $m\times m$ matrices, and $z\in\mathbb{C}$). Assume that det(f(z)) and det(g(z)) have all real and negative roots. Assume det(f(z)+g(z)) has all roots real.
prove (or disprove) that det(f(z)+g(z)) cannot have a positive zero.
for the case of dimension 1, it's easy since f(z) and g(z) are positive for z>0, and so f(z)+g(z) is positive for z>0, so all the roots are negative.
any suggestion/idea/reference would be highly appreciated...
• Apr 28th 2010, 06:18 AM
Laurent
Quote:
Originally Posted by choovuck
hi,
I have a nontrivial problem. let f(z) and g(z) be two (possibly monic) matrix-valued polynomials (i.e. of the form $\sum_{j=0}^n A_j z^j$ with $A_j$ being $m\times m$ matrices, and $z\in\mathbb{C}$). assume that det(f(z)) and det(g(z)) has all real and negative roots. assume det(f(z)+g(z)) has all roots real.
prove (or disprove) that det(f(z)+g(z)) cannot have a positive zero.
A piece of contribution, perhaps:
In dimension at least 2, you can easily find situations where $\det (f(z)+g(z))$ is identically zero. For instance, in dimension 2, choose $f(z)$ satisfying the assumptions, and define $g(z)$ by exchanging the columns of $f(z)$. Then the roots of their determinants are the same (hence negative), while $f(z)+g(z)$ has two identical columns hence is singular for any $z$. But maybe this is not allowed since all complex numbers are roots of $\det(f(z)+g(z))$...
You should specify what you mean by "monic" in your problem. There is no obvious definition, I think. In dimension 1, if you discard this hypothesis, the theorem becomes clearly false ( $(2z+1)+(-z-2)=z-1$ has a positive root).
• Apr 28th 2010, 06:48 PM
choovuck
Quote:
Originally Posted by Laurent
A piece of contribution, perhaps:
In dimension at least 2, you can easily find situations where $\det (f(z)+g(z))$ is identically zero. For instance, in dimension 2, choose $f(z)$ satisfying the assumptions, and define $g(z)$ by exchanging the columns of $f(z)$. Then the roots of their determinants are the same (hence negative), while $f(z)+g(z)$ has two identical columns hence is singular for any $z$. But maybe this is not allowed since all complex numbers are roots of $\det(f(z)+g(z))$...
You should specify what you mean by "monic" in your problem. There is no obvious definition, I think. In dimension 1, if you discard this hypothesis, the theorem becomes clearly false ( $(2z+1)+(-z-2)=z-1$ has a positive root).
by monic I mean $\sum_{j=0}^n A_j z^j$ with $A_n=I$, identity matrix. Then your counterexample doesn't really work, since after interchanging columns, we obtain $A_n$ not being identity anymore.
unfortunately I found another counterexample :( it's painfully simple. basically if the eigenvalues of two matrices are negative, then the eigenvalues of their sum might be ugly. the explicit example could be something like this: take $f(z)=z-\left(\begin{array}{cc} -2 &0\\0& -1 \end{array}\right)$ having zeros -1 and -2. take $g(z)=z-\left(\begin{array}{cc} -c &-1-c^2\\1& c \end{array}\right) \left(\begin{array}{cc} -1 &0\\0& -2 \end{array}\right) \left(\begin{array}{cc} -c &-1-c^2\\1& c \end{array}\right)^{-1}$ having zeros -1,-2. Then the sum is $2z-T$, where T is a matrix with determinant (using Mathematica) $8-c^2$. So taking $c=2\sqrt2$ will make 0 a root.
...that sucks :(
thanks anyway | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 32, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9899100661277771, "perplexity": 639.3866360699886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121752.57/warc/CC-MAIN-20170423031201-00548-ip-10-145-167-34.ec2.internal.warc.gz"} |
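A quick plain-Python check of the counterexample above (2x2 helpers only): it confirms det T = 8 - c^2, so at c = 2*sqrt(2) the determinant of f(z) + g(z) vanishes at z = 0, even though det f and det g have only the negative roots -1 and -2.

```python
from math import sqrt

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

c = 2 * sqrt(2)
A = [[-2.0, 0.0], [0.0, -1.0]]                    # f(z) = z*I - A, zeros -1 and -2
M = [[-c, -1 - c * c], [1.0, c]]                  # conjugating matrix (det M = 1)
B = mat_mul(mat_mul(M, [[-1.0, 0.0], [0.0, -2.0]]), mat_inv(M))  # g(z) = z*I - B
T = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]    # f + g = 2z*I - T
print(abs(det(T)) < 1e-9)   # True: det T = 8 - c^2 = 0, so z = 0 is a root
```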
http://math.stackexchange.com/users/957/user957?tab=activity&sort=comments | user957
less info
reputation
617
bio website location age member for 3 years, 4 months seen Aug 31 '12 at 23:18 profile views 129
- Aug 31, "simple inclusion exclusion problem": Cool. Thanks a lot!
- May 18, "how to solve double integral of a min function": cool thank you.
- May 18, "how to solve double integral of a min function": yes that is correct
- May 9, "How to evaluate the following stochastic integral?": I got this far but I am having trouble with substituting the Poisson process $N_s$ into it.
- Apr 3, "leading and lagging moving average indicator": Thanks. In the context of stocks, it is not possible to compute a leading or central moving average as we do not know the prices in advance. Correct? So what does leading/central MA mean in that context?
- Feb 25, "example on variance of stochastic processes": got it. linearity of covariance was new to me.
- Feb 13, "spectrum and phase of function in frequency domain": Got it guys. Once I have the Fourier transform computed analytically, I can easily set up the complex vector and compute the amplitude and phase. R has support for complex numbers (ugrad.stat.ubc.ca/R/library/base/html/complex.html) and includes functions for finding amplitude and phase. Thanks all.
- Feb 13, "spectrum and phase of function in frequency domain": Sweet. Do you happen to come across a function in R that does this similar to matlab?
- Feb 13, "spectrum and phase of function in frequency domain": reference: en.wikipedia.org/wiki/Fourier_transform
- Feb 13, "spectrum and phase of function in frequency domain": Thanks. But I am using R and I don't think it has these functions. I am looking for a mathematical formula for finding magnitude/phase so that I can write my own functions.
- Dec 11, "problem on strong law of large numbers": Excellent. So we apply SLLN on the denominator and CLT on the numerator.
- Dec 9, "problem on strong law of large numbers": How do you get this result?
- Dec 9, "what does this set definition mean defined on independent random variables?": It is also interesting to see that if I introduce two additional sets, defined as $A2 = (X2=X3)$ and $A3 = (X3=X1)$, then $A1$, $A2$ and $A3$ are pairwise independent but not independent.
- Dec 9, "martingale and filtration": Yes I did Shai. But need better understanding of filtration in continuous-time martingale
- Dec 8, "distribution of iid sequence of integrable random variables": Yes, it uses conditional expectation and independence. It's not too difficult after all.
- Dec 7, "How do I nominate someone?": I would like to nominate Qiaochu Yuan and Arturo Magidin
- Dec 7, "application of strong vs weak law of large numbers": Can you give an example where the weak law holds but the strong law does not hold?
- Nov 4, "convergence of sequence of random variables": Wiki page mentions convergence in mean implies convergence in probability. Why is that?
- Oct 25, "combination of brownian motion": Thanks. I have a stock price function that is a stochastic process (e.g. $S = S_0 + B_t$). Now I am interested in finding various option values over those stock prices which involves finding the expectation. So to find the asian call value, I need to find $E(\frac{S_1+S_2}{2} - K)^+$ which requires finding the density function of $B_1+B_2$, computed by differentiating the distribution function. Hope this makes sense.
- Oct 25, "Lebesgue Dominated Convergence example": Thanks Arturo. The function $f_n(x)$ converges to 0 as $n\to\infty$. So it converges pointwise to 0. So the integral as $n\to\infty$ should evaluate to 0 by DCT.
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9608342051506042, "perplexity": 577.0210647705248}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164026161/warc/CC-MAIN-20131204133346-00028-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://www.math.uni-luebeck.de/veranstaltungen/vortraege/boehm.php | ## Shrinkage estimation in the frequency domain of multivariate time series
In this talk on developing shrinkage for spectral analysis of multivariate time series of high dimensionality, we propose a new non-parametric estimator of the spectral matrix with two appealing properties. First, compared to the traditional smoothed periodogram our shrinkage estimator has a smaller L2 risk. Second, the proposed shrinkage estimator is more numerically stable due to a smaller condition number. We use the concept of "Kolmogorov" asymptotics where simultaneously the sample size and the dimensionality tend to infinity, to show that the smoothed periodogram is not consistent and to derive the asymptotic properties of our regularized estimator. This estimator is shown to have asymptotically minimal risk among all linear combinations of the identity and the averaged periodogram matrix. Compared to existing work on shrinkage in the time domain, our results show that in the frequency domain it is necessary to take the size of the smoothing span as "effective sample size" into account. Furthermore, we perform extensive Monte Carlo studies showing the overwhelming gain in terms of lower L2 risk of our shrinkage estimator, even in situations of oversmoothing the periodogram by using a large smoothing span. 
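The numerical-stability claim can be illustrated with a toy two-dimensional analogue: shrinking an almost singular spectral matrix toward a scaled identity (the "linear combination of the identity and the averaged periodogram" mentioned above) bounds the smallest eigenvalue away from zero and lowers the condition number. The matrix entries and shrinkage intensity below are made up for illustration:

```python
def sym2_eigs(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2.0
    disc = ((a - c) / 2.0) ** 2 + b * b
    return mean - disc ** 0.5, mean + disc ** 0.5

# Ill-conditioned "smoothed periodogram" surrogate at one frequency: nearly rank one.
S = (1.0, 0.999, 1.0)            # (S11, S12, S22)
lo, hi = sym2_eigs(*S)

mu = (S[0] + S[2]) / 2.0         # trace / dimension: scale of the identity target
rho = 0.2                        # shrinkage intensity (optimized in the actual method)
S_shr = ((1 - rho) * S[0] + rho * mu,
         (1 - rho) * S[1],
         (1 - rho) * S[2] + rho * mu)
lo2, hi2 = sym2_eigs(*S_shr)

print(hi / lo > hi2 / lo2)       # True: shrinkage lowers the condition number
```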
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.869825541973114, "perplexity": 328.2499579091669}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146127.10/warc/CC-MAIN-20200225172036-20200225202036-00245.warc.gz"} |
https://journalarjom.com/index.php/ARJOM/issue/view/4561 | ##### Convergence of the Ishikawa Type Iteration Process with Errors of I-Asymptotically Quasi-nonexpansive Mappings in Cone Metric Spaces
Ashfaque Ur Rahman, K. Qureshi, Geeta Modi, Manoj Ughade
Asian Research Journal of Mathematics, Page 1-9
DOI: 10.9734/arjom/2019/v14i230121
The goal of this article is to consider an Ishikawa type iteration process with errors to approximate the fixed point of an I-asymptotically quasi-nonexpansive mapping in convex cone metric spaces. Our results extend and generalize many known results from complete generalized convex metric spaces to cone metric spaces.
##### On Simplicial Polytopic Numbers
Okoh Ufuoma, Agun Ikhile
Asian Research Journal of Mathematics, Page 1-20
DOI: 10.9734/arjom/2019/v14i230122
The ultimate goal of this work is to provide in a concise manner old and new results relating to the simplicial polytopic numbers.
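For reference, the d-simplex ("simplicial polytopic") numbers are the binomial coefficients C(n + d - 1, d): triangular numbers for d = 2, tetrahedral numbers for d = 3, and so on. A quick sketch:

```python
from math import comb

def simplicial_number(d, n):
    """n-th d-simplex number: counts points in a d-dimensional simplex of side n."""
    return comb(n + d - 1, d)

print([simplicial_number(2, n) for n in range(1, 7)])  # [1, 3, 6, 10, 15, 21]  (triangular)
print([simplicial_number(3, n) for n in range(1, 6)])  # [1, 4, 10, 20, 35]    (tetrahedral)
```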
##### Proof of Collatz Conjecture
R. Deloin
Asian Research Journal of Mathematics, Page 1-18
DOI: 10.9734/arjom/2019/v14i230123
Collatz conjecture (stated in 1937 by Collatz and also named Thwaites conjecture, or Syracuse, 3n+1 or oneness problem) can be described as follows:
Take any positive whole number N. If N is even, divide it by 2. If it is odd, multiply it by 3 and add 1. Repeat this process to the result over and over again. Collatz conjecture is the supposition that for any positive integer N, the sequence will invariably reach the value 1. The main contribution of this paper is to present a new approach to Collatz conjecture. The key idea of this new approach is to clearly differentiate the role of the division by two and the role of what we will name here the jump: a = 3n + 1. With this approach, the proof of the conjecture is given as well as informations on generalizations for jumps of the form qn + r and for jumps being polynomials of degree m >1. The proof of Collatz algorithm necessitates only 5 steps:
1- to differentiate the main function and the jumps;
2- to differentiate branches as well as their rst and last terms a and n;
3- to identify that left and irregular right shifts in branches can be replaced by regular shifts in 2^m-type columns;
4- to identify the key equation a_i = 3n_{i-1} + 1 = 2^m as well as its solutions;
5- to reduce the problem to compare the number of jumps to the number of divisions in a trajectory.
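For readers who want to experiment, the map in question (halve if even, else 3n + 1) is a couple of lines of code; this is just the standard Collatz iteration, not the paper's proof machinery:

```python
def collatz_trajectory(n):
    """Return the Collatz sequence starting at n, iterated until it reaches 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz_trajectory(6))   # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```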
##### Multiple Exact Travelling Solitary Wave Solutions of Nonlinear Evolution Equations
M. M. El-Horbaty, F. M. Ahmed
Asian Research Journal of Mathematics, Page 1-13
DOI: 10.9734/arjom/2019/v14i230124
An extended Tanh-function method with Riccati equation is presented for constructing multiple exact travelling wave solutions of some nonlinear evolution equations which are particular cases of a generalized equation. The results of solitary waves are general compact forms with non-zero constants of integration. Taking the full advantage of the Riccati equation improves the applicability and reliability of the Tanh method with its extended form.
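The tanh-function method rests on the fact that Y = tanh(ξ) solves the Riccati-type equation Y' = 1 - Y², so powers of tanh close under differentiation. A quick finite-difference sanity check of that identity:

```python
from math import tanh

# Verify (tanh x)' = 1 - tanh(x)^2 by central differences at a few sample points.
h = 1e-6
max_err = 0.0
for x in (-2.0, -0.5, 0.0, 1.3):
    numeric = (tanh(x + h) - tanh(x - h)) / (2 * h)
    max_err = max(max_err, abs(numeric - (1.0 - tanh(x) ** 2)))
print(max_err < 1e-8)   # True: tanh satisfies Y' = 1 - Y^2
```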
##### The Thermistor Problem with Hyperbolic Electrical Conductivity
M. O. Durojaye, J. T. Agee
Asian Research Journal of Mathematics, Page 1-12
DOI: 10.9734/arjom/2019/v14i230125
This paper presents the one-dimensional, positive temperature coefficient (PTC) thermistor equation, using the hyperbolic-tangent function as an approximation to the electrical conductivity of the device. The hyperbolic-tangent function describes the qualitative behaviour of the evolving solution of the thermistor in the entire domain. The steady state solution using the new approximation yielded a distribution of device temperature over the spatial dimension and all the phases of the temperature distribution of the device without having to look for a moving boundary. The analysis of the steady state solution and the numerical solution of the unsteady state are presented in the paper.
http://mathhelpforum.com/statistics/96536-chance-rain-probability.html | # Thread: chance of rain , probability
1. ## chance of rain , probability
can anyone help with this?
chance of rain
the weather report claims that the chance of rain for certain days
next week are
Monday: 50% chance of rain
Tuesday: 20% chance of rain
Wednesday: 30% chance of rain
Thursday: 30% chance of rain
Friday: 10% chance of rain
what is the chance of rain
for the entire week as a whole ?
tx
2. Hi
The chance of not raining is
$0.5 \cdot 0.8 \cdot 0.7 \cdot 0.7 \cdot 0.9 =0.1764$
The complement to the event "it is not raining" is "it is raining".
A="It is not raining".
So $P(A^{C})=1-0.1764= 0.8236$
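The arithmetic above is easy to verify in a few lines (assuming, as the answer does, that the days are independent):

```python
p_rain = [0.5, 0.2, 0.3, 0.3, 0.1]     # Mon..Fri from the forecast above

p_dry_all_week = 1.0
for p in p_rain:
    p_dry_all_week *= (1.0 - p)        # independence assumption

print(round(p_dry_all_week, 4))        # 0.1764
print(round(1.0 - p_dry_all_week, 4))  # 0.8236: rain at least once during the week
```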
3. Originally Posted by jickjoker
can anyone help with this?
chance of rain
the weather report claims that the chance of rain for certain days
next week are
Monday: 50% chance of rain
Tuesday: 20% chance of rain
Wednesday: 30% chance of rain
Thursday: 30% chance of rain
Friday: 10% chance of rain
what is the chance of rain
for the entire week as a whole ?
tx
Do you mean raining every day during the week?
Or at least once during the whole week?
If it's the former it's just the product of the probabilities for each given day.
If it's the latter, think of the probability of something happening "at least once", as 1 - Probability of it not happening at all... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8279806971549988, "perplexity": 2293.7765720663015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052338/warc/CC-MAIN-20131204131732-00077-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://biz.libretexts.org/Courses/Gettysburg_College/MGT_235%3A_Introductory_Business_Statistics/09%3A_F_Distribution_and_One-Way_ANOVA/9.05%3A_Facts_About_the_F_Distribution | Here are some facts about the $$\bf F$$ distribution.
3. The $$F$$ statistic is greater than or equal to zero.
4. As the degrees of freedom for the numerator and for the denominator get larger, the curve approximates the normal as can be seen in the two figures below. Figure (b) with more degrees of freedom is more closely approaching the normal distribution, but remember that the $$F$$ cannot ever be less than zero so the distribution does not have a tail that goes to infinity on the left as the normal distribution does.
5. Other uses for the $$F$$ distribution include comparing two variances and two-way Analysis of Variance. Two-Way Analysis is beyond the scope of this chapter. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.960127592086792, "perplexity": 143.2436962751585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150264.90/warc/CC-MAIN-20210724094631-20210724124631-00464.warc.gz"} |
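These facts are easy to probe empirically by building F variates as ratios of scaled chi-square variates (a sketch; the degrees of freedom and sample size are arbitrary choices):

```python
import random

random.seed(0)

def f_sample(d1, d2):
    """One F(d1, d2) draw: ratio of chi-square variates, each scaled by its dof."""
    chi1 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(d1))
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(d2))
    return (chi1 / d1) / (chi2 / d2)

draws = [f_sample(5, 10) for _ in range(20000)]
print(min(draws) >= 0.0)        # True: an F statistic is never negative (fact 3)
mean = sum(draws) / len(draws)  # close to d2/(d2 - 2) = 1.25 for F(5, 10)
```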
https://www.nature.com/articles/s41467-017-00442-6?error=cookies_not_supported&code=7d9666ca-f2ed-48a3-8bd7-6c2758704235 | ## Introduction
Single-electron transistors (SETs) are based on a nanostructure such as a nanoparticle, molecule, or a quantum dot, which is resistively coupled to the source and drain leads and capacitively coupled to a gate electrode. Electrons are confined to a small volume and their number in the nanostructure is quantized. The current through the nanostructure can be tuned via the gate voltage which controls the number of electrons in the SET. Each time a single electron is added, the current is blocked due to the Coulomb blockade (CB) effect [1-5]. Hence the device exhibits conductance oscillations as a function of gate voltage, V_g, with a well-defined CB periodicity (P_CB = e/C_G) equal to the ratio of the electron charge e to the gate capacitance C_G. The periodic conductance oscillations make the SET a promising ground for applications such as an electronic switch [6], memory device [7], extremely sensitive charge and displacement sensor [8, 9], logic gates [10, 11], voltage amplifier [12], and so on.
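For a sense of scale, the gate-voltage period P_CB = e/C_G for an illustrative (assumed) gate capacitance of 1 aF:

```python
e = 1.602176634e-19        # elementary charge (C)
C_G = 1e-18                # assumed gate capacitance of 1 aF (illustrative value)
P_CB = e / C_G             # one conductance period per added electron
print(round(P_CB, 3))      # 0.16 (V)
```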
Applications of multi-input gate SETs for multiple-valued logic circuits have also been proposed [13]. This requires the fabrication of multi-SETs for each logic circuit. For instance, two multi-gate SETs can produce three current levels. A significant increase in SET functionality may be achieved if a single SET would be able to produce multiple well-distinguished current values between the high state (CB peak) and low state (CB valley). This would enable the use of a single SET for switching between more than two states and therefore would facilitate going beyond binary logic. In this case, a set of multiple inputs, all connected to the gate electrode of a single SET, would generate an output which can have a few different values. Such a device will not only allow easier access to a multiple-valued logic function but would also be a large step toward further miniaturization of integrated circuit elements.
In this work, we show that under certain conditions an SET can manifest multiple-periodicity where the relative intensity of each period can be well-controlled. A superposition of multiple periods having different intensities naturally yields different output values. The ability to manipulate the relative intensity of each period enables one to fine-control the output current values of the SET device.
Multiple-periodicity is not a typical characteristic of conventional SET devices in which the coupling between the nanostructure and any lead is weak and the conductance through each barrier is much smaller than the quantum conductance e^2/h. In this weak coupling regime, the charge confined in such a closed dot is quantized and one observes a well pronounced set of conductance peaks. Each peak is associated with the change of the total charge on the dot by one electron. The other limit is the strong coupling regime for which at least one of the leads is open, with the coupling greater than e^2/h. In this open dot the charge is not quantized, CB effects are suppressed, and one observes only a weak modulation of the conductance through the dot, with the same period [5, 14]. The crossover between closed and open dots is seldom investigated. In recent theoretical works [15, 16] the crossover regime was approached from the open dot limit (see Supplementary Discussion). It was shown that for chaotic dots (for which the electron explores the entire space of the nanoparticle) [17], with a large number of weakly open channels, additional conductance vs. gate voltage oscillations emerge with periods that are equal to the base period P_CB divided by an integer number n. These are clearly detected using Fourier transform analysis as multiple harmonics f_n = n/P_CB. Such oscillations correspond to a change of the charge on the dot by e/n. Of course, this does not violate charge quantization, but rather means that in the strong coupling regime the electronic wave function is split between the inside and the outside of the dot. Since the CB effect is sensitive only to the charge confined inside the dot, the conductance oscillates with a fractional charge periodicity. Remarkably, chaotic dynamics gives rise only to integer fractions of P_CB, as indeed clearly seen in our experiment.
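The Fourier-analysis statement can be illustrated with a synthetic trace: a base CB oscillation of period P plus a weaker period-P/2 (n = 2) component produces spectral peaks at f_1 = 1/P and f_2 = 2/P. The period and amplitudes below are made up for illustration:

```python
import math

P = 0.64                  # base gate-voltage period (arbitrary units)
N, dV = 512, 0.01         # 512 samples spanning exactly 8 periods
g = [1.0
     + 0.5 * math.cos(2 * math.pi * (i * dV) / P)   # fundamental, f = 1/P
     + 0.2 * math.cos(4 * math.pi * (i * dV) / P)   # n = 2 harmonic, f = 2/P
     for i in range(N)]

def dft_mag(x, k):
    """Magnitude of the k-th discrete Fourier coefficient of x."""
    n = len(x)
    re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(x))
    return math.hypot(re, im)

peaks = sorted(range(1, N // 2), key=lambda k: dft_mag(g, k), reverse=True)[:2]
print(sorted(peaks))      # [8, 16]: bins at f = 1/P and f = 2/P
```

In the experiment the relative weight of the two spectral peaks plays the role of the "relative intensity of each period" discussed above.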
Over the past few decades several experimental methods have been used to fabricate SETs, most of which cannot fulfill the predicted conditions for additional periodicities in the conductance. Conventional SETs are based on low-carrier-density two-dimensional electron gases in which opening the dot is achieved by applying back-gate voltages2,3,4,5, 18, hence making it more likely to yield very few channels that are well connected to the leads rather than many weakly connected channels. Other fabrication methods include electromigration19, 20, mechanically controlled break junctions21, angle evaporation22, 23, chemically assembled SETs11, 24, and synthesis of a dimer structure25. Within the framework of these works many attempts have been made to reduce the tunneling barriers in SET devices in order to improve the signal-to-noise ratio and functionality. In some of these works the strong-coupling regime has been reached and studied22. Nevertheless, the ability to systematically control the coupling within the regime where the quantum conductance is e²/h or larger remains challenging.
Here, we present results on a unique set-up of an SET based on a metallic nanoparticle with controllable coupling to a set of leads. We show that for a relatively open dot the conductance vs. gate voltage features a multi-periodic structure that can be controlled either by the coupling strength or by the bias voltage.
## Results
### Experimental set-up of the SET system
The SETs are prepared utilizing a unique technique that combines nanolithography, atomic force microscope (AFM) nanomanipulation and electrodeposition as illustrated in Fig. 1 and described in the Methods section. Electrodeposition techniques have been used in the past to control the gap between two electrodes26,27,28. We take this method one step further and construct controllable SETs that are based on chaotic metallic nanoparticles29, 30. These SETs fulfill the theoretical conditions required for observing multi-periodicity. First, the nanoparticle contains a set of crystallographic facets as is clear from the transmission electron microscope (TEM) image of Fig. 1d, thus rendering the dot chaotic. More importantly, these devices are naturally characterized by multi-channel coupling since coupling is achieved via a set of atoms which couple in parallel to the Au nanoparticle (see Methods section and Fig. 1b). Finally, the technique enables one to fine-tune the system to the right dot-lead coupling strength which is relevant for CB harmonics to be measurable.
### Appearance of higher harmonics
A typical conductance vs. gate voltage curve G(V_g) of such an SET system, taken at T = 4.2 K, is shown in Fig. 2a. The curve exhibits two clear periods: one is the base period P_CB and the other is half this period. We measured CB effects on 21 similar samples. Multiple-periodicity was observed in nine such SET devices. In most cases the SETs exhibit conductance oscillations with the base period P_CB and an additional oscillation with period P_CB/2. However, some SETs exhibit oscillations that give rise to the third (Fig. 2b), fifth (Fig. 2c), and sixth (Fig. 2d) harmonics of the CB frequency f_CB = 1/P_CB.
It turns out that the intensity of the faster periods is very sensitive to the dot-lead coupling strength η = (1/R_S + 1/R_D)·h/e², controlled by the resistances between the dot and the source (R_S) and drain (R_D) electrodes. We only observe multiple-periodicity for SETs that are tuned to the appropriate coupling regime. Unfortunately, it is not possible to measure the coupling strength directly from the conductance. Moreover, since our dot is asymmetrically coupled, these two quantities are quite different: while the measured conductance, G, is governed by the weakly connected lead, the coupling strength, η, is determined by the well-connected one. Nevertheless, it is possible to extract η by fitting IV curves to the results of ref. 31, as discussed in the Supplementary Discussion (see also Supplementary Fig. 2 and ref. 30). Doing so we find that there is a small coupling window where multiple-periodicity can be observed, lying in the interval 2 < η < 5. This is demonstrated in Fig. 3, which depicts conductance vs. gate voltage curves G(V_g) for two SETs with two different coupling strengths. In Fig. 3a the dot is strongly coupled (η = 7.7 ± 0.1) and the conductance oscillates with the base period P_CB only, as demonstrated by the Fourier transform (Fig. 3b). In Fig. 3c, on the other hand, the dot is more weakly coupled (the coupling strength is smaller by about a factor of two, η = 3.3 ± 0.1) and the conductance exhibits additional periods. In this case the structure is much richer than in Fig. 2. The Fourier transform depicted in Fig. 3d reveals that the conductance curve is composed of seven well-defined periodicities, identified as harmonics of the base frequency f_CB.
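The coupling window quoted above is straightforward to evaluate once R_S and R_D have been extracted from IV fits. A minimal sketch, where the resistance values are made-up examples rather than fitted data from this work:

```python
H_OVER_E2 = 25812.807  # von Klitzing constant h/e^2 in ohms

def coupling_strength(R_S, R_D):
    """Dimensionless dot-lead coupling eta = (1/R_S + 1/R_D) * h/e^2."""
    return (1.0 / R_S + 1.0 / R_D) * H_OVER_E2

def shows_harmonics(eta):
    """Empirical window (2 < eta < 5) in which higher CB harmonics appear."""
    return 2.0 < eta < 5.0

# Hypothetical, asymmetric dot: one well-connected lead, one weak one
eta = coupling_strength(R_S=9.0e3, R_D=250.0e3)
print(round(eta, 2), shows_harmonics(eta))
```

Note how the asymmetry enters: the smaller resistance (the well-connected lead) dominates η, while the larger one dominates the measured two-terminal conductance.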
Interestingly, the relative amplitude of the different harmonics is not monotonic in the harmonic order. In this case the second harmonic has a larger amplitude than the first, and the fifth harmonic is the most prominent. Similar Fourier analysis of Figs. 2c and 4a reveals that the intensity of the higher harmonic is larger than that of f_CB. Although the theory predicts (Supplementary Discussion) that the occurrence of additional harmonics in the conductance is a generic property of open dots, and accounts for the IV characteristics and the magnetic-field dependence (Supplementary Fig. 3), it does not explain the enhanced amplitudes of high harmonics observed in some of our samples. Because this feature is sample dependent, it is reasonable to attribute it to the stochastic nature of our dots. It is quite plausible that some of the dots are not fully ergodic and cannot be accounted for by random matrix theory. In this case one expects large sample-to-sample fluctuations of the harmonic strengths that are sensitive to the details of the dot-lead coupling and cannot be computed analytically.
### Bias dependence
A major advantage of our devices is that the strength of the different periods can be tuned not only by the coupling strength but also by the bias voltage. It turns out that, for any coupling strength, as the bias voltage is increased the higher harmonics are suppressed. The mechanism behind this suppression is non-equilibrium current fluctuations that destroy quantum coherence; this suppresses higher harmonics more strongly than lower ones, as demonstrated in Fig. 4 for two different dots measured at different values of bias voltage. While for small bias voltage an extra harmonic is clearly observed in the conductance curves, at higher V_SD this harmonic is unmeasurable and only the base f_CB is observed. Therefore, by scanning the bias voltage one can tune the relative strength of the periodicities and control the output current values of an SET. The fact that the additional harmonics can be switched on and off by the bias voltage makes these SETs very appealing for device applications.
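The qualitative trend, higher harmonics dying out faster with bias, can be captured in a purely illustrative toy model. The exponential damping form and the voltage scale V0 below are our assumptions for illustration only; they are not a result or a fit from this work:

```python
import math

def harmonic_amplitude(A0, n, V_sd, V0=1.0e-3):
    """Toy dephasing model: the n-th CB harmonic is damped faster than the
    base one as the bias grows.  The exp(-n^2 * V_sd/V0) form and the scale
    V0 are illustrative assumptions, not results of this work."""
    return A0 * math.exp(-n ** 2 * V_sd / V0)

# At low bias both harmonics survive; at higher bias only the base f_CB does
for V in (0.1e-3, 1.0e-3):
    a1 = harmonic_amplitude(1.0, 1, V)   # base harmonic f_CB
    a2 = harmonic_amplitude(0.8, 2, V)   # second harmonic 2*f_CB
    print(f"V_sd = {V * 1e3:.1f} mV   f_CB: {a1:.3f}   2f_CB: {a2:.3f}")
```

Any model in which the damping grows with harmonic order reproduces the observed switching behaviour: the high harmonic drops below the noise floor while the base oscillation is still clearly visible.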
In summary, we have demonstrated an SET device, which incorporates multi-periodic conductance oscillations. The relative strength of the periods can be tuned by either the dot-lead coupling or by the bias voltage making this type of device very favorable for integration in electronic circuits. The conditions for such multiple-periodicity are clearly listed and fulfilled by our unique fabrication process which enables one to tune the coupling strength to a regime in which the charging of the dot corresponds to a fraction of an electron charge.
## Methods
### Fabrication of Au electrodes
The first step in device fabrication is the formation of source, drain, and gate Au electrodes on a Si–SiO₂ substrate. This is achieved by a combination of photolithography and e-beam lithography (RAITH, Elphy Quantum) for fabrication of source and drain electrodes separated by a small gap of 10–30 nm, with a side gate electrode at a distance of 200 nm.
### Nanomanipulation of Au nanoparticle
The nanoparticles we use are gold colloids with a diameter of 30 nm. We use colloidal particles in solution which are stabilized by negatively charged ions that prevent agglomeration of the particles in solution. By using an organic layer terminated with amino groups, it is possible to adsorb the negatively charged shell of the particles onto the substrate by electrostatic interaction. We chose Poly-L-Lysine (P.L.L) as an adhesive layer. After the deposition of the adhesive layer the sample is ready for the deposition of the gold colloids. The Au colloid deposition takes place by adding a 10–20 μl drop of the solution onto the modified substrate for an hour. This results in 20–30 gold colloids within a 1 μm × 1 μm area.
For placing the nanoparticle in a desired location we utilize AFM nanomanipulation (Nanoman system in DI Veeco 3100 Scanning Probe Microscope). The nanoparticle is moved between the electrodes using the AFM tip, by pushing the particle to the right position.
### Electrodeposition
After positioning the colloid in the gap, the distance to each lead is usually smaller than 10 nm; however, the dot is not yet electrically connected to the leads, and at this stage we are not able to monitor current through the device. To minimize the gap between the nanoparticle and the leads, we use an electrodeposition process by which we grow atoms on the leads. During the deposition process we measure the conductance between the source and drain and stop the process when a current can be measured. For the electrodeposition process, we place the sample on a holder made of inert materials (glass and Teflon). This is very important for preventing any metal dissolution that may result from reaction with the solution. The electrodes are connected to the circuit through Cu wires pressed by indium, which are placed out of the solution.
Our electrodeposition set-up consists of a solution, a counter electrode, a reference electrode, and working electrodes (Fig. 1a). The electrolyte is an aqueous solution consisting of 0.01 M potassium dicyanoaurate (KAu(CN)₂) and a buffer (pH 10) composed of 1 M potassium bicarbonate (KHCO₃) and 0.2 M potassium hydroxide (KOH). A 25 mm diameter Au wire (99.9985% purity) is used as the counter electrode and the reference electrode. The two separated evaporated gold electrodes are the working electrodes. When an electrochemical DC voltage is applied between the working electrodes and the counter electrode, the dicyanoaurate ion accepts an electron from the working electrodes and liberates the cyanide ligands. Hence, neutral gold atoms collect on the surface of the two gold electrodes and close the gap between the leads and the dot.
The deposition occurs only on the electrodes and not on the colloid as it is not connected to the circuit. Deposition occurs on the colloid only after it becomes electrically connected to one of the electrodes and becomes part of it.
The value of the DC voltage between the counter electrode and the source and drain electrodes determines the deposition rate: the slower the deposition, the higher the quality of the film. We found that a value of 1 V yields a slow enough deposition rate of 0.3 nm min⁻¹. Under these conditions the deposition is very uniform and smooth, so the formation of large gold clusters during the electrodeposition process becomes improbable. In addition, the slow rate enables one to stop the process at different degrees of coupling.
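At this rate, closing a lithographic gap takes tens of minutes, which is what makes the real-time conductance monitoring practical. A back-of-the-envelope check, where the 4 nm remaining gap per side is a made-up example rather than a value from the text:

```python
RATE_NM_PER_MIN = 0.3   # electrodeposition rate at 1 V, from the text

def time_to_close(gap_nm, rate=RATE_NM_PER_MIN):
    """Minutes needed to grow one electrode across a given remaining gap."""
    return gap_nm / rate

# Hypothetical example: nanoparticle sits 4 nm from each lead
print(time_to_close(4.0))   # about 13 minutes per side
```

This timescale is consistent with the conductance onset after roughly 10 min of deposition reported below (Fig. 5).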
The deposition is isotropic: the gold is deposited uniformly on the surface of the electrodes with no preference for a particular direction. Bridging the dot-lead distance therefore widens the electrode tips. It is for this reason that the coupling between the dot and the leads, and hence the dot-lead conductance, is believed to be achieved by a large number of channels.
To measure the conductance between the working electrodes during the electrodeposition process, we use an AC conductivity measurement, which ensures equal deposition on both electrodes. An AC voltage of 2 mV is applied between the two working gold electrodes. We are able to monitor the separation between the electrodes once the distance becomes very small. The oscillation frequency should be very low in order to reduce the ionic conductivity of the solution, which appears as a background in the measurement. Applying a frequency of 2 Hz reduces the measured background of the solution to a resistance of 5 MΩ. This value is always present in the background, and it is therefore extremely difficult to stop the process at the stage of a very weakly connected barrier. Figure 5 shows the conductance as a function of time of a dot-lead system during the deposition process. It is seen that after 10 min of deposition the conductance starts to increase, indicating that the dot-lead barriers have conductances comparable to the solution background conductance.
After electric contact is achieved the sample is taken out of the aqueous solution and transferred to a measurement probe where it is cooled down to 4.2 K for electrical measurements. High resolution scanning electron microscope (SEM) images of different samples taken after different fabrication stages are shown in Fig. 6.
The success of the electrodeposition process crucially depends on the symmetry of the distances between the dot and the leads. If the metallic colloid is not positioned at the center of the gap, there is very little chance of stopping the electrodeposition process at a measured conductance of G ≤ e²/h while still being able to observe CB effects. Twenty-five percent of our electrodeposited samples showed CB effects, while the rest showed ohmic conductance curves, apparently due to a strongly asymmetric geometry. An example of such an asymmetric device is shown in Fig. 6c. In this case, when a measurable current appeared through the device, the dot was already fully connected to the right electrode, and therefore no CB effects were resolved.
Finally, it is important to note that the same fabrication technique, including the adsorption of the adhesive layer P.L.L, was applied to several devices without placing a nanoparticle in the gap. None of these devices showed any signature of CB effects in the conductance curves (Supplementary Discussion and Supplementary Fig. 1). This assures that parasitic junctions or dots are not formed during the fabrication process.
### Data availability
The data that support the findings of this study are available from the corresponding author on reasonable request. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8764277696609497, "perplexity": 985.2037742714413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334591.19/warc/CC-MAIN-20220925162915-20220925192915-00748.warc.gz"} |
1. Strong effects in weak nonleptonic decays
International Nuclear Information System (INIS)
Wise, M.B.
1980-04-01
In this report the weak nonleptonic decays of kaons and hyperons are examined with the hope of gaining insight into a recently proposed mechanism for the ΔI = 1/2 rule. The effective Hamiltonian for ΔS = 1 weak nonleptonic decays and that for K⁰–anti-K⁰ mixing are calculated in the six-quark model using the leading logarithmic approximation. These are used to examine the CP violation parameters of the kaon system. It is found that if penguin-type diagrams make important contributions to K → ππ decay amplitudes then upcoming experiments may be able to distinguish the six-quark model for CP violation from the superweak model. The weak radiative decays of hyperons are discussed with an emphasis on what they can teach us about hyperon nonleptonic decays and the ΔI = 1/2 rule
2. Chiral realization of the non-leptonic weak interactions
International Nuclear Information System (INIS)
Ecker, G.
1990-01-01
After a short introduction to chiral perturbation theory an attempt to relate the strong and the non-leptonic weak low-energy constants is reviewed. The weak deformation model is stimulated both by the geometrical structure of chiral perturbation theory and by phenomenological considerations. Applications to the radiative decays K → πγγ and K_L → γe⁺e⁻ are discussed. (Author) 38 refs., 4 figs
3. Effective field theory and weak non-leptonic interactions
International Nuclear Information System (INIS)
Miller, R.D.C.
1982-06-01
The techniques of Ovrut and Schnitzer (1981) are used to calculate the finite decoupling renormalisation constants resulting from heavy-fermion decoupling in a non-abelian gauge theory exhibiting broken flavour symmetry. The results of this calculation are applied to realistic, massive QCD. The decoupling information may be absorbed into renormalisation group (R.G.) invariants. Working in the Landau gauge, R.G. invariants are derived for the running coupling constants and running quark masses of effective QCD in the modified minimal subtraction scheme (for effective QCD with 3 to 8 flavours). This work is then applied to the major part of the thesis: a complete derivation of the effective weak non-leptonic sector of the standard model (SU(3)_c × SU(2) × U(1)), that is, the construction of all effective weak non-leptonic Hamiltonians resulting from the standard model when all quark generations above the third, along with the W and Z, are explicitly decoupled. The form of decoupling in the work of Gilman and Wise (1979) has been adopted. The weak non-leptonic sector naturally decomposes into flavour-changing and flavour-conserving sectors relative to anomalous dimension calculations. The flavour-changing sector further decomposes into penguin-free and penguin-generating sectors. Individual analyses of these three sectors are given. All sectors are analysed uniformly, based upon a standard model with n generations
4. Non-leptonic weak decay of hadrons and chiral symmetry
International Nuclear Information System (INIS)
Suzuki, Katsuhiko
2000-01-01
We review the non-leptonic weak decay of hyperons and the ΔI = 1/2 rule with a special emphasis on the role of chiral symmetry. The soft-pion theorem provides a powerful framework for understanding the origin of the ΔI = 1/2 rule qualitatively. However, a quantitative description is still incomplete in any model of the hadrons. Naive chiral perturbation theory cannot explain the parity-conserving and parity-violating amplitudes simultaneously, and the convergence of the chiral expansion seems to be poor. We demonstrate how the non-leptonic weak decay amplitudes are sensitive to the quark-pair correlation in the baryons, and show the importance of the strong quark correlation in the spin-0 channel for reproducing the experimental data. We finally remark on several related topics. (author)
5. The chiral anomaly in non-leptonic weak interactions
International Nuclear Information System (INIS)
Bijnens, J.; Pich, A.; Ecker, G.
1992-01-01
The interplay between the chiral anomaly and the non-leptonic weak hamiltonian is studied. The structure of the corresponding effective lagrangian of odd intrinsic parity is established. It is shown that the factorizable contributions (leading in 1/N_C) to that lagrangian can be calculated without free parameters. As a first application, the decay K⁺ → π⁺π⁰γ is investigated. (orig.)
6. Effective Hamiltonian for ΔS=1 weak nonleptonic decays in the six-quark model
International Nuclear Information System (INIS)
Gilman, F.J.; Wise, M.B.
1979-01-01
Strong-interaction corrections to the nonleptonic weak-interaction Hamiltonian are calculated in the leading-logarithmic approximation using quantum chromodynamics. Starting with a six-quark theory, the W boson, t quark, b quark, and c quark are successively considered as ''heavy'' and the effective Hamiltonian is calculated. The resulting effective Hamiltonian for strangeness-changing nonleptonic decays involves u, d, and s quarks and has possible CP-violating pieces both in the usual (V-A) x (V-A) terms and in induced, ''penguin''-type terms. Numerically, the CP-violating compared to CP-conserving parts of the latter terms are close to results calculated on the basis of the lowest-order ''penguin'' diagram
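The leading-logarithmic strong-interaction corrections this record refers to reduce, at one loop, to the classic scaling of the Wilson coefficients C_± of the (V−A)×(V−A) operators between M_W and a low scale. The sketch below shows that textbook formula with a fixed effective flavor number n_f = 4 and an assumed reference value of α_s, ignoring the successive heavy-quark thresholds that the record's multi-step decoupling treats properly:

```python
import math

def alpha_s(mu, alpha_ref=0.12, mu_ref=80.4, nf=4):
    """One-loop running strong coupling; the reference value alpha_s(M_W) = 0.12
    is an illustrative assumption, not a fitted number."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return alpha_ref / (1.0 + alpha_ref * beta0 / (2.0 * math.pi)
                        * math.log(mu / mu_ref))

def wilson_coefficients(mu, nf=4):
    """Leading-log C_+ and C_- at scale mu, evolved down from M_W."""
    L = alpha_s(mu, nf=nf) / alpha_s(80.4, nf=nf)
    c_plus = L ** (-6.0 / (33.0 - 2.0 * nf))
    c_minus = L ** (12.0 / (33.0 - 2.0 * nf))
    return c_plus, c_minus

cp, cm = wilson_coefficients(1.5)   # evolve to roughly the charm scale
# Octet enhancement: C_- grows, C_+ shrinks; C_+^2 * C_- = 1 at this order
print(cp, cm, cp ** 2 * cm)
```

The enhancement of C_− over C_+ is the leading-log QCD effect that partially explains the ΔI = 1/2 rule discussed throughout these records; the penguin contributions mentioned in the abstract arise only at the next level of the operator basis.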
7. Non-Leptonic Weak Decays of B Mesons
CERN Document Server
Neubert, Matthias; Neubert, Matthias; Stech, Berthold
1997-01-01
We present a detailed study of non-leptonic two-body decays of B mesons based on a generalized factorization hypothesis. We discuss the structure of non-factorizable corrections and present arguments in favour of a simple phenomenological description of their effects. To evaluate the relevant transition form factors in the factorized decay amplitudes, we use information extracted from semileptonic decays and incorporate constraints imposed by heavy-quark symmetry. We discuss tests of the factorization hypothesis and show how unknown decay constants may be determined from non-leptonic decays. In particular, we find f_{D_s} = (234 ± 25) MeV and f_{D_s*} = (271 ± 33) MeV.
8. Chiral perturbation theory approach to hadronic weak amplitudes
International Nuclear Information System (INIS)
Rafael, E. de
1989-01-01
We are concerned with applications to the non-leptonic weak interactions in the sector of light quark flavors: u, d and s. Both strangeness-changing ΔS = 1 and ΔS = 2 non-leptonic transitions can be described as weak perturbations to the strong effective chiral Lagrangian, the chiral structure of the weak effective Lagrangian being dictated by the transformation properties of the weak non-leptonic Hamiltonian of the Standard Model under global SU(3)_L × SU(3)_R rotations of the quark fields. These lectures are organized as follows. Section 2 gives a review of the basic properties of chiral symmetry. Section 3 explains the effective chiral realization of the non-leptonic weak Hamiltonian of the Standard Model to lowest order in derivatives and masses. Section 4 deals with non-leptonic weak transitions in the presence of electromagnetism. Some recent applications to radiative kaon decays are reviewed and the effect of the so-called electromagnetic penguin-like diagrams is also discussed. Section 5 explains the basic ideas of the QCD-hadronic duality approach to the evaluation of coupling constants of the non-leptonic chiral weak Lagrangian. (orig./HSI)
9. Non-leptonic weak decay rate of explicitly flavored heavy mesons
International Nuclear Information System (INIS)
Suzuki, M.; California Univ., Berkeley
1981-01-01
It is argued quantitatively that a large difference between the D⁰ and D⁺ lifetimes is mainly due to non-perturbative long-distance effects. The total non-leptonic weak decay rates are related to the soft limit of short-distance processes. Scaling laws for the decay rates of heavy mesons with respect to mass are inferred from the QCD analysis of the soft limit of fragmentation. It is found that the decay rates are not determined by the disconnected spectator diagrams alone even in the limit of the heavy quark mass M going to infinity (∼ M⁵ exp √(c log M)). Some numerical discussion is made for the decay of B mesons and T mesons. (orig.)
10. New approach to nonleptonic weak interactions. I. Derivation of asymptotic selection rules for the two-particle weak ground-state-hadron matrix elements
International Nuclear Information System (INIS)
Tanuma, T.; Oneda, S.; Terasaki, K.
1984-01-01
A new approach to nonleptonic weak interactions is presented. It is argued that the presence and violation of the |ΔI| = 1/2 rule as well as those of the quark-line selection rules can be explained in a unified way, along with other fundamental physical quantities [such as the value of g_A(0) and the smallness of the isoscalar nucleon magnetic moments], in terms of a single dynamical asymptotic ansatz imposed at the level of observable hadrons. The ansatz prescribes a way in which asymptotic flavor SU(N) symmetry is secured levelwise for a certain class of chiral algebras in the standard QCD model. It yields severe asymptotic constraints upon the two-particle hadronic matrix elements of nonleptonic weak Hamiltonians as well as QCD currents and their charges. It produces for weak matrix elements the asymptotic |ΔI| = 1/2 rule and its charm counterpart for the ground-state hadrons, while for strong matrix elements quark-line-like approximate selection rules. However, for the less important weak two-particle vertices involving higher excited states, the |ΔI| = 1/2 rule and its charm counterpart are in general violated, providing us with an explicit source of the violation of these selection rules in physical processes
11. SU(3) properties of semileptonic and nonleptonic decays of mesons
International Nuclear Information System (INIS)
Montvay, I.
1977-11-01
The recent discovery of charmed D and F mesons has led to the accumulation of a great deal of information on the weak decays of these particles. The facts known at present are generally consistent with the Glashow-Iliopoulos-Maiani scheme for the weak currents, which predicted the fourth flavour of quarks, charm. The weak decays of the charmed mesons are governed by SU(3) rules analogous to the Okubo-Zweig-Iizuka rule for strong decays. Such SU(3) rules are given for semileptonic and nonleptonic decays of strange and charmed mesons. These relations depend on the colour structure of the currents in the nonleptonic case. (D.P.)
12. Soccer in Indiana and models for non-leptonic decays of heavy flavours
International Nuclear Information System (INIS)
Bigi, I.I.
1989-01-01
Various descriptions of non-leptonic charm decays are reviewed and their relative strengths and weaknesses are listed. I conclude that it is mainly (though not necessarily solely) a destructive interference in nonleptonic D⁺ decays that shapes the decays of charm mesons. Some more subtle features of these decays are discussed in a preview of future research before I address the presently confused situation in D_s decays. Finally I give a brief theoretical discussion of inclusive and exclusive non-leptonic decays of beauty mesons
13. Soccer in Indiana and models for non-leptonic decays of heavy flavors
International Nuclear Information System (INIS)
Bigi, I.I.
1989-01-01
Various descriptions of non-leptonic charm decays are reviewed and their relative strengths and weaknesses are listed. The author concludes that it is mainly (though not necessarily solely) a destructive interference in nonleptonic D⁺ decays that shapes the decays of charm mesons. Some more subtle features in these decays are discussed in a preview of future research before he addresses the presently confused situation in D_s decays. Finally, he gives a brief theoretical discussion of inclusive and exclusive non-leptonic decays of beauty mesons. 13 refs., 1 tab
14. Why most flavor-dependence predictions for nonleptonic charm decays are wrong: flavor symmetry and final-state interactions in nonleptonic decays of charmed hadrons
International Nuclear Information System (INIS)
Lipkin, H.J.
1980-09-01
Nonleptonic weak decays of strange hadrons are complicated by the interplay of weak and strong interactions. Models based either on symmetry properties or on the selection of certain types of diagrams are both open to criticism. The symmetries used are all broken in strong interactions, and the selection of some diagrams and neglect of others is never seriously justified. Furthermore, the number of related decays of strange hadrons is small, so that experimental data are insufficient for significant tests of phenomenological models with a few free parameters. The discovery of charmed particles with many open channels for nonleptonic decays has provided a new impetus for a theoretical understanding of these processes. The GIM current provides a well-defined weak Hamiltonian, which can justifiably be used to first order. The QCD approach to strong interactions gives flavor-independent couplings and flavor symmetry broken only by quark masses. In a model with n generations of quarks and 2n flavors, a flavor symmetry group SU(2n) can be defined which is broken only by H_weak and the quark masses. Here again, the same two approaches by symmetry and dynamics have been used. But both types of treatment tend to consider only the symmetry properties or dominant diagrams of the weak interaction, including some subtle effects, while overlooking rather obvious effects of strong interactions
15. Two-body non-leptonic decays on the lattice
CERN Document Server
Ciuchini, M; Martinelli, G; Silvestrini, L
1996-01-01
We show that, under reasonable hypotheses, it is possible to study two-body non-leptonic weak decays in numerical simulations of lattice QCD. By assuming that final-state interactions are dominated by the nearby resonances and that the couplings of the resonances to the final particles are smooth functions of the external momenta, it is possible indeed to overcome the difficulties imposed by the Maiani-Testa no-go theorem and to extract the weak decay amplitudes, including their phases. Under the same assumptions, results can be obtained also for time-like form factors and quasi-elastic processes.
16. Nonleptonic decays of 1/2+-baryons and pseudo-connected-line diagrams, 1
International Nuclear Information System (INIS)
Abe, Yoshikazu; Fujii, Kanji
1978-01-01
Under the SU(4)-20″-spurion dominance in nonleptonic weak decays, we investigate algebraic structures of the effective Hamiltonian H_eff which describes the main features of the nonleptonic weak decays of ordinary baryons. When H_eff is written using the 20′-baryon (1/2⁺) wave function of the form B_α^([βγ]), one can select out of H_eff two terms which describe most simply the main features of the P-wave amplitudes for ordinary baryons. Only these terms are s-u dual in the sense of 'pseudo-connected-line diagrams' (pseudo-CLD's), obtained by writing CLD's with 4- and 4*-lines corresponding directly to the lower and the upper indices of B_α^([βγ]). By assuming the Lee-Sugawara relation and the s-u dual property of the P-wave amplitudes, various relations among ordinary and charmed baryon decays are derived. Comments on the parity-violating amplitudes are also given. (auth.)
17. Nonleptonic decay of charmed mesons and chiral lagrangians
International Nuclear Information System (INIS)
Kalinovskij, Yu.L.; Pervushin, V.N.
1978-01-01
Nonleptonic decays of charmed mesons in chiral theory are considered. The lagrangian of strong interaction is taken to be invariant under the SU(4)×SU(4) group. Symmetry breaking is chosen according to the (4, 4*) + (4*, 4) simplest representation of the SU(4)×SU(4) group. The lagrangian of weak interaction is taken in the ''current × current'' form and satisfies exactly the rule. Probabilities of decays for D and F mesons are compared with available experimental data
18. Bs mesons: semileptonic and nonleptonic decays
Directory of Open Access Journals (Sweden)
Albertus C.
2014-01-01
In this contribution we compute some nonleptonic and semileptonic decay widths of B_s mesons, working in the context of constituent quark models [1, 2]. For the case of semileptonic decays we consider reactions leading to kaons or different J^π D_s mesons. The study of nonleptonic decays has been done in the factorisation approximation and includes the final states enclosed in Table 2.
19. Lattice calculation of nonleptonic charm decays
International Nuclear Information System (INIS)
Simone, J.N.
1991-11-01
The decays of charmed mesons into two-body nonleptonic final states are investigated. Weak interaction amplitudes of interest in these decays are extracted from lattice four-point correlation functions using an effective weak Hamiltonian including effects to order G_F in the weak interactions yet containing effects to all orders in the strong interactions. The lattice calculation allows a quantitative examination of non-spectator processes in charm decays, helping to elucidate the role of effects such as color coherence, final state interactions and the importance of the so-called weak annihilation process. For D → Kπ, we find that the non-spectator weak annihilation diagram is not small, and we interpret this as evidence for large final state interactions. Moreover, there are indications of a resonance in the isospin 1/2 channel to which the weak annihilation process contributes exclusively. Findings from the lattice calculation are compared to results from the continuum vacuum saturation approximation and amplitudes are examined within the framework of the 1/N expansion. Factorization and the vacuum saturation approximation are tested for lattice amplitudes by comparing amplitudes extracted from lattice four-point functions with the same amplitudes extracted from products of two-point and three-point lattice correlation functions arising out of factorization and vacuum saturation
20. Electroweak penguin contributions to non-leptonic ΔF=1 decays at NNLO
International Nuclear Information System (INIS)
Buras, Andrzej J.; Gambino, Paolo; Haisch, Ulrich A.
2000-01-01
We calculate the O(α_s) corrections to the Z⁰-penguin and electroweak box diagrams relevant for non-leptonic ΔF=1 decays with F=S,B. This calculation provides the complete O(α_W α_s) and O(α_W α_s sin²θ_W m_t²) corrections (α_W = α/sin²θ_W) to the Wilson coefficients of the electroweak penguin four-quark operators relevant for non-leptonic K- and B-decays. We argue that this is the dominant part of the next-to-next-to-leading order (NNLO) contributions to these coefficients. Our results allow us to reduce considerably the uncertainty due to the definition of the top quark mass present in the existing NLO calculations of non-leptonic decays. The NNLO corrections to the coefficient of the color-singlet (V-A)×(V-A) electroweak penguin operator Q_9 relevant for B-decays are generally moderate, amount to a few percent for the choice m_t(μ_t=m_t) and depend only weakly on the renormalization scheme. Larger NNLO corrections with substantial scheme dependence are found for the coefficients of the remaining electroweak penguin operators Q_7, Q_8 and Q_10. In particular, the strong scheme dependence of the NNLO corrections to C_8 allows us to reduce considerably the scheme dependence of C_8⟨Q_8⟩_2 relevant for the ratio ε'/ε
1. P-odd effects in πN-scattering at low energies and determination of the isotopical structure of the weak nonleptonic interaction
International Nuclear Information System (INIS)
Gershtein, S.S.; Folomeshkin, V.N.; Khlopov, M.Yu.
1974-01-01
P-odd effects in the πN-scattering on a target polarized along and against a pion beam have been considered. The P-odd correlations are intensified by interference of weak and strong interactions, whose amplitude is great in the energy range of the order of 100 to 300 MeV. When measuring cross-section differences of the πN-scattering at meson factories, it is possible to hope that the Lobashov integral method may be used in this range. The P-odd amplitudes have been calculated in the approximation of low-energy pions from the P-odd πNN vertex. High-energy meson effects are taken account of in the model of a ρ-meson exchange. A kinematic analysis shows that the P-odd effects in a backward charge-exchange reaction are sensitive to the presence of neutral currents. Investigation of the P-odd effects in a forward (elastic and with charge exchange) πN-scattering makes it possible to establish the isotopic structure of the nonleptonic weak interaction and in particular to check the assumption of an intensified ρ-meson exchange which has been offered by Danilov to explain the high value of circular polarization of γ-quanta in the np → dγ reaction
2. Weak decays of heavy quarks
International Nuclear Information System (INIS)
Gaillard, M.K.
1978-08-01
The properties that may help to identify the two additional quark flavors that are expected to be discovered are reviewed. These properties are lifetime, branching ratios, selection rules, and lepton decay spectra. It is also noted that CP violation may manifest itself more strongly in heavy particle decays than elsewhere, providing a new probe of its origin. The theoretical progress in the understanding of nonleptonic transitions among lighter quarks, nonleptonic K and hyperon decay amplitudes, Ω⁻ and charmed particle decay predictions, and lastly the Kobayashi-Maskawa model for the weak coupling of heavy quarks together with the details of its implications for topology and bottomology are treated. 48 references
3. Weak decay amplitudes in large N_c QCD
International Nuclear Information System (INIS)
Bardeen, W.A.
1988-10-01
A systematic analysis of nonleptonic decay amplitudes is presented using the large N_c expansion of quantum chromodynamics. In the K-meson system, this analysis is applied to the calculation of the weak decay amplitudes, weak mixing and CP violation. 10 refs., 5 figs., 2 tabs
4. Properties of charmed particle nonleptonic interactions following from ΔT=1/2 rule for usual mesons and baryons
International Nuclear Information System (INIS)
Arbuzov, B.A.; Cartasheva, V.G.; Tikhonin, F.F.
1978-01-01
A version of weak interaction has been considered in the frame of the model of 4-colour quarks with integer charges. The white part of the nonleptonic Lagrangian, responsible for the weak decay of usual particles, is constructed in such a way that the ΔT=1/2 rule is fulfilled. This requirement fixes in a certain way the set of parameters of the full weak hadronic current. The parameters obtained in this way give a complete description of the part of the Lagrangian with the c-quark. The properties of the latter reproduce to a certain extent those of the Lagrangian obtained in the GIM model; however, the Lagrangian obtained in this work contains additional terms that cannot be derived in the GIM model
5. Exclusive nonleptonic B→VV decays
International Nuclear Information System (INIS)
Barik, N.; Naimuddin, Sk.; Dash, P. C.; Kar, Susmita
2009-01-01
The exclusive two-body nonleptonic B→VV decays are investigated, within the factorization approximation, in the relativistic independent quark model based on a confining potential in the scalar-vector harmonic form. The branching ratios and the longitudinal polarization fraction (R_L) are calculated, yielding model predictions in agreement with experiment. Our predicted CP-odd fraction (R_⊥) for B→D*D*_(s) decays is in general agreement with other model predictions and within the existing experimental limit.
6. Non-leptonic heavy meson decays - Theory Status
International Nuclear Information System (INIS)
Feldmann, T.
2014-08-01
The author briefly reviews the status and recent progress in the theoretical understanding of non-leptonic decays of beauty and charm hadrons. Focusing on a personal selection of topics, this covers perturbative calculations in quantum chromodynamics, analyses using flavour symmetries of strong interactions, and the modelling of the relevant hadronic input functions. The dynamics of strong interactions in non-leptonic decays of heavy mesons is extremely complex. While one has to admit that on the theory side a conceptual breakthrough for the systematic calculation of non-factorizable hadronic effects is still lacking, the combination of several theoretical methods in many cases still gives a satisfactory phenomenological picture. We have to note that: -) short-distance kernels in the QCD factorization approach are now being calculated at NNLO for a variety of decays; -) systematic studies of SU(3)_F flavour-symmetry-breaking effects on the basis of phenomenological data are available; and -) the ongoing improvement of the experimental situation leads to better knowledge of hadronic input parameters and more reliable estimates of systematic theoretical uncertainties
7. Non-leptonic hyperon decays and the chiral meson coupling to bags
International Nuclear Information System (INIS)
1986-01-01
Hyperon nonleptonic decays have been analyzed using a chiral-bag model instead of the MIT-bag model which was used in earlier analyses. The adopted theoretical formalism allows a step-by-step comparison between the new and the old approaches. The results are in agreement with the calculation which used the chiral model in its cloudy-bag variant. Chiral-bag-model-based theoretical predictions are not significantly different from the old MIT-bag-model-based results. The theory can account for the overall gross features of the hyperon nonleptonic decays but not for fine details like the exact, almost vanishing, value of the A(Σ⁺₊) amplitude. (orig.)
8. Light quarks and the origin of the ΔI=1/2 rule in the nonleptonic decays of strange particles
International Nuclear Information System (INIS)
Shifman, M.A.; Vainshtein, A.I.; Zakharov, V.I.
1975-01-01
A dynamical mechanism for the ΔI=1/2 rule in the nonleptonic decays of the strange particles is considered. The weak interactions are described within the Weinberg-Salam model while the strong interactions are assumed to be mediated by exchange of an octet of the colour vector gluons. It is shown that the account of the strong interactions gives rise to new operators in the effective Hamiltonian of weak interactions which contain both left- and right-handed fermions. These operators satisfy the ΔI=1/2 rule and the estimates within the relativistic quark model indicate that their contribution dominates the physical amplitudes of the K → 2π, 3π decays
9. Theoretical status of weak and electromagnetic interactions
Energy Technology Data Exchange (ETDEWEB)
Pandit, L. K.
1980-07-01
An extended simple version of the Weinberg gauge model is proposed to bring together weak and electromagnetic interactions under one theory. The essential features of the standard SU(2)⊗U(1) gauge scheme with four leptons and four quark flavours are recalled. Charged-current and neutral-current interactions are described. Non-leptonic decays of strange particles are studied. The treatment is extended to 6 leptons and 6 quark flavours. The shortcomings of this model are discussed. Speculations on the unification of strong, weak and electromagnetic interactions are made.
10. Exclusive nonleptonic B→VV decays
Energy Technology Data Exchange (ETDEWEB)
Barik, N [Department of Physics, Utkal University, Bhubaneswar-751004 (India); Naimuddin, Sk [Department of Physics, Maharishi College of Natural Law, Bhubaneswar-751007 (India); Dash, P C [Department of Physics, Prananath Autonomous College, Khurda-752057 (India); Kar, Susmita [Department of Physics, North Orissa University, Baripada-757003 (India)
2009-07-01
The exclusive two-body nonleptonic B→VV decays are investigated, within the factorization approximation, in the relativistic independent quark model based on a confining potential in the scalar-vector harmonic form. The branching ratios and the longitudinal polarization fraction (R_L) are calculated, yielding model predictions in agreement with experiment. Our predicted CP-odd fraction (R_⊥) for B→D*D*_(s) decays is in general agreement with other model predictions and within the existing experimental limit.
International Nuclear Information System (INIS)
Roberts, B.L.; Booth, E.C.; Gall, K.P.; McIntyre, E.K.; Miller, J.P.; Whitehouse, D.A.; Bassalleck, B.; Hall, J.R.; Larson, K.D.; Wolfe, D.M.; Fickinger, W.J.; Robinson, D.K.; Hallin, A.L.; Hasinoff, M.D.; Measday, D.F.; Noble, A.J.; Waltham, C.E.; Hessey, N.P.; Lowe, J.; Horvath, D.; Salomon, M.
1990-01-01
New measurements of the Σ⁺ and Λ weak radiative decays are discussed. The hyperons were produced at rest by the reaction K⁻p → Yπ where Y = Σ⁺ or Λ. The monoenergetic pion was used to tag the hyperon production, and the branching ratios were determined from the relative amplitudes of Σ⁺ → pγ to Σ⁺ → pπ⁰ and Λ → nγ to Λ → nπ⁰. The photons from weak radiative decays and from π⁰ decays were detected with modular NaI arrays. (orig.)
12. Nonleptonic B decays involving tensor mesons
Energy Technology Data Exchange (ETDEWEB)
Lopez Castro, G. [Departamento de Fisica, Cinvestav del IPN, Apdo. Postal 14-740, 07000 Mexico, D.F. (Mexico); Munoz, J.H. [Departamento de Fisica, Cinvestav del IPN, Apdo. Postal 14-740, 07000 Mexico, D.F. (Mexico)]|[Departamento de Fisica, Universidad del Tolima, A.A. 546, Ibague (Colombia)
1997-05-01
Two-body nonleptonic decays of B mesons into PT and VT modes are calculated using the nonrelativistic quark model of Isgur et al. The predictions obtained for B→πD₂*, ρD₂* are a factor of 3-5 below present experimental upper limits. Interesting patterns are obtained for ratios of B decays involving mesons with different spin excitations and their relevance for additional tests of form factor models is briefly discussed. © 1997 The American Physical Society
13. Towards a theory of weak hadronic decays of charmed particles
International Nuclear Information System (INIS)
Blok, B.Yu.; Shifman, M.A.
1986-01-01
Weak decays of charmed mesons are considered. A new quantitative framework for theoretical analysis of nonleptonic two-body decays based on QCD sum rules is proposed. This is the first of a series of papers devoted to the subject. The theoretical foundations of the approach, ensuring model-independent predictions for the partial decay widths, are discussed
14. An on-line non-leptonic neural trigger applied to an experiment looking for beauty
CERN Document Server
Baldanza, C; Cotta-Ramusino, A; D'Antone, I; Malferrari, L; Mazzanti, P; Odorici, F; Odorico, R; Zuffa, M; Bruschini, C; Musico, P; Novelli, P; Passaseo, M
1994-01-01
Results from a non-leptonic neural-network trigger hosted by experiment WA92, looking for beauty particle production from 350 GeV π⁻ on a Cu target, are presented. The neural trigger has been used to send on a special data stream (the Fast Stream) events to be analyzed with high priority. The non-leptonic signature uses microvertex detector data and was devised so as to enrich the fraction of events containing C3 secondary vertices (i.e., vertices having three tracks with sum of electric charges equal to +1 or -1). The neural trigger module consists of a VME crate hosting two ETANN analog neural chips from Intel. The neural trigger operated for two continuous weeks during the WA92 1993 run. For an acceptance of 15% for C3 events, the neural trigger yields a C3 enrichment factor of 6.6-7.1 (depending on the event sample considered), which multiplied by that already provided by the standard non-leptonic trigger leads to a global C3 enrichment factor of ∼150. In the event sample selected by the neural trigge...
15. Theory of CP violation based on the charm and strangeness changing righthanded weak current. [Quark mass term
Energy Technology Data Exchange (ETDEWEB)
Fritzsch, H; Minkowski, P [California Inst. of Tech., Pasadena (USA)
1976-06-21
If the charged weak current contains the righthanded current (c̄s)_R, the quark mass term can be the origin of CP violation, which is then intimately related to the origin of the dominating |ΔI|=1/2 and |ΔS|=1 nonleptonic weak interaction. The electric dipole moment of the neutron is predicted to be of the order of 10⁻²⁵ e·cm.
16. Results from an on-line non-leptonic neural trigger implemented in an experiment looking for beauty
International Nuclear Information System (INIS)
Baldanza, C.; Musico, P.; Novelli, P.; Passaseo, M.
1995-01-01
Results from a non-leptonic neural-network trigger hosted by experiment WA92, looking for beauty particle production from 350 GeV negative pions on a fixed Cu target, are presented. The neural trigger has been used to send events selected by means of a non-leptonic signature based on microvertex detector information to a special data stream, meant for early analysis. The non-leptonic signature, defined in a neural-network fashion, was devised so as to enrich the selected sample in the number of events containing C3 secondary vertices (i.e., vertices having three tracks with sum of electric charges equal to +1 or -1), which are sought for further analysis to identify charm and beauty non-leptonic decays. The neural trigger module consists of a VME crate hosting two MA16 digital neural chips from Siemens and two ETANN analog neural chips from Intel. During the experimental run, only the ETANN chips were operational. The neural trigger operated for two continuous weeks during the WA92 1993 run. For an acceptance of 15% for C3 events, the neural trigger yields a C3 enrichment factor of 6.6-7.1 (depending on the event sample considered), which multiplied by that already provided by the standard trigger leads to a global C3 enrichment factor of ∼150. In the event sample selected by the neural trigger, one in every ∼7 events contains a C3 vertex. The response time of the neural trigger module is 5.8 μs. (orig.)
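The enrichment factors quoted in this record can be cross-checked with a back-of-the-envelope calculation (an illustrative sketch of ours, not part of the original record; the variable names are our own). If the neural and standard triggers' enrichments simply multiply, as the abstract states, the quoted global factor of ∼150 implies a standard-trigger enrichment of roughly 21-23:

```python
# Back-of-the-envelope check of the WA92 trigger enrichment factors.
# The neural trigger alone enriches C3 events by 6.6-7.1; combined with
# the standard trigger, the quoted global enrichment is ~150.
neural_enrichment = (6.6, 7.1)
global_enrichment = 150.0

# Enrichment implied for the standard trigger alone, assuming the two
# factors simply multiply.
implied_standard = tuple(global_enrichment / n for n in neural_enrichment)
print(tuple(round(x, 1) for x in implied_standard))  # (22.7, 21.1)
```

The result is consistent with the text: both quoted neural-trigger values lead to a standard-trigger enrichment of the same order.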
17. Results from an on-line non-leptonic neural trigger implemented in an experiment looking for beauty
Energy Technology Data Exchange (ETDEWEB)
Baldanza, C. [INFN, Bologna (Italy). ANNETTHE; Bisi, F. [INFN, Bologna (Italy). ANNETTHE; Cotta-Ramusino, A. [INFN, Bologna (Italy). ANNETTHE; D'Antone, I. [INFN, Bologna (Italy). ANNETTHE; Malferrari, L. [INFN, Bologna (Italy). ANNETTHE; Mazzanti, P. [INFN, Bologna (Italy). ANNETTHE; Odorici, F. [INFN, Bologna (Italy). ANNETTHE; Odorico, R. [INFN, Bologna (Italy). ANNETTHE; Zuffa, M. [INFN, Bologna (Italy). ANNETTHE; Bruschini, C. [Istituto Nazionale di Fisica Nucleare, Genoa (Italy); Musico, P. [Istituto Nazionale di Fisica Nucleare, Genoa (Italy); Novelli, P. [Istituto Nazionale di Fisica Nucleare, Genoa (Italy); Passaseo, M. [European Organization for Nuclear Research, Geneva (Switzerland)
1995-07-15
Results from a non-leptonic neural-network trigger hosted by experiment WA92, looking for beauty particle production from 350 GeV negative pions on a fixed Cu target, are presented. The neural trigger has been used to send events selected by means of a non-leptonic signature based on microvertex detector information to a special data stream, meant for early analysis. The non-leptonic signature, defined in a neural-network fashion, was devised so as to enrich the selected sample in the number of events containing C3 secondary vertices (i.e., vertices having three tracks with sum of electric charges equal to +1 or -1), which are sought for further analysis to identify charm and beauty non-leptonic decays. The neural trigger module consists of a VME crate hosting two MA16 digital neural chips from Siemens and two ETANN analog neural chips from Intel. During the experimental run, only the ETANN chips were operational. The neural trigger operated for two continuous weeks during the WA92 1993 run. For an acceptance of 15% for C3 events, the neural trigger yields a C3 enrichment factor of 6.6-7.1 (depending on the event sample considered), which multiplied by that already provided by the standard trigger leads to a global C3 enrichment factor of ∼150. In the event sample selected by the neural trigger, one in every ∼7 events contains a C3 vertex. The response time of the neural trigger module is 5.8 μs. (orig.)
18. Bag-model matrix elements of the parity-violating weak hamiltonian for charmed baryons
International Nuclear Information System (INIS)
Ebert, D.; Kallies, W.
1983-01-01
Baryon matrix elements of the parity-violating part of the charm-changing weak Hamiltonian might be significant and comparable with those of the parity-conserving one due to large symmetry breaking. Expressions for these new matrix elements are derived using the MIT-bag model, and their implications for earlier calculations of nonleptonic charmed-baryon decays are estimated
19. Nonleptonic decay widths of B⁰ mesons into D⁺π⁻
International Nuclear Information System (INIS)
Parmar, Arpit; Vinodkumar, P.C.; Patel, Bhavin
2012-01-01
In recent years, the non-leptonic decay B⁰ → D⁺π⁻ has been measured by BaBar. The B⁰ → D⁺π⁻ processes provide very good opportunities to test the standard model of hadronic B-meson decays due to their clean and dominant hadronic decay channels
20. Simple theory of nonleptonic kaon decays
International Nuclear Information System (INIS)
1987-07-01
We first summarize (a) why the quark s̄-d̄ loop transition dominated by the physical W⁺ exchange controls the large ΔI=1/2 K→π and K→2π⁰ nonleptonic decay amplitudes, and (b) why the vacuum-saturated hadronic (implied W⁺) current-current Hamiltonian correctly explains the small ΔI=3/2 K⁺→2π decay. Then we study in greater detail a more complete hadronic D, K, π meson-W± loop calculation of the ΔI=1/2 and ΔI=3/2 K→2π amplitudes and show that this picture further reinforces our original quark ΔI=1/2 and hadron vacuum-saturated ΔI=3/2 (long distance) scheme. (author). 29 refs, 8 figs
1. Annihilation diagrams in two-body nonleptonic decays of charmed mesons
International Nuclear Information System (INIS)
Bedaque, P.; Das, A.; Mathur, V.S.
1994-06-01
In the pole-dominance model for the two-body nonleptonic decays of charmed mesons D → PV and D → VV, it is shown that the contributions of the intermediate pseudoscalar and the axial-vector meson poles cancel each other in the annihilation diagrams in the chiral limit. In the same limit, the annihilation diagrams for the D → PP decays vanish independently. (author). 6 refs, 3 figs
2. Weak decays of charmed particles and heavy leptons
International Nuclear Information System (INIS)
Walsh, T.F.
1977-11-01
Charm's chirality, Cabibbo's angle in semileptonic processes, ΔC = 1 nonleptonic decay, D⁰-D̄⁰ mixing and ΔC = -ΔS decays, models for nonleptonic decays and especially the properties of the τ are discussed. (BJ) [de]
3. Closing in on the radiative weak chiral couplings
Science.gov (United States)
Cappiello, Luigi; Catà, Oscar; D'Ambrosio, Giancarlo
2018-03-01
We point out that, given the current experimental status of radiative kaon decays, a subclass of the O(p⁴) counterterms of the weak chiral lagrangian can be determined in closed form. This involves in a decisive way the decay K± → π±π⁰ℓ⁺ℓ⁻, currently being measured at CERN by the NA48/2 and NA62 collaborations. We show that consistency with other radiative kaon decay measurements leads to a rather clean prediction for the O(p⁴) weak couplings entering this decay mode. This results in a characteristic pattern for the interference Dalitz plot, susceptible to be tested already with the limited statistics available at NA48/2. We also provide the first analysis of K_S → π⁺π⁻γ*, which will be measured by LHCb and will help reduce (together with the related K_L decay) the experimental uncertainty on the radiative weak chiral couplings. A precise experimental determination of the O(p⁴) weak couplings is important in order to assess the validity of the existing theoretical models in a conclusive way. We briefly comment on the current theoretical situation and discuss the merits of the different theoretical approaches.
4. Ω⁻ and Σ⁺→pγ nonleptonic weak decays via current algebra, partial conservation of axial-vector current, and the quark model
International Nuclear Information System (INIS)
1983-01-01
By employing the current-algebra-PCAC (partial conservation of axial-vector current) program at the hadron level, the three decays Ω⁻ → Ξ⁰π⁻, Ξ⁻π⁰, ΛK⁻ are reasonably described in terms of only one fitted (ΔI=1/2)/(ΔI=3/2) parameter of expected small 6% magnitude. Other parameters needed in the analysis, the baryon octet and decuplet weak transition matrix elements, are completely constrained from B→B'π weak decays and independently from the quark model. The Σ⁺→pγ radiative decay amplitude and asymmetry parameters are then determined in terms of no free parameters
5. Spontaneously broken SU(2) gauge invariance and the ΔI=1/2 rule
International Nuclear Information System (INIS)
Shito, Okiyasu
1977-01-01
A model of nonleptonic weak interactions is proposed which is based on spontaneously broken SU(2) gauge invariance. The SU(2) group is taken analogously to the U-spin. In this scheme, the source of nonleptonic decays consists of only neutral currents, and violation of strangeness stems from weak vector boson mixings. The model can provide a natural explanation of the ΔI=1/2 rule and of the bulk of the ΔI=1/2 nonleptonic amplitude. As a consequence, a picture is obtained in which weak interactions originate in spontaneously broken gauge invariance under orthogonal SU(2) groups. Finally, a possibility of unifying weak and electromagnetic interactions is indicated. (auth.)
6. Weak radiative baryonic decays of B mesons
International Nuclear Information System (INIS)
Kohara, Yoji
2004-01-01
Weak radiative baryonic B decays B→B₁B̄₂γ are studied under the assumption of the short-distance b→sγ electromagnetic penguin transition dominance. The relations among the decay rates of various decay modes are derived
7. Towards new frontiers in the exploration of charmless non-leptonic B decays
Science.gov (United States)
Fleischer, Robert; Jaarsma, Ruben; Vos, K. Keri
2017-03-01
Non-leptonic B decays into charmless final states offer an important laboratory to study CP violation and the dynamics of strong interactions. Particularly interesting are B_s⁰ → K⁻K⁺ and B_d⁰ → π⁻π⁺ decays, which are related by the U-spin symmetry of strong interactions, and allow for the extraction of CP-violating phases and tests of the Standard Model. The theoretical precision is limited by U-spin-breaking corrections and innovative methods are needed in view of the impressive future experimental precision expected in the era of Belle II and the LHCb upgrade. We have recently proposed a novel method to determine the B_s⁰-B̄_s⁰ mixing phase φ_s from the B_s⁰ → K⁻K⁺, B_d⁰ → π⁻π⁺ system, where semileptonic B_s⁰ → K⁻ℓ⁺ν_ℓ, B_d⁰ → π⁻ℓ⁺ν_ℓ decays are a new ingredient and the theoretical situation is very favourable. We discuss this strategy in detail, with a focus on penguin contributions as well as exchange and penguin-annihilation topologies which can be probed by a variety of non-leptonic B decays into charmless final states. We show that a theoretical precision as high as O(0.5°) for φ_s can be attained in the future, thereby offering unprecedented prospects for the search for new sources of CP violation.
8. Concluding remarks and outlook: Europhysics conference on flavor-mixing in weak interactions
International Nuclear Information System (INIS)
Chau, L.L.
1984-01-01
Some comments are offered on the present knowledge of the mixing matrix of Kobayashi and Maskawa and of the dynamics of nonleptonic decay. Also, remarks are made concerning CP violation. Plans for research from 1984 to 1989 are listed briefly. The history of studies on weak interactions is briefly reviewed, and several unanswered questions are stated, such as: where are the truth particles and how may they be discovered, what is the mass-generating mechanism for the gauge bosons, how many Z⁰'s and W's are there, do neutrinos have mass, and how long do protons live
9. On the determination of the b→c handedness using nonleptonic Λ_c decays
International Nuclear Information System (INIS)
Koenig, B.
1993-09-01
We consider possibilities to determine the handedness of b→c current transitions using semileptonic baryonic Λ_b→Λ_c transitions. We propose to analyze the longitudinal polarization of the daughter baryon Λ_c by using momentum-spin correlation measurements in the form of forward-backward (FB) asymmetry measures involving its nonleptonic decay products. We use an explicit form factor model to determine the longitudinal polarization of the Λ_c in the semileptonic decay Λ_b → Λ_c + l⁻ + ν̄_l. The mean longitudinal polarization of the Λ_c is negative (positive) for left-chiral (right-chiral) b→c current transitions. The frame-dependent longitudinal polarization of the Λ_c is large (≅80%) in the Λ_b rest frame and somewhat smaller (30%-40%) in the lab frame when the Λ_b's are produced on the Z⁰ peak. We suggest to use nonleptonic decay modes of the Λ_c to analyze its polarization and thereby to determine the chirality of the b→c transition. Since the Λ_b's produced on the Z⁰ are expected to be polarized, we discuss issues of the polarization transfer in Λ_b→Λ_c transitions. We also investigate the p_⊥- and p-cut sensitivity of our predictions for the polarization of the Λ_c. (orig.)
10. Nuclear energy - Radioprotection - Procedure for radiation protection monitoring in nuclear installations for external exposure to weakly penetrating radiation, especially to beta radiation
International Nuclear Information System (INIS)
2002-01-01
11. Weak annihilation and new physics in charmless B → MM decays
Energy Technology Data Exchange (ETDEWEB)
Bobeth, Christoph [Institute for Advanced Study, Technische Universitaet Muenchen, Garching (Germany); Gorbahn, Martin [University of Liverpool, Department of Mathematical Sciences, Liverpool (United Kingdom); Vickers, Stefan [Excellence Cluster Universe, Technische Universitaet Muenchen, Garching (Germany)
2015-07-15
We use currently available data on nonleptonic charmless 2-body B → MM decays (MM = PP, PV, VV) that are mediated by b → (d, s) QCD- and QED-penguin operators to study weak annihilation and new-physics effects in the framework of QCD factorization. In particular we introduce one weak-annihilation parameter for decays related by (u ↔ d) quark interchange and test this universality assumption. Within the standard model, the data supports this assumption with the only exceptions in the B → Kπ system, which exhibits the well-known ''ΔA_CP puzzle'', and some tensions in B → K*φ. Beyond the standard model, we simultaneously determine weak-annihilation and new-physics parameters from data, employing model-independent scenarios that address the ''ΔA_CP puzzle'', such as QED-penguins and b → s ūu current-current operators. We discuss also possibilities that allow further tests of our assumption once improved measurements from LHCb and Belle II become available. (orig.)
12. Towards new frontiers in the exploration of charmless non-leptonic B decays
Energy Technology Data Exchange (ETDEWEB)
Fleischer, Robert [Nikhef,Science Park 105, NL-1098 XG Amsterdam (Netherlands); Department of Physics and Astronomy, Vrije Universiteit Amsterdam,NL-1081 HV Amsterdam (Netherlands); Jaarsma, Ruben [Nikhef,Science Park 105, NL-1098 XG Amsterdam (Netherlands); Vos, K. Keri [Nikhef,Science Park 105, NL-1098 XG Amsterdam (Netherlands); Van Swinderen Institute for Particle Physics and Gravity, University of Groningen,NL-9747 AG Groningen (Netherlands); Theoretische Physik 1, Naturwissenschaftlich-Technische Fakultät, Universität Siegen, D-57068 Siegen (Germany)
2017-03-09
Non-leptonic B decays into charmless final states offer an important laboratory to study CP violation and the dynamics of strong interactions. Particularly interesting are B_s⁰ → K⁻K⁺ and B_d⁰ → π⁻π⁺ decays, which are related by the U-spin symmetry of strong interactions, and allow for the extraction of CP-violating phases and tests of the Standard Model. The theoretical precision is limited by U-spin-breaking corrections and innovative methods are needed in view of the impressive future experimental precision expected in the era of Belle II and the LHCb upgrade. We have recently proposed a novel method to determine the B_s⁰-B̄_s⁰ mixing phase φ_s from the B_s⁰ → K⁻K⁺, B_d⁰ → π⁻π⁺ system, where semileptonic B_s⁰ → K⁻ℓ⁺ν_ℓ, B_d⁰ → π⁻ℓ⁺ν_ℓ decays are a new ingredient and the theoretical situation is very favourable. We discuss this strategy in detail, with a focus on penguin contributions as well as exchange and penguin-annihilation topologies which can be probed by a variety of non-leptonic B decays into charmless final states. We show that a theoretical precision as high as O(0.5°) for φ_s can be attained in the future, thereby offering unprecedented prospects for the search for new sources of CP violation.
13. Rare Nonleptonic Decays of the Omega Hyperon: Measurement of the Branching Ratios for Ω∓ → Ξ*(1530)⁰(Ξ̄*(1530)⁰)π∓ and Ω∓ → Ξ∓π±π∓
International Nuclear Information System (INIS)
Kamaev, Oleg; IIT, Chicago
2007-01-01
A clean signal of 78 (24) events has been observed in the rare nonleptonic particle (antiparticle) decay modes Ω∓ → Ξ∓π±π∓ using data collected with the HyperCP spectrometer during Fermilab's 1999 fixed-target run. We obtain B(Ω⁻ → Ξ⁻π⁺π⁻) = [4.32 ± 0.56(stat) ± 0.28(syst)] × 10⁻⁴ and B(Ω̄⁺ → Ξ̄⁺π⁻π⁺) = [3.13 ± 0.71(stat) ± 0.20(syst)] × 10⁻⁴. This is the first observation of the antiparticle mode. Our measurement for the particle mode agrees with the previous experimental result and has an order-of-magnitude better precision. We extract the contribution of the resonance decay mode Ω∓ → Ξ*(1530)⁰(Ξ̄*(1530)⁰)π∓ to the final state Ξ∓π±π∓. This, the first actual measurement of the resonance-mode branching ratios, gives B(Ω⁻ → Ξ*(1530)⁰π⁻) = [4.55 ± 2.33(stat) ± 0.38(syst)] × 10⁻⁵ and B(Ω̄⁺ → Ξ̄*(1530)⁰π⁺) = [1.40 ± 2.83(stat) ± 0.12(syst)] × 10⁻⁵, and disagrees with the current Particle Data Group review value, being ∼14 times smaller. Since the central value of the resonance-mode branching ratio is less than two standard deviations away from zero, we also calculate branching-ratio upper limits at the 90% confidence level: B(Ω⁻ → Ξ*(1530)⁰π⁻) < ⋯ × 10⁻⁵ and B(Ω̄⁺ → Ξ̄*(1530)⁰π⁺) < ⋯ × 10⁻⁵. This analysis provides new data on nonleptonic hyperon decays which allows studies of how weak interaction processes occur in the presence of strong interactions
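Where a result is quoted with separate statistical and systematic uncertainties, as above, the two are conventionally combined in quadrature. A quick check on the quoted Ω⁻ → Ξ⁻π⁺π⁻ numbers (a reader's sanity check, not part of the original analysis):

```python
import math

# HyperCP result: B(Omega- -> Xi- pi+ pi-) = [4.32 +/- 0.56(stat) +/- 0.28(syst)] x 10^-4
central, stat, syst = 4.32, 0.56, 0.28   # all in units of 10^-4

# Combine statistical and systematic uncertainties in quadrature
total = math.sqrt(stat**2 + syst**2)

print(f"B = ({central:.2f} +/- {total:.2f}) x 10^-4")  # -> B = (4.32 +/- 0.63) x 10^-4
```

With a combined uncertainty of about 0.63, the signal central value sits roughly 7 standard deviations from zero, consistent with the "clean signal" claim.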
International Nuclear Information System (INIS)
Shu, D.; Toellner, T. S.; Alp, E. E.; Maser, J.; Ilavsky, J.; Shastri, S. D.; Lee, P. L.; Narayanan, S.; Long, G. G.
2007-01-01
Unlike traditional kinematic flexure mechanisms, laminar overconstrained weak-link mechanisms provide much higher structure stiffness and stability. Using a laminar structure configured and manufactured by chemical etching and lithography techniques, we are able to design and build linear and rotary weak-link mechanisms with ultrahigh positioning sensitivity and stability for synchrotron radiation applications. Applications of laminar rotary weak-link mechanism include: high-energy-resolution monochromators for inelastic x-ray scattering and x-ray analyzers for ultra-small-angle scattering and powder-diffraction experiments. Applications of laminar linear weak-link mechanism include high-stiffness piezo-driven stages with subnanometer resolution for an x-ray microscope. In this paper, we summarize the recent designs and applications of the laminar weak-link mechanisms at the Advanced Photon Source
15. Weak decays of doubly heavy baryons. SU(3) analysis
Energy Technology Data Exchange (ETDEWEB)
Wang, Wei; Xing, Zhi-Peng; Xu, Ji [Shanghai Jiao Tong University, INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology, School of Physics and Astronomy, Shanghai (China)
2017-11-15
Motivated by the recent LHCb observation of doubly charmed baryon Ξ{sub cc}{sup ++} in the Λ{sub c}{sup +}K{sup -}π{sup +}π{sup +} final state, we analyze the weak decays of doubly heavy baryons Ξ{sub cc}, Ω{sub cc}, Ξ{sub bc}, Ω{sub bc}, Ξ{sub bb} and Ω{sub bb} under the flavor SU(3) symmetry. The decay amplitudes for various semileptonic and nonleptonic decays are parametrized in terms of a few SU(3) irreducible amplitudes. We find a number of relations or sum rules between decay widths and CP asymmetries, which can be examined in future measurements at experimental facilities like LHC, Belle II and CEPC. Moreover, once a few decay branching fractions have been measured in the future, some of these relations may provide hints for exploration of new decay modes. (orig.)
16. Weak decays of doubly heavy baryons. Multi-body decay channels
Energy Technology Data Exchange (ETDEWEB)
Shi, Yu-Ji; Wang, Wei; Xing, Ye; Xu, Ji [Shanghai Jiao Tong University, INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology, MOE Key Laboratory for Particle Physics, Astrophysics and Cosmology, School of Physics and Astronomy, Shanghai (China)
2018-01-15
The newly-discovered Ξ{sub cc}{sup ++} decays into the Λ{sub c}{sup +}K{sup -}π{sup +}π{sup +}, but the experimental data has indicated that this decay is not saturated by any two-body intermediate state. In this work, we analyze the multi-body weak decays of doubly heavy baryons Ξ{sub cc}, Ω{sub cc}, Ξ{sub bc}, Ω{sub bc}, Ξ{sub bb} and Ω{sub bb}, in particular the three-body nonleptonic decays and four-body semileptonic decays. We classify various decay modes according to the quark-level transitions and present an estimate of the typical branching fractions for a few golden decay channels. Decay amplitudes are then parametrized in terms of a few SU(3) irreducible amplitudes. With these amplitudes, we find a number of relations for decay widths, which can be examined in future. (orig.)
International Nuclear Information System (INIS)
Larson, K.D.; Noble, A.J.; Bassalleck, B.; Burkhardt, H.; Fickinger, W.J.; Hall, J.R.; Hallin, A.L.; Hasinoff, M.D.; Horvath, D.; Jones, P.G.; Lowe, J.; McIntyre, E.K.; Measday, D.F.; Miller, J.P.; Roberts, B.L.; Robinson, D.K.; Sakitt, M.; Salomon, M.; Stanislaus, S.; Waltham, C.E.; Warner, T.M.; Whitehouse, D.A.; Wolfe, D.M.
1993-01-01
The branching ratio for the Λ weak radiative decay Λ → nγ has been measured. Three statistically independent results from the same experiment (Brookhaven E811) are reported here. They are combined with a previously published measurement, also from Brookhaven E811, to yield a result of (Λ → nγ)/(Λ → anything) = (1.75 ± 0.15) × 10⁻³, based on 1800 events after background subtraction. This represents a factor of 75 increase in statistics over the previous world total. A comparison with recent theoretical papers shows that no existing model provides a completely satisfactory description of all data on weak radiative decays. A search is also reported for the radiative capture process K⁻p → Σ(1385)γ at rest. No signal was observed and an upper limit on the branching ratio of [K⁻p → Σ(1385)γ]/[K⁻p → anything] < ⋯ × 10⁻⁴ (90% C.L.) was determined
18. Electromagnetic radiation damping of charges in external gravitational fields (weak field, slow motion approximation). [Harmonic coordinates, weak field slow-motion approximation, Green function]
Energy Technology Data Exchange (ETDEWEB)
Rudolph, E [Max-Planck-Institut fuer Physik und Astrophysik, Muenchen (F.R. Germany)
1975-01-01
As a model for gravitational radiation damping of a planet the electromagnetic radiation damping of an extended charged body moving in an external gravitational field is calculated in harmonic coordinates using a weak field, slow-motion approximation. Special attention is paid to the case where this gravitational field is a weak Schwarzschild field. Using Green's function methods for this purpose it is shown that in a slow-motion approximation there is a strange connection between the tail part and the sharp part: radiation reaction terms of the tail part can cancel corresponding terms of the sharp part. Due to this cancelling mechanism the lowest order electromagnetic radiation damping force in an external gravitational field in harmonic coordinates remains the flat space Abraham Lorentz force. It is demonstrated in this simplified model that a naive slow-motion approximation may easily lead to divergent higher order terms. It is shown that this difficulty does not arise up to the considered order.
19. Spectator scattering at NLO in non-leptonic B decays: Leading penguin amplitudes
International Nuclear Information System (INIS)
Beneke, M.; Jaeger, S.
2007-01-01
We complete the computation of the 1-loop (α_s²) corrections to hard spectator scattering in non-leptonic B decays at leading power in Λ/m_b by evaluating the penguin amplitudes. This extends the knowledge of these next-to-next-to-leading-order contributions in the QCD factorization formula for B decays to a much wider class of final states, including all pseudoscalar-pseudoscalar, pseudoscalar-vector, and longitudinally polarized vector-vector final states, except final states with η or η′ mesons. The new 1-loop correction is significant for the colour-suppressed amplitudes, but turns out to be strongly suppressed for the leading QCD penguin amplitude α₄^p. We provide numerical values of the phenomenological P/T and C/T amplitude ratios for the ππ, πρ and ρρ final states, and discuss corrections to several relations between electroweak penguin and tree amplitudes
20. Weak-beam electron microscopy of radiation-induced segregation
International Nuclear Information System (INIS)
Saka, H.
1983-01-01
The segregation of solute atoms to dislocations during irradiation by 1 MeV electrons in a HVEM was studied by measuring the dissociation width of extended dislocations in Cu-5.1 at.%Si, Cu-5.3 at.%Ge, Ag-9.4 at.% In and Ag-9.6 at.%Al alloys. 'Weak-beam' electron microscopy was used. In Cu-Si (oversized solute), Cu-Ge (oversize) and Ag-Al (undersize), solute enrichment was observed near dislocations, while in Ag-In (oversize) solute depletion was observed. The results are discussed in terms of current mechanisms for radiation-induced segregation. (author)
1. Bc meson weak decays and CP violation
International Nuclear Information System (INIS)
Liu, J.; Chao, K.
1997-01-01
The form factors for B_c transitions are calculated with a relativistic constituent quark model based on the Bethe-Salpeter formalism. The rates for some semileptonic and nonleptonic B_c weak decays and CP-violating asymmetries for two-body hadronic B_c decays are estimated as well. The calculated widths are compared with those predicted in other quark models of mesons. For the most promising signatures for the discovery of B_c: B_c → ψlν → (l′⁺l′⁻)lν and B_c → ψπ → (l′⁺l′⁻)π (with l′ = e or μ), the combined branching ratios are, respectively, estimated to be 1.06 × 10⁻³ and 4.8a₁² × 10⁻⁵ for τ_{B_c} = 0.5 ps and as large as 2.56 × 10⁻³ and 1.15a₁² × 10⁻⁴ for τ_{B_c} = 1.2 ps. There are large CP-violating effects in some B_c decay modes, and the rates for some of these (e.g., B_c → ψD*, η_cD, and η_cD*, etc.) are large too. copyright 1997 The American Physical Society
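The two lifetime scenarios quoted above are consistent with the branching ratio scaling linearly with the assumed B_c lifetime (B ≈ Γ_partial · τ when the partial width is held fixed). A quick numerical check of the abstract's own values, as a reader's consistency check rather than part of the original calculation:

```python
# Quoted combined branching ratio for B_c -> psi l nu -> (l'+ l'-) l nu
br_short = 1.06e-3        # quoted at tau_Bc = 0.5 ps
br_long_quoted = 2.56e-3  # quoted at tau_Bc = 1.2 ps

# Linear scaling with lifetime: B(tau2) ~ B(tau1) * tau2 / tau1
br_long_scaled = br_short * (1.2 / 0.5)

rel_diff = abs(br_long_scaled - br_long_quoted) / br_long_quoted
print(f"scaled: {br_long_scaled:.3e}, quoted: {br_long_quoted:.3e}, diff: {rel_diff:.1%}")
```

The scaled value (2.544 × 10⁻³) agrees with the quoted 2.56 × 10⁻³ to well under 1%, confirming the near-linear lifetime dependence of the quoted numbers.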
2. Radiation from quantum weakly dynamical horizons in loop quantum gravity.
Science.gov (United States)
Pranzetti, Daniele
2012-07-06
We provide a statistical mechanical analysis of quantum horizons near equilibrium in the grand canonical ensemble. By matching the description of the nonequilibrium phase in terms of weakly dynamical horizons with a local statistical framework, we implement loop quantum gravity dynamics near the boundary. The resulting radiation process provides a quantum gravity description of the horizon evaporation. For large black holes, the spectrum we derive presents a discrete structure which could be potentially observable.
3. Inner-shell photoionization in weak and strong radiation fields
International Nuclear Information System (INIS)
Southworth, S.H.; Dunford, R.W.; Ederer, D.L.; Kanter, E.P.; Kraessig, B.; Young, L.
2004-01-01
The X-ray beams presently produced at synchrotron-radiation facilities interact weakly with matter, and the observation of double photoionization is due to electron-electron interactions. The intensities of future X-ray free-electron lasers are expected to produce double photoionization by absorption of two photons. The example of double K-shell photoionization of neon is discussed in the one- and two-photon cases. We also describe an experiment in which X rays photoionize the K shell of krypton in the presence of a strong AC field imposed by an optical laser
4. Weak boson emission in hadron collider processes
International Nuclear Information System (INIS)
Baur, U.
2007-01-01
The O(α) virtual weak radiative corrections to many hadron collider processes are known to become large and negative at high energies, due to the appearance of Sudakov-like logarithms. At the same order in perturbation theory, weak boson emission diagrams contribute. Since the W and Z bosons are massive, the O(α) virtual weak radiative corrections and the contributions from weak boson emission are separately finite. Thus, unlike in QED or QCD calculations, there is no technical reason for including gauge boson emission diagrams in calculations of electroweak radiative corrections. In most calculations of the O(α) electroweak radiative corrections, weak boson emission diagrams are therefore not taken into account. Another reason for not including these diagrams is that they lead to final states which differ from that of the original process. However, in experiment, one usually considers partially inclusive final states. Weak boson emission diagrams thus should be included in calculations of electroweak radiative corrections. In this paper, I examine the role of weak boson emission in those processes at the Fermilab Tevatron and the CERN LHC for which the one-loop electroweak radiative corrections are known to become large at high energies (inclusive jet, isolated photon, Z+1 jet, Drell-Yan, di-boson, tt, and single top production). In general, I find that the cross section for weak boson emission is substantial at high energies and that weak boson emission and the O(α) virtual weak radiative corrections partially cancel
5. Global existence of a weak solution for a model in radiation magnetohydrodynamics
Czech Academy of Sciences Publication Activity Database
Ducomet, B.; Kobera, M.; Nečasová, Šárka
2017-01-01
Roč. 150, č. 1 (2017), s. 43-65 ISSN 0167-8019 R&D Projects: GA ČR GA13-00522S Institutional support: RVO:67985840 Keywords: radiation magnetohydrodynamics * Navier-Stokes-Fourier system * weak solution Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.702, year: 2016 https://link.springer.com/article/10.1007%2Fs10440-016-0093-y
7. Efficient weakly-radiative wireless energy transfer: An EIT-like approach
International Nuclear Information System (INIS)
Hamam, Rafif E.; Karalis, Aristeidis; Joannopoulos, J.D.; Soljacic, Marin
2009-01-01
Inspired by a quantum interference phenomenon known in the atomic physics community as electromagnetically induced transparency (EIT), we propose an efficient weakly radiative wireless energy transfer scheme between two identical classical resonant objects, strongly coupled to an intermediate classical resonant object of substantially different properties, but with the same resonance frequency. The transfer mechanism essentially makes use of the adiabatic evolution of an instantaneous (so called 'dark') eigenstate of the coupled 3-object system. Our analysis is based on temporal coupled mode theory (CMT), and is general enough to be valid for various possible sorts of coupling, including the resonant inductive coupling on which witricity-type wireless energy transfer is based. We show that in certain parameter regimes of interest, this scheme can be more efficient, and/or less radiative than other, more conventional approaches. A concrete example of wireless energy transfer between capacitively-loaded metallic loops is illustrated at the beginning, as a motivation for the more general case. We also explore the performance of the currently proposed EIT-like scheme, in terms of improving efficiency and reducing radiation, as the relevant parameters of the system are varied.
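The adiabatic "dark-state" transfer described above can be illustrated with a minimal coupled-mode-theory integration: three lossless resonators on a common resonance, with the two time-dependent couplings applied in the counterintuitive order (receiver-side coupling peaking first), so the intermediate object stays only weakly excited while energy moves from object 1 to object 3. All parameter values below are illustrative, not taken from the paper:

```python
import math

# Coupled-mode equations (lossless, on resonance, rotating frame):
#   da1/dt = -i k1(t) a2
#   da2/dt = -i k1(t) a1 - i k2(t) a3
#   da3/dt = -i k2(t) a2
# Counterintuitive pulse order (k2 before k1) keeps the system in the
# instantaneous "dark" superposition of objects 1 and 3.

KMAX = 60.0  # peak coupling rate (arbitrary units), illustrative

def gauss(t, t0, w):
    return math.exp(-((t - t0) / w) ** 2)

def derivs(t, a):
    k1 = KMAX * gauss(t, 0.60, 0.12)  # source <-> intermediate (later)
    k2 = KMAX * gauss(t, 0.40, 0.12)  # intermediate <-> receiver (earlier)
    a1, a2, a3 = a
    return (-1j * k1 * a2,
            -1j * (k1 * a1 + k2 * a3),
            -1j * k2 * a2)

a = (1.0 + 0j, 0j, 0j)  # all energy starts in object 1
t, dt = 0.0, 1e-4
max_p2 = 0.0            # track intermediate-object excitation
while t < 1.0:
    # classic fourth-order Runge-Kutta step
    k1_ = derivs(t, a)
    k2_ = derivs(t + dt/2, tuple(x + dt/2 * k for x, k in zip(a, k1_)))
    k3_ = derivs(t + dt/2, tuple(x + dt/2 * k for x, k in zip(a, k2_)))
    k4_ = derivs(t + dt, tuple(x + dt * k for x, k in zip(a, k3_)))
    a = tuple(x + dt/6 * (p + 2*q + 2*r + s)
              for x, p, q, r, s in zip(a, k1_, k2_, k3_, k4_))
    max_p2 = max(max_p2, abs(a[1]) ** 2)
    t += dt

p3 = abs(a[2]) ** 2
print(f"transfer efficiency |a3|^2 = {p3:.3f}, peak |a2|^2 = {max_p2:.3f}")
```

With these (illustrative) strongly adiabatic parameters, nearly all the energy ends up in object 3 while the intermediate object's peak excitation remains small, mirroring the EIT-like mechanism the abstract describes.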
8. Effects of quenching and partial quenching on penguin matrix elements
NARCIS (Netherlands)
Golterman, Maarten; Pallante, Elisabetta
2001-01-01
In the calculation of non-leptonic weak decay rates, a "mismatch" arises when the QCD evolution of the relevant weak hamiltonian down to hadronic scales is performed in unquenched QCD, but the hadronic matrix elements are then computed in (partially) quenched lattice QCD. This mismatch arises
9. Acoustic radiation from weakly wrinkled premixed flames
Energy Technology Data Exchange (ETDEWEB)
Lieuwen, Tim; Mohan, Sripathi; Rajaram, Rajesh; Preetham, [School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0150 (United States)
2006-01-01
This paper describes a theoretical analysis of acoustic radiation from weakly wrinkled (i.e., u'/S_L < 1) premixed flames. Specifically, it determines the transfer function relating the spectrum of the acoustic pressure oscillations, P'(ω), to that of the turbulent velocity fluctuations in the approach flow, U'(ω). In the weakly wrinkled limit, this transfer function is local in frequency space; i.e., velocity fluctuations at a frequency ω distort the flame and generate sound at the same frequency. This transfer function primarily depends upon the flame Strouhal number St (based on mean flow velocity and flame length) and the correlation length, λ, of the flow fluctuations. For cases where the ratio of the correlation length and duct radius λ/a ≫ 1, the acoustic pressure and turbulent velocity power spectra are related by P'(ω) ∼ ω²U'(ω) and P'(ω) ∼ U'(ω) for St ≪ 1 and St ≫ 1, respectively. For cases where λ/a ≪ 1, the transfer functions take the form P'(ω) ∼ ω²(λ/a)²U'(ω) and P'(ω) ∼ ω²(λ/a)²(ψ − δ ln(λ/a))U'(ω) for St ≪ 1 and St ≫ 1, respectively, where ψ and δ are constants. The latter result demonstrates that this transfer function does not exhibit a simple power law relationship in the high frequency region of the spectra. The simultaneous dependence of this pressure-velocity transfer function upon the Strouhal number and correlation length suggests a mechanism for the experimentally observed maximum in acoustic spectra and provides some insight into the controversy in the literature over how this peak should scale with the flame Strouhal number.
Science.gov (United States)
Halgamuge, Malka N
2017-01-01
The aim of this article was to explore the hypothesis that non-thermal, weak, radiofrequency electromagnetic fields (RF-EMF) have an effect on living plants. In this study, we performed an analysis of the data extracted from the 45 peer-reviewed scientific publications (1996-2016) describing 169 experimental observations to detect the physiological and morphological changes in plants due to the non-thermal RF-EMF effects from mobile phone radiation. Twenty-nine different species of plants were considered in this work. Our analysis demonstrates that the data from a substantial amount of the studies on RF-EMFs from mobile phones show physiological and/or morphological effects (89.9%, p radiofrequency radiation influence on plants. Hence, this study provides new evidence supporting our hypothesis. Nonetheless, this endorses the need for more experiments to observe the effects of RF-EMFs, especially for longer exposure durations, using whole organisms. The above observation agrees with our earlier study, in that it supported that it is not a well-grounded method to characterize biological effects without considering the exposure duration. Nevertheless, none of these findings can be directly associated with humans; however, on the other hand, this cannot be excluded, as it can impact human welfare and health, either directly or indirectly, due to their complexity and varied effects (calcium metabolism, stress proteins, etc.). This study should be useful as a reference for researchers conducting epidemiological studies and long-term experiments, using whole organisms, to observe the effects of RF-EMFs.
11. Laboratory simulation of Euclid-like sky images to study the impact of CCD radiation damage on weak gravitational lensing
Science.gov (United States)
Prod'homme, T.; Verhoeve, P.; Oosterbroek, T.; Boudin, N.; Short, A.; Kohley, R.
2014-07-01
Euclid is the ESA mission to map the geometry of the dark universe. It uses weak gravitational lensing, which requires the accurate measurement of galaxy shapes over a large area in the sky. Radiation damage in the 36 Charge-Coupled Devices (CCDs) composing the Euclid visible imager focal plane has already been identified as a major contributor to the weak-lensing error budget; radiation-induced charge transfer inefficiency (CTI) distorts the galaxy images and introduces a bias in the galaxy shape measurement. We designed a laboratory experiment to project Euclid-like sky images onto an irradiated Euclid CCD. In this way - and for the first time - we are able to directly assess the effect of CTI on the Euclid weak-lensing measurement free of modelling uncertainties. We present here the experiment concept, setup, and first results. The results of such an experiment provide test data critical to refine models, design and test the Euclid data processing CTI mitigation scheme, and further optimize the Euclid CCD operation.
12. CP-violation in K⁰(K̄⁰) → 3π decays from chiral Lagrangians with fourth-order derivative terms, including isospin-breaking and rescattering effects
International Nuclear Information System (INIS)
Bel'kov, A.A.; Lanyov, A.V.; Ebert, D.
1990-08-01
In the framework of recently proposed effective Lagrangians for weak nonleptonic meson interactions the amplitudes of the decays K⁰ → 3π have been calculated with inclusion of isospin breaking and meson rescattering effects. The imaginary part of the penguin diagram contribution, which determines direct CP-violation in nonleptonic kaon decays, has been fixed with the help of the measured ratio ε′/ε of CP-violation parameters. The modification of the Li-Wolfenstein relation for the direct CP-violation parameter in K⁰(K̄⁰) → π⁺π⁻π⁰ decays is discussed. (author). 27 refs, 3 figs, 1 tab
13. Radiation tails of the scalar wave equation in a weak gravitational field
International Nuclear Information System (INIS)
Mankin, R.; Piir, I.
1974-01-01
A class of solutions of the linearized Einstein equations is found making use of the Newman-Penrose spin coefficient formalism. These solutions describe a weak retarded gravitational field with an arbitrary multipole structure. The study of the radial propagation of the scalar waves in this gravitational field shows that in the first approximation the tails of the scalar outgoing radiation appear either in the presence of a gravitational mass or in the case of a nonzero linear momentum of the gravitational source. The quadrupole moment and the higher multipole moments of the gravitational field as well as the constant dipole moment and the angular moment of the source do not contribute to the tail
14. Weak decays of doubly heavy baryons. The 1/2 → 1/2 case
Energy Technology Data Exchange (ETDEWEB)
Wang, Wei; Zhao, Zhen-Xing [Shanghai Jiao Tong University, INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology, School of Physics and Astronomy, Shanghai (China); Yu, Fu-Sheng [Lanzhou University, School of Nuclear Science and Technology, Lanzhou (China)
2017-11-15
Very recently, the LHCb collaboration has observed in the final state Λ{sub c}{sup +}K{sup -}π{sup +}π{sup +} a resonant structure that is identified as the doubly charmed baryon Ξ{sub cc}{sup ++}. Inspired by this observation, we investigate the weak decays of doubly heavy baryons Ξ{sub cc}{sup ++}, Ξ{sub cc}{sup +}, Ω{sub cc}{sup +}, Ξ{sub bc}{sup (')+}, Ξ{sub bc}{sup (')0}, Ω{sub bc}{sup (')0}, Ξ{sub bb}{sup 0}, Ξ{sub bb}{sup -} and Ω{sub bb}{sup -} and focus on the decays into spin 1/2 baryons in this paper. At the quark level these decay processes are induced by the c → d/s or b → u/c transitions, and the two spectator quarks can be viewed as a scalar or axial vector diquark. We first derive the hadronic form factors for these transitions in the light-front approach and then apply them to predict the partial widths for the semileptonic and nonleptonic decays of doubly heavy baryons. We find that the number of decay channels is sizable and can be examined in future measurements at experimental facilities like LHC, Belle II and CEPC. (orig.)
15. DETERMINATION OF SUPERFICIAL ABSORBED DOSE FROM EXTERNAL EXPOSURE OF WEAKLY PENETRATING RADIATIONS
Institute of Scientific and Technical Information of China (English)
陈丽姝
1994-01-01
The methods of determining the superficial absorbed dose distributions in a water phantom by means of experiments and available theories have been reported. The distributions of beta dose were measured by an extrapolation ionization chamber at definite depths corresponding to some superficial organs and tissues such as the radiosensitive layer of the skin, cornea, sclera, anterior chamber and lens of the eyeball. The ratios among the superficial absorbed dose D(0.07) and the average absorbed doses at depths of 1, 2, 3, 4, 5 and 6 mm are also obtained with Cross's methods. They can be used for limiting the deterministic effects of some superficial tissues and organs, such as the skin and the components of the eyeball, for weakly penetrating radiations.
16. Notification: Follow-Up on OIG Report 12-P-0417, Weaknesses in EPA’s Management of the Radiation Network System Demand Attention
Science.gov (United States)
Project #OPE-FY14-0010, January 2, 2014. The EPA OIG is beginning preliminary research on the EPA's actions to address the recommendations in the Apr 19, 2012, OIG Report, Weaknesses in EPA's Management of the Radiation Network System Demand Attention.
17. Updated NNLO QCD predictions for the weak radiative B-meson decays
CERN Document Server
Misiak, M; Boughezal, R; Czakon, M; Ewerth, T; Ferroglia, A; Fiedler, P; Gambino, P; Greub, C; Haisch, U; Huber, T; Kaminski, M; Ossola, G; Poradzinski, M; Rehman, A; Schutzmeier, T; Steinhauser, M; Virto, J
2015-01-01
We perform an updated analysis of the inclusive weak radiative B-meson decays in the standard model, incorporating all our results for the O(α_s²) and lower-order perturbative corrections that have been calculated after 2006. New estimates of non-perturbative effects are taken into account, too. For the CP- and isospin-averaged branching ratios, we find B_{sγ} = (3.36 ± 0.23) × 10⁻⁴ and B_{dγ} = 1.73^{+0.12}_{-0.22} × 10⁻⁵, for E_γ > 1.6 GeV. These results remain in agreement with the current experimental averages. Normalizing their sum to the inclusive semileptonic branching ratio, we obtain R_γ = (B_{sγ} + B_{dγ})/B_{cℓν} = (3.31 ± 0.22) × 10⁻³. A new bound from B_{sγ} on the charged Higgs boson mass in the two-Higgs-doublet-model II reads M_{H⁺} > 480 GeV at 95% C.L.
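The numbers quoted above can be cross-checked: dividing the summed radiative branching ratios by the quoted ratio R_γ gives the implied semileptonic branching ratio, which should come out near the measured B → X_c ℓν value of roughly 10-11%. This is a reader's consistency check on the abstract's figures, not a calculation from the paper:

```python
# Central values quoted in the abstract
b_sgamma = 3.36e-4   # B(B -> X_s gamma), E_gamma > 1.6 GeV
b_dgamma = 1.73e-5   # B(B -> X_d gamma)
r_gamma = 3.31e-3    # R_gamma = (B_sgamma + B_dgamma) / B_clnu

b_sum = b_sgamma + b_dgamma        # summed radiative branching ratio
b_semileptonic = b_sum / r_gamma   # implied B(B -> X_c l nu)

print(f"implied semileptonic BR: {b_semileptonic:.3f}")  # -> about 0.107
```

The implied value of about 10.7% is indeed consistent with the measured inclusive semileptonic branching ratio, so the three quoted central values hang together.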
18. Weak transitions in ⁴⁴Ca
International Nuclear Information System (INIS)
Tauhata, L.; Marques, A.
1972-01-01
Energy levels and gamma radiation transitions of ⁴⁴Ca are experimentally determined, mainly the weak transitions at 564 keV and 728 keV. The decay scheme and the method used (coincidence with a Ge-Li detector) are also presented [pt]
19. A study of the ΔI = 1/2 rule in the weak decay of S-shell hypernuclei: BNL E931
International Nuclear Information System (INIS)
Gill, R.L.
2000-01-01
It is empirically observed that the non-leptonic decay of strange hadrons is enhanced when the change in isospin is 1/2. This is generalized in the ''ΔI = 1/2 rule'', which states that all such decays proceed predominantly through ΔI = 1/2 amplitudes. However, there is no definitive explanation for this apparently universal rule. Non-mesonic decay of Λ-hypernuclei can occur through the weak process ΛN → nN. When stimulated by a neutron, two neutrons are emitted from the nucleus, and when stimulated by a proton, a proton and a neutron are emitted. By measuring the relative decay widths (Γ_n/Γ_p) in the full set of s-shell hypernuclei, a sensitive test of the ΔI = 1/2 rule, and the determination of its applicability to non-mesonic decays, can be made. In addition, information about the spin-isospin dependence of the weak decay process can be extracted. A measurement of Γ_n/Γ_p to an accuracy of even 50% will be sufficient to address important issues relating to the ΔI = 1/2 rule and to the weak decay process. The experiment will measure the ratio Γ_n/Γ_p following the decay of ⁴ΛH, which is produced by a stopped K⁻ beam in a liquid helium target. The Neutral Meson Spectrometer will be used to identify stopped-kaon events by detection of the gamma rays that follow the decay of the emitted π⁰. Arrays of charged-particle and neutron detectors will measure the relative neutron and proton emission probabilities. An engineering run was performed in 1998, without the helium target, which demonstrated that the technique is feasible. The full experiment is scheduled at the Alternating Gradient Synchrotron for the spring 2001 running period
20. Anomalous leptonic U(1) symmetry: Syndetic origin of the QCD axion, weak-scale dark matter, and radiative neutrino mass
Science.gov (United States)
Ma, Ernest; Restrepo, Diego; Zapata, Óscar
2018-01-01
The well-known leptonic U(1) symmetry of the Standard Model (SM) of quarks and leptons is extended to include a number of new fermions and scalars. The resulting theory has an invisible QCD axion (thereby solving the strong CP problem), a candidate for weak-scale dark matter (DM), as well as radiative neutrino masses. A possible key connection is a color-triplet scalar, which may be produced and detected at the Large Hadron Collider.
1. Weak expression of cyclooxygenase-2 is associated with poorer outcome in endemic nasopharyngeal carcinoma: analysis of data from randomized trial between radiation alone versus concurrent chemo-radiation (SQNP-01)
International Nuclear Information System (INIS)
Loong, Susan Li Er; Hwang, Jacqueline Siok Gek; Li, Hui Hua; Wee, Joseph Tien Seng; Yap, Swee Peng; Chua, Melvin Lee Kiang; Fong, Kam Weng; Tan, Terence Wee Kiat
2009-01-01
Over-expression of the cyclooxygenase-2 (COX-2) enzyme has been reported in nasopharyngeal carcinoma (NPC). However, the prognostic significance of this has yet to be conclusively determined. Thus, from our randomized trial of radiation versus concurrent chemoradiation in endemic NPC, we analyzed a cohort of tumour samples collected from participants from one referral hospital. 58 out of 88 patients from this institution had samples available for analysis. COX-2 expression levels were stratified by immunohistochemistry into negligible, weak, moderate and strong, and correlated with overall and disease-specific survivals. 58% had negligible or weak COX-2 expression, while 14% and 28% had moderate and strong expression respectively. Weak COX-2 expression conferred a poorer median overall survival: 1.3 years for weak expression, versus 6.3 years for negligible, 7.8 years for strong, and not reached for moderate expression. There was a similar trend for disease-specific survival. Contrary to literature published on other malignancies, our findings seemed to indicate that over-expression of COX-2 confers a better prognosis in patients with endemic NPC. Larger studies are required to conclusively determine the significance of COX-2 expression in these patients
2. Strange hadron decays involving e+e- pairs
International Nuclear Information System (INIS)
Soyeur, M.
1996-01-01
A high resolution, large acceptance e⁺e⁻ detector like HADES coupled to intense secondary kaon beams could offer a remarkable opportunity to study at GSI both the electromagnetic and electroweak decays of strange hadrons. Such data can be very consistently interpreted using effective chiral Lagrangians based on the SU(3) x SU(3) symmetry. Of particular interest are a complete set of data on the electromagnetic form factors for the ρ, ω, φ and K* Dalitz decays, which would put very strong constraints on departures from ideal SU(3) mixings, and measurements of Dalitz decays of hyperons, whose electromagnetic structure is very much unknown. Better data on the nonleptonic radiative (e⁺e⁻) decays of kaons would be most useful to study the strangeness-changing weak currents and effects related to CP violation. Major progress in the understanding of these decays came recently from their description in chiral perturbation theory, where the chiral dynamics of Goldstone bosons is coupled to the weak and electromagnetic gauge fields. Those studies could be extended to the electroweak decays of hyperons. (author)
3. Detailed spectra of high power broadband microwave radiation from interactions of relativistic electron beams with weakly magnetized plasmas
International Nuclear Information System (INIS)
Kato, K.G.; Benford, G.; Tzach, D.
1983-01-01
Prodigious quantities of microwave energy are observed uniformly across a wide frequency band when a relativistic electron beam (REB) penetrates a plasma. Measurement calculations are illustrated. A model of Compton-like boosting of ambient plasma waves by beam electrons, with collateral emission of high frequency photons, qualitatively explains the spectra. A transition in spectral behavior is observed from the weak to strong turbulence theories advocated for Type III solar burst radiation, and further into the regime the authors characterize as super-strong REB-plasma interactions
4. J/ψ → D_{s,d}π, D_{s,d}K decays with the perturbative QCD approach
Science.gov (United States)
Sun, Junfeng; Yang, Yueling; Gao, Jie; Chang, Qin; Huang, Jinshu; Lu, Gongru
2016-08-01
Besides the conventional strong and electromagnetic decay modes, the J/ψ particle can also decay via the weak interaction in the standard model. In this paper, nonleptonic J/ψ → D_{s,d}π, D_{s,d}K weak decays, corresponding to the externally emitted virtual W boson process, are investigated with the perturbative QCD approach. It is found that the branching ratio for the Cabibbo-favored J/ψ → D_sπ decay can reach up to O(10^{-10}), which might be potentially measurable at the future high-luminosity experiments.
5. Weak interaction potentials of nucleons in the Weinberg-Salam model
International Nuclear Information System (INIS)
Lobov, G.A.
1979-01-01
Weak interaction potentials of nucleons due to the nonet vector meson exchange are obtained in the Weinberg-Salam model using the vector-meson dominance. Contribution from the hadronic neutral currents to the weak interaction potential due to the charged pion exchange is obtained. The isotopic structure of the obtained potentials, that is unambiguous in the Weinberg-Salam model, is investigated. Enhancement of the nucleon weak interaction in nuclei resulting from the hadronic neutral currents is discussed. A nuclear one-particle weak interaction potential is presented that is a result of averaging of the two-particle potential over the states of the nuclear core. An approach to the nucleon weak interaction based on the quark model, is discussed. Effects of the nucleon weak interaction in the radiative capture of a thermal neutron by a proton, are considered
6. Strange hadron decays involving e+e- pairs
Energy Technology Data Exchange (ETDEWEB)
Soyeur, M
1997-12-31
A high resolution, large acceptance e+e- detector like HADES coupled to intense secondary kaon beams could offer a remarkable opportunity to study at GSI both the electromagnetic and electroweak decays of strange hadrons. Such data can be very consistently interpreted using effective chiral Lagrangians based on the SU(3) x SU(3) symmetry. Of particular interest are a complete set of data on the electromagnetic form factors for the ρ, ω, φ and K* Dalitz decays, which would put very strong constraints on departures from ideal SU(3) mixings, and measurements of Dalitz decays of hyperons, whose electromagnetic structure is very much unknown. Better data on the nonleptonic radiative (e+e-) decays of kaons would be most useful to study the strangeness changing weak currents and effects related to CP violation. A major progress in the understanding of these decays came recently from their description in chiral perturbation theory, where the chiral dynamics of Goldstone bosons is coupled to the weak and electromagnetic gauge fields. Those studies could be extended to the electroweak decays of hyperons. (author). 42 refs.
7. Global view of PCAC
International Nuclear Information System (INIS)
1997-01-01
When combined with current algebra, the notion of partial conservation of axial currents (PCAC) is quite predictive. In fact, when this PCAC is extended to PCAC consistency for multiple pion or kaon states, the above procedure is in excellent agreement with data for strong, electromagnetic and nonleptonic weak interactions over a wider range of energies even above 1 GeV. Usually physicists are wary of invoking PCAC notions far from the soft-pion limit
8. A Cicerone for the Physics of Charm
OpenAIRE
Bianco, S.; Fabbri, F. L.; Benson, D.; Bigi, I.
2003-01-01
After briefly recapitulating the history of the charm quantum number we sketch the experimental environments and instruments employed to study the behaviour of charm hadrons and then describe the theoretical tools for treating charm dynamics. We discuss a wide range of inclusive production processes before analyzing the spectroscopy of hadrons with hidden and open charm and the weak lifetimes of charm mesons and baryons. Then we address leptonic, exclusive semileptonic and nonleptonic charm d...
9. Optimization of transmission-scan time for the FixER method: a MR-based PET attenuation correction with a weak fixed-position external radiation source
Energy Technology Data Exchange (ETDEWEB)
Kawaguchi, Hiroshi; Hirano, Yoshiyuki; Kershaw, Jeff; Yoshida, Eiji [Molecular Imaging Center, National Institute of Radiological Sciences, Chiba (Japan); Shiraishi, Takahiro [Molecular Imaging Center, National Institute of Radiological Sciences, Chiba (Japan); Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Chiba (Japan); Suga, Mikio [Molecular Imaging Center, National Institute of Radiological Sciences, Chiba (Japan); Center for Frontier Medical Engineering, Chiba University (Japan); Obata, Takayuki [Molecular Imaging Center, National Institute of Radiological Sciences, Chiba (Japan); Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Chiba (Japan); Ito, Hiroshi; Yamaya, Taiga [Molecular Imaging Center, National Institute of Radiological Sciences, Chiba (Japan)
2014-07-29
In recent work, we proposed an MRI-based attenuation-coefficient (μ-value) estimation method that uses a weak fixed-position external radiation source to construct an attenuation map for PET/MRI. In this presentation we refer to this method as FixER, and perform a series of simulations to investigate the duration of the transmission scan required to accurately estimate μ-values.
10. Optimization of transmission-scan time for the FixER method: a MR-based PET attenuation correction with a weak fixed-position external radiation source
International Nuclear Information System (INIS)
Kawaguchi, Hiroshi; Hirano, Yoshiyuki; Kershaw, Jeff; Yoshida, Eiji; Shiraishi, Takahiro; Suga, Mikio; Obata, Takayuki; Ito, Hiroshi; Yamaya, Taiga
2014-01-01
In recent work, we proposed an MRI-based attenuation-coefficient (μ-value) estimation method that uses a weak fixed-position external radiation source to construct an attenuation map for PET/MRI. In this presentation we refer to this method as FixER, and perform a series of simulations to investigate the duration of the transmission scan required to accurately estimate μ-values.
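The two records above describe the same scan-time study but quote no numbers. The statistical trade-off behind choosing a transmission-scan duration can still be sketched with a toy Monte Carlo; every value below (count rate, a water-like μ at 511 keV, a 20 cm path, and the Gaussian approximation to Poisson counting) is an illustrative assumption, not a parameter from the FixER work. The attenuation coefficient follows from the Beer-Lambert law, μ = ln(I₀/I)/L, and its statistical error shrinks like 1/√(scan time):

```python
import math
import random
import statistics

def estimate_mu(rate_open_cps, mu_true, path_cm, scan_s, rng):
    """Simulate one transmission scan and estimate mu via Beer-Lambert.

    Counts are drawn with a Gaussian approximation to Poisson noise,
    which is accurate for the large counts used here.
    """
    mean_open = rate_open_cps * scan_s                     # no object in beam
    mean_trans = mean_open * math.exp(-mu_true * path_cm)  # attenuated beam
    n_open = max(1.0, rng.gauss(mean_open, math.sqrt(mean_open)))
    n_trans = max(1.0, rng.gauss(mean_trans, math.sqrt(mean_trans)))
    return math.log(n_open / n_trans) / path_cm

rng = random.Random(1)
MU_WATER_511KEV = 0.096  # 1/cm, approximate value for water at 511 keV
short = [estimate_mu(1000.0, MU_WATER_511KEV, 20.0, 1.0, rng) for _ in range(2000)]
long_ = [estimate_mu(1000.0, MU_WATER_511KEV, 20.0, 100.0, rng) for _ in range(2000)]
# Noise on the mu estimate scales as 1/sqrt(scan time): ratio is roughly 10.
print(statistics.stdev(short) / statistics.stdev(long_))
```

Because precision grows only with the square root of the duration, the scan time has to be optimized against throughput rather than simply maximized, which is the trade-off the study addresses.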
11. The radiative decays $B \to V\gamma$ at next-to-leading order in QCD
CERN Document Server
Bosch, S W; Bosch, Stefan W.; Buchalla, Gerhard
2002-01-01
We provide a model-independent framework for the analysis of the radiative B-meson decays B -> K* gamma and B -> rho gamma. In particular, we give a systematic discussion of the various contributions to these exclusive processes based on the heavy-quark limit of QCD. We propose a novel factorization formula for the consistent treatment of B -> V gamma matrix elements involving charm (or up-quark) loops, which contribute at leading power in Lambda_QCD/m_B to the decay amplitude. Annihilation topologies are shown to be power suppressed. In some cases they are nevertheless calculable. The approach is similar to the framework of QCD factorization that has recently been formulated for two-body non-leptonic B decays. These results allow us, for the first time, to compute exclusive b -> s(d) gamma decays systematically beyond the leading logarithmic approximation. We present results for these decays complete to next-to-leading order in QCD and to leading order in the heavy-quark limit. Phenomenological implications ...
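The "novel factorization formula" mentioned in the abstract has, in the heavy-quark limit, the schematic structure familiar from the QCD-factorization literature: a form-factor term plus a hard-spectator convolution with the light-cone distribution amplitudes. The version below is reproduced schematically from that literature, so the notation should be taken as indicative rather than as the paper's exact expression:

```latex
% Schematic QCD-factorization formula for B -> V gamma in the heavy-quark
% limit: corrections to this form are suppressed by Lambda_QCD / m_b.
\langle V\gamma \,|\, Q_i \,|\, \bar B \rangle
  = F^{B\to V}(0)\, T^{\mathrm{I}}_i
  + \int_0^\infty \frac{d\omega}{\omega}\, \phi_B(\omega)
    \int_0^1 du\, \phi_\perp^V(u)\, T^{\mathrm{II}}_i(\omega, u)
```

Here $T^{\mathrm{I}}_i$ and $T^{\mathrm{II}}_i$ are perturbatively calculable hard-scattering kernels, while the form factor and the distribution amplitudes $\phi_B$, $\phi_\perp^V$ carry the non-perturbative input.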
12. Method and apparatus for evaluating structural weakness in polymer matrix composites
Science.gov (United States)
Wachter, Eric A.; Fisher, Walter G.
1996-01-01
A method and apparatus for evaluating structural weaknesses in polymer matrix composites is described. An object to be studied is illuminated with laser radiation and fluorescence emanating therefrom is collected and filtered. The fluorescence is then imaged and the image is studied to determine fluorescence intensity over the surface of the object being studied and the wavelength of maximum fluorescent intensity. Such images provide a map of the structural integrity of the part being studied and weaknesses, particularly weaknesses created by exposure of the object to heat, are readily visible in the image.
13. Cherenkov radiation
Energy Technology Data Exchange (ETDEWEB)
Hubert, P. [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires]
1955-07-01
When radioactivity was discovered, researchers observed that various materials such as mineral salts or solutions emitted a weak light when exposed to radioactive beams. At first this was thought to be fluorescence. In 1934, Cherenkov, a Russian physicist, studied the luminescence of uranyl salt solutions under gamma radiation and observed that a very weak light was also emitted by the pure liquid. After further study he concluded that this phenomenon was distinct from fluorescence; it has since been called the Cherenkov effect. This blue light is emitted when charged particles traverse a transparent medium faster than the phase velocity of light in that medium, which can happen only in media with a large refractive index, such as water or glass. The properties of the effect discovered afterwards are also presented. The various applications of Cherenkov radiation are discussed, such as counting techniques for radiation detectors or cosmic ray detectors. (M.P.)
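The emission condition in this abstract, a charged particle outrunning the phase velocity of light in the medium, translates into the threshold β > 1/n, i.e. γ > 1/√(1 − 1/n²). A small sketch using standard relativistic kinematics (textbook electron mass and water index, not material from the 1955 report):

```python
import math

ELECTRON_REST_ENERGY_MEV = 0.511  # m_e c^2

def cherenkov_threshold_kinetic_energy_mev(n, rest_energy_mev=ELECTRON_REST_ENERGY_MEV):
    """Minimum kinetic energy for Cherenkov emission in a medium of index n.

    Emission requires beta = v/c > 1/n, i.e. gamma > 1/sqrt(1 - 1/n**2).
    """
    if n <= 1.0:
        raise ValueError("Cherenkov emission requires refractive index n > 1")
    gamma_threshold = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
    return rest_energy_mev * (gamma_threshold - 1.0)

# Water (n ~ 1.33): electrons must carry roughly a quarter of an MeV
# of kinetic energy before they radiate.
print(round(cherenkov_threshold_kinetic_energy_mev(1.33), 3))  # 0.264
```

This threshold is why only fairly energetic particles produce the blue glow, and why denser media (larger n) lower the energy needed.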
14. Reconstructing weak values without weak measurements
International Nuclear Information System (INIS)
Johansen, Lars M.
2007-01-01
I propose a scheme for reconstructing the weak value of an observable without the need for weak measurements. The post-selection in weak measurements is replaced by an initial projector measurement. The observable can be measured using any form of interaction, including projective measurements. The reconstruction is effected by measuring the change in the expectation value of the observable due to the projector measurement. The weak value may take nonclassical values if the projector measurement disturbs the expectation value of the observable
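The "nonclassical values" referred to here are the standard Aharonov-Albert-Vaidman weak values, A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩ for a pre-selected |ψ⟩ and post-selected |φ⟩. A minimal sketch (illustrative qubit states chosen for this example, not taken from the paper) shows the weak value of σ_z escaping its eigenvalue range when the post-selection is nearly orthogonal to the pre-selection:

```python
import math

def weak_value(pre, post, operator):
    """Aharonov-Albert-Vaidman weak value A_w = <post|A|pre> / <post|pre>
    for 2-component state vectors and a 2x2 operator."""
    a_pre = [sum(operator[i][j] * pre[j] for j in range(2)) for i in range(2)]
    numerator = sum(post[i].conjugate() * a_pre[i] for i in range(2))
    denominator = sum(post[i].conjugate() * pre[i] for i in range(2))
    return numerator / denominator

SIGMA_Z = [[1.0, 0.0], [0.0, -1.0]]  # eigenvalues are +1 and -1

theta = math.radians(40.0)
pre = [math.cos(theta), math.sin(theta)]              # pre-selected state
post = [1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0)]  # nearly orthogonal post-selection
wv = weak_value(pre, post, SIGMA_Z)
print(round(wv, 2))  # 11.43 -- far outside the eigenvalue range [-1, +1]
```

Without post-selection (pre and post states equal) the same formula reduces to the ordinary expectation value, which stays inside [-1, +1]; the anomalous value comes entirely from the near-orthogonal post-selection.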
15. Non-leptonic kaon decays at large Nc
Science.gov (United States)
Donini, Andrea; Hernández, Pilar; Pena, Carlos; Romero-López, Fernando
2018-03-01
We study the scaling with the number of colors Nc of the weak amplitudes mediating kaon mixing and decay, in the limit of light charm masses (m_u = m_d = m_s = m_c). The amplitudes are extracted directly on the lattice for Nc = 3-7 (with preliminary results for Nc = 8 and 17) using twisted mass QCD. It is shown that the (sub-leading) 1/Nc corrections to $\hat{B}_K$ are small and that the naive Nc → ∞ limit, $\hat{B}_K = 3/4$, seems to be recovered. On the other hand, the O(1/Nc) corrections in K → ππ amplitudes (derived from K → π matrix elements) are large and fully anti-correlated in the I = 0 and I = 2 channels. This may have some implications for the understanding of the ΔI = 1/2 rule.
16. Modular overconstrained weak-link mechanism for ultraprecision motion control
International Nuclear Information System (INIS)
Shu Deming; Toellner, Thomas S.; Alp, Esen E.
2001-01-01
We have designed and constructed a novel miniature overconstrained weak-link mechanism that will allow positioning of two crystals with better than 50 nrad angular resolution and nanometer linear driving sensitivity. The precision and stability of this structure allow the user to align or adjust an assembly of crystals to achieve the same performance as does a single channel-cut crystal, so we call it an "artificial channel-cut crystal." Unlike the traditional kinematic linear spring mechanisms, the overconstrained weak-link mechanism provides much higher structure stiffness and stability. Using a laminar structure configured and manufactured by chemical etching and lithography techniques, we are able to design and build a planar-shape, high-stiffness, high-precision weak-link mechanism. In this paper, we present recent developments for the overconstrained weak-link mechanism. Applications of this new technique to synchrotron radiation instrumentation are also discussed
17. Nonlinear propagation of intense electromagnetic waves in weakly-ionized plasmas
International Nuclear Information System (INIS)
Shukla, P.K.
1993-01-01
The nonlinear propagation of intense electromagnetic waves in weakly-ionized plasmas is considered. Stimulated scattering mechanisms involving electromagnetic and acoustic waves in an unmagnetized plasma are investigated. The growth rate and threshold for three-wave decay interactions as well as modulational and filamentation instabilities are presented. Furthermore, the electromagnetic wave modulation theory is generalized for weakly ionized collisional magnetoplasmas. Here, the radiation envelope is generally governed by a nonlinear Schroedinger equation. Accounting for the dependence of the attachment frequency on the radiation intensity, ponderomotive force, as well as the differential Joule heating nonlinearity, the authors derive the equations for the nonthermal electron density and temperature perturbations. The various nonlinear terms in the electron motion are compared. The problems of self-focusing and wave localization are discussed. The relevance of the investigation to ionospheric modification by powerful electromagnetic waves is pointed out
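The "modulational and filamentation instabilities" and the nonlinear Schrödinger (NLS) envelope equation mentioned in this abstract have a standard textbook quantification: linearizing the focusing cubic NLS, i∂A/∂t + ½∂²A/∂x² + |A|²A = 0, about a plane wave of amplitude a₀ gives sidebands that grow at rate γ(k) = |k|√(a₀² − k²/4) inside the band |k| < 2a₀. The sketch below uses this generic NLS normalization, not the collisional-magnetoplasma coefficients of the paper:

```python
import math

def mi_growth_rate(k, a0):
    """Modulational-instability growth rate for the focusing cubic NLS
    i A_t + (1/2) A_xx + |A|^2 A = 0, linearized about a plane wave of
    amplitude a0: gamma(k) = |k| * sqrt(a0**2 - k**2/4) inside the
    unstable band |k| < 2*a0, and zero (stable sidebands) outside it."""
    discriminant = a0**2 - k**2 / 4.0
    if discriminant <= 0.0:
        return 0.0
    return abs(k) * math.sqrt(discriminant)

a0 = 1.0
k_max = math.sqrt(2.0) * a0                 # fastest-growing sideband
print(round(mi_growth_rate(k_max, a0), 6))  # peak growth rate = a0**2 = 1.0
print(mi_growth_rate(3.0, a0))              # outside the band: 0.0
```

The peak at k = √2·a₀ with rate a₀² is the generic signature of envelope self-modulation; in a real weakly-ionized plasma the coefficients (and hence the band) are set by the dispersion and nonlinearity of the medium.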
18. First observation and branching fraction and decay parameter measurements of the weak radiative decay $\Xi^0 \to \Lambda e^+ e^-$
CERN Document Server
Batley, J Richard; Lazzeroni, C; Munday, D J; Patel, M; Slater, M W; Wotton, S A; Arcidiacono, R; Bocquet, G; Ceccucci, A; Cundy, Donald C; Doble, N; Falaleev, V; Gatignon, L; Gonidec, A; Grafstrm, P; Kubischta, Werner; Mikulec, I; Norton, A; Panzer-Steindel, B; Rubin, P; Wahl, H; Goudzovski, E; Khristov, P Z; Kekelidze, V D; Litov, L; Madigozhin, D T; Molokanova, N A; Potrebenikov, Yu K; Stoynev, S; Zinchenko, A I; Monnier, E; Swallow, E; Winston, R; Sacco, R; Walker, A; Baldini, W; Dalpiaz, P; Frabetti, P L; Gianoli, A; Martini, M; Petrucci, F; Savrié, M; Scarpa, M; Bizzeti, A; Calvetti, M; Collazuol, G; Iacopini, E; Lenti, M; Veltri, M; Ruggiero, G; Behler, M; Eppard, K; Eppard, M; Hirstius, A; Kleinknecht, K; Koch, U; Marouelli, P; Masetti, L; Moosbrugger, U; Morales-Morales, C; Peters, A; Wanke, R; Winhart, A; Dabrowski, A; Fonseca-Martin, T; Velasco, M; Anzivino, Giuseppina; Cenci, P; Imbergamo, E; Lamanna, G; Lubrano, P; Michetti, A; Nappi, A; Pepé, M; Valdata, M; Petrucci, M C; Piccini, M; Cerri, C; Costantini, F; Fantechi, R; Fiorini, L; Giudici, S; Mannelli, I; Pierazzini, G M; Sozzi, M; Cheshkov, C; Chèze, J B; De Beer, M; Debu, P; Gouge, G; Marel, Gérard; Mazzucato, E; Peyaud, B; Vallage, B; Holder, M; Maier, A; Ziolkowski, M; Biino, C; Cartiglia, N; Clemencic, M; Goy-Lopez, S; Marchetto, F; Menichetti, E; Pastrone, N; Wislicki, W; Dibon, Heinz; Jeitler, Manfred; Markytan, Manfred; Neuhofer, G; Widhalm, L
2007-01-01
The weak radiative decay $\Xi^0 \to \Lambda e^+ e^-$ has been detected for the first time. We find 412 candidates in the signal region, with an estimated background of $15 \pm 5$ events. We determine the branching fraction $\mathcal{B}(\Xi^0 \to \Lambda e^+ e^-) = [7.6 \pm 0.4(\mathrm{stat}) \pm 0.4(\mathrm{syst}) \pm 0.2(\mathrm{norm})] \times 10^{-6}$, consistent with an internal bremsstrahlung process, and the decay asymmetry parameter $\alpha_{\Xi\Lambda ee} = -0.8 \pm 0.2$, consistent with that of $\Xi^0 \to \Lambda\gamma$. The charge conjugate reaction $\overline{\Xi^0} \to \overline{\Lambda} e^+ e^-$ has also been observed.
CERN Document Server
1996-01-01
This book is intended for scientists engaged in the measurement of weak alpha, beta, and gamma active samples; in health physics, environmental control, nuclear geophysics, tracer work, radiocarbon dating etc. It describes the underlying principles of radiation measurement and the detectors used. It also covers the sources of background, analyzes their effect on the detector and discusses economic ways to reduce the background. The most important types of low-level counting systems and the measurement of some of the more important radioisotopes are described here. In cases where more than one type can be used, the selection of the most suitable system is shown.
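One standard "economic" result from low-level counting statistics (a textbook relation, not a quotation from this book) concerns splitting a fixed measurement time between the sample run and the background-only run: both runs contribute Poisson variance to the net rate, so the optimal allocation is t_s/t_b = √(R_s/R_b). A sketch with hypothetical rates:

```python
import math

def optimal_time_split(total_s, rate_sample_cps, rate_bkg_cps):
    """Split a fixed total counting time between a sample(+background) run
    and a background-only run so that the variance of the net rate,
    var = rate_sample/t_s + rate_bkg/t_b, is minimized.

    Setting the derivative to zero gives t_s / t_b = sqrt(rate_sample / rate_bkg).
    """
    ratio = math.sqrt(rate_sample_cps / rate_bkg_cps)
    t_b = total_s / (1.0 + ratio)
    return total_s - t_b, t_b

# A weak sample: 0.9 cps with the sample in place, 0.4 cps background alone.
t_s, t_b = optimal_time_split(3600.0, 0.9, 0.4)
print(t_s, t_b)  # 2160.0 1440.0
```

With these (hypothetical) rates the optimum spends 60% of the hour on the sample run; an even split would give a slightly noisier net rate, which is exactly the kind of economy the book's background analysis is after.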
20. Factorization, the light-cone distribution amplitude of the B-meson and the radiative decay $B \to \gamma l \nu_l$
CERN Document Server
Descotes-Genon, S
2003-01-01
We study the radiative decay B -> gamma l nu_l in the framework of QCD factorization. We demonstrate explicitly that, in the heavy-quark limit and at one-loop order in perturbation theory, the amplitude does factorize, i.e. that it can be written as a convolution of a perturbatively calculable hard-scattering amplitude with the (non-perturbative) light-cone distribution amplitude of the B-meson. We evaluate the hard-scattering amplitude at one-loop order and verify that the large logarithms are those expected from a study of the b->u transition in the Soft-Collinear Effective Theory. Assuming that this is also the case at higher orders, we resum the large logarithms and perform an exploratory phenomenological analysis. The questions addressed in this study are also relevant for the applications of the QCD factorization formalism to two-body non-leptonic B-decays, in particular to the component of the amplitude arising from hard spectator interactions.
1. Current algebra formulation of radiative corrections in gauge theories and the universality of the weak interactions
Energy Technology Data Exchange (ETDEWEB)
Sirlin, A.
1978-07-01
A current algebra formulation of the radiative corrections in gauge theories, with special applications to the analysis of the universality of the weak interactions, is developed in the framework of quantum chromodynamics. For definiteness, we work in the SU(2) x U(1) model with four quark flavors, but the methods are quite general and can be applied to other theories. The explicit cancellation of ultraviolet divergences for arbitrary semileptonic processes is achieved relying solely on the Ward identities and general considerations, both in the W and Higgs sectors. The finite parts of order $G_F \alpha$
are then evaluated in the case of the superallowed Fermi transitions, including small effects proportional to $g_S^{-2}(\kappa^2)$, which are induced by the strong interactions in the asymptotic domain. We consider here both the simplest version of the Weinberg-Salam model in which the Higgs scalars transform as a single isospinor, as well as the case of general symmetry breaking. Except for the small effects proportional to $g_S^{-2}(\kappa^2)$, the results are identical to the answers previously found on the basis of heuristic arguments. The phenomenological verification of Cabibbo universality on the basis of these corrections and the superallowed Fermi transitions has been discussed before and found to be in very good agreement with present experimental evidence. The analogous calculation for the transition rate of pion β decay is given. Theoretical alternatives to quantum chromodynamics as a framework for the evaluation of the radiative corrections are briefly discussed. The appendixes contain a generalization of an important result in the theory of radiative corrections, an analysis of the hadronic contributions to the W and φ propagators, mathematical methods for evaluating the $g_S^{-2}(\kappa^2)$ corrections, and discussions of quark mass renormalization and the absence of operator ...
2. One loop electro-weak radiative corrections in the standard model
International Nuclear Information System (INIS)
Kalyniak, P.; Sundaresan, M.K.
1987-01-01
This paper reports on the effect of radiative corrections in the standard model. A sensitive test of the three gauge boson vertices is expected to come from the work in LEPII in which the reaction e + e - → W + W - can occur. Two calculations of radiative corrections to the reaction e + e - → W + W - exist at present. The results of the calculations although very similar disagree with one another as to the actual magnitude of the correction.
Some of the reasons for the disagreement are understood. However, due to the reasons mentioned below, another look must be taken at these lengthy calculations to resolve the differences between the two previous calculations. This is what is being done in the present work. There are a number of reasons why we must take another look at the calculation of the radiative corrections. The previous calculations were carried out before the UA1, UA2 data on W and Z bosons were obtained. Experimental groups require a computer program which can readily calculate the radiative corrections ab initio for various experimental conditions. The normalization of sin²θ_W in the previous calculations was done in a way which is not convenient for use in the experimental work. It would be desirable to have the analytical expressions for the corrections available so that the renormalization scheme dependence of the corrections could be studied.
3. Cherenkov radiation
International Nuclear Information System (INIS)
Hubert, P.
1955-01-01
When radioactivity was discovered, researchers observed that various materials such as mineral salts or solutions emitted a weak light when exposed to radioactive beams. At first this was thought to be fluorescence. In 1934, Cherenkov, a Russian physicist, studied the luminescence of uranyl salt solutions under gamma radiation and observed that a very weak light was also emitted by the pure liquid. After further study he concluded that this phenomenon was distinct from fluorescence; it has since been called the Cherenkov effect. This blue light is emitted when charged particles traverse a transparent medium faster than the phase velocity of light in that medium, which can happen only in media with a large refractive index, such as water or glass. The properties of the effect discovered afterwards are also presented.
The various applications of Cherenkov radiation are discussed, such as counting techniques for radiation detectors or cosmic ray detectors. (M.P.)
4. Weak measurements and quantum weak values for NOON states
Science.gov (United States)
Rosales-Zárate, L.; Opanchuk, B.; Reid, M. D.
2018-03-01
Quantum weak values arise when the mean outcome of a weak measurement made on certain preselected and postselected quantum systems goes beyond the eigenvalue range for a quantum observable. Here, we propose how to determine quantum weak values for superpositions of states with a macroscopically or mesoscopically distinct mode number, that might be realized as two-mode Bose-Einstein condensate or photonic NOON states. Specifically, we give a model for a weak measurement of the Schwinger spin of a two-mode NOON state, for arbitrary N. The weak measurement arises from a nondestructive measurement of the two-mode occupation number difference, which for atomic NOON states might be realized via phase contrast imaging and the ac Stark effect using an optical meter prepared in a coherent state. The meter-system coupling results in an entangled cat-state. By subsequently evolving the system under the action of a nonlinear Josephson Hamiltonian, we show how postselection leads to quantum weak values, for arbitrary N. Since the weak measurement can be shown to be minimally invasive, the weak values provide a useful strategy for a Leggett-Garg test of N-scopic realism.
5. Limitations on tests of quantum flavour dynamics from quark confinement
International Nuclear Information System (INIS)
Pietschmann, H.
1989-01-01
Quantum Flavour Dynamics is a theory of electroweak interactions. The Lagrangian is formulated for leptons and quarks. Since quarks are not directly accessible in experiment, predictions are model-dependent and the predictive power of the theory is limited.
In view of these limitations, QFD theory is formulated and confronted in several instances with experimental results: leptonic and semi-leptonic processes, non-leptonic decay processes and radiative decay processes. 17 refs. (qui)
6. Turbulence of Weak Gravitational Waves in the Early Universe
Science.gov (United States)
Galtier, Sébastien; Nazarenko, Sergey V.
2017-12-01
We study the statistical properties of an ensemble of weak gravitational waves interacting nonlinearly in a flat space-time. We show that the resonant three-wave interactions are absent and develop a theory for four-wave interactions in the reduced case of a 2.5+1 diagonal metric tensor. In this limit, where only plus-polarized gravitational waves are present, we derive the interaction Hamiltonian and consider the asymptotic regime of weak gravitational wave turbulence. Both direct and inverse cascades are found for the energy and the wave action, respectively, and the corresponding wave spectra are derived. The inverse cascade is characterized by a finite-time propagation of the metric excitations, a process similar to an explosive nonequilibrium Bose-Einstein condensation, which provides an efficient mechanism for ironing out small-scale inhomogeneities. The direct cascade leads to an accumulation of the radiation energy in the system. These processes might be important for understanding the early Universe where a background of weak nonlinear gravitational waves is expected.
7. On the possible existence of a long-lived strange dibaryon
International Nuclear Information System (INIS)
Kondratyuk, L.A.; Ral'chenko, Yu.V.; Vasilets, A.V.
1988-01-01
Using the QCD string model with spin-orbit coupling, the masses of strange S = -1 dibaryons are calculated. Possible existence of a long-lived state DB_S^- (with the lifetime much larger than τ_Σ) with the mass 2.03 GeV ≤ M ≤ M_Σ + M_N and the isospin I = 3/2 is predicted.
The weak nonleptonic and semileptonic decay widths of DB_S^- and its production cross section in the reaction π^- d → K^+ DB_S^- are calculated. The results are compared with the available experimental data.
8. Is CP violation maximal
International Nuclear Information System (INIS)
Gronau, M.
1984-01-01
Two ambiguities are noted in the definition of the concept of maximal CP violation. The phase convention ambiguity is overcome by introducing a CP violating phase in the quark mixing matrix U which is invariant under rephasing transformations. The second ambiguity, related to the parametrization of U, is resolved by finding a single empirically viable definition of maximal CP violation when assuming that U does not single out one generation. Considerable improvement in the calculation of nonleptonic weak amplitudes is required to test the conjecture of maximal CP violation. 21 references
9. Approximate |ΔI| = 1/2 rule in K → ππ decays from asymptotic quark-line diagram approach
International Nuclear Information System (INIS)
Terasaki, K.; Oneda, S.
1989-07-01
A general method which copes with both the long and short distance physics aspects of nonleptonic weak interactions is presented. First, the four-point decay amplitude can be expressed in terms of the three-point asymptotic matrix elements of the effective weak Hamiltonian H_w, taken between the on-mass-shell single-hadron states with infinite momenta. The study of these matrix elements in terms of the quark-lines in the infinite momentum frame reveals that, for the K → ππ decays, those involving only the ordinary (QQ̄) mesons do satisfy the strict |ΔI| = 1/2 rule. However, the contribution of the (QQ)(Q̄Q̄) type exotic mesons leads explicitly to a small violation of the selection rule. (author)
10. ΔT = 1/2 rule in quark models with unconfined colour
International Nuclear Information System (INIS)
Arbuzov, B.A.; Kompaneetz, F.F.; Tikhonin, F.F.
1977-01-01
In the triplet quark model with unconfined colour a weak hadronic current is obtained with the following properties: a) it satisfies the weak SU(2) algebra; b) the neutral current is completely diagonal and coincides with the electromagnetic one in its quark structure; c) the "white" part of the current possesses the properties of the Cabibbo current. The properties of the "white" part of the nonleptonic Lagrangian derived from this current are: a) between the coefficients of the transition amplitudes ΔT = 1/2 and ΔT = 3/2 there is a ratio of approximately 25, corresponding to experiment; b) there are no transitions ΔS = 2; c) the values for the transitions ΔT = 0, 1, 2 of the Lagrangian without changes of strangeness are compatible with each other.
11. Fast-Acquisition/Weak-Signal-Tracking GPS Receiver for HEO
Science.gov (United States)
Winternitz, Luke; Boegner, Greg; Sirotzky, Steve
2004-01-01
A report discusses the technical background and design of the Navigator Global Positioning System (GPS) receiver, a radiation-hardened receiver intended for use aboard spacecraft. Navigator is capable of weak-signal acquisition and tracking as well as much faster acquisition of strong or weak signals with no a priori knowledge or external aiding. Weak-signal acquisition and tracking enables GPS use in high Earth orbits (HEO), and fast acquisition allows the receiver to remain without power until needed in any orbit. Signal acquisition and signal tracking are, respectively, the processes of finding and demodulating a signal. Acquisition is the more computationally difficult process. Previous GPS receivers employ the method of sequentially searching the two-dimensional signal parameter space (code phase and Doppler). Navigator exploits properties of the Fourier transform in a massively parallel search for the GPS signal.
This method results in far faster acquisition times [in the lab, 12 GPS satellites have been acquired with no a priori knowledge in a low-Earth-orbit (LEO) scenario in less than one second]. Modeling has shown that Navigator will be capable of acquiring signals down to 25 dB-Hz, appropriate for HEO missions. Navigator is built using the radiation-hardened ColdFire microprocessor, with the most computationally intense functions housed in dedicated field-programmable gate arrays. The high performance of the algorithm and of the receiver as a whole is made possible by optimizing computational efficiency and carefully weighing tradeoffs among the sampling rate, data format, and data-path bit width.
12. QCD in heavy quark production and decay
Energy Technology Data Exchange (ETDEWEB)
Wiss, J. [Univ. of Illinois, Urbana, IL (United States)]
1997-06-01
The author discusses how QCD is used to understand the physics of heavy quark production and decay dynamics. His discussion of production dynamics primarily concentrates on charm photoproduction data which are compared to perturbative QCD calculations which incorporate fragmentation effects. He begins his discussion of heavy quark decay by reviewing data on charm and beauty lifetimes. Present data on fully leptonic and semileptonic charm decay are then reviewed. Measurements of the hadronic weak current form factors are compared to the nonperturbative QCD-based predictions of Lattice Gauge Theories. He next discusses polarization phenomena present in charmed baryon decay. Heavy Quark Effective Theory predicts that the daughter baryon will recoil from the charmed parent with nearly 100% left-handed polarization, which is in excellent agreement with present data. He concludes by discussing nonleptonic charm decay which is traditionally analyzed in a factorization framework applicable to two-body and quasi-two-body nonleptonic decays.
This discussion emphasizes the important role of final state interactions in influencing both the observed decay width of various two-body final states as well as modifying the interference between interfering resonance channels which contribute to specific multibody decays. 50 refs., 77 figs. 14. Weak interaction in a three nucleon system: search for an asymmetry in radiative capture n-d International Nuclear Information System (INIS) Avenier, M.
1982-01-01 Experimental determination of the weak interaction rate in a three-nucleon neutron-deuteron system: this weak interaction is observed through pseudoscalar parameters such as the asymmetric angular distribution of the capture photon in relation to the system polarization. Orientation of the system is achieved by use of a polarized cold neutron beam. This phenomenon is explained as a result of weak coupling between nucleons and mesons. Measurements of the gamma asymmetries observed when tests are conducted with or without heavy water and effects of depolarization are discussed [fr 15. Electroweak radiative corrections to parity-violating electroexcitation of the Δ International Nuclear Information System (INIS) Zhu Shilin; Sacco, G.; Maekawa, C.M.; Holstein, B. R.; Ramsey-Musolf, M.J. 2002-01-01 We analyze the degree to which parity-violating (PV) electroexcitation of the Δ(1232) resonance may be used to extract the weak neutral axial vector transition form factors. We find that the axial vector electroweak radiative corrections are large and theoretically uncertain, thereby modifying the nominal interpretation of the PV asymmetry in terms of the weak neutral form factors. We also show that, in contrast with the situation for elastic electron scattering, the axial N→Δ PV asymmetry does not vanish at the photon point, as a consequence of a new term entering the radiative corrections. We argue that an experimental determination of these radiative corrections would be of interest for hadron structure theory, possibly shedding light on the violation of Hara's theorem in weak radiative hyperon decays 16. Measurement of MOS current mismatch in the weak inversion region International Nuclear Information System (INIS) Forti, F.; Wright, M.E. 1994-01-01 The MOS transistor matching properties in the weak inversion region have not received, in the past, the attention that the mismatch in the strong inversion region has.
The importance of weak-inversion-biased transistors in low power CMOS analog systems calls for more extensive data on the mismatch in this region of operation. The study presented in this paper was motivated by the need of controlling the threshold matching in a low power, low noise amplifier discriminator circuit used in a silicon radiation detector read-out, where both the transistor dimensions and the currents had to be kept to a minimum. The authors have measured the current matching properties of MOS transistors operated in the weak inversion region. They measured a total of about 1,400 PMOS and NMOS transistors produced in four different processes and report here the results in terms of mismatch dependence on current density, device dimensions, and substrate voltage, without using any specific model for the transistor 17. Weak interactions International Nuclear Information System (INIS) Ogava, S.; Savada, S.; Nakagava, M. 1983-01-01 The use of weak interaction laws to study models of elementary particles is discussed. The most typical example of weak interaction is the beta-decay of nucleons and muons. The beta-interaction is represented by quark currents in the form of a universal interaction of the V-A type. The universality of weak interactions is well confirmed using as examples the e- and μ-channels of pion decay. The hypothesis of partial conservation of the axial current is applicable to the analysis of processes with pion participation. In the framework of the model with four flavours, lepton decays of hadrons are considered. Weak interactions without lepton participation are also considered. Properties of neutral currents are described briefly 18. Weakly clopen functions International Nuclear Information System (INIS) Son, Mi Jung; Park, Jin Han; Lim, Ki Moon 2007-01-01 We introduce a new class of functions called weakly clopen functions, which includes the class of almost clopen functions due to Ekici [Ekici E.
Generalization of perfectly continuous, regular set-connected and clopen functions. Acta Math Hungar 2005;107:193-206] and is included in the class of weakly continuous functions due to Levine [Levine N. A decomposition of continuity in topological spaces. Am Math Mon 1961;68:44-6]. Some characterizations and several properties concerning weak clopenness are obtained. Furthermore, relationships among weak clopenness, almost clopenness, clopenness and weak continuity are investigated 19. A weak magnetic field inhibits hippocampal neurogenesis in SD rats Science.gov (United States) Zhang, B.; Tian, L.; Cai, Y.; Pan, Y. 2017-12-01 The geomagnetic field is an important barrier that protects life forms on Earth from the solar wind and radiation. Paleomagnetic data have well demonstrated that the strength of the ancient geomagnetic field was dramatically weakened during polarity transitions. Accumulating evidence has shown that weak magnetic field exposure has serious adverse effects on metabolism and behavior in organisms. Hippocampal neurogenesis occurs throughout life in mammalian brains and plays a key role in brain function; it can be influenced by an animal's age as well as environmental factors, but few studies have examined its response to a weak magnetic field. In the present study, we have investigated the effects of a weak magnetic field on hippocampal neurogenesis in adult Sprague Dawley (SD) rats. Two types of magnetic fields were used: a weak magnetic field (≤1.3 μT) and the geomagnetic field (51 μT); the latter is treated as a control condition. SD rats were exposed to the weak magnetic field for up to 6 weeks. We measured the changes in proliferation and survival of newborn nerve cells, immature neurons, neurons, and apoptosis in the dentate gyrus (DG) of the hippocampus in SD rats.
Results showed that the weak magnetic field (≤1.3 μT) inhibited neural stem cell proliferation and significantly reduced the survival of newborn nerve cells, immature neurons and neurons after 2 or 4 weeks of continuous treatment (i.e. exposure to the weak magnetic field). Moreover, apoptosis tests indicated that the weak magnetic field can promote apoptosis of nerve cells in the hippocampus after 4 weeks of treatment. Together, our new data indicate that a weak magnetic field decreases adult hippocampal neurogenesis by inhibiting neural stem cell proliferation and promoting apoptosis, which provides useful experimental constraints for better understanding the mechanism of the linkage between life and the geomagnetic field. 20. 1/M corrections to baryonic form factors in the quark model International Nuclear Information System (INIS) Cheng, H.; Tseng, B. 1996-01-01 Weak current-induced baryonic form factors at zero recoil are evaluated in the rest frame of the heavy parent baryon using the nonrelativistic quark model. Contrary to previous similar work in the literature, our quark model results do satisfy the constraints imposed by heavy quark symmetry for heavy-heavy baryon transitions at the symmetric point v·v'=1 and are in agreement with the predictions of the heavy quark effective theory for antitriplet-antitriplet heavy baryon form factors at zero recoil evaluated to order 1/m_Q. Furthermore, the quark model approach has the merit that it is applicable to any heavy-heavy and heavy-light baryonic transitions at maximum q². Assuming a dipole q² behavior, we have applied the quark model form factors to nonleptonic, semileptonic, and weak radiative decays of the heavy baryons. It is emphasized that the flavor suppression factor occurring in many heavy-light baryonic transitions, which is unfortunately overlooked in most literature, is very crucial towards an agreement between theory and experiment for the semileptonic decay Λ_c → Λ e⁺ ν_e.
Predictions for the decay modes Λ_b → J/ψ Λ, Λ_c → pφ, Λ_b → Λγ, Ξ_b → Ξγ, and for the semileptonic decays of Λ_b, Ξ_{b,c}, and Ω_b are presented. copyright 1996 The American Physical Society 1. Molecular epidemiology of radiation-induced carcinogenesis International Nuclear Information System (INIS) Trosko, J.E. 1996-01-01 The role of ionizing radiation in carcinogenesis is discussed. Every cell contains proto-oncogenes, which if damaged may lead to cell transformation. Every cell also contains tumor suppressor genes, which guard against transformation. Thus, transformation would seem to require a double injury to the DNA in a cell. Ionizing radiation is known to be a relatively weak mutagen, but a good clastogen (inducer of chromosome breaks, deletions and rearrangements). Ionizing radiation may therefore be a 'promoter' of cancer, i.e. a stimulant of the clonal expansion of transformed cells, if it kills enough cells to induce compensatory hyperplasia, i.e. rapid growth of cells. Ionizing radiation may be a 'progressor' if it deactivates tumor suppressor genes tending to suppress the growth of existing clones of transformed cells resulting from any of numerous causes. It may therefore be an oversimplification to say that radiation causes cancer; rather, it seems to be a weak initiator, an indirect promoter, and a late-stage progressor. 2 figs 2. PLASMA EMISSION BY WEAK TURBULENCE PROCESSES Energy Technology Data Exchange (ETDEWEB) Ziebell, L. F.; Gaelzer, R. [Instituto de Física, UFRGS, Porto Alegre, RS (Brazil); Yoon, P. H. [Institute for Physical Science and Technology, University of Maryland, College Park, MD (United States); Pavan, J., E-mail: [email protected] [Instituto de Física e Matemática, UFPel, Pelotas, RS (Brazil) 2014-11-10 The plasma emission is the radiation mechanism responsible for solar type II and type III radio bursts.
The first theory of plasma emission was put forth in the 1950s, but a rigorous demonstration of the process based upon first principles had been lacking. The present Letter reports the first complete numerical solution of the electromagnetic weak turbulence equations. It is shown that the fundamental emission is dominant and, unless the beam speed is substantially higher than the electron thermal speed, the harmonic emission is not likely to be generated. The present findings may be useful for validating reduced models and for interpreting particle-in-cell simulations. 3. Weak value controversy Science.gov (United States) Vaidman, L. 2017-10-01 Recent controversy regarding the meaning and usefulness of weak values is reviewed. It is argued that in spite of recent statistical arguments by Ferrie and Combes, experiments with anomalous weak values provide useful amplification techniques for precision measurements of small effects in many realistic situations. The statistical nature of weak values is questioned. Although measuring weak values requires an ensemble, it is argued that the weak value, similarly to an eigenvalue, is a property of a single pre- and post-selected quantum system. This article is part of the themed issue 'Second quantum revolution: foundational questions'. 4. Radiative corrections of semileptonic hyperon decays Pt. 1 International Nuclear Information System (INIS) Margaritisz, T.; Szegoe, K.; Toth, K. 1982-07-01 The beta decay of free quarks is studied in the framework of the standard SU(2) x U(1) model of weak and electromagnetic interactions. The so-called 'weak' part of the radiative corrections is evaluated to order α in one-loop approximation, using a renormalization scheme which adjusts the counter terms to the electric charge and to the masses of the charged and neutral vector bosons, M_W and M_Z, respectively.
The obtained result is, to a good approximation, equal to the 'weak' part of the radiative corrections for the semileptonic decay of any hyperon. It is shown in the model that the methods which work excellently in the case of the 'weak' corrections do not, in general, provide us with the dominant part of the 'photonic' corrections. (author) 5. Theoretical interest in B-Meson physics at the B factories, Tevatron and the LHC International Nuclear Information System (INIS) Ali, A. 2007-12-01 We review the salient features of B-meson physics, with particular emphasis on the measurements carried out at the B-factories and Tevatron, theoretical progress in understanding these measurements in the context of the standard model, and anticipation at the LHC. Topics discussed specifically are the current status of the Cabibbo-Kobayashi-Maskawa matrix, the CP-violating phases, rare radiative and semileptonic decays, and some selected non-leptonic two-body decays of the B mesons. (orig.) 7. Weak Acid Ionization Constants and the Determination of Weak Acid-Weak Base Reaction Equilibrium Constants in the General Chemistry Laboratory Science.gov (United States) Nyasulu, Frazier; McMills, Lauren; Barlag, Rebecca 2013-01-01 A laboratory to determine the equilibrium constants of weak acid-weak base reactions is described.
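The determination described in the entry above rests on combining component equilibrium constants into the constant of the overall weak acid-weak base reaction. A quick illustration (the species and constants below, acetic acid plus ammonia, are values we assume for the example, not data from the paper):

```python
# Component reactions and their constants:
#   HA        <=> H+ + A-       Ka   (weak acid ionization)
#   B + H2O   <=> HB+ + OH-     Kb   (weak base hydrolysis)
#   H+ + OH-  <=> H2O           1/Kw (reverse of water autoionization)
# Summing them gives HA + B <=> A- + HB+, so K = Ka * Kb / Kw.
Ka = 1.8e-5   # acetic acid ionization constant (assumed)
Kb = 1.8e-5   # ammonia base hydrolysis constant (assumed)
Kw = 1.0e-14  # autoionization of water at 25 °C
K = Ka * Kb / Kw
print(f"{K:.3g}")  # 3.24e+04: the weak acid-weak base reaction lies far to the right
```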
The equilibrium constants of component reactions when multiplied together equal the numerical value of the equilibrium constant of the summative reaction. The component reactions are weak acid ionization reactions, weak base hydrolysis… 8. Recovering information of tunneling spectrum from weakly isolated horizon Energy Technology Data Exchange (ETDEWEB) Chen, Ge-Rui; Huang, Yong-Chang [Beijing University of Technology, Institute of Theoretical Physics, Beijing (China) 2015-02-01 In this paper we investigate the properties of the tunneling spectrum from a weakly isolated horizon (WIH) - a locally defined black hole. We find that there exist correlations among Hawking radiations from a WIH, that information can be carried out by such correlations, and that the radiation is an entropy-conserving process. Through revisiting the calculation of the tunneling spectrum from a WIH, we find that Zhang et al.'s (Ann Phys 326:350, 2011) requirement that radiated particles have the same angular momenta of a unit mass as that of the black hole is unnecessary, and the energy and angular momenta of the emitted particles are largely arbitrary, restricted only by preserving the cosmic censorship hypothesis of black holes. So we resolve the information loss paradox based on the method of Zhang et al. (Phys Lett B 675:98, 2009; Ann Phys 326:350, 2011; Int J Mod Phys D 22:1341014, 2013) in a general case. (orig.) 9.
Charged-particle multiplicities in B-meson decay International Nuclear Information System (INIS) Alam, M.S.; Csorna, S.E.; Fridman, A.; Hicks, R.G.; Panvini, R.S.; Andrews, D.; Avery, P.; Berkelman, K.; Cabenda, R.; Cassel, D.G.; DeWire, J.W.; Ehrlich, R.; Ferguson, T.; Gilchriese, M.G.D.; Gittelman, B.; Hartill, D.L.; Herrup, D.; Herzlinger, M.; Holzner, S.; Kandaswamy, J.; Kreinick, D.L.; Mistry, N.B.; Morrow, F.; Nordberg, E.; Perchonok, R.; Plunkett, R.; Silverman, A.; Stein, P.C.; Stone, S.; Weber, D.; Wilcke, R.; Sadoff, A.J.; Bebek, C.; Haggerty, J.; Hempstead, M.; Izen, J.M.; Loomis, W.A.; MacKay, W.W.; Pipkin, F.M.; Rohlf, J.; Tanenbaum, W.; Wilson, R.; Chadwick, K.; Chauveau, J.; Ganci, P.; Gentile, T.; Kagan, H.; Kass, R.; Melissinos, A.C.; Olsen, S.L.; Poling, R.; Rosenfeld, C.; Rucinski, G.; Thorndike, E.H.; Green, J.; Sannes, F.; Skubic, P.; Snyder, A.; Stone, R.; Brody, A.; Chen, A.; Goldberg, M.; Horwitz, N.; Lipari, P.; Kooy, H.; Moneti, G.C.; Pistilli, P. 1982-01-01 The charged multiplicity has been measured at the ϒ(4S), and a value of 5.75 ± 0.1 ± 0.2 has been obtained for the mean charged multiplicity in B-meson decay. Combining this result with the measurement of prompt leptons from B decay, the values 4.1 ± 0.35 ± 0.2 and 6.3 ± 0.2 ± 0.2 are found for the semileptonic and nonleptonic charged multiplicities, respectively. If b→c dominance is assumed for the weak decay of the B meson, then the semileptonic multiplicity is consistent with the recoil mass determined from the lepton momentum spectrum 10. The Problem of Weak Governments and Weak Societies in Eastern Europe Directory of Open Access Journals (Sweden) Marko Grdešić 2008-01-01 Full Text Available This paper argues that, for Eastern Europe, the simultaneous presence of weak governments and weak societies is a crucial obstacle which must be faced by analysts and reformers.
The understanding of other normatively significant processes will be deficient without a consciousness-raising deliberation on this problem and its implications. This paper seeks to articulate the "relational" approach to state and society. In addition, the paper lays out a typology of possible patterns of relationship between state and society, depending on whether the state is weak or strong and whether society is weak or strong. Comparative data are presented in order to provide empirical support for the theses. Finally, the paper outlines two reform approaches which could enable breaking the vicious circle emerging in the context of weak governments and weak societies. 11. Weak scale from the maximum entropy principle Science.gov (United States) Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu 2015-03-01 The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN² / (M_Pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_Pl is the Planck mass. 12. Electro-weak theory International Nuclear Information System (INIS) Deshpande, N.G. 1980-01-01 By electro-weak theory is meant the unified field theory that describes both weak and electro-magnetic interactions.
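The order-of-magnitude relation v_h ~ T_BBN² / (M_Pl y_e⁵) quoted in entry 11 above can be checked numerically. The inputs below are rough values chosen by us for the check (the BBN temperature in particular is only order-of-magnitude), not numbers taken from the paper:

```python
from math import sqrt

# Rough numerical check of v_h ~ T_BBN**2 / (M_Pl * y_e**5), GeV units.
m_e   = 0.000511                 # electron mass
v_obs = 246.0                    # observed Higgs vacuum expectation value
y_e   = sqrt(2.0) * m_e / v_obs  # electron Yukawa coupling, ~2.9e-6
M_Pl  = 1.22e19                  # Planck mass
T_BBN = 1.0e-3                   # ~1 MeV, where Big Bang nucleosynthesis starts
v_est = T_BBN**2 / (M_Pl * y_e**5)
print(f"v_h ~ {v_est:.0f} GeV")  # a few hundred GeV, i.e. the O(300 GeV) quoted above
```

With these rough inputs the estimate lands within a factor of a few of the observed 246 GeV, which is all the scaling relation claims.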
The development of a unified electro-weak theory is certainly the most dramatic achievement in theoretical physics to occur in the second half of this century. It puts weak interactions on the same sound theoretical footing as quantum electrodynamics. Many theorists have contributed to this development, which culminated in the works of Glashow, Weinberg and Salam, who were jointly awarded the 1979 Nobel Prize in physics. Some of the important ideas that contributed to this development are the theory of beta decay formulated by Fermi, parity violation suggested by Lee and Yang and incorporated into the immensely successful V-A theory of weak interactions by Sudarshan and Marshak. At the same time, ideas of gauge invariance were applied to weak interactions by Schwinger, Bludman and Glashow. Weinberg and Salam then went one step further and wrote a theory that is renormalizable, i.e., all higher order corrections are finite, no mean feat for a quantum field theory. The theory had to await the development of the quark model of hadrons for its completion. A description of the electro-weak theory is given 13. Weak decays International Nuclear Information System (INIS) Wojcicki, S. 1978-11-01 Lectures are given on weak decays from a phenomenological point of view, emphasizing new results and ideas and the relation of recent results to the new standard theoretical model. The general framework within which the weak decay is viewed and relevant fundamental questions, weak decays of noncharmed hadrons, decays of muons and the tau, and the decays of charmed particles are covered. Limitation is made to the discussion of those topics that either have received recent experimental attention or are relevant to the new physics. (JFP) 178 references 14. Weak currents International Nuclear Information System (INIS) Leite Lopes, J.
1976-01-01 A survey of the fundamental ideas on weak currents, such as CVC and PCAC, and a presentation of the Cabibbo current and the neutral weak currents according to the Salam-Weinberg model and the Glashow-Iliopoulos-Maiani model are given [fr 15. Homological properties of modules with finite weak injective and weak flat dimensions OpenAIRE Zhao, Tiwei 2017-01-01 In this paper, we define a class of relative derived functors in terms of left or right weak flat resolutions to compute the weak flat dimension of modules. Moreover, we investigate two classes of modules larger than that of weak injective and weak flat modules, study the existence of covers and preenvelopes, and give some applications. 16. Weak interaction studies from nuclear beta decay International Nuclear Information System (INIS) Morita, M. 1981-01-01 The studies performed at the theoretical nuclear physics division of the Laboratory of Nuclear Studies, Osaka University, are reported. Electron spin density and the internal conversion process, nuclear excitation by electron transition, beta decay, the weak charged current, and beta-ray angular distributions in oriented nuclei have been studied. The relative intensity of internal conversion electrons was calculated for the case in which the radial wave functions of orbital electrons differ for electron spin up and down. The calculated value was in good agreement with the experimental one. The nuclear excitation following the transition of orbital electrons was studied. The calculated probability of the nuclear excitation of ¹⁸⁹Os was 1.4 × 10⁻⁷, in conformity with the experimental value of 1.7 × 10⁻⁷. The second class current and other problems in beta-decay have been extensively studied, and are described elsewhere. Concerning the weak charged current, the effects of all induced terms, the time component of the main axial vector, all partial waves of leptons, the Coulomb correction for the electrons in finite size nuclei, and the radiative correction were studied.
The beta-ray angular distribution for the 1⁺ → 0⁺ transition in oriented ¹²B and ¹²N was investigated. In this connection, an investigation of weak magnetism, including all higher order corrections in the evaluation of the spectral shape factors, was performed. Other works carried out by the author and his collaborators are also explained. (Kato, T.) 17. Communication strengths and weaknesses of radiation protection professionals in the United States and Canada International Nuclear Information System (INIS) Johnson, R.H.; Petcovic, W.L.; Alexander, R.E. 1988-01-01 Effective health risk communication may well determine the future of peaceful applications of nuclear technology and the social acceptance of risks from radiation in medicine, research, and industry. However, radiation protection professionals who know how to quantify risks and provide appropriate safeguards have historically encountered great difficulties in communicating their risk perspectives to the concerned public. In the United States, organisations such as the Health Physics Society and the American Nuclear Society have traditionally attributed communication difficulties to the public's lack of technical understanding. This has led to the belief that if the public could be provided sufficient information or education, they would understand radiation issues and their concerns about radiation risks would be resolved. Consequently, these national organisations have established public information programs and speaker bureaus. These programs primarily focus on presentation of technically accurate data and attempt to foster understanding of radiation by analogies with background radiation or other sources of risks commonly accepted by society. This paper shows that such public information programs can at their best reach only about 25% of the general public.
These programs could greatly enhance their effectiveness by learning the different ways that radiation professionals and the general public prefer to gather data and make decisions 18. Tidal radiation International Nuclear Information System (INIS) Mashhoon, B. 1977-01-01 The general theory of tides is developed within the framework of Einstein's theory of gravitation. It is based on the concept of the Fermi frame and the associated notion of a tidal frame along an open curve in spacetime. Following the previous work of the author, an approximate scheme for the evaluation of tidal gravitational radiation is presented which is valid for weak gravitational fields. The emission of gravitational radiation from a body in the field of a black hole is discussed, and for some cases of astrophysical interest estimates are given for the contributions of radiation due to center-of-mass motion, purely tidal deformation, and the interference between the center-of-mass and tidal motions 19. Parametric Cherenkov radiation (development of idea) International Nuclear Information System (INIS) Buts, V.A. 2004-01-01 Some physical results of research on the radiation of charged particles in media with periodic inhomogeneity and in periodic potentials are reported. The development of the ideas of parametric Cherenkov radiation has shown that in media with even a weak degree of periodic inhomogeneity of the permittivity or potential, nonrelativistic oscillators can radiate as relativistic ones do. They effectively radiate high harmonics. In particular, in the experiments carried out, ultraviolet radiation was excited by the action of intense ten-centimetre radiation on a crystal. These results give reason to hope for the creation of nonrelativistic free-electron lasers. 20. Hartman effect and weak measurements that are not really weak International Nuclear Information System (INIS) Sokolovski, D.; Akhmatskaya, E.
2011-01-01 We show that in wave packet tunneling, localization of the transmitted particle amounts to a quantum measurement of the delay it experiences in the barrier. With no external degree of freedom involved, the envelope of the wave packet plays the role of the initial pointer state. Under tunneling conditions such "self-measurement" is necessarily weak, and the Hartman effect just reflects the general tendency of weak values to diverge as postselection in the final state becomes improbable. We also demonstrate that it is a good-precision, or "not really weak", quantum measurement: no matter how wide the barrier d, it is possible to transmit a wave packet with a width σ small compared to the observed advancement. As is the case with all weak measurements, the probability of transmission rapidly decreases with the ratio σ/d. 1. Weak KAM theory for a weakly coupled system of Hamilton–Jacobi equations KAUST Repository Figalli, Alessio; Gomes, Diogo A.; Marcon, Diego 2016-01-01 Here, we extend the weak KAM and Aubry–Mather theories to optimal switching problems. We consider three issues: the analysis of the calculus of variations problem, the study of a generalized weak KAM theorem for solutions of weakly coupled systems of Hamilton–Jacobi equations, and the long-time behavior of time-dependent systems. We prove the existence and regularity of action minimizers, obtain necessary conditions for minimality, extend Fathi’s weak KAM theorem, and describe the asymptotic limit of the generalized Lax–Oleinik semigroup. © 2016, Springer-Verlag Berlin Heidelberg. 3. Dark-Matter Particles without Weak-Scale Masses or Weak Interactions International Nuclear Information System (INIS) Feng, Jonathan L.; Kumar, Jason 2008-01-01 We propose that dark matter is composed of particles that naturally have the correct thermal relic density, but have neither weak-scale masses nor weak interactions. These models emerge naturally from gauge-mediated supersymmetry breaking, where they elegantly solve the dark-matter problem. The framework accommodates single or multiple component dark matter, dark-matter masses from 10 MeV to 10 TeV, and interaction strengths from gravitational to strong. These candidates enhance many direct and indirect signals relative to weakly interacting massive particles and have qualitatively new implications for dark-matter searches and cosmological implications for colliders 4. Solar and infrared radiation measurements CERN Document Server Vignola, Frank; Michalsky, Joseph 2012-01-01 The rather specialized field of solar and infrared radiation measurement has become more and more important in the face of growing demands by the renewable energy and climate change research communities for data that are more accurate and have increased temporal and spatial resolution. Updating decades of acquired knowledge in the field, Solar and Infrared Radiation Measurements details the strengths and weaknesses of instruments used to conduct such solar and infrared radiation measurements.
Topics covered include: Radiometer design and performance Equipment calibration, installation, operati 5. Weak radiative decays of the B meson and bounds on M_H± in the Two-Higgs-Doublet Model Energy Technology Data Exchange (ETDEWEB) Misiak, Mikolaj [University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); CERN, Theoretical Physics Department, Geneva 23 (Switzerland)]; Steinhauser, Matthias [Karlsruhe Institute of Technology (KIT), Institut fuer Theoretische Teilchenphysik, Karlsruhe (Germany)] 2017-03-15 In a recent publication (Abdesselam et al. arXiv:1608.02344), the Belle collaboration updated their analysis of the inclusive weak radiative B-meson decay, including the full dataset of (772 ± 11) x 10^6 B anti-B pairs. Their result for the branching ratio is now below the Standard Model prediction (Misiak et al. Phys Rev Lett 114:221801, 2015, Czakon et al. JHEP 1504:168, 2015), though it remains consistent with it. However, bounds on the charged Higgs boson mass in the Two-Higgs-Doublet Model get affected in a significant manner. In the so-called Model II, the 95% C.L. lower bound on M_H± is now in the 570-800 GeV range, depending quite sensitively on the method applied for its determination. Our present note is devoted to presenting and discussing the updated bounds, as well as to clarifying several ambiguities that one might encounter in evaluating them. One such ambiguity stems from the photon energy cutoff choice, which deserves re-consideration in view of the improved experimental accuracy. (orig.) 6.
Weak mixing below the weak scale in dark-matter direct detection Science.gov (United States) Brod, Joachim; Grinstein, Benjamin; Stamou, Emmanuel; Zupan, Jure 2018-02-01 If dark matter couples predominantly to the axial-vector currents with heavy quarks, the leading contribution to dark-matter scattering on nuclei is either due to one-loop weak corrections or due to the heavy-quark axial charges of the nucleons. We calculate the effects of Higgs and weak gauge-boson exchanges for dark matter coupling to heavy-quark axial-vector currents in an effective theory below the weak scale. By explicit computation, we show that the leading-logarithmic QCD corrections are important, and thus resum them to all orders using the renormalization group. 7. Bagging Weak Predictors DEFF Research Database (Denmark) Lukas, Manuel; Hillebrand, Eric Relations between economic variables can often not be exploited for forecasting, suggesting that predictors are weak in the sense that estimation uncertainty is larger than bias from ignoring the relation. In this paper, we propose a novel bagging predictor designed for such weak predictor variab... 8. Delayed radiation neuropathy Energy Technology Data Exchange (ETDEWEB) Nagashima, T.; Miyamoto, K.; Beppu, H.; Hirose, K.; Yamada, K. (Tokyo Metropolitan Neurological Hospital (Japan)) 1981-07-01 A case of cervical plexus neuropathy was reported in association with chronic radio-dermatitis, myxedema with thyroid adenoma and epiglottic tumor. A 38-year-old man has noticed muscle weakness and wasting of the right shoulder girdle since age 33. A detailed history taking revealed a previous irradiation to the neck because of the cervical lymphadenopathy at age 10 (X-ray 3,000 rads), keloid skin change at age 19, obesity and edema since age 26, and hoarseness at 34. Laryngoscopic examination revealed a tumor on the right vocal cord, diagnosed as benign papilloma by histological study.
In addition, there were chronic radio-dermatitis around the neck, primary hypothyroidism with a benign functioning adenoma on the right lobe of the thyroid, the right phrenic nerve palsy and the right recurrent nerve palsy. All these lesions were considered to be the late sequelae of radiation to the neck in childhood. Other neurological signs were weakness and amyotrophy of the right shoulder girdle with patchy sensory loss, and areflexia of the right arm. Gross power was fairly well preserved in the right hand. EMG showed neurogenic changes in the tested muscles, suggesting a peripheral nerve lesion. Nerve conduction velocities were normal. No abnormal findings were revealed by myelography and spinal CT. The neurological findings of the patient were compatible with the diagnosis of middle cervical plexus palsy apparently due to late radiation effect. In the literature eight cases of post-radiation neuropathy with a long latency have been reported. The present case with the longest latency after the radiation should be included in the series of the reported cases of ''delayed radiation neuropathy.'' (author).
10. Phenomenological Application of $k_T$ factorization CERN Document Server Keum, Yong-Yeon 2004-01-01 We discuss applications of the perturbative QCD approach in exclusive non-leptonic two-body B-meson decays. We briefly review its ingredients and some important theoretical issues on the factorization approaches. PQCD results are compatible with present experimental data for the charmless B-meson decays. We predict the possibility of large direct CP asymmetry in $B^0 \to \pi^{+}\pi^{-}$ ($23\pm7\%$) and $B^0 \to K^{+}\pi^{-}$ ($-17\pm5\%$). We also investigate the branching ratios, CP asymmetry and isospin symmetry breaking in radiative $B \to (K^*/\rho)\,\gamma$ decays. 11.
Progressive Muscle Atrophy and Weakness After Treatment by Mantle Field Radiotherapy in Hodgkin Lymphoma Survivors International Nuclear Information System (INIS) Leeuwen-Segarceanu, Elena M. van; Dorresteijn, Lucille D.A.; Pillen, Sigrid; Biesma, Douwe H.; Vogels, Oscar J.M.; Alfen, Nens van 2012-01-01 Purpose: To describe the damage to the muscles and propose a pathophysiologic mechanism for muscle atrophy and weakness after mantle field radiotherapy in Hodgkin lymphoma (HL) survivors. Methods and Materials: We examined 12 patients treated by mantle field radiotherapy between 1969 and 1998. Besides evaluation of their symptoms, the following tests were performed: dynamometry; ultrasound of the sternocleidomastoid, biceps, and antebrachial flexor muscles; and needle electromyography of the neck, deltoid, and ultrasonographically affected arm muscles. Results: Ten patients (83%) experienced neck complaints, mostly pain and muscle weakness. On clinical examination, neck flexors were more often affected than neck extensors. On ultrasound, the sternocleidomastoid was severely atrophic in 8 patients, but abnormal echo intensity was seen in only 3 patients. Electromyography of the neck muscles showed mostly myogenic changes, whereas the deltoid, biceps, and antebrachial flexor muscles seemed to have mostly neurogenic damage. Conclusions: Many patients previously treated by mantle field radiotherapy develop severe atrophy and weakness of the neck muscles. Neck muscles within the radiation field show mostly myogenic damage, and muscles outside the mantle field show mostly neurogenic damage. The discrepancy between echo intensity and atrophy suggests that muscle damage is most likely caused by an extrinsic factor such as progressive microvascular fibrosis. This is also presumed to cause damage to nerves within the radiated field, resulting in neurogenic damage of the deltoid and arm muscles. 12. 
Three cases of lumbo-sacral neuropathy due to radiation for uterine cancer Energy Technology Data Exchange (ETDEWEB) Maruyama, Yoshikazu; Hokezu, Yoichi; Kanehisa, Yoshihide; Nagamatsu, Keiji; Onishi, Akio 1985-01-01 Case 1: The 61-year-old woman developed uterine cancer at age 50. Radiation therapy was initiated to the pelvic lumen from both anterior and posterior sides with a total dose of 21,000 rads. Radiation ulcerative enterocolitis and dermatitis were revealed at the end of the therapy. At age 52 (2 years after radiation), she noticed muscle weakness and dysesthesia of the lower legs. These symptoms progressed and amyotrophy of the legs appeared. At age 54 (4 years after radiation), she became unable to walk. Case 2: The 51-year-old woman developed uterine cancer at age 40. Postoperative radiation was initiated with the same dose and in the same way as in Case 1, and she suffered from radiation dermatitis. At age 49 (9 years after radiation), she noticed dysesthesia of the right toe, which gradually spread to the other side. Ten years after radiation, she began to note weakness in dorsiflexion of the feet. Case 3: The 69-year-old woman developed uterine cancer at age 67. Radiation (Linac 4,000 rads, Ralstron 2,000 rads) was performed for 3 months into the pelvic lumen. Two years later, she noted dysesthesia and weakness of her legs. These symptoms progressed gradually. In these 3 cases, EMG showed neurogenic changes, suggesting peripheral nerve lesions. Nerve conduction velocities were decreased. Nerve and muscle biopsies revealed neurogenic changes. No abnormal findings were detected by spinal X-rays and myelography. The neurological findings of these patients were compatible with the lumbo-sacral plexus injuries apparently due to late radiation effect. (J.P.N.).
14. Can we observationally test the weak cosmic censorship conjecture?
International Nuclear Information System (INIS) Kong, Lingyao; Malafarina, Daniele; Bambi, Cosimo 2014-01-01 In general relativity, gravitational collapse of matter fields ends with the formation of a spacetime singularity, where the matter density becomes infinite and standard physics breaks down. According to the weak cosmic censorship conjecture, singularities produced in the gravitational collapse cannot be seen by distant observers and must be hidden within black holes. The validity of this conjecture is still controversial and at present we cannot exclude that naked singularities can be created in our Universe from regular initial data. In this paper, we study the radiation emitted by a collapsing cloud of dust and check whether it is possible to distinguish the birth of a black hole from the one of a naked singularity. In our simple dust model, we find that the properties of the radiation emitted in the two scenarios are qualitatively similar. That suggests that observational tests of the cosmic censorship conjecture may be very difficult, even in principle. (orig.)
16. Radiation mutagenesis in selection of apple trees International Nuclear Information System (INIS) Kolontaev, V.M.; Kolontaev, Yu.V. 1977-01-01 After X-radiation of grafts of antonovka apple trees, three groups of morphological mutants, namely, weak-, average- and violently-growing, have been revealed. Although the mutation spectrum has some indefinite character, a dose of 6 kR causes, more frequently and in a greater number, the weak-growing mutants, and a dose of 2 kR, the violently-growing ones. Mutants of each group differ in precociousness (precocious and late-fruiting), type of fruiting (non-spur and spur) and yield (high- and low-yielding). Using the method of radiation mutagenesis it is possible to raise the frequency and spectrum of somatic mutability of antonovka apple trees and to induce forms having valuable features 17. What is ''ionizing radiation''? International Nuclear Information System (INIS) Tschurlovits, M. 1997-01-01 The scientific background of radiation protection, and hence of ''ionizing radiation'', has been undergoing substantial regress for a century. The radiations we are concerned with have, from the beginning, been defined based upon their effects rather than upon their physical origin and properties. This might be one of the reasons why the definition of the term ''ionizing radiation'' in radiation protection is still weak from an up to date point of view, in texts as well as in international and national standards.
The general meaning is unambiguous, but a numerical value depends on a number of conditions and the purpose. Hence, a clear statement on a numerical value of the energy threshold beyond which a radiation has to be considered as ''ionizing'' is still missing. The existing definitions are, therefore, either correct but very general or theoretical and hence not applicable. This paper reviews existing definitions and suggests some issues to be taken into account for possible improvement of the definition of ''ionizing radiation''. (author) 18. Compatibility between weak gel and microorganisms in weak gel-assisted microbial enhanced oil recovery. Science.gov (United States) Qi, Yi-Bin; Zheng, Cheng-Gang; Lv, Cheng-Yuan; Lun, Zeng-Min; Ma, Tao 2018-03-20 To investigate weak gel-assisted microbial flooding in Block Wang Long Zhuang in the Jiangsu Oilfield, the compatibility of weak gel and microbe was evaluated using laboratory experiments. Bacillus sp. W5 was isolated from the formation water in Block Wang Long Zhuang. The rate of oil degradation reached 178 mg/day, and the rate of viscosity reduction reached 75.3%. Strain W5 could produce lipopeptide with a yield of 1254 mg/L. Emulsified crude oil was dispersed in the microbial degradation system, and the average diameter of the emulsified oil particles was 18.54 μm. Bacillus sp. W5 did not affect the rheological properties of the weak gel, and the presence of the weak gel did not significantly affect bacterial reproduction (as indicated by an unchanged microbial biomass), emulsification (surface tension is 35.56 mN/m and average oil particle size is 21.38 μm), oil degradation (162 mg/day) and oil viscosity reduction (72.7%). Core-flooding experiments indicated oil recovery of 23.6% when both weak gel and Bacillus sp. W5 were injected into the system, 14.76% when only the weak gel was injected, and 9.78% when strain W5 was injected without the weak gel.
The results demonstrate good compatibility between strain W5 and the weak gel and highlight the application potential of weak gel-assisted microbial flooding. Copyright © 2018 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved. 19. History of Weak Interactions Science.gov (United States) Lee, T. D. 1970-07-01 While the phenomenon of beta-decay was discovered near the end of the last century, the notion that the weak interaction forms a separate field of physical forces evolved rather gradually. This became clear only after the experimental discoveries of other weak reactions such as muon-decay, muon-capture, etc., and the theoretical observation that all these reactions can be described by approximately the same coupling constant, thus giving rise to the notion of a universal weak interaction. Only then did one slowly recognize that the weak interaction force forms an independent field, perhaps on the same footing as the gravitational force, the electromagnetic force, and the strong nuclear and sub-nuclear forces. 20. Weakly nonlocal symplectic structures, Whitham method and weakly nonlocal symplectic structures of hydrodynamic type International Nuclear Information System (INIS) Maltsev, A Ya 2005-01-01 We consider the special type of field-theoretical symplectic structures called weakly nonlocal. The structures of this type are, in particular, very common for integrable systems such as KdV or NLS. We introduce here the special class of weakly nonlocal symplectic structures which we call weakly nonlocal symplectic structures of hydrodynamic type. We investigate then the connection of such structures with the Whitham averaging method and propose the procedure of 'averaging' the weakly nonlocal symplectic structures. The averaging procedure gives the weakly nonlocal symplectic structure of hydrodynamic type for the corresponding Whitham system.
The procedure also gives 'action variables' corresponding to the wave numbers of m-phase solutions of the initial system which give the additional conservation laws for the Whitham system 1. Radiation- and pair-loaded shocks Science.gov (United States) Lyutikov, Maxim 2018-06-01 We consider the structure of mildly relativistic shocks in dense media, taking into account the radiation and pair loading, and diffusive radiation energy transfer within the flow. For increasing shock velocity (increasing post-shock temperature), the first important effect is the efficient energy redistribution by radiation within the shock that leads to the appearance of an isothermal jump, whereby the flow reaches the final state through a discontinuous isothermal transition. The isothermal jump, on scales much smaller than the photon diffusion length, consists of a weak shock and a quick relaxation to the isothermal conditions. Highly radiation-dominated shocks do not form an isothermal jump. Pair production can mildly increase the overall shock compression ratio to ≈10 (4 for matter-dominated shocks and 7 for radiation-dominated shocks). 2. Weak interactions International Nuclear Information System (INIS) Bjorken, J.D. 1978-01-01 Weak interactions are studied from a phenomenological point of view, by using a minimal number of theoretical hypotheses. Charged-current phenomenology, and then neutral-current phenomenology are discussed. This all is described in terms of a global SU(2) symmetry plus an electromagnetic correction. The intermediate-boson hypothesis is introduced and lower bounds on the range of the weak force are inferred. This phenomenology does not yet reconstruct all the predictions of the conventional SU(2)xU(1) gauge theory. To do that requires an additional assumption of restoration of SU(2) symmetry at asymptotic energies 3.
Systematic review: role of acid, weakly acidic and weakly alkaline reflux in gastro-oesophageal reflux disease NARCIS (Netherlands) Boeckxstaens, G. E.; Smout, A. 2010-01-01 The importance of weakly acidic and weakly alkaline reflux in gastro-oesophageal reflux disease (GERD) is gaining recognition. To quantify the proportions of reflux episodes that are acidic (pH <4), weakly acidic (pH 4-7) and weakly alkaline (pH >7) in adult patients with GERD, and to evaluate their 4. Weakly infinite-dimensional spaces International Nuclear Information System (INIS) Fedorchuk, Vitalii V 2007-01-01 In this survey article two new classes of spaces are considered: m-C-spaces and w-m-C-spaces, m=2,3,...,∞. They are intermediate between the class of weakly infinite-dimensional spaces in the Alexandroff sense and the class of C-spaces. The classes of 2-C-spaces and w-2-C-spaces coincide with the class of weakly infinite-dimensional spaces, while the compact ∞-C-spaces are exactly the C-compact spaces of Haver. The main results of the theory of weakly infinite-dimensional spaces, including classification via transfinite Lebesgue dimensions and Luzin-Sierpinsky indices, extend to these new classes of spaces. Weak m-C-spaces are characterised by means of essential maps to Henderson's m-compacta. The existence of hereditarily m-strongly infinite-dimensional spaces is proved. 5. Acute muscular weakness in children Directory of Open Access Journals (Sweden) Ricardo Pablo Javier Erazo Torricelli Full Text Available ABSTRACT Acute muscle weakness in children is a pediatric emergency. During the diagnostic approach, it is crucial to obtain a detailed case history, including: onset of weakness, history of associated febrile states, ingestion of toxic substances/toxins, immunizations, and family history. Neurological examination must be meticulous as well. 
In this review, we describe the most common diseases related to acute muscle weakness, grouped by the site of origin (from the upper motor neuron to the motor unit). Early detection of hyperCKemia may lead to a myositis diagnosis, and hypokalemia points to the diagnosis of periodic paralysis. Ophthalmoparesis, ptosis and bulbar signs are suggestive of myasthenia gravis or botulism. Distal weakness and hyporeflexia are clinical features of Guillain-Barré syndrome, the most frequent cause of acute muscle weakness. If all studies are normal, a psychogenic cause should be considered. Finding the etiology of acute muscle weakness is essential to execute treatment in a timely manner, improving the prognosis of affected children. 6. Spin-polarized free electron beam interaction with radiation and superradiant spin-flip radiative emission Directory of Open Access Journals (Sweden) A. Gover 2006-06-01 The problems of spin-polarized free-electron beam interaction with electromagnetic wave at electron-spin resonance conditions in a magnetic field and of superradiant spin-flip radiative emission are analyzed in the framework of a comprehensive classical model. The spontaneous emission of spin-flip radiation from electron beams is very weak. We show that the detectivity of electron spin resonant spin-flip and combined spin-flip/cyclotron-resonance-emission radiation can be substantially enhanced by operating with ultrashort spin-polarized electron beam bunches under conditions of superradiant (coherent) emission. The proposed radiative spin-state modulation and the spin-flip radiative emission schemes can be used for control and noninvasive diagnostics of polarized electron/positron beams. Such schemes are of relevance in important scattering experiments off nucleons in nuclear physics and off magnetic targets in condensed matter physics. 7. Weak openness and almost openness Directory of Open Access Journals (Sweden) David A.
Rose 1984-01-01 Full Text Available Weak openness and almost openness for arbitrary functions between topological spaces are defined as duals to the weak continuity of Levine and the almost continuity of Husain respectively. Independence of these two openness conditions is noted and comparison is made between these and the almost openness of Singal and Singal. Some results dual to those known for weak continuity and almost continuity are obtained. Nearly almost openness is defined and used to obtain an improved link from weak continuity to almost continuity. 8. Weak values in collision theory Science.gov (United States) de Castro, Leonardo Andreta; Brasil, Carlos Alexandre; Napolitano, Reginaldo de Jesus 2018-05-01 Weak measurements have an increasing number of applications in contemporary quantum mechanics. They were originally described as a weak interaction that slightly entangled the translational degrees of freedom of a particle to its spin, yielding surprising results after post-selection. That description often ignores the kinetic energy of the particle and its movement in three dimensions. Here, we include these elements and re-obtain the weak values within the context of collision theory by two different approaches, and prove that the results are compatible with each other and with the results from the traditional approach. To provide a more complete description, we generalize weak values into weak tensors and use them to provide a more realistic description of the Stern-Gerlach apparatus. 9. Electromagnetic current in weak interactions International Nuclear Information System (INIS) Ma, E. 1983-01-01 In gauge models which unify weak and electromagnetic interactions, the weak neutral-current interaction also involves the electromagnetic current. The exact nature of such a component can be explored using e + e - experimental data. 
In recent years, the existence of a new component of the weak interaction has become firmly established, i.e., the neutral-current interaction. As such, it competes with the electromagnetic interaction whenever the particles involved are also charged, but at a very much lower rate because its effective strength is so small. Hence neutrino processes are best for the detection of the neutral-current interaction. However, in any gauge model which unifies weak and electromagnetic interactions, the weak neutral-current interaction also involves the electromagnetic current 10. Radiation of ultrarelativistic particles passing through ideal and mosaic crystals International Nuclear Information System (INIS) Afanas'ev, A.M. 1977-01-01 When a charged particle passes through an ideal crystal, then besides the transition radiation, a new kind of radiation, connected with the periodic structure of the crystal is produced. The influence of mosaic structure of a crystal on the intensity of this radiation is considered. Simple analytical expressions for the integral intensity of this radiation for the case of an ideal crystal are obtained. The results show, that the integral radiation intensity depends weakly on the degree of crystal perfection 11. Enhanced quantum teleportation in the background of Schwarzschild spacetime by weak measurements OpenAIRE Xiao, Xing; Yao, Yao; Li, Yan-Ling; Xie, Ying-Mao 2017-01-01 It is commonly believed that the fidelity of quantum teleportation in the gravitational field would be degraded due to the heat up by the Hawking radiation. In this paper, we point out that the Hawking effect could be eliminated by the combined action of pre- and post-weak measurements, and thus the teleportation fidelity is almost completely protected. It is intriguing to notice that the enhancement of fidelity could not be attributed to the improvement of entanglement, but rather to the pro... 12. 
Weak interactions with nuclei International Nuclear Information System (INIS) Walecka, J.D. 1983-01-01 Nuclei provide systems where the strong, electromagnetic, and weak interactions are all present. The current picture of the strong interactions is based on quarks and quantum chromodynamics (QCD). The symmetry structure of this theory is SU(3)_C x SU(2)_W x U(1)_W. The electroweak interactions in nuclei can be used to probe this structure. Semileptonic weak interactions are considered. The processes under consideration include beta decay, neutrino scattering and weak neutral-current interactions. The starting point in the analysis is the effective Lagrangian of the Standard Model 13. Bayesian Markov Chain Monte Carlo inversion for weak anisotropy parameters and fracture weaknesses using azimuthal elastic impedance Science.gov (United States) Chen, Huaizhen; Pan, Xinpeng; Ji, Yuxin; Zhang, Guangzhi 2017-08-01 A system of aligned vertical fractures and fine horizontal shale layers combine to form equivalent orthorhombic media. Weak anisotropy parameters and fracture weaknesses play an important role in the description of orthorhombic anisotropy (OA). We propose a novel approach of utilizing seismic reflection amplitudes to estimate weak anisotropy parameters and fracture weaknesses from observed seismic data, based on azimuthal elastic impedance (EI). We first propose a perturbation in the stiffness matrix in terms of weak anisotropy parameters and fracture weaknesses, and using the perturbation and scattering function, we derive the PP-wave reflection coefficient and azimuthal EI for the case of an interface separating two OA media.
Then we demonstrate an approach to first use a model constrained damped least-squares algorithm to estimate azimuthal EI from partially incidence-phase-angle-stack seismic reflection data at different azimuths, and then extract weak anisotropy parameters and fracture weaknesses from the estimated azimuthal EI using a Bayesian Markov Chain Monte Carlo inversion method. In addition, a new procedure to construct rock physics effective model is presented to estimate weak anisotropy parameters and fracture weaknesses from well log interpretation results (minerals and their volumes, porosity, saturation, fracture density, etc.). Tests on synthetic and real data indicate that unknown parameters including elastic properties (P- and S-wave impedances and density), weak anisotropy parameters and fracture weaknesses can be estimated stably in the case of seismic data containing a moderate noise, and our approach can make a reasonable estimation of anisotropy in a fractured shale reservoir. 14. Weak C* Hopf Symmetry OpenAIRE Rehren, K. -H. 1996-01-01 Weak C* Hopf algebras can act as global symmetries in low-dimensional quantum field theories, when braid group statistics prevents group symmetries. Possibilities to construct field algebras with weak C* Hopf symmetry from a given theory of local observables are discussed. 15. Current algebra International Nuclear Information System (INIS) Jacob, M. 1967-01-01 The first three chapters of these lecture notes are devoted to generalities concerning current algebra. The weak currents are defined, and their main properties given (V-A hypothesis, conserved vector current, selection rules, partially conserved axial current,...). The SU (3) x SU (3) algebra of Gell-Mann is introduced, and the general properties of the non-leptonic weak Hamiltonian are discussed. Chapters 4 to 9 are devoted to some important applications of the algebra. 
First one proves the Adler-Weisberger formula, in two different ways, either by the infinite momentum frame or by the nearby singularities method. In the other chapters, the latter method is the only one used. The following topics are successively dealt with: semileptonic decays of K mesons and hyperons, the Kroll-Ruderman theorem, non-leptonic decays of K mesons and hyperons (ΔI = 1/2 rule), low-energy theorems concerning processes with emission (or absorption) of a pion or a photon, super-convergence sum rules, and finally, neutrino reactions. (author) [fr 16. Directional radiometry and radiative transfer: The convoluted path from centuries-old phenomenology to physical optics International Nuclear Information System (INIS) Mishchenko, Michael I. 2014-01-01 This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics. - Highlights: • History of phenomenological radiometry and radiative transfer is described. • Fundamental weaknesses of these disciplines are discussed. • The process of their conversion into legitimate branches of physical optics is summarized 17. The actuality and discussion for the data management of radiation sources International Nuclear Information System (INIS) Yang Yaoyun; Huang Chaoyun; Wang Xiaofeng; Chen Dongliang; Fu Jie 2008-01-01 Large amounts of data and information in radiation safety license permits, supervision and inspection have been accumulated in China. Data management of radiation sources is an important aspect of radiation source security. This paper introduces the main elements, tasks and current status of data management, and the strengths and weaknesses of the RAIS system in use. This paper analyzes and discusses the approach of establishing a radiation source monitoring information system network. (authors) 18.
Weak Solution and Weakly Uniformly Bounded Solution of Impulsive Heat Equations Containing “Maximum” Temperature Directory of Open Access Journals (Sweden) Oyelami, Benjamin Oyediran 2013-09-01 In this paper, criteria for the existence of weak solutions and uniformly weakly bounded solutions of impulsive heat equations containing maximum temperature are investigated and results obtained. An example is given for a heat flow system with impulsive temperature using a maximum temperature simulator, and criteria for the uniform weak boundedness of solutions of the system are obtained. 19. Heavy flavour decays and the structure of weak interactions International Nuclear Information System (INIS) Bigi, I. 1984-01-01 The so-called Standard Model has been developed describing the electro-weak interactions by an SU(2)_L × U(1) gauge theory; the community's almost unanimous choice of the candidate theory for the strong interactions is QCD, based on an SU(3) gauge theory. It is very instructive to recall the similarities and differences of these two theoretical frameworks. Both are based on non-abelian gauge theories with spin-1/2 matter fields and spin-1 radiation fields, the latter being the carriers of the forces. Beyond this basic correspondence there are however crucial differences which I sketch under the headings ''computational tools'' and ''predictive power''; there exist of course correlations between these two items. (orig./HSI) 20. Influence functionals and black body radiation OpenAIRE Anglin, J. R. 1993-01-01 The Feynman-Vernon formalism is used to obtain a microscopic, quantum mechanical derivation of black body radiation, for a massless scalar field in 1+1 dimensions, weakly coupled to an environment of finite size.
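For orientation, the black-body record above treats a 1+1-dimensional scalar-field model; the familiar three-dimensional Planck law it connects to can be sketched numerically and checked against Wien's displacement law (standard CODATA constants, not values from the cited paper):

```python
import numpy as np

# Physical constants (SI)
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck_spectral_radiance(lam, temp):
    """Planck's law B_lambda(T), W sr^-1 m^-3, for wavelength lam in metres."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp))

# Locate the emission peak for T = 5000 K and compare with Wien's law.
T = 5000.0
lam = np.linspace(100e-9, 3e-6, 200_000)
lam_peak = lam[np.argmax(planck_spectral_radiance(lam, T))]
wien_peak = 2.897771955e-3 / T   # Wien displacement law, ~579.6 nm
```

The numerically located maximum of B_lambda should coincide with the Wien prediction to within the grid spacing.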
The model exhibits the absorption, thermal equilibrium, and emission properties of a canonical black body, but shows that the thermal radiation propagates outwards from the body, with the Planckian spectrum applying inside a wavefront region of finite thickness. The black body enviro... 1. Influence of gamma radiation on the rheological and functional properties of bread wheats International Nuclear Information System (INIS) Paredes-Lopez, O.; Covarrubias-Alvarez, M.M. 1984-01-01 The effects of gamma irradiation on some biochemical, rheological and functional properties of bread wheats were studied. Two wheat cultivars were selected to represent medium-strong and weak dough mixing strengths. Falling number values were severely depressed at doses of 500 and 1000 krad. Rheological dough properties, as assessed with the mixograph and farinograph, were also investigated. Radiation at medium doses produced an increase in the farinograph water absorption for both wheats. Radiation decreased the amount of bound water as compared to the control sample. For the medium-strong wheat, low levels of radiation produced bread with volumes and overall bread quality equal to or slightly better than those of the control flour, whereas for the weak wheat an improvement of the baking performance was obtained at all the low doses of radiation. However, the overall bread quality of both wheats was highly reduced at medium doses of radiation. (author) 2. Quantifying Cancer Risk from Radiation. Science.gov (United States) Keil, Alexander P; Richardson, David B 2017-12-06 Complex statistical models fitted to data from studies of atomic bomb survivors are used to estimate the human health effects of ionizing radiation exposures. We describe and illustrate an approach to estimate population risks from ionizing radiation exposure that relaxes many assumptions about radiation-related mortality. The approach draws on developments in methods for causal inference.
The results offer a different way to quantify radiation's effects and show that conventional estimates of the population burden of excess cancer at high radiation doses are driven strongly by projecting outside the range of current data. Summary results obtained using the proposed approach are similar in magnitude to those obtained using conventional methods, although estimates of radiation-related excess cancers differ for many age, sex, and dose groups. At low doses relevant to typical exposures, the strength of evidence in data is surprisingly weak. Statements regarding human health effects at low doses rely strongly on the use of modeling assumptions. © 2017 Society for Risk Analysis. 3. Weak hard X-ray emission from broad absorption line quasars: evidence for intrinsic X-ray weakness International Nuclear Information System (INIS) Luo, B.; Brandt, W. N.; Scott, A. E.; Alexander, D. M.; Gandhi, P.; Stern, D.; Teng, S. H.; Arévalo, P.; Bauer, F. E.; Boggs, S. E.; Craig, W. W.; Christensen, F. E.; Comastri, A.; Farrah, D.; Hailey, C. J.; Harrison, F. A.; Koss, M.; Ogle, P.; Puccetti, S.; Saez, C. 2014-01-01 We report NuSTAR observations of a sample of six X-ray weak broad absorption line (BAL) quasars. These targets, at z = 0.148-1.223, are among the optically brightest and most luminous BAL quasars known at z < 1.3. However, their rest-frame ≈2 keV luminosities are 14 to >330 times weaker than expected for typical quasars. Our results from a pilot NuSTAR study of two low-redshift BAL quasars, a Chandra stacking analysis of a sample of high-redshift BAL quasars, and a NuSTAR spectral analysis of the local BAL quasar Mrk 231 have already suggested the existence of intrinsically X-ray weak BAL quasars, i.e., quasars not emitting X-rays at the level expected from their optical/UV emission. The aim of the current program is to extend the search for such extraordinary objects. 
Three of the six new targets are weakly detected by NuSTAR with ≲ 45 counts in the 3-24 keV band, and the other three are not detected. The hard X-ray (8-24 keV) weakness observed by NuSTAR requires Compton-thick absorption if these objects have nominal underlying X-ray emission. However, a soft stacked effective photon index (Γ_eff ≈ 1.8) for this sample disfavors Compton-thick absorption in general. The uniform hard X-ray weakness observed by NuSTAR for this and the pilot samples selected with <10 keV weakness also suggests that the X-ray weakness is intrinsic in at least some of the targets. We conclude that the NuSTAR observations have likely discovered a significant population (≳ 33%) of intrinsically X-ray weak objects among the BAL quasars with significantly weak <10 keV emission. We suggest that intrinsically X-ray weak quasars might be preferentially observed as BAL quasars. 4. A weak balance: the contribution of muscle weakness to postural instability and falls. NARCIS (Netherlands) Horlings, G.C.; Engelen, B.G.M. van; Allum, J.H.J.; Bloem, B.R. 2008-01-01 Muscle strength is a potentially important factor contributing to postural control. In this article, we consider the influence of muscle weakness on postural instability and falling. We searched the literature for research evaluating muscle weakness as a risk factor for falls in community-dwelling 5. Fundamental parameters of He-weak and He-strong stars Science.gov (United States) Cidale, L. S.; Arias, M. L.; Torres, A. F.; Zorec, J.; Frémat, Y.; Cruzado, A. 2007-06-01 Context: He-weak and He-strong stars are chemically peculiar AB objects whose He lines are anomalously weak or strong for their MK spectral type. The determination of fundamental parameters for these stars is often more complex than for normal stars due to their abundance anomalies.
Aims: We discuss the determination of fundamental parameters: effective temperature, surface gravity, and visual and bolometric absolute magnitudes of He-weak and He-strong stars. We compare our values with those derived independently from methods based on photometry and model fitting. Methods: We carried out low resolution spectroscopic observations in the wavelength range 3400-4700 Å of 20 He-weak and 8 He-strong stars to determine their fundamental parameters by means of the Divan-Chalonge-Barbier (BCD) spectrophotometric system. This system is based on the measurement of the continuum energy distribution around the Balmer discontinuity (BD). For a few He-weak stars we also estimate the effective temperatures and the angular diameters by integrating absolute fluxes observed over a wide spectral range. Non-LTE model calculations are carried out to study the influence of the He/H abundance ratio on the emergent radiation of He-strong stars and on their T_eff determination. Results: We find that the effective temperatures, surface gravities and bolometric absolute magnitudes of He-weak stars estimated with the BCD system and the integrated flux method are in good agreement with each other, and they also agree with previous determinations based on several different methods. The mean discrepancy between the visual absolute magnitudes derived using the Hipparcos parallaxes and the BCD values is on average ±0.3 mag for He-weak stars, while it is ±0.5 mag for He-strong stars. For He-strong stars, we note that the BCD calibration, based on stars in the solar environment, leads to overestimated values of T_eff. By means of model atmosphere calculations with enhanced He/H abundance ratios 6. Radiation chemical behavior of Rh(III) in HClO₄ and HNO₃ International Nuclear Information System (INIS) Vladimirova, M.V.; Khalkina, E.V.
1995-01-01 The radiation chemical behavior of Rh is very interesting since Rh accumulates in irradiated U, but it has not been reported in the literature. Scattered data do exist for the radiation chemical behavior of Rh(III) in weakly acidic and alkaline solutions. Pulsed radiolysis was used to investigate the formation of unstable oxidation states of Rh during reduction and oxidation of Rh(III) in neutral solutions. The rate constant of the reaction Rh(III) + e⁻(aq) was found to be 6·10¹⁰ liter/(mole·sec). The radiation chemical behavior of Rh(III) toward γ-radiolysis in neutral, weakly acidic (up to 0.1 N), and alkaline solutions was examined. In neutral solutions of [Rh(NH₃)₅Cl]Cl₂ and RhCl₃, metallic Rh is formed. The degree of reduction is ∼ 1%. In neutral and weakly acidic solutions of Rh(NO₃)₃, Rh₂O₃·xH₂O is formed. Irradiation of Rh(ClO₄)₃ solutions produces no reduction. The radiation chemical behavior of Rh(III) in HClO₄ and HNO₃ solutions at concentrations > 1 M is studied in the present work 7. Charged weak currents International Nuclear Information System (INIS) Turlay, R. 1979-01-01 In this review of charged weak currents I shall concentrate on inclusive high energy neutrino physics. There are surely still things to learn from the low energy weak interaction but I will not discuss it here. Furthermore B. Tallini will discuss the hadronic final state of neutrino interactions. Since the Tokyo conference a few experimental results have appeared on charged current interaction; I will present them and will also comment on important topics which have been published during the past year. (orig.) 8. Weak-interacting holographic QCD International Nuclear Information System (INIS) Gazit, D.; Yee, H.-U. 2008-06-01 We propose a simple prescription for including low-energy weak interactions into the framework of holographic QCD, based on the standard AdS/CFT dictionary of double-trace deformations.
As our proposal enables us to calculate various electro-weak observables involving strongly coupled QCD, it opens a new perspective on phenomenological applications of holographic QCD. We illustrate the efficiency and usefulness of our method by performing a few exemplar calculations: neutron beta decay, charged pion weak decay, and meson-nucleon parity non-conserving (PNC) couplings. The idea is general enough to be implemented in both the Sakai-Sugimoto and Hard/Soft Wall models. (author) 9. Second class weak currents International Nuclear Information System (INIS) Delorme, J. 1978-01-01 The definition and general properties of weak second class currents are recalled and various detection possibilities briefly reviewed. It is shown that the existing data on nuclear beta decay can be consistently analysed in terms of a phenomenological model. Their implication for the fundamental structure of weak interactions is discussed [fr 10. Effects of excitation spectral width on decay profile of weakly confined excitons International Nuclear Information System (INIS) Kojima, O.; Isu, T.; Ishi-Hayase, J.; Kanno, A.; Katouf, R.; Sasaki, M.; Tsuchiya, M. 2008-01-01 We report the effect of simultaneous excitation of several exciton states on the radiative decay profiles, on the basis of the nonlocal response of weakly confined excitons in GaAs thin films. In the case of excitation of a single exciton state, the transient grating signal has two decay components. The fast decay component comes from the nonlocal response, and the long-lived component is attributed to free exciton decay. With an increase of excitation spectral width, the nonlocal component becomes small in comparison with the long-lived component, and disappears under irradiation of a femtosecond-pulse laser with broader spectral width. The transient grating spectra clearly indicate the contribution of the weakly confined excitons to the signal, and the exciton line width hardly changes with excitation spectral width.
From these results, we concluded that the change of decay profile is attributed not to the many-body effect but to the effect of simultaneous excitation of several exciton states 11. Theory of radiative muon capture with applications to nuclear spin and isospin doublets International Nuclear Information System (INIS) Hwang, W.P.; Primakoff, H. 1978-01-01 A theory of radiative muon capture, with applications to nuclear spin and isospin doublets, is formulated on the basis of the conservation of the hadronic electromagnetic current, the conservation of the hadronic weak polar currents, the partial conservation of the hadronic weak axial-vector current, the SU(2) x SU(2) current algebra for the various hadronic currents, and a simplifying dynamical approximation for the hadron-radiating part of the transition amplitude: the ''linearity hypothesis''. The resultant total transition amplitude, which also includes the muon-radiating part, is worked out explicitly and applied to treat the processes μ⁻p → ν_μnγ and μ⁻ ³He → ν_μ ³Hγ 12. Standard and Null Weak Values OpenAIRE Zilberberg, Oded; Romito, Alessandro; Gefen, Yuval 2013-01-01 Weak value (WV) is a quantum mechanical measurement protocol, proposed by Aharonov, Albert, and Vaidman. It consists of a weak measurement, which is weighed in, conditional on the outcome of a later, strong measurement. Here we define another two-step measurement protocol, the null weak value (NWV), and point out its advantages as compared to the WV. We present two alternative derivations of NWVs and compare them to the corresponding derivations of WVs. 13. Knowledge in Radiation Protection: a Survey of Professionals in Medical Imaging, Radiation Therapy and Nuclear Medicine Units in Yaounde International Nuclear Information System (INIS) Ongolo-Zogo, P.; Nguehouo, M.B.; Yomi, J.; Nko'o Amven, S. 2013-01-01 Medical use of ionizing radiation is now the most common radiation source of the population at the global level.
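The standard weak value in the 'Standard and Null Weak Values' record above is A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩; a minimal numerical illustration (the two-level states chosen here are arbitrary assumptions, not taken from the cited paper) shows that a nearly orthogonal pre/postselection pushes A_w outside the eigenvalue range of A:

```python
import numpy as np

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])   # observable A, eigenvalues ±1

# Preselected state |psi> and postselected state |phi> (arbitrary choices).
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
delta = 0.7
phi = np.array([np.cos(delta), -np.sin(delta)])

# Weak value A_w = <phi|A|psi> / <phi|psi>
weak_value = (phi.conj() @ sigma_z @ psi) / (phi.conj() @ psi)
# As delta -> pi/4 the overlap <phi|psi> -> 0 and A_w is amplified
# far beyond the eigenvalue range [-1, 1].
```

For this choice the result reduces to (cos δ + sin δ)/(cos δ − sin δ) ≈ 11.7, well outside [−1, 1].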
The knowledge and practices of health professionals working with X-rays determine the level and quality of implementation of internationally and nationally recommended measures for radiation protection of patients and workers. The level of implementation and enforcement of international recommendations in African countries is an issue of concern due to weak laws, regulations and regulatory bodies. We report the results of a cross-sectional survey of health professionals working with ionizing radiation in Yaounde, the capital city of Cameroon. More than 50% of these professionals have a moderate level of knowledge of the norms and principles of radiation protection and more than 80% have never attended a continuing professional development workshop on radiation protection. (authors) 14. Medical radiation exposure and its impact on occupational practices in Korean radiologic technologists Energy Technology Data Exchange (ETDEWEB) Ko, Seul Ki; Lee, Won Jin [Dept. of Preventive Medicine, Korea University College of Medicine, Seoul (Korea, Republic of)] 2016-12-15 The use of radiology examinations in medicine has been growing worldwide. Annually an estimated 3.1 billion radiologic exams are performed. Given this expansion of medical radiation use, the effects of medical radiation exposure can hardly be ignored among the exposures from different types of radiation source. This study, therefore, was aimed to assess the association of medical and occupational radiation exposure in Korean radiologic technologists and evaluate the necessity for its consideration in occupational studies. This study did not show a strong association between medical radiation exposure and occupational radiation exposure except for several modalities with specific frequency.
These results are preliminary but certainly meaningful for the interpretation of epidemiologic findings; therefore, we need further evaluation, especially for the repeatedly exposed imaging tests and high-dose procedures that presented a somewhat weak relationship in this study, linked with health outcomes of radiation exposure. 15. Medical radiation exposure and its impact on occupational practices in Korean radiologic technologists International Nuclear Information System (INIS) Ko, Seul Ki; Lee, Won Jin 2016-01-01 16. Weak interactions International Nuclear Information System (INIS) Chanda, R. 1981-01-01 The theoretical and experimental evidence forming a basis for a Lagrangian quantum field theory of weak interactions is discussed. In this context, gauge-invariance aspects of such interactions are shown. (L.C.) [pt 17. Cosmology with weak lensing surveys International Nuclear Information System (INIS) Munshi, Dipak; Valageas, Patrick; Waerbeke, Ludovic van; Heavens, Alan 2008-01-01 Weak gravitational lensing is responsible for the shearing and magnification of the images of high-redshift sources due to the presence of intervening matter. The distortions are due to fluctuations in the gravitational potential, and are directly related to the distribution of matter and to the geometry and dynamics of the Universe. As a consequence, weak gravitational lensing offers unique possibilities for probing the Dark Matter and Dark Energy in the Universe. In this review, we summarise the theoretical and observational state of the subject, focussing on the statistical aspects of weak lensing, and consider the prospects for weak lensing surveys in the future.
Weak gravitational lensing surveys are complementary to both galaxy surveys and cosmic microwave background (CMB) observations as they probe the unbiased non-linear matter power spectrum at modest redshifts. Most of the cosmological parameters are accurately estimated from CMB and large-scale galaxy surveys, so the focus of attention is shifting to understanding the nature of Dark Matter and Dark Energy. On the theoretical side, recent advances in the use of 3D information of the sources from photometric redshifts promise greater statistical power, and these are further enhanced by the use of statistics beyond two-point quantities such as the power spectrum. The use of 3D information also alleviates difficulties arising from physical effects such as the intrinsic alignment of galaxies, which can mimic weak lensing to some extent. On the observational side, in the next few years weak lensing surveys such as CFHTLS, VST-KIDS and Pan-STARRS, and the planned Dark Energy Survey, will provide the first weak lensing surveys covering very large sky areas and depth. In the long run even more ambitious programmes such as DUNE, the Supernova Anisotropy Probe (SNAP) and Large-aperture Synoptic Survey Telescope (LSST) are planned. Weak lensing of diffuse components such as the CMB and 21 cm emission can also 18. 
Cosmology with weak lensing surveys Energy Technology Data Exchange (ETDEWEB) Munshi, Dipak [Institute of Astronomy, Madingley Road, Cambridge, CB3 OHA (United Kingdom); Astrophysics Group, Cavendish Laboratory, Madingley Road, Cambridge CB3 OHE (United Kingdom)], E-mail: [email protected]; Valageas, Patrick [Service de Physique Theorique, CEA Saclay, 91191 Gif-sur-Yvette (France); Waerbeke, Ludovic van [University of British Columbia, Department of Physics and Astronomy, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 (Canada); Heavens, Alan [SUPA - Scottish Universities Physics Alliance, Institute for Astronomy, University of Edinburgh, Blackford Hill, Edinburgh EH9 3HJ (United Kingdom)] 2008-06-15 19. A non-local-thermodynamic equilibrium formulation of the transport equation for polarized light in the presence of weak magnetic fields. Doctoral thesis International Nuclear Information System (INIS) McNamara, D.J. 1977-01-01 The present work is motivated by the desire to better understand solar magnetism. Just as stellar astrophysics and radiative transfer have been coupled in the history of research in physics, so too has the study of radiative transfer of polarized light in magnetic fields and solar magnetism been a history of mutual growth. The Stokes parameters characterize the state of polarization of a beam of radiation. The author considers the changes in polarization, and therefore in the Stokes parameters, due to the transport of a beam through an optically thick medium in a weak magnetic field. The transport equation is derived from a general density matrix equation of motion. This allows the possibility of interference effects arising from the mixing of atomic sublevels in a weak magnetic field to be taken into account.
The statistical equilibrium equations are similarly derived. Finally, the coupled system of equations is presented, and the order of magnitude of the interference effects, shown. Collisional effects are not considered. The magnitude of the interference effects in magnetic field measurements of the sun may be evaluated 20. Performance of a written radiation protection inspection of nonstationary gamma radiography users International Nuclear Information System (INIS) Hoehne, M. 1986-01-01 A questionnaire has been developed for controlling users of nonstationary gamma radiography devices. It is aimed at obtaining information about the weak points according to radiation protection and to give guidance in performing such controls by the respective radiation protection officers. The questionnaire is included 1. Feasibility of isotachochromatography as a method for the preparative separation of weak acids and weak bases. I. Theoretical considerations NARCIS (Netherlands) Kooistra, C.; Sluyterman, L.A.A.E. 1988-01-01 The fundamental equation of isotachochromatography, i.e., isotachophoresis translated into ion-exchange chromatography, has been derived for weak acids and weak bases. Weak acids are separated on strong cation exchangers and weak bases on strong anion exchangers. According to theory, the elution 2. An anisotropic diffusion approximation to thermal radiative transfer International Nuclear Information System (INIS) Johnson, Seth R.; Larsen, Edward W. 2011-01-01 This paper describes an anisotropic diffusion (AD) method that uses transport-calculated AD coefficients to efficiently and accurately solve the thermal radiative transfer (TRT) equations. By assuming weak gradients and angular moments in the radiation intensity, we derive an expression for the radiation energy density that depends on a non-local function of the opacity.
This nonlocal function is the solution of a transport equation that can be solved with a single steady-state transport sweep once per time step, and the function's second angular moment is the anisotropic diffusion tensor. To demonstrate the AD method's efficacy, we model radiation flow down a channel in 'flatland' geometry. (author) 3. Correlation between Auroral kilometric radiation and field-aligned currents International Nuclear Information System (INIS) Green, J.L.; Saflekos, N.A.; Gurnett, D.A.; Potemra, T.A. 1982-01-01 Simultaneous observations of field-aligned currents (FAC) and auroral kilometric radiation (AKR) are compared from the polar-orbiting satellites Triad and Hawkeye. The Triad observations were restricted to the evening-to-midnight local time sector (1900 to 0100 hours magnetic local time) in the northern hemisphere. This is the region in which the most intense storms of AKR are believed to originate. The Hawkeye observations were restricted to when the satellite was in the AKR emission cone in the northern hemisphere and at radial distances ≥ 7 R_E (earth radii) to avoid local propagation cutoff effects. A (R/7R_E)² normalization to the power flux measurements of the kilometric radiation from Hawkeye is used to take into account the radial dependence of this radiation and to scale all intensity measurements so that they are independent of Hawkeye's position in the emission cone. Integrated field-aligned current intensities from Triad are determined from the observed transverse magnetic field disturbances. There appears to be a weak correlation between AKR intensity and the integrated current sheet intensity of field-aligned currents. In general, as the intensity of auroral kilometric radiation increases, so does the integrated auroral-zone current sheet intensity. Statistically, the linear correlation coefficient between the log of the AKR power flux and the log of the current sheet intensity is 0.57.
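The quoted log-log correlation can be reproduced in outline as follows; the synthetic power-law data below are illustrative stand-ins for the Triad/Hawkeye measurements (all values, ranges, and noise levels are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Synthetic stand-ins: integrated current sheet intensity (A/m) and AKR
# power flux (log10 of W m^-2 Hz^-1), related by a noisy power law so that
# only a weak correlation survives, as in the cited study.
current = 10.0 ** rng.uniform(-1.5, 0.0, size=n)          # ~0.03 to 1 A/m
log_flux = -16.0 + np.log10(current) + rng.normal(0.0, 1.0, size=n)

# Pearson correlation of the logarithms (the study reports r = 0.57).
r = np.corrcoef(np.log10(current), log_flux)[0, 1]
```

Taking logarithms first is what makes the coefficient measure the strength of the underlying power-law relation rather than being dominated by the huge dynamic range of the raw fluxes.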
During weak AKR bursts (<10⁻¹⁸ W m⁻² Hz⁻¹), Triad always observed weak FACs (<0.6 A m⁻¹), and when Triad observed large FACs (≥ 0.6 A m⁻¹), the AKR intensity from Hawkeye was moderately intense (10⁻¹⁵ to 10⁻¹⁴ W m⁻² Hz⁻¹) to intense (>10⁻¹⁴ W m⁻² Hz⁻¹). It is not clear from these preliminary results what the exact role is that auroral zone field-aligned currents play in the generation or amplification of auroral kilometric radiation 4. Radiation and waste safety: Strengthening national capabilities International Nuclear Information System (INIS) Barretto, P.; Webb, G.; Mrabit, K. 1997-01-01 For many years, the IAEA has been collecting information on national infrastructures for assuring safety in applications of nuclear and radiation technologies. For more than a decade, from 1984-95, information relevant to radiation safety particularly was obtained through more than 60 expert missions undertaken by Radiation Protection Advisory Teams (RAPATs) and follow-up technical visits and expert missions. The RAPAT programme documented major weaknesses and the reports provided useful background for preparation of national requests for IAEA technical assistance. Building on this experience and subsequent policy reviews, the IAEA took steps to more systematically evaluate the needs for technical assistance in areas of nuclear and radiation safety. The outcome was the development of an integrated system designed to more closely assess national priorities and needs for upgrading their infrastructures for radiation and waste safety 5. Riemann Geometric Color-Weak Compensation for Individual Observers OpenAIRE Kojima, Takanori; Mochizuki, Rika; Lenz, Reiner; Chao, Jinhui 2014-01-01 We extend a method for color weak compensation based on the criterion of preservation of subjective color differences between color normal and color weak observers presented in [2].
We introduce a new algorithm for color weak compensation using local affine maps between color spaces of color normal and color weak observers. We show how to estimate the local affine map and how to determine correspondences between the origins of local coordinates in color spaces of color normal and color weak ob...

6. Expression of telomerase reverse transcriptase in radiation-induced chronic human skin ulcer
International Nuclear Information System (INIS)
Zhao Po; Li Zhijun; Lu Yali; Zhong Mei; Gu Qingyang; Wang Dewen
2001-01-01
Objective: To investigate the expression of the catalytic subunit of telomerase, telomerase reverse transcriptase (TRT), and the possible relationship between TRT and cancer transformation or poor healing in radiation-induced chronic ulcer of human skin. Methods: Rabbit antibody against human TRT and the SP immunohistochemical method were used to detect TRT expression in 24 cases of formalin-fixed, paraffin-embedded human skin chronic ulcer tissues induced by radiation, 5 cases of normal skin, 2 of burned skin, and 8 of carcinoma. Results: The positive rate for TRT was 58.3% (14/24) in chronic radiation ulcers, of which the strongly positive rate was 41.7% (10/24) and the weakly positive 16.7% (4/24); it was 0% in normal (0/5) and burned skin (0/2), and 100% in carcinoma (8/8). Strongly positive expression of TRT was observed almost always in the cytoplasm and nucleus of squamous epithelial cells of proliferative epidermis, but negative and partly weakly positive expression in the smooth muscles, endothelia of small blood vessels and capillaries, and fibroblasts. Chronic inflammatory cells, plasmacytes and lymphocytes also showed weakly positive staining for TRT. Conclusion: TRT expression could be involved in the malignant transformation of chronic radiation ulcer into squamous carcinoma, and in the poor healing caused by sclerosis of small blood vessels and lack of granulation tissue consisting of capillaries and fibroblasts.

7.
Weak nonlinear matter waves in a trapped two-component Bose-Einstein condensate
International Nuclear Information System (INIS)
Yong Wenmei; Xue Jukui
2008-01-01
The dynamics of weak nonlinear matter solitary waves in two-component Bose-Einstein condensates (BEC) with a cigar-shaped external potential are investigated analytically by a perturbation method. In the small-amplitude limit, the two components can be decoupled and the dynamics of the solitary waves are governed by a variable-coefficient Korteweg-de Vries (KdV) equation. The reduction to the KdV equation may be useful for understanding the dynamics of nonlinear matter waves in two-component BEC. The analytical expressions for the evolution of the soliton, the emitted radiation profiles and the soliton oscillation frequency are also obtained.

8. Classical field approach to quantum weak measurements.
Science.gov (United States)
Dressel, Justin; Bliokh, Konstantin Y; Nori, Franco
2014-03-21
By generalizing the quantum weak measurement protocol to the case of quantum fields, we show that weak measurements probe an effective classical background field that describes the average field configuration in the spacetime region between pre- and postselection boundary conditions. The classical field is itself a weak value of the corresponding quantum field operator and satisfies equations of motion that extremize an effective action. Weak measurements perturb this effective action, producing measurable changes to the classical field dynamics. As such, weakly measured effects always correspond to an effective classical field. This general result explains why these effects appear to be robust for pre- and postselected ensembles, and why they can also be measured using classical field techniques that are not weak for individual excitations of the field.

9. Chronic ionizing radiation exposure as a tumor promoter in mouse skin
International Nuclear Information System (INIS)
Mitchel, R.E.J.; Trivedi, A.
1992-01-01
We have tested chronic exposure to ⁹⁰Y beta-radiation as a tumor promoter in mouse skin previously exposed to a chemical tumor initiator. Three different tests of radiation as a stage I tumor promoter, in skin subsequently given chemical stage II promotion, all indicated that the beta-radiation acted as a weak stage I skin tumor promoter. It showed no action as either a stage II or complete tumor promoter. (author)

10. Classical theory of the Kumakhov radiation in axial channeling
International Nuclear Information System (INIS)
Khokonov, M.K.; Komarov, F.F.; Telegin, V.I.
1984-01-01
The paper considers radiation of ultrarelativistic electrons in axial channeling, initially predicted by Kumakhov. The consideration is based on the results of the solution of the Fokker-Planck equation. The spectral-angular characteristics of the Kumakhov radiation in thick single crystals are calculated. It is shown that in heavy single crystals the energy losses to radiation can amount to a considerable portion of the initial beam energy. The possibility of a sharp increase of radiation due to a decrease of crystal temperature is discussed. It is shown that the radiation intensity in axial channeling is weakly dependent on the initial angle of electron entrance into the channel if this angle stays within the limits of the critical one. (author)

11. Peripheral facial weakness (Bell's palsy).
Science.gov (United States)
Basić-Kes, Vanja; Dobrota, Vesna Dermanović; Cesarik, Marijan; Matovina, Lucija Zadro; Madzar, Zrinko; Zavoreo, Iris; Demarin, Vida
2013-06-01
Peripheral facial weakness is facial nerve damage that results in muscle weakness on one side of the face. It may be idiopathic (Bell's palsy) or may have a detectable cause. Almost 80% of peripheral facial weakness cases are primary and the rest are secondary.
The most frequent causes of secondary peripheral facial weakness are systemic viral infections, trauma, surgery, diabetes, local infections, tumor, immune disorders, drugs, degenerative diseases of the central nervous system, etc. The diagnosis relies upon the presence of typical signs and symptoms, blood chemistry tests, cerebrospinal fluid investigations, nerve conduction studies and neuroimaging methods (cerebral MRI, x-ray of the skull and mastoid). Treatment of secondary peripheral facial weakness is based on therapy for the underlying disorder, unlike the treatment of Bell's palsy, which is controversial due to the lack of large, randomized, controlled, prospective studies. There are some indications that steroids or antiviral agents are beneficial, but there are also studies that show no beneficial effect. Additional treatments include eye protection, physiotherapy, acupuncture, botulinum toxin, or surgery. Bell's palsy has a benign prognosis, with complete recovery in about 80% of patients; 15% experience some degree of permanent nerve damage and severe consequences remain in 5% of patients.

12. Weakly oval electron lense
International Nuclear Information System (INIS)
Daumenov, T.D.; Alizarovskaya, I.M.; Khizirova, M.A.
2001-01-01
A method of obtaining a weakly oval electrical field generated by an axially-symmetrical field is shown. Such a system may be designed with the help of cylindrical coaxial electrodes with a built-in quadrupole duplet. A distinctive feature of this weakly oval lens is that it permits both mechanical and electronic adjustment. Such a lens can be useful for the elimination of near-axis astigmatism in an electron-optical system.

13. Weak decays of stable particles
International Nuclear Information System (INIS)
Brown, R.M.
1988-09-01
In this article we review recent advances in the field of weak decays and consider their implications for quantum chromodynamics (the theory of strong interactions) and electroweak theory (the combined theory of electromagnetic and weak interactions), which together form the ''Standard Model'' of elementary particles. (author)

14. Weak lensing and dark energy
International Nuclear Information System (INIS)
Huterer, Dragan
2002-01-01
We study the power of upcoming weak lensing surveys to probe dark energy. Dark energy modifies the distance-redshift relation as well as the matter power spectrum, both of which affect the weak lensing convergence power spectrum. Some dark-energy models predict additional clustering on very large scales, but this probably cannot be detected by weak lensing alone due to cosmic variance. With reasonable prior information on other cosmological parameters, we find that a survey covering 1000 sq deg down to a limiting magnitude of R=27 can impose constraints comparable to those expected from upcoming type Ia supernova and number-count surveys. This result, however, is contingent on the control of both observational and theoretical systematics. Concentrating on the latter, we find that the nonlinear power spectrum of matter perturbations and the redshift distribution of source galaxies both need to be determined accurately in order for weak lensing to achieve its full potential. Finally, we discuss the sensitivity of the three-point statistics to dark energy.

15. Ionizing radiation in tumor promotion and progression
International Nuclear Information System (INIS)
Mitchel, R.E.J.
1990-08-01
Chronic exposure to beta radiation has been tested as a tumor promoting or progressing agent. The dorsal skins of groups of 25 female SENCAR mice were chemically initiated with a single exposure to DMBA, and chronic exposure to strontium-90/yttrium-90 beta radiation was tested as a stage 1, stage 2 or complete skin tumor promoter.
Exposure of initiated mice to 0.5 gray twice a week for 13 weeks produced no papillomas, indicating no action as a complete promoter. Another similar group of animals was chemically promoted through stage 1 (with TPA), followed by 0.5 gray of beta radiation twice a week for 13 weeks. Again no papillomas developed, indicating no action of chronic radiation as a stage 2 tumor promoter. The same radiation exposure protocol in another DMBA-initiated group receiving both stage 1 and 2 chemical promotion resulted in a decrease in papilloma frequency compared to the control group receiving no beta irradiation, indicating a tumor-preventing effect of radiation at stage 2 promotion, probably by killing initiated cells. Chronic beta radiation was tested three different ways as a stage 1 tumor promoter. When compared to the appropriate control, beta radiation given after initiation as a stage 1 promoter (0.5 gray twice a week for 13 weeks), after initiation and along with a known stage 1 chemical promoter (1.0 gray twice a week for 2 weeks), or prior to initiation as a stage 1 promoter (0.5 gray twice a week for 4 weeks), each time showed a weak (∼15% stimulation) but statistically significant (p<0.01) ability to act as a stage 1 promoter. When tested as a tumor progressing agent delivered to pre-existing papillomas, beta radiation (0.5 gray twice a week for 13 weeks) increased carcinoma frequency from 0.52 to 0.68 carcinoma/animal, but this increase was not statistically significant at the 95% confidence level. We conclude that in addition to the known initiating, progressing and complete carcinogenic action of acute exposures to ionizing

16.
Radiative muon capture on hydrogen
International Nuclear Information System (INIS)
Schott, W.; Ahmad, S.; Chen, C.Q.; Gumplinger, P.; Hasinoff, M.D.; Larabee, A.J.; Sample, D.G.; Zhang, N.S.; Armstrong, D.S.; Blecher, M.; Serna-Angel, A.; Azuelos, G.; von Egidy, T.; Macdonald, J.A.; Poutissou, J.M.; Poutissou, R.; Wright, D.H.; Henderson, R.S.; McDonald, S.C.; Taylor, G.N.; Doyle, B.; Depommier, P.; Jonkmans, G.; Bertl, W.; Gorringe, T.P.; Robertson, B.C.
1991-03-01
The induced pseudoscalar coupling constant, g_P, of the weak hadronic current can be determined from the measurement of the branching ratio of radiative muon capture (RMC) on hydrogen. This rare process is being investigated in the TRIUMF RMC experiment, which is now taking data. This paper describes the experiment and indicates the status of the data analysis. (Author) 8 refs., 7 figs

17. Extensive and equivalent repair in both radiation-resistant and radiation-sensitive E. coli determined by a DNA-unwinding technique
International Nuclear Information System (INIS)
Ahnstroem, G.; George, A.M.; Cramp, W.A.
1978-01-01
The extent of strand breakage and repair in irradiated E. coli B/r and B_s-1 was studied using a DNA-unwinding technique in denaturing conditions of weak alkali. Although these two strains showed widely different responses to the lethal effects of ionizing radiation, they both had an equal capacity to repair radiation-induced breaks in DNA. Oxygen enhancement ratios for the killing of B/r and B_s-1 were respectively 4 and 2; but after repair in non-nutrient or nutrient post-irradiation conditions, the oxygen enhancement values for the residual strand breaks were always the same for the two strains.
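The oxygen enhancement ratios quoted here (4 for B/r, 2 for B_s-1) are, by the standard radiobiological definition, ratios of doses producing equal biological effect with and without oxygen. A minimal sketch of that definition; the dose values below are invented for illustration, chosen only to reproduce the quoted ratios:

```python
def oxygen_enhancement_ratio(dose_anoxic, dose_oxic):
    """OER: dose required in the absence of oxygen divided by the dose
    required in its presence to produce the same biological effect
    (e.g. the same cell-survival level)."""
    return dose_anoxic / dose_oxic

# Hypothetical doses (Gy) for equal cell killing, NOT the paper's data:
oer_br = oxygen_enhancement_ratio(280.0, 70.0)   # B/r-like strain
oer_bs1 = oxygen_enhancement_ratio(140.0, 70.0)  # B_s-1-like strain
print(oer_br, oer_bs1)  # -> 4.0 2.0, matching the quoted ratios
```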
The equal abilities of E. coli B/r and E. coli B_s-1 to remove the strand breaks measured by this weak-alkali technique has led to the suggestion that some other type of damage, to either DNA or another macromolecule, may play a major role in determining whether or not the cells survive to proliferate. (author)

18. Extensive and equivalent repair in both radiation-resistant and radiation-sensitive E. coli determined by a DNA-unwinding technique
Energy Technology Data Exchange (ETDEWEB)
Ahnstroem, G [Stockholm Univ. (Sweden)]; George, A M; Cramp, W A
1978-10-01
The extent of strand breakage and repair in irradiated E. coli B/r and B_s-1 was studied using a DNA-unwinding technique in denaturing conditions of weak alkali. Although these two strains showed widely different responses to the lethal effects of ionizing radiation, they both had an equal capacity to repair radiation-induced breaks in DNA. Oxygen enhancement ratios for the killing of B/r and B_s-1 were respectively 4 and 2; but after repair in non-nutrient or nutrient post-irradiation conditions, the oxygen enhancement values for the residual strand breaks were always the same for the two strains. The equal abilities of E. coli B/r and E. coli B_s-1 to remove the strand breaks measured by this weak-alkali technique has led to the suggestion that some other type of damage, to either DNA or another macromolecule, may play a major role in determining whether or not the cells survive to proliferate.

19. Detection of On-Chip Generated Weak Microwave Radiation Using Superconducting Normal-Metal SET
Directory of Open Access Journals (Sweden)
Behdad Jalali-Jafari
2016-01-01
Full Text Available
The present work addresses quantum interaction phenomena of microwave radiation with a single-electron tunneling system.
For this study, an integrated circuit is implemented, combining on the same chip a Josephson-junction (Al/AlO_x/Al) oscillator and a single-electron transistor (SET) with a superconducting island (Al) and normal-conducting leads (AuPd). The transistor is demonstrated to operate as a very sensitive photon detector, sensing down to a few tens of photons per second in the microwave frequency range around f ∼ 100 GHz. On the other hand, the Josephson oscillator, realized as a two-junction SQUID and coupled to the detector via a coplanar transmission line (Al), is shown to provide a tunable source of microwave radiation: controllable variations in power or in frequency were accompanied by significant changes in the detector output when applying magnetic flux or adjusting the voltage across the SQUID, respectively. It was also shown that the effect of substrate-mediated phonons, generated by our microwave source, on the detector output was negligibly small.

20. Weak self-adjoint differential equations
International Nuclear Information System (INIS)
Gandarias, M L
2011-01-01
The concepts of self-adjoint and quasi self-adjoint equations were introduced by Ibragimov (2006 J. Math. Anal. Appl. 318 742-57; 2007 Arch. ALGA 4 55-60). In Ibragimov (2007 J. Math. Anal. Appl. 333 311-28), a general theorem on conservation laws was proved. In this paper, we generalize the concept of self-adjoint and quasi self-adjoint equations by introducing the definition of weak self-adjoint equations. We find a class of weak self-adjoint quasi-linear parabolic equations. The property of a differential equation to be weak self-adjoint is important for constructing conservation laws associated with symmetries of the differential equation. (fast track communication)

1. Test of the neoclassical theory of radiation in a weakly excited atomic system
International Nuclear Information System (INIS)
Brink, G.O.
1975-01-01
The neoclassical theory of radiation predicts that the decay rate of an excited atomic state depends on the population density of the lower state. Experimental evidence is presented here which shows that, in the case of ³⁹K, the decay rate is in agreement with the predictions of quantum electrodynamics and definitely in disagreement with the neoclassical theory.

2. Quantum discord with weak measurements
International Nuclear Information System (INIS)
Singh, Uttam; Pati, Arun Kumar
2014-01-01
Weak measurements cause small changes to quantum states, thereby opening up the possibility of new ways of manipulating and controlling quantum systems. We ask, can weak measurements reveal more quantum correlation in a composite quantum state? We prove that the weak measurement induced quantum discord, called the "super quantum discord", is always larger than the quantum discord captured by the strong measurement. Moreover, we prove the monotonicity of the super quantum discord as a function of the measurement strength, and in the limit of strong projective measurement the super quantum discord becomes the normal quantum discord. We find that unlike the normal discord, for pure entangled states, the super quantum discord can exceed the quantum entanglement. Our results provide new insights on the nature of quantum correlation and suggest that the notion of quantum correlation is not only observer dependent but also depends on how weakly one perturbs the composite system. We illustrate the key results for pure as well as mixed entangled states.
Highlights:
• Introduced the role of weak measurements in quantifying quantum correlation.
• We have introduced the notion of the super quantum discord (SQD).
• For pure entangled states, we show that the SQD exceeds the entanglement entropy.
• This shows that quantum correlation depends not only on the observer but also on the measurement strength.

3.
Radiation and platinum drug interaction
International Nuclear Information System (INIS)
Nias, A.H.W.
1985-01-01
The ideal platinum drug-radiation interaction would achieve radiosensitization of hypoxic tumour cells with the use of a dose of drug which is completely non-toxic to normal tissues. Electron-affinic agents are employed with this aim, but the commoner platinum drugs are only weakly electron-affinic. They do have a quasi-alkylating action, however, and this DNA targeting may account for the radiosensitizing effect which occurs with both pre- and post-radiation treatments. Because toxic drug dosage is usually required for this, the evidence of the biological responses to the drug and to the radiation, as well as to the combination, requires critical analysis before any claim of true enhancement, rather than simple additivity, can be accepted. The amount of enhancement will vary with both the platinum drug dose and the time interval between drug administration and radiation. Clinical schedules may produce an increase in tumour response and/or morbidity, depending upon such dose and time relationships. (author)

4. Spin effects in the weak interaction
International Nuclear Information System (INIS)
Freedman, S.J.; Chicago Univ., IL
1990-01-01
Modern experiments investigating the beta decay of the neutron and light nuclei are still providing important constraints on the theory of the weak interaction. Beta decay experiments are yielding more precise values for allowed and induced weak coupling constants and putting constraints on possible extensions to the standard electroweak model. Here we emphasize the implications of recent experiments in pinning down the strengths of the weak vector and axial vector couplings of the nucleon.

5. Weak interactions in astrophysics and cosmology
International Nuclear Information System (INIS)
Taylor, R.J.
1977-01-01
There are many problems in astrophysics and cosmology in which the form of the weak interactions, their strength, or the number of weakly interacting particles is very important. It is possible that astronomical observations may give some information about the weak interactions. In the conventional hot big bang cosmological theory, the number of leptons with associated neutrinos influences the speed of expansion of the Universe and the chemical composition of pre-galactic matter. The strength of the weak interaction, as exemplified by the half-life of the neutron, has a similar effect. In addition, the form of the weak interactions will determine how effectively neutrino viscosity can smooth out irregularities in the early Universe. Because neutrinos have a very long mean free path, they can escape from the central region of stars, whereas photons can only escape from the surface. In late stages of stellar evolution, neutrino luminosity is often believed to be much greater than photon luminosity. This can both accelerate the cooling of dying stars and influence the stages of stellar evolution leading to the onset of supernova explosions. In pre-supernovae it is even possible that very dense stellar cores can be opaque to neutrinos and that the absorption or scattering of neutrinos can cause the explosion. These results depend crucially on the form of the weak interactions, with the discovery of neutral currents being very important. Until the solar neutrino experiment has been reconciled with theory, the possible role of uncertainties in the weak interactions cannot be ignored. (author)

6.
Momentum Broadening in Weakly Coupled Quark-Gluon Plasma (with a view to finding the quasiparticles within liquid quark-gluon plasma)
CERN Document Server
D'Eramo, Francesco; Liu, Hong; Rajagopal, Krishna
2013-01-01
We calculate P(k⊥), the probability distribution for an energetic parton that propagates for a distance L through a medium without radiating to pick up transverse momentum k⊥, for a medium consisting of weakly coupled quark-gluon plasma. We use full or HTL self-energies in appropriate regimes, resumming each in order to find the leading large-L behavior. The jet quenching parameter q̂ is the second moment of P(k⊥), and we compare our results to other determinations of this quantity in the literature, although we emphasize the importance of looking at P(k⊥) in its entirety. We compare our results for P(k⊥) in weakly coupled quark-gluon plasma to expectations from holographic calculations that assume a plasma that is strongly coupled at all length scales. We find that the shape of P(k⊥) at modest k⊥ may not be very different in weakly coupled and strongly coupled plasmas, but we find that P(k⊥) must be parametrically larger in a weakly coupled plasma than in a strongl...

7. Low-energy Electro-weak Reactions
International Nuclear Information System (INIS)
Gazit, Doron
2012-01-01
Chiral effective field theory (EFT) provides a systematic and controlled approach to low-energy nuclear physics. Here, we use chiral EFT to calculate low-energy weak Gamow-Teller transitions. We put special emphasis on the role of two-body (2b) weak currents within the nucleus and discuss their applications in predicting physical observables.

8. Indications of a ΔI=1/2 rule in the strong coupling regime
International Nuclear Information System (INIS)
Angus, I.G.
1988-01-01
The authors attempt to understand the ΔI = 1/2 pattern of the nonleptonic weak decays of the kaons.
The calculation scheme employed is the Strong Coupling Expansion of lattice QCD. Kogut-Susskind fermions are used in the Hamiltonian formalism. The authors describe in detail the methods used to expedite this calculation, all of which was done by computer algebra. The final result is very encouraging. Even though an exact interpretation is clouded by the presence of irrelevant operators and questions of lattice artifacts, a signal of the ΔI = 1/2 rule appears to be observable. With an appropriate choice of the one free parameter, enhancements greater than those observed experimentally can be obtained. The authors point out a number of surprising results which turn up in the course of the calculation.

9. An evaluation of the uranium mine radiation safety course
International Nuclear Information System (INIS)
1984-07-01
The report evaluates the Uranium Mine Radiation Safety Course, focussing on the following areas: effectiveness of the course; course content; instructional quality; course administration. It notes strengths and weaknesses in these areas and offers preliminary recommendations for future action.

10. Instrumental systematics and weak gravitational lensing
International Nuclear Information System (INIS)
Mandelbaum, R.
Robust weak measurements on finite samples International Nuclear Information System (INIS) Tollaksen, Jeff 2007-01-01 A new weak measurement procedure is introduced for finite samples which yields accurate weak values that are outside the range of eigenvalues and which do not require an exponentially rare ensemble. This procedure provides a unique advantage in the amplification of small nonrandom signals by minimizing uncertainties in determining the weak value and by minimizing sample size. This procedure can also extend the strength of the coupling between the system and measuring device to a new regime 12. On weakly D-differentiable operators DEFF Research Database (Denmark) Christensen, Erik 2016-01-01 Let DD be a self-adjoint operator on a Hilbert space HH and aa a bounded operator on HH. We say that aa is weakly DD-differentiable, if for any pair of vectors ξ,ηξ,η from HH the function 〈eitDae−itDξ,η〉〈eitDae−itDξ,η〉 is differentiable. We give an elementary example of a bounded operator aa......, such that aa is weakly DD-differentiable, but the function eitDae−itDeitDae−itD is not uniformly differentiable. We show that weak DD-differentiability may be characterized by several other properties, some of which are related to the commutator (Da−aD)... 13. Weakly distributive modules. Applications to supplement submodules Indian Academy of Sciences (India) Abstract. In this paper, we define and study weakly distributive modules as a proper generalization of distributive modules. We prove that, weakly distributive supplemented modules are amply supplemented. In a weakly distributive supplemented module every submodule has a unique coclosure. This generalizes a result of ... 14. Geometric phase topology in weak measurement Science.gov (United States) Samlan, C. T.; Viswanathan, Nirmal K. 2017-12-01 The geometric phase visualization proposed by Bhandari (R Bhandari 1997 Phys. Rep. 
281 1-64) in the ellipticity-ellipse orientation basis of the polarization ellipse of light is implemented to understand the geometric aspects of weak measurement. The weak interaction of a pre-selected state, achieved via the spin-Hall effect of light (SHEL), results in a spread in the polarization ellipticity (η) or ellipse orientation (χ) depending on the resulting spatial or angular shift, respectively. The post-selection leads to the projection of the η spread in the complementary χ basis, resulting in the appearance of a geometric phase with helical phase topology in the η-χ parameter space. By representing the weak measurement on the Poincaré sphere and using Jones calculus, the complex weak value and the geometric phase topology are obtained. This deeper understanding of the weak measurement process enabled us to explore the technique's capabilities maximally, as demonstrated via SHEL in two examples: external reflection at a glass-air interface and transmission through a tilted half-wave plate.

15. Neurolysis and myocutaneous flap for radiation induced brachial plexus neuropathy
International Nuclear Information System (INIS)
Hirachi, Kazuhiko; Minami, Akio; Kato, Hiroyuki; Nishio, Yasuhiko; Ohnishi, Nobuki
1998-01-01
Surgical treatment for radiation-induced brachial plexus neuropathy is difficult. We followed 9 patients with radiation-induced brachial plexus neuropathy who were surgically treated with neurolysis and myocutaneous flap coverage. Their ages ranged from 29 to 72 years. Their diagnoses were breast cancer in 6 patients, lingual cancer in 1, thyroid cancer in 1 and malignant lymphoma in 1. The total dose of radiation ranged from 44 to 240 Gy. The interval from radiation therapy to our surgery ranged from 1 to 18 years (mean 6.7 years). Chief complaints were dysesthesia in 9 patients, motor weakness in 7 patients and dull ache in the scarred irradiated skin in 7 patients.
Preoperative neural function was slight palsy in 1, moderate palsy in 5 and complete palsy in 3. In surgical treatment, neurolysis of the brachial plexus was done and it was covered by a latissimus dorsi myocutaneous flap. We evaluated dysesthesia and motor recovery after treatment for the neuropathy. Follow-up periods ranged from 1 to 11 years (average 5 years). Dysesthesia improved in 6 patients and got worse in 3 patients. Motor weakness recovered in only 2 patients and got worse in 7 patients. From our results, the intolerable dysesthesia which was the first complaint of these patients improved, but motor function did not recover. Our treatment was thought to be effective for extraneural factors, such as compression neuropathy caused by scar formation and poor vascularity, but it was not effective for intraneural damage caused by radiation therapy. (author)

16. Research on international cooperation for nuclear and radiation safety
International Nuclear Information System (INIS)
Cheng Jianxiu
2013-01-01
This paper describes the importance of and related requirements for international cooperation on nuclear and radiation safety, analyzes the current status, situation and challenges faced, as well as the existing weaknesses and needs for improvement, and gives some proposals for reference. (author)

17. Higgs boson production in association with a photon via weak boson fusion
CERN Document Server
Arnold, Ken; Jäger, Barbara; Zeppenfeld, Dieter
2011-01-01
We present next-to-leading order QCD corrections to Higgs production in association with a photon via weak boson fusion at a hadron collider. Utilizing the fully flexible parton-level Monte Carlo program VBFNLO, we find small overall corrections, while the shape of some distributions is sensitive to radiative contributions in certain regions of phase space. Residual scale uncertainties at next-to-leading order are at the few-percent level.
Being perturbatively well under control and exhibiting kinematic features that allow it to be distinguished from potential backgrounds, this process can serve as a valuable source of information on the Hbb̄ Yukawa coupling.

18. Diagnosis of 20 cases with chronic radiation syndrome
International Nuclear Information System (INIS)
Zhang, Hongshou; Shen, Zhezhong; Wen Zhigen; Xie, Xiaoping; Ni, Jinxian
1984-01-01
Twenty cases with chronic radiation syndrome were diagnosed in our department during 1957-1980. All except one were radiologists, and eight of them had worked in radiological departments for over 20 years. Owing to the use of out-dated x-ray machines as well as radium sources without adequate protection, all these cases were apparently overexposed to radiation. They presented the following signs and symptoms of chronic radiation syndrome: excitability, palpitation, fatigue, general weakness, loss of weight, oversweating accompanied by a tendency toward lowered metabolism, peripheral blood cell changes, and chromosome aberrations. The diagnosis of this syndrome was based on a definitive professional and over-exposure history, the clinical picture and abnormal laboratory findings. (author)

19. Compound Semiconductor Radiation Detector
International Nuclear Information System (INIS)
Kim, Y. K.; Park, S. H.; Lee, W. G.; Ha, J. H.
2005-01-01
In 1945, Van Heerden measured α, β and γ radiation with a cooled AgCl crystal. It was the first radiation measurement using a compound semiconductor detector. Since then the compound semiconductor has been extensively studied as a radiation detector. Generally, radiation detectors can be divided into gas detectors, scintillators and semiconductor detectors. The semiconductor detector has advantages compared to the other radiation detectors. Since the density of the semiconductor detector is higher than that of the gas detector, the semiconductor detector can be made compact to measure high-energy radiation.
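Another advantage of semiconductor detectors is statistical: an absorbed quantum creates many electron-hole pairs, and their number sets the statistical limit on energy resolution. A rough sketch of that standard Fano-statistics estimate; the pair-creation energy and Fano factor below are textbook values for silicon, not figures from this paper:

```python
import math

def fano_limited_fwhm_eV(energy_eV, eps_eV=3.6, fano=0.115):
    """Statistical (Fano-limited) FWHM, in eV, of a semiconductor detector.

    eps_eV : mean energy to create one electron-hole pair (~3.6 eV in Si);
    fano   : Fano factor (~0.1 in Si). Illustrative textbook values.
    """
    n_pairs = energy_eV / eps_eV             # mean number of e-h pairs
    sigma_pairs = math.sqrt(fano * n_pairs)  # sub-Poissonian fluctuation
    return 2.355 * sigma_pairs * eps_eV      # convert sigma to FWHM in eV

# Statistical resolution limit for a 60 keV gamma ray absorbed in silicon:
print(f"{fano_limited_fwhm_eV(60e3):.0f} eV FWHM")
```

The large pair count (tens of thousands per absorbed gamma) is what keeps this statistical limit in the few-hundred-eV range, far better than a typical scintillator, whose photoelectron count per event is orders of magnitude smaller.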
In a scintillator, the radiation is measured in a two-step process: the radiation is first converted into photons inside the scintillator, and these photons are then converted into electrons by a photo-detector. In a semiconductor radiation detector, however, the radiation is measured in a one-step process. Electron-hole pairs are generated by the radiation interaction inside the semiconductor detector, and these electrons and holes are collected directly to form the signal. The energy resolution of a semiconductor detector is generally better than that of a scintillator. At present, the semiconductors commonly used as radiation detectors are Si and Ge. However, these semiconductor detectors have weak points: one needs thick material to measure high-energy radiation because of the relatively low atomic number of the constituent material, and in the case of Ge, the dark current of the detector is large at room temperature because of the small band-gap energy. Recently, compound semiconductor detectors have been extensively studied to overcome these problems. In this paper, we briefly summarize recent research topics on compound semiconductor detectors and introduce the research activities of our group. 20. Weak strange particle production: advantages and difficulties International Nuclear Information System (INIS) Angelescu, Tatiana; Baker, O.K. 2002-01-01 Electromagnetic strange particle production developed at Jefferson Laboratory was an important source of information on strange particle electromagnetic form factors and on induced and transferred polarization. The high quality of the beam and the detection techniques involved could be an argument for detecting strange particles in weak interactions and for answering questions about cross sections, weak form factors and neutrino properties, which have not been investigated yet. 
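To make the one-step detection argument in the compound-semiconductor record above concrete: the signal is the collected electron-hole pairs, so the pair count and the Fano-limited energy resolution follow directly from the pair-creation energy. A minimal sketch, assuming typical textbook values for the pair-creation energies and the Fano factor (these numbers are illustrative, not taken from the record):

```python
import math

def n_pairs(E_eV, eps_eV):
    """Mean number of electron-hole pairs for deposited energy E."""
    return E_eV / eps_eV

def fwhm_eV(E_eV, eps_eV, fano):
    """Fano-limited FWHM in eV: 2.355 * sqrt(F * eps * E)."""
    return 2.355 * math.sqrt(fano * eps_eV * E_eV)

E = 662e3  # 662 keV gamma (Cs-137), in eV

# Assumed, typical pair-creation energies: Si ~3.6 eV/pair, CdTe ~4.4 eV/pair;
# Fano factor ~0.12 for both (illustrative only).
si = fwhm_eV(E, 3.6, 0.12)
cdte = fwhm_eV(E, 4.4, 0.12)

print(round(n_pairs(E, 3.6)))  # ~1.8e5 pairs in Si
print(si < cdte)               # smaller eps -> better intrinsic resolution
```

A smaller pair-creation energy yields more pairs per deposited keV and hence smaller relative statistical fluctuations, which is one reason semiconductor detectors out-resolve scintillators.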
The paper analyses some aspects related to weak lambda production and detection with the Hall C facilities at Jefferson Laboratory, and the limitations in measuring the weak interaction quantities. (authors) 1. Study of weak interaction with p-p colliding beam International Nuclear Information System (INIS) Arafune, Jiro; Sugawara, Hirotaka 1975-01-01 The weak interaction in the energy range of the TRISTAN project is discussed. The cross-section for weak boson production in p-p reactions was calculated with the parton model; observation of the weak boson may be possible. The production rate of the neutral weak boson was also estimated on the basis of the Weinberg model, and was almost the same as that of the charged weak boson. A method for observing the weak boson is suggested: the direct method is the observation of the lepton pair from the decay of the neutral weak boson. The spectrum of the decay products is expected to show a characteristic feature that demonstrates the existence of the weak boson. The weak interaction makes a larger contribution than the electromagnetic interaction at large momentum transfer; when the momentum transfer is larger than 60 GeV/c, the contribution of the weak interaction is dominant over the others. Therefore, experiments at high energy will give information concerning the relations among the interactions of elementary particles. The possibility of studying the Higgs scalar meson is also discussed. (Kato, T.) 2. Neutral-current weak interactions at an EIC Energy Technology Data Exchange (ETDEWEB) Zhao, Y.X.; Deshpande, A.; Kumar, K.S.; Riordan, S. [Stony Brook University, Department of Physics and Astronomy, Stony Brook, NY (United States); Huang, J. [Brookhaven National Lab, Physics Department, Upton, NY (United States) 2017-03-15 A simulation study of measurements of neutral current structure functions of the nucleon at the future high-energy and high-luminosity polarized electron-ion collider (EIC) is presented. 
A new series of γ-Z interference structure functions, F{sub 1}{sup γZ}, F{sub 3}{sup γZ}, g{sub 1}{sup γZ}, g{sub 5}{sup γZ} become accessible via parity-violating asymmetries in polarized electron-nucleon deep inelastic scattering (DIS). Within the context of the quark-parton model, they provide a unique and, in some cases, yet-unmeasured combination of unpolarized and polarized parton distribution functions. The uncertainty projections for these structure functions using electron-proton collisions are considered for various EIC beam energy configurations. Also presented are uncertainty projections for measurements of the weak mixing angle sin{sup 2} θ{sub W} using electron-deuteron collisions which cover a much higher Q{sup 2} than that accessible in fixed target measurements. QED and QCD radiative corrections and effects of detector smearing are included with the calculations. (orig.) 3. Radiative muon capture on carbon, oxygen and calcium International Nuclear Information System (INIS) Armstrong, D.S.; Ahmad, S.; Burnham, R.A.; Gorringe, T.P.; Hasinoff, M.D.; Larabee, A.J.; Waltham, C.E.; Azuelos, G.; Macdonald, J.A.; Numao, T.; Poutissou, J.M.; Clifford, E.T.H.; Summhammer, J.; Blecher, M.; Wright, D.H.; Depommier, P.; Poutissou, R.; Mes, H.; Robertson, B.C. 1990-05-01 The photon energy spectra from radiative muon capture on 12 C, 16 O and 40 Ca have been measured using a time projection chamber as a pair spectrometer. The branching ratio for radiative muon capture is sensitive to g p , the induced pseudoscalar coupling constant of the weak interaction. Expressed in terms of the axial-vector weak coupling constant g a , values of g p /g a = 5.7 ± 0.8 and g p /g a = 7.3 ± 0.9 are obtained for 40 Ca and 16 O respectively, from comparison with phenomenological calculations of the nuclear response. From comparison with microscopic calculations, values of g p /g a = 4.6 ± 1.8, 13.6 +1.6 -1.9 and 16.2 +1.3 -0.7 for 40 Ca, 16 O and 12 C, respectively, are obtained. 
The microscopic results are suggestive of a renormalization of the nucleonic form factors within the nucleus. (Author) (78 refs., 14 tabs, 22 figs.) 4. Weak convergence and uniform normalization in infinitary rewriting DEFF Research Database (Denmark) Simonsen, Jakob Grue 2010-01-01 the starkly surprising result that for any orthogonal system with finitely many rules, the system is weakly normalizing under weak convergence iff it is strongly normalizing under weak convergence iff it is weakly normalizing under strong convergence iff it is strongly normalizing under strong...... convergence. As further corollaries, we derive a number of new results for weakly convergent rewriting: Systems with finitely many rules enjoy unique normal forms, and acyclic orthogonal systems are confluent. Our results suggest that it may be possible to recover some of the positive results for strongly... 5. Measurement of the Weak Mixing Angle in Moller Scattering Energy Technology Data Exchange (ETDEWEB) Klejda, B. 2005-01-28 The weak mixing parameter, sin{sup 2} {theta}{sub w}, is one of the fundamental parameters of the Standard Model. Its tree-level value has been measured with high precision at energies near the Z{sup 0} pole; however, due to radiative corrections at the one-loop level, the value of sin{sup 2} {theta}{sub w} is expected to change with the interaction energy. As a result, a measurement of sin{sup 2} {theta}{sub w} at low energy (Q{sup 2} << m{sub Z}, where Q{sup 2} is the momentum transfer and m{sub Z} is the Z boson mass), provides a test of the Standard Model at the one-loop level, and a probe for new physics beyond the Standard Model. 
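The sensitivity argument in the Moller-scattering record can be illustrated numerically. The record's tree-level coupling g{sub ee} = (1/4 - sin{sup 2} {theta}{sub w}) is a small difference of two comparable numbers, so a few-percent running of sin{sup 2} {theta}{sub w} produces a change of tens of percent in the coupling (and hence in the asymmetry). The two sin{sup 2} {theta}{sub w} values below are illustrative assumptions, not E158 inputs:

```python
def g_ee(sin2thw):
    """Tree-level Moller coupling: g_ee = 1/4 - sin^2(theta_W)."""
    return 0.25 - sin2thw

s2_pole = 0.2312  # illustrative effective value near the Z pole (assumed)
s2_low = 0.2381   # illustrative low-Q^2 value after running (assumed)

rel_shift_s2 = (s2_low - s2_pole) / s2_pole
rel_shift_g = (g_ee(s2_low) - g_ee(s2_pole)) / g_ee(s2_pole)

print(f"{rel_shift_s2:.1%}")  # a ~3% shift in sin^2(theta_W) ...
print(f"{rel_shift_g:.1%}")   # ... is amplified to tens of percent in g_ee
```

This amplification is why a low-energy parity-violation measurement is such a sensitive probe of the running of the mixing angle.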
One way of obtaining sin{sup 2} {theta}{sub w} at low energy is from measuring the left-right, parity-violating asymmetry in electron-electron (Moeller) scattering: A{sub PV} = ({sigma}{sub R} - {sigma}{sub L})/({sigma}{sub R} + {sigma}{sub L}), where {sigma}{sub R} and {sigma}{sub L} are the cross sections for right- and left-handed incident electrons, respectively. The parity violating asymmetry is proportional to the pseudo-scalar weak neutral current coupling in Moeller scattering, g{sub ee}. At tree level g{sub ee} = (1/4 - sin{sup 2} {theta}{sub w}). A precision measurement of the parity-violating asymmetry in Moeller scattering was performed by Experiment E158 at the Stanford Linear Accelerator Center (SLAC). During the experiment, {approx}50 GeV longitudinally polarized electrons scattered off unpolarized atomic electrons in a liquid hydrogen target, corresponding to an average momentum transfer Q{sup 2} {approx} 0.03 (GeV/c){sup 2}. The tree-level prediction for A{sub PV} at such energy is {approx}300 ppb. However one-loop radiative corrections reduce its value by {approx}40%. This document reports the E158 results from the 2002 data collection period. The parity-violating asymmetry was found to be A{sub PV} = -160 {+-} 21 (stat.) {+-} 17 (syst.) ppb, which represents the first observation of a parity-violating asymmetry in Moeller 6. Microwave-assisted Weak Acid Hydrolysis of Proteins Directory of Open Access Journals (Sweden) Miyeong Seo 2012-06-01 Full Text Available Myoglobin was hydrolyzed by microwave-assisted weak acid hydrolysis with 2% formic acid at 37 °C, 50 °C, and 100 °C for 1 h. The most effective hydrolysis was observed at 100 °C. Hydrolysis products were investigated using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Cleavage occurred predominantly at the C-termini of aspartyl residues. For comparison, weak acid hydrolysis was also performed in boiling water for 20, 40, 60, and 120 min. 
A 60-min weak acid hydrolysis in boiling water yielded results similar to those of a 60-min microwave-assisted weak acid hydrolysis at 100 °C. These results strongly suggest that microwave irradiation has no notable enhancement effect on acid hydrolysis of proteins and that temperature is the major factor that determines the effectiveness of weak acid hydrolysis. 7. Reduced growth of soybean seedlings after exposure to weak microwave radiation from GSM 900 mobile phone and base station. Science.gov (United States) Halgamuge, Malka N; Yak, See Kye; Eberhardt, Jacob L 2015-02-01 The aim of this work was to study possible effects of environmental radiation pollution on plants. The association between cellular telephone (short duration, higher amplitude) and base station (long duration, very low amplitude) radiation exposure and the growth rate of soybean (Glycine max) seedlings was investigated. Soybean seedlings, pre-grown for 4 days, were exposed in a gigahertz transverse electromagnetic cell for 2 h to global system for mobile communication (GSM) mobile phone pulsed radiation or continuous wave (CW) radiation at 900 MHz with amplitudes of 5.7 and 41 V m(-1), and outgrowth was studied one week after exposure. The exposure to higher amplitude (41 V m(-1)) GSM radiation resulted in diminished outgrowth of the epicotyl. The exposure to lower amplitude (5.7 V m(-1)) GSM radiation did not influence outgrowth of epicotyl, hypocotyls, or roots. The exposure to higher amplitude CW radiation resulted in reduced outgrowth of the roots whereas lower CW exposure resulted in a reduced outgrowth of the hypocotyl. Soybean seedlings were also exposed for 5 days to an extremely low level of radiation (GSM 900 MHz, 0.56 V m(-1)) and outgrowth was studied 2 days later. Growth of epicotyl and hypocotyl was found to be reduced, whereas the outgrowth of roots was stimulated. 
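The weak-acid hydrolysis record above reports that cleavage occurs predominantly at the C-termini of aspartyl residues. A minimal sketch of predicting the resulting fragments from a one-letter protein sequence (the example sequence is made up for illustration):

```python
def asp_cleave(seq):
    """Split a one-letter protein sequence after each aspartate (D),
    mimicking the predominant cleavage reported for weak acid hydrolysis."""
    fragments, start = [], 0
    for i, aa in enumerate(seq):
        if aa == "D":
            fragments.append(seq[start:i + 1])
            start = i + 1
    if start < len(seq):
        fragments.append(seq[start:])
    return fragments

print(asp_cleave("MKDAADGK"))  # ['MKD', 'AAD', 'GK']
```

Comparing such predicted fragment masses against an observed MALDI-TOF spectrum is the usual way this cleavage rule is verified.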
Our findings indicate that the observed effects were significantly dependent on field strength as well as amplitude modulation of the applied field. © 2015 Wiley Periodicals, Inc. 8. Diagnosis of functional (psychogenic) paresis and weakness Directory of Open Access Journals (Sweden) Savkov V.S. 2018-03-01 Full Text Available Functional (conversion) neurological symptoms represent one of the most common situations faced by neurologists in their everyday practice. Among them, acute or subacute functional weakness may mimic very prevalent conditions such as stroke or traumatic injury. In the diagnosis of functional weakness, although elements of the history are helpful, physical signs are often of crucial importance in the diagnosis, and positive signs are as important as the absence of signs of disease. Hence, accurate and reliable positive signs of functional weakness are valuable for obtaining timely diagnosis and treatment, making it possible to avoid unnecessary or invasive tests and procedures up to thrombolysis. Functional weakness commonly presents as weakness of an entire limb, paraparesis, or hemiparesis, with observable or demonstrable inconsistencies and non-anatomic accompaniments. Documentation of limb movements during sleep, the arm drop test, Babinski's trunk-thigh test, Hoover tests, the Sonoo abductor test, and various dynamometer tests can provide useful bedside diagnostic information on functional weakness. We therefore present here a brief overview of the positive neurological signs of functional weakness available, both in the lower and in the upper limbs; but none should be used in isolation, and each must be interpreted in the overall context of the presentation. It should be borne in mind that a patient may have both a functional and an organic disorder. 9. 
Management information system applied to radiation protection services International Nuclear Information System (INIS) Grossi, Pablo Andrade; Souza, Leonardo Soares de; Figueiredo, Geraldo Magela; Figueiredo, Arthur 2013-01-01 An effective management information system based on technology, information and people is necessary to improve safety in all processes and operations subjected to radiation risks. The complex and multisource information flux from all radiation protection activities in nuclear organizations requires a robust tool/system to highlight the strengths and weaknesses and identify behaviors and trends in the activities requiring radiation protection programs. These organized and processed data are useful for reaching successful management and supporting human decision-making in a nuclear organization. This paper presents recent improvements on a management information system based on the radiation protection directives and regulations from the Brazilian regulatory body. This radiation protection control system is applicable to any radiation protection service or research institute subjected to Brazilian nuclear regulation and is a powerful tool for continuous management, not only indicating how the health and safety activities are going, but why they are not going as well as planned, showing up the critical points. (author) 10. Management information system applied to radiation protection services Energy Technology Data Exchange (ETDEWEB) Grossi, Pablo Andrade; Souza, Leonardo Soares de; Figueiredo, Geraldo Magela; Figueiredo, Arthur, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil) 2013-07-01 An effective management information system based on technology, information and people is necessary to improve safety in all processes and operations subjected to radiation risks. 
The complex and multisource information flux from all radiation protection activities in nuclear organizations requires a robust tool/system to highlight the strengths and weaknesses and identify behaviors and trends in the activities requiring radiation protection programs. These organized and processed data are useful for reaching successful management and supporting human decision-making in a nuclear organization. This paper presents recent improvements on a management information system based on the radiation protection directives and regulations from the Brazilian regulatory body. This radiation protection control system is applicable to any radiation protection service or research institute subjected to Brazilian nuclear regulation and is a powerful tool for continuous management, not only indicating how the health and safety activities are going, but why they are not going as well as planned, showing up the critical points. (author) 11. Signatures of dark radiation in neutrino and dark matter detectors OpenAIRE Cui, Yanou; Pospelov, Maxim; Pradler, Josef 2018-01-01 We consider the generic possibility that the Universe’s energy budget includes some form of relativistic or semi-relativistic dark radiation (DR) with nongravitational interactions with standard model (SM) particles. Such dark radiation may consist of SM singlets or a nonthermal, energetic component of neutrinos. If such DR is created at a relatively recent epoch, it can carry sufficient energy to leave a detectable imprint in experiments designed to search for very weakly interacting particl... 12. Relative entropies, suitable weak solutions, and weak-strong uniqueness for the compressible Navier–Stokes system Czech Academy of Sciences Publication Activity Database Feireisl, Eduard; Jin, B.J.; Novotný, A. 2012-01-01 Roč. 14, č. 4 (2012), s. 
717-730 ISSN 1422-6928 R&D Projects: GA ČR GA201/09/0917 Institutional research plan: CEZ:AV0Z10190503 Keywords: suitable weak solution * weak-strong uniqueness * compressible Navier-Stokes system Subject RIV: BA - General Mathematics Impact factor: 1.415, year: 2012 http://link.springer.com/article/10.1007%2Fs00021-011-0091-9 13. Role of Longwave Cloud-Radiation Feedback in the Simulation of the Madden-Julian Oscillation Science.gov (United States) Kim, Daehyun; Ahn, Min-Seop; Kang, In-Sik; Del Genio, Anthony D. 2015-01-01 The role of the cloud-radiation interaction in the simulation of the Madden-Julian oscillation (MJO) is investigated. A special focus is on the enhancement of column-integrated diabatic heating due to the greenhouse effects of clouds and moisture in the region of anomalous convection. The degree of this enhancement, the greenhouse enhancement factor (GEF), is measured at different precipitation anomaly regimes as the negative ratio of anomalous outgoing longwave radiation to anomalous precipitation. Observations show that the GEF varies significantly with precipitation anomaly and with the MJO cycle. The greenhouse enhancement is greater in weak precipitation anomaly regimes and its effectiveness decreases monotonically with increasing precipitation anomaly. The GEF also amplifies locally when convection is strengthened in association with the MJO, especially in the weak precipitation anomaly regime (less than 5 mm day(exp -1)). A robust statistical relationship is found among CMIP5 climate model simulations between the GEF and the MJO simulation fidelity. Models that simulate a stronger MJO also simulate a greater GEF, especially in the weak precipitation anomaly regime (less than 5 mm day(exp -1)). Models with a greater GEF in the strong precipitation anomaly regime (greater than 30 mm day(exp -1)) represent a slightly slower MJO propagation speed. 
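The GEF definition used in the MJO record, the negative ratio of anomalous outgoing longwave radiation (OLR) to anomalous precipitation, can be sketched directly; the anomaly values below are made up for illustration:

```python
def gef(olr_anom_wm2, precip_anom_mm_day):
    """Greenhouse enhancement factor: -OLR' / P' (the record's definition)."""
    return -olr_anom_wm2 / precip_anom_mm_day

# Made-up illustrative anomalies: enhanced convection lowers OLR (negative
# anomaly) while raising precipitation, so the GEF comes out positive, and
# it is larger in the weak-precipitation regime, as the record describes.
weak_regime = gef(olr_anom_wm2=-12.0, precip_anom_mm_day=4.0)     # 3.0
strong_regime = gef(olr_anom_wm2=-45.0, precip_anom_mm_day=35.0)  # ~1.29

print(weak_regime > strong_regime)  # True
```

Binning observed OLR and precipitation anomalies by rain-rate regime before taking this ratio reproduces the regime dependence the record emphasizes.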
Many models that lack the MJO underestimate the GEF in general and in particular in the weak precipitation anomaly regime. The results herein highlight that the cloud-radiation interaction is a crucial process for climate models to correctly represent the MJO. 14. Cosmic Dark Radiation and Neutrinos Directory of Open Access Journals (Sweden) Maria Archidiacono 2013-01-01 Full Text Available New measurements of the cosmic microwave background (CMB) by the Planck mission have greatly increased our knowledge about the universe. Dark radiation, a weakly interacting component of radiation, is one of the important ingredients in our cosmological model which is testable by Planck and other observational probes. At the moment, the possible existence of dark radiation is an unsolved question. For instance, the discrepancy between the value of the Hubble constant, H0, inferred from the Planck data and local measurements of H0 can to some extent be alleviated by enlarging the minimal ΛCDM model to include additional relativistic degrees of freedom. From a fundamental physics point of view, dark radiation is no less interesting. Indeed, it could well be one of the most accessible windows to physics beyond the standard model, for example, sterile neutrinos. Here, we review the most recent cosmological results including a complete investigation of the dark radiation sector in order to provide an overview of models that are still compatible with new cosmological observations. Furthermore, we update the cosmological constraints on neutrino physics and dark radiation properties focusing on tensions between data sets and degeneracies among parameters that can degrade our information or mimic the existence of extra species. 15. Attending to weak signals: the leader's challenge. Science.gov (United States) Kerfoot, Karlene 2005-12-01 Halverson and Isham (2003) quote sources reporting that the accidental death rate from simply being in a hospital is " ... 
four hundred times more likely than your risk of death from traveling by train, forty times higher than driving a car, and twenty times higher than flying in a commercial aircraft" (p. 13). High-reliability organizations such as nuclear power plants and aircraft carriers have been pioneers in the business of recognizing weak signals. Weick and Sutcliffe (2001) note that high-reliability organizations distinguish themselves from others because of their mindfulness, which enables them to see the significance of weak signals and to respond to weak signals with strong interventions. To act mindfully, these organizations have an underlying mental model of continually updating, anticipating, and focusing on the possibility of failure using the intelligence that weak signals provide. Much of what happens in health care is unexpected. However, with a culture that is continually looking for weak signals, and intervenes and rescues when these signals are detected, the unexpected happens less often. This is the epitome of how leaders can build a culture of safety that focuses on recognizing the weak signals to manage the unforeseen. 16. Extrapolating Weak Selection in Evolutionary Games Science.gov (United States) Wu, Bin; García, Julián; Hauert, Christoph; Traulsen, Arne 2013-01-01 In evolutionary games, reproductive success is determined by payoffs. Weak selection means that even large differences in game outcomes translate into small fitness differences. Many results have been derived using weak selection approximations, in which perturbation analysis facilitates the derivation of analytical results. Here, we ask whether results derived under weak selection are also qualitatively valid for intermediate and strong selection. By “qualitatively valid” we mean that the ranking of strategies induced by an evolutionary process does not change when the intensity of selection increases. 
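The weak-selection definition in the record above (large payoff differences mapping to small fitness differences) can be sketched with the common exponential payoff-to-fitness mapping f = exp(beta * payoff); this mapping and the payoffs below are illustrative assumptions, not necessarily the paper's exact setup:

```python
import math

def fitness(payoff, beta):
    """Exponential payoff-to-fitness map; beta is the selection intensity."""
    return math.exp(beta * payoff)

payoffs = (2.0, 1.0)  # made-up payoffs for two strategies

for beta in (0.01, 1.0):
    fA, fB = (fitness(p, beta) for p in payoffs)
    print(beta, round(fA / fB, 4))
# Under weak selection (beta = 0.01) the fitness ratio is ~1.01: a payoff
# gap of 1 barely matters. At beta = 1 the ratio is ~2.72 (= e).
```

Weak-selection results come from linearizing this map around beta = 0 (f ≈ 1 + beta * payoff), which is exactly the regime the record asks whether one may extrapolate beyond.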
For two-strategy games, we show that the ranking obtained under weak selection cannot be carried over to higher selection intensity if the number of players exceeds two. For games with three (or more) strategies, previous examples for multiplayer games have shown that the ranking of strategies can change with the intensity of selection. In particular, rank changes imply that the most abundant strategy at one intensity of selection can become the least abundant for another. We show that this applies already to pairwise interactions for a broad class of evolutionary processes. Even when both weak and strong selection limits lead to consistent predictions, rank changes can occur for intermediate intensities of selection. To analyze how common such games are, we show numerically that for randomly drawn two-player games with three or more strategies, rank changes frequently occur and their likelihood increases rapidly with the number of strategies. In particular, rank changes become almost certain as the number of strategies grows, which jeopardizes the predictive power of results derived for weak selection. PMID:24339769 17. Weak value distributions for spin 1/2 Science.gov (United States) Berry, M. V.; Dennis, M. R.; McRoberts, B.; Shukla, P. 2011-05-01 The simplest weak measurement is of a component of spin 1/2. For this observable, the probability distributions of the real and imaginary parts of the weak value, and their joint probability distribution, are calculated exactly for pre- and postselected states uniformly distributed over the surface of the Poincaré-Bloch sphere. The superweak probability, that the real part of the weak value lies outside the spectral range, is 1/3. This case, with just two eigenvalues, complements our previous calculation (Berry and Shukla 2010 J. Phys. A: Math. Theor. 43 354024) of the universal form of the weak value probability distribution for an operator with many eigenvalues. 18. 
Weak value distributions for spin 1/2 International Nuclear Information System (INIS) Berry, M V; Dennis, M R; McRoberts, B; Shukla, P 2011-01-01 The simplest weak measurement is of a component of spin 1/2. For this observable, the probability distributions of the real and imaginary parts of the weak value, and their joint probability distribution, are calculated exactly for pre- and postselected states uniformly distributed over the surface of the Poincare-Bloch sphere. The superweak probability, that the real part of the weak value lies outside the spectral range, is 1/3. This case, with just two eigenvalues, complements our previous calculation (Berry and Shukla 2010 J. Phys. A: Math. Theor. 43 354024) of the universal form of the weak value probability distribution for an operator with many eigenvalues. 19. Optimization of strong and weak coordinates NARCIS (Netherlands) Swart, M.; Bickelhaupt, F.M. 2006-01-01 We present a new scheme for the geometry optimization of equilibrium and transition state structures that can be used for both strong and weak coordinates. We use a screening function that depends on atom-pair distances to differentiate strong coordinates from weak coordinates. This differentiation 20. On an incompressible model in radiation hydrodynamics Czech Academy of Sciences Publication Activity Database Ducomet, B.; Nečasová, Šárka 2015-01-01 Roč. 38, č. 4 (2015), s. 765-774 ISSN 0170-4214 R&D Projects: GA ČR GA13-00522S Institutional support: RVO:67985840 Keywords : radiation hydrodynamics * incompressible Navier-Stokes-Fourier system * weak solution Subject RIV: BA - General Mathematics Impact factor: 1.002, year: 2015 http://onlinelibrary.wiley.com/doi/10.1002/mma.3107/abstract 1. The Q{sup p}{sub Weak} experiment Energy Technology Data Exchange (ETDEWEB) Androic, D. [University of Zagreb (Croatia); Armstrong, D. S. [The College of William and Mary (United States); Asaturyan, A. [Yerevan Physics Institute (Armenia); Averett, T. 
[The College of William and Mary (United States); Balewski, J. [Massachusetts Institute of Technology (United States); Beaufait, J. [Thomas Jefferson National Accelerator Facility (United States); Beminiwattha, R. S. [Ohio University (United States); Benesch, J. [Thomas Jefferson National Accelerator Facility (United States); Benmokhtar, F. [Duquesne University (United States); Birchall, J. [University of Manitoba (Canada); Carlini, R. D.; Cornejo, J. C. [The College of William and Mary (United States); Covrig, S. [Thomas Jefferson National Accelerator Facility (United States); Dalton, M. M. [University of Virginia (United States); Davis, C. A. [TRIUMF (United States); Deconinck, W. [The College of William and Mary (United States); Diefenbach, J. [Hampton University (United States); Dow, K. [Massachusetts Institute of Technology (United States); Dowd, J. F. [The College of William and Mary (United States); Dunne, J. A. [Mississippi State University (United States); and others 2013-03-15 In May 2012, the Q{sup p}{sub Weak} collaboration completed a two year measurement program to determine the weak charge of the proton Q{sub W}{sup p} = ( 1 - 4sin{sup 2}{theta}{sub W}) at the Thomas Jefferson National Accelerator Facility (TJNAF). The experiment was designed to produce a 4.0 % measurement of the weak charge, via a 2.5 % measurement of the parity violating asymmetry in the number of elastically scattered 1.165 GeV electrons from protons, at forward angles. At the proposed precision, the experiment would produce a 0.3 % measurement of the weak mixing angle at a momentum transfer of Q{sup 2} = 0.026 GeV{sup 2}, making it the most precise stand alone measurement of the weak mixing angle at low momentum transfer. In combination with other parity measurements, Q{sup p}{sub Weak} will also provide a high precision determination of the weak charges of the up and down quarks. 
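To make the suppression argument in the Q{sup p}{sub Weak} record concrete: at tree level the quoted weak charge Q{sub W}{sup p} = (1 - 4sin{sup 2}{theta}{sub W}) is numerically small, so the record's 4.0 % weak-charge goal translates into a few-per-mil determination of the mixing angle. A minimal sketch; the sin{sup 2}{theta}{sub W} value used is an illustrative assumption, not Qweak data:

```python
def qweak_proton(sin2thw):
    """Tree-level weak charge of the proton (the record's expression)."""
    return 1.0 - 4.0 * sin2thw

s2 = 0.2313  # illustrative low-energy value of sin^2(theta_W) (assumed)
print(round(qweak_proton(s2), 4))  # ~0.075: strongly suppressed

# Error propagation: Q = 1 - 4*s2  =>  delta(s2) = delta(Q)/4, so a 4%
# fractional error on Q becomes a much smaller fractional error on s2.
frac_qw = 0.04  # the record's target: a 4.0 % measurement of Q_W^p
frac_s2 = frac_qw * qweak_proton(s2) / (4.0 * s2)
print(f"{frac_s2:.2%}")  # ~0.32%, consistent with the record's quoted 0.3 %
```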
At the proposed precision, a significant deviation from the Standard Model prediction could be a signal of new physics at mass scales up to ≈ 6 TeV, whereas agreement would place new and significant constraints on possible Standard Model extensions at mass scales up to ≈ 2 TeV. This paper provides an overview of the physics and the experiment, as well as a brief look at some preliminary diagnostic and analysis data. 2. Weakly Coretractable Modules Science.gov (United States) Hadi, Inaam M. A.; Al-aeashi, Shukur N. 2018-05-01 Let R be a ring with identity and M a unitary right R-module. Here we introduce the class of weakly coretractable modules. Some basic properties are investigated and some relationships between these modules and other related ones are introduced. 3. SARIS: a tool for occupational radiation protection improvement in a Nuclear Medicine Department International Nuclear Information System (INIS) Lopez Diaz, A. 2015-01-01 Self-assessment is an organization's internal process to review its current status. The IAEA has developed the SARIS system (Self-Assessment of the Regulatory Infrastructure for Safety) with the objective of improving and encouraging compliance with the safety requirements and recommendations of the international safety standards. With the purpose of improving the effectiveness and efficiency of the occupational radiation protection structure in the Nuclear Medicine Department (from 'Hermanos Ameijeiras' Hospital), we applied 3 questionnaires of the Occupational Radiation Protection Module of SARIS. During the answering phase we provided factual responses to questions, appended all necessary documentary evidence and avoided opinions that cannot be objectively supported by evidence. In the analysis phase we identified the strengths and weaknesses, the opportunities for improvement and the risks if action is not taken. 
We sought expert opinion and made recommendations to prepare an action plan for improvement. The Cuban regulations have more strengths than weaknesses. The major weakness found was that the documentary evidence of knowledge about the legislative safety responsibilities of the management structure and workers could be improved. Upon completion of the self-assessment analysis phase, an action plan was developed to cover all the discovered weaknesses, with emphasis on improving all documentation related to radiation safety responsibilities. Responsibilities and activities were defined for the short, medium and long terms. The SARIS self-assessment tools allowed us to learn more about our organization and provided the key elements for the organization's continuous development and improvement. (Author) 4. S-parameters for weakly excited slots DEFF Research Database (Denmark) Albertsen, Niels Christian 1999-01-01 A simple approach to account for parasitic effects in weakly excited slots cut in the broad wall of a rectangular waveguide is proposed... 5. Survival and weak chaos. Science.gov (United States) Nee, Sean 2018-05-01 Survival analysis in biology and reliability theory in engineering concern the dynamical functioning of bio/electro/mechanical units. Here we incorporate effects of chaotic dynamics into the classical theory. Dynamical systems theory now distinguishes strong and weak chaos. Strong chaos generates Type II survivorship curves entirely as a result of the internal operation of the system, without any age-independent, external, random forces of mortality. Weak chaos exhibits (a) intermittency and (b) Type III survivorship, defined as a decreasing per capita mortality rate: engineering explicitly defines this pattern of decreasing hazard as 'infant mortality'. 
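The decreasing-hazard pattern named in the survival-and-weak-chaos record (Type III survivorship, engineering's "infant mortality") is conventionally illustrated with a Weibull hazard of shape k < 1; this model choice is an illustration of the pattern, not the paper's own model:

```python
def weibull_hazard(t, k, lam=1.0):
    """Weibull hazard rate h(t) = (k/lam) * (t/lam)**(k - 1)."""
    return (k / lam) * (t / lam) ** (k - 1.0)

ts = [0.5, 1.0, 2.0, 4.0]
decreasing = [weibull_hazard(t, k=0.5) for t in ts]  # Type III-like
constant = [weibull_hazard(t, k=1.0) for t in ts]    # age-independent

print(all(a > b for a, b in zip(decreasing, decreasing[1:])))  # True
print(len(set(constant)) == 1)                                 # True
```

With k < 1 the per-capita mortality rate falls with age (early "infant" failures followed by longer-lived survivors); k = 1 recovers the constant, age-independent hazard of the exponential model.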
Weak chaos generates two phenomena from the normal functioning of the same system. First, infant mortality (sensu engineering) without any external explanatory factors, such as manufacturing defects, which is followed by increased average longevity of survivors. Second, sudden failure of units during their normal period of operation, before the onset of age-dependent mortality arising from senescence. The relevance of these phenomena encompasses, for example: no-fault-found failure of electronic devices; high rates of human early spontaneous miscarriage/abortion; runaway pacemakers; sudden cardiac death in young adults; bipolar disorder; and epilepsy. 6. Enhancing QKD security with weak measurements Science.gov (United States) Farinholt, Jacob M.; Troupe, James E. 2016-10-01 Publisher's Note: This paper, originally published on 10/24/2016, was replaced with a corrected/revised version on 11/8/2016. In the late 1980s, Aharonov and colleagues developed the notion of a weak measurement of a quantum observable that does not appreciably disturb the system [1, 2]. The measurement results are conditioned on both the pre-selected and post-selected state of the quantum system. While any one measurement reveals very little information, by making the same measurement on a large ensemble of identically prepared pre- and post-selected (PPS) states and averaging the results, one may obtain what is known as the weak value of the observable with respect to that PPS ensemble. Recently, weak measurements have been proposed as a method of assessing the security of QKD in the well-known BB84 protocol [3]. This weak value augmented QKD protocol (WV-QKD) works by additionally requiring the receiver, Bob, to make a weak measurement of a particular observable prior to his strong measurement.
For the subset of measurement results in which Alice and Bob's measurement bases do not agree, the weak measurement results can be used to detect any attempt by an eavesdropper, Eve, to correlate her measurement results with Bob's. Furthermore, the well-known detector blinding attacks, which are known to perfectly correlate Eve's results with Bob's without being caught by conventional BB84 implementations, actually make the eavesdropper more visible in the new WV-QKD protocol. In this paper, we will introduce the WV-QKD protocol and discuss its generalization to the 6-state single qubit protocol. We will discuss the types of weak measurements that are optimal for this protocol, and compare the predicted performance of the 6- and 4-state WV-QKD protocols. 7. Radiation effects on ion-exchange resins. Part II. Gamma irradiation of Dowex 1 International Nuclear Information System (INIS) Kazanjian, A.R.; Horrell, D.R. 1975-01-01 The effects of gamma radiation on the anion exchange resin Dowex 1 were determined. Part I, on Dowex 50W, was reported May 10, 1974. The exchange capacity (both strong and weak base), moisture content, radiolysis products, and physical deterioration of the resin were analyzed after irradiation with doses up to 6.9 × 10⁸ rads. The resin capacity decreased approximately 50 percent after a radiation dose of 4 × 10⁸ rads. Resin irradiated when air dried in the nitrate form showed more stability than resin irradiated in 7N nitric acid (HNO3), which in turn showed more stability than resin irradiated when air dried in the chloride form. Radiation decreased the strong base capacity to a greater extent than the total capacity. The result indicates that some of the quaternary ammonium groups were transformed to secondary and tertiary amine groups that have weak base ion-exchange capability. (U.S.) 8.
Weak Deeply Virtual Compton Scattering International Nuclear Information System (INIS) Ales Psaker; Wolodymyr Melnitchouk; Anatoly Radyushkin 2006-01-01 We extend the analysis of the deeply virtual Compton scattering process to the weak interaction sector in the generalized Bjorken limit. The virtual Compton scattering amplitudes for the weak neutral and charged currents are calculated at the leading twist within the framework of the nonlocal light-cone expansion via coordinate space QCD string operators. Using a simple model, we estimate cross sections for neutrino scattering off the nucleon, relevant for future high intensity neutrino beam facilities. 9. Weakly Idempotent Lattices and Bilattices, Non-Idempotent Plonka Functions Directory of Open Access Journals (Sweden) Davidova D. S. 2015-12-01 Full Text Available In this paper, we study weakly idempotent lattices with an additional interlaced operation. We characterize interlacity of a weakly idempotent semilattice operation, using the concept of hyperidentity, and prove that a weakly idempotent bilattice with an interlaced operation is epimorphic to the superproduct with negation of two equal lattices. In the last part of the paper, we introduce the concepts of a non-idempotent Plonka function and the weakly Plonka sum, and extend the main result for algebras with the well-known Plonka function to algebras with the non-idempotent Plonka function. As a consequence, we characterize the hyperidentities of the variety of weakly idempotent lattices, using non-idempotent Plonka functions, weakly Plonka sums and a characterization of the cardinality of the sets of operations of subdirectly irreducible algebras with hyperidentities of the variety of weakly idempotent lattices. Applications of weakly idempotent bilattices in multi-valued logic are to appear. 10. Plane waves with weak singularities International Nuclear Information System (INIS) David, Justin R.
2003-03-01 We study a class of time dependent solutions of the vacuum Einstein equations which are plane waves with weak null singularities. This singularity is weak in the sense that though the tidal forces diverge at the singularity, the rate of divergence is such that the distortion suffered by a freely falling observer remains finite. Among such weakly singular plane waves there is a sub-class which does not exhibit large back reaction in the presence of test scalar probes. String propagation in these backgrounds is smooth and there is a natural way to continue the metric beyond the singularity. This continued metric admits string propagation without the string becoming infinitely excited. We construct a one parameter family of smooth metrics which are at a finite distance in the space of metrics from the extended metric, and a well defined operator in the string sigma model which resolves the singularity. (author) 11. Weak interaction: past answers, present questions International Nuclear Information System (INIS) Ne'eman, Y. 1977-02-01 A historical sketch of the weak interaction is presented. From beta ray to pion decay: the V-A theory of Marshak and Sudarshan, the CVC principle of equivalence, universality as an algebraic condition, PCAC, the renormalized weak Hamiltonian in the rehabilitation of field theory, and some current issues are considered in this review. 47 references 12. Weak measurements with a qubit meter DEFF Research Database (Denmark) Wu, Shengjun; Mølmer, Klaus 2009-01-01 We derive schemes to measure the so-called weak values of quantum system observables by coupling of the system to a qubit meter system. We highlight, in particular, the meaning of the imaginary part of the weak values, and show how it can be measured directly on equal footing with the real part... 13. On (weakly) precious rings associated to central polynomials Directory of Open Access Journals (Sweden) Hani A.
Khashan 2018-04-01 Full Text Available Let R be an associative ring with identity and let g(x) be a fixed polynomial over the center of R. We define R to be (weakly) g(x)-precious if for every element a ∈ R, there are a zero s of g(x), a unit u and a nilpotent b such that a = s + u + b (a = ±s + u + b). In this paper, we investigate many examples and properties of (weakly) g(x)-precious rings. If a and b are in the center of R with b-a a unit, we give a characterization of (weakly) (x-a)(x-b)-precious rings in terms of (weakly) precious rings. In particular, we prove that if 2 is a unit, then a ring is precious if and only if it is weakly precious. Finally, for n ∈ ℕ, we study (weakly) (xⁿ-x)-precious rings and clarify some of their properties. 14. One-loop divergences in chiral perturbation theory and right-invariant metrics on SU(3) International Nuclear Information System (INIS) Esposito-Farese, G. 1991-01-01 In the framework of chiral perturbation theory, we compute the one-loop divergences of the effective Lagrangian describing strong and non-leptonic weak interactions of pseudoscalar mesons. We use the background field method and the heat-kernel expansion, and underline the geometrical meaning of the different terms, showing how the right-invariance of the metrics on SU(3) allows one to clarify and simplify the calculations. Our results are given in terms of a minimal set of independent counterterms, and shorten previous ones in the literature, in the particular case where the electromagnetic field is the only external source considered. We also show that a geometrical construction of the effective Lagrangian at order O(p⁴) allows one to derive some relations between the finite parts of the coupling constants. These relations do not depend on the scale μ used to renormalize. (orig.) 15. Fixed points of occasionally weakly biased mappings OpenAIRE Y. Mahendra Singh, M. R. Singh 2012-01-01 Common fixed point results due to Pant et al.
[Pant et al., Weak reciprocal continuity and fixed point theorems, Ann Univ Ferrara, 57(1), 181-190 (2011)] are extended to a class of non-commuting operators called occasionally weakly biased pairs [N. Hussain, M. A. Khamsi, A. Latif, Common fixed points for JH-operators and occasionally weakly biased pairs under relaxed conditions, Nonlinear Analysis, 74, 2133-2140 (2011)]. We also provide illustrative examples to justify the improvements. 16. BUOYANCY INSTABILITIES IN A WEAKLY COLLISIONAL INTRACLUSTER MEDIUM Energy Technology Data Exchange (ETDEWEB) Kunz, Matthew W.; Stone, James M. [Department of Astrophysical Sciences, Princeton University, Peyton Hall, 4 Ivy Lane, Princeton, NJ 08544 (United States); Bogdanovic, Tamara; Reynolds, Christopher S., E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States) 2012-08-01 The intracluster medium (ICM) of galaxy clusters is a weakly collisional plasma in which the transport of heat and momentum occurs primarily along magnetic-field lines. Anisotropic heat conduction allows convective instabilities to be driven by temperature gradients of either sign: the magnetothermal instability (MTI) in the outskirts of clusters and the heat-flux buoyancy-driven instability (HBI) in their cooling cores. We employ the Athena magnetohydrodynamic code to investigate the nonlinear evolution of these instabilities, self-consistently including the effects of anisotropic viscosity (i.e., Braginskii pressure anisotropy), anisotropic conduction, and radiative cooling. We find that, in all but the innermost regions of cool-core clusters, anisotropic viscosity significantly impairs the ability of the HBI to reorient magnetic-field lines orthogonal to the temperature gradient.
Thus, while radio-mode feedback appears necessary in the central few ×10 kpc, heat conduction may be capable of offsetting radiative losses throughout most of a cool core over a significant fraction of the Hubble time. Magnetically aligned cold filaments are then able to form by local thermal instability. Viscous dissipation during cold filament formation produces accompanying hot filaments, which can be searched for in deep Chandra observations of cool-core clusters. In the case of the MTI, anisotropic viscosity leads to a nonlinear state with a folded magnetic field structure in which field-line curvature and field strength are anti-correlated. These results demonstrate that, if the HBI and MTI are relevant for shaping the properties of the ICM, one must self-consistently include anisotropic viscosity in order to obtain even qualitatively correct results. 17. Classical theory of the Kumakhov radiation in axial channeling. 1. Dipole approximation Energy Technology Data Exchange (ETDEWEB) Khokonov, M.K.; Komarov, F.F.; Telegin, V.I. 1984-05-01 The paper considers the radiation of ultrarelativistic electrons in axial channeling, initially predicted by Kumakhov. The consideration is based on the results of solving the Fokker-Planck equation. The spectral-angular characteristics of the Kumakhov radiation in thick single crystals are calculated. It is shown that in heavy single crystals the energy losses to radiation can amount to a considerable portion of the initial beam energy. The possibility of a sharp increase of the radiation due to a decrease of the crystal temperature is discussed. It is shown that the radiation intensity in axial channeling depends only weakly on the initial angle of electron entrance into the channel, provided this angle stays within the critical angle. 18.
Weakly compact operators and interpolation OpenAIRE Maligranda, Lech 1992-01-01 The class of weakly compact operators is, as well as the class of compact operators, a fundamental operator ideal. They have been investigated intensively over the last twenty years. In this survey, we have collected and ordered some of this (partly very new) knowledge. We have also included some comments, remarks and examples. 19. Policy-based benchmarking of weak heaps and their relatives DEFF Research Database (Denmark) Bruun, Asger; Edelkamp, Stefan; Katajainen, Jyrki 2010-01-01 In this paper we describe an experimental study where we evaluated the practical efficiency of three worst-case efficient priority queues: 1) a weak heap that is a binary tree fulfilling half-heap ordering, 2) a weak queue that is a forest of perfect weak heaps, and 3) a run-relaxed weak queue that... 20. Diffusion limits in a model of radiative flow Czech Academy of Sciences Publication Activity Database Ducomet, B.; Nečasová, Šárka 2015-01-01 Roč. 61, č. 1 (2015), s. 17-59 ISSN 0430-3202 R&D Projects: GA ČR GA13-00522S Institutional support: RVO:67985840 Keywords: Navier-Stokes-Fourier system * Oberbeck-Boussinesq * radiation hydrodynamics * weak solution Subject RIV: BA - General Mathematics http://link.springer.com/article/10.1007%2Fs11565-014-0214-3 1. Weak-interaction rates in stellar conditions Science.gov (United States) Sarriguren, Pedro 2018-05-01 Weak-interaction rates, including β-decay and electron captures, are studied in several mass regions at various densities and temperatures of astrophysical interest. In particular, we study odd-A nuclei in the pf-shell region, which are involved in presupernova formations.
Weak rates are relevant for understanding the late stages of stellar evolution, as well as the nucleosynthesis of heavy nuclei. The nuclear structure involved in the weak processes is studied within a quasiparticle proton-neutron random-phase approximation with residual interactions in both particle-hole and particle-particle channels, on top of a deformed Skyrme Hartree-Fock mean field with pairing correlations. First, the energy distributions of the Gamow-Teller strength are discussed and compared with the available experimental information, measured under terrestrial conditions from charge-exchange reactions. Then, the sensitivity of the weak-interaction rates to both astrophysical densities and temperatures is studied. Special attention is paid to the relative contribution to these rates of thermally populated excited states in the decaying nucleus, and to electron captures from the degenerate electron plasma. 2. Efficient quantum computing with weak measurements International Nuclear Information System (INIS) Lund, A P 2011-01-01 Projective measurements with high quantum efficiency are often assumed to be required for efficient circuit-based quantum computing. We argue that this is not the case and show that the fact that they are not required was actually known previously but was not deeply explored. We examine this issue by giving an example of how to perform the quantum order-finding algorithm efficiently using non-local weak measurements, given that the measurements used are of bounded weakness and some fixed but arbitrary probability of success less than unity is required. We also show that it is possible to perform the same computation with only local weak measurements, but this must necessarily introduce an exponential overhead. 3. Transmission electron microscopy of weakly deformed alkali halide crystals International Nuclear Information System (INIS) Strunk, H.
1976-01-01 Transmission electron microscopy (TEM) is applied to the investigation of the dislocation arrangement of [001]-orientated alkali halide crystals (orientation for quadruple slip) deformed into stage I of the work-hardening curve. The investigations pertain mainly to NaCl - (0.1-1) mole-% NaBr crystals, because these exhibit a relatively long stage I. The time available for observing the specimens is limited by the ionization radiation damage occurring in the microscope. An optimum reduction of the damage rate is achieved by a combination of several experimental techniques that are briefly outlined. The crystals deform essentially in single glide. According to the observations, stage I deformation of pure and weakly alloyed NaCl crystals is characterized by the glide of screw dislocations, which bow out between jogs and drag dislocation dipoles behind them. In crystals with ≥ 0.5 mole-% NaBr this process is not observed to occur. This is attributed to the increased importance of solid solution hardening. (orig.) 4. Nuclear beta decay and the weak interaction International Nuclear Information System (INIS) Kean, D.C. 1975-11-01 Short notes are presented on various aspects of nuclear beta decay and weak interactions, including: super-allowed transitions, parity violation, interaction strengths, coupling constants, and the current-current formalism of weak interaction. (R.L.) 5. Synchrotron radiation research International Nuclear Information System (INIS) Markus, N. 1995-01-01 In the many varied application fields of accelerators, synchrotron radiation ranks as one of the most valuable and widely useful tools. Synchrotron radiation is produced in multi-GeV electron synchrotrons and storage rings, and emerges tangentially in a narrow vertical fan. Synchrotron radiation has been used extensively for basic studies and, more recently, for applied research in the chemical, materials, biotechnology and pharmaceutical industries.
Initially, the radiation was a byproduct of high-energy physics laboratories, but the high demand soon resulted in the construction of dedicated electron storage rings. The accelerator technology is now well developed and a large number of sources have been constructed, with energies ranging from about 1.5 to 8 GeV, including the 6 GeV European Synchrotron Radiation Facility (ESRF) source at Grenoble, France. A modern third-generation synchrotron radiation source has an electron storage ring with a complex magnet lattice to produce ultra-low emittance beams, long straights for 'insertion devices', and 'undulator' or 'wiggler' magnets to generate radiation with particular properties. Large beam currents are necessary to give high radiation fluxes, and long beam lifetimes require ultra-high vacuum systems. Industrial synchrotron radiation research programmes use either X-ray diffraction or spectroscopy to determine the structures of a wide range of materials. Biological and pharmaceutical applications study the functions of various proteins. With this knowledge, it is possible to design molecules to change protein behaviour for pharmaceuticals, or to configure more active proteins, such as enzymes, for industrial processes. Recent advances in molecular biology have resulted in a large increase in protein crystallography studies, with researchers using crystals which, although small and weakly diffracting, benefit from the high intensity. Examples with commercial significance include the study of ... 6. Reversible brachial plexopathy following primary radiation therapy for breast cancer International Nuclear Information System (INIS) Salner, A.L.; Botnick, L.E.; Herzog, A.G.; Goldstein, M.A.; Harris, J.R.; Levene, M.B.; Hellman, S. 1981-01-01 Reversible brachial plexopathy has occurred with very low incidence in patients with breast carcinoma treated definitively with radiation therapy.
Of 565 patients treated between January 1968 and December 1979 with moderate doses of supervoltage radiation therapy (average axillary dose of 5000 rad in 5 weeks), eight patients (1.4%) developed the characteristic symptoms at a median time of 4.5 months after radiation therapy. This syndrome consists of paresthesias in all patients, with weakness and pain less commonly seen. The symptom complex differs from other previously described brachial plexus syndromes, including paralytic brachial neuritis, radiation-induced injury, and carcinoma. A possible relationship to adjuvant chemotherapy exists, though the etiology is not well understood. The cases described demonstrate temporal clustering. Resolution is always seen. 7. Singular limits in a model of radiative flow Czech Academy of Sciences Publication Activity Database Ducomet, B.; Nečasová, Šárka 2015-01-01 Roč. 17, č. 2 (2015), s. 341-380 ISSN 1422-6928 R&D Projects: GA ČR GA13-00522S Institutional support: RVO:67985840 Keywords: radiation hydrodynamics * Navier-Stokes-Fourier system * weak solution Subject RIV: BA - General Mathematics Impact factor: 1.023, year: 2015 http://link.springer.com/article/10.1007%2Fs00021-015-0204-y 9.
Stellar explosion in the weak field approximation of the Brans-Dicke theory International Nuclear Information System (INIS) Hamity, Victor H; Barraco, Daniel E 2005-01-01 We treat a very crude model of an exploding star, in the weak field approximation of the Brans-Dicke theory, in a scenario that resembles some characteristic data of a type Ia supernova. The most noticeable feature, in the electromagnetic component, is the relationship between the absolute magnitude at maximum brightness of the star and the decline rate in one magnitude from that maximum. This characteristic has become one of the most accurate methods to measure luminosity distances to objects at cosmological distances (Phillips M M 1993 Astrophys. J. 413 L105; see www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Supernova for a brief description of supernova types). An interesting result is that the active mass associated with the scalar field is totally radiated to infinity, representing a mass loss in the ratio of the 'tensor' component to the scalar component of 1 to (2ω + 3) (ω is the Brans-Dicke parameter), in agreement with a general result of Hawking (1972 Commun. Math. Phys. 25 167). This model thus shows explicitly, in a dynamical case, the mechanism of the radiation of a scalar field, which is necessary to understand the Hawking result. 10.
Management information system on radiation protection Energy Technology Data Exchange (ETDEWEB) Grossi, Pablo Andrade; Souza, Leonardo Soares de; Figueiredo, Geraldo Magela, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil) 2011-07-01 Considering the flux complexity and the multi-source information of all radiation protection activities in nuclear organizations, an effective management information system based on technology, information and people is necessary to improve safety in all processes and operations subject to radiation risks. An effective management information system is an essential tool to highlight strengths and weaknesses and to identify behaviors and trends in the activities requiring radiation protection programs. Such knowledge is useful for reaching effective management and supporting human decision-making in a nuclear organization. This paper presents a management information system based on Brazilian directives and regulations on radiation protection. Due to its generic characteristics, this radiation protection control system can be implemented in any nuclear organization by re-editing the non-restricted parameters, which may differ among the facilities and laboratories expected on-site with their diverse technology applications. This system can be considered a powerful tool for the continuous management of radiation protection activities in nuclear organizations and research institutes, as well as for long-term planning, indicating not only how the safety activities are going, but also why they are not going as well as planned, where that is the case. (author) 12. Fermi and the Theory of Weak Interactions Indian Academy of Sciences (India) IAS Admin Quantum Field Theory created by Dirac and used by Fermi to describe weak ... of classical electrodynamics (from which the electric field and magnetic field can be obtained .... Universe. However, thanks to weak interactions, this can be done. 13.
Radiation and human health International Nuclear Information System (INIS) Sagan, L. 1979-01-01 The growing controversy over low-level radiation risks stems from challenges to the assumptions that had been made about dose rate, linear theory, and total-body exposure. The debate has focused on whether there is a risk threshold or whether linearity overstates low-level risks, both theories being consistent with the available data. The generally accepted consensus on the risks of cancer and genetic effects is examined for several occupational groups and compared with recent studies which use different methodologies and reach conflicting conclusions. The weaknesses of the more recent studies are noted and the conclusion is reached that adherence to the earlier assumptions is probably the conservative position. Since the relationship of exposure to small doses of carcinogenic agents to cancer is a weak link and difficult to verify statistically, the prognosis is poor for resolving the question of linearity. It is important that the costs of reducing the risks be carefully balanced with the available evidence. 14. SIMULATION OF SUBGRADE EMBANKMENT ON WEAK BASE Directory of Open Access Journals (Sweden) V. D. Petrenko 2015-08-01 Purpose. The stability of the subgrade on a weak base is considered in this paper, and the use of the jet grouting method is proposed. The aims are to investigate whether a weak base affects the overall deformation of the subgrade, and to identify and optimize the parameters of the subgrade on the basis of studies using numerical simulation. Methodology. Theoretical studies of the stress-strain state of the base and subgrade embankment, by modeling in the LIRA software package, have been conducted to achieve this goal. Findings.
After making the necessary calculations, fields of subsidence, the boundaries of the compressed thickness, and the Winkler and Pasternak bed coefficients are constructed. Diagrams of vertical stress can be constructed at any point of load application. The software system can also be used to assess the subsidence and tilt of railroad tracks on natural and consolidated bases. Originality. For weak soils, the most appropriate approach is a nonlinear model of the base with regions of both elastic and limit equilibrium, that is, a mixed problem of the theory of elasticity and plasticity. Practical value. When the load on a weak base increases as a result of second-track construction, embankment additions, or higher axial loads from new rolling stock, the process of sedimentation and consolidation may resume. Therefore, one of the feasible and promising options for the design and reconstruction of embankments on weak bases is to strengthen the bases with the help of jet grouting. With the expansion of railway infrastructure and the increasing speed and weight of rolling stock, it is necessary to ensure the stability of the subgrade on weak bases. The LIRA software package allows all the necessary calculations to be performed for selecting a proper way of strengthening weak bases. 15. (Weakly) three-dimensional caseology International Nuclear Information System (INIS) Pomraning, G.C. 1996-01-01 The singular eigenfunction technique of Case for solving one-dimensional planar symmetry linear transport problems is extended to a restricted class of three-dimensional problems. This class involves planar geometry, but with forcing terms (either boundary conditions or internal sources) which are weakly dependent upon the transverse spatial variables. Our analysis involves a singular perturbation about the classic planar analysis, and leads to the usual Case discrete and continuum modes, but modulated by weakly dependent three-dimensional spatial functions.
These functions satisfy parabolic differential equations, with a different diffusion coefficient for each mode. Representative one-speed time-independent transport problems are solved in terms of these generalised Case eigenfunctions. Our treatment is very heuristic, but may provide an impetus for more rigorous analysis. (author) 16. Nonlinear waves and weak turbulence CERN Document Server Zakharov, V E 1997-01-01 This book is a collection of papers on dynamical and statistical theory of nonlinear wave propagation in dispersive conservative media. Emphasis is on waves on the surface of an ideal fluid and on Rossby waves in the atmosphere. Although the book deals mainly with weakly nonlinear waves, it is more than simply a description of standard perturbation techniques. The goal is to show that the theory of weakly interacting waves is naturally related to such areas of mathematics as Diophantine equations, differential geometry of waves, Poincaré normal forms, and the inverse scattering method. 17. Testing the weak gravity-cosmic censorship connection Science.gov (United States) Crisford, Toby; Horowitz, Gary T.; Santos, Jorge E. 2018-03-01 A surprising connection between the weak gravity conjecture and cosmic censorship has recently been proposed. In particular, it was argued that a promising class of counterexamples to cosmic censorship in four-dimensional Einstein-Maxwell-Λ theory would be removed if charged particles (with sufficient charge) were present. We test this idea and find that indeed if the weak gravity conjecture is true, one cannot violate cosmic censorship this way. Remarkably, the minimum value of charge required to preserve cosmic censorship appears to agree precisely with that proposed by the weak gravity conjecture. 18. A Centennial Episode of Weak East Asian Summer Monsoon in the Midst of the Medieval Warming Science.gov (United States) Jin, C.; Liu, J.; Wang, B.; Wang, Z.; Yan, M. 
2017-12-01 Recent paleo-proxy evidence suggests that the East Asian summer monsoon (EASM) was generally strong (i.e., northern China wet and southern China dry) during the Medieval Warm Period (MWP, 9th to the mid-13th century); however, there was a centennial period (around the 11th century) during which the EASM was weak. This study aims to explore the causes of this centennial weak EASM episode and, in general, what controls the centennial variability of the EASM in the pre-industrial period of AD 501-1850. With the Community Earth System Model (CESM), a suite of control and forced experiments was conducted for the past 2000 years. The model run with all external forcings simulates a warm period of East Asia from AD 801-1250 with generally increased summer mean precipitation over northern East Asia; however, during the 11th century (roughly from AD 980 to AD 1100), the EASM is significantly weaker than in the other periods of the MWP. We find that on the multi-decadal to centennial time scale, a strong EASM is associated with a La Niña-like Indo-Pacific warming, and the opposite is also true. This sea surface temperature (SST) anomaly pattern represents the leading EOF mode of centennial SST variations, and it is primarily forced by solar radiation and volcanic activity, whereas land use/land cover and greenhouse gases as well as internal dynamics play a negligible role. During the MWP, the solar forcing plays a dominant role in supporting the SST variation, as the volcanic activity is weak. The weakening of the EASM during AD 980-1100 is attributed to the relatively low solar radiation, which leads to a prevailing El Niño-like Indo-Pacific cooling with the strongest cooling occurring in the equatorial western Pacific. 
The suppressed convection over the equatorial western Pacific directly induces a Philippine Sea anticyclone anomaly, which increases southern China precipitation while suppressing Philippine Sea precipitation, exciting a meridional teleconnection that 19. The effect of an accretion disk on coherent pulsed emission from weakly magnetized neutron stars International Nuclear Information System (INIS) Asaoka, Ikuko; Hoshi, Reiun. 1989-01-01 Using a simple model for hot spots formed on the magnetic polar regions, we calculate the X-ray pulse profiles expected from bright low-mass X-ray binaries. We assume that neutron stars in close binary systems are surrounded by accretion disks extending down to the vicinity of their surfaces. Even partial eclipses of a hot spot by the accretion disk change the coherent pulsed fraction and, in some cases, the phase of pulsations by almost 180°. Coherent pulsations are clearly seen even for sufficiently compact model neutron stars if the hot spots emit isotropic or fan-beam radiation. In the case of pencil-beam radiation, coherent pulsations are also seen if the cap-opening angle is less than ∼60°, while the inclination angle is larger than 68°. Gravitational lensing alone does not smear coherent pulsations in moderately weakly magnetized neutron stars in the presence of an absorbing accretion disk. (author) 20. Detailed spectra of high-power broadband microwave radiation from interactions of relativistic electron beams with weakly magnetized plasmas International Nuclear Information System (INIS) Kato, K.G.; Benford, G.; Tzach, D. 1983-01-01 Prodigious quantities of microwave energy distributed uniformly across a wide frequency band are observed when a relativistic electron beam (REB) penetrates a plasma. Typical measured values are 20 MW total for Δν ≈ 40 GHz, with preliminary observations of bandwidths as large as 100 GHz. An intense annular pulsed REB (I ≈ 128 kA; r ≈ 3 cm; Δr ≈ 1 cm; 50 nsec FWHM; γ ≈ 3) is sent through an unmagnetized or weakly magnetized plasma column (n_plasma ≈ 10^13 cm^-3). Beam-to-plasma densities of 0.01 > ω_p and weak harmonic structure is wholly unanticipated from Langmuir scattering or soliton collapse models. A model of Compton-like boosting of ambient plasma waves by the beam electrons, with collateral emission of high-frequency photons, qualitatively explains these spectra. Power emerges largely in an angle ≈ 1/γ, as required by Compton mechanisms. As n_b/n_p falls, ω_p–2ω_p structure and harmonic power ratios consistent with soliton collapse theories appear. With further reduction of n_b/n_p only the ω_p line persists 1. Precision cosmology with weak gravitational lensing Science.gov (United States) Hearin, Andrew P In recent years, cosmological science has developed a highly predictive model for the universe on large scales that is in quantitative agreement with a wide range of astronomical observations. While the number and diversity of successes of this model provide great confidence that our general picture of cosmology is correct, numerous puzzles remain. In this dissertation, I analyze the potential of planned and near-future galaxy surveys to provide new understanding of several unanswered questions in cosmology, and address some of the leading challenges to this observational program. In particular, I study an emerging technique called cosmic shear, the weak gravitational lensing produced by large-scale structure. I focus on developing strategies to optimally use the cosmic shear signal observed in galaxy imaging surveys to uncover the physics of dark energy and the early universe. In chapter 1 I give an overview of a few unsolved mysteries in cosmology and I motivate weak lensing as a cosmological probe. 
I discuss the use of weak lensing as a test of general relativity in chapter 2 and assess the threat to such tests presented by our uncertainty in the physics of galaxy formation. Interpreting the cosmic shear signal requires knowledge of the redshift distribution of the lensed galaxies. This redshift distribution will be significantly uncertain since it must be determined photometrically. In chapter 3 I investigate the influence of photometric redshift errors on our ability to constrain dark energy models with weak lensing. The ability to study dark energy with cosmic shear is also limited by the imprecision in our understanding of the physics of gravitational collapse. In chapter 4 I present the stringent calibration requirements on this source of uncertainty. I study the potential of weak lensing to resolve a debate over a long-standing anomaly in CMB measurements in chapter 5. Finally, in chapter 6 I summarize my findings and conclude with a brief discussion of my 2. New weak keys in simplified IDEA Science.gov (United States) Hafman, Sari Agustini; Muhafidzah, Arini 2016-02-01 Simplified IDEA (S-IDEA) is a simplified version of the International Data Encryption Algorithm (IDEA) and a useful teaching tool to help students understand IDEA. In 2012, Muryanto and Hafman found a weak key class in S-IDEA by using differential characteristics in one round (0, ν, 0, ν) → (0, 0, ν, ν) on the first round to produce the input difference (0, 0, ν, ν) on the fifth round. Because Muryanto and Hafman only used three one-round differential characteristics, we conducted research to find new one-round differential characteristics and used them to produce new weak key classes of S-IDEA. To find new one-round differential characteristics of S-IDEA, we applied a multiplication mod 2^16 + 1 on the input difference and combinations of the active subkeys Z1, Z4, Z5, Z6. 
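The multiplication modulo 2^16 + 1 mentioned here is the standard IDEA group operation on 16-bit words, in which the all-zero word stands for the value 2^16. A minimal sketch of that operation (an illustration of the well-known definition, not the authors' code):

```python
def idea_mul(a: int, b: int) -> int:
    """Multiplication modulo 2**16 + 1 on 16-bit words, IDEA-style.

    The word 0 represents the value 2**16, so the operation is a true
    group operation on {1, ..., 2**16}; 65537 is prime, so every word
    has a multiplicative inverse.
    """
    x = a if a else 1 << 16
    y = b if b else 1 << 16
    p = (x * y) % 65537
    return 0 if p == (1 << 16) else p
```

For example, `idea_mul(2, 32768)` gives `0`, since 2 · 2^15 = 2^16, which is encoded as the zero word.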
New classes of weak keys are obtained by combining all of these characteristics and using them to construct two new full-round differential characteristics of S-IDEA, with or without the 4th-round subkey. In this research, we found six new one-round differential characteristics and combined them to construct two new full-round differential characteristics of S-IDEA. When the two new full-round differential characteristics are used and the 4th-round subkey is required, we obtain two new classes of weak keys, of size 2^13 and 2^8. When the two new full-round differential characteristics are used but the 4th-round subkey is not required, the weak key class of 2^13 grows to 2^21 and that of 2^8 to 2^10. A membership test cannot be applied to recover the key bits in those weak key classes. The recovery of those unknown key bits can only be done by a brute-force attack. The simulation results indicate that the key bits can be recovered with a longest computation time of 0.031 ms. 3. Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability Science.gov (United States) Kar, Soummya; Moura, José M. F. 2011-04-01 The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. 
the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying, as a random dynamical system, the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph. 4. A typical R and D center on radiation processing in China International Nuclear Information System (INIS) Sun Jlazhen 1995-01-01 The industrial application of radiation processing has been growing rapidly in China since the 1980s in such fields as the production of radiation-crosslinked heat-shrinkable materials and radiation-crosslinked wires and cables, the sterilization of medical products, and so on. Changchun Institute of Applied Chemistry (CIAC), as one of the earliest organizations in which fundamental research on polymer radiation chemistry has been carried out, has played a major role in the R and D of the radiation processing of polymers. From the R and D activities at CIAC, it can be seen how a Chinese research institute transfers the results of its fundamental research into large-scale production. The improvement of the weaknesses of fluoropolymers by radiation crosslinking, the modification of the Charlesby-Pinner equation that describes the relationship of sol fraction to irradiation dose, radiation grafting for preparing functional materials, the radiation crosslinking of polymer blends, and the effect of antioxidants on the radiation crosslinking of polymers are reported. Applied research and radiation processing at CIAC are also described. (K.I.) 5. 
A ferromagnetic chain in a random weak field Science.gov (United States) Avgin, I. 1996-10-01 The harmonic magnon modes in a Heisenberg ferromagnetic chain in a random weak field are studied. The Lyapunov exponent for the uniform ( k = 0) mode is computed using the coherent potential approximation (CPA) in the weak-disorder limit. The CPA results are compared with the numerical and weak-disorder expansions of various random systems. We have found that the inverse localization length and the integrated density of states have anomalous power law behaviour as reported earlier. The CPA also reproduces the dispersion law for the same system, calculated by Pimentel and Stinchcombe using the real space renormalization scaling technique. A brief comment is also made for the uniform weak-field case. 6. Radiative muon capture on hydrogen International Nuclear Information System (INIS) Bertl, W.; Ahmad, S.; Chen, C.Q.; Gumplinger, P.; Hasinoff, M.D.; Larabee, A.J.; Sample, D.G.; Schott, W.; Wright, D.H.; Armstrong, D.S.; Blecher, M.; Azuelos, G.; Depommier, P.; Jonkmans, G.; Gorringe, T.P.; Henderson, R.; Macdonald, J.A.; Poutissou, J.M.; Poutissou, R.; Von Egidy, T.; Zhang, N.S.; Robertson, B.D. 1992-01-01 The radiative capture of negative muons by protons can be used to measure the weak induced pseudoscalar form factor. Brief arguments why this method is preferable to ordinary muon capture are given followed by a discussion of the experimental difficulties. The solution to these problems as attempted by experiment no. 452 at TRIUMF is presented together with preliminary results from the first run in August 1990. An outlook on the expected final precision and the experimental schedule is also given. (orig.) 7. Weak-interaction contributions to hyperfine splitting and Lamb shift International Nuclear Information System (INIS) Eides, M.I. 1996-01-01 Weak-interaction contributions to hyperfine splitting and the Lamb shift in hydrogen and muonium are discussed. 
The problem of sign of the weak-interaction contribution to HFS is clarified, and simple physical arguments that make this sign evident are presented. It is shown that weak-interaction contributions to HFS in hydrogen and muonium have opposite signs. A weak-interaction contribution to the Lamb shift is obtained. copyright 1996 The American Physical Society 8. Radiation effects on organic materials in nuclear plants. Final report International Nuclear Information System (INIS) Bruce, M.B.; Davis, M.V. 1981-11-01 A literature search was conducted to identify information useful in determining the lowest level at which radiation causes damage to nuclear plant equipment. Information was sought concerning synergistic effects of radiation and other environmental stresses. Organic polymers are often identified as the weak elements in equipment. Data on radiation effects are summarized for 50 generic name plastics and 16 elastomers. Coatings, lubricants, and adhesives are treated as separate groups. Inorganics and metallics are considered briefly. With a few noted exceptions, these are more radiation resistant than organic materials. Some semiconductor devices and electronic assemblies are extremely sensitive to radiation. Any damage threshold including these would be too low to be of practical value. With that exception, equipment exposed to less than 10^4 rads should not be significantly affected. Equipment containing no Teflon should not be significantly affected by 10^5 rads. Data concerning synergistic effects and radiation sensitization are discussed. The authors suggest correlations between the two effects. 9. Scattering of point particles by black holes: Gravitational radiation Science.gov (United States) Hopper, Seth; Cardoso, Vitor 2018-02-01 Gravitational waves can teach us not only about sources and the environment where they were generated, but also about the gravitational interaction itself. 
Here we study the features of gravitational radiation produced during the scattering of a pointlike mass by a black hole. Our results are exact (to numerical error) at any order in a velocity expansion, and are compared against various approximations. At large impact parameter and relatively small velocities our results agree to within percent level with various post-Newtonian and weak-field results. Further, we find good agreement with scaling predictions in the weak-field/high-energy regime. Lastly, we achieve striking agreement with zero-frequency estimates. 10. OBSERVATIONS OF ENHANCED RADIATIVE GRAIN ALIGNMENT NEAR HD 97300 International Nuclear Information System (INIS) Andersson, B-G; Potter, S. B. 2010-01-01 We have obtained optical multi-band polarimetry toward sightlines through the Chamaeleon I cloud, particularly in the vicinity of the young B9/A0 star HD 97300. We show, in agreement with earlier studies, that the radiation field impinging on the cloud in the projected vicinity of the star is dominated by the flux from the star, as evidenced by a local enhancement in the grain heating. By comparing the differential grain heating with the differential change in the location of the peak of the polarization curve, we show that the grain alignment is enhanced by the increase in the radiation field. We also find a weak, but measurable, variation in the grain alignment with the relative angle between the radiation field anisotropy and the magnetic field direction. Such an anisotropy in the grain alignment is consistent with a unique prediction of modern radiative alignment torque theory and provides direct support for radiatively driven grain alignment. 11. Cosmology and the weak interaction International Nuclear Information System (INIS) Schramm, D.N. 1989-12-01 The weak interaction plays a critical role in modern Big Bang cosmology. This review will emphasize two of its most publicized cosmological connections: Big Bang nucleosynthesis and Dark Matter. 
The first of these is connected to the cosmological prediction of Neutrino Flavours, N_ν ∼ 3, which is now being confirmed at SLC and LEP. The second is interrelated to the whole problem of galaxy and structure formation in the universe. This review will demonstrate the role of the weak interaction both for dark matter candidates and for the problem of generating seeds to form structure. 87 refs., 3 figs., 5 tabs. 12. Cosmology and the weak interaction Energy Technology Data Exchange (ETDEWEB) Schramm, D.N. (Fermi National Accelerator Lab., Batavia, IL (USA)):(Chicago Univ., IL (USA)) 1989-12-01 The weak interaction plays a critical role in modern Big Bang cosmology. This review will emphasize two of its most publicized cosmological connections: Big Bang nucleosynthesis and Dark Matter. The first of these is connected to the cosmological prediction of Neutrino Flavours, N_ν ∼ 3, which is now being confirmed at SLC and LEP. The second is interrelated to the whole problem of galaxy and structure formation in the universe. This review will demonstrate the role of the weak interaction both for dark matter candidates and for the problem of generating seeds to form structure. 87 refs., 3 figs., 5 tabs. 13. Weak disorder in Fibonacci sequences Energy Technology Data Exchange (ETDEWEB) Ben-Naim, E [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Krapivsky, P L [Department of Physics and Center for Molecular Cybernetics, Boston University, Boston, MA 02215 (United States) 2006-05-19 We study how weak disorder affects the growth of the Fibonacci series. We introduce a family of stochastic sequences that grow by the normal Fibonacci recursion with probability 1 − ε, but follow a different recursion rule with a small probability ε. 
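The noisy recursion just described is straightforward to simulate. The sketch below estimates the Lyapunov exponent (typical growth rate) numerically, using a simple "copying" step (x[k+1] = x[k]) as the rare alternative rule; this particular choice is illustrative, not necessarily the paper's exact model:

```python
import math
import random

def lyapunov_fib(eps: float, n: int = 200_000, seed: int = 1) -> float:
    """Estimate the Lyapunov exponent of a noisy Fibonacci sequence.

    With probability 1 - eps the usual step x[k+1] = x[k] + x[k-1] is
    taken; with probability eps a rare 'copying' step x[k+1] = x[k].
    Working with the ratio r[k] = x[k] / x[k-1] keeps numbers bounded:
    a Fibonacci step maps r -> 1 + 1/r, a copying step maps r -> 1.
    """
    rng = random.Random(seed)
    r = 1.0
    log_sum = 0.0
    for _ in range(n):
        r = 1.0 if rng.random() < eps else 1.0 + 1.0 / r
        log_sum += math.log(r)  # each step multiplies x by the factor r
    return log_sum / n
```

With eps = 0 the estimate converges to ln φ ≈ 0.4812 (φ the golden ratio), and a small eps > 0 lowers the typical growth rate.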
We focus on the weak disorder limit and obtain the Lyapunov exponent that characterizes the typical growth of the sequence elements, using perturbation theory. The limiting distribution for the ratio of consecutive sequence elements is obtained as well. A number of variations to the basic Fibonacci recursion including shift, doubling and copying are considered. (letter to the editor) 14. Nonperturbative theory of weak pre- and post-selected measurements Energy Technology Data Exchange (ETDEWEB) Kofman, Abraham G., E-mail: [email protected]; Ashhab, Sahel; Nori, Franco 2012-11-01 This paper starts with a brief review of the topic of strong and weak pre- and post-selected (PPS) quantum measurements, as well as weak values, and afterwards presents original work. In particular, we develop a nonperturbative theory of weak PPS measurements of an arbitrary system with an arbitrary meter, for arbitrary initial states of the system and the meter. New and simple analytical formulas are obtained for the average and the distribution of the meter pointer variable. These formulas hold to all orders in the weak value. In the case of a mixed preselected state, in addition to the standard weak value, an associated weak value is required to describe weak PPS measurements. In the linear regime, the theory provides the generalized Aharonov–Albert–Vaidman formula. Moreover, we reveal two new regimes of weak PPS measurements: the strongly-nonlinear regime and the inverted region (the regime with a very large weak value), where the system-dependent contribution to the pointer deflection decreases with increasing the measurement strength. The optimal conditions for weak PPS measurements are obtained in the strongly-nonlinear regime, where the magnitude of the average pointer deflection is equal or close to the maximum. This maximum is independent of the measurement strength, being typically of the order of the pointer uncertainty. 
In the optimal regime, the small parameter of the theory is comparable to the overlap of the pre- and post-selected states. We show that the amplification coefficient in the weak PPS measurements is generally a product of two qualitatively different factors. The effects of the free system and meter Hamiltonians are discussed. We also estimate the size of the ensemble required for a measurement and identify optimal and efficient meters for weak measurements. Exact solutions are obtained for a certain class of the measured observables. These solutions are used for numerical calculations, the results of which agree with the theory 15. Nonperturbative theory of weak pre- and post-selected measurements International Nuclear Information System (INIS) Kofman, Abraham G.; Ashhab, Sahel; Nori, Franco 2012-01-01 This paper starts with a brief review of the topic of strong and weak pre- and post-selected (PPS) quantum measurements, as well as weak values, and afterwards presents original work. In particular, we develop a nonperturbative theory of weak PPS measurements of an arbitrary system with an arbitrary meter, for arbitrary initial states of the system and the meter. New and simple analytical formulas are obtained for the average and the distribution of the meter pointer variable. These formulas hold to all orders in the weak value. In the case of a mixed preselected state, in addition to the standard weak value, an associated weak value is required to describe weak PPS measurements. In the linear regime, the theory provides the generalized Aharonov–Albert–Vaidman formula. Moreover, we reveal two new regimes of weak PPS measurements: the strongly-nonlinear regime and the inverted region (the regime with a very large weak value), where the system-dependent contribution to the pointer deflection decreases with increasing the measurement strength. 
The optimal conditions for weak PPS measurements are obtained in the strongly-nonlinear regime, where the magnitude of the average pointer deflection is equal or close to the maximum. This maximum is independent of the measurement strength, being typically of the order of the pointer uncertainty. In the optimal regime, the small parameter of the theory is comparable to the overlap of the pre- and post-selected states. We show that the amplification coefficient in the weak PPS measurements is generally a product of two qualitatively different factors. The effects of the free system and meter Hamiltonians are discussed. We also estimate the size of the ensemble required for a measurement and identify optimal and efficient meters for weak measurements. Exact solutions are obtained for a certain class of the measured observables. These solutions are used for numerical calculations, the results of which agree with the theory 16. On Hardy's paradox, weak measurements, and multitasking diagrams International Nuclear Information System (INIS) Meglicki, Zdzislaw 2011-01-01 We discuss Hardy's paradox and weak measurements by using multitasking diagrams, which are introduced to illustrate the progress of quantum probabilities through the double interferometer system. We explain how Hardy's paradox is avoided and elaborate on the outcome of weak measurements in this context. -- Highlights: → Hardy's paradox explained and eliminated. → Weak measurements: what is really measured? → Multitasking diagrams: introduced and used to discuss quantum mechanical processes. 17. An Experimental Concept for Probing Nonlinear Physics in Radiation Belts Science.gov (United States) Crabtree, C. E.; Ganguli, G.; Tejero, E. M.; Amatucci, B.; Siefring, C. L. 2017-12-01 A sounding rocket experiment, Space Measurement of Rocket-Released Turbulence (SMART), can be used to probe the nonlinear response to a known stimulus injected into the radiation belt. 
Release of high-speed neutral barium atoms (8–10 km/s) generated by a shaped-charge explosion in the ionosphere can be used as the source of free energy to seed weak turbulence in the ionosphere. The Ba atoms are photo-ionized, forming a ring velocity distribution of heavy Ba+ that is known to generate lower hybrid waves. Induced nonlinear scattering will convert the lower hybrid waves into EM whistler/magnetosonic waves. The escape of the whistlers from the ionospheric region into the radiation belts has been studied and their observable signatures quantified. The novelty of the SMART experiment is to make coordinated measurements of the cause and effect of the turbulence in space plasmas and from that to deduce the role of nonlinear scattering in the radiation belts. The sounding rocket will carry a Ba release module and an instrumented daughter section that includes vector wave magnetic and electric field sensors, Langmuir probes and energetic particle detectors. The goal of these measurements is to determine the whistler and lower hybrid wave amplitudes and spectrum in the ionospheric source region and look for precipitated particles. The Ba release may occur at 600-700 km near apogee. Ground-based cameras and radio diagnostics can be used to characterize the Ba and Ba+ release. The Van Allen Probes can be used to detect the propagation of the scattering-generated whistler waves and their effects in the radiation belts. By detecting whistlers and measuring their energy density in the radiation belts, the SMART mission will confirm the nonlinear generation of whistlers through scattering of lower hybrid waves, along with other nonlinear responses of the radiation belts and their connection to weak turbulence. 18. Hunting the weak bosons International Nuclear Information System (INIS) Anon. 1979-01-01 The possibility of the production of weak bosons in the proton-antiproton colliding beam facilities currently being developed is discussed. 
The production, decay and predicted properties of these particles are described. (W.D.L.). 19. Measures of weak noncompactness, nonlinear Leray-Schauder ... African Journals Online (AJOL) In this paper, we establish some new nonlinear Leray-Schauder alternatives for the sum and the product of weakly sequentially continuous operators in Banach algebras satisfying a certain sequential condition (P). The main condition in our results is formulated in terms of axiomatic measures of weak noncompactness. 20. Late-onset radiation-induced vasculopathy and stroke in a child with medulloblastoma. Science.gov (United States) Bansal, Lalit R; Belair, Jeffrey; Cummings, Dana; Zuccoli, Giulio 2015-05-01 We report a case of a 15-year-old boy who presented to our institution with left-sided weakness and slurred speech. He had a history of medulloblastoma diagnosed at 3 years of age, status post surgical resection and craniospinal radiation. Magnetic resonance imaging (MRI) of the brain revealed a right paramedian pontine infarction, suspected to be secondary to late-onset radiation-induced vasculopathy of the vertebrobasilar system. Radiation to the brain is associated with an increased incidence of ischemic stroke. Clinicians should have a high index of suspicion for stroke when these patients present with new neurologic symptoms. © The Author(s) 2014. 1. Evaluation of awareness on radiation protection and knowledge about radiological examinations in healthcare professionals who use ionized radiation at work. Science.gov (United States) Yurt, Ayşegül; Cavuşoğlu, Berrin; Günay, Türkan 2014-06-01 In this study, we evaluated the knowledge, perception, and mitigation of hazards involved in radiological examinations, focusing on healthcare personnel who are not in radiation-related occupations but who use ionising radiation as a part of their work. 
A questionnaire was applied to physicians, nurses, technicians and other staff working in different clinics that use radiation in their work, in order to evaluate their knowledge levels about ionizing radiation and their awareness of radiation doses resulting from radiological examinations. The statistical comparisons between the groups were analyzed with the Kruskal-Wallis test using the SPSS program. Ninety-two participants took part in the study. Their level of knowledge about ionizing radiation and doses in radiological examinations was found to be very weak. The numbers of correct answers of the physician, nurse, medical technician and other personnel groups were 15.7±3.7, 13.0±4.0, 10.1±2.9 and 11.8±4.0, respectively. In the statistical comparison between the groups, the level of knowledge of physicians was found to be significantly higher than that of the other groups (p=0.005). The present study demonstrated that general knowledge in relation to radiation, radiation protection, health risks and doses used for radiological applications is insufficient among health professionals working with ionizing radiation. 2. Weak localization in few-layer black phosphorus International Nuclear Information System (INIS) Du, Yuchen; Neal, Adam T; Zhou, Hong; Ye, Peide D 2016-01-01 We have conducted a comprehensive investigation into the magneto-transport properties of few-layer black phosphorus in terms of phase coherence length, phase coherence time, and mobility via weak localization measurement and Hall-effect measurement. We present magnetoresistance data showing the weak localization effect in bare p-type few-layer black phosphorus and reveal its strong dependence on temperature and carrier concentration. The measured weak localization agrees well with the Hikami–Larkin–Nagaoka model, and the extracted phase coherence length is 104 nm at 350 mK, decreasing as ∼T^(−0.513±0.053) with increasing temperature. 
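An exponent like −0.513 ± 0.053 is typically extracted as the slope of a straight-line fit of ln L_φ against ln T. A self-contained illustration on synthetic data (the temperatures, prefactor, and noise level below are invented for the demo, not the paper's measurements):

```python
import math
import random

def fit_power_law(T, L):
    """Least-squares fit of ln L against ln T.

    Returns (slope, intercept), i.e. L ≈ exp(intercept) * T**slope."""
    xs = [math.log(t) for t in T]
    ys = [math.log(v) for v in L]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic coherence lengths: 104 nm at T = 0.35 K, falling as T**-0.513,
# with 1% multiplicative noise (all values illustrative).
rng = random.Random(0)
T = [0.35 * 2 ** k for k in range(8)]
L = [104.0 * (t / 0.35) ** -0.513 * (1 + 0.01 * rng.uniform(-1, 1)) for t in T]
slope, _ = fit_power_law(T, L)
```

The fitted `slope` recovers the input exponent to within a couple of percent, since the noise is small compared with the logarithmic temperature span.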
Weak localization measurement allows us to qualitatively probe the temperature-dependent phase coherence time τ_φ, which is in agreement with the theory of carrier interaction in the diffusive regime. We also observe the universal conductance fluctuation phenomenon in few-layer black phosphorus in the moderate-magnetic-field and low-temperature regime. (paper) 3. Importance of weak minerals on earthquake mechanics Science.gov (United States) Kaneki, S.; Hirono, T. 2017-12-01 The role of weak minerals such as smectite and talc in earthquake mechanics is an important issue and has been debated for several decades. Traditionally, weak minerals in faults have been reported to weaken fault strength owing to their low frictional resistance. Furthermore, the velocity-strengthening behavior of such weak minerals (e.g., talc) is considered to be responsible for fault creep (aseismic slip) in the San Andreas fault. In contrast, recent studies reported that a large amount of weak smectite in the Japan Trench could facilitate gigantic seismic slip during the 2011 Tohoku-oki earthquake. To investigate the role of weak minerals in the rupture propagation process and the magnitude of slip, we focus on the frictional properties of carbonaceous materials (CMs), which are representative weak materials widely distributed in and around convergent boundaries. Field observation and geochemical analyses revealed that a graphitized CM layer is distributed along the slip surface of a fossil plate-subduction fault. Laboratory friction experiments demonstrated that pure quartz, bulk mixtures with bituminous coal (1 wt.%), and quartz with layered coal samples exhibited similar frictional properties (initial, yield, and dynamic friction). However, mixtures of quartz (99 wt.%) and layered graphite (1 wt.%) showed significantly lower initial and yield friction coefficients (0.31 and 0.50, respectively). 
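The stress ratio S used in this abstract, S = (yield stress − initial stress)/(initial stress − dynamic stress), is simple arithmetic; a minimal helper (the stress values in the example are illustrative, not the paper's raw data):

```python
def stress_ratio(initial: float, yield_: float, dynamic: float) -> float:
    """Stress ratio S = (yield - initial) / (initial - dynamic).

    A larger S means a larger strength excess before failure relative to
    the stress drop; the abstract reports that a higher S goes with a
    smaller slip distance.
    """
    if initial <= dynamic:
        raise ValueError("expected initial stress > dynamic stress")
    return (yield_ - initial) / (initial - dynamic)

# Illustrative (made-up) stresses, in arbitrary units:
s_low = stress_ratio(initial=2.0, yield_=3.0, dynamic=1.0)
s_high = stress_ratio(initial=2.0, yield_=6.0, dynamic=1.0)
```

Raising the yield stress while holding the initial and dynamic stresses fixed raises S, mirroring the graphite-bearing samples' larger S in the abstract.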
Furthermore, the stress ratio S, defined as (yield stress − initial stress)/(initial stress − dynamic stress), increased in layered graphite samples (1.97) compared to quartz samples (0.14). A similar trend was observed in smectite-rich fault gouge. By referring to the reported results of dynamic rupture propagation simulations using an S ratio of 1.4 (a typical value for the Japan Trench) and 2.0 (this study), we confirmed that a higher S ratio results in a smaller slip distance, by approximately 20%. On the basis of these results, we could conclude that weak minerals have lower 4. Weak Hard X-Ray Emission from Broad Absorption Line Quasars: Evidence for Intrinsic X-Ray Weakness DEFF Research Database (Denmark) Luo, B.; Brandt, W. N.; Alexander, D. M. 2014-01-01 We report NuSTAR observations of a sample of six X-ray weak broad absorption line (BAL) quasars. These targets, at z = 0.148-1.223, are among the optically brightest and most luminous BAL quasars known at z … 330 times weaker than … expected for typical quasars. Our results from a pilot NuSTAR study of two low-redshift BAL quasars, a Chandra stacking analysis of a sample of high-redshift BAL quasars, and a NuSTAR spectral analysis of the local BAL quasar Mrk 231 have already suggested the existence of intrinsically X-ray weak BAL quasars, i.e., quasars not emitting X-rays at the level expected from their optical/UV emission. The aim of the current program is to extend the search for such extraordinary objects. Three of the six new targets are weakly detected by NuSTAR with ≲ 45 counts in the 3-24 keV band, and the other three … 5. Non-relativistic limit in a model of radiative flow Czech Academy of Sciences Publication Activity Database Nečasová, Šárka; Ducomet, B. 2015-01-01 Roč. 35, č. 2 (2015), s. 117-137 ISSN 0174-4747 R&D Projects: GA ČR GA13-00522S Institutional support: RVO:67985840 Keywords: radiation hydrodynamics * Navier-Stokes-Fourier system * weak solution Subject RIV: BA - General Mathematics http://www.degruyter.com/view/j/anly.2015.35.issue-2/anly-2012-1295/anly-2012-1295.xml 7. Introduction to weak interactions International Nuclear Information System (INIS) Leite Lopes, J. An account is first given of the electromagnetic interactions of complex scalar, vector and spinor fields. It is shown that the electromagnetic field may be considered as a gauge field. Yang-Mills fields and the field theory invariant with respect to the non-Abelian gauge transformation group are then described. The construction, owing to this invariance principle, of conserved isospin currents associated with gauge fields is also demonstrated. This is followed by a historical survey of the development of the weak interaction theory, established at first to describe beta disintegration processes by analogy with electrodynamics. The various stages are mentioned, from the discovery of principles and rules and the violation of principles, such as invariance with respect to spatial reflection and charge conjugation, to the formulation of the effective current-current Lagrangian and research on the structure of weak currents [fr] 8. Down syndrome and ionizing radiation.
Science.gov (United States) Verger, P 1997-12-01 This review examines the epidemiologic and experimental studies into the possible role ionizing radiation might play in Down Syndrome (trisomy 21). It is prompted by a report of a temporal cluster of cases of this chromosomal disorder observed in West Berlin exactly 9 mo after the radioactive cloud from Chernobyl passed. In approximately 90% of cases, Down Syndrome is due to the nondisjunction of chromosome 21, most often in the oocyte, which may be exposed to ionizing radiation during two separate periods: before the completion of the first meiosis or around the time of ovulation. Most epidemiologic studies into trisomies and exposure to ionizing radiation examine only the first period; the Chernobyl cluster is related to the second. Analysis of these epidemiologic results indicates that the possibility that ionizing radiation might be a risk factor in Down Syndrome cannot be excluded. The experimental results, although sometimes contradictory, demonstrate that irradiation may induce nondisjunction in oogenesis and spermatogenesis; they cannot, however, be easily extrapolated to humans. The weaknesses of epidemiologic studies into the risk factors for Down Syndrome at birth (especially the failure to take into account the trisomy cases leading to spontaneous abortion) are discussed. We envisage the utility and feasibility of new studies, in particular among women exposed to prolonged or repeated artificially-produced ionizing radiation. 9. Continuing dental education in radiation protection: monitoring the outcomes. Science.gov (United States) Absi, Eg; Drage, Na; Thomas, Hs; Newcombe, Rg; Nash, Es 2009-03-01 To evaluate an evolving radiation protection dental postgraduate course run in Wales between 2003 and 2007. We compared three standardized course series. Course content was enhanced in 2006 to target areas of weakness. 
In 2007, a single-best-answer multiple choice questionnaire instrument superseded a true/false format. Practitioners' performance was studied pre- and immediately post-training. 900 participants completed identical pre- and post-course validated multiple choice questionnaires. 809 (90%) paired morning-afternoon records, including those of 52 dental care professionals (DCPs), were analysed. Mean (standard error) pre- and post-course percentage scores for the three courses were 33.8 (0.9), 35.4 (1.4), 34.6 (1.0) and 63.6 (0.9), 59.0 (1.4), 69.5 (0.9). Pre-training, only 2.4%, 3.1% and 4.9% of participants achieved the pass mark, compared to 57.7%, 48.4% and 65.9% post-training, indicating a rather greater pass rate and gain in the most recent series than in the earlier ones. In the recent series, older, more experienced candidates scored slightly higher; however, their gain from pre- to post-training was slightly less. Baseline levels of radiation protection knowledge remained very low, but attending an approved course improved this considerably. Targeting areas of weakness produced higher scores. Current radiation protection courses may not be optimal for DCPs. 10. Elementary particle treatment of the radiative muon capture International Nuclear Information System (INIS) Gmitro, M.; Ovchinnikova, A.A. 1979-01-01 Radiative nucleon-capture amplitudes have been constructed for the ¹²C(0⁺) → ¹²B(1⁺) and ¹⁶O(0⁺) → ¹⁶N(2⁻) transitions using assumptions about the conservation of the electromagnetic and weak hadronic currents, supplemented by a dynamical hypothesis. The nucleus is treated as an elementary particle and is therefore completely defined by its charge e, magnetic moment μ, spin J and parity π. In this case the radiative amplitude obtained in the framework of perturbation theory with minimal coupling sometimes does not satisfy the CVC and PCAC conditions and can even be gauge noninvariant. The method considered allows one to overcome these shortcomings. (G.M.) 11.
Obliquity Modulation of the Incoming Solar Radiation Science.gov (United States) Liu, Han-Shou; Smith, David E. (Technical Monitor) 2001-01-01 Based on a basic principle of orbital resonance, we have identified a huge deficit of solar radiation induced by the combined amplitude and frequency modulation of the Earth's obliquity as a possible causal mechanism for ice age glaciation. Including this modulation effect on solar radiation, we have performed model simulations of climate change for the past 2 million years. Simulation results show that: (1) for the past 1 million years, temperature fluctuation cycles were dominated by a 100-kyr period due to the amplitude-frequency resonance effect of the obliquity; (2) from 2 to 1 million years ago, the amplitude-frequency interactions of the obliquity were so weak that they were not able to stimulate a resonance effect on solar radiation; (3) amplitude and frequency modulation analysis of solar radiation provides a series of resonances in the incoming solar radiation which may have shifted the glaciation cycles from 41 kyr to 100 kyr about 0.9 million years ago. These results are in good agreement with the marine and continental paleoclimate records. Thus, the proposed climate response to the combined amplitude and frequency modulation of the Earth's obliquity may be the key to understanding the glaciation puzzles in paleoclimatology. 12. Weak interactions at high energies International Nuclear Information System (INIS) Ellis, J. 1978-08-01 Review lectures are presented on the phenomenological implications of the modern spontaneously broken gauge theories of the weak and electromagnetic interactions, and some observations are made about which high energy experiments probe what aspects of gauge theories.
Basic quantum chromodynamics phenomenology is covered, including momentum-dependent effective quark distributions, the transverse momentum cutoff, the search for gluons as sources of hadron jets, the status and prospects for the spectroscopy of fundamental fermions and how fermions may be used to probe aspects of the weak and electromagnetic gauge theory, studies of intermediate vector bosons, and miscellaneous possibilities suggested by gauge theories, from the Higgs bosons to speculations about proton decay. 187 references 13. Quasi hyperrigidity and weak peak points for non-commutative … Indian Academy of Sciences (India) Abstract. In this article, we introduce the notions of weak boundary representation, quasi hyperrigidity and weak peak points in the non-commutative setting for operator systems in C∗-algebras. An analogue of Saskin's theorem relating quasi hyperrigidity and the weak Choquet boundary for particular classes of C∗-algebras is … 14. Testing the dynamics of B → ππ and constraints on α International Nuclear Information System (INIS) Grossman, Yuval; Hocker, Andreas; Ligeti, Zoltan; Pirjol, Dan 2005-01-01 In charmless nonleptonic B decays to ππ or ρρ, the 'color allowed' and 'color suppressed' tree amplitudes can be studied in a systematic expansion in α_s(m_b) and Λ_QCD/m_b. At leading order in this expansion their relative strong phase vanishes. The implications of this prediction are obscured by penguin contributions. We propose to use this prediction to test the relative importance of the various penguin amplitudes using experimental data. The present B → ππ data suggest that there are large corrections to the heavy quark limit, which can be due to power corrections to the tree amplitudes, a large up-penguin amplitude, or enhanced weak annihilation.
Because the penguin contributions are smaller, the heavy quark limit is more consistent with the B → ρρ data, and its implications may become important for the extraction of α from this mode in the future. 16. Simulation of weak and strong Langmuir collapse regimes International Nuclear Information System (INIS) Hadzievski, L.R.; Skoric, M.M.; Kono, M.; Sato, T. 1998-01-01 In order to check the validity of the self-similar solutions and the existence of weak and strong collapse regimes, a direct two-dimensional simulation of the time evolution of a Langmuir soliton instability is performed. The simulation is based on the Zakharov model of strong Langmuir turbulence in a weakly magnetized plasma, accounting for the full ion dynamics. For the parameters considered, agreement with self-similar dynamics of the weak collapse type is found, with no evidence of strong Langmuir collapse. (author) 17.
Radiative corrections to neutrino deep inelastic scattering revisited International Nuclear Information System (INIS) Arbuzov, Andrej B.; Bardin, Dmitry Yu.; Kalinovskaya, Lidia V. 2005-01-01 Radiative corrections to neutrino deep inelastic scattering are revisited. One-loop electroweak corrections are recalculated within the automatic SANC system. Terms with mass singularities are treated including higher-order leading logarithmic corrections. The scheme dependence of corrections due to weak interactions is investigated. The results are implemented into the data analysis of the NOMAD experiment. The present theoretical accuracy in the description of the process is discussed 18. Prevalence of Weak D Antigen in Western Indian Population Directory of Open Access Journals (Sweden) Tanvi Sadaria 2015-12-01 Full Text Available Introduction: The discovery of the Rh antigens in 1939 by Landsteiner and Wiener was a revolutionary stage in blood banking. Of these antigens, D, which decides Rh positivity or negativity, is the most antigenic. A problem is encountered when an individual has a weakened expression of D (Du), i.e., fewer D antigens on the red cell membrane. Aims and Objectives: To determine the prevalence of weak D in the Indian population, because the incidence varies in different populations, and to determine the risk of alloimmunization among Rh D negative patients who receive the blood of weak D positive donors. Material and Methods: Rh grouping of 38,962 donors who came to the Department of Immunohematology and Blood Transfusion of Civil Hospital, Ahmedabad from 1st January 2013 to 30th September 2014 was done using DIAGAST (automated grouping). The samples that tested negative for the D antigen were further analysed for weak D (Du) by the indirect antiglobulin test using a blend of IgG and IgM anti-D. This was done using the column agglutination method in an ID card (gel card). Results: The total number of donors studied was 38,962. Of these, 3360 (8.6%) tested Rh D negative.
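The prevalence percentages reported in this record follow directly from the quoted donor counts; a quick arithmetic check (all counts taken from the abstract, including the 22 weak D positives stated in its conclusion):

```python
# Arithmetic check of the prevalence figures quoted in the record above.
total_donors = 38962   # all donors Rh-grouped
rh_negative = 3360     # donors testing Rh D negative
weak_d = 22            # weak D (Du) positives reported in the same abstract

pct_rh_negative = 100 * rh_negative / total_donors
pct_weak_d_total = 100 * weak_d / total_donors
pct_weak_d_among_negative = 100 * weak_d / rh_negative

print(f"Rh D negative: {pct_rh_negative:.1f}%")                    # 8.6%
print(f"Weak D among all donors: {pct_weak_d_total:.3f}%")         # 0.056%
print(f"Weak D among Rh negatives: {pct_weak_d_among_negative:.2f}%")  # 0.65%
```

All three reported percentages (8.6%, 0.056%, 0.65%) are consistent with the underlying counts.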
All Rh D negative donors were tested for weak D (Du). 22 (0.056% of total donors and 0.65% of Rh negative donors) turned out to be weak D (Du) positive. Conclusion: The prevalence of weak D (Du) in the Western Indian population is 0.056%, so the risk of alloimmunization in our setting due to the weak D (Du) antigen is marginal. However, testing for the weak D antigen is necessary in the blood bank because the weak D antigen is immunogenic and can produce alloimmunization if transfused to Rh D negative subjects. 19. Radiation-induced camptocormia and dropped head syndrome. Review and case report of radiation-induced movement disorders International Nuclear Information System (INIS) Seidel, Clemens; Kuhnt, Thomas; Kortmann, Rolf-Dieter; Hering, Kathrin 2015-01-01 In recent years, camptocormia and dropped head syndrome (DHS) have gained attention as particular forms of movement disorders. Camptocormia presents with involuntary forward flexion of the thoracolumbar spine that typically increases during walking or standing and may severely impede walking ability. DHS is characterized by weakness of the neck extensors and a consecutive inability to extend the neck; in severe cases the head is fixed in a "chin to chest" position. Many diseases may underlie these conditions, and there have been some reports of radiation-induced camptocormia and DHS. A PubMed search with the keywords "camptocormia," "dropped head syndrome," "radiation-induced myopathy," "radiation-induced neuropathy," and "radiation-induced movement disorder" was carried out to better characterize radiation-induced movement disorders and the radiation techniques involved. In addition, the case of a patient developing camptocormia 23 years after radiation therapy for a non-Hodgkin's lymphoma of the abdomen is described. In total, nine case series of radiation-induced DHS (n = 45 patients) and, including our case, three case reports (n = 3 patients) about radiogenic camptocormia were retrieved.
Most cases (40/45 patients) occurred less than 15 years after radiotherapy involving extended fields for Hodgkin's disease. The use of wide radiation fields including many spinal segments with paraspinal muscles may lead to radiation-induced movement disorders. If the paraspinal muscles and the thoracolumbar spine are involved, the clinical presentation can be that of camptocormia. DHS may result if there is involvement of the cervical spine. To prevent these disorders, sparing of the spine and paraspinal muscles is desirable. (orig.) [de] 20. Weak limits for quantum random walks International Nuclear Information System (INIS) Grimmett, Geoffrey; Janson, Svante; Scudo, Petra F. 2004-01-01 We formulate and prove a general weak limit theorem for quantum random walks in one and more dimensions. With X_n denoting position at time n, we show that X_n/n converges weakly as n→∞ to a certain distribution which is absolutely continuous and of bounded support. The proof is rigorous and makes use of Fourier transform methods. This approach simplifies and extends certain preceding derivations valid in one dimension that make use of combinatorial and path integral methods 1. The right choice: extremity dosemeter for different radiation fields International Nuclear Information System (INIS) Brasik, N.; Stadtmann, H.; Kindl, P. 2005-01-01 Full text: Measurements of weakly penetrating radiation in personal dosimetry present problems in the design of suitable detectors and in the interpretation of their readings. For the measurement of the individual beta radiation dose, personal dosemeters for the fingers/tips are required. In general, the dosemeters currently used for personal monitoring of beta and low energy photon doses suffer from an energy threshold problem because the detector and/or the filter are too thick. TLDs of a standard thickness can seriously underestimate personal skin doses, especially in external fields of weakly penetrating radiation.
LiF:Mg,Cu,P is a promising TL material which allows the production of thin detectors with sufficient sensitivity. The dosimetric properties of two different types of extremity dosemeters, designed to measure the personal dose equivalent Hp(0.07), have been compared: LiF:Mg,Ti (TLD-100) and LiF:Mg,Cu,P (TLD-700H). The first consists of a 100 mg cm−2 LiF:Mg,Ti (TLD-100) chip and a 35 mg cm−2 cap; the other consists of a 7 mg cm−2 layer of LiF:Mg,Cu,P (TLD-700H) powder and a 5 mg cm−2 cap. The evaluation was done in two steps: performance tests (ISO 12794) and measurements in real workplaces. In the first step, type test results for beta calibration were compared. In addition, calibration for low energy photon radiation according to ISO 4037-3 was carried out. In the second step, simultaneous measurements with both types of dosemeters were performed at workplaces where radiopharmaceuticals containing different radioisotopes are prepared and applied. Practices in these fields are characterized by the handling of high activities at very small distances between source and skin. The results from the comparison of the two dosemeter types are presented and analyzed with respect to different radiation fields. Experiments showed a satisfactory sensitivity of the thinner dosemeter (TLD-700H) for detecting beta radiation at protection levels and a good 2. Weak values in a classical theory with an epistemic restriction International Nuclear Information System (INIS) Karanjai, Angela; Cavalcanti, Eric G; Bartlett, Stephen D; Rudolph, Terry 2015-01-01 Weak measurement of a quantum system followed by postselection based on a subsequent strong measurement gives rise to a quantity called the weak value: a complex number for which the interpretation has long been debated.
We analyse the procedure of weak measurement and postselection, and the interpretation of the associated weak value, using a theory of classical mechanics supplemented by an epistemic restriction that is known to be operationally equivalent to a subtheory of quantum mechanics. Both the real and imaginary components of the weak value appear as phase space displacements in the postselected expectation values of the measurement device's position and momentum distributions, and we recover the same displacements as in the quantum case by studying the corresponding evolution in our theory of classical mechanics with an epistemic restriction. By using this epistemically restricted theory, we gain insight into the appearance of the weak value as a result of the statistical effects of postselection, and this provides us with an operational interpretation of the weak value, both its real and imaginary parts. We find that the imaginary part of the weak value is a measure of how much postselection biases the mean phase space distribution for a given amount of measurement disturbance. All such biases proportional to the imaginary part of the weak value vanish in the limit where disturbance due to measurement goes to zero. Our analysis also offers intuitive insight into how measurement disturbance can be minimized and the limits of weak measurement. (paper) 3. Principles of the radiosity method versus radiative transfer for canopy reflectance modeling Science.gov (United States) Gerstl, Siegfried A. W.; Borel, Christoph C. 1992-01-01 The radiosity method is introduced to plant canopy reflectance modeling. We review the physics principles of the radiosity method, which originates in thermal radiative transfer analyses when hot and cold surfaces are considered within a given enclosure. The radiosity equation, which is an energy balance equation for discrete surfaces, is described and contrasted with the radiative transfer equation, which is a volumetric energy balance equation.
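The discrete energy balance described in this record is the classical radiosity equation B_i = E_i + ρ_i Σ_j F_ij B_j, which for a small enclosure can be solved as a linear system. The sketch below uses a toy three-surface enclosure; the emissions, reflectances and form factors are made-up illustrative values, not taken from the paper.

```python
import numpy as np

# Radiosity balance B_i = E_i + rho_i * sum_j F_ij * B_j for a toy enclosure
# of 3 surfaces. In matrix form: (I - diag(rho) @ F) @ B = E.
E = np.array([1.0, 0.0, 0.0])        # emitted radiosity (only surface 0 emits)
rho = np.array([0.5, 0.8, 0.3])      # surface reflectances (illustrative)
F = np.array([[0.0, 0.6, 0.4],       # form factors F[i, j]: fraction of energy
              [0.5, 0.0, 0.5],       # leaving surface i that reaches surface j
              [0.4, 0.6, 0.0]])      # (each row sums to <= 1)

B = np.linalg.solve(np.eye(3) - np.diag(rho) @ F, E)
print(B)  # radiosity of each surface, including interreflections
```

Solving the coupled system at once is what distinguishes the surface energy balance from the volumetric balance of the radiative transfer equation discussed in the same abstract.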
Comparing the strengths and weaknesses of the radiosity method and the radiative transfer method, we conclude that the two methods are complementary. Results of sample calculations are given for canopy models with up to 20,000 discrete leaves. 4. Constrained Deep Weak Supervision for Histopathology Image Segmentation. Science.gov (United States) Jia, Zhipeng; Huang, Xingyi; Chang, Eric I-Chao; Xu, Yan 2017-11-01 In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. This work is set in a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints into our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints on positive instances are introduced in our approach to effectively explore additional weakly supervised information that is easy to obtain and provides a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images. 5. Weak hadronic currents in compensation theory International Nuclear Information System (INIS) Pappas, R.C.
1975-01-01 Working within the framework of a compensation theory of strong and weak interactions, it is shown that: (1) an axial vector baryon number current can be included in the weak current algebra if certain restrictions on the K-meson strong couplings are relaxed; (2) the theory does not permit the introduction of strange currents of the chiral form V + A; and (3) the assumption that the superweak currents of the theory cannot contain certain CP-conserving terms can be justified on the basis of compensation requirements 6. Weak interactions of the b quark International Nuclear Information System (INIS) Branco, G.C.; Mohapatra, R.N. 1978-01-01 In weak-interaction models with two charged W bosons of comparable mass, there exists a novel possibility for the weak interactions of the b quark, in which the (u-bar b)_R current occurs with maximal strength. It is noted that multimuon production in e⁺e⁻ annihilation at Q² ≳ (12 GeV)² will distinguish this scheme from the conventional one. We also present a Higgs system that leads naturally to this type of coupling, in a class of gauge models 7. CPT non-invariance and weak interactions International Nuclear Information System (INIS) Hsu, J.P. 1973-01-01 In this talk, I will describe a possible violation of CPT invariance in the domain of weak interactions. One can construct a model of weak interactions which, in order to be consistent with all experimental data, must violate CPT maximally. The model predicts many specific results for decay processes which could be tested in the planned neutral hyperon beam or neutrino beam at NAL. The motivations and the physical idea in the model are explained and the implications of the model are discussed. (U.S.) 8. The weak interaction in nuclear, particle and astrophysics International Nuclear Information System (INIS) Grotz, K.; Klapdor, H.V.
1989-01-01 This book is an introduction to the concepts of weak interactions and their importance and consequences for nuclear physics, particle physics, neutrino physics, astrophysics and cosmology. After a general introduction to elementary particles and interactions, the Fermi theory of weak interactions is described together with its connection to nuclear structure and beta decay, including double beta decay. Then, after a general description of gauge theories, the Weinberg-Salam theory of the electroweak interactions is introduced. Thereafter the weak interactions are considered in the framework of grand unification. Then the physics of neutrinos is discussed. Thereafter the connections of weak interactions with astrophysics are considered, with special regard to gravitational collapse and the synthesis of heavy elements in the r-process. Finally, the connections of grand unified theories and cosmology are considered. (HSI) With 141 figs., 39 tabs 9. CAUSES: Attribution of Surface Radiation Biases in NWP and Climate Models near the U.S. Southern Great Plains Science.gov (United States) Van Weverberg, K.; Morcrette, C. J.; Petch, J.; Klein, S. A.; Ma, H.-Y.; Zhang, C.; Xie, S.; Tang, Q.; Gustafson, W. I.; Qian, Y.; Berg, L. K.; Liu, Y.; Huang, M.; Ahlgrimm, M.; Forbes, R.; Bazile, E.; Roehrig, R.; Cole, J.; Merryfield, W.; Lee, W.-S.; Cheruy, F.; Mellul, L.; Wang, Y.-C.; Johnson, K.; Thieman, M. M. 2018-04-01 Many numerical weather prediction (NWP) and climate models exhibit too warm lower tropospheres near the midlatitude continents. The warm bias has been shown to coincide with important surface radiation biases that likely play a critical role in the inception or growth of the warm bias. This paper presents an attribution study of the net radiation biases in nine model simulations, performed in the framework of the CAUSES project (Clouds Above the United States and Errors at the Surface).
Contributions from deficiencies in the surface properties, clouds, water vapor, and aerosols are quantified, using an array of radiation measurement stations near the Atmospheric Radiation Measurement Southern Great Plains site. Furthermore, an in-depth analysis is shown to attribute the radiation errors to specific cloud regimes. The net surface shortwave radiation is overestimated in all models throughout most of the simulation period. Cloud errors are shown to contribute most to this overestimation, although nonnegligible contributions from the surface albedo exist in most models. Missing deep cloud events and/or simulating deep clouds with too weak cloud radiative effects dominate the cloud-related radiation errors. Some models have compensating errors, simulating deep cloud too frequently while largely underestimating its radiative effect; other models miss deep cloud events altogether. Surprisingly, even the latter models tend to produce too much and too frequent afternoon surface precipitation. This suggests that, rather than issues with the triggering of deep convection, cloud radiative deficiencies are related to too weak convective cloud detrainment and too large precipitation efficiencies. 10. Weak relativity CERN Document Server Selleri, Franco 2015-01-01 Weak Relativity is a theory equivalent to Special Relativity according to Reichenbach's definition, with the parameter epsilon equal to 0. It formulates a neo-Lorentzian approach by replacing the Lorentz transformations with a new set named "inertial transformations", thus explaining the Sagnac effect, the twin paradox and the trip from the future to the past in an easy and elegant way. The cosmic microwave background is suggested as a possible privileged reference system. Most importantly, being a theory based on experimental proofs rather than mutual consensus, it offers a physical description of reality independent of human observation. 11.
Staggering towards a calculation of weak amplitudes Energy Technology Data Exchange (ETDEWEB) Sharpe, S.R. 1988-09-01 An explanation is given of the methods required to calculate hadronic matrix elements of the weak Hamiltonians using lattice QCD with staggered fermions. New results are presented for the 1-loop perturbative mixing of the weak interaction operators. New numerical techniques designed for staggered fermions are described. A preliminary result for the kaon B parameter is presented. 24 refs., 3 figs. 12. Precision phase estimation based on weak-value amplification Science.gov (United States) Qiu, Xiaodong; Xie, Linguo; Liu, Xiong; Luo, Lan; Li, Zhaoxue; Zhang, Zhiyou; Du, Jinglei 2017-02-01 In this letter, we propose a precision method for phase estimation based on the weak-value amplification (WVA) technique using a monochromatic light source. The anomalous WVA significantly suppresses the technical noise with respect to the intensity difference signal induced by the phase delay when the post-selection procedure comes into play. The measured phase precision of this method is proportional to the weak value of a polarization operator in the experimental range. Our results compete well with wide-spectrum light phase weak measurements and outperform the standard homodyne phase detection technique. 13. Consistency tests of Ampcalculator and chiral amplitudes in SU(3) Chiral Perturbation Theory: A tutorial-based approach International Nuclear Information System (INIS) Ananthanarayan, B.; Sentitemsu Imsong, I.; Das, Diganta 2012-01-01 Ampcalculator (AMPC) is a Mathematica-based program that was made publicly available some time ago by Unterdorfer and Ecker. It enables the user to compute several processes at one loop (up to O(p⁴)) in SU(3) chiral perturbation theory. These include computing matrix elements and form factors for strong and non-leptonic weak processes with at most six external states.
It was used to compute some novel processes and was tested against well-known results by the original authors. Here we present the results of several thorough checks of the package. Exhaustive checks performed by the original authors are not publicly available, and hence the present effort. Some new results are obtained from the software, especially in the kaon odd-intrinsic-parity non-leptonic decay sector involving the coupling G_27. Another illustrative set of tree-level amplitudes we provide is in the context of τ decays with several mesons including quark mass effects, of use to the BELLE experiment. All eight meson-meson scattering amplitudes have been checked. The Kaon-Compton amplitude has been checked and a minor error in the published results has been pointed out. This exercise is a tutorial-based one, wherein several input and output notebooks are also being made available as ancillary files on the arXiv. Some of the additional notebooks we provide contain explicit expressions that we have used for comparison with established results. The purpose is to encourage users to apply the software to suit their specific needs. An automatic amplitude generator of this type can provide error-free outputs that could be used as inputs for further simplification, and in varied scenarios such as applications of chiral perturbation theory at finite temperature, density and volume. This can also be used by students as a learning aid in low-energy hadron dynamics. (orig.) 14. Weak lensing probes of modified gravity International Nuclear Information System (INIS) Schmidt, Fabian 2008-01-01 We study the effect of modifications to general relativity on large-scale weak lensing observables. In particular, we consider three modified gravity scenarios: f(R) gravity, the Dvali-Gabadadze-Porrati model, and tensor-vector-scalar theory.
Weak lensing is sensitive to the growth of structure and the relation between matter and gravitational potentials, both of which will in general be affected by modified gravity. Restricting ourselves to linear scales, we compare the predictions for galaxy-shear and shear-shear correlations of each modified gravity cosmology to those of an effective dark energy cosmology with the same expansion history. In this way, the effects of modified gravity on the growth of perturbations are separated from the expansion history. We also propose a test which isolates the matter-potential relation from the growth factor and matter power spectrum. For all three modified gravity models, the predictions for galaxy and shear correlations will be discernible from those of dark energy with very high significance in future weak lensing surveys. Furthermore, each model predicts a measurably distinct scale dependence and redshift evolution of galaxy and shear correlations, which can be traced back to the physical foundations of each model. We show that the signal-to-noise for detecting signatures of modified gravity is much higher for weak lensing observables as compared to the integrated Sachs-Wolfe effect, measured via the galaxy-cosmic microwave background cross-correlation. 15. Proximal Limb Weakness Reverting After CSF Diversion In Intracranial Hypertension Directory of Open Access Journals (Sweden) Sinha S 2005-01-01 Full Text Available We report on two young girls who developed progressive visual failure secondary to increased intracranial pressure and had significant proximal muscle weakness of limbs. Patients with elevated intracranial pressure (ICP) may present with "false localizing signs", besides having headache, vomiting and papilledema. Radicular pain as a manifestation of raised ICP is rare, and motor weakness attributable to polyradiculopathy is exceptional. Two patients with increased intracranial pressure without lateralizing signs had significant muscle weakness.
Clinical evaluation and laboratory tests did not disclose any other cause for weakness. Following theco-peritoneal shunt, in both patients, there was variable recovery of vision but the proximal weakness and symptoms of elevated ICP improved rapidly. Recognition of this uncommon manifestation of raised ICP may obviate the need for unnecessary investigation and reduce morbidity due to weakness by CSF diversion procedure. 16. Collective migration of adsorbed atoms on a solid surface in the laser radiation field International Nuclear Information System (INIS) Andreev, V V; Ignat'ev, D V; Telegin, Gennadii G 2004-01-01 The lateral (in the substrate plane) interaction between dipoles induced in particles adsorbed on a solid surface is studied in a comparatively weak laser radiation field with a Gaussian transverse distribution. It is shown that the particles migrate over the surface in the radial direction either outside an illuminated spot with the formation of a 'crater' or inside the spot with the formation of a 'mound'. (interaction of laser radiation with matter. laser plasma) 17. In Vivo Predictive Dissolution: Comparing the Effect of Bicarbonate and Phosphate Buffer on the Dissolution of Weak Acids and Weak Bases. Science.gov (United States) Krieg, Brian J; Taghavi, Seyed Mohammad; Amidon, Gordon L; Amidon, Gregory E 2015-09-01 Bicarbonate is the main buffer in the small intestine and it is well known that buffer properties such as pKa can affect the dissolution rate of ionizable drugs. However, bicarbonate buffer is complicated to work with experimentally. Finding a suitable substitute for bicarbonate buffer may provide a way to perform more physiologically relevant dissolution tests. The dissolution of weak acid and weak base drugs was conducted in bicarbonate and phosphate buffer using rotating disk dissolution methodology. 
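The pKa dependence that drives such buffer effects can be illustrated with the Henderson-Hasselbalch relation for a weak acid's total solubility; a minimal sketch with hypothetical values (S0 and pKa below are invented for illustration, not taken from the study):

```python
# Total solubility of a weak acid as a function of pH
# (Henderson-Hasselbalch): S_total = S0 * (1 + 10**(pH - pKa)),
# where S0 is the intrinsic (un-ionized) solubility.
# Illustrative values only, not the study's data.

def weak_acid_solubility(s0, pka, ph):
    return s0 * (1.0 + 10.0 ** (ph - pka))

s0, pka = 0.01, 4.2   # hypothetical drug: mg/mL, dimensionless pKa
for ph in (1.2, 4.2, 6.5):
    print(ph, weak_acid_solubility(s0, pka, ph))
```

At pH = pKa the ionized and un-ionized forms contribute equally (solubility doubles); at a small-intestinal pH of 6.5 the hypothetical acid above is roughly 200 times more soluble than its intrinsic value, which is why the buffer-controlled surface pH dominates the dissolution rate.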
Experimental results were compared with the predicted results using the film model approach of Mooney et al. (Mooney K, Mintun M, Himmelstein K, Stella V. 1981. J Pharm Sci 70(1):22-32), based on equilibrium assumptions, as well as a model accounting for the slow hydration reaction CO2 + H2O → H2CO3. Assuming carbonic acid is irreversible in the dehydration direction, CO2 + H2O ← H2CO3, the transport analysis can accurately predict rotating disk dissolution of weak acid and weak base drugs in bicarbonate buffer. The predictions show that matching the dissolution of weak acid and weak base drugs in phosphate and bicarbonate buffer is possible. The phosphate buffer concentration necessary to match physiologically relevant bicarbonate buffer [e.g., 10.5 mM (HCO3-), pH = 6.5] is typically in the range of 1-25 mM and is very dependent upon drug solubility and pKa. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association. 18. Comparison of Solar UVA and UVB Radiation Measured in Selangor, Malaysia International Nuclear Information System (INIS) Kamarudin, S. U.; Gopir, G.; Yatim, B.; Sanusi, H.; Mahmud, P. S. Megat; Choo, P. Y. 2010-01-01 The solar ultraviolet A (UVA) radiation data was measured at Physics Building, Universiti Kebangsaan Malaysia (2°55' N, 101°46' E, 50 m asl) by the Xplorer GLX (Pasco) connected to a UVA light sensor. The measured solar UVA data were compared with the total daily solar ultraviolet B (UVB) radiation data recorded by the Malaysian Meteorological Department at Petaling Jaya, Malaysia (3°06' N, 101°39' E, 50 m asl) for 18 days in 2007. The daily total average of UVA radiation received is (298±105) kJ m^-2, while the total daily maximum is (600±56) kJ m^-2. The analysis shows that the UVA radiation values were higher than the UVB values, with an average ratio of 6.41% (range 3-14%). A weak positive correlation was found (the correlation coefficient, r, is 0.22).
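A correlation coefficient like the r = 0.22 reported above comes from the standard Pearson estimator; a minimal sketch (the daily totals below are made-up illustrative numbers, not the study's measurements):

```python
from math import sqrt

def pearson_r(xs, ys):
    # Sample Pearson correlation coefficient of two paired series.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical daily totals (kJ m^-2) -- NOT the study's data.
uva = [310, 250, 420, 180, 500, 275, 390, 220]
uvb = [21, 14, 30, 15, 28, 22, 19, 17]
r = pearson_r(uva, uvb)
```

A value of r near 0.22 indicates that the linear UVB dependence explains only a few percent (r² ≈ 5%) of the day-to-day variance in UVA, consistent with the abstract's conclusion that surface UVA depends only weakly on UVB.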
The amount of UVA radiation that reached the earth surface is less dependent on UVB radiation, and the contributing factors are discussed. 19. Reducing Weak to Strong Bisimilarity in CCP Directory of Open Access Journals (Sweden) Andrés Aristizábal 2012-12-01 Full Text Available Concurrent constraint programming (ccp) is a well-established model for concurrency that singles out the fundamental aspects of asynchronous systems whose agents (or processes) evolve by posting and querying (partial) information in a global medium. Bisimilarity is a standard behavioural equivalence in concurrency theory. However, only recently a well-behaved notion of bisimilarity for ccp, and a ccp partition refinement algorithm for deciding the strong version of this equivalence, have been proposed. Weak bisimilarity is a central behavioural equivalence in process calculi and it is obtained from the strong case by taking into account only the actions that are observable in the system. Typically, the standard partition refinement can also be used for deciding weak bisimilarity simply by using Milner's reduction from weak to strong bisimilarity; a technique referred to as saturation. In this paper we demonstrate that, because of its involved labeled transitions, the above-mentioned saturation technique does not work for ccp. We give an alternative reduction from weak ccp bisimilarity to the strong one that allows us to use the ccp partition refinement algorithm for deciding this equivalence. 20. Weak form factors of beauty baryons International Nuclear Information System (INIS) Ivanov, M.A.; Lyubovitskij, V.E. 1992-01-01 Full analysis of semileptonic decays of beauty baryons with J^P = 1/2^+ and J^P = 3/2^+ into charmed ones within the Quark Confinement Model is reported. Weak form factors and decay rates are calculated. Also the heavy quark limit m_Q → ∞ (Isgur-Wise symmetry) is examined. The weak heavy-baryon form factors in the Isgur-Wise limit and 1/m_Q corrections to them are computed.
The Ademollo-Gatto theorem for the spin-flavour symmetry of heavy quarks is checked. 33 refs.; 1 fig.; 9 tabs 1. Design and construction of the prototype synchrotron radiation detector CERN Document Server Anderhub, H; Baetzner, D; Baumgartner, S; Biland, A; Camps, C; Capell, M; Commichau, V; Djambazov, L; Fanchiang, Y J; Flügge, G; Fritschi, M; Grimm, O; Hangarter, K; Hofer, H; Horisberger, Urs; Kan, R; Kaestli, W; Kenney, G P; Kim, G N; Kim, K S; Koutsenko, V F; Kraeber, M; Kuipers, J; Lebedev, A; Lee, M W; Lee, S C; Lewis, R; Lustermann, W; Pauss, Felicitas; Rauber, T; Ren, D; Ren, Z L; Röser, U; Son, D; Ting, Samuel C C; Tiwari, A N; Viertel, Gert M; Gunten, H V; Wicki, S W; Wang, T S; Yang, J; Zimmermann, B 2002-01-01 The Prototype Synchrotron Radiation Detector (PSRD) is a small-scale experiment designed to measure the rate of low-energy charged particles and photons in near-Earth orbit. It is a precursor to the Synchrotron Radiation Detector (SRD), a proposed addition to the upgraded version of the Alpha Magnetic Spectrometer (AMS-02). The SRD will use the Earth's magnetic field to identify the charge sign of electrons and positrons with energies above 1 TeV by detecting the synchrotron radiation they emit in this field. The differential energy spectrum of these particles is astrophysically interesting and not well covered by the remaining components of AMS-02. Precise measurements of this spectrum offer the possibility to gain information on the acceleration mechanism and characteristics of all cosmic rays in our galactic neighbourhood. The SRD will discriminate against protons as they radiate only weakly. Both the number and energy of the synchrotron photons that the SRD needs to detect are small. The identificat... 2.
In-situ radiation dosimetry based on radio-fluorogenic co-polymerization International Nuclear Information System (INIS) Warman, John M; Luthjens, Leonard H; Haas, Matthijs P de 2009-01-01 A fluorimetric method of radiation dosimetry is presented for which the intensity of the fluorescence of a (tissue equivalent) medium is linearly dependent on accumulated dose from a few gray up to kilograys. The method is based on radio-fluorogenic co-polymerization (RFCP) in which a normally very weakly fluorescent molecule becomes highly fluorescent when incorporated into a (radiation-initiated) growing polymer chain. The method is illustrated with results of in-situ measurements within the chamber of a cobalt-60 irradiator. It is proposed that RFCP could form the basis for fluorimetric multi-dimensional dose imaging. 3. Use of commercial VDMOSFETs in an electronic system subjected to radiation; Utilisation des VDMOSFETs commerciaux dans un systeme electronique soumis aux radiations Energy Technology Data Exchange (ETDEWEB) Picard, C.; Brisset, C.; Quittard, O.; Marceau, M.; Joffre, F. [CEA Saclay, Lab. d' Electronique et de Technologie de l' Informatique, LETI, 91 - Gif-sur-Yvette (France); Hoffmann, A.; Charles, J.P. [centre Lorrain d' Optique et Electronique des Solides, Supelec, 57 - Metz (France) 1999-07-01 This study explores the usefulness of pre-irradiation as a hardening technique for NMOS transistors. NMOS transistors have been exposed to Co-60 gamma radiation with a dose rate of 10 krad(SiO2)/h. The pre-irradiation technique is based on two phenomena occurring when the polarization is negative or equal to 0: the weak shift and the saturation of the threshold voltage. 4. On Weak-BCC-Algebras Science.gov (United States) Thomys, Janus; Zhang, Xiaohong 2013-01-01 We describe weak-BCC-algebras (also called BZ-algebras) in which the condition (x∗y)∗z = (x∗z)∗y is satisfied only in the case when elements x, y belong to the same branch.
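The exchange condition (x∗y)∗z = (x∗z)∗y quoted above can be checked mechanically on a concrete algebra; the sketch below uses the standard BCK-algebra of natural numbers under truncated subtraction (an illustration of the identity itself, not one of the paper's weak-BCC-algebras, where the identity is assumed only within a branch):

```python
# Toy check of the exchange identity (x*y)*z == (x*z)*y on the
# BCK-algebra (N, *, 0) with truncated subtraction x*y = max(x - y, 0).
# Both sides reduce to max(x - y - z, 0), so the identity holds globally.

def star(x, y):
    return max(x - y, 0)

def exchange_holds(n):
    # Exhaustively verify the identity on {0, ..., n-1}.
    return all(star(star(x, y), z) == star(star(x, z), y)
               for x in range(n) for y in range(n) for z in range(n))
```

Running `exchange_holds(12)` confirms the identity on a finite slice; in a weak-BCC-algebra the same check would be restricted to triples whose x and y lie in the same branch.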
We also characterize ideals, nilradicals, and nilpotent elements of such algebras. PMID:24311983 5. Detection of shielded radionuclides from weak and poorly resolved spectra using group positive RIVAL International Nuclear Information System (INIS) Kump, Paul; Bai, Er-Wei; Chan, Kung-Sik; Eichinger, William 2013-01-01 This paper is concerned with the identification of nuclides from weak and poorly resolved spectra in the presence of unknown radiation shielding materials such as carbon, water, concrete and lead. Since a shield will attenuate lower energies more so than higher ones, isotope sub-spectra must be introduced into models and into detection algorithms. We propose a new algorithm for detection, called group positive RIVAL, that encourages the selection of groups of sub-spectra rather than the selection of individual sub-spectra that may be from the same parent isotope. Indeed, the proposed algorithm incorporates group positive LASSO, and, as such, we supply the consistency results of group positive LASSO and adaptive group positive LASSO. In an example employing various shielding materials and material thicknesses, group positive RIVAL is shown to perform well in all scenarios with the exception of ones in which the shielding material is lead. - Highlights: ► Identification of nuclides from weak and poorly resolved spectra. ► Shielding materials such as carbon, water, concrete, and lead are considered. ► Isotope spectra are decomposed into their sub-spectra. ► A variable selection algorithm is proposed that encourages group selection. ► Simulations demonstrate the proposed method's performance when nuclides have been shielded 6. On the steady equations for compressible radiative gas Czech Academy of Sciences Publication Activity Database Kreml, Ondřej; Nečasová, Šárka; Pokorný, M. 2013-01-01 Roč. 64, č. 3 (2013), s. 
539-571 ISSN 0044-2275 R&D Projects: GA ČR(CZ) GAP201/11/1304; GA ČR GA201/08/0012 Institutional research plan: CEZ:AV0Z10190503 Keywords: radiative gas * variational entropy solution * weak solution Subject RIV: BA - General Mathematics Impact factor: 1.214, year: 2013 http://link.springer.com/article/10.1007%2Fs00033-012-0246-4 7. Drift waves in a weakly ionized plasma DEFF Research Database (Denmark) Popovic, M.; Melchior, H. 1968-01-01 A dispersion relation for low frequency drift waves in a weakly ionized plasma has been derived, and through numerical calculations the effect of collisions between the charged and the neutral particles is estimated. 8. Hypernuclear weak decay puzzle International Nuclear Information System (INIS) Barbero, C.; Horvat, D.; Narancic, Z.; Krmpotic, F.; Kuo, T.T.S.; Tadic, D. 2002-01-01 A general shell model formalism for the nonmesonic weak decay of the hypernuclei has been developed. It involves a partial wave expansion of the emitted nucleon waves, preserves naturally the antisymmetrization between the escaping particles and the residual core, and contains as a particular case the weak Λ-core coupling formalism. The extreme particle-hole model and the quasiparticle Tamm-Dancoff approximation are explicitly worked out. It is shown that the nuclear structure manifests itself basically through the Pauli principle, and a very simple expression is derived for the neutron- and proton-induced decay rates Γ_n and Γ_p, which does not involve the spectroscopic factors.
We use the standard strangeness-changing weak ΛN→NN transition potential, which comprises the exchange of the complete pseudoscalar and vector meson octets (π, η, K, ρ, ω, K*), taking into account some important parity-violating transition operators that are systematically omitted in the literature. The interplay between different mesons in the decay of the hypernucleus ¹²ΛC is carefully analyzed. With the commonly used parametrization in the one-meson-exchange model (OMEM), the calculated rate Γ_NM = Γ_n + Γ_p is of the order of the free Λ decay rate Γ_0 (Γ_NM^th ≅ Γ_0) and is consistent with experiments. Yet the measurements of Γ_n/p = Γ_n/Γ_p and of Γ_p are not well accounted for by the theory (Γ_n/p^th … Γ_p^th ≳ 0.60 Γ_0). It is suggested that, unless additional degrees of freedom are incorporated, the OMEM parameters should be radically modified. 9. The possible Bπ molecular state and its radiative decay Energy Technology Data Exchange (ETDEWEB) Ke, Hong-Wei; Gao, Lei [Tianjin University, School of Science, Tianjin (China); Li, Xue-Qian [Nankai University, School of Physics, Tianjin (China) 2017-05-15 Recently, several exotic bosons have been confirmed as multi-quark states. However, there are violent disputes about their inner structures, namely if they are molecular states or tetraquarks, or even mixtures of the two structures. It would be interesting to search experimentally for non-strange four-quark states with open charm or bottom which are lighter than Λ_c or Λ_b. Reasonable arguments indicate that they are good candidates for pure molecular states Dπ or Bπ, because the pion is the lightest boson. Both Bπ and Dπ bound states do not decay via the strong interaction. The Bπ molecule may decay into B* by radiating a photon, whereas the Dπ molecule can only decay via weak interaction. In this paper we explore the mass spectra of the Bπ molecular states by solving the corresponding instantaneous B-S equation.
Then the rate of the radiative decay |3/2, 1/2⟩ → B*γ is calculated, and our numerical results indicate that the processes can be measured by future experiments. We also briefly discuss the Dπ case. Due to the constraint of the final-state phase space it can only decay via weak interaction. (orig.) 10. Light weakly interacting massive particles Science.gov (United States) Gelmini, Graciela B. 2017-08-01 Light weakly interacting massive particles (WIMPs) are dark matter particle candidates with weak-scale interaction with the known particles, and mass in the GeV to tens of GeV range. Hints of light WIMPs have appeared in several dark matter searches in the last decade. The unprecedented possible coincidence of four separate direct detection experimental hints and a potential indirect detection signal in gamma rays from the galactic center, in tantalizingly close regions of mass and cross section, aroused considerable interest in our field. Even though these hints have so far not resulted in a discovery, they have had a significant impact on our field. Here we review the evidence for and against light WIMPs as dark matter candidates and discuss future relevant experiments and observations. 11. Qubit state tomography in a superconducting circuit via weak measurements Science.gov (United States) Qin, Lupei; Xu, Luting; Feng, Wei; Li, Xin-Qi 2017-03-01 In this work we present a study on a new scheme for measuring the qubit state in a circuit quantum electrodynamics (QED) system, based on weak measurement and the concept of weak value. To be applicable under generic parameter conditions, our formulation and analysis are carried out for finite-strength weak measurement, and in particular beyond the bad-cavity and weak-response limits. The proposed study is accessible to present state-of-the-art circuit QED experiments. 12. Composite weak bosons Energy Technology Data Exchange (ETDEWEB) Suzuki, M.
1988-04-01 Dynamical mechanism of composite W and Z is studied in a 1/N field theory model with four-fermion interactions in which global weak SU(2) symmetry is broken explicitly by electromagnetic interaction. Issues involved in such a model are discussed in detail. Deviation from gauge coupling due to compositeness and higher order loop corrections are examined to show that this class of models is consistent not only theoretically but also experimentally. 13. Acoustical and optical radiation pressure and the development of single beam acoustical tweezers International Nuclear Information System (INIS) Thomas, Jean-Louis; Marchiano, Régis; Baresch, Diego 2017-01-01 Studies on radiation pressure in acoustics and optics have enriched one another and have a long common history. Acoustic radiation pressure is used for metrology, levitation, particle trapping and actuation. However, the dexterity and selectivity of single-beam optical tweezers are still to be matched with acoustical devices. Optical tweezers can trap, move and position micron-size particles, biological samples or even atoms with subnanometer accuracy in three dimensions. One limitation of optical tweezers is the weak force that can be applied without thermal damage due to optical absorption. Acoustical tweezers overcome this limitation since the radiation pressure scales as the field intensity divided by the speed of propagation of the wave. However, the feasibility of single beam acoustical tweezers was demonstrated only recently. In this paper, we propose a historical review of the strong similarities but also the specificities of acoustical and optical radiation pressures, from the expression of the force to the development of single-beam acoustical tweezers. - Highlights: • Studies on radiation pressure in acoustics and optics have enriched one another and have a long common history. • Acoustic radiation pressure is used for metrology, levitation, particle trapping and actuation.
• However, the dexterity and selectivity of single-beam optical tweezers are still to be matched with acoustical devices. • Optical tweezers can trap, move and position micron-size particles with subnanometer accuracy in three dimensions. • One limitation of optical tweezers is the weak force that can be applied without thermal damage due to optical absorption. • Acoustical tweezers overcome this limitation since the force scales as the field intensity divided by its propagation speed. • However, the feasibility of single beam acoustical tweezers was demonstrated only recently. • We propose a review of the strong similarities but also the specificities of acoustical 14. Emission of electromagnetic radiation from beam driven plasmas International Nuclear Information System (INIS) Newman, D.L. 1985-01-01 Two production mechanisms for electromagnetic radiation from a plasma containing electron-beam-driven weak Langmuir turbulence are studied: induced Compton conversion and two-Langmuir-wave coalescence. Induced Compton conversion, in which a Langmuir wave scatters off a relativistic electron while converting into a transversely polarized electromagnetic wave, is considered as a means for producing amplified electromagnetic radiation from a beam-plasma system at frequencies well above the electron plasma frequency. The induced emission growth rates of the radiation produced by a monoenergetic ultrarelativistic electron beam are determined as a function of the Langmuir turbulence spectrum in the background plasma and are numerically evaluated for a range of model Langmuir spectra. Induced Compton conversion can play a role in emission from astrophysical beam-plasma systems if the electron beam is highly relativistic and sufficiently narrow. However, it is found that the growth rates for this process are too small in all cases studied to account for the intense high-frequency radiation observed in laboratory experiments.
Two-Langmuir-wave coalescence as a means of producing radiation at 2ω_p is investigated in the setting of the earth's foreshock. 15. Radiation effects on relativistic electrons in strong external fields International Nuclear Information System (INIS) Iqbal, Khalid 2013-01-01 The effects of radiation of high energy electron beams are a major issue in almost all types of charged particle accelerators. The objective of this thesis is both the analytical and numerical study of radiation effects. Due to its many applications the study of the self force has become a very active and productive field of research. The main part of this thesis is devoted to the study of radiation effects in laser-based plasma accelerators. Analytical models predict the existence of radiation effects. The investigation of radiation reaction shows that in laser-based plasma accelerators, the self force effects lower the energy gain and emittance for moderate-energy electron beams and increase the relative energy spread. However, for relatively high energy electron beams, the self radiation and retardation (radiation effects of one electron on the other electron of the system) effects increase the transverse emittance of the beam. The energy gain decreases to an even lower value and the relative energy spread increases to an even higher value due to high radiation losses. The second part of this thesis investigates radiation reaction in focused laser beams. Radiation effects are very weak even for high energy electrons. The radiation-free acceleration and the simple practical setup make direct acceleration in a focused laser beam very attractive. The results presented in this thesis can be helpful for the optimization of future electron acceleration experiments, in particular in the case of laser-plasma accelerators. 16.
Weakly supervised classification in high energy physics Energy Technology Data Exchange (ETDEWEB) Dery, Lucio Mwinmaarong [Physics Department, Stanford University,Stanford, CA, 94305 (United States); Nachman, Benjamin [Physics Division, Lawrence Berkeley National Laboratory,1 Cyclotron Rd, Berkeley, CA, 94720 (United States); Rubbo, Francesco; Schwartzman, Ariel [SLAC National Accelerator Laboratory, Stanford University,2575 Sand Hill Rd, Menlo Park, CA, 94025 (United States) 2017-05-29 As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics — quark versus gluon tagging — we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available. 17. Weakly supervised classification in high energy physics International Nuclear Information System (INIS) Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; Schwartzman, Ariel 2017-01-01 As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. 
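The proportions-only training idea just described (often called learning from label proportions) can be sketched in a few lines; the 1-D logistic model, toy bags, and squared proportion-matching loss below are illustrative assumptions, not the authors' implementation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_llp(bags, props, lr=0.5, iters=3000):
    """Fit a 1-D logistic classifier using only per-bag label proportions.

    Loss: sum over bags of (mean predicted probability - known proportion)^2.
    No instance-level labels are ever used.
    """
    w, b = 0.0, 0.0
    for _ in range(iters):
        gw = gb = 0.0
        for xs, p in zip(bags, props):
            preds = [sigmoid(w * x + b) for x in xs]
            resid = sum(preds) / len(preds) - p        # proportion mismatch
            dps = [q * (1.0 - q) for q in preds]       # sigmoid derivatives
            gw += 2.0 * resid * sum(d * x for d, x in zip(dps, xs)) / len(xs)
            gb += 2.0 * resid * sum(dps) / len(xs)
        w -= lr * gw
        b -= lr * gb
    return w, b

# Two toy "bags": signal instances cluster near +2, background near -2.
bags = [[2.0, 2.0, 2.0, 2.0, -2.0], [2.0, -2.0, -2.0, -2.0, -2.0]]
props = [0.8, 0.2]   # known signal fraction per bag, e.g. from simulation-free counting
w, b = train_llp(bags, props)
```

Although trained only on the two bag-level fractions, the fitted model ends up separating the individual instances (probability above 0.5 for the +2 cluster, below 0.5 for the -2 cluster), which is the effect the abstract exploits for quark versus gluon tagging.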
Using one of the most challenging binary classification tasks in high energy physics — quark versus gluon tagging — we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available. 18. Weak lensing in the Dark Energy Survey Science.gov (United States) Troxel, Michael 2016-03-01 I will present the current status of weak lensing results from the Dark Energy Survey (DES). DES will survey 5000 square degrees in five photometric bands (grizY), and has already provided a competitive weak lensing catalog from Science Verification data covering just 3% of the final survey footprint. I will summarize the status of shear catalog production using observations from the first year of the survey and discuss recent weak lensing science results from DES. Finally, I will report on the outlook for future cosmological analyses in DES including the two-point cosmic shear correlation function and discuss challenges that DES and future surveys will face in achieving a control of systematics that allows us to take full advantage of the available statistical power of our shear catalogs. 19. Weak layer fracture: facets and depth hoar Directory of Open Access Journals (Sweden) I. Reiweger 2013-09-01 Full Text Available Understanding failure initiation within weak snow layers is essential for modeling and predicting dry-snow slab avalanches. We therefore performed laboratory experiments with snow samples containing a weak layer consisting of either faceted crystals or depth hoar. During these experiments the samples were loaded with different loading rates and at various tilt angles until fracture. 
The strength of the samples decreased with increasing loading rate and increasing tilt angle. Additionally, we took pictures of the side of four samples with a high-speed video camera and calculated the displacement using a particle image velocimetry (PIV) algorithm. The fracture process within the weak layer could thus be observed in detail. Catastrophic failure started due to a shear fracture just above the interface between the depth hoar layer and the underlying crust. 20. The Maslov index in weak symplectic functional analysis DEFF Research Database (Denmark) Booss-Bavnbek, Bernhelm; Zhu, Chaofeng 2013-01-01 We recall the Chernoff-Marsden definition of weak symplectic structure and give a rigorous treatment of the functional analysis and geometry of weak symplectic Banach spaces. We define the Maslov index of a continuous path of Fredholm pairs of Lagrangian subspaces in continuously varying Banach... 1. Current research in Canada on biological effects of ionizing radiation International Nuclear Information System (INIS) Marko, A.M. 1980-05-01 A survey of current research in Canada on the biological effects of ionizing radiation has been compiled. The list of projects has been classified according to structure (organizational state of the test system) as well as according to the type of effects. Using several assumptions, ballpark estimates of expenditures on these activities have been made. Agencies funding these research activities have been tabulated and the breakdown of research in government laboratories and in academic institutions has been designated. Wherever possible, comparisons have been made outlining differences or similarities that exist between the United States and Canada concerning biological radiation research. It has been concluded that relevant research in this area in Canada is inadequate. Wherever possible, strengths and weaknesses in radiation biology programs have been indicated.
The most promising course for Canada to follow is to support adequately fundamental studies of the biological effects of radiation. (auth) 2. Information flow between weakly interacting lattices of coupled maps Energy Technology Data Exchange (ETDEWEB) Dobyns, York [PEAR, Princeton University, Princeton, NJ 08544-5263 (United States); Atmanspacher, Harald [Institut fuer Grenzgebiete der Psychologie und Psychohygiene, Wilhelmstr. 3a, 79098 Freiburg (Germany)]. E-mail: [email protected] 2006-05-15 Weakly interacting lattices of coupled maps can be modeled as ordinary coupled map lattices separated from each other by boundary regions with small coupling parameters. We demonstrate that such weakly interacting lattices can nevertheless have unexpected and striking effects on each other. Under specific conditions, particular stability properties of the lattices are significantly influenced by their weak mutual interaction. This observation is tantamount to an efficacious information flow across the boundary. 3. Information flow between weakly interacting lattices of coupled maps International Nuclear Information System (INIS) Dobyns, York; Atmanspacher, Harald 2006-01-01 Weakly interacting lattices of coupled maps can be modeled as ordinary coupled map lattices separated from each other by boundary regions with small coupling parameters. We demonstrate that such weakly interacting lattices can nevertheless have unexpected and striking effects on each other. Under specific conditions, particular stability properties of the lattices are significantly influenced by their weak mutual interaction. This observation is tantamount to an efficacious information flow across the boundary 4. Weak interaction and nucleus: the relationship keeps on International Nuclear Information System (INIS) Martino, J.; Frere, J.M.; Naviliat-Cuncic, O.; Volpe, C.; Marteau, J.; Lhuillier, D.; Vignaud, D.; Legac, R.; Marteau, J.; Legac, R. 
2003-01-01 This document gathers the lectures made at the Joliot-Curie international summer school in 2003 whose theme, that year, was the relationship between weak interaction and nucleus. There were 8 contributions whose titles are: 1) before the standard model: from beta decay to neutral currents; 2) the electro-weak theory and beyond; 3) testing of the standard model at low energies; 4) description of weak processes in nuclei; 5) 20.000 tonnes underground, an approach to the neutrino-nucleus interaction; 6) parity violation from atom to nucleon; 7) how neutrinos got their masses; and 8) CP symmetry 5. Tight Bell Inequalities and Nonlocality in Weak Measurement Science.gov (United States) Waegell, Mordecai A general class of Bell inequalities is derived based on strict adherence to probabilistic entanglement correlations observed in nature. This derivation gives significantly tighter bounds on local hidden variable theories for the well-known Clauser-Horne-Shimony-Holt (CHSH) inequality, and also leads to new proofs of the Greenberger-Horne-Zeilinger (GHZ) theorem. This method is applied to weak measurements and reveals nonlocal correlations between the weak value and the post-selection, which rules out various classical models of weak measurement. Implications of these results are discussed. Fetzer-Franklin Fund of the John E. Fetzer Memorial Trust. 6. Hypernuclear weak decay and the ΔI = 1/2 rule International Nuclear Information System (INIS) Barnes, P.D. 1987-01-01 Recent measurements of the weak decay of Λ hypernuclei are reported and discussed in the context of the weak hyperon-baryon effective Hamiltonian. The results are compared to predictions of both meson exchange and quark-quark weak interaction models. 14 refs., 4 figs., 2 tabs 7. 
Higgs production via weak boson fusion in the standard model and the MSSM International Nuclear Information System (INIS) Figy, Terrance; Palmer, Sophy 2010-12-01 Weak boson fusion is expected to be an important Higgs production channel at the LHC. Complete one-loop results for weak boson fusion in the Standard Model have been obtained by calculating the full virtual electroweak corrections and photon radiation and implementing these results into the public Monte Carlo program VBFNLO (which includes the NLO QCD corrections). Furthermore the dominant supersymmetric one-loop corrections to neutral Higgs production, in the general case where the MSSM includes complex phases, have been calculated. These results have been combined with all one-loop corrections of Standard Model type and with the propagator-type corrections from the Higgs sector of the MSSM up to the two-loop level. Within the Standard Model the electroweak corrections are found to be as important as the QCD corrections after the application of appropriate cuts. The corrections yield a shift in the cross section of order 5% for a Higgs of mass 100-200 GeV, confirming the result obtained previously in the literature. For the production of a light Higgs boson in the MSSM the Standard Model result is recovered in the decoupling limit, while the loop contributions from superpartners to the production of neutral MSSM Higgs bosons can give rise to corrections in excess of 10% away from the decoupling region. (orig.) 8. Radiation and desiccation response motif mediates radiation induced gene expression in D. radiodurans International Nuclear Information System (INIS) Anaganti, Narasimha; Basu, Bhakti; Apte, Shree Kumar 2015-01-01 Deinococcus radiodurans is an extremophile that withstands lethal doses of several DNA damaging agents such as gamma irradiation, UV rays, desiccation and chemical mutagens. The organism responds to DNA damage by inducing expression of several DNA repair genes. 
At least 25 radiation inducible gene promoters harbour a 17 bp palindromic sequence known as radiation and desiccation response motif (RDRM) implicated in gamma radiation inducible gene expression. However, mechanistic details of gamma radiation-responsive up-regulation in gene expression remain enigmatic. The promoters of highly radiation induced genes ddrB (DR0070), gyrB (DR0906), gyrA (DR1913), a hypothetical gene (DR1143) and recA (DR2338) from D. radiodurans were cloned in a green fluorescent protein (GFP)-based promoter probe shuttle vector pKG and their promoter activity was assessed both in E. coli and in D. radiodurans. The gyrA, gyrB and DR1143 gene promoters were active in E. coli although ddrB and recA promoters showed very weak activity. In D. radiodurans, all five promoters were induced several fold following 6 kGy gamma irradiation. Highest induction was observed for the ddrB promoter (25 fold), followed by the DR1143 promoter (15 fold). The induction in the activity of gyrB, gyrA and recA promoters was 5, 3 and 2 fold, respectively. To assess the role of RDRM, the 17 bp palindromic sequence was deleted from these promoters. The promoters devoid of the RDRM sequence displayed an increase in basal expression activity, but the radiation-responsive induction in promoter activity was completely lost. The substitution of two conserved bases of the RDRM sequence yielded decreased radiation induction of the PDR0070 promoter. Deletion of 5 bases from the 5'-end of the PDR0070 RDRM increased basal promoter activity, but radiation induction was completely abolished. Replacement of the RDRM with a non-specific sequence in PDR0070 resulted in loss of both basal expression and radiation induction. The results demonstrate that 9. Non-Hermitian wave packet approximation for coupled two-level systems in weak and intense fields Energy Technology Data Exchange (ETDEWEB) Puthumpally-Joseph, Raiju; Charron, Eric [Institut des Sciences Moléculaires d’Orsay (ISMO), CNRS, Univ.
Paris-Sud, Université Paris-Saclay, F-91405 Orsay (France); Sukharev, Maxim [Science and Mathematics Faculty, College of Letters and Sciences, Arizona State University, Mesa, Arizona 85212 (United States) 2016-04-21 We introduce a non-Hermitian Schrödinger-type approximation of optical Bloch equations for two-level systems. This approximation provides a complete and accurate description of the coherence and decoherence dynamics in both weak and strong laser fields at the cost of losing accuracy in the description of populations. In this approach, it is sufficient to propagate the wave function of the quantum system instead of the density matrix, provided that relaxation and dephasing are taken into account via automatically adjusted time-dependent gain and decay rates. The developed formalism is applied to the problem of scattering and absorption of electromagnetic radiation by a thin layer composed of interacting two-level emitters. 10. Underwater inspection training in intense radiation field International Nuclear Information System (INIS) Taniguchi, Ryoichi 2017-01-01 Osaka Prefecture University has a high-activity cobalt-60 gamma-ray source of about 2 PBq, and is engaged in technological training and human resource development. The training focuses on the underwater decommissioning work anticipated at the Fukushima Daiichi Nuclear Power Station. The university aims at acquisition of the basics of underwater inspection work in a radiation environment, underwater radiation measurement, and basic training in image measurement; it also aims to evaluate the damage to imaging equipment caused by radiation and to impart practical knowledge for the use of inspection equipment under high doses. In particular, it is valuable to train in the observation of Cherenkov light emitted from a high-activity cobalt source in water using a high sensitivity camera.
Measuring the radiation dose distribution in water is difficult to do remotely because of the water's shielding effect. Whereas earlier methods were time-consuming, the high-sensitivity-camera method makes sequential two-dimensional measurement easy and is therefore highly useful, particularly for measuring the dose distribution of irregularly shaped sources. The training covers the following: radiation source imaging in water, use of a laser rangefinder in water, dose distribution measurement in water and Cherenkov light measurement, judgment of equipment damage due to irradiation, weak radiation measurement, and measurement and decontamination of surface contamination. (A.O.) 11. Measurement of the β-asymmetry parameter of ⁶⁷Cu in search for tensor type currents in the weak interaction CERN Document Server Soti, G.; Breitenfeldt, M.; Finlay, P.; Herzog, P.; Knecht, A.; Köster, U.; Kraev, I.S.; Porobic, T.; Prashanth, P.N.; Towner, I.S.; Tramm, C.; Zákoucký, D.; Severijns, N. 2014-01-01 Precision measurements at low energy search for physics beyond the Standard Model in a way complementary to searches for new particles at colliders. In the weak sector the most general β-decay Hamiltonian contains, besides vector and axial-vector terms, also scalar, tensor and pseudoscalar terms. Current limits on the scalar and tensor coupling constants from neutron and nuclear β decay are on the level of several percent. The goal of this paper is extracting new information on tensor coupling constants by measuring the β-asymmetry parameter in the pure Gamow-Teller decay of ⁶⁷Cu, thereby testing the V-A structure of the weak interaction. An iron sample foil into which the radioactive nuclei were implanted was cooled down to milliKelvin temperatures in a ³He-⁴He dilution refrigerator. An external magnetic field of 0.1 T, in combination with the internal hyperfine magnetic field, oriented the nuclei.
The anisotropic β radiation was observed with planar high purity germanium d... 12. Hazard and socioenvironmental weakness: radioactive waste final disposal in the perception of the Abadia de Goias residents, GO, Brazil International Nuclear Information System (INIS) Pereira, Elaine Campos 2005-01-01 The work investigates the hazard and the vulnerability affecting the community living around the radioactive waste final disposal site located in the Abadia de Goias municipality, Goias state, Brazil. To gain a deeper understanding of the hazards characteristic of modernity, the sociological aspects under discussion were researched in the works of Anthony Giddens and Ulrich Beck. The phenomenon was analyzed on the basis of the subjective experiences of the residents, who have lived there for approximately 16 years. This temporal analysis is related to the social impact suffered by the residents from the radioactive wastes originating in the caesium-137 radiation accident in Goiania, GO, Brazil, in 1987. Despite the security of the site, residents identified the disposal facility as a source of hazard, although longer-term residents have adapted better. The vulnerability of the site is heightened by the proximity of residences to the disposal area. (author) 13. Transition from weak wave turbulence regime to solitonic regime Science.gov (United States) Hassani, Roumaissa; Mordant, Nicolas 2017-11-01 The Weak Turbulence Theory (WTT) is a statistical theory describing the interaction of a large ensemble of random waves characterized by very different length scales. For both weak non-linearity and weak dispersion a different regime is predicted where solitons propagate while keeping their shape unchanged. The question under investigation here is which regime the system chooses: weak turbulence or a soliton gas? We report an experimental investigation of wave turbulence at the surface of finite depth water in the gravity-capillary range.
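Finite-depth gravity-capillary waves obey the standard dispersion relation ω² = (gk + σk³/ρ) tanh(kh), which is what makes the water depth a control knob for dispersion. A quick evaluation shows how depth changes the phase speed; the wavelength and depths below are illustrative choices, not values from the experiment.

```python
import math

def omega(k, h, g=9.81, sigma=0.072, rho=1000.0):
    """Angular frequency (rad/s) of a gravity-capillary wave of
    wavenumber k (rad/m) on water of depth h (m)."""
    return math.sqrt((g * k + (sigma / rho) * k ** 3) * math.tanh(k * h))

k = 2 * math.pi / 0.05           # a 5 cm wavelength
c_deep = omega(k, 0.20) / k      # phase speed in 20 cm of water
c_shallow = omega(k, 0.005) / k  # phase speed in 5 mm of water
print(round(c_deep, 3), round(c_shallow, 3))
```

In the shallow case the tanh(kh) factor suppresses the frequency, flattening the dispersion curve, which is the regime where solitonic behaviour becomes possible.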
We tune the wave dispersion and the level of nonlinearity by modifying the depth of water and the forcing, respectively. We use space-time resolved profilometry to reconstruct the deformed surface of water. When decreasing the water depth, we observe a drastic transition between weak turbulence at the weakest forcing and a solitonic regime at stronger forcing. We characterize the transition between both states by studying their Fourier spectra. We also study the efficiency of energy transfer in the weak turbulence regime. We report a loss of efficiency of angular transfer as the dispersion of the wave is reduced until the system bifurcates into the solitonic regime. This project has received funding from the European Research Council (ERC, Grant Agreement No. 647018-WATU). 14. Equilibration and hydrodynamics at strong and weak coupling NARCIS (Netherlands) Schee, Wilke van der 2017-01-01 We give an updated overview of both weak and strong coupling methods to describe the approach to a plasma described by viscous hydrodynamics, a process now called hydrodynamisation. At weak coupling the very first moments after a heavy ion collision are described by the colour-glass condensate 15. Directional Radiometry and Radiative Transfer: the Convoluted Path From Centuries-old Phenomenology to Physical Optics Science.gov (United States) Mishchenko, Michael I. 2014-01-01 This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics. 16. Co-Labeling for Multi-View Weakly Labeled Learning. Science.gov (United States) Xu, Xinxing; Li, Wen; Xu, Dong; Tsang, Ivor W 2016-06-01 It is often expensive and time consuming to collect labeled training samples in many real-world applications.
To reduce human effort on annotating training samples, many machine learning techniques (e.g., semi-supervised learning (SSL), multi-instance learning (MIL), etc.) have been studied to exploit weakly labeled training samples. Meanwhile, when the training data is represented with multiple types of features, many multi-view learning methods have shown that classifiers trained on different views can help each other to better utilize the unlabeled training samples for the SSL task. In this paper, we study a new learning problem called multi-view weakly labeled learning, in which we aim to develop a unified approach to learn robust classifiers by effectively utilizing different types of weakly labeled multi-view data from a broad range of tasks including SSL, MIL and relative outlier detection (ROD). We propose an effective approach called co-labeling to solve the multi-view weakly labeled learning problem. Specifically, we model the learning problem on each view as a weakly labeled learning problem, which aims to learn an optimal classifier from a set of pseudo-label vectors generated by using the classifiers trained from other views. Unlike traditional co-training approaches using a single pseudo-label vector for training each classifier, our co-labeling approach explores different strategies to utilize the predictions from different views, biases and iterations for generating the pseudo-label vectors, making our approach more robust for real-world applications. Moreover, to further improve the weakly labeled learning on each view, we also exploit the inherent group structure in the pseudo-label vectors generated from different strategies, which leads to a new multi-layer multiple kernel learning problem. Promising results for text-based image retrieval on the NUS-WIDE dataset as well as news classification and text categorization on several real-world multi 17. 
Oblique non-neutral solitary Alfven modes in weakly nonlinear pair plasmas International Nuclear Information System (INIS) Verheest, Frank; Lakhina, G S 2005-01-01 The equal charge-to-mass ratio for both species in pair plasmas induces a decoupling of the linear eigenmodes between waves that are charge neutral or non-neutral, also at oblique propagation with respect to a static magnetic field. While the charge-neutral linear modes have been studied in greater detail, including their weakly and strongly nonlinear counterparts, the non-neutral mode has received less attention. Here the nonlinear evolution of a solitary non-neutral mode at oblique propagation is investigated in an electron-positron plasma. Employing the framework of reductive perturbation analysis, a modified Korteweg-de Vries equation (with cubic nonlinearity) for the lowest-order wave magnetic field is obtained. In the linear approximation, the non-neutral mode has its magnetic component orthogonal to the plane spanned by the directions of wave propagation and of the static magnetic field. The linear polarization is not maintained at higher orders. The results may be relevant to the microstructure in pulsar radiation or to the subpulses 18. 3D printing from MRI Data: Harnessing strengths and minimizing weaknesses. Science.gov (United States) Ripley, Beth; Levin, Dmitry; Kelil, Tatiana; Hermsen, Joshua L; Kim, Sooah; Maki, Jeffrey H; Wilson, Gregory J 2017-03-01 3D printing facilitates the creation of accurate physical models of patient-specific anatomy from medical imaging datasets. While the majority of models to date are created from computed tomography (CT) data, there is increasing interest in creating models from other datasets, such as ultrasound and magnetic resonance imaging (MRI). MRI, in particular, holds great potential for 3D printing, given its excellent tissue characterization and lack of ionizing radiation. There are, however, challenges to 3D printing from MRI data as well. 
Here we review the basics of 3D printing, explore the current strengths and weaknesses of printing from MRI data as they pertain to model accuracy, and discuss considerations in the design of MRI sequences for 3D printing. Finally, we explore the future of 3D printing and MRI, including creative applications and new materials. 5 J. Magn. Reson. Imaging 2017;45:635-645. © 2016 International Society for Magnetic Resonance in Medicine. 19. Time-dependent weak values and their intrinsic phases of evolution International Nuclear Information System (INIS) Parks, A D 2008-01-01 The equation of motion for a time-dependent weak value of a quantum-mechanical observable is known to contain a complex valued energy factor (the weak energy of evolution) that is defined by the dynamics of the pre-selected and post-selected states which specify the observable's weak value. In this paper, the mechanism responsible for the creation of this energy is identified and it is shown that the cumulative effect over time of this energy is manifested as dynamical phases and pure geometric phases (the intrinsic phases of evolution) which govern the evolution of the weak value during its measurement process. These phases are simply related to a Pancharatnam phase and Fubini-Study metric distance defined by the Hilbert space evolution of the associated pre-selected and post-selected states. A characterization of time-dependent weak value evolution as Pancharatnam phase angle rotations and Fubini-Study distance scalings of a vector in the Argand plane is discussed as an application of this relationship. The theory of weak values is also reviewed and simple 'gedanken experiments' are used to illustrate both the time-independent and the time-dependent versions of the theory. 
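The weak value at the heart of this record, A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩ for pre-selected |ψ⟩ and post-selected |φ⟩, is directly computable with plain complex arithmetic. The observable and the particular states below are chosen only for illustration; nearly orthogonal pre/post selection produces an "anomalous" weak value outside the eigenvalue range.

```python
import math

def mat_vec(m, v):
    """Apply a 2x2 matrix (list of rows) to a 2-component vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def inner(u, v):
    """Hermitian inner product <u|v>; the bra u is conjugated."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

sigma_z = [[1 + 0j, 0j], [0j, -1 + 0j]]      # observable A, eigenvalues +1/-1
s = 1 / math.sqrt(2)
pre = [s + 0j, s + 0j]                        # pre-selected |psi> = |+>
theta = 0.7                                   # post-selection nearly orthogonal to |+>
post = [math.cos(theta) + 0j, -math.sin(theta) + 0j]

# A_w = <phi|A|psi> / <phi|psi>
weak_value = inner(post, mat_vec(sigma_z, pre)) / inner(post, pre)
print(weak_value.real)
```

Here the weak value is real and far larger than 1, i.e., well outside the spectrum of σ_z, illustrating why weak values are in general complex quantities rather than eigenvalue averages.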
It is noted that the direct experimental observation of the weak energy of evolution would strongly support the time-symmetric paradigm of quantum mechanics and it is suggested that weak value equations of motion represent a new category of nonlocal equations of motion 20. Radiation-induced osteochondroma of the T4 vertebra causing spinal cord compression Energy Technology Data Exchange (ETDEWEB) Gorospe, Luis; Madrid-Muniz, Carmen; Royo, Aranzazu; Garcia-Raya, Pilar [Department of Radiology, La Paz University Hospital, Madrid (Spain); Alvarez-Ruiz, Fernando [Department of Neurosurgery, La Paz University Hospital, Madrid (Spain); Lopez-Barea, Fernando [Department of Pathology, La Paz University Hospital, Madrid (Spain) 2002-04-01 A case of a radiation-induced osteochondroma arising from the vertebral body of T4 in an 18-year-old man is reported. The patient presented with a history of progressive left lower extremity weakness. At 7 years of age, he had undergone resection of a cerebellar medulloblastoma and received adjunctive craniospinal irradiation and systemic chemotherapy. Both CT and MR imaging revealed an extradural mass contiguous with the posteroinferior endplate of the T4 vertebral body. This case indicates that radiation-induced osteochondroma should be considered in the differential diagnosis of patients with symptoms of myelopathy or nerve root compression and a history of radiation therapy involving the spine in childhood. (orig.) 1. Cerebrovascular Acute Radiation Syndrome: Radiation Neurotoxins, Mechanisms of Toxicity, Neuroimmune Interactions. Science.gov (United States) Popov, Dmitri; Maliev, Slava Introduction: Cerebrovascular Acute Radiation Syndrome (CvARS) is an extremely severe injury of the Central Nervous System (CNS) and the Peripheral Nervous System (PNS). CvARS can be induced by high doses of neutron, heavy-ion, or gamma radiation. The clinical picture of the Syndrome depends on the type, timing, and doses of radiation.
Four grades of the CvARS were defined: mild, moderate, severe, and extremely severe. Also, four stages of CvARS were developed: prodromal, latent, manifest, and outcome (death). The duration of the stages depends on the type, dose, and timing of radiation. The clinical symptoms of CvARS are: respiratory distress, hypotension, cerebral edema, severe disorder of cerebral blood microcirculation, and acute motor weakness. The radiation toxins, Cerebro-Vascular Radiation Neurotoxins (SvARSn), determine development of the acute radiation syndrome. Mechanism of action of the toxins: Though the pathogenesis of radiation injury of the CNS remains unknown, our concept describes the CvARS as a result of neurotoxicity and excitotoxicity, with cell death through apoptotic necrosis. Neurotoxicity occurs after high-dose radiation exposure and the formation of radiation neurotoxins, possibly bioradicals or a group of specific enzymes. Intracerebral hemorrhage can be a consequence of the damage to endothelial cells caused by radiation and the radiation toxins. Disruption of the blood-brain barrier (BBB) and the blood-cerebrospinal fluid barrier (BCFB) is possibly the most significant effect of microcirculation disorder and metabolic insufficiency. NMDA-receptor excitotoxic injury is mediated by cerebral ischemia and cerebral hypoxia. Damage to the pyramidal cells in layers 3 and 5 and to the Purkinje cell layer of the cerebral cortex, and damage to pyramidal cells in the hippocampus, occur as a result of cerebral ischemia and intracerebral bleeding. Methods: Radiation Toxins of CvARS are defined as glycoproteins with molecular weights ranging from 200 to 250 kDa and with high enzymatic activity 2. Weak Localisation in Clean and Highly Disordered Graphene International Nuclear Information System (INIS) Hilke, Michael; Massicotte, Mathieu; Whiteway, Eric; Yu, Victor 2013-01-01 We look at the magnetic field induced weak localisation peak of graphene samples with different mobilities.
At very low temperatures, low mobility samples exhibit a very broad peak as a function of the magnetic field, in contrast to higher mobility samples, where the weak localisation peak is very sharp. We analyze the experimental data in the context of the localisation length, which allows us to extract, both the localisation length and the phase coherence length of the samples, regardless of their mobilities. This analysis is made possible by the observation that the localisation length undergoes a generic weak localisation dependence with striking universal properties 3. Weak Measurement and Quantum Correlation Indian Academy of Sciences (India) Arun Kumar Pati Entanglement: Two quantum systems can be in a strongly correlated state even if .... These are resources which can be used to design quantum computer, quantum ...... Weak measurements have found numerous applications starting from the ... 4. Peculiarities of the coherent spontaneous synchrotron radiation of dense electron bunches International Nuclear Information System (INIS) Balal, N.; Bratman, V. L.; Savilov, A. V. 2014-01-01 In a short section of homogeneous magnetic field, quasi-plane electron bunches from linear accelerators with laser-driven photo-injectors at moderate particle energies can generate strongly directed, very short and powerful terahertz electromagnetic pulses with a broad frequency spectrum. The formulas for radiation fields, their spectra and efficiency of radiation are presented in a very simple analytical form using expressions for the fields of an arbitrary moving charged plane. The self-action and mutual interaction of thin electron layers are estimated. It is shown that the radiation with frequencies of up to (1–3) THz can be effectively generated by electrons with energies (4–6) MeV in a short and relatively weak magnetic field of (4–10) kOe 5. 
Peculiarities of the coherent spontaneous synchrotron radiation of dense electron bunches Energy Technology Data Exchange (ETDEWEB) Balal, N. [Ariel University, Ariel (Israel); Bratman, V. L. [Institute of Applied Physics, Russian Academy of Sciences, Nizhny Novgorod (Russian Federation); Savilov, A. V., E-mail: [email protected] [Institute of Applied Physics, Russian Academy of Sciences, Nizhny Novgorod (Russian Federation); Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod (Russian Federation) 2014-02-15 In a short section of homogeneous magnetic field, quasi-plane electron bunches from linear accelerators with laser-driven photo-injectors at moderate particle energies can generate strongly directed, very short and powerful terahertz electromagnetic pulses with a broad frequency spectrum. The formulas for radiation fields, their spectra and efficiency of radiation are presented in a very simple analytical form using expressions for the fields of an arbitrary moving charged plane. The self-action and mutual interaction of thin electron layers are estimated. It is shown that the radiation with frequencies of up to (1–3) THz can be effectively generated by electrons with energies (4–6) MeV in a short and relatively weak magnetic field of (4–10) kOe. 6. Subsidy Competition for FDI: Fierce or Weak? OpenAIRE Tomáš Havránek 2009-01-01 The objective of this paper is to empirically assess the recently introduced models of subsidy competition based on the classical oligopoly theories, using both cross-sectional and panel data. Three crucial scenarios (including coordination, weak competition, and fierce competition) are tested employing OLS, iteratively re-weighted least squares, fixed effects, and Blundell-Bond estimator. The results suggest that none of the scenarios can be strongly supported—although there is some weak sup... 7. 
High-redshift SDSS Quasars with Weak Emission Lines DEFF Research Database (Denmark) Diamond-Stanic, Aleksandar M.; Fan, Xiaohui; Brandt, W. N. 2009-01-01 We identify a sample of 74 high-redshift quasars (z > 3) with weak emission lines from the Fifth Data Release of the Sloan Digital Sky Survey and present infrared, optical, and radio observations of a subsample of four objects at z > 4. These weak emission-line quasars (WLQs) constitute a promine... 8. A young woman with weakness of the legs African Journals Online (AJOL) A previously well 22-year-old woman presented with progressive weakness of her legs and urinary incontinence over 7 days. Clinically she was healthy, with no skin rashes. On neurological examination she had profound bilateral weakness of the lower limbs, hypertonia, hyperreflexia, a positive Babinski sign and a T6 ... 9. Statistical formulation of gravitational radiation reaction International Nuclear Information System (INIS) Schutz, B.F. 1980-01-01 A new formulation of the radiation-reaction problem is proposed, which is simpler than alternatives which have been used before. The new approach is based on the initial-value problem, uses approximations which need be uniformly valid only in compact regions of space-time, and makes no time-asymmetric assumptions (no a priori introduction of retarded potentials or outgoing-wave asymptotic conditions). It defines radiation reaction to be the expected evolution of a source obtained by averaging over a statistical ensemble of initial conditions. The ensemble is chosen to reflect one's complete lack of information (in real systems) about the initial data for the radiation field. The approach is applied to the simple case of a weak-field, slow-motion source in general relativity, where it yields the usual expressions for radiation reaction when the gauge is chosen properly. 
There is a discussion of gauge freedom, and another of the necessity of taking into account reaction corrections to the particle-conservation equation. The analogy with the second law of thermodynamics is very close, and suggests that the electromagnetic and thermodynamic arrows of time are the same. Because the formulation is based on the usual initial-value problem, it has no spurious "runaway" solutions 10. The effects of weak radiation International Nuclear Information System (INIS) Gjoerup, H.L. The survey attempts to refute the most common claim of the opponents of nuclear energy, i.e. that even very small amounts of radioactivity can cause cancer and leukemia. In particular, the background and commentaries of foreign opponents of nuclear energy with publications in German are investigated. (DG) [de 11. Probing finite coarse-grained virtual Feynman histories with sequential weak values Science.gov (United States) Georgiev, Danko; Cohen, Eliahu 2018-05-01 Feynman's sum-over-histories formulation of quantum mechanics has been considered a useful calculational tool in which virtual Feynman histories entering into a coherent quantum superposition cannot be individually measured. Here we show that sequential weak values, inferred by consecutive weak measurements of projectors, allow direct experimental probing of individual virtual Feynman histories, thereby revealing the exact nature of quantum interference of coherently superposed histories. Because the total sum of sequential weak values of multitime projection operators for a complete set of orthogonal quantum histories is unity, complete sets of weak values could be interpreted in agreement with the standard quantum mechanical picture. We also elucidate the relationship between sequential weak values of quantum histories with different coarse graining in time and establish the incompatibility of weak values for nonorthogonal quantum histories in history Hilbert space.
Bridging theory and experiment, the presented results may enhance our understanding of both weak values and quantum histories. 12. Recombination dynamics of excitons with low non-radiative component in semi-polar (10-11)-oriented GaN/AlGaN multiple quantum wells Science.gov (United States) Rosales, D.; Gil, B.; Bretagnon, T.; Guizal, B.; Izyumskaya, N.; Monavarian, M.; Zhang, F.; Okur, S.; Avrutin, V.; Özgür, Ü.; Morkoç, H. 2014-09-01 Optical properties of GaN/Al0.2Ga0.8N multiple quantum wells grown with semi-polar (10-11) orientation on patterned 7°-off Si (001) substrates have been investigated. Studies performed at 8 K reveal the in-plane anisotropic behavior of the QW photoluminescence (PL) intensity for this semi-polar orientation. The time-resolved PL measurements were carried out in the temperature range from 8 to 295 K to deduce the effective recombination decay times, with respective radiative and non-radiative contributions. The non-radiative component remains relatively weak with increasing temperature, indicative of high crystalline quality. The radiative decay time is a consequence of contributions from both localized and free excitons. We report an effective density of interfacial defects of 2.3 × 10¹² cm⁻² and a radiative recombination time of τ_loc = 355 ps for the localized excitons. This latter value is significantly larger than those reported for the non-polar structures, which we attribute to the presence of a weak residual electric field in the semi-polar QW layers. 13. On Characterizing weak defining hyperplanes (weak Facets) in DEA with Constant Returns to Scale Technology Directory of Open Access Journals (Sweden) Dariush Akbarian 2017-09-01 Full Text Available The Production Possibility Set (PPS) is defined as a set of inputs and outputs of a system in which inputs can produce outputs.
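Under constant returns to scale (the CCR technology), the PPS spanned by observed units (x_j, y_j) is T = {(x, y) : x ≥ Σ_j λ_j x_j, y ≤ Σ_j λ_j y_j, λ ≥ 0}. As a hedged illustration (not the paper's algorithm for weak facets), in the single-input, single-output case the input-oriented CCR efficiency reduces to comparing each unit's productivity with the best observed ratio, whose ray is the strong defining hyperplane:

```python
# Minimal single-input, single-output CCR efficiency sketch.
# Under constant returns to scale the efficient frontier is the ray
# through the unit with the highest output/input ratio; the strong
# defining hyperplane is y = p_max * x, and the input-oriented
# efficiency of each unit is its productivity relative to p_max.

def ccr_efficiency(inputs, outputs):
    """Return input-oriented CCR efficiencies (1.0 = on the strong facet)."""
    productivities = [y / x for x, y in zip(inputs, outputs)]
    p_max = max(productivities)
    return [p / p_max for p in productivities]

# Hypothetical decision-making units (illustrative data only):
x = [2.0, 4.0, 5.0]   # inputs
y = [2.0, 8.0, 5.0]   # outputs
print(ccr_efficiency(x, y))  # → [0.5, 1.0, 0.5]; unit 2 spans the frontier
```

With several inputs and outputs the same score requires one linear program per unit; weak defining hyperplanes arise where some multiplier weights vanish.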
The Production Possibility Set of the Data Envelopment Analysis (DEA) model contains two types of defining hyperplanes (facets): strong and weak efficient facets. In this paper, the problem of finding the weak defining hyperplanes of the PPS of the CCR model is dealt with; the equations of the strong defining hyperplanes of the PPS of the CCR model can also be found in this paper. We state and prove some properties relative to our method. To illustrate the applicability of the proposed model, some numerical examples are finally provided. Our algorithm can easily be implemented using existing packages for operations research, such as GAMS. 14. Startpoints via weak contractions OpenAIRE Agyingi, Collins Amburo; Gaba, Yaé Ulrich 2018-01-01 Startpoints (resp. endpoints) can be defined as "oriented fixed points". They arise naturally in the study of fixed points for multi-valued maps defined on quasi-metric spaces. In this article, we give a new result in the startpoint theory for quasi-pseudometric spaces. The result we present is obtained via a generalized weakly contractive set-valued map. 15. Weak interactions and presupernova evolution International Nuclear Information System (INIS) Aufderheide, M.B.; State Univ. of New York 1991-01-01 The role of weak interactions, particularly electron capture and β⁻ decay, in presupernova evolution is discussed. The present uncertainty in these rates is examined and the possibility of improving the situation is addressed. 12 refs., 4 figs 16. Ultimate capacity of piles penetrating in weak soil layers Directory of Open Access Journals (Sweden) Al-Obaidi Ahmed 2018-01-01 Full Text Available A pile foundation is one of the most popular forms of deep foundations. Piles are routinely employed to transfer axial structural loads through soft soil to stronger bearing strata. Piles are generally used to increase the load-carrying capacity of the foundation and to reduce its settlement.
On the other hand, there are many cases in practice where piles pass through different layers of soil containing weak layers located at different depths and with different extents, and sometimes cavities of various shapes, sizes, and depths are found. In this study, a total of 96 cases is considered and simulated in the PLAXIS 2D program, aiming to understand the influence of weak soil on the ultimate pile capacity. The piles are embedded in dense sand with a layer of weak soil at different extensions and locations. The cross-section of the geometry used in this study was designed as an axisymmetric model with the 15-node element; the recommended boundary conditions are at least 5D in the horizontal direction and (L+5D) in the vertical direction, where D and L are the diameter and length of the pile, respectively. The soil is modeled as Mohr-Coulomb, with five input parameters, and the behavior of the pile material is represented by a linear elastic model. The results of the above cases are compared with the results found for a pile embedded in dense soil without weak layers or cavities. The results indicated that the existence of a weak soil layer within the surrounding soil around the pile decreases the ultimate capacity. Furthermore, it has been found that an increase in the weak soil width (extension) leads to a reduction in the ultimate capacity of the pile. This phenomenon is applicable at all depths of the weak soil. The influence of the weak layer extension on the ultimate capacity is less when it is present in the upper soil layers. 17. Enhanced possibilities of section topography at a third-generation synchrotron radiation facility International Nuclear Information System (INIS) Medrano, C.; Rejmankova, P.; Ohler, M.; Matsouli, I. 1997-01-01 The authors show the new possibilities of section topography techniques at a third-generation synchrotron radiation facility, taking advantage of the high performance of this machine.
Examples of 1) so-called multiple sections, 2) visibility of weakly misoriented regions, 3) study of thick samples, 4) monochromatic and 5) real-time sections are presented 18. ΔI = 1/2 rule and the strong coupling expansion International Nuclear Information System (INIS) Angus, I.G. 1986-01-01 The author attempted to understand the ΔI = 1/2 pattern of the nonleptonic weak decays of the kaons. The calculation scheme employed is the strong coupling expansion of lattice QCD. Kogut-Susskind fermions are used in the Hamiltonian formalism. The author will describe in detail the methods used to expedite this calculation, almost all of which was done by computer algebra. The final result is very encouraging. Even though an exact interpretation is clouded by the presence of irrelevant operators, a distinct signal of the ΔI = 1/2 rule is observed. With an appropriate choice of the one free parameter, enhancements as great as those observed experimentally can be obtained, along with a qualitative prediction for the relative magnitudes of the CP-violating phases. The author also points out a number of surprising results which turn up in the course of the calculation. The computer methods employed are briefly described 19. CP violation in beauty decays: the standard model paradigm of large effects CERN Document Server Bigi, Ikaros I.Y. 1994-01-01 The Standard Model contains a natural source for CP asymmetries in weak decays, which is described by the KM mechanism. Beyond ε_K it generates only elusive manifestations of CP violation in light-quark systems. On the other hand it naturally leads to large asymmetries in certain non-leptonic beauty decays. In particular when B⁰-B̄⁰ oscillations are involved, theoretical uncertainties in the hadronic matrix elements either drop out or can be controlled, and one predicts asymmetries well in excess of 10% with high parametric reliability.
It is briefly described how the KM triangle can be determined experimentally and then subjected to sensitive consistency tests. Any failure would constitute indirect, but unequivocal evidence for the intervention of New Physics; some examples are sketched. Any outcome of a comprehensive program of CP studies in B decays -- short of technical failure -- will provide us with fundamental and unique insights into nature's design. 20. On Weakly Singular Versions of Discrete Nonlinear Inequalities and Applications Directory of Open Access Journals (Sweden) Kelong Cheng 2014-01-01 Full Text Available Some new weakly singular versions of discrete nonlinear inequalities are established, which generalize some existing weakly singular inequalities and can be used in the analysis of nonlinear Volterra type difference equations with weakly singular kernels. A few applications to the upper bound and the uniqueness of solutions of nonlinear difference equations are also involved. 1. Sufficient conditions for uniqueness of the weak value International Nuclear Information System (INIS) Dressel, J; Jordan, A N 2012-01-01 We review and clarify the sufficient conditions for uniquely defining the generalized weak value as the weak limit of a conditioned average using the contextual values formalism introduced in Dressel, Agarwal and Jordan (2010 Phys. Rev. Lett. http://dx.doi.org/10.1103/PhysRevLett.104.240401). We also respond to criticism of our work by Parrott (arXiv:1105.4188v1) concerning a proposed counter-example to the uniqueness of the definition of the generalized weak value. The counter-example does not satisfy our prescription in the case of an underspecified measurement context. We show that when the contextual values formalism is properly applied to this example, a natural interpretation of the measurement emerges and the unique definition in the weak limit holds. 
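The quantity at stake throughout these records is the standard Aharonov-Albert-Vaidman weak value, A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩ for pre-selected |ψ⟩ and post-selected |φ⟩, which can lie far outside the eigenvalue spectrum when the two states are nearly orthogonal. A minimal numerical sketch (illustrative states, not tied to any one experiment):

```python
import numpy as np

def weak_value(phi, A, psi):
    """Standard weak value <phi|A|psi> / <phi|psi>."""
    return (phi.conj() @ A @ psi) / (phi.conj() @ psi)

sz = np.array([[1.0, 0.0], [0.0, -1.0]])   # Pauli-z, eigenvalues +/-1

a, b = 0.8, -0.8                           # illustrative selection angles
psi = np.array([np.cos(a), np.sin(a)])     # pre-selected state
phi = np.array([np.cos(b), np.sin(b)])     # post-selected, nearly orthogonal

Aw = weak_value(phi, sz, psi)              # analytically cos(a+b)/cos(a-b)
print(Aw)  # ≈ -34.25, far outside the eigenvalue range [-1, 1]
```

The anomaly comes entirely from the small overlap ⟨φ|ψ⟩ = cos(a − b) in the denominator; with a = 0.8, b = −0.8 that overlap is cos(1.6) ≈ −0.029.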
We also prove a theorem regarding the uniqueness of the definition under our sufficient conditions for the general case. Finally, a second proposed counter-example by Parrott (arXiv:1105.4188v6) is shown not to satisfy the sufficiency conditions for the provided theorem. (paper) 2. Fast measure proceeding of weak currents International Nuclear Information System (INIS) Taieb, J. 1953-01-01 The fast measurement method for weak currents that we describe briefly here applies to currents delivered by sources with a high internal resistance, as is the case for ionization chambers, photocells, and mass-spectrometer tubes. The problem of measuring weak currents is essentially a problem of the amplifier and of the input circuit. We aimed to build a complete amplifier and input circuit with improved performance, meaning that for a given measurement speed we wanted a signal-to-noise ratio at least as high as in the classical systems, and for the same signal-to-noise ratio, a faster measurement. (M.B.) [fr 3. Regularized inner products and weakly holomorphic Hecke eigenforms Science.gov (United States) Bringmann, Kathrin; Kane, Ben 2018-01-01 We show that the image of repeated differentiation on weak cusp forms is precisely the subspace which is orthogonal to the space of weakly holomorphic modular forms. This gives a new interpretation of weakly holomorphic Hecke eigenforms. The research of the first author is supported by the Alfried Krupp Prize for Young University Teachers of the Krupp foundation and the research leading to these results receives funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant agreement n. 335220—AQSER. The research of the second author was supported by grants from the Research Grants Council of the Hong Kong SAR, China (project numbers HKU 27300314, 17302515, and 17316416). 4.
A Precision Measurement of the Weak Mixing Angle in Møller Scattering at Low Q² Energy Technology Data Exchange (ETDEWEB) Jones, G. 2005-01-28 The electroweak theory has been probed to a high level of precision at the mass scale of the Z⁰ through the joint contributions of LEP at CERN and the SLC at SLAC. The E158 experiment at SLAC complements these results by measuring the weak mixing angle at a Q² of 0.026 (GeV/c)², far below the weak scale. The experiment utilizes a 48 GeV longitudinally polarized electron beam on unpolarized atomic electrons in a target of liquid hydrogen to measure the parity-violating asymmetry A^PV in Møller scattering. The tree-level prediction for A^PV is proportional to 1 − 4 sin²θ_W. Since sin²θ_W ≈ 0.25, the effect of radiative corrections is enhanced, allowing the E158 experiment to probe for physics effects beyond the Standard Model at the TeV scale. This work presents the results from the first two physics runs of the experiment, covering data collected in the year 2002. The parity-violating asymmetry was measured to be A^PV = −158 ppb ± 21 ppb (stat) ± 17 ppb (sys). The result represents the first demonstration of parity violation in Møller scattering. The observed value of A^PV corresponds to a measurement of the weak mixing angle of sin²θ_W^eff = 0.2380 ± 0.0016 (stat) ± 0.0013 (sys), which is in good agreement with the theoretical prediction of sin²θ_W^eff = 0.2385 ± 0.0006 (theory). 5. Linear optics implementation of weak values in Hardy's paradox International Nuclear Information System (INIS) Ahnert, S.E.; Payne, M.C. 2004-01-01 We propose an experimental setup for the implementation of weak measurements in the context of the gedanken experiment known as Hardy's paradox. As Aharonov et al. [Y. Aharonov, A. Botero, S. Popescu, B. Reznik, and J. Tollaksen, Phys. Lett.
A301, 130 (2002)] showed, these weak values form a language with which the paradox can be resolved. Our analysis shows that this language is indeed consistent and experimentally testable. It also reveals exactly how a combination of weak values can give rise to an apparently paradoxical result 6. Electron Capture Dissociation of Weakly Bound Polypeptide Polycationic Complexes DEFF Research Database (Denmark) Haselmann, Kim F; Jørgensen, Thomas J D; Budnik, Bogdan A 2002-01-01 as well as specific complexes of modified glycopeptide antibiotics with their target peptide. The weak nature of bonding is substantiated by blackbody infrared dissociation, low-energy collisional excitation and force-field simulations. The results are consistent with a non-ergodic ECD cleavage mechanism.......We have previously reported that, in electron capture dissociation (ECD), rupture of strong intramolecular bonds in weakly bound supramolecular aggregates can proceed without dissociation of weak intermolecular bonds. This is now illustrated on a series of non-specific peptide-peptide dimers... 7. Weak pion production from nuclei Indian Academy of Sciences (India) effect of Pauli blocking, Fermi motion and renormalization of weak ∆ properties ... Furthermore, the angular distribution and the energy distribution of ... Here ψ_α(p) and u(p) are the Rarita-Schwinger and Dirac spinors for the ∆ and the nucleon. 8. Introduction to unification of electromagnetic and weak interactions International Nuclear Information System (INIS) Martin, F. 1980-01-01 After reviewing the present status of weak interaction phenomenology we discuss the basic principles of gauge theories. Then we show how the Higgs mechanism can give massive quanta of interaction. The so-called 'Weinberg-Salam' model, which unifies electromagnetic and weak interactions, is described. We conclude with a few words on unification with strong interactions and gravity [fr 9.
Radiation protection in well logging: case studies in the Sudan International Nuclear Information System (INIS) Eltayeb, B. A. 2010-12-01 This study was performed to improve the level of radiation protection in well logging and includes two case studies from Sudan (lost or misplaced sources). A general review of basic concepts of radiation and radiation protection is presented, together with an overview of well-logging practice and of the radiation sources used in well logging, the safety of radiation sources, the storage and management of disused (weak) sources, the protection of workers, the potential exposure of the public and workers, and investigations into the causes of sources lost or misplaced in wells. An assessment of well logging was made using a checklist prepared in accordance with the International Atomic Energy Agency (IAEA) Basic Safety Standards, the recommendations of the International Commission on Radiological Protection (ICRP), and the requirements for the safe transport of radiation sources. The checklist includes all requirements of radiation protection. It was found that all requirements were met, except that the calibration of radiation detectors was delayed, the movement of radiation sources from storage to the base of the handling area needs adequate care regarding shielding and safe transport, and a personal monitoring service must be provided in Sudan. An investigation was made into the causes of the loss of nine radiation sources in wells; it was found that these sources were lost at different depths and locations in the wells, and that there was no risk because there was no contamination of fluids caused by damage to the lost sources. Some recommendations are stated that, if implemented, could improve the status of radiation protection in well logging. (Author) 10. Weak interaction rates International Nuclear Information System (INIS) Sugarbaker, E. 1995-01-01 I review available techniques for extraction of weak interaction rates in nuclei. The case for using hadron charge exchange reactions to estimate such rates is presented and contrasted with alternate methods.
Limitations of the (p,n) reaction as a probe of Gamow-Teller strength are considered. A review of recent comparisons between beta-decay studies and (p,n) is made, leading to cautious optimism regarding the final usefulness of (p,n)-derived GT strengths to the field of astrophysics. copyright 1995 American Institute of Physics 11. About some distinguishing features of weak interactions International Nuclear Information System (INIS) Beshtoev, Kh.M. 1999-01-01 It is shown that, in contrast to the strong and electromagnetic theories, additive conserved numbers (such as lepton, flavour and other numbers) and the γ₅ anomaly do not appear in the standard weak interaction theory. It means that in this interaction the additive numbers cannot be conserved. These results are a consequence of the specific character of the weak interaction: the right components of spinors do not participate in this interaction. The schemes of violation of the flavour and lepton numbers were considered 12. A Continuation Method for Weakly Kannan Maps Directory of Open Access Journals (Sweden) Ariza-Ruiz David 2010-01-01 Full Text Available The first continuation method for contractive maps in the setting of a metric space was given by Granas. Later, Frigon extended Granas' theorem to the class of weakly contractive maps, and recently Agarwal and O'Regan have given the corresponding result for a certain type of quasicontractions which includes maps of Kannan type. In this note we introduce the concept of weakly Kannan maps and give a fixed point theorem, and then a continuation method, for this class of maps. 13. Sound radiation modes of cylindrical surfaces and their application to vibro-acoustics analysis of cylindrical shells Science.gov (United States) Sun, Yao; Yang, Tiejun; Chen, Yuehua 2018-06-01 In this paper, sound radiation modes of baffled cylinders have been derived by constructing the radiation resistance matrix analytically.
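For a discrete set of elementary radiators, the free-field radiation resistance matrix has the well-known form R_ij ∝ sinc(k r_ij); its eigenvectors are the sound radiation modes and its eigenvalues their (relative) radiation efficiencies. A hedged sketch in the point-source approximation (not the cylindrical-surface derivation of this paper; the helper name is illustrative):

```python
import numpy as np

# Radiation resistance matrix for N baffled point sources (standard
# free-field approximation): R_ij = R0 * sin(k r_ij) / (k r_ij), with
# the diagonal equal to R0.  Eigenvectors = sound radiation modes,
# eigenvalues = relative modal radiation efficiencies.

def radiation_modes(positions, k, R0=1.0):
    pos = np.asarray(positions, dtype=float)
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    R = R0 * np.sinc(k * r / np.pi)      # np.sinc(x) = sin(pi*x)/(pi*x)
    effs, modes = np.linalg.eigh(R)      # R is symmetric positive semi-definite
    order = np.argsort(effs)[::-1]       # most efficient mode first
    return effs[order], modes[:, order]

# Four hypothetical monopoles on a line, low frequency (k*spacing small):
pts = [(0.05 * i, 0.0, 0.0) for i in range(4)]
effs, modes = radiation_modes(pts, k=2.0)
print(effs)  # one dominant in-phase "piston-like" mode, the rest much weaker
```

The dominance of the first, in-phase mode at low frequency, and the growth of the other modal efficiencies as k increases, is the frequency behavior described in the abstract.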
By examining the characteristics of sound radiation modes, it is found that the radiation coefficient of each radiation mode increases gradually with frequency, while the modal shapes of the sound radiation modes of cylindrical shells show a weak dependence upon frequency. Based on this understanding of sound radiation modes, the vibro-acoustic behavior of cylindrical shells has been analyzed. The vibration responses of cylindrical shells are described by modified Fourier series expansions and solved by the Rayleigh-Ritz method with Flügge shell theory. The radiation efficiency of a resonance is then determined by examining whether the vibration pattern corresponds to a sound radiation mode with high radiation efficiency. Furthermore, the effects of thickness and boundary conditions on the sound radiation of cylindrical shells have been investigated. It is found that the radiation efficiency of thicker shells is greater than that of thinner shells, while shells with a clamped boundary constraint radiate sound more efficiently than simply supported shells under the thin-shell assumption. 14. A study on the life extension of polymer materials under radiation environment Energy Technology Data Exchange (ETDEWEB) Park, K. J.; Park, S. W.; Cho, S. H.; Hong, S. S 2000-12-01 The object of this study is to improve the stability and the economic benefit by reducing the radiation-induced degradation rate of polymer materials used in radiation environments. So far, the resistance of a polymer to radiation-induced oxidation has been improved by stabilizers. They act as antioxidants that interrupt the radical-mediated oxidation chain reaction. The stabilization effect can be larger than that achieved by irradiation in an inert atmosphere. Stabilization is a function of stabilizer concentration up to a certain threshold, but it is not further improved above this concentration.
Beyond the threshold, the rate of radiation-induced oxidation rises to the rate characteristic of the unstabilized polymer. To make up for this weakness, a technique for depositing a thin layer of diamond-like carbon (DLC) on the polymer surface was developed to protect against radiation-induced oxidation in air. 15. A study on the life extension of polymer materials under radiation environment International Nuclear Information System (INIS) Park, K. J.; Park, S. W.; Cho, S. H.; Hong, S. S. 2000-12-01 The object of this study is to improve the stability and the economic benefit by reducing the radiation-induced degradation rate of polymer materials used in radiation environments. So far, the resistance of a polymer to radiation-induced oxidation has been improved by stabilizers. They act as antioxidants that interrupt the radical-mediated oxidation chain reaction. The stabilization effect can be larger than that achieved by irradiation in an inert atmosphere. Stabilization is a function of stabilizer concentration up to a certain threshold, but it is not further improved above this concentration. Beyond the threshold, the rate of radiation-induced oxidation rises to the rate characteristic of the unstabilized polymer. To make up for this weakness, a technique for depositing a thin layer of diamond-like carbon (DLC) on the polymer surface was developed to protect against radiation-induced oxidation in air 16. Nuclear Weak Rates and Detailed Balance in Stellar Conditions Energy Technology Data Exchange (ETDEWEB) Misch, G. Wendell, E-mail: [email protected], E-mail: [email protected] [Department of Physics and Astronomy, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240 (China) 2017-07-20 Detailed balance is often invoked in discussions of nuclear weak transitions in astrophysical environments.
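Detailed balance for a pair of nuclear levels i and f connected by a transition of energy ΔE requires that in thermal equilibrium the upward and downward rates satisfy λ(i→f)/λ(f→i) = (g_f/g_i) exp(−ΔE/kT), with g = 2J + 1 the spin degeneracies. A generic two-level sketch (illustrative numbers, not the shell-model rates of this article):

```python
import math

def equilibrium_ratio(delta_E_MeV, g_i, g_f, kT_MeV):
    """Population ratio n_f/n_i enforced by detailed balance:
    (g_f/g_i) * exp(-delta_E/kT)."""
    return (g_f / g_i) * math.exp(-delta_E_MeV / kT_MeV)

def detailed_balance_up_rate(rate_down, delta_E_MeV, g_i, g_f, kT_MeV):
    """Upward rate implied by a known downward rate via detailed balance."""
    return rate_down * equilibrium_ratio(delta_E_MeV, g_i, g_f, kT_MeV)

# Hypothetical numbers: a 1 MeV transition at kT = 0.5 MeV,
# spins J_i = 0 (g_i = 1) and J_f = 1 (g_f = 3):
up = detailed_balance_up_rate(rate_down=1.0, delta_E_MeV=1.0,
                              g_i=1, g_f=3, kT_MeV=0.5)
print(up)  # 3 * exp(-2) ≈ 0.406
```

A computed pair of strengths that violates this ratio, as the article reports for some shell-model channels, will not by itself reproduce thermal equilibrium populations.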
Satisfaction of detailed balance is rightly touted as a virtue of some methods of computing nuclear transition strengths, but I argue that it need not necessarily be strictly obeyed in astrophysical environments, especially when the environment is far from weak equilibrium. I present the results of shell-model calculations of nuclear weak strengths in both charged-current and neutral-current channels at astrophysical temperatures, finding some violation of detailed balance. I show that a slight modification of the technique to strictly obey detailed balance has little effect on the reaction rates associated with these strengths under most conditions, though at high temperature the modified technique in fact misses some important strength. I comment on the relationship between detailed balance and weak equilibrium in astrophysical conditions. 17. Is UV-A radiation a cause of malignant melanoma? International Nuclear Information System (INIS) Moan, J. 1994-01-01 The first action spectrum for cutaneous malignant melanoma was published recently. This spectrum was obtained using the fish Xiphophorus. If the same action spectrum applies to humans, the following statements are true: Sunbathing products (agents to protect against the sun) that absorb UV-B radiation provide almost no protection against cutaneous malignant melanoma. UV-A solaria are more dangerous than previously expected. If people are determined to use artificial sources of radiation for tanning, they should choose UV-B solaria rather than UV-A solaria. Fluorescent tubes and halogen lamps may have weak melanomagenic effects. Ozone depletion has almost no effect on the incidence rates of CMM, since ozone absorbs very little UV-A radiation. Sunbathing products which contain UV-A-absorbing compounds or neutral filters (like titanium oxide) provide real protection against cutaneous malignant melanoma, at least if they are photochemically inert. 34 refs., 2 figs 18.
Weak Disposability in Nonparametric Production Analysis with Undesirable Outputs NARCIS (Netherlands) Kuosmanen, T.K. 2005-01-01 Environmental Economics and Natural Resources Group at Wageningen University in The Netherlands Weak disposability of outputs means that firms can abate harmful emissions by decreasing the activity level. Modeling weak disposability in nonparametric production analysis has caused some confusion. 19. On n-weak amenability of Rees semigroup algebras Indian Academy of Sciences (India) semigroups. In this work, we shall consider this class of Banach algebras. We examine the n-weak amenability of some semigroup algebras, and give an easier example of a Banach algebra which is n-weakly amenable if n is odd. Let L1(G) be the group algebra of a locally compact group G (§3.3 of [3]). Then Johnson. 20. Weak compactness and sigma-Asplund generated Banach spaces Czech Academy of Sciences Publication Activity Database Fabian, Marián; Montesinos, V.; Zizler, Václav 2007-01-01 Roč. 181, č. 2 (2007), s. 125-152 ISSN 0039-3223 R&D Projects: GA AV ČR IAA1019301; GA AV ČR(CZ) IAA100190610 Institutional research plan: CEZ:AV0Z10190503 Keywords : epsilon-Asplund set * epsilon-weakly compact set * weakly compactly generated Banach space Subject RIV: BA - General Mathematics Impact factor: 0.568, year: 2007 1. Color-weak compensation using local affine isometry based on discrimination threshold matching OpenAIRE Mochizuki, Rika; Kojima, Takanori; Lenz, Reiner; Chao, Jinhui 2015-01-01 We develop algorithms for color-weak compensation and color-weak simulation based on Riemannian geometry models of color spaces. The objective function introduced measures the match of color discrimination thresholds of average normal observers and a color-weak observer. The developed matching process makes use of local affine maps between color spaces of color-normal and color-weak observers. 
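The local affine maps just mentioned have the form x ↦ A(x₀)(x − x₀) + b(x₀); compensation applies the inverse map to the image before display, so that after the observer's (simulated) color-weak transformation the perceived colors approximate the originals. A toy sketch with a single global affine map and a hypothetical compression matrix (the real maps are estimated pointwise from measured discrimination thresholds):

```python
import numpy as np

def compensate(colors, A, b):
    """Apply the inverse of an affine map x -> A @ x + b, so that a
    color-weak observer modeled by that map perceives approximately the
    original colors.  A and b are assumed to come from threshold data."""
    A_inv = np.linalg.inv(A)
    return (A_inv @ (np.asarray(colors, dtype=float) - b).T).T

# Hypothetical model of a red-green-weak observer: one opponent axis is
# compressed (illustrative matrix, not measured data):
A = np.array([[0.4, 0.0, 0.0],    # compressed "red-green" axis
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
b = np.zeros(3)

original = np.array([[0.6, 0.3, 0.2]])
precompensated = compensate(original, A, b)     # boosts the weak axis
perceived = (A @ precompensated.T).T + b        # simulated color-weak view
print(perceived)  # ≈ original: the intended color is recovered
```

Gamut clipping is ignored in this sketch; in practice the boost along the weak axis can push values outside the displayable range, which is one reason the paper works with local maps and threshold matching rather than a single global transform.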
The method can be used to generate displays of images that provide color-normal and color-weak obser... 2. Coverings, Networks and Weak Topologies Czech Academy of Sciences Publication Activity Database Dow, A.; Junnila, H.; Pelant, Jan 2006-01-01 Roč. 53, č. 2 (2006), s. 287-320 ISSN 0025-5793 R&D Projects: GA ČR GA201/97/0216 Institutional research plan: CEZ:AV0Z10190503 Keywords : Banach spaces * weak topologies * networks topologies Subject RIV: BA - General Mathematics 3. Magnetization reversal in weak ferrimagnets and canted antiferromagnets International Nuclear Information System (INIS) Kageyama, H.; Khomskii, D.I.; Levitin, R.Z.; Markina, M.M.; Okuyama, T.; Uchimoto, T.; Vasil'ev, A.N. 2003-01-01 In some ferrimagnets the total magnetization vanishes at a certain compensation temperature T*. In weak magnetic fields, the magnetization can change sign at T* (the magnetization reversal). Much rarer is the observation of a ferrimagnetic-like response in canted antiferromagnets, where the weak ferromagnetic moment is due to the tilting of the sublattice magnetizations. The latter phenomenon was observed in nickel (II) formate dihydrate Ni(HCOO)₂·2H₂O. The observed weak magnetic moment increases initially below T_N = 15.5 K, equals zero at T* = 8.5 K and increases again on lowering the temperature. The sign of the low-field magnetization at any given temperature is determined by the sample's magnetic prehistory, and the signs are opposite to each other at T_N. 4. Theoretical tools for B physics International Nuclear Information System (INIS) Mannel, T. 2006-01-01 In this talk I try to give an overview of the theoretical tools used to compute observables in B physics. The main focus is the developments in the 1/m expansion in semileptonic and nonleptonic decays. (author) 5.
High background radiation area: an important source of exploring the health effects of low dose ionizing radiation International Nuclear Information System (INIS) Wei Luxin 1997-01-01 Objective: To obtain more effective data from epidemiological investigation in high background radiation areas, it is necessary to analyze the advantages, disadvantages, weak points and problems of this kind of radiation research. Methods: For epidemiological investigation of the population health effects of high background radiation, the author selected the high background radiation areas of Yangjiang (HBRA) and a nearby control area (CA) as an instance for analysis. The investigation included classification of dose groups and comparison of the confounding factors in the incidence of mutation-related diseases, cancer mortalities and the frequencies of chromosomal aberrations between HBRA and CA. This research program has been a China-Japan cooperative research project since 1991. Results: The confounding factors above-mentioned were comparable between HBRA and CA, and within the dose groups in HBRA, based on a systematic study over many years. The frequencies of chromosomal aberrations increased with increasing cumulative dose, but not for children around or below 10 years of age. The relative risks (RR) of total and site-specific cancer mortalities for HBRA were below or around 1.00 compared with CA. The incidence of hereditary diseases and congenital deformities in HBRA was in the normal range. The results were interpreted preliminarily by the modified 'dual radiation action' theory and the 'benefit-detriment competition' hypothesis. Conclusions: The author emphasizes the necessity of continuing epidemiological research in HBRA, especially through international cooperation. He also emphasizes the importance of combining epidemiology and radiobiology 6.
From Suitable Weak Solutions to Entropy Viscosity KAUST Repository Guermond, Jean-Luc 2010-12-16 This paper focuses on the notion of suitable weak solutions for the three-dimensional incompressible Navier-Stokes equations and discusses the relevance of this notion to Computational Fluid Dynamics. The purpose of the paper is twofold: (i) to recall basic mathematical properties of the three-dimensional incompressible Navier-Stokes equations and to show how they might relate to LES; (ii) to introduce an entropy viscosity technique based on the notion of suitable weak solution and to illustrate numerically this concept. © 2010 Springer Science+Business Media, LLC. 7. Study of radiative corrections with application to the electron-neutrino scattering International Nuclear Information System (INIS) Oliveira, L.C.S. de. 1977-01-01 The radiative correction method which appears in quantum field theory is studied for some weak interaction processes, e.g., beta decay and muon decay. This method is then applied to calculate the transition probability for electron-neutrino scattering, using the V-A theory as a basis. The calculations of infrared and ultraviolet divergences are also discussed. (L.C.) [pt 8. THE MAKE BREAK TEST AS A DIAGNOSTIC-TOOL IN FUNCTIONAL WEAKNESS NARCIS (Netherlands) VANDERPLOEG, RJO; OOSTERHUIS, HJGH Strength was measured in four major muscle groups with a hand-held dynamometer. The "make" and "break" technique was used with and without encouragement, and fatiguability was tested in patients with organic weakness and patients with functional weakness. Patients with functional weakness could be 9. Compressive strength of brick masonry made with weak mortars DEFF Research Database (Denmark) Pedersen, Erik Steen; Hansen, Klavs Feilberg 2013-01-01 in the joint will ensure a certain level of load-carrying capacity. This is due to the interaction between compression in the weak mortar and tension in the adjacent bricks.
This paper proposes an expression for the compressive strength of masonry made with weak lime mortars (fm... of masonry depends only on the strength of the bricks. A compression failure in masonry made with weak mortars occurs as a tension failure in the bricks, as they seek to prevent the mortar from being pressed out of the joints. The expression is derived by assuming hydrostatic pressure in the mortar joints…, which is the most unfavourable stress distribution with respect to tensile stresses in bricks. The expression is compared with the results of compression tests of masonry made with weak mortars. It can take into account bricks with arbitrary dimensions as well as perforated bricks. For a stronger mortar... 10. PMMA/MWCNT nanocomposite for proton radiation shielding applications Science.gov (United States) Li, Zhenhao; Chen, Siyuan; Nambiar, Shruti; Sun, Yonghai; Zhang, Mingyu; Zheng, Wanping; Yeow, John T. W. 2016-06-01 Radiation shielding in space missions is critical in order to protect astronauts, spacecraft and payloads from radiation damage. Low atomic-number materials are efficient in shielding particle radiation, but they have relatively weak material properties compared to the alloys that are widely used in space applications as structural materials. However, issues related to weight and secondary radiation generation make alloys unsuitable for space radiation shielding. Polymers, on the other hand, can be filled with different filler materials for reinforcement of material properties, while at the same time providing sufficient radiation shielding with lower weight and less secondary radiation generation. In this study, a poly(methyl-methacrylate)/multi-walled carbon nanotube (PMMA/MWCNT) nanocomposite was fabricated. 
The role of MWCNTs embedded in PMMA matrix, in terms of radiation shielding effectiveness, was experimentally evaluated by comparing the proton transmission properties and secondary neutron generation of the PMMA/MWCNT nanocomposite with pure PMMA and aluminum. The results showed that the addition of MWCNTs in PMMA matrix can further reduce the secondary neutron generation of the pure polymer, while no obvious change was found in the proton transmission property. On the other hand, both the pure PMMA and the nanocomposite were 18%-19% lighter in weight than aluminum for stopping the protons with the same energy and generated up to 5% fewer secondary neutrons. Furthermore, the use of MWCNTs showed enhanced thermal stability over the pure polymer, and thus the overall reinforcement effects make MWCNT an effective filler material for applications in the space industry. 11. Radiation hardness of β-Ga2O3 metal-oxide-semiconductor field-effect transistors against gamma-ray irradiation Science.gov (United States) Wong, Man Hoi; Takeyama, Akinori; Makino, Takahiro; Ohshima, Takeshi; Sasaki, Kohei; Kuramata, Akito; Yamakoshi, Shigenobu; Higashiwaki, Masataka 2018-01-01 The effects of ionizing radiation on β-Ga2O3 metal-oxide-semiconductor field-effect transistors (MOSFETs) were investigated. A gamma-ray tolerance as high as 1.6 MGy(SiO2) was demonstrated for the bulk Ga2O3 channel by virtue of weak radiation effects on the MOSFETs' output current and threshold voltage. The MOSFETs remained functional with insignificant hysteresis in their transfer characteristics after exposure to the maximum cumulative dose. Despite the intrinsic radiation hardness of Ga2O3, radiation-induced gate leakage and drain current dispersion ascribed respectively to dielectric damage and interface charge trapping were found to limit the overall radiation hardness of these devices. 12. 
Gauge-invariant formalism of cosmological weak lensing Science.gov (United States) Yoo, Jaiyul; Grimm, Nastassia; Mitsou, Ermis; Amara, Adam; Refregier, Alexandre 2018-04-01 We present the gauge-invariant formalism of cosmological weak lensing, accounting for all the relativistic effects due to the scalar, vector, and tensor perturbations at the linear order. While the light propagation is fully described by the geodesic equation, the relation of the photon wavevector to the physical quantities requires the specification of the frames, where they are defined. By constructing the local tetrad bases at the observer and the source positions, we clarify the relation of the weak lensing observables such as the convergence, the shear, and the rotation to the physical size and shape defined in the source rest-frame and the observed angle and redshift measured in the observer rest-frame. Compared to the standard lensing formalism, additional relativistic effects contribute to all the lensing observables. We explicitly verify the gauge-invariance of the lensing observables and compare our results to previous work. In particular, we demonstrate that even in the presence of the vector and tensor perturbations, the physical rotation of the lensing observables vanishes at the linear order, while the tetrad basis rotates along the light propagation compared to a FRW coordinate. Though the latter is often used as a probe of primordial gravitational waves, the rotation of the tetrad basis is indeed not a physical observable. We further clarify its relation to the E-B decomposition in weak lensing. Our formalism provides a transparent and comprehensive perspective of cosmological weak lensing. 13. Voltage Weak DC Distribution Grids NARCIS (Netherlands) Hailu, T.G.; Mackay, L.J.; Ramirez Elizondo, L.M.; Ferreira, J.A. 2017-01-01 This paper describes the behavior of voltage weak DC distribution systems. These systems have relatively small system capacitance. 
The size of system capacitance, which stores energy, has a considerable effect on the value of fault currents, control complexity, and system reliability. A number of 14. Multiplied effect of heat and radiation in chemical stress relaxation International Nuclear Information System (INIS) Ito, Masayuki 1981-01-01 Useful knowledge about the deterioration of rubber due to radiation can be obtained by measuring chemical stress relaxation. As an example, the rubber coating of cables in a reactor containment vessel is estimated to be irradiated by weak radiation at temperatures between 60 and 90 °C for about 40 years. In such cases, it is desirable to establish a method for accelerated testing of the deterioration. The author showed previously that the law of time-dose rate conversion holds in the case of radiation. In this study, the chemical stress relaxation of rubber was measured under the simultaneous application of heat and radiation, and a multiplied effect of heat and radiation on the stress relaxation speed was found. A factor of multiplication of heat and radiation was therefore proposed to describe quantitatively the degree of the multiplied effect. The chloroprene rubber used was supplied by Hitachi Cable Co., Ltd. The experimental method and the results are reported. The multiplication of heat and radiation is not caused by direct scission of molecular chains by radiation; rather, it arises from the temperature dependence of the rates of the various reactions by which the activated species lead to chain scission through a complex reaction mechanism, and from the temperature dependence of the diffusion rate of oxygen in rubber. (Kako, I.) 15. 
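The "factor of multiplication" proposed in the stress-relaxation record above can be read as the ratio of the observed combined degradation rate to the additive expectation. A minimal sketch with hypothetical rate constants (the abstract gives no numerical values; the function name and numbers are illustrative assumptions, not the paper's definition):

```python
def multiplication_factor(k_heat, k_rad, k_combined):
    """Ratio of the observed combined stress-relaxation rate to the additive
    expectation; a value greater than 1 indicates a multiplied (synergistic)
    effect of heat and radiation."""
    return k_combined / (k_heat + k_rad)

# Hypothetical rate constants in arbitrary units: heat alone, radiation alone,
# and the two applied simultaneously.
factor = multiplication_factor(k_heat=1.0e-3, k_rad=0.5e-3, k_combined=2.4e-3)
# Here factor = 1.6 > 1, i.e. the combined exposure degrades the rubber
# faster than the sum of the separate exposures would suggest.
```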
Application of the MCMC Method for the Calibration of DSMC Parameters to NASA EAST Results for Ionizing, Radiating Hypersonic Flows Data.gov (United States) National Aeronautics and Space Administration — The reentry of a vehicle into a planetary atmosphere creates extreme Mach number conditions which produce a weakly ionized plasma and radiation. The greatest... 16. Towards a quantitative description of hadronic weak decays International Nuclear Information System (INIS) Bigi, I.I.Y. 1981-01-01 We develop a formalism for describing hadronic weak annihilation decays in analogy to the treatment of deep inelastic lepton-nucleon scattering: we write down evolution equations for colour singlet and octet (Q q̄) systems inside mesons of increasing mass. Using D decays as input, we can predict weak annihilation decay rates of heavier mesons in a semiquantitative fashion despite our ignorance of bound-state dynamics. (orig.) 17. Epigenetic approaches towards radiation countermeasure International Nuclear Information System (INIS) Agrawala, Paban K. 2012-01-01 In recent years, histone deacetylase inhibitors (HDACi) have gained tremendous attention for their anticancer, tumor radiosensitising and chemosensitising properties. HDACi enhance the acetylation status of histone proteins of the chromatin besides other non-histone target proteins, an effect that is regulated by the HDACs (histone deacetylases) and HATs (histone acetyltransferases) in the cells. HDACi affect cell cycle progression, differentiation, DNA damage and repair processes and cell death, which contributes to their anticancer properties. One of the main reasons for HDACi gaining attention as potential anticancer therapeutics is their profound action on cancer cells with minimal or no effect on normal cells. However, in recent years, the possible non-oncological applications of HDACi have been explored extensively, viz., in neurodegenerative diseases. 
Ionizing radiation exposure leads to significant alterations in signal transduction processes, gene expression patterns, DNA damage and repair processes, and cell cycle progression, and the underlying epigenetic changes (acetylation of histones and methylation of DNA and histones in particular) are now emerging. Some recent literature suggests that HDACi can confer cytoprotective properties on normal tissues. We at INMAS evaluated certain weak HDACi molecules of dietary origin for their ability to modulate cellular radiation responses in normal cells and animals. As expected, post-irradiation treatment with selected HDACi molecules produced a significant reduction in radiation-induced damage. The possible mechanisms of action of HDACi in reducing radiation injuries will be discussed based on our own results and recent reports. (author) 18. High Energy Theory: Task B and Task L. Progress report International Nuclear Information System (INIS) 1994-01-01 Research areas briefly covered in this report include semi-leptonic and non-leptonic B- and D-decays, CP violation, lattice gauge theory, light cone field theory, supersymmetry, fermion mass matrices, superstring-derived SUSY GUTs, neutrino physics and cosmology 19. Learning from Weak and Noisy Labels for Semantic Segmentation KAUST Repository Lu, Zhiwu 2016-04-08 A weakly supervised semantic segmentation (WSSS) method aims to learn a segmentation model from weak (image-level) as opposed to strong (pixel-level) labels. By avoiding the tedious pixel-level annotation process, it can exploit the unlimited supply of user-tagged images from media-sharing sites such as Flickr for large scale applications. However, these ‘free’ tags/labels are often noisy and few existing works address the problem of learning with both weak and noisy labels. In this work, we cast the WSSS problem into a label noise reduction problem. 
Specifically, after segmenting each image into a set of superpixels, the weak and potentially noisy image-level labels are propagated to the superpixel level resulting in highly noisy labels; the key to semantic segmentation is thus to identify and correct the superpixel noisy labels. To this end, a novel L1-optimisation based sparse learning model is formulated to directly and explicitly detect noisy labels. To solve the L1-optimisation problem, we further develop an efficient learning algorithm by introducing an intermediate labelling variable. Extensive experiments on three benchmark datasets show that our method yields state-of-the-art results given noise-free labels, whilst significantly outperforming the existing methods when the weak labels are also noisy. 20. Learning from Weak and Noisy Labels for Semantic Segmentation KAUST Repository Lu, Zhiwu; Fu, Zhenyong; Xiang, Tao; Han, Peng; Wang, Liwei; Gao, Xin 2016-01-01 A weakly supervised semantic segmentation (WSSS) method aims to learn a segmentation model from weak (image-level) as opposed to strong (pixel-level) labels. By avoiding the tedious pixel-level annotation process, it can exploit the unlimited supply of user-tagged images from media-sharing sites such as Flickr for large scale applications. However, these ‘free’ tags/labels are often noisy and few existing works address the problem of learning with both weak and noisy labels. In this work, we cast the WSSS problem into a label noise reduction problem. Specifically, after segmenting each image into a set of superpixels, the weak and potentially noisy image-level labels are propagated to the superpixel level resulting in highly noisy labels; the key to semantic segmentation is thus to identify and correct the superpixel noisy labels. To this end, a novel L1-optimisation based sparse learning model is formulated to directly and explicitly detect noisy labels. 
To solve the L1-optimisation problem, we further develop an efficient learning algorithm by introducing an intermediate labelling variable. Extensive experiments on three benchmark datasets show that our method yields state-of-the-art results given noise-free labels, whilst significantly outperforming the existing methods when the weak labels are also noisy. 1. Implication of new CEC recommendations for individual monitoring for external radiation doses to the skin and the extremities DEFF Research Database (Denmark) Christensen, P.; Julius, H.W.; Marshall, T.O. 1991-01-01 A drafting group consisting of the above authors has assisted the CEC in revising the CEC document Technical Recommendations for Monitoring the Exposure to Individuals to External Radiation, EUR 5287, published in 1975. The paper highlights sections of the revised version relating particularly...... to irradiation of the skin and the extremities and focusses on problems connected to exposure to weakly penetrating radiations. Concepts of individual monitoring for external radiation exposures to the skin of the whole body and to the extremities are discussed and guidance is given as regards dose quantities... 2. On Hardy's paradox, weak measurements, and multitasking diagrams Energy Technology Data Exchange (ETDEWEB) Meglicki, Zdzislaw, E-mail: [email protected] [Indiana University, Office of the Vice President for Information Technology, 601 E. Kirkwood Ave., Room 116, Bloomington, IN 47405-1223 (United States) 2011-07-04 We discuss Hardy's paradox and weak measurements by using multitasking diagrams, which are introduced to illustrate the progress of quantum probabilities through the double interferometer system. We explain how Hardy's paradox is avoided and elaborate on the outcome of weak measurements in this context. -- Highlights: → Hardy's paradox explained and eliminated. → Weak measurements: what is really measured? 
→ Multitasking diagrams: introduced and used to discuss quantum mechanical processes. 3. The structure of weak interaction International Nuclear Information System (INIS) Zee, A. 1977-01-01 The effect of introducing right-handed currents on the structure of weak interaction is discussed. The ΔI=1/2 rule is in the spotlight. The discussion provides an interesting example in which the so-called Iizuka-Okubo-Zweig rule is not only evaded, but completely negated 4. Qweak: A Precision Measurement of the Proton's Weak Charge International Nuclear Information System (INIS) David Armstrong; Todd Averett; James Birchall; James Bowman; Roger Carlini; Swapan Chattopadhyay; Charles Davis; J. Doornbos; James Dunne; Rolf Ent; Jens Erler; Willie Falk; John Finn; Tony Forest; David Gaskell; Klaus Grimm; C. Hagner; F. Hersman; Maurik Holtrop; Kathleen Johnston; R.T. Jones; Kyungseon Joo; Cynthia Keppel; Elie Korkmaz; Stanley Kowalski; Lawrence Lee; Allison Lung; David Mack; Stanislaw Majewski; Gregory Mitchell; Hamlet Mkrtchyan; Norman Morgan; Allena Opper; Shelley Page; Seppo Penttila; Mark Pitt; Benard Poelker; Tracy Porcelli; William Ramsay; Michael Ramsey-musolf; Julie Roche; Neven Simicevic; Gregory Smith; Riad Suleiman; Simon Taylor; Willem Van Oers; Steven Wells; W.S. Wilburn; Stephen Wood; Carl Zorn 2004-01-01 The Qweak experiment at Jefferson Lab aims to make a 4% measurement of the parity-violating asymmetry in elastic scattering at very low Q² of a longitudinally polarized electron beam on a proton target. The experiment will measure the weak charge of the proton, and thus the weak mixing angle at low energy scale, providing a precision test of the Standard Model. Since the value of the weak mixing angle is approximately 1/4, the weak charge of the proton Q_w^p = 1 − 4 sin²θ_W is suppressed in the Standard Model, making it especially sensitive to the value of the mixing angle and also to possible new physics. 
The experiment is approved to run at JLab, and the construction plan calls for the hardware to be ready to install in Hall C in 2007. The theoretical context of the experiment and the status of its design are discussed 5. Radiative d–d transitions at tungsten centers in II–VI semiconductors Energy Technology Data Exchange (ETDEWEB) Ushakov, V. V., E-mail: [email protected]; Krivobok, V. S.; Pruchkina, A. A. [Russian Academy of Sciences, Lebedev Physical Institute (Russian Federation) 2017-03-15 The luminescence spectra of W impurity centers in II–VI semiconductors, specifically, ZnSe, CdS, and CdSe, are studied. It is found that, if the electron system of 5d (W) centers is considered instead of the electron system of 3d (Cr) centers, the spectral characteristics of the impurity radiation are substantially changed. The electron transitions are identified in accordance with Tanabe–Sugano diagrams of crystal field theory. With consideration for the specific features of the spectra, it is established that, in the crystals under study, radiative transitions at 5d W centers occur between levels with different spins in the region of a weak crystal field. 6. The local contribution to the microwave background radiation International Nuclear Information System (INIS) Pecker, Jean-Claude; Narlikar, Jayant V.; Ochsenbein, Francois; Wickramasinghe, Chandra 2015-01-01 The observed microwave background radiation (MBR) is commonly interpreted as the relic of an early hot universe, and its observed features (spectrum and anisotropy) are explained in terms of properties of the early universe. Here we describe a complementary, even possibly alternative, interpretation of MBR, first proposed in the early 20th century, and adapt it to modern observations. For example, the stellar Hipparcos data show that the energy density of starlight from the Milky Way, if suitably thermalized, yields a temperature of ∼2.81 K. 
This and other arguments given here strongly suggest that the origin of MBR may lie, at least in a very large part, in re-radiation of thermalized galactic starlight. The strengths and weaknesses of this alternative radical explanation are discussed. (paper) 7. Advances in the measurement of weak magnetic fields International Nuclear Information System (INIS) Li Damin; Huang Minzhe. 1992-01-01 The state-of-the-art and general features of instruments for measuring weak magnetic fields (such as the non-directional magnetometer, induced coil magnetometer, proton magnetometer, optical pumping magnetometer, flux-gate magnetometer and superconducting quantum magnetometer) are briefly described. Emphasis is laid on the development of a novel technique used in the flux-gate magnetometer and the liquid nitrogen SQUID. Typical applications of the measuring techniques for weak magnetic fields are given 8. A dynamical weak scale from inflation Energy Technology Data Exchange (ETDEWEB) You, Tevong, E-mail: [email protected] [DAMTP, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA (United Kingdom) 2017-09-01 Dynamical scanning of the Higgs mass by an axion-like particle during inflation may provide a cosmological component to explaining part of the hierarchy problem. We propose a novel interplay of this cosmological relaxation mechanism with inflation, whereby the backreaction of the Higgs vacuum expectation value near the weak scale causes inflation to end. As Hubble drops, the relaxion's dissipative friction increases relative to Hubble and slows it down enough to be trapped by the barriers of its periodic potential. Such a scenario raises the natural cut-off of the theory up to ∼10¹⁰ GeV, while maintaining a minimal relaxion sector without having to introduce additional scanning scalars or new physics coincidentally close to the weak scale. 9. 
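The ∼2.81 K figure quoted in the microwave-background record above amounts to inverting the blackbody relation u = aT⁴, where a = 4σ/c is the radiation constant. A minimal sketch of that inversion (the starlight energy density used here is back-computed for illustration, not a value taken from the paper):

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C = 2.99792458e8         # speed of light, m/s
A_RAD = 4 * SIGMA / C    # radiation constant a, J m^-3 K^-4 (~7.566e-16)

def blackbody_temperature(u):
    """Temperature (K) of a blackbody radiation field with energy density u (J/m^3)."""
    return (u / A_RAD) ** 0.25

# A temperature of 2.81 K corresponds to u = a * 2.81**4, about 4.7e-14 J/m^3
# (roughly 0.3 eV/cm^3 of thermalized starlight energy density).
u_starlight = A_RAD * 2.81 ** 4
t_recovered = blackbody_temperature(u_starlight)  # recovers ≈ 2.81 K
```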
Working group report: Low energy and flavour physics Indian Academy of Sciences (India) This is a report of the low energy and flavour physics working group at ... that calculates the non-leptonic decay amplitudes including the long-distance contributions. There were three lectures that lasted for over seven hours, and were. 10. Recombination dynamics of excitons with low non-radiative component in semi-polar (10-11)-oriented GaN/AlGaN multiple quantum wells International Nuclear Information System (INIS) Rosales, D.; Gil, B.; Bretagnon, T.; Guizal, B.; Izyumskaya, N.; Monavarian, M.; Zhang, F.; Okur, S.; Avrutin, V.; Özgür, Ü.; Morkoç, H. 2014-01-01 Optical properties of GaN/Al0.2Ga0.8N multiple quantum wells grown with semi-polar (10-11) orientation on patterned 7°-off Si (001) substrates have been investigated. Studies performed at 8 K reveal the in-plane anisotropic behavior of the QW photoluminescence (PL) intensity for this semi-polar orientation. The time resolved PL measurements were carried out in the temperature range from 8 to 295 K to deduce the effective recombination decay times, with respective radiative and non-radiative contributions. The non-radiative component remains relatively weak with increasing temperature, indicative of high crystalline quality. The radiative decay time is a consequence of contribution from both localized and free excitons. We report an effective density of interfacial defects of 2.3 × 10¹² cm⁻² and a radiative recombination time of τ_loc = 355 ps for the localized excitons. This latter value is significantly larger than those reported for the non-polar structures, which we attribute to the presence of a weak residual electric field in the semi-polar QW layers 11. Recombination dynamics of excitons with low non-radiative component in semi-polar (10-11)-oriented GaN/AlGaN multiple quantum wells Energy Technology Data Exchange (ETDEWEB) Rosales, D.; Gil, B.; Bretagnon, T.; Guizal, B. 
[CNRS, Laboratoire Charles Coulomb, UMR 5221, F-34095 Montpellier (France); Université Montpellier 2, Laboratoire Charles Coulomb, UMR 5221, F-34095 Montpellier (France); Izyumskaya, N.; Monavarian, M.; Zhang, F.; Okur, S.; Avrutin, V.; Özgür, Ü.; Morkoç, H. [Department of Electrical and Computer Engineering, Virginia Commonwealth University, Richmond, Virginia 23238 (United States) 2014-09-07 Optical properties of GaN/Al0.2Ga0.8N multiple quantum wells grown with semi-polar (10-11) orientation on patterned 7°-off Si (001) substrates have been investigated. Studies performed at 8 K reveal the in-plane anisotropic behavior of the QW photoluminescence (PL) intensity for this semi-polar orientation. The time resolved PL measurements were carried out in the temperature range from 8 to 295 K to deduce the effective recombination decay times, with respective radiative and non-radiative contributions. The non-radiative component remains relatively weak with increasing temperature, indicative of high crystalline quality. The radiative decay time is a consequence of contribution from both localized and free excitons. We report an effective density of interfacial defects of 2.3 × 10¹² cm⁻² and a radiative recombination time of τ_loc = 355 ps for the localized excitons. This latter value is significantly larger than those reported for the non-polar structures, which we attribute to the presence of a weak residual electric field in the semi-polar QW layers. 12. How the Weak Variance of Momentum Can Turn Out to be Negative Science.gov (United States) Feyereisen, M. R. 2015-05-01 Weak values are average quantities, therefore investigating their associated variance is crucial in understanding their place in quantum mechanics. 
We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991) . The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has on a class of `subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from a variational principle. 13. Study of the Weak Charged Hadronic Current in b Decays CERN Document Server Acciarri, M; Aguilar-Benítez, M; Ahlen, S P; Alpat, B; Alcaraz, J; Alemanni, G; Allaby, James V; Aloisio, A; Alverson, G; Alviggi, M G; Ambrosi, G; Anderhub, H; Andreev, V P; Angelescu, T; Anselmo, F; Antreasyan, D; Arefev, A; Azemoon, T; Aziz, T; Bagnaia, P; Baksay, L; Ball, R C; Banerjee, S; Banicz, K; Barillère, R; Barone, L; Bartalini, P; Baschirotto, A; Basile, M; Battiston, R; Bay, A; Becattini, F; Becker, U; Behner, F; Berdugo, J; Berges, P; Bertucci, B; Betev, B L; Bhattacharya, S; Biasini, M; Biland, A; Bilei, G M; Blaising, J J; Blyth, S C; Bobbink, Gerjan J; Böck, R K; Böhm, A; Borgia, B; Boucham, A; Bourilkov, D; Bourquin, Maurice; Boutigny, D; Branson, J G; Brigljevic, V; Brock, I C; Buffini, A; Buijs, A; Burger, J D; Burger, W J; Busenitz, J K; Buytenhuijs, A O; Cai, X D; Campanelli, M; Capell, M; Cara Romeo, G; Caria, M; Carlino, G; Cartacci, A M; Casaus, J; Castellini, G; Cavallari, F; Cavallo, N; Cecchi, C; Cerrada-Canales, M; Cesaroni, F; Chamizo-Llatas, M; Chan, A; Chang, Y H; 
Chaturvedi, U K; Chemarin, M; Chen, A; Chen, G; Chen, G M; Chen, H F; Chen, H S; Chen, M; Chiefari, G; Chien, C Y; Choi, M T; Cifarelli, Luisa; Cindolo, F; Civinini, C; Clare, I; Clare, R; Cohn, H O; Coignet, G; Colijn, A P; Colino, N; Commichau, V; Costantini, S; Cotorobai, F; de la Cruz, B; Csilling, Akos; Dai, T S; D'Alessandro, R; De Asmundis, R; De Boeck, H; Degré, A; Deiters, K; Denes, P; De Notaristefani, F; DiBitonto, Daryl; Diemoz, M; Van Dierendonck, D N; Di Lodovico, F; Dionisi, C; Dittmar, Michael; Dominguez, A; Doria, A; Dorne, I; Dova, M T; Drago, E; Duchesneau, D; Duinker, P; Durán, I; Dutta, S; Easo, S; Efremenko, Yu V; El-Mamouni, H; Engler, A; Eppling, F J; Erné, F C; Ernenwein, J P; Extermann, Pierre; Fabre, M; Faccini, R; Falciano, S; Favara, A; Fay, J; Fedin, O; Felcini, Marta; Fenyi, B; Ferguson, T; Fernández, D; Ferroni, F; Fesefeldt, H S; Fiandrini, E; Field, J H; Filthaut, Frank; Fisher, P H; Forconi, G; Fredj, L; Freudenreich, Klaus; Furetta, C; Galaktionov, Yu; Ganguli, S N; García-Abia, P; Gau, S S; Gentile, S; Gerald, J; Gheordanescu, N; Giagu, S; Goldfarb, S; Goldstein, J; Gong, Z F; Gougas, Andreas; Gratta, Giorgio; Grünewald, M W; Gupta, V K; Gurtu, A; Gutay, L J; Hartmann, B; Hasan, A; Hatzifotiadou, D; Hebbeker, T; Hervé, A; Van Hoek, W C; Hofer, H; Hoorani, H; Hou, S R; Hu, G; Innocente, Vincenzo; Janssen, H; Jenkes, K; Jin, B N; Jones, L W; de Jong, P; Josa-Mutuberria, I; Kasser, A; Khan, R A; Kamrad, D; Kamyshkov, Yu A; Kapustinsky, J S; Karyotakis, Yu; Kaur, M; Kienzle-Focacci, M N; Kim, D; Kim, J K; Kim, S C; Kim, Y G; Kinnison, W W; Kirkby, A; Kirkby, D; Kirkby, Jasper; Kiss, D; Kittel, E W; Klimentov, A; König, A C; Korolko, I; Koutsenko, V F; Krämer, R W; Krenz, W; Kuijten, H; Kunin, A; Ladrón de Guevara, P; Landi, G; Lapoint, C; Lassila-Perini, K M; Laurikainen, P; Lebeau, M; Lebedev, A; Lebrun, P; Lecomte, P; Lecoq, P; Le Coultre, P; Lee Jae Sik; Lee, K Y; Leggett, C; Le Goff, J M; Leiste, R; Leonardi, E; Levchenko, P M; 
Li Chuan; Lieb, E H; Lin, W T; Linde, Frank L; Lista, L; Liu, Z A; Lohmann, W; Longo, E; Lu, W; Lü, Y S; Lübelsmeyer, K; Luci, C; Luckey, D; Luminari, L; Lustermann, W; Ma Wen Gan; Maity, M; Majumder, G; Malgeri, L; Malinin, A; Maña, C; Mangla, S; Marchesini, P A; Marin, A; Martin, J P; Marzano, F; Massaro, G G G; McNally, D; Mele, S; Merola, L; Meschini, M; Metzger, W J; Von der Mey, M; Mi, Y; Mihul, A; Van Mil, A J W; Mirabelli, G; Mnich, J; Molnár, P; Monteleoni, B; Moore, R; Morganti, S; Moulik, T; Mount, R; Müller, S; Muheim, F; Nagy, E; Nahn, S; Napolitano, M; Nessi-Tedaldi, F; Newman, H; Nippe, A; Nisati, A; Nowak, H; Opitz, H; Organtini, G; Ostonen, R; Pandoulas, D; Paoletti, S; Paolucci, P; Park, H K; Pascale, G; Passaleva, G; Patricelli, S; Paul, T; Pauluzzi, M; Paus, C; Pauss, Felicitas; Peach, D; Pei, Y J; Pensotti, S; Perret-Gallix, D; Petrak, S; Pevsner, A; Piccolo, D; Pieri, M; Pinto, J C; Piroué, P A; Pistolesi, E; Plyaskin, V; Pohl, M; Pozhidaev, V; Postema, H; Produit, N; Prokofev, D; Prokofiev, D O; Rahal-Callot, G; Rancoita, P G; Rattaggi, M; Raven, G; Razis, P A; Read, K; Ren, D; Rescigno, M; Reucroft, S; Van Rhee, T; Riemann, S; Riemers, B C; Riles, K; Rind, O; Ro, S; Robohm, A; Rodin, J; Rodríguez-Calonge, F J; Roe, B P; Romero, L; Rosier-Lees, S; Rosselet, P; Van Rossum, W; Roth, S; Rubio, Juan Antonio; Rykaczewski, H; Salicio, J; Sánchez, E; Santocchia, A; Sarakinos, M E; Sarkar, S; Sassowsky, M; Sauvage, G; Schäfer, C; Shchegelskii, V; Schmidt-Kärst, S; Schmitz, D; Schmitz, P; Schneegans, M; Scholz, N; Schopper, Herwig Franz; Schotanus, D J; Schwenke, J; Schwering, G; Sciacca, C; Sciarrino, D; Sens, Johannes C; Servoli, L; Shevchenko, S; Shivarov, N; Shoutko, V; Shukla, J; Shumilov, E; Shvorob, A V; Siedenburg, T; Son, D; Sopczak, André; Soulimov, V; Smith, B; Spillantini, P; Steuer, M; Stickland, D P; Stone, H; Stoyanov, B; Strässner, A; Strauch, K; Sudhakar, K; Sultanov, G G; Sun, L Z; Susinno, G F; Suter, H; Swain, J D; Tang, X W; 
Tauscher, Ludwig; Taylor, L; Ting, Samuel C C; Ting, S M; Tonutti, M; Tonwar, S C; Tóth, J; Tully, C; Tuchscherer, H; Tung, K L; Uchida, Y; Ulbricht, J; Uwer, U; Valente, E; Van de Walle, R T; Vesztergombi, G; Vetlitskii, I; Viertel, Gert M; Vivargent, M; Völkert, R; Vogel, H; Vogt, H; Vorobev, I; Vorobyov, A A; Vorvolakos, A; Wadhwa, M; Wallraff, W; Wang, J C; Wang, X L; Wang, Z M; Weber, A; Wittgenstein, F; Wu, S X; Wynhoff, S; Xu, J; Xu, Z Z; Yang, B Z; Yang, C G; Yao, X Y; Ye, J B; Yeh, S C; You, J M; Zalite, A; Zalite, Yu; Zemp, P; Zeng, Y; Zhang, Z; Zhang, Z P; Zhou, B; Zhou, Y; Zhu, G Y; Zhu, R Y; Zichichi, Antonino; Ziegler, F 1997-01-01 Charged and neutral particle multiplicities of jets associated with identified semileptonic and hadronic b decays are studied. The observed differences between these jets are used to determine the inclusive properties of the weak charged hadronic current. The average charged particle multiplicity of the weak charged hadronic current in b decays is measured for the first time to be 2.69 ± 0.07 (stat.) ± 0.14 (syst.). This result is in good agreement with the JETSET hadronization model of the weak charged hadronic current if 40 ± 17% of the produced mesons are light-flavored tensor (L=1) mesons. This level of tensor meson production is consistent with the measurement of the π⁰ multiplicity in the weak charged hadronic current in b decays. 14. Weak interaction contribution to the energy spectrum of two-lepton system International Nuclear Information System (INIS) Martynenko, A.P.; Saleev, V.A. 1995-01-01 The contribution of neutral currents to the weak interaction quasi-potential of two leptons is investigated. The exact expression for the weak interaction operator of the system for arbitrary binding energies in one-boson approximation is obtained. The weak interaction contribution to the S-level displacement of a hydrogen-like atom is calculated. 14 refs 15. 
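The b-decay multiplicity quoted above is reported with separate statistical and systematic uncertainties, 2.69 ± 0.07 (stat.) ± 0.14 (syst.); treating the two as independent, the total error is their combination in quadrature. A minimal sketch:

```python
import math

def total_uncertainty(stat, syst):
    """Combine independent statistical and systematic uncertainties in quadrature."""
    return math.sqrt(stat ** 2 + syst ** 2)

# For the charged-multiplicity measurement quoted above:
sigma_total = total_uncertainty(0.07, 0.14)
print(round(sigma_total, 3))  # → 0.157, i.e. 2.69 ± 0.16 overall
```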
Effect of Radiation on Chromospheric Magnetic Reconnection: Reactive and Collisional Multi-fluid Simulations Energy Technology Data Exchange (ETDEWEB) Alvarez Laguna, A.; Poedts, S. [Centre for Mathematical Plasma-Astrophysics, KU Leuven, Leuven (Belgium); Lani, A.; Deconinck, H. [Aeronautics and Aerospace Department, von Karman Institute for Fluid Dynamics, Sint-Genesius-Rode (Belgium); Mansour, N. N. [NASA Ames Research Center, MS 230-3, Moffett Field, CA 94035 (United States) 2017-06-20 We study magnetic reconnection under chromospheric conditions in five different ionization levels from 0.5% to 50% using a self-consistent two-fluid (ions + neutrals) model that accounts for compressibility, collisional effects, chemical inequilibrium, and anisotropic heat conduction. Results with and without radiation are compared, using two models for the radiative losses: an optically thin radiation loss function, and an approximation of the radiative losses of a plasma with photospheric abundances. The results without radiation show that reconnection occurs faster for the weakly ionized cases as a result of the effect of ambipolar diffusion and fast recombination. The tearing mode instability appears earlier in the low ionized cases and grows rapidly. We find that radiative losses have a stronger effect than was found in previous results as the cooling changes the plasma pressure and the concentration of ions inside the current sheet. This affects the ambipolar diffusion and the chemical equilibrium, resulting in thin current sheets and enhanced reconnection. The results quantify this complex nonlinear interaction by showing that a strong cooling produces faster reconnections than have been found in models without radiation. The results accounting for radiation show timescales and outflows comparable to spicules and chromospheric jets. 16. Weak differentiability of product measures NARCIS (Netherlands) Heidergott, B.F.; Leahu, H. 
2010-01-01 In this paper, we study cost functions over a finite collection of random variables. For these types of models, a calculus of differentiation is developed that allows us to obtain a closed-form expression for derivatives where "differentiation" has to be understood in the weak sense. The technique 17. System for radiation emergency medicine. Activities of tertiary radiation emergency hospitals International Nuclear Information System (INIS) Kamiya, Kenji; Tanigawa, Koichi; Hosoi, Yoshio 2011-01-01 The Japanese system for radiation emergency medicine was primarily built up by the Cabinet Nuclear Safety Commission in 2001 following the Tokai JCO Accident (1999) and is composed of primary, secondary and tertiary medical organizations. This paper mainly describes the roles and actions of the tertiary facilities during the Fukushima Nuclear Power Plant Accident and tasks to be improved in the future. The primary and secondary organizations in the system above are set up in the prefectures hosting or neighboring a nuclear facility, and the tertiary ones in two parts of western and eastern Japan. The western organization is at Hiroshima University with its 7 cooperating hospitals, and is responsible for patients exposed to high-dose external radiation, patients with serious complications, and patients difficult to treat in the primary/secondary hospitals. The eastern one is at the National Institute of Radiological Sciences (NIRS) with 6 cooperating hospitals, and is responsible for patients with internal radiation exposure difficult to treat, patients with body-surface contamination that is difficult to decontaminate and/or capable of causing secondary contamination, and patients difficult to treat in the secondary hospitals.
The tertiary organizations have made efforts for the education and training of medical staff, for network construction among the primary, secondary and other medical care facilities, for the establishment of a patient transfer system, and for participation in the international networks of global organizations such as the Response and Assistance Network (RANET) of the International Atomic Energy Agency (IAEA) and the Radiation Emergency Medical Preparedness and Assistance Network (REMPAN) of the World Health Organization (WHO). During the Fukushima Accident, staff of the two tertiary hospitals began to provide medical care on site (Mar. 12-) and identified the following tasks to be improved in the future: the early definition of medical care and its network system, and of the Emergency Planning Zone (EPZ); urgent evacuation of disaster-vulnerable residents such as the elderly 18. Weak Hard X-Ray Emission from Two Broad Absorption Line Quasars Observed with NuSTAR: Compton-Thick Absorption or Intrinsic X-Ray Weakness? Science.gov (United States) Luo, B.; Brandt, W. N.; Alexander, D. M.; Harrison, F. A.; Stern, D.; Bauer, F. E.; Boggs, S. E.; Christensen, F. E.; Comastri, A.; Craig, W. W.; 2013-01-01 We present Nuclear Spectroscopic Telescope Array (NuSTAR) hard X-ray observations of two X-ray weak broad absorption line (BAL) quasars, PG 1004+130 (radio loud) and PG 1700+518 (radio quiet). Many BAL quasars appear X-ray weak, probably due to absorption by the shielding gas between the nucleus and the accretion-disk wind. The two targets are among the optically brightest BAL quasars, yet they are known to be significantly X-ray weak at rest-frame 2-10 keV (16-120 times fainter than typical quasars). We would expect to obtain ≈400-600 hard X-ray (≳10 keV) photons with NuSTAR, provided that these photons are not significantly absorbed (N_H ≲ 10²⁴ cm⁻²).
However, both BAL quasars are only detected in the softer NuSTAR bands (e.g., 4-20 keV) but not in its harder bands (e.g., 20-30 keV), suggesting that either the shielding gas is highly Compton-thick or the two targets are intrinsically X-ray weak. We constrain the column densities for both to be N_H ≈ 7 × 10²⁴ cm⁻² if the weak hard X-ray emission is caused by obscuration from the shielding gas. We discuss a few possibilities for how PG 1004+130 could have Compton-thick shielding gas without strong Fe Kα line emission; dilution from jet-linked X-ray emission is one likely explanation. We also discuss the intrinsic X-ray weakness scenario based on a coronal-quenching model relevant to the shielding gas and disk wind of BAL quasars. Motivated by our NuSTAR results, we perform a Chandra stacking analysis with the Large Bright Quasar Survey BAL quasar sample and place statistical constraints upon the fraction of intrinsically X-ray weak BAL quasars; this fraction is likely 17%-40%. 19. Energy Technology Data Exchange (ETDEWEB) Luo, B.; Brandt, W. N. [Department of Astronomy and Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA 16802 (United States); Alexander, D. M.; Hickox, R. [Department of Physics, Durham University, South Road, Durham DH1 3LE (United Kingdom); Harrison, F. A.; Fuerst, F.; Grefenstette, B. W.; Madsen, K. K. [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Stern, D. [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States); Bauer, F. E. [Departamento de Astronomia y Astrofisica, Pontificia Universidad Catolica de Chile, Casilla 306, Santiago 22 (Chile); Boggs, S. E.; Craig, W. W. [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States); Christensen, F. E.
[DTU Space-National Space Institute, Technical University of Denmark, Elektrovej 327, DK-2800 Lyngby (Denmark); Comastri, A. [INAF-Osservatorio Astronomico di Bologna, Via Ranzani 1, I-40127 Bologna (Italy); Fabian, A. C. [Institute of Astronomy, Madingley Road, Cambridge CB3 0HA (United Kingdom); Farrah, D. [Department of Physics, Virginia Tech, Blacksburg, VA 24061 (United States); Fiore, F. [Osservatorio Astronomico di Roma, via Frascati 33, I-00040 Monteporzio Catone (Italy); Hailey, C. J. [Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027 (United States); Matt, G. [Dipartimento di Matematica e Fisica, Universita degli Studi Roma Tre, via della Vasca Navale 84, I-00146 Roma (Italy); Ogle, P. [IPAC, California Institute of Technology, Mail Code 220-6, Pasadena, CA 91125 (United States); and others 2013-08-01 We present Nuclear Spectroscopic Telescope Array (NuSTAR) hard X-ray observations of two X-ray weak broad absorption line (BAL) quasars, PG 1004+130 (radio loud) and PG 1700+518 (radio quiet). Many BAL quasars appear X-ray weak, probably due to absorption by the shielding gas between the nucleus and the accretion-disk wind. The two targets are among the optically brightest BAL quasars, yet they are known to be significantly X-ray weak at rest-frame 2-10 keV (16-120 times fainter than typical quasars). We would expect to obtain ≈400-600 hard X-ray (≳10 keV) photons with NuSTAR, provided that these photons are not significantly absorbed (N_H ≲ 10²⁴ cm⁻²). However, both BAL quasars are only detected in the softer NuSTAR bands (e.g., 4-20 keV) but not in its harder bands (e.g., 20-30 keV), suggesting that either the shielding gas is highly Compton-thick or the two targets are intrinsically X-ray weak.
We constrain the column densities for both to be N_H ≈ 7 × 10²⁴ cm⁻² if the weak hard X-ray emission is caused by obscuration from the shielding gas. We discuss a few possibilities for how PG 1004+130 could have Compton-thick shielding gas without strong Fe Kα line emission; dilution from jet-linked X-ray emission is one likely explanation. We also discuss the intrinsic X-ray weakness scenario based on a coronal-quenching model relevant to the shielding gas and disk wind of BAL quasars. Motivated by our NuSTAR results, we perform a Chandra stacking analysis with the Large Bright Quasar Survey BAL quasar sample and place statistical constraints upon the fraction of intrinsically X-ray weak BAL quasars; this fraction is likely 17%-40%. 20. Effects of Shear Fracture on In-depth Profile Modification of Weak Gels Institute of Scientific and Technical Information of China (English) Li Xianjie; Song Xinwang; Yue Xiang'an; Hou Jirui; Fang Lichun; Zhang Huazhen 2007-01-01 Two sand packs were filled with fine glass beads and quartz sand respectively. The characteristics of crosslinked polymer flowing through the sand packs as well as the influence of shear fracture of porous media on the in-depth profile modification of the weak gel generated from the crosslinked polymer were investigated. The results indicated that under the dynamic condition crosslinking reaction happened in both sand packs, and the weak gels in these two cases became small gel particles after water flooding. The differences were: the dynamic gelation time in the quartz sand pack was longer than that in the glass bead pack. Residual resistance factor (FRR) caused by the weak gel in the quartz sand pack was smaller than that in the glass bead pack. The weak gel became gel particles after being scoured by subsequent flood water.
A weak gel with uniform apparent viscosity and sealing characteristics was generated in every part of the glass bead pack, which could not only move deeply into the sand pack but also seal the high capacity channels again when it reached the deep part. The weak gel performed in-depth profile modification in the glass bead pack, while in the quartz sand pack, the weak gel was concentrated within 100 cm of the entrance of the sand pack. When propelled by the subsequent flood water, the weak gel could move towards the deep part of the sand pack but then became tiny gel particles and could not effectively seal the high capacity channels there. The in-depth profile modification of the weak gel was very weak in the quartz sand pack. It was the shear fracture of porous media that mainly affected the properties and weakened the in-depth profile modification of the weak gel. 1. Detection of weak optical signals with a laser amplifier International Nuclear Information System (INIS) Kozlovskii, A. V. 2006-01-01 Detection of weak and extremely weak light signals amplified by linear and four-wave mixing laser amplifiers is analyzed. Photoelectron distributions are found for different input photon statistics over a wide range of gain. Signal-to-noise ratios are calculated and analyzed for preamplification schemes using linear and four-wave mixing amplifiers. Calculations show that the high signal-to-noise ratio (much higher than unity), ensuring reliable detection of weak input signals, can be attained only with a four-wave mixing preamplification scheme. Qualitative dependence of the signal-to-noise ratio on the quantum statistical properties of both signal and idler waves is demonstrated 2. Practical advantages of almost-balanced-weak-value metrological techniques Science.gov (United States) Martínez-Rincón, Julián; Chen, Zekai; Howell, John C.
2017-06-01 Precision measurements of ultrasmall linear velocities of one of the mirrors in a Michelson interferometer are performed using two different weak-value techniques. We show that the technique of almost-balanced weak values (ABWV) offers practical advantages over the technique of weak-value amplification, resulting in larger signal-to-noise ratios and the possibility of longer integration times due to robustness to slow drifts. As an example of the performance of the ABWV protocol we report a velocity sensitivity of 60 fm/s after 40 h of integration time. The sensitivity of the Doppler shift due to the moving mirror is 150 nHz. 3. Weak convergence of Jacobian determinants under asymmetric assumptions Directory of Open Access Journals (Sweden) Teresa Alberico 2012-05-01 Full Text Available Let $\Omega$ be a bounded open set in $\mathbb{R}^2$ sufficiently smooth, and let $f_k=(u_k,v_k)$ and $f=(u,v)$ be mappings belonging to the Sobolev space $W^{1,2}(\Omega,\mathbb{R}^2)$. We prove that if the sequence of Jacobians $J_{f_k}$ converges to a measure $\mu$ in the sense of measures, and if one allows different assumptions on the two components of $f_k$ and $f$, e.g. $$u_k \rightharpoonup u \;\;\mbox{weakly in}\;\; W^{1,2}(\Omega), \qquad v_k \rightharpoonup v \;\;\mbox{weakly in}\;\; W^{1,q}(\Omega)$$ for some $q\in(1,2)$, then $$d\mu=J_f\,dz.$$ Moreover, we show that this result is optimal in the sense that the conclusion fails for $q=1$. On the other hand, we prove that the identity $d\mu=J_f\,dz$ remains valid also if one considers the case $q=1$, but it is necessary to require that $u_k$ weakly converges to $u$ in a Zygmund-Sobolev space with a slightly higher degree of regularity than $W^{1,2}(\Omega)$, and precisely $$u_k \rightharpoonup u \;\;\mbox{weakly in}\;\; W^{1,L^2\log^\alpha L}(\Omega)$$ for some $\alpha>1$.
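The velocity and frequency sensitivities quoted in the ABWV record (no. 2 above) are consistent with the standard Michelson Doppler relation Δf = 2v/λ if one assumes a laser wavelength of roughly 800 nm; the wavelength is an assumption here, as the record does not state it:

```python
# Doppler shift of light reflected off a mirror moving at velocity v:
# delta_f = 2 * v / wavelength (factor 2 from reflection off the moving mirror).
wavelength = 800e-9   # m -- assumed laser wavelength, not given in the record
v = 60e-15            # m/s -- quoted ABWV velocity sensitivity (60 fm/s)
delta_f = 2 * v / wavelength
print(delta_f)        # ~1.5e-7 Hz, i.e. the quoted 150 nHz
```

With these numbers the relation reproduces the quoted 150 nHz exactly, which suggests an ~800 nm source was used.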
4. "Alpha decays" of Be-10(Lambda) and B-10(Lambda) hypernuclei on a nuclotron: A clue to some puzzles in nonleptonic processes
Czech Academy of Sciences Publication Activity Database
Batusov, Yu.; Lukstins, J.; Majling, Lubomír; Parfenov, AN.
2005-01-01
Vol. 36, No. 2 (2005), pp. 169-190 ISSN 1063-7796 R&D Projects: GA ČR GA202/02/0930 Institutional research plan: CEZ:AV0Z10480505 Keywords: nonmesonic weak decay * hyperon-nucleon interaction * Lambda-N interaction Subject RIV: BE - Theoretical Physics Impact factor: 0.505, year: 2005
5. Physics with the KLOE-2 experiment at the upgraded DAΦNE
Energy Technology Data Exchange (ETDEWEB)
Amelino-Camelia, G.; Bini, C.; De Santis, A.; De Zorzi, G.; Di Domenico, A.; Fiore, S.; Franzini, P.; Gauzzi, P. [Univ. "Sapienza", Dipt. di Fisica, Roma (Italy); INFN, Sezione di Roma (Italy); Archilli, F.; Gonnella, F.; Messi, R. [Universita "Tor Vergata", Dipt. di Fisica, Roma (Italy); INFN, Sezione Roma 2, Roma (Italy); Babusci, D.; Bencivenni, G.; Bloise, C.; Bossi, F.; Campana, P.; Capon, G.; Ciambrone, P.; Czerwinski, E.; Dane, E.; De Lucia, E.; De Simone, P.; Domenici, D.; Felici, G.; Giovannella, S.; Happacher, F.; Jacewicz, M.; Lee-Franzini, J.; Miscetti, S.; Quintieri, L.; Santangelo, P.; Sarra, I.; Sciascia, B.; Venanzoni, G. [INFN, Lab. Nazionali di Frascati, Frascati (Italy); Badoni, D.; Moricciani, D. [INFN, Sezione Roma 2, Roma (Italy); Bernabeu, J. [Univ. de Valencia-CSIC, Dept. de Fisica Teorica and IFC, Valencia (Spain); Bertlmann, R.A. [Univ. of Vienna (Austria); Boito, D.R.; Escribano, R. [Univ. Autonoma de Barcelona, Grup de Fisica Teorica and IFAE, Barcelona (Spain); Bocci, V. [INFN, Sezione di Roma, Roma (Italy); Branchini, P.; Budano, A.; Graziani, E.; Nguyen, F.; Passeri, A.; Tortora, L. [INFN, Sezione Roma 3, Roma (Italy); Bulychjev, S.A.; Kulikov, V.V.; Martemianov, M.A.; Matsyuk, M.A. [Inst. for Theoretical and Experimental Physics, Moscow (Russian Federation); Ceradini, F.; Taccini, C. [INFN, Sezione Roma 3, Roma (Italy); Univ. "Roma Tre", Dipt. di Fisica, Roma (Italy); Czyz, H. [Univ. of Silesia, Inst. of Physics, Katowice (Poland); D'Ambrosio, G.; Di Donato, C. [INFN, Sezione di Napoli (Italy); De Robertis, G.; Loddo, F.; Ranieri, A. [INFN, Sezione di Bari, Bari (Italy); Di Micco, B. [INFN, Sezione Roma 3, Roma (Italy); Univ. "Roma Tre", Dipt. di Fisica, Roma (Italy); CERN, Geneve (Switzerland); Eidelman, S.I.; Fedotovich, G.V.; Lukin, P. [Budker Inst. of Nuclear Physics, Novosibirsk (Russian Federation); Erriquez, O. [INFN, Sezione di Bari (Italy); Univ. di Bari, Dipt. di Fisica (Italy)] [and others
2010-08-15
Investigation at a φ-factory can shed light on several debated issues in particle physics. We discuss: (i) recent theoretical development and experimental progress in kaon physics relevant for the Standard Model tests in the flavor sector, (ii) the sensitivity we can reach in probing CPT and Quantum Mechanics from time evolution of entangled-kaon states, (iii) the interest for improving on the present measurements of non-leptonic and radiative decays of kaons and η/η' mesons, (iv) the contribution to understand the nature of light scalar mesons, and (v) the opportunity to search for narrow di-lepton resonances suggested by recent models proposing a hidden dark-matter sector. We also report on the e⁺e⁻ physics in the continuum with the measurements of (multi)hadronic cross sections and the study of γγ processes. (orig.)
6. Physics with the KLOE-2 experiment at the upgraded DAΦNE
International Nuclear Information System (INIS)
Amelino-Camelia, G.; Bini, C.; De Santis, A.; De Zorzi, G.; Di Domenico, A.; Fiore, S.; Franzini, P.; Gauzzi, P.; Archilli, F.; Gonnella, F.; Messi, R.; Babusci, D.; Bencivenni, G.; Bloise, C.; Bossi, F.; Campana, P.; Capon, G.; Ciambrone, P.; Czerwinski, E.; Dane, E.; De Lucia, E.; De Simone, P.; Domenici, D.; Felici, G.; Giovannella, S.; Happacher, F.; Jacewicz, M.; Lee-Franzini, J.; Miscetti, S.; Quintieri, L.; Santangelo, P.; Sarra, I.; Sciascia, B.; Venanzoni, G.; Badoni, D.; Moricciani, D.; Bernabeu, J.; Bertlmann, R.A.; Boito, D.R.; Escribano, R.; Bocci, V.; Branchini, P.; Budano, A.; Graziani, E.; Nguyen, F.; Passeri, A.; Tortora, L.; Bulychjev, S.A.; Kulikov, V.V.; Martemianov, M.A.; Matsyuk, M.A.; Ceradini, F.; Taccini, C.; Czyz, H.; D'Ambrosio, G.; Di Donato, C.; De Robertis, G.; Loddo, F.; Ranieri, A.; Di Micco, B.; Eidelman, S.I.; Fedotovich, G.V.; Lukin, P.; Erriquez, O.; Essig, R.; Schuster, P.C.; Giacosa, F.; Hiesmayr, B.C.; Hoeistad, B.; Johansson, T.; Kupsc, A.; Wolke, M.; Iarocci, E.; Martini, M.; Patera, V.; Sciubba, A.; Ivashyn, S.; Jegerlehner, F.; Kluge, W.; Lehnert, R.; Mavromatos, N.E.; Sarkar, S.; Mescia, F.; Morello, G.; Schioppa, M.; Moskal, P.; Silarski, M.; Zdebik, J.; Mueller, S.; Passemar, E.; Passera, M.; Pennington, M.R.; Prades, J.; Reece, M.; Toro, N.; Versaci, R.; Wang, L.T.; Wislicki, W.
2010-01-01
Investigation at a φ-factory can shed light on several debated issues in particle physics. We discuss: (i) recent theoretical development and experimental progress in kaon physics relevant for the Standard Model tests in the flavor sector, (ii) the sensitivity we can reach in probing CPT and Quantum Mechanics from time evolution of entangled-kaon states, (iii) the interest for improving on the present measurements of non-leptonic and radiative decays of kaons and η/η' mesons, (iv) the contribution to understand the nature of light scalar mesons, and (v) the opportunity to search for narrow di-lepton resonances suggested by recent models proposing a hidden dark-matter sector. We also report on the e⁺e⁻ physics in the continuum with the measurements of (multi)hadronic cross sections and the study of γγ processes. (orig.)
7. Direct quantum process tomography via measuring sequential weak values of incompatible observables.
Science.gov (United States)
Kim, Yosep; Kim, Yong-Su; Lee, Sang-Yun; Han, Sang-Wook; Moon, Sung; Kim, Yoon-Ho; Cho, Young-Wook
2018-01-15
The weak value concept has enabled fundamental studies of quantum measurement and, recently, found potential applications in quantum and classical metrology. However, most weak value experiments reported to date do not require quantum mechanical descriptions, as they only exploit the classical wave nature of the physical systems. In this work, we demonstrate measurement of the sequential weak value of two incompatible observables by making use of two-photon quantum interference so that the results can only be explained quantum physically. We then demonstrate that the sequential weak value measurement can be used to perform direct quantum process tomography of a qubit channel. Our work not only demonstrates the quantum nature of weak values but also presents potential new applications of weak values in analyzing quantum channels and operations.
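The weak value measured in records like the one above is the quantity A_w = ⟨φ|A|ψ⟩ / ⟨φ|ψ⟩ for a pre-selected state |ψ⟩ and a post-selected state ⟨φ|. A minimal numerical sketch (plain Python, single-qubit case; the states and the near-orthogonal post-selection angle are arbitrary choices for illustration, not taken from the paper) shows how a weak value can lie far outside the eigenvalue range of the observable:

```python
import math

def weak_value(phi, A, psi):
    """A_w = <phi|A|psi> / <phi|psi> for a 2x2 observable A."""
    def matvec(M, v):
        return [M[0][0] * v[0] + M[0][1] * v[1],
                M[1][0] * v[0] + M[1][1] * v[1]]
    def inner(u, v):  # <u|v>, conjugating the bra components
        return u[0].conjugate() * v[0] + u[1].conjugate() * v[1]
    return inner(phi, matvec(A, psi)) / inner(phi, psi)

sigma_z = [[1, 0], [0, -1]]        # Pauli z; eigenvalues are only +1 and -1
s = 1 / math.sqrt(2)
psi = [s, s]                       # pre-selected state |+>
eps = 0.05                         # small, arbitrary post-selection angle
theta = -math.pi / 4 + eps         # post-selection nearly orthogonal to |+>
phi = [math.cos(theta), math.sin(theta)]

Aw = weak_value(phi, sigma_z, psi)
print(Aw)  # "anomalous" weak value, far outside [-1, 1]
```

With identical pre- and post-selection the same formula reduces to the ordinary expectation value, here ⟨+|σz|+⟩ = 0.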
8. Weakly Supervised Dictionary Learning
Science.gov (United States)
You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub
2018-05-01
We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, which can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned with a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.
9. Construction of a γ-polarimeter in search of neutral weak current effects in the nucleus 18F
International Nuclear Information System (INIS)
Mogharrab, R.
1978-07-01
A possible contribution of neutral weak currents to the nucleon-nucleon potential is to be determined by observation of the circular polarization of the 1081 keV γ-transition in ¹⁸F. A γ-polarimeter with 4 transmission magnets will be used. It is suitable for use in beam. The polarimeter has been built and the analysing power determined by using the 1119 keV γ-radiation in ⁴⁶Sc. The instrumental asymmetries are -5 . The ¹⁸F is produced in the reaction ¹⁶O(³He,pγ)¹⁸F. Observations in beam proved the expected suitability of the polarimeter. The observed spectra allow the finally required beam times to be estimated at about 2000 hours. (orig.) [de
10. Weak Molecular Interactions in Clathrin-Mediated Endocytosis
Directory of Open Access Journals (Sweden)
Sarah M. Smith
2017-11-01
Full Text Available Clathrin-mediated endocytosis is a process by which specific molecules are internalized from the cell periphery for delivery to early endosomes. The key stages in this step-wise process, from the starting point of cargo recognition, to the later stage of assembly of the clathrin coat, are dependent on weak interactions between a large network of proteins. This review discusses the structural and functional data that have improved our knowledge and understanding of the main weak molecular interactions implicated in clathrin-mediated endocytosis, with a particular focus on the two key proteins: AP2 and clathrin.
11. Shock waves in weakly compressed granular media.
Science.gov (United States)
van den Wildenberg, Siet; van Loo, Rogier; van Hecke, Martin
2013-11-22
We experimentally probe nonlinear wave propagation in weakly compressed granular media and observe a crossover from quasilinear sound waves at low impact to shock waves at high impact. We show that this crossover impact grows with the confining pressure P0, whereas the shock wave speed is independent of P0, two hallmarks of granular shocks predicted recently. The shocks exhibit surprising power law attenuation, which we model with a logarithmic law implying that shock dissipation is weak and qualitatively different from other granular dissipation mechanisms. We show that elastic and potential energy balance in the leading part of the shocks.
12. Status of chiral perturbation theory
International Nuclear Information System (INIS)
Ecker, G.
1996-10-01
A survey is made of semileptonic and nonleptonic kaon decays in the framework of chiral perturbation theory. The emphasis is on what has been done rather than how it was done. The theoretical predictions are compared with available experimental results. (author)
13. Could unstable relic particles distort the microwave background radiation?
International Nuclear Information System (INIS)
Dar, A.; Loeb, A.; Nussinov, S.
1989-01-01
Three general classes of possible scenarios for the recently reported distortion of the microwave background radiation (MBR) via decaying relic weakly interacting particles are analyzed. The analysis shows that such particles could not reheat the universe and cause the spectral distortion of the MBR. Gravitational processes such as the early formation of massive black holes may still be plausible energy sources for producing the reported spectral distortion of the MBR at an early cosmological epoch. 24 references
14. Weak values of a quantum observable and the cross-Wigner distribution
International Nuclear Information System (INIS)
Gosson, Maurice A. de; Gosson, Serge M. de
2012-01-01
We study the weak values of a quantum observable from the point of view of the Wigner formalism. The main actor here is the cross-Wigner transform of two functions, which is in disguise the cross-ambiguity function familiar from radar theory and time-frequency analysis. It allows us to express weak values using a complex probability distribution. We suggest that our approach seems to confirm that the weak value of an observable is, as conjectured by several authors, due to the interference of two wavefunctions, one coming from the past, and the other from the future. -- Highlights: ► Application of the cross-Wigner transform to a redefinition of the weak value of a quantum observable. ► Phase space approach to weak values, associated with a complex probability distribution. ► Opens perspectives for the study of retrodiction.
15. Individual chaos implies collective chaos for weakly mixing discrete dynamical systems
International Nuclear Information System (INIS)
Liao Gongfu; Ma Xianfeng; Wang Lidong
2007-01-01
Let (X, d) be a metric space and (X, f) a discrete dynamical system, where f: X → X is a continuous function. Let f-bar denote the natural extension of f to the space of all non-empty compact subsets of X endowed with the Hausdorff metric induced by d. In this paper we investigate some dynamical properties of f and f-bar. It is proved that f is weakly mixing (mixing) if and only if f-bar is weakly mixing (mixing, respectively). From this, we deduce that weak-mixing of f implies transitivity of f-bar; further, if f is mixing or weakly mixing, then chaoticity of f (individual chaos) implies chaoticity of f-bar (collective chaos), and if X is a closed interval then f-bar is chaotic (in the sense of Devaney) if and only if f is weakly mixing
16. Orbits in weak and strong bars
CERN Document Server
Contopoulos, George
1980-01-01
The authors study the plane orbits in simple bar models embedded in an axisymmetric background when the bar density is about 1% (weak), 10% (intermediate) or 100% (strong bar) of the axisymmetric density. Most orbits follow the stable periodic orbits. The basic families of periodic orbits are described. In weak bars with two Inner Lindblad Resonances there is a family of stable orbits extending from the center up to the Outer Lindblad Resonance. This family contains the long period orbits near corotation. Other stable families appear between the Inner Lindblad Resonances, outside the Outer Lindblad Resonance, around corotation (short period orbits) and around the center (retrograde). Some families become unstable or disappear in strong bars. A comparison is made with cases having one or no Inner Lindblad Resonance. (12 refs).
17. Reception of low-intensity millimeter-wave electromagnetic radiation by the electroreceptors in skates
International Nuclear Information System (INIS)
Akoev, G.N.; Avelev, V.D.
1995-01-01
Low intensity millimeter-wave electromagnetic radiation of less than 10 mW cm⁻² power intensity has a nonthermal effect on the body and it is widely used in medical practice for treatment of various diseases. Nevertheless, the effect of EMR on biological tissues is not understood. The skin and its sensory receptors are considered to be responsible for EMR reception, but this has yet to be confirmed. The present experiments were designed to study the effect of millimeter-wave electromagnetic radiation on the ampullae of Lorenzini in skates, which are very sensitive to weak electrical stimuli at low frequency. (author)
18. Topic Detection Based on Weak Tie Analysis: A Case Study of LIS Research
Directory of Open Access Journals (Sweden)
Ling Wei
2016-11-01
Full Text Available Purpose: Based on the weak tie theory, this paper proposes a series of connection indicators of weak tie subnets and weak tie nodes to detect research topics, recognize their connections, and understand their evolution. Design/methodology/approach: First, keywords are extracted from article titles and preprocessed. Second, high-frequency keywords are selected to generate weak tie co-occurrence networks. By removing the internal lines of clustered sub-topic networks, we focus on the analysis of weak tie subnets' composition and functions and the weak tie nodes' roles. Findings: The research topics' clusters and themes changed yearly; the subnets clustered with technique-related and methodology-related topics have been the core, important subnets for years; while close subnets are highly independent, research topics are generally concentrated and most topics are application-related; the roles and functions of nodes and weak ties are diversified. Research limitations: The parameter values are somewhat inconsistent; the weak tie subnets and nodes are classified based on empirical observations, and the conclusions are not verified or compared to other methods. Practical implications: The research is valuable for detecting important research topics as well as their roles, interrelations, and evolution trends. Originality/value: To contribute to the strength of weak tie theory, the research translates weak and strong ties concepts to co-occurrence strength, and analyzes weak ties' functions. Also, the research proposes a quantitative method to classify and measure the topics' clusters and nodes.
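The keyword co-occurrence networks with weak and strong ties described in the record above can be sketched in a few lines: extract keywords per title, count keyword pairs, and split ties by co-occurrence strength. This is only an illustrative toy, the titles, stopword list, and strong-tie threshold are invented here, not taken from the paper:

```python
from collections import Counter
from itertools import combinations

# Toy corpus of article titles (hypothetical, for illustration only)
titles = [
    "topic detection in digital libraries",
    "topic detection using co-occurrence networks",
    "weak tie analysis of research networks",
    "co-occurrence networks for research evaluation",
]

# 1. extract keywords (here: whitespace tokens minus a tiny stopword list)
stop = {"in", "of", "for", "using"}
docs = [[w for w in t.split() if w not in stop] for t in titles]

# 2. count keyword co-occurrence within each title
ties = Counter()
for doc in docs:
    for a, b in combinations(sorted(set(doc)), 2):
        ties[(a, b)] += 1

# 3. classify ties: strong if the pair co-occurs more than once, else weak
strong = {pair for pair, n in ties.items() if n > 1}
weak = {pair for pair, n in ties.items() if n == 1}
print(len(strong), len(weak))
```

Removing the strong (intra-cluster) ties and keeping only `weak` is the analogue of the paper's step of deleting the internal lines of clustered sub-topic networks before analyzing the weak-tie subnets.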
19. Shock velocity in weakly ionized nitrogen, air, and argon
International Nuclear Information System (INIS)
Siefert, Nicholas S.
2007-01-01
The goal of this research was to determine the principal mechanism(s) for the shock velocity increase in weakly ionized gases. This paper reports experimental data on the propagation of spark-generated shock waves (1< Mach<3) into weakly ionized nitrogen, air, and argon glow discharges (1 < p<20 Torr). In order to distinguish between effects due solely to the presence of electrons and effects due to heating of the background gas via elastic collisions with electrons, the weakly ionized discharge was pulsed on/off. Laser deflection methods determined the shock velocity, and the electron number density was collected using a microwave hairpin resonator. In the afterglow of nitrogen, air, and argon discharges, the shock velocity first decreased, not at the characteristic time for electrons to diffuse to the walls, but rather at the characteristic time for the centerline gas temperature to equilibrate with the wall temperature. These data support the conclusion that the principal mechanism for the increase in shock velocity in weakly ionized gases is thermal heating of the neutral gas species via elastic collisions with electrons
20. Growth of the Female Professional in the Radiation Safety Department
International Nuclear Information System (INIS)
Yoon, J.
2015-01-01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8991216421127319, "perplexity": 3750.787579421973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663039492.94/warc/CC-MAIN-20220529041832-20220529071832-00610.warc.gz"} |
https://www.physicsforums.com/threads/problem-on-electric-field.86994/ | # Problem on Electric Field
• #1
An alpha particle approaches a gold atom head on, stops, and turns around at a distance of 10^-11m from the nucleus. What is the electric field due to the gold nucleus at this point? Ignore the effects of the gold atom's orbiting electrons. What is the acceleration of the alpha particle when it is stopped? An alpha particle is a helium nucleus, composed of two protons and two neutrons.
Can anyone help me with this problem? I'm just not understanding it...any help would be appreciated.
• #2
Doc Al
Mentor
The force momentarily stopping the alpha particle is the repulsive electric force that the gold nucleus exerts on the alpha particle. Use Coulomb's law to find the force between the charges at the given distance. (What's the charge of the gold nucleus? What's the charge of the alpha particle?) Then apply Newton's 2nd law to find the acceleration.
• #3
Astronuc
Staff Emeritus
The alpha particle (nucleus of He atom) has + charge proportional to Z=2 (2 protons) and the gold nucleus has + charge proportional to Z=79 (79 protons).
So this becomes an electrostatic force problem - the alpha stops.
Remember coulombs law and coulomb force.
What is the electric field caused by 79q, where q is the magnitude of charge on a proton?
acceleration, a = F/m.
• #4
Thanks for the replies!
Should I use F=kq1q2/r^2? then plug it into F=ma?
or should I find E=kq/r^2 and plug it into F=qE?
Sorry if these are very simplistic questions...Physics is hard for me =/
• #5
Doc Al
Mentor
echau said:
Should I use F=kq1q2/r^2? then plug it into F=ma?
or should I find E=kq/r^2 and plug it into F=qE?
The two approaches are identical. Take your pick.
• #6
thank you :) i really appreciate the help!
• #7
lightgrav
Homework Helper
The two approaches are (pedagogically) NOT identical ...
the first approach ignores the E-field, which WAS the Question.
• #8
Doc Al
Mentor
Good point, since one of the questions was to find the electric field.
As far as figuring out the acceleration, the two methods for finding the force are identical. But since you have to find the electric field anyway, obviously you would use that result to finish the problem.
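For concreteness, here is a quick numeric check of the method discussed in the thread (my own addition, using standard SI constants; not part of the original posts):

```python
k = 8.9875517873681764e9     # Coulomb constant, N*m^2/C^2
e = 1.602176634e-19          # elementary charge, C
r = 1.0e-11                  # turning-point distance, m
m_alpha = 6.6446573357e-27   # alpha-particle mass, kg

E = k * (79 * e) / r**2      # field of the gold nucleus (Z = 79) at r, ~1.14e15 N/C
F = (2 * e) * E              # force on the alpha (Z = 2); identical to k*q1*q2/r^2
a = F / m_alpha              # Newton's 2nd law, ~5.5e22 m/s^2
```

As noted above, computing F = q1*q2*k/r^2 directly or going through E = kq/r^2 and F = qE gives the same acceleration; the second route also answers the field part of the question.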
http://www.amsi.org.au/ESA_Resources/Q2012/Q2012_4.html | Equivalent linear algebraic expressions
## Graphical explanation
#### Solution
If you sketch the linear graphs with equations y = 2 − 3t and y = −3t + 2 you will get the same line. They both have a y-intercept of 2 and a gradient of −3.
Hence 2 − 3t is equivalent to −3t + 2.
The graphs of y = 2 − 3t and y = −3t + 2 are shown below.
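The equivalence can also be spot-checked numerically (an added sketch, not part of the original solution):

```python
# 2 - 3t and -3t + 2 agree for every t, and the common line has
# y-intercept 2 and gradient -3.
f = lambda t: 2 - 3 * t
g = lambda t: -3 * t + 2

for t in [-2.0, -0.5, 0.0, 1.0, 3.7]:
    assert f(t) == g(t)

intercept = f(0)        # y-intercept: 2
gradient = f(1) - f(0)  # rise over a run of 1: -3
```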
http://2014.igem.org/wiki/index.php?title=Team:Paris_Saclay/Team&oldid=351302&printable=yes | # Team
This year, curious and imaginative students from the departments of biology, computer science, physics and chemistry, mathematics, and mechanical engineering came together to form the third generation of the iGEM Paris-Saclay team.
http://internetdo.com/2023/01/math-quiz-10-ct-in-the-expansion-left-2a-b-right5-the-coefficient-of-the-3rd-term-is-equal-to/ | ## (Math Quiz 10 – CT) In the expansion ({left( {2a – b} right)^5}), the coefficient of the 3rd term is equal to:
• Question:
In the expansion $${\left( {2a – b} \right)^5}$$, the coefficient of the third term is equal to:
Reference explanation:
We have: $${\left( {2a – b} \right)^5}$$
$$= C_5^0{\left( {2a} \right)^5} - C_5^1{\left( {2a} \right)^4}b + C_5^2{\left( {2a} \right)^3}{b^2} - \ldots$$
Therefore the coefficient of the third term is equal to $$C_5^2.8 = 80$$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9671834707260132, "perplexity": 1439.7658726716709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943589.10/warc/CC-MAIN-20230321002050-20230321032050-00144.warc.gz"} |
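The coefficient can be double-checked computationally (an added sketch; the original solution is purely algebraic):

```python
from math import comb

# The k-th term (k = 0..5) of (2a - b)^5 is C(5,k) * (2a)^(5-k) * (-b)^k,
# so the third term (k = 2) carries the coefficient C(5,2) * 2^3 = 80 on a^3 b^2.
third_coeff = comb(5, 2) * 2**3

# sanity check of the full expansion at sample values a = 3, b = 5
a, b = 3, 5
lhs = (2 * a - b) ** 5
rhs = sum(comb(5, k) * (2 * a) ** (5 - k) * (-b) ** k for k in range(6))
```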
http://mathhelpforum.com/pre-calculus/154381-finding-exact-expression-tan-pi-12-a.html | # Thread: Finding an exact expression for tan(pi/12)
1. ## Finding an exact expression for tan(pi/12)
The question:
Let $z = -2 + 2i$ and $w = -1 - \sqrt{3}i$.
i) Write z and w in polar form and thus write zw in polar form.
ii) Hence find an exact expression for $tan(\frac{\pi}{12})$
My attempt:
i) This is simple, $z = \sqrt{8}e^{\frac{3\pi}{4}i}; w = 2e^{\frac{-2\pi}{3}i} ; zw = 2\sqrt{8}e^{\frac{\pi}{12}i}$
ii) I'm not sure how to go about this. I notice that the argument of the previous answer matches that of this question. However, I do not know how to attempt it.
Any help would be great!
2. If you visualize this, you get a triangle, one point on the origin. The vector $zw$ makes an angle of $\frac{1}{12}\pi$ with the x-axis. Therefore, if you get an $a+bi$ form of $2\sqrt{8}e^{\frac{1}{12}\pi i}$ you have $\tan(\frac{1}{12}\pi) = \frac{b}{a}$
You can get the $a+bi$ form by just multiplying $z$ and $w$ and working out the brackets.
I hope this is clear, if not, please say so.
3. Originally Posted by Glitch
The question:
Let $z = -2 + 2i$ and $w = -1 - \sqrt{3}i$.
i) Write z and w in polar form and thus write zw in polar form.
ii) Hence find an exact expression for $tan(\frac{\pi}{12})$
My attempt:
i) This is simple, $z = \sqrt{8}e^{\frac{3\pi}{4}i}; w = 2e^{\frac{-2\pi}{3}i} ; zw = 2\sqrt{8}e^{\frac{\pi}{12}i}$
ii) I'm not sure how to go about this. I notice that the argument of the previous answer matches that of this question. However, I do not know how to attempt it.
Any help would be great!
$\displaystyle\ \tan\left(\frac{\pi}{12}\right)=\frac{\sin\left(\frac{\pi}{12}\right)}{\cos\left(\frac{\pi}{12}\right)}$
$e^{(i\theta)}=cos(\theta)+isin(\theta)$
so if you multiply out
$zw=(-2+2i)(-1-\sqrt{3}i)$ you will be able to continue by comparing terms
4. Aha, that's interesting. Thanks!
5. Originally Posted by Glitch
The question:
Let $z = -2 + 2i$ and $w = -1 - \sqrt{3}i$.
i) Write z and w in polar form and thus write zw in polar form.
ii) Hence find an exact expression for $tan(\frac{\pi}{12})$
My attempt:
i) This is simple, $z = \sqrt{8}e^{\frac{3\pi}{4}i}; w = 2e^{\frac{-2\pi}{3}i} ; zw = 2\sqrt{8}e^{\frac{\pi}{12}i}$
ii) I'm not sure how to go about this. I notice that the argument of the previous answer matches that of this question. However, I do not know how to attempt it.
Any help would be great!
You don't need to go into the Complex number system here.
Just use the angle sum identity for tangent...
$\tan{(\alpha \pm \beta)} = \frac{\tan{\alpha} \pm \tan{\beta}}{1 \mp \tan{\alpha}\tan{\beta}}$.
Here $\tan{\frac{\pi}{12}} = \tan{\left(\frac{\pi}{3} - \frac{\pi}{4}\right)}$
$= \frac{\tan{\frac{\pi}{3}} - \tan{\frac{\pi}{4}}}{1 + \tan{\frac{\pi}{3}}\tan{\frac{\pi}{4}}}$
6. However, the question really is specifically asking you to go the route of the polar form of the complex numbers. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9910032153129578, "perplexity": 220.07401600374487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607731.0/warc/CC-MAIN-20170524020456-20170524040456-00080.warc.gz"} |
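Both routes in the thread can be checked numerically (an added sketch, not part of the original posts):

```python
import cmath
import math

z = complex(-2, 2)
w = complex(-1, -math.sqrt(3))
zw = z * w                      # works out to (2 + 2*sqrt(3)) + (2*sqrt(3) - 2)i

# the argument of zw is pi/12, as found in part i)
assert abs(cmath.phase(zw) - math.pi / 12) < 1e-12

# comparing terms: tan(pi/12) = b/a from the a + bi form
tan_pi_12 = zw.imag / zw.real
# rationalizing (2*sqrt(3) - 2) / (2 + 2*sqrt(3)) gives the exact value 2 - sqrt(3),
# which is also what the angle-difference identity tan(pi/3 - pi/4) simplifies to
assert abs(tan_pi_12 - (2 - math.sqrt(3))) < 1e-12
```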
http://math.stackexchange.com/users/1823/roar-stovner?tab=activity | # Roar Stovner
# 17 Actions
- Jun 7: awarded Critic
- May 31: awarded Scholar
- May 31: accepted "Is there a name for a collection of open sets where arbitrary intersections are open?"
- May 30: comment on "Is there a name for a collection of open sets where arbitrary intersections are open?": Excellent! On the odd chance that somebody comes along with a better answer, I'll wait a little before accepting.
- May 30: awarded Student
- May 30: asked "Is there a name for a collection of open sets where arbitrary intersections are open?"
- May 20: awarded Editor
- May 20: revised "homotopy direct limits": Included pointer to the discussion in the comments.
- May 20: comment on "homotopy direct limits": I believe you are correct. Your supposition that $X_\Sigma$ is a h-direct limit of the $U_i$ is at least valid. The space $X_\Sigma$ is in fact the direct limit of the $U_i$ and since all the inclusions $U_i \hookrightarrow U_{i+1}$ are cofibrations the direct limit and h-direct limit are the same. I will edit my answer and point the reader to your argument.
- May 19: comment on "homotopy direct limits": Yes, that's exactly the map I had in mind, Tim! This answer to another question is related to our situation, but there the existence of a homotopy inverse is guaranteed by an algebraic argument.
- May 19: answered "homotopy direct limits"
- Jun 11: comment on "How to study math to really understand it and have a healthy lifestyle with free time?": This really struck a nerve in the community here; no other questions produces 6 lengthy answers in 6 hours! So @Leon: Cherish the fact that you're not alone in this situation until you find a remedy. :)
- Jan 28: awarded Supporter
- Nov 10: answered "How many ways can I make six moves on a Rubik's cube?"
- Oct 17: comment on "Book about technical and academic writing": reddit.com/r/math/comments/9wdzo/… If you google the title of the book you will find one version, but the typesetting is so ugly you wouldn't want to read it!
- Oct 17: awarded Teacher
- Oct 17: answered "Book about technical and academic writing"
https://cstheory.stackexchange.com/questions/48917/what-does-x-y-notation-mean | # What does x.y notation mean?
In Harper's PFPL (Ed. 2, top of page 8), this notation is used but I don't see a definition. What does $$x.y$$ mean?
This is the notation for Harper's "abstract binding structures": x.t represents the binding site of a variable x and the term t the variable scopes over.
Apparently you are in the parts that define variable bindings. $$\mathcal{B}[\mathcal{X}]_s$$ appears to be the set of terms, or binding structures at sort $$s$$ whose free variables are among $$\mathcal{X}$$. So I would expect (but I don't have the book) that there is in fact an explanation for this notation close by. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.828917920589447, "perplexity": 640.037985348746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301309.22/warc/CC-MAIN-20220119094810-20220119124810-00300.warc.gz"} |
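To make the binder notation concrete, here is a minimal sketch of abstract binding structures (my own illustration; the representation choices are not Harper's):

```python
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Abs:            # "x.t": the variable var is bound inside body
    var: str
    body: object

@dataclass
class Op:             # an operator applied to arguments (binds nothing)
    name: str
    args: tuple

def free_vars(t):
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Abs):
        # the whole point of x.t: x is removed from the free variables of t
        return free_vars(t.body) - {t.var}
    out = set()
    for arg in t.args:
        out |= free_vars(arg)
    return out

# in "x. x + y", x is bound and y remains free
term = Abs("x", Op("plus", (Var("x"), Var("y"))))
```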
https://infoscience.epfl.ch/record/213792 | Infoscience
Journal article
# Comparing XMCD and DFT with STM spin excitation spectroscopy for Fe and Co adatoms on
We report on the magnetic properties of Fe and Co adatoms on a Cu2N/Cu(100)-c(2 x 2) surface investigated by x-ray magnetic dichroism measurements and density functional theory (DFT) calculations including the local coulomb interaction. We compare these results with properties formerly deduced from STM spin excitation spectroscopy (SES) performed on the individual adatoms. In particular we focus on the values of the local magnetic moments determined by XMCD compared to the expectation values derived from the description of the SES data. The angular dependence of the projected magnetic moments along the magnetic field, as measured by XMCD, can be understood on the basis of the SES Hamiltonian. In agreement with DFT, the XMCD measurements show large orbital contributions to the total magnetic moment for both magnetic adatoms. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9124703407287598, "perplexity": 1847.5464824276319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542686.84/warc/CC-MAIN-20161202170902-00101-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://physics.stackexchange.com/questions/494931/will-an-object-dropped-from-a-high-building-displace-due-to-the-earths-rotation | # Will an object dropped from a high building displace due to the Earth's rotation?
I read that in the 16th and 17th century, the question of whether the Earth rotates around its axis or all celestial bodies rotate around it was extensively debated. One of the anti-rotation arguments was that objects dropped from high places should move from the true vertical due to the ground having traveled meanwhile. Since the Earth does rotate, I'm trying to quantify the effect and whether it could be measured at the time. Would appreciate my argument being checked for basic sanity and correctness.
Suppose I climb a 100 meter tower and, standing near its edge, let go of a brick. The Earth rotates with the angular velocity $$\omega = \frac{2\pi}{24\cdot 3600\ \mathrm{s}}$$, and the speed of the base of the tower vs. the brick at the top before I let go of it are $$R\omega$$ vs. $$(R+100)\omega$$. This means that horizontally the brick is moving with the speed of $$100\omega \approx 0.00727\ \mathrm{m/s}$$ relative to the ground, while vertically it's dropping with the uniform acceleration $$g=9.8\ \mathrm{m/s^2}$$ and will hit the ground in $$4.52$$ seconds, having travelled $$0.03\ \mathrm{m}$$ horizontally. If I do the same from the Empire State Building ($$381\ \mathrm{m}$$), it comes out as about $$22\,\mathrm{cm}$$ (if the height grows by a factor of $$X$$, the distance travelled horizontally grows by $$X^{3/2}$$, the speed increase contributing $$X$$ and the time to drop $$\sqrt{X}$$).
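These back-of-the-envelope numbers are easy to reproduce (an added sketch of the question's own constant-relative-speed estimate; as the answer below explains, the true buildup is not linear, and the exact no-air result comes out at about two thirds of these figures):

```python
import math

omega = 2 * math.pi / (24 * 3600)   # Earth's angular velocity, rad/s
g = 9.8                             # m/s^2

def naive_drift(h):
    """Drift assuming the relative horizontal speed h*omega stays constant."""
    t_fall = math.sqrt(2 * h / g)
    return h * omega * t_fall

drift_tower = naive_drift(100)      # ~0.033 m for the 100 m tower
drift_esb = naive_drift(381)        # ~0.24 m (the ~22 cm in the text comes from
                                    #  rounding 0.033 m down to 0.03 m first)
```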
So my questions:
1. Is this analysis basically sound? I realize I'm using the uniform vertical acceleration which is an approximation, and I'm using the circular motion of the Earth's surface when estimating velocities, but not when using them to find time and distance traveled. I guess the more exact calculation would be to treat the brick as a satellite on an elliptical orbit around the center of the Earth with the given initial position and velocity, find its orbit equation or simulate numerically, and find its intersection with the Earth's surface. It seems a bit daunting a task at the moment. Would that give substantially different results?
2. Is air resistance an important factor to take into consideration, for estimating the horizontal distance traveled? (if it is, I guess this still answers the same question for the Moon).
3. Assuming my estimates aren't too far-off, is that something that can be tested in a real experiment either now or in the 17th century? I think that'd depend on how exactly a true vertical we can guarantee the tower's wall to be?
• For a complete answer I would have to do some math, but the analysis seems basically right. A more exact method would be to use the Coriolis force, and I'd expect air resistance to be somewhat important, given that objects usually reach their terminal velocities pretty quickly. – Javier Aug 2 '19 at 22:10
• Experiments of this kind are hard enough that people were still carrying them out in the 19th century because while many experimenters detected some effect their results varied considerably. There are simply a lot of confounding factors. One major trick is to drop things down a well or mine shaft rather than in the open air. – dmckee --- ex-moderator kitten Aug 2 '19 at 22:31
• You're assuming that that the building is on the Equator? Is R the radius of the earth, or the radius of the circle the building is following.? – DJohnM Jan 19 at 23:20
This type of experiment has been attempted. Small objects (presumably small metal balls) were dropped down a vertical mineshaft. The hope was that the air mass in the shaft would be sufficiently stationary to not affect the falling motion too much. The spread was considerable, the balls hitting the bottom of the shaft in an area several tens of centimeters across.
There was a bias in the distribution of the landing spots, consistent with what you would expect given the Earth's rotation, but that could easily have been a fluke.
Historical article by Alexandre Moatti titled 'Coriolis, the birth of a force'
According to the information in the article the attempt was by Ferdinand Reich, in 1833, and the mineshaft was 158 m. deep.
Google books has an integral scan of the report that Ferdinand Reich wrote. Use the search term "Fallversuche über die Umdrehung der Erde"
I also see mention of another vertical mineshaft setup, with a fall of 90 meters, also in the early 1800s
Feasibility:
What we see that even in the best of circumstances the noise is several times larger than the signal. (Spread of tens of centimeters while the expected effect is centimeters.)
About air resistance: The effect of air resistance is very much not negligable. The air resistance slows down the fall, so that it takes longer, so there is more time to accumulate sideways displacement.
On how to obtain a good approximation of the sideways displacement.
Let me refer to the objects that are released as 'pellets'.
For simplification take the case where the point of release is at the equator. The pellet is released vertically, but since the Earth is rotating the pellet does have a velocity with respect to the center of the Earth.
As you write: if we neglect air resistance effect then for the duration of the fall the motion of the pellet is orbital motion. It's not much of an orbit, it intersects the Earth surface within seconds, but still: for the duration of the fall the motion is orbital motion.
To emphasize that it's orbital motion:
Think for example about the orbit of Halley's comet. All the way from the outer distance to its point of closest approach to the Sun Halley's comet is being accelerated by the gravitational pull of the Sun. This acceleration is continuously increasing the angular velocity of Halley's comet.
So:
Initially the the falling pellet is moving parallel to the local plumb line, because initially the angular velocity of the pellet is the same as the angular velocity of the Earth as a whole. As the Earth's gravity pulls the pellet closer the angular velocity of the pellet increases.
(Again: initially the falling pellet is moving parallel to the local plumb line. The buildup of sideways displacement is not linear. This shows that a calculation of the sideways displacement that has the displacement increasing linearly is wrong.)
Obtaining a good first approximation for the angular velocity as a function of time:
As we know, in all forms of orbital motion there is conservation of angular momentum.
We have an expression for the radial velocity as a function of time (the pellet falling), so we need to work towards an expression that contains that radial velocity.
Conservation of angular momentum:
$$\frac{d(\omega r^2)}{dt} = 0$$
Differentiating:
$$r^2 \frac{d\omega}{dt} + \omega \frac{d(r^2)}{dt} = 0$$
With the chain rule we obtain a term $$\frac{d(r)}{dt}$$ which is what we are looking for.
$$r^2 \frac{d\omega}{dt} + 2 r \omega \frac{d(r)}{dt} = 0$$
Dividing by 'r', and rearranging:
$$r \frac{d\omega}{dt} = - 2 \omega \frac{d(r)}{dt}$$
On the left we have a term $$\frac{d\omega}{dt}$$; that is angular acceleration.
So: $$r\frac{d\omega}{dt}$$ is an expression for the sideways acceleration.
On the right we have an expression with magnitude $$2\omega\frac{d(r)}{dt}$$, and in this case $$\omega$$ is the angular velocity of the pellet, which at the start is equal to the angular velocity of the Earth itself. The angular velocity of the pellet changes during the fall, but compared to the total angular velocity the change of angular velocity is small.
So for a good approximation we can treat the falling pellet as subject to a sideways acceleration as described by the follwing expression:
• $$a_p$$ acceleration component perpendicular to the radial direction
• $$v_r$$ velocity component in radial direction
$$a_p = 2\omega v_r$$
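To see the effect of this expression numerically, here is an added sketch (my own, not from the thread), assuming equatorial release, no air resistance, and a 100 m drop. Integrating a_p = 2*omega*v_r with v_r = g*t gives a displacement that grows like t cubed, landing at two thirds of the "constant relative speed" estimate:

```python
import math

omega = 2 * math.pi / (24 * 3600)    # Earth's angular velocity, rad/s
g, h = 9.8, 100.0
t_fall = math.sqrt(2 * h / g)

# Euler integration of the sideways acceleration a_p = 2*omega*v_r, v_r = g*t
dt = 1e-5
s = v_side = t = 0.0
while t < t_fall:
    v_side += 2 * omega * (g * t) * dt
    s += v_side * dt
    t += dt

closed_form = omega * g * t_fall**3 / 3   # integrating a_p twice analytically
naive = h * omega * t_fall                # constant-relative-speed estimate
# s and closed_form come out near 2.2 cm, i.e. 2/3 of the naive 3.3 cm
```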
• Thanks, this is so helpful! I'm having trouble with this part though: "$r\frac{d\omega}{dt}$ is an expression for the sideways acceleration". Wouldn't this expression be the magnitude of the tangential acceleration? Where's the "sideways" part coming from? And the same question for the final equation $a = 2\omega v$, I guess. I think that the whole derivation goes through with vector $\vec{r}$ and $\vec{v}$, replacing $r^2$ with $\vec{r}\cdot\vec{r}$ etc., and at the end, having reached $a = 2\omega v$, we can take just the horizontal component of that. Is that what you meant though? – AnatolyVorobey Aug 3 '19 at 22:56
• @AnatolyVorobey Yes, in this context 'tangential acceleration' and 'sideways acceleration' are effectively the same thing. However, in general I would use 'sideways' for 'perpendicular to radial direction', and 'tangential' for 'tangential to the instantaneous velocity vector. Example: for Halley's comet those two only coincide exactly at aphelion and perihelion. At the moment of release the pellet is at the apogee of its (short lived) orbital motion. – Cleonis Aug 4 '19 at 5:52
• @AnatolyVorobey Well, yeah, in the derivation I used scalar notation rather than vector notation. In this case scalar notation is sufficient. This derivation does not use the instantaneous orientation of the radial vector, therefore there is no need to notate it as a vector. Here, the magnitude of 'r' is sufficient. I do acknowledge that the last expression is ambiguous, I will edit it to vector notation – Cleonis Aug 4 '19 at 5:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 25, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9416719675064087, "perplexity": 473.58176602305423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488507640.82/warc/CC-MAIN-20210622033023-20210622063023-00509.warc.gz"} |
http://mathoverflow.net/questions/57183/how-to-compute-the-cohomology-of-the-general-linear-group-with-integral-entries?sort=newest | # How to compute the cohomology of the general linear group with integral entries
Q: So how does one compute the cohomology groups $H^*(GL_n(\mathbf{Z}),\mathbf{Z})$?
First note that $H^*(GL_n(\mathbf{Z}),\mathbf{Z})$ is isomorphic to $H_B^*(Y/GL_n(\mathbf{Z}),\mathbf{Z})$ (Betti cohomology) where $Y$ is any contractible space on which $GL_n(\mathbf{Z})$ acts freely. Maybe one should first ask to compute the cohomology with rational coefficients and then deal with the torsion separately.
Secondly, note that $GL_n(\mathbf{Z})$ acts on $\mathbf{R}^n-\{0\}$. Unfortunately it does not act properly discontinuously on $\mathbf{R}^n-\{0\}$, so its quotient by $GL_n(\mathbf{Z})$ will be quite messy. Nevertheless it might be possible to use some version of the Leray spectral sequence on $$G\rightarrow E\rightarrow E/G$$ where $G=GL_n(\mathbf{Z})$ and $E=\mathbf{R}^n-\{0\}$.
By the way, does $E/G$ have a geometrical description?
A better candidate for $E$ would certainly be the symmetric space associated to $\operatorname{GL}_{n}(\mathbb{R})$, i.e., the symmetric positive definite matrices. Serre has made extensive calculations of the cohomology of discrete subgroups of Lie groups (e.g. here springerlink.com/content/0171m21753248642), but I think mostly with real coefficients. – Theo Buehler Mar 3 '11 at 0:00
To my knowledge not much is known for general n. There are some results by Ash (see his homepage: www2.bc.edu/~ashav). You may also have a look at the book "Knudson: Homology of Linear Groups". The stable case ($n = \infty$) has been computed by Borel in the paper "Stable real cohomology of arithmetic groups". – Ralph Mar 3 '11 at 0:34
Soule has made some integral calculations for $SL_n(\mathbb Z)$ for $n=3,4$, which is not too far away from $GL_n(\mathbb Z)$. – Jim Conant Mar 3 '11 at 0:42
@Jim: Soule's paper "The cohomology of $SL_3(\mathbf{Z})$" also contains the integral cohomology of $GL_3(\mathbf{Z}) = SL_3(\mathbf{Z}) \times \mathbf{Z}/2\mathbf{Z}$. – Ralph Mar 3 '11 at 1:25
@Hugo : See also P. Elbaz-Vincent, H. Gangl et C. Soulé, "Quelques calculs de la cohomologie de GL_N(Z) et de la K-theorie de Z" (in French), math.uiuc.edu/K-theory/0581 – François Brunault Mar 3 '11 at 9:50
There are homological stability results (due to Ruth Charney and Hendrik Maazen around 1979, if I recall correctly) saying that $H_*(GL_n(Z); Z) \to H_*(GL_{n+1}(Z); Z)$ is about $n/2$-connected. So in a range of degrees increasing to infinity with n you might just ask about the (co-)homology of $GL(Z) = GL_\infty(Z)$.
The Serre spectral sequence implies that there is little difference between the case of $GL(Z)$ and $SL(Z)$.
For the rational result, Armand Borel computed $H^*(SL(Z); Q)$ in his paper (MR0387496) "Stable real cohomology of arithmetic groups", in Ann. Sci. École Norm. Sup. (1974).
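For orientation, the shape of Borel's stable rational answer is worth recording here (stated from memory, so the degree conventions should be checked against the paper):

```latex
H^*\bigl(SL(\mathbf{Z});\mathbf{Q}\bigr)
  \;\cong\; \Lambda_{\mathbf{Q}}\bigl(x_5,\, x_9,\, x_{13},\, \dots\bigr),
  \qquad \deg x_{4k+1} = 4k+1, \quad k \ge 1,
```

an exterior algebra on one generator in each degree $5, 9, 13, \dots$. Rationally this is equivalent to $K_n(\mathbf{Z})\otimes\mathbf{Q} \cong \mathbf{Q}$ for $n \equiv 1 \pmod 4$, $n \ge 5$, and $0$ for all other $n \ge 1$.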
For integral results, Bill Dwyer and Steve Mitchell compute $H^*(GL(Z); Z)$ in their paper (MR1633505) "On the $K$-theory spectrum of a ring of algebraic integers", in $K$-Theory 14 (1998). See 1.5 and section 10 of their paper. They assume the now proven Lichtenbaum--Quillen conjecture (Voevodsky for $p=2$, Rost, Voevodsky, Weibel? for $p$ odd.)
In both cases the results are more general, and suffice to compute the cohomology of $GL(R)$ and the (rational) algebraic K-theory of R for R any ring of integers in a number field.
Did Dwyer-Mitchell really consider integral coefficients? For, if my understanding of the topic is right, there is a close connection between large torsion in the integral (co)homology of $GL(\mathbb{Z})$ and $K_*(\mathbb{Z})$ and the latter is related to Vandiver's conjecture on irregular primes. (I think there is also a paper of Soulé that estimates such torsion). – Ralph Jun 24 '11 at 6:40
No, you are right, they work with $Z/\ell$-coefficients. The answer for $R = O_F[1/\ell]$, with $F$ a number field, involves a matrix of maps $BU \to BU$ determined by the Iwasawa module of $F$, and this is how the Bernoulli numbers enter for $F = Q$. There was a 1997 Univ. of Washington Ph.D. thesis "Torsion in the Homology of the General Linear Group for a Ring of Algebraic Integers" by Prashanth Adhikari (probably supervised by Mitchell) that elaborated on this. I'm not sure that it was published. – John Rognes Jun 24 '11 at 19:23
John, thanks for clarification. The paper of Soulé I meant is arxiv.org/pdf/math/9812171v1. (The result has been generalized by Soulé to the rings of integers of arbitrary number fields in math.uiuc.edu/K-theory/0603/cdn1.pdf.) – Ralph Jun 24 '11 at 21:42
The quotient $E/G$ is non-Hausdorff, I'm not sure there will be a nice geometric description.
There's a standard way to get $Y$. The symmetric space for $GL(n,\mathbb{R})$ is the symmetric space $Q$ of positive definite symmetric matrices of determinant $>0$, isomorphic to $GL(n,\mathbb{R})/O(n,\mathbb{R})$. Then $GL(n, \mathbb{Z})$ acts discretely on this space, but torsion elements have fixed points. Also, the torsion elements of $GL(n,\mathbb{Z})$ map non-trivially to $GL(n,\mathbb{Z}/p)$ for some prime $p$. One may take a $K(GL(n,\mathbb{Z}/p),1)=X$, then $GL(n,\mathbb{Z}/p)$ and therefore $GL(n,\mathbb{Z})$ acts on the universal cover $\tilde{X}$. Now, take the diagonal action of $GL(n,\mathbb{Z})$ on $Q\times \tilde{X}$. This action is free and discrete. Of course, this assumes that you have a nice way to construct $X$, which must be infinite dimensional!
Well, $K(GL(n,Z/p),1)=X$ will be a CW-complex of infinite dimension, and the only way I know to construct it is with the usual technique of killing homotopy groups by attaching cells, which is kind of tautological. – Hugo Chapdelaine Jun 26 '11 at 16:00
https://homework.cpm.org/category/ACC/textbook/acc6/chapter/7%20Unit%207/lesson/CC1:%207.2.2/problem/7-53
7-53.
Calculate each of the following products.
a. $\frac{1}{8} \cdot \frac{8}{1}$
When multiplying fractions together, the answer has a numerator equal to the product of the numerators and a denominator equal to the product of the denominators.
b. $\frac{3}{4} \cdot \frac{4}{3}$
$\frac{3}{4}\cdot\frac{4}{3}= \frac{3\cdot 4}{4\cdot 3}$

$\frac{3\cdot 4}{4\cdot 3}= \frac{12}{12}$

$\frac{12}{12}=1$
c. $\frac{2}{3} \cdot \frac{3}{2}$
$\frac{2}{3}\cdot\frac{3}{2}= \frac{6}{6}=1$
d. $7 \cdot \frac{1}{7}$
$7=\frac{7}{1}$, so $\frac{7}{1}\cdot\frac{1}{7}= \frac{7}{7}=1$
e. What do the products in parts (a) through (d) have in common?
Check the relationship between the products of the problems above. | {"extraction_info": {"found_math": true, "script_math_tex": 8, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8502318263053894, "perplexity": 1592.6179213261278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889651.52/warc/CC-MAIN-20201025183844-20201025213844-00327.warc.gz"} |
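The pattern the last part asks about (each product of a number and its reciprocal equals 1) can be checked with exact fraction arithmetic. This sketch is my own addition, not part of the CPM lesson:

```python
from fractions import Fraction

# Each pair below is a number and its reciprocal, matching parts (a)-(d).
pairs = [
    (Fraction(1, 8), Fraction(8, 1)),
    (Fraction(3, 4), Fraction(4, 3)),
    (Fraction(2, 3), Fraction(3, 2)),
    (Fraction(7, 1), Fraction(1, 7)),
]

# Multiplying by a reciprocal cancels numerator against denominator,
# so every product reduces exactly to 1.
products = [a * b for a, b in pairs]
```

Fraction keeps the arithmetic exact, so the reduction to 1 is not a rounding accident.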
http://www.physicsforums.com/showthread.php?t=568306 | # calculating real power and power factor
by pmn
Tags: factor, power, real
P: 3
I have been trying to understand how to calculate real power and power factor in an AC circuit when given only circuit voltage and real/true current as measured by a current transformer (CT).

What I (think I) know:
1. Real power P = I^2 * R (but I don't know the circuit resistance)
2. Real power = apparent power * power factor
3. Apparent power S = RMS source voltage * RMS current

I believe I can take (source voltage) / √2 to get the RMS of the voltage, then (measured amperage) / √2 to get the RMS of the amperage, and can then calculate the apparent power as the product of those two values, but I don't know how to get to real power and power factor from there. Thanks for any direction you can give me! Phil
P: 1,506
Are you familiar with the idea that power is only dissipated in resistance, and that no power is dissipated in reactive components (inductance and capacitance)? If the supply voltage is out of phase with the current in a series circuit, then the power dissipated = (voltage across R) x current. The voltage across R = supply voltage x cos θ, and cos θ is known as the power factor. The disadvantage of having the supply voltage out of phase with the current is that a voltage, V, is being generated but only a fraction (V cos θ) is delivering power.
Sci Advisor PF Gold P: 11,383
'Real Power' is V.I (the dot product of the two phasors) or VI cos(phase). You can measure it by putting the V and I waveforms into an analogue (four-quadrant) multiplier and then a leaky integrator / low-pass filter meter. This will multiply the instantaneous values of V and I, which is instantaneous power. The value of this will vary and always be greater than zero (and corresponds to I^2R). When you take the mean of this, you will get the average power. This way, you need to know neither the resistance nor the power factor (you are including it in the cos(phase) term). I assume that those things the Gas Board supply you with do it this way. There's a current transformer on the consumer meter lead and a wireless link to the unit, which is mains powered and measures the voltage.

Your formula will tell you the power transferred in the resistor (natch) but, as you say, you need to know the value of resistance.
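The multiply-and-average measurement described in the post above can be sketched numerically. This is my own illustration, not from the thread; the 325 V / 10 A / 30-degree values are made-up example inputs:

```python
import math

def power_metrics(v_peak, i_peak, phase_rad, n=10000):
    """Sample one cycle of v(t) and i(t), average the instantaneous
    power v*i (real power), and compute Vrms*Irms (apparent power)."""
    p_sum = v_sq = i_sq = 0.0
    for k in range(n):
        wt = 2 * math.pi * k / n               # one full cycle, n samples
        v = v_peak * math.sin(wt)
        i = i_peak * math.sin(wt - phase_rad)  # current lags by phase_rad
        p_sum += v * i
        v_sq += v * v
        i_sq += i * i
    real_power = p_sum / n                                 # mean of v*i
    apparent = math.sqrt(v_sq / n) * math.sqrt(i_sq / n)   # Vrms * Irms
    return real_power, apparent, real_power / apparent     # PF = P/S

# Example: 325 V peak (~230 Vrms), 10 A peak, current lagging 30 degrees
P, S, pf = power_metrics(325.0, 10.0, math.radians(30))
```

Averaging v*i recovers VI cos(phase) directly, with no need to know R or the power factor beforehand, which is the point being made above.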
P: 3
Thanks for the replies! I get the concept of reactive power being a function of a shifting of the voltage / current phases. (Thus a power factor of 1 means that the voltage and current rise in unison, correct? Inductance or capacitance in the circuit cause the two waves to become out of phase?) I don't get the statement "'Real Power' is V.I (the dot product of the two phasors) or VI cos (phase)". I do get "which is instantaneous power. The value of this will vary and always be greater than zero (and corresponds to I^2R)".
Since I believe that I can calculate apparent power as Vrms * Irms (where rms is a/√2 and a=peak value), and real power can be derived from an instantaneous measurement of voltage and amperage, then power factor can be calculated as real / apparent?
Ultimately I want to be able to put CTs on incoming mains power and several branch circuits as well as measure the voltage on the circuits and be able to make some statement about how much energy is being consumed in total (mains) and by each branch circuit as well as how close I am to a PF of one.
Sci Advisor PF Gold P: 11,383
The dot product is just 'vector speak' and takes you further into the business if you're interested. Have you not heard of using Phasors to describe AC?
This is great stuff as a thought experiment but all this stuff is readily available to buy. Furthermore, it is 'Electrically Safe'.
btw, how were you proposing to find the instantaneous V and I?
P: 3
> The dot product is just 'vector speak' and takes you further into the business if you're interested. Have you not heard of using Phasors to describe AC?

Short answer is 'no' but.... I understand the vectors in the 'AC triangle' showing the relationships among real, apparent and reactive power. I will dig in a little to understand phasors.

> This is great stuff as a thought experiment but all this stuff is readily available to buy. Furthermore, it is 'Electrically Safe'.

Readily available to buy as in 'purchase a Fluke power quality meter'? :-) I am enjoying the exercise at the moment but at times of weakness I do admit to browsing the Fluke website and lusting over their tools.

> btw, how were you proposing to find the instantaneous V and I?

Using an industrial controller to sample the line voltage and real current via CT. I believe I could probably make a reasonable calculation of the power factor by simply timing the (V)peak to (A)peak. (I have a controller, I don't have a Fluke.)
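The peak-to-peak timing idea in the post above can be turned into a rough power-factor estimate, assuming clean sinusoidal waveforms. The function name and the 60 Hz default are my own:

```python
import math

def pf_from_peak_delay(delta_t, f=60.0):
    """Power factor from the delay (in seconds) between the voltage peak
    and the current peak, assuming clean sinusoidal waveforms."""
    phase = 2 * math.pi * f * delta_t   # convert the time lag to radians
    return math.cos(phase)

# Zero delay -> in phase -> PF = 1; a quarter-cycle delay -> PF = 0.
```

Real mains waveforms carry harmonics, so this is cruder than averaging the instantaneous power, but it only needs two timestamps from the controller.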
https://www.doubtnut.com/question-answer/if-x-1-x6-find-ix2-1-x2-ii-x4-1-x4-642565434
If x + 1/x = 6, find (i) x^2 + 1/x^2 (ii) x^4 + 1/x^4
Updated On: 27-06-2022
Transcript

We are given x + 1/x = 6 and asked to find (i) x^2 + 1/x^2 and (ii) x^4 + 1/x^4.

(i) Squaring both sides and using the identity (a + b)^2 = a^2 + 2ab + b^2:

(x + 1/x)^2 = 6^2
x^2 + 2(x)(1/x) + 1/x^2 = 36
x^2 + 1/x^2 + 2 = 36
x^2 + 1/x^2 = 36 - 2 = 34

(ii) Squaring the result of part (i) in the same way:

(x^2 + 1/x^2)^2 = 34^2
x^4 + 2(x^2)(1/x^2) + 1/x^4 = 1156
x^4 + 1/x^4 = 1156 - 2 = 1154

So x^2 + 1/x^2 = 34 and x^4 + 1/x^4 = 1154, and that's the answer.
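As a quick numerical cross-check of the answers above (my own addition, not part of the original solution): x = 3 + 2√2 is one root of x + 1/x = 6, and both requested values come out as expected:

```python
import math

x = 3 + 2 * math.sqrt(2)   # a root of x + 1/x = 6, i.e. x^2 - 6x + 1 = 0
s1 = x + 1 / x             # should equal 6 (up to float rounding)
s2 = x ** 2 + 1 / x ** 2   # should equal 34
s4 = x ** 4 + 1 / x ** 4   # should equal 1154
```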
http://www.researchgate.net/researcher/9194583_Sang-Hun_Jang/ | # Sang-Hun Jang
Kyoto University, Kyoto, Kyoto-fu, Japan
## Publications (8), 9.89 total impact
##### Article: Production efficiency of excited atoms in PDP cells with grooved dielectric structures studied by laser absorption spectroscopy
ABSTRACT: Performances of microplasmas in unit discharge cells with grooved structures in the dielectric layer covering the coplanar electrodes were investigated in alternating current (ac)-type plasma display panels filled with Ne-Xe(10%) mixture at 450 torr. The diagnostics are based on a microscopic laser-absorption spectroscopy technique for the spatiotemporally resolved measurements of absolute densities of Xe*(1s5, 1s4) atoms, from which the production rate and the efficiency of the vacuum ultraviolet photons were estimated. These results were compared with previously reported data obtained in conventional phosphor-coated panels with the same structures for the dependences on the applied sustain voltages. As the result, the following conclusions were ascertained. The grooved structure does not help to improve the luminous efficiency but it helps to lower the firing and sustaining voltages by about 20 V if the electrode gap is kept constant. Therefore, it provides additional possibilities for the selection of other operating conditions such as the gas composition and pressure for the improvements of the luminance and the luminous efficiency.
IEEE Transactions on Plasma Science 05/2006; · 1.17 Impact Factor
##### Article: Discharge characteristics of cross-shaped microdischarge cells in ac-plasma display panel
ABSTRACT: This paper proposes a highly efficient cross-shaped cell structure to improve the luminous efficiency of an alternate current plasma display panel (ac-PDP). The microdischarge characteristics of the proposed structure are examined in an all-green 6-in test panel with various pressures and Xe-concentrations. Since the proposed cross-shaped cell structure has a longer Indium Tin Oxide (ITO) path between the two sustain electrodes and wider sidewall phosphor area, the following microdischarge characteristics were observed when compared with the conventional stripe-type cell structure. First, the sustain voltage margin was lower by about 15 V under a high Xe-concentration of 10%. Second, the rate of increase in the luminous efficiency was higher with a high pressure and high Xe-concentration. Finally, when adopting an auxiliary address pulse driving scheme, the luminous efficiency was improved by about 44% (2.38 lm/W) with a high Xe-concentration of 10%.
IEEE Transactions on Plasma Science 07/2005; · 1.17 Impact Factor
##### Article: Experimental observation of image sticking phenomenon in AC plasma display panel
ABSTRACT: The itinerant strong sustain discharge that occurs during a sustain period over a few minutes causes image sticking, which means a ghost image remains in the subsequent image when the previous image was continuously displayed over a few minutes. Accordingly, this paper investigates whether the dominant factor in image sticking is the MgO surface or phosphor layer by testing the effects of image sticking in subsequent dark and bright images using a 42-in plasma display panel. When the subsequent image was dark, the image sticking was found to produce a brighter ghost image than the background. Thus, since the luminance of a dark image is produced by the weak discharge that occurs during the reset-period, the higher luminance of the ghost image was mainly due to the activation of the MgO surface. Conversely, when the subsequent image was bright, the image sticking was found to produce a darker ghost image than the background. Thus, since the luminance of a bright image is predominantly produced by the strong discharge that occurs during the sustain period, the lower luminance of the ghost image was mainly due to the deterioration of the phosphor layer.
IEEE Transactions on Plasma Science 01/2005; · 1.17 Impact Factor
##### Article: New driving scheme for white color balancing of plasma display panel television
Sang-Hun Jang, Heung-Sik Tae, Sung-Il Chien
ABSTRACT: This paper presents a new driving scheme based on the algorithm for the selection of the optimum auxiliary pulse to reduce the deviation of white color in an AC plasma display panel television (PDP-TV). The luminance ratio among the red, green, and blue lights can be controlled independently by applying the different auxiliary address pulses to the red, green, and blue cells during a sustain-period. As a result, white colors of eight subfields can be clustered within a region, which is not resolvable in terms of visual perception.
IEEE Transactions on Consumer Electronics 09/2002; · 0.94 Impact Factor
##### Article: New driving scheme for improving color temperature of plasma display panel
ABSTRACT: This paper presents a new driving scheme for improving the color temperature of plasma display panel-televisions (PDP-TVs). Auxiliary address pulses, plus sustain pulses applied to sustain electrodes, are applied only to address electrodes with blue phosphor layers during a sustain-period, thereby resulting in increasing the luminance of blue cells among the red, green, and blue cells. When compared with the conventional driving scheme, the proposed driving scheme can improve the color temperature of PDP-TVs without reducing the luminance
IEEE Transactions on Consumer Electronics 09/2001; · 0.94 Impact Factor
##### Article: Improvement in the luminous efficiency using ramped-square sustain waveform in an AC surface-discharge plasma display panel
ABSTRACT: This paper proposes a new sustain waveform to improve the luminous efficiency of an AC plasma display panel (AC-PDP). The new sustain waveform is a superimposed waveform, which adds a ramp-waveform to a square-waveform, and has an increasing voltage slope between the rising and falling edge. This waveform can induce a longer-sustained discharge at the rising edge plus a self-erasing discharge at the falling edge, thereby improving the luminous efficiency, When compared with the conventional square sustain waveform, the proposed sustain waveform with a 9.3 V/μs voltage slope achieved a 65% higher luminous efficiency in a 4-in AC-PDP test panel even at a low frequency (62 kHz)
IEEE Transactions on Electron Devices 08/2001; · 2.32 Impact Factor
##### Conference Proceeding: New driving scheme for improving color temperature of plasma display panel-HDTV
ABSTRACT: A new driving-scheme for improving the color temperature of plasma display panel-HDTV is proposed. Auxiliary pulses are only applied to address electrodes with blue phosphor layers during a sustain-period, thereby improving the color temperature of plasma display panel-HDTV
Consumer Electronics, 2001. ICCE. International Conference on; 02/2001
##### Article: Effects of plasma emission on optical properties of phosphor layers in surface-type alternate current plasma display panel
ABSTRACT: This study uses helium and xenon gas mixture discharges to determine the effects of helium plasma emission on the characteristics of the visible emission from the stimulation of the red, green, and blue (RGB) phosphor layers in a surface-type alternate current plasma display panel. With a mixture of less than 2% xenon to helium, it was found that the luminance of the RGB phosphor layers decreases with a decrease in the helium plasma emission intensity. However, with a mixture of above 2% xenon to helium, the luminance of the RGB phosphor layers increases regardless of a decrease in the helium plasma emission intensity. Furthermore, the color purity of the RGB phosphor layers improves as the helium plasma emission intensity decreases. Accordingly, it can be concluded that the optical properties of the phosphor layers, including color purity and luminance, depend on the helium plasma discharge emission as well as the visible emission from the stimulation of the phosphor layers. © 2000 American Institute of Physics.
Journal of Applied Physics 02/2000; 87(5):2073-2075. · 2.17 Impact Factor
#### Institutions
• ###### Kyoto University
• Department of Electronic Science and Engineering
Kyoto, Kyoto-fu, Japan
• ###### Kyungpook National University
• Department of Electrical Engineering
• Department of Electronic Engineering
Sangju, North Gyeongsang, South Korea | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.823749840259552, "perplexity": 4434.24662876179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705069221/warc/CC-MAIN-20130516115109-00094-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://austin.com/tag/thanksgiving/ | ## Let Aaron Franklin Show You How To Cook Thanksgiving Dinner
Címkék: , , , , , , 2020 / 11 / 25
## Best Thanksgiving Events Happening in Austin
Címkék: , , , , , , , , 2019 / 11 / 25
## Here are the Austin Restaurants Open for Thanksgiving 2019
Címkék: , , , , 2019 / 11 / 23
## Free Thanksgiving Meals In Austin 2019
Címkék: , , , , , , , 2019 / 11 / 21
## Austin’s Best Restaurants Dish On Their Favorite Thanksgiving Recipes
Címkék: , , , , , , , , , , , , , 2019 / 11 / 18
## Free Thanksgiving Meals In Austin 2018
Címkék: , , , , , , , 2018 / 11 / 12
## Free Thanksgiving Meals In Austin 2017
Címkék: , , , , , 2017 / 11 / 16
## Need to Get Off the Couch? There’s Plenty to Do This Thanksgiving Weekend
Címkék: , , , , , , , , , , , , , , , , , , , , 2016 / 11 / 23
## Help a Deserving Person Win a Free Thanksgiving Dinner from Fresa’s
Címkék: , , 2016 / 11 / 16
## Watch The Travis County Fire Department Fail To Burn A Turkey
Címkék: , , , , , 2016 / 11 / 16
## Here’s Who Is Open On Thanksgiving In Austin
Címkék: , 2015 / 11 / 25
## Did Texas Host The First Thanksgiving?
Címkék: , 2015 / 11 / 24
## Keep Your Dog FAR AWAY From These 10 Thanksgiving Foods
Címkék: , 2015 / 11 / 23
## Here’s Some Local, Last-Minute Thanksgiving Meal Tips
Címkék: , 2015 / 11 / 23
## Give Thanks This Season With These 4 Great Volunteer Opportunities!
Címkék: , , 2015 / 11 / 20
## Charities Start Week Of Giving, Feeding Austin’s Hungry
Címkék: , , 2015 / 11 / 19
## Refugees Spending Thanksgiving In Texas ‘Thank God And The U.S.’
Címkék: , , , 2015 / 11 / 19
## 40 “Don’t Miss” Events In Austin This November
Címkék: , , , , , 2015 / 10 / 30
## 2014 Chuy’s Children Giving to Children Parade
Címkék: , , , , , 2014 / 11 / 26
## Here’s How To Survive Thanksgiving, Austin-Style
Címkék: , , , , , , 2014 / 11 / 25
## These Five Shows Will Make You Thankful for Austin Musicians
Címkék: , , , , , , , , , , , , , , , 2014 / 11 / 21
## 10 Free Events to Enjoy Thanksgiving Week (Nov 21-30, 2014)
Címkék: , , 2014 / 11 / 21
## 5 Ways to Put the Giving in Thanksgiving
Címkék: , , , 2014 / 11 / 19
## Free Thanksgiving Meals in Austin 2014
Címkék: , , 2014 / 11 / 17 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9020519256591797, "perplexity": 2829.3334517295493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153860.57/warc/CC-MAIN-20210729140649-20210729170649-00177.warc.gz"} |
https://hyperleap.com/topic/Expected_value | # Expected value
In probability theory, the expected value of a random variable is a key aspect of its probability distribution.
### Law of large numbers
For example, the expected value of rolling a six-sided die is 3.5, because the average of all the numbers that come up converges to 3.5 as the number of rolls approaches infinity. The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.
According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer to the expected value as more trials are performed.
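This convergence is easy to watch in a short simulation. The sketch below uses only Python's standard library; the function name and seed are my own choices, purely for illustration:

```python
import random

def average_die_roll(n_rolls, seed=0):
    """Average of n_rolls of a fair six-sided die (seeded for repeatability)."""
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(n_rolls)) / n_rolls

# The running average drifts toward the expected value 3.5 as n grows:
for n in (10, 1_000, 100_000):
    print(n, average_die_roll(n))
```

With more rolls, the sample mean settles ever closer to 3.5, exactly as the law of large numbers predicts.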
### Random variable
In probability theory, the expected value of a random variable is a key aspect of its probability distribution.
In this case, the structure of the real numbers makes it possible to define quantities such as the expected value and variance of a random variable, its cumulative distribution function, and the moments of its distribution.
### Probability distribution
In probability theory, the expected value of a random variable is a key aspect of its probability distribution.
### Von Neumann–Morgenstern utility theorem
For example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function.
In decision theory, the von Neumann-Morgenstern utility theorem shows that, under certain axioms of rational behavior, a decision-maker faced with risky (probabilistic) outcomes of different choices will behave as if he or she is maximizing the expected value of some function defined over the potential outcomes at some specified point in the future.
### Problem of points
The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players who have to end their game before it's properly finished.
One of the famous problems that motivated the beginnings of modern probability theory in the 17th century, it led Blaise Pascal to the first explicit reasoning about what today is known as an expected value.
### Decision theory
For example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function.
Known from the 17th century (Blaise Pascal invoked it in his famous wager, which is contained in his Pensées, published in 1670), the idea of expected value is that, when faced with a number of actions, each of which could give rise to more than one possible outcome with different probabilities, the rational procedure is to identify all possible outcomes, determine their values (positive or negative) and the probabilities that will result from each course of action, and multiply the two to give an "expected value", or the average expectation for an outcome; the action to be chosen should be the one that gives rise to the highest total expected value.
### Probability density function
The same principle applies to an absolutely continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum.
If a random variable X is given and its distribution admits a probability density function f, then the expected value of X (if the expected value exists) can be calculated as
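As an illustrative check of this formula, the integral can be approximated numerically. The sketch below (the function names and the Exponential(2) example are my own choices, not from the text) uses only Python's standard library:

```python
import math

def expected_value(pdf, lo, hi, n=100_000):
    """Approximate E[X] = integral of x * f(x) dx with a midpoint Riemann sum."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += x * pdf(x)
    return total * dx

def exp_pdf(x, lam=2.0):
    """Density of an Exponential(2) random variable, whose expected value is 1/lam = 0.5."""
    return lam * math.exp(-lam * x)
```

Running `expected_value(exp_pdf, 0.0, 30.0)` returns a value very close to 0.5, matching the known mean 1/λ of an exponential distribution.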
### Blaise Pascal
This problem had been debated for centuries, and many conflicting proposals and solutions had been suggested over the years, when it was posed in 1654 to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré.
From this discussion, the notion of expected value was introduced.
### Cauchy distribution
An example of such a random variable is one with the Cauchy distribution, due to its large "tails".
The Cauchy distribution is often used in statistics as the canonical example of a "pathological" distribution since both its expected value and its variance are undefined.
### Christiaan Huygens
Three years later, in 1657, the Dutch mathematician Christiaan Huygens, who had just visited Paris, published a treatise "De ratiociniis in ludo aleæ" on probability theory.
Huygens took as intuitive his appeals to concepts of a "fair game" and equitable contract, and used them to set up a theory of expected values.
### St. Petersburg paradox
It is based on a particular (theoretical) lottery game that leads to a random variable with infinite expected value (i.e., infinite expected payoff) but nevertheless seems to be worth only a very small amount to the participants.
### Roulette
It can be easily demonstrated that this payout formula would lead to a zero expected value of profit if there were only 36 numbers.
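That claim is easy to verify directly. In the small sketch below (the function name is mine), a 1-unit straight-up bet paying 35-to-1 has expected profit of zero with 36 pockets, and a negative expected profit with the 38 pockets (0 and 00 included) of an American wheel:

```python
def straight_up_ev(pockets, payout=35):
    """Expected profit of a 1-unit straight-up roulette bet paying 35-to-1."""
    p_win = 1.0 / pockets
    return p_win * payout - (1.0 - p_win)

print(straight_up_ev(36))  # essentially zero with 36 pockets
print(straight_up_ev(38))  # negative with the 0 and 00 of an American wheel
```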
### Probability measure
The formal definition subsumes both of these and also works for distributions which are neither discrete nor absolutely continuous; the expected value of a random variable is the integral of the random variable with respect to its probability measure.
For instance, a risk-neutral measure is a probability measure which assumes that the current value of assets is the expected value of the future payoff taken with respect to that same risk neutral measure (i.e. calculated using the corresponding risk neutral density function), and discounted at the risk-free rate.
### Bias of an estimator
In such settings, a desirable criterion for a "good" estimator is that it is unbiased – that is, the expected value of the estimate is equal to the true value of the underlying parameter. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate).
In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated.
### Errors and residuals
If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate).
A statistical error (or disturbance) is the amount by which an observation differs from its expected value, the latter being based on the whole population from which the statistical unit was chosen randomly.
### Variance
The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean.
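As a small worked check of this definition (the function name is mine): for one roll of a fair six-sided die, the mean is 3.5 and the variance Var(X) = E[(X − μ)²] works out to 35/12 ≈ 2.9167.

```python
def fair_die_variance():
    """Var(X) = E[(X - mu)^2] for one roll of a fair six-sided die."""
    faces = range(1, 7)
    mu = sum(faces) / 6                            # 3.5
    return sum((v - mu) ** 2 for v in faces) / 6   # 35/12
```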
### Moment-generating function
The moments of some random variables can be used to specify their distributions, via their moment generating functions.
The moment-generating function of a random variable $X$ is defined as $M_X(t) = \operatorname{E}\left[e^{tX}\right]$, wherever this expectation exists.
### Statistics
For a different example, in statistics, where one seeks estimates for unknown parameters based on available data, the estimate itself is a random variable.
Furthermore, an estimator is said to be unbiased if its expected value is equal to the true value of the unknown parameter being estimated, and asymptotically unbiased if its expected value converges at the limit to the true value of such parameter.
### Moment (mathematics)
The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X].
The n-th moment about zero of a probability density function f(x) is the expected value of $X^n$ and is called a raw moment or crude moment.
### Central moment
The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X].
In probability theory and statistics, a central moment is a moment of a probability distribution of a random variable about the random variable's mean; that is, it is the expected value of a specified integer power of the deviation of the random variable from the mean.
### Monte Carlo method
This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. $\operatorname{P}(X \in \mathcal{A}) = \operatorname{E}[\mathbf{1}_{\mathcal{A}}(X)]$, where $\mathbf{1}_{\mathcal{A}}$ is the indicator function of the set $\mathcal{A}$.
By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the sample mean) of independent samples of the variable.
### Estimation theory
To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results.
This error term is then squared and the expected value of this squared value is minimized for the MMSE estimator.
### William Allen Whitworth
The use of the letter E to denote expected value goes back to W. A. Whitworth in 1901, who used a script E. The symbol has become popular since for English writers it meant "Expectation", for Germans "Erwartungswert", for Spanish "Esperanza matemática" and for French "Espérance mathématique".
He is the inventor of the E[X] notation for the expected value of a random variable X, still commonly in use, and he coined the name "subfactorial" for the number of derangements of n items.
### Indicator function
It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise.
The notation is used in other places as well, for instance in probability theory: if $X$ is a probability space with probability measure $\operatorname{P}$ and $A$ is a measurable set, then $\mathbf{1}_A$ becomes a random variable whose expected value is equal to the probability of $A$: $\operatorname{E}[\mathbf{1}_A] = \operatorname{P}(A)$.
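This identity also gives a practical recipe: estimate a probability as the sample mean of an indicator function. An illustrative Python sketch (the names and the example event are mine): for a uniform draw on [0, 1] and the event A = {0.2 ≤ X ≤ 0.5}, the estimate should be close to 0.3.

```python
import random

def estimate_probability(indicator, sampler, n=100_000, seed=1):
    """E[1_A] = P(A): a probability estimated as the sample mean of an
    indicator function over n independent draws."""
    rng = random.Random(seed)
    return sum(indicator(sampler(rng)) for _ in range(n)) / n

def in_interval(x):
    """Indicator of the event A = {0.2 <= X <= 0.5}."""
    return 1 if 0.2 <= x <= 0.5 else 0

def uniform01(rng):
    return rng.random()
```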
### Uncertainty principle
The uncertainty in $\hat{A}$ can be calculated using the formula $\sigma_A = \sqrt{\langle \hat{A}^2 \rangle - \langle \hat{A} \rangle^2}$.
http://mathhelpforum.com/calculus/157852-how-tanget-unit-vector-t-t-orthogonal-t-t.html | # Math Help - How is the Tanget Unit Vector [T(t)] Orthogonal to T'(t)?
1. ## How is the Tangent Unit Vector [T(t)] Orthogonal to T'(t)?
Hey everyone. I'm having a hard time getting a visual of $\vec{T}$ being perpendicular to $\vec{T'}$. The proof makes sense. Here it is:
Assume the vector function satisfies $|\vec{r(t)}| = c$ for some constant $c$.
Thus, we take the dot product of the same vector:
$\vec{r(t)} * \vec{r(t)} = |\vec{r(t)}|^2 = c^2$
If we were to take the derivative of this equation, we would get:
$0 = \frac{d}{dt}[\vec{r(t)} * \vec{r(t)}] = \vec{r'(t)} * \vec{r(t)} + \vec{r(t)} * \vec{r'(t)} = 2\vec{r'(t)} * \vec{r(t)}$
Thus, $\vec{r(t)} * \vec{r'(t)} = 0$
Now this makes sense because we are talking about a position vector and its tangent. This is also easy to visualize: a point on a circle. The position vector is like the centripetal force; it points toward the center. So this makes perfect sense.
Now let's talk about $\vec{T(t)}$ and $\vec{T'(t)}$
$|\vec{T(t)}| = 1$ so the proof above applies here too. The problem is that I can't visualize this. I mean I can, but it contradicts the proof. $\vec{T(t)}$ is not a position vector of $\vec{r(t)}$. It is actually the unit vector of $\vec{r'(t)}$; thus, we're basically saying $\vec{r'(t)}$ is perpendicular to $\vec{r''(t)}$, which is not true. (i.e. if f'(x) = x^2, f''(x) = 2x. They are not perpendicular.) HOWEVER, that is in 2D. idk what happens in 3D cuz I am having a hard time visualizing it.
So the question is, can someone please give me an example of where the tangent and the 2nd derivative of a function are perpendicular? Or somehow help me grasp this. Thanks in advance!
2. You consider the example f'(x)=x^2, but notice how this function does not satisfy |f'(x)|=1.
Only when |T(t)| is constant can you be sure that T(t) and T'(t) are perpendicular (in particular, this is true for |T(t)|=1 for all t).
For an easy example, take the circle parametrized by r(t) = (cos(t),sin(t),0) (or you could leave out the last coordinate, if you are comfortable with plane curves).
Certainly |T(t)|=1. Also, T(t) = (-sin(t),cos(t),0), while T'(t) = (-cos(t),-sin(t),0). Notice how T(t) and T'(t) are perpendicular.
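For anyone who wants to check this orthogonality numerically, here is a small illustrative Python sketch (the helper names and the finite-difference approach are my own, not from the thread). It works for the circle example above as well as the parabola discussed later:

```python
import math

def unit_tangent(r, t, h=1e-6):
    """Unit tangent vector T(t), with r'(t) taken by central differences."""
    d = [(p - m) / (2 * h) for p, m in zip(r(t + h), r(t - h))]
    norm = math.sqrt(sum(c * c for c in d))
    return [c / norm for c in d]

def tangent_derivative(r, t, h=1e-4):
    """Central-difference approximation of T'(t)."""
    tp, tm = unit_tangent(r, t + h), unit_tangent(r, t - h)
    return [(p - m) / (2 * h) for p, m in zip(tp, tm)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def circle(t):
    return (math.cos(t), math.sin(t), 0.0)

def parabola(t):
    return (t, t * t, 0.0)
```

In both cases `dot(unit_tangent(r, t), tangent_derivative(r, t))` comes out numerically zero, even though the parabola's r'(t) and r''(t) themselves are not perpendicular.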
3. :O touche. This is what I did:
I wanted to be consistent with the f(x) = x^2 so I parametrized it:
x = t, y = t^2, -2pi <= t <= 2pi
took the derivative and got
x' = 1, y' = 2t, -2pi < t < 2pi
got the length of it and graphed out the parametric and I got a circle! the tangent unit vector was:
x = (1+4t^2)^(-1/2)
y = 2t*(1+4t^2)^(-1/2)
Did it without parametrization and got this:
y = 2x * (x^2 + y^2)^(-1/2)
^ couldn't graph that :\ got an error on my graphic calculator saying "Memory"
So the tangent unit vector is always a circle?
4. "So the tangent unit vector is always a circle?"
ouch!! The tangent unit vector is a vector, at a point on the graph, not a graph itself! It is true that, if you draw any vector of length 1, whether it is a tangent vector or not, with its "tail" at the origin then its "head" will lie on the unit circle- that's precisely what "length 1" means. If, for this problem, you carefully graphed all of the unit tangent vectors with tail at the origin, their heads would be on the unit circle but they would not cover the unit circle. They would cover only the right half of the unit circle because they would go from "almost straight down" (x a very large negative number) to "almost straight up" (x a very large positive number). But as I say, lying on the unit circle simply reflects that they always have length 1.
For $y= x^2$, we can use x itself as parameter: $x= t$, $y= t^2$. The "position vector" would be $t\vec{i}+ t^2\vec{j}$.
The derivative of that is $\vec{i}+ 2t\vec{j}$ which has length $\sqrt{1+ 4t^2}$.
That is, the unit tangent vector is $\vec{T}= \frac{1}{\sqrt{1+ 4t^2}}\vec{i}+ \frac{2t}{\sqrt{1+ 4t^2}}\vec{j}$, just as you say. Factoring out that root, $\vec{T}= (1+ 4t^2)^{-1/2}\left(\vec{i}+ 2t\vec{j}\right)$. Differentiating with the product rule, $\vec{T}'= -4t(1+ 4t^2)^{-3/2}\left(\vec{i}+ 2t\vec{j}\right)+ (1+ 4t^2)^{-1/2}\left(2\vec{j}\right)$. If we write $(1+ 4t^2)^{-1/2}= (1+ 4t^2)^{-3/2}(1+ 4t^2)$, we can factor out $(1+ 4t^2)^{-3/2}$ in the second term as well, leaving $\vec{T}'= (1+ 4t^2)^{-3/2}\left(-4t\vec{i}- 8t^2\vec{j}+ (2+ 8t^2)\vec{j}\right)= (1+ 4t^2)^{-3/2}\left(-4t\vec{i}+ 2\vec{j}\right)$.
The dot product of $\vec{T}$ and $\vec{T}'$ is
$\left((1+ 4t^2)^{-1/2}(\vec{i}+ 2t\vec{j})\right)\cdot\left((1+ 4t^2)^{-3/2}(-4t\vec{i}+ 2\vec{j})\right)= (1+ 4t^2)^{-2}\left((1)(-4t)+ (2t)(2)\right)= 0$.
In particular, if t= 0, x= y= 0 so we are at the origin. The unit tangent vector there is $(1+ 4(0^2))^{-1/2}\left(\vec{i}+ 0\vec{j}\right)= \vec{i}$, pointing along the x-axis. Its derivative, $(1+ 4(0^2))^{-3/2}\left(-4(0)\vec{i}+ 2\vec{j}\right)= 2\vec{j}$ points along the y-axis, perpendicular to the tangent vector.
In fact, $\vec{j}$ is the "unit normal vector" to the curve, and the length of that, 2, is the "curvature" at (0, 0).
Similarly, at t= 1, x= y= 1 so we are at the point (1, 1) on the parabola. The unit tangent vector is $(1+ 4(1^2))^{-1/2}\left(\vec{i}+ 2(1)\vec{j}\right)= \frac{1}{\sqrt{5}}\left(\vec{i}+ 2\vec{j}\right)$ and its derivative is $(1+ 4(1^2))^{-3/2}\left(-4(1)\vec{i}+ 2\vec{j}\right)= \frac{1}{\sqrt{125}}\left(-4\vec{i}+ 2\vec{j}\right)$. I recommend you plot those, not as a curve but actually draw the vectors with tail at (1, 1). But it is easy to see that their dot product is $\frac{1}{25}((1)(-4)+ 2(2))= 0$. The two vectors are, indeed, perpendicular.
The length of that derivative, by the way, is $\sqrt{\frac{1}{125}(16+ 4)}= \sqrt{\frac{20}{125}}= \sqrt{\frac{4}{25}}= \frac{2}{5}$. The curvature of the graph of $y= x^2$ is $\frac{2}{5}$ at (1, 1).
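The two curvature values quoted here for y = x^2 (2 at the origin and 2/5 at (1, 1)) can also be checked numerically; a small illustrative Python sketch (the function name and the finite-difference approach are mine):

```python
import math

def kappa(t, h=1e-5):
    """Curvature of y = x^2 at x = t, computed as |T'(t)| by central differences."""
    def T(u):
        n = math.sqrt(1.0 + 4.0 * u * u)   # |r'(u)| for r(u) = (u, u^2)
        return (1.0 / n, 2.0 * u / n)      # unit tangent components
    dTx = (T(t + h)[0] - T(t - h)[0]) / (2 * h)
    dTy = (T(t + h)[1] - T(t - h)[1]) / (2 * h)
    return math.hypot(dTx, dTy)
```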
As a simple example, try this with the circle of radius 5 with center at the origin: parametric equations x= 5cos(t), y= 5sin(t). Find the unit tangent vector and its derivative. You should see that the unit tangent vector is always, of course, tangent to the circle and that its derivative always points toward the origin, the center of the circle.
A very simple example is the | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 35, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9258797764778137, "perplexity": 380.2831061997712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430460196625.80/warc/CC-MAIN-20150501060316-00015-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/extinction-coefficient-from-time-series-data.873369/ | # Extinction Coefficient from Time series data
1. May 26, 2016
### lee403
I have some time series data of the absorbance of Br2 formation using UV Vis spectroscopy and I need to figure out the extinction coefficient/ absorptivity.
The overall reaction is
BrO₃⁻ + 5Br⁻ + 6H⁺ → 3Br₂ + 3H₂O
which is expected to go to completion
I know that the equation relating absorbance to concentration is
A=εcl
and I have time series A measurements and can calculate the initial concentrations of BrO₃⁻ and Br⁻ and the expected concentration of Br₂ from the solutions I made. I just need to find ε.
I first attempted to plot the absorbance v. time and find the slope where it was most linear but I don't know how valid this approach is.
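For reference, solving A = εcl for ε is a one-liner once a reliable (A, c) pair is in hand. The sketch below is illustrative only; the function name, units, and example numbers are mine, not from this experiment:

```python
def molar_absorptivity(absorbance, concentration_mol_per_L, path_cm=1.0):
    """Beer-Lambert law A = eps * c * l, solved for eps (L mol^-1 cm^-1)."""
    return absorbance / (concentration_mol_per_L * path_cm)

# Example: A = 0.5 measured for a 0.005 mol/L solution in a 1 cm cuvette
print(molar_absorptivity(0.5, 0.005))
```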
2. May 26, 2016
### Staff: Mentor
It is about as good as it can be.
I would record an additional point after waiting for some time to make sure the amount of Br2 produced is just stoichiometric. That would give a good calibration point.
Besides, if all you are after is a time series (for kinetic measurements), all you are interested in is the rate of change - are you sure you need absolute values for that?
3. May 26, 2016
### lee403
Maybe I'm thinking about this wrong. I need to know the extinction coefficient for Br2 because in another reaction I measure its loss over time. So I did an initial run for the formation of Br2 in order to determine its extinction coefficient. Then in a second run I added a compound that reacts with it and measured the absorbance again. I am interested in the rate of loss so does that mean the extinction coefficient from the initial run is not an exact value?
4. May 26, 2016
### Staff: Mentor
If you are using the same cuvette, wavelength and the same instrument you don't need extinction coefficient but a calibration curve. Linear regression on the data is typically a way to go.
What I don't get about your setup is why you use time series instead of just making a series of samples of different concentrations?
http://math.stackexchange.com/questions/148297/generalizing-an-alternative-derivation-of-distance?answertab=votes | # Generalizing an alternative derivation of distance
I've been playing around with the Pythagorean theorem trying to find equivalent metrics for distance that don't involve squaring and rooting.
From the definition of cosine it's easy to see that, given a triangle with sides $a, b, c$ and angles $A, B, C$, the length $c$ is simply $a*\cos(B) + b*\cos(A)$.
This works on any triangle, not just right triangles.
Now suppose we want to use this formula as a distance metric in Euclidean space. We'll now label the sides $x, y, d$ where we are given x and y and wish to find d.
According to the above, $d = x*\cos(Y) + y*\cos(X)$ if we can find the angles $X, Y$. If we're given orthogonal axes then it is easy to determine that those angles are $X = \tan^{-1}(x/y)$ and $Y =\tan^{-1}(y/x)$.
This gives us the generalized $d = x*\cos(\tan^{-1}(y/x)) + y*\cos(\tan^{-1}(x/y))$ metric for distance.
• This should work even if x and y do not fall on orthogonal axes (though you'll have to find X and Y differently). Is that useful in any way? If so, I'm sure it's been used before. What have I stumbled upon?
• Is there any (elegant) way to show that the above reduces to $\sqrt{x^2+y^2}$ when $x$ and $y$ are on orthogonal axes?
• how can this be generalized to $n$-space? (it's easy to scale the Pythagorean theorem up to $\sqrt{x^2+y^2+z^2}$ and beyond, but I imagine it would be more complex to scale this).
What do you mean by "equivalent" here? What do you mean by "generalized"? – Qiaochu Yuan May 22 '12 at 16:59
Isn't $\cos(\tan^{-1}(y/x))$ always equal to $x\over\sqrt{x^2 + y^2}$ whenever $x\ne 0$? If so, it seems to me that you haven't avoided the squaring and rooting so much as you've swept them under a trigonometric carpet. – MJD May 22 '12 at 17:09
@MarkDominus I disagree. It depends which one you take as fundamental. Obviously trigonometry and squares are closely related, and obviously you can always transform one into the other, but I'd like to try to derive the Pythagorean theorem from the trig instead of the (more common) inverse. But yes, obviously, the above should reduce to a generalized case of the Pythagorean theorem. – Nate May 22 '12 at 17:15
@QiaochuYuan it seems unnatural to me that triangular distances are found by extrapolating squares, adding them, and then rooting them. I'm trying to do the same thing but with trig. So by "equivalent" I mean "identical given Cartesian coordinates." By "generalized" I mean this: The Pythagorean theorem works in three-dimensional space. How do you add a z dimension to $x*cos(tan^{-1}(y/x)) + y*cos(tan^{-1}(x/y))$? – Nate May 22 '12 at 17:17
@Nate: why does that seem unnatural to you? – Qiaochu Yuan May 22 '12 at 17:23
Is there any (elegant) way to show that $d = x*\cos(\tan^{-1}(y/x)) + y*\cos(\tan^{-1}(x/y))$ reduces to $\sqrt{x^2+y^2}$ when $x$ and $y$ are on orthogonal axes?
First, I don't know that it's safe to claim that your $\tan^{-1}$ expressions are the correct angles if $x$ and $y$ are not orthogonal axes. But, given your expressions, we can do some simplification.
Since you said that $x$ and $y$ are sides, I'll make the assumption that $x,y>0$ (if not, then there are likely to be some issues with the range of the inverse tangent function). We can think of $\tan^{-1}(\frac{x}{y})$ and $\tan^{-1}(\frac{y}{x})$ as the two acute angles in a right triangle with legs $x$ and $y$. The length of the hypotenuse is $\sqrt{x^2+y^2}$. So, $$\cos\left(\tan^{-1}\left(\frac{y}{x}\right)\right)=\frac{x}{\sqrt{x^2+y^2}}$$ and $$\cos\left(\tan^{-1}\left(\frac{x}{y}\right)\right)=\frac{y}{\sqrt{x^2+y^2}},$$ which makes your distance expression \begin{align} d &= x*\cos(\tan^{-1}(y/x)) + y*\cos(\tan^{-1}(x/y)) \\&=\frac{x^2}{\sqrt{x^2+y^2}}+\frac{y^2}{\sqrt{x^2+y^2}} \\&=\frac{x^2+y^2}{\sqrt{x^2+y^2}} \\&=\frac{\sqrt{x^2+y^2}\cdot\sqrt{x^2+y^2}}{\sqrt{x^2+y^2}} \\&=\sqrt{x^2+y^2}. \end{align}
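A quick numerical sanity check of this identity for positive $x$ and $y$ (an illustrative Python sketch; the function name is mine):

```python
import math

def trig_distance(x, y):
    """d = x*cos(arctan(y/x)) + y*cos(arctan(x/y)); valid for x, y > 0."""
    return x * math.cos(math.atan(y / x)) + y * math.cos(math.atan(x / y))

# For the 3-4-5 right triangle this should give 5:
print(trig_distance(3.0, 4.0))
```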
https://forum.allaboutcircuits.com/threads/instantaneous-forward-voltage.6314/ | # Instantaneous forward voltage
#### dick_girard
Joined Jun 22, 2007
3
Hi,
I am looking at a power diode that only shows the "instantaneous" forward voltage drop across the diode at a specified current. Most other diodes show the "forward drop" (not the "instantaneous" forward drop) with current.
My question is: How do I determine the "constant" forward voltage drop? Is this the same as the instantaneous drop?
I am looking at the Vishay FP30AB dual-diode. They show an "instantaneous" drop of 0.95V at 15A per leg. My circuit [a DIY UPS for computer systems] will draw a steady-state current, not instantaneous. Anyone know about this?
Thanks,
Dick
#### Papabravo
Joined Feb 24, 2006
14,847
The two terms don't really go together. The term constant forward voltage drop has no standard interpretation that I am aware of. The term instantaneous forward voltage drop is applied to a circuit where one or more signals has a time varying component. This would be true of a power supply or an inverter. If a pair of values is listed, eg 0.95V and 15A, then it is an average over many devices and represents a typical value. Minimum and maximum values are presumed to represent the three sigma points on a normal distribution. A graph showing the typical and extreme I-V characteristic may provide additional insight.
With that data point and some other parameters from the datasheet it should be possible to come up with an analytical expression for the current as a function of forward voltage drop.
#### n9352527
Joined Oct 14, 2005
1,198
Instantaneous in this case means that the measurement was performed using pulsed, instead of continuous, excitation (current or voltage). This is not a new measurement technique; it is regularly used in semiconductor measurements and often creeps into datasheets as well.
The main difference between pulsed and continuous measurements is that in pulsed measurements the thermal effect of applying voltage or current (increase in junction temperature) to the device under test (DUT) is minimised. Increase in temperature would change the DUT characteristics and also destroy the DUT (if measuring sufficiently large voltage or current).
The relationship between instantaneous and continuous parameters is governed by the specific temperature coefficient of the parameter. For example, suppose a diode has an instantaneous Vf of 1V at If 10A at Ta of 25degC. The continuous Vf at If 10A at Ta 25degC would not be 1V, because the junction temperature would be different due to P = Vf*If. This is similar to temperature derating, only now the temperature increase comes from Tj and not Ta. The calculations are pretty similar, except that power is given by Vf and If, which depend on each other. The easiest approach is probably to use the Vf-If graphs at different temperatures on the datasheet.
Most pulsed measurements in semiconductors are performed with a 300us pulse and less than 1% duty cycle, which are commonly quoted in datasheets. These are the values where the thermal contribution to Tj can be safely ignored. Either that, or because all of us use Tektronix curve tracers which only support 300us and 80us pulses, and the 80us pulses on early models had a rise-time problem.
#### dick_girard
Joined Jun 22, 2007
3
OK, I understand what you say.
I want to use a full-wave bridge rectifier. The output is smoothed with a large cap. So the output voltage is the peak-to-peak value of the RMS voltage into the bridge, minus the drop across two diodes at 16A. Since the output is smoothed, there are no pulses except when you first turn it on.
So I need to know what the constant [i.e. steady-state] voltage drop will be with a constant current. But the curves only show instantaneous values.
So, how do I find a diode that can handle a constant 16A with Vf = 0.95 if they only show the instantaneous drop? That's my problem. All the diodes I found that can handle that current only show instantaneous voltage.
Thanks,
Dick
#### John Luciani
Joined Apr 3, 2007
477
As was mentioned the instantaneous measurements are used so that temperature
change is not a factor in the measurement.
For a diode the forward voltage changes by -2mV/DegC. As the temperature increases the power you are dissipating in the diode decreases. Your maximum power dissipation
will occur at your lowest operating temperature. There are probably specifications for instantaneous Vf at 25DegC and 125DegC. The 125DegC value should be apx 200mV lower.
Determine the maximum power dissipation and use the thermal resistance value
to calculate operating temperature.
(* jcl *)
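To make the Vf/temperature feedback described above concrete, here is a minimal fixed-point iteration sketch: start from the pulsed (instantaneous) Vf, let the -2mV/degC coefficient and the thermal resistance feed back on each other until the junction temperature settles. All the numeric values (thermal resistance, ambient, even the Vf starting point) are illustrative assumptions, not FP30AB datasheet data.

```python
# Estimate the steady-state (continuous) forward drop from an
# "instantaneous" datasheet point by iterating the thermal feedback.
# All numeric values below are illustrative assumptions, not datasheet data.
VF_25C = 0.95      # instantaneous Vf [V] at If, measured pulsed at Tj = 25 degC
IF = 15.0          # forward current [A]
TC_VF = -0.002     # Vf temperature coefficient [V/degC]
R_TH = 2.0         # junction-to-ambient thermal resistance [degC/W] (assumed)
T_AMB = 25.0       # ambient temperature [degC]

tj = T_AMB
for _ in range(100):                      # fixed-point iteration
    vf = VF_25C + TC_VF * (tj - 25.0)     # Vf at the current junction temp
    tj_new = T_AMB + R_TH * vf * IF       # Tj from power dissipation P = Vf*If
    if abs(tj_new - tj) < 1e-9:
        break
    tj = tj_new

print(f"steady-state Vf ~ {vf:.3f} V at Tj ~ {tj:.1f} degC")
```

With these made-up numbers the loop converges quickly, since each pass only shifts Vf by a few percent of the previous correction.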
#### Ron H
Joined Apr 14, 2005
7,014
OK, I understand what you say.
I want to use a full-wave bridge rectifier. The output is smoothed with a large cap. So the output voltage is the peak-to-peak value of the RMS voltage into the bridge, minus the drop across two diodes at 16A. Since the output is smoothed, there are no pulses except when you first turn it on.
Au contraire! In a rectifier with a smoothing capacitor, the current flows through the diodes in pulses. Better smoothing (bigger capacitor) makes the pulses shorter and higher in amplitude. The reason for this is that the diodes only conduct at the peaks of the sine wave. Current flows out of the cap at a relatively constant rate, but the charge that was lost between peaks all has to be replaced during the short conduction time. Look at the waveforms below.
So I need to know what the constant [i.e. steady-state] voltage drop will be with a constant current. But the curves only show instantaneous values.
So, how do I find a diode that can handle a constant 16A with Vf = 0.95 if they only show the instantaneous drop? That's my problem. All the diodes I found that can handle that current only show instantaneous voltage.
Thanks,
Dick
BTW, I couldn't find FP30AB (or FP30 anything) at Vishay.
#### n9352527
Joined Oct 14, 2005
1,198
The way I usually pick a diode is to choose an acceptable maximum junction temperature first. Calculate delta T with maximum air temperature, then calculate the maximum power using the thermal resistance. Then, pick a point on the Vf If graph that gives lower power dissipation than the calculated maximum power dissipation. If there is no point that satisfies the requirement, then get another beefier diode.
The above is for continuous current; for pulses, we have to look at the appropriate graph (usually thermal dissipation/resistance against pulse width and duty cycle), or use a rough approximation with RMS power and an appropriate safety margin, as the cooling effect is not usually linear with pulse width or duty cycle.
Or just pick one that looks beefy enough if you're not cost sensitive and not manufacturing millions of them.
https://arxiv-export-lb.library.cornell.edu/abs/2206.11738 | math.PR
# Title: On Convergence of a Truncation Scheme for Approximating Stationary Distributions of Continuous State Space Markov Chains and Processes
Abstract: In the analysis of Markov chains and processes, it is sometimes convenient to replace an unbounded state space with a "truncated" bounded state space. When such a replacement is made, one often wants to know whether the equilibrium behavior of the truncated chain or process is close to that of the untruncated system. For example, such questions arise naturally when considering numerical methods for computing stationary distributions on unbounded state space. In this paper, we use the principle of "regeneration" to show that the stationary distributions of "fixed state" truncations converge in great generality (in total variation norm) to the stationary distribution of the untruncated limit, when the untruncated chain is positive Harris recurrent. Even in countable state space, our theory extends known results by showing that the augmentation can correspond to an $r$-regular measure. In addition, we extend our theory to cover an important subclass of Harris recurrent Markov processes that include non-explosive Markov jump processes on countable state space.
Subjects: Probability (math.PR); Numerical Analysis (math.NA)
Cite as: arXiv:2206.11738 [math.PR] (or arXiv:2206.11738v1 [math.PR] for this version)
## Submission history
From: Alex Infanger [view email]
[v1] Thu, 23 Jun 2022 14:36:03 GMT (17kb)
http://mathhelpforum.com/advanced-statistics/200145-advice-right-statistical-distribution.html | # Math Help - Advice on the right statistical distribution
1. ## Advice on the right statistical distribution
Please move this to non-advanced stats if I have it in the wrong forum.
I am science degree qualified but not with a maths degree, what maths I have had is a bit rusty and i am looking for some advice on a statistical distribution.
I am looking to simulate a node in a network randomly going 'out of action' from time to time, but with a particular probability.
I am unsure of the best distribution and parameters to use.
For example, I may want the node to be 'out of action' say 10% of the time, but I think a straight 0.1 chance to go off and 0.9 chance to go back on will make the node go off for a lot of very short periods. I think I may need another parameter, and maybe the chance of going off should increase around some average off time.
Can I use the Poisson or binomial somewhere here?
I wonder if I could have parameters frequency period fp (eg 1hr (24 times a day)) and mean percentage down time pdt then do something like
randomPoissonDist down(fp * pdt/100)
randomPoissonDist up(fp * (100-pdt)/100)
when up
next_down_time = now+up.next()
when down
next_up_time = now+down.next()
2. ## Re: Advice on the right statistical distribution
I would use an exponential distribution instead; this allows you to take the limit fp -> 0 without the state changing too often or too predictably.
As approximation, you could use this algorithm every (small) timestep dt:
If it is on, set it to off with probability pdt*dt/fp.
If it is off, set it to on with probability (1-pdt)*dt/fp.
The equilibrium is "on" with probability p=(1-pdt*dt/fp)*p + (1-pdt)*dt/fp*(1-p). This is solved by p=1-pdt which is the result you want.
The probability of a change in dt is then given by 2*pdt*(1-pdt)*dt/fp, so the average time between changes is fp/(2*pdt*(1-pdt)).
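As a quick sanity check of this timestep scheme, here is a small simulation sketch (same pdt/fp/dt names as above; the numeric values are arbitrary). The long-run 'on' fraction should land near 1 - pdt:

```python
import random

# Two-state (on/off) simulation of the per-timestep switching scheme above.
# pdt = fraction of time down, fp = cycle time scale, dt = timestep.
random.seed(1)
pdt, fp, dt = 0.1, 1.0, 0.01
on = True
on_steps = 0
n_steps = 1_000_000
for _ in range(n_steps):
    if on and random.random() < pdt * dt / fp:
        on = False          # node goes out of action
    elif not on and random.random() < (1 - pdt) * dt / fp:
        on = True           # node comes back
    on_steps += on

frac_on = on_steps / n_steps
print(f"fraction of time on: {frac_on:.3f}")  # should be close to 1 - pdt = 0.9
```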
http://mathhelpforum.com/differential-geometry/164670-functional-series-uniform-convergence-question.html | # Thread: Functional Series and Uniform Convergence Question
1. ## Functional Series and Uniform Convergence Question
Let a $\in$ (0,1). Show that the functional series
$\displaystyle\sum_{j=0}^{\infty}(-t^2)^j$ where $t \in [-a,a]$
is uniformly convergent with the limit function
$f(t) = \frac{1}{1+t^2}$
I have absolutely no clue where to go from here.
Any help???
2. Originally Posted by garunas
Merely note that $|f_j(t)|=t^{2j}\leqslant a^{2j}$ for $t\in[-a,a]$, but since $\displaystyle \sum_{j=0}^{\infty}a^{2j}$ converges, it follows by the Weierstrass M-test that $\displaystyle \sum_{j=0}^{\infty}f_j(t)$ converges uniformly on $[-a,a]$. To prove what it sums to merely note that $\displaystyle \frac{1}{1-\left(-t^2\right)}=\sum_{j=0}^{\infty}\left(-t^2\right)^j$.
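The M-test bound can also be checked numerically (a quick sketch): the sup-error of the partial sum $S_N$ on $[-a,a]$ should stay below the geometric tail $a^{2(N+1)}/(1-a^2)$.

```python
# Numerically compare sup_{t in [-a,a]} |S_N(t) - 1/(1+t^2)| with the
# geometric tail bound a^(2(N+1)) / (1 - a^2) from the M-test.
a = 0.8
ts = [-a + 2 * a * k / 400 for k in range(401)]   # grid over [-a, a]

def partial_sum(t, N):
    return sum((-t * t) ** j for j in range(N + 1))

for N in (5, 10, 20):
    sup_err = max(abs(partial_sum(t, N) - 1 / (1 + t * t)) for t in ts)
    tail = a ** (2 * (N + 1)) / (1 - a * a)
    assert sup_err <= tail + 1e-12
    print(f"N={N:2d}  sup error={sup_err:.2e}  tail bound={tail:.2e}")
```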
http://specialfunctionswiki.org/index.php/Derivative_of_arcsec | # Derivative of arcsec
The following formula holds: $$\dfrac{\mathrm{d}}{\mathrm{d}z} \mathrm{arcsec}(z) = \dfrac{1}{z^2\sqrt{1-\frac{1}{z^2}}},$$ where $\mathrm{arcsec}$ is the inverse secant function.
If $\theta=\mathrm{arcsec}(z)$ then $\sec(\theta)=z$. Now use implicit differentiation with respect to $z$ and the derivative of secant to see $$\sec(\theta)\tan(\theta) \theta' = 1,$$ or equivalently, $$\dfrac{\mathrm{d}\theta}{\mathrm{d}z} = \dfrac{1}{\sec(\theta)\tan(\theta)} = \dfrac{1}{z\tan(\theta)}.$$ The following image shows that $\tan(\mathrm{arcsec}(z))=\sqrt{z^2-1}$:
Hence substituting back in $\theta=\mathrm{arcsec}(z)$ yields the formula $$\dfrac{\mathrm{d}}{\mathrm{d}z} \mathrm{arcsec}(z) = \dfrac{1}{z\tan(\mathrm{arcsec}(z))} = \dfrac{1}{z\sqrt{z^2-1}}=\dfrac{1}{z^2\sqrt{1-\frac{1}{z^2}}},$$ as was to be shown. (The intermediate form $\frac{1}{z\sqrt{z^2-1}}$ assumes $z>1$; the final expression $\frac{1}{z^2\sqrt{1-\frac{1}{z^2}}}$ is valid for all $|z|>1$.)
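A numerical sanity check of the formula (a sketch, using $\mathrm{arcsec}(z)=\arccos(1/z)$ and a central finite difference, on both sides of the domain):

```python
import math

def arcsec(z):
    # principal inverse secant, valid for |z| >= 1
    return math.acos(1.0 / z)

def deriv_formula(z):
    # the closed form derived above
    return 1.0 / (z * z * math.sqrt(1.0 - 1.0 / (z * z)))

# central finite difference vs. the formula, for z > 1 and z < -1
h = 1e-6
for z in (1.5, 2.0, 5.0, -1.5, -3.0):
    numeric = (arcsec(z + h) - arcsec(z - h)) / (2.0 * h)
    assert math.isclose(numeric, deriv_formula(z), rel_tol=1e-5)
print("derivative formula matches finite differences")
```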
https://en.wikipedia.org/wiki/Transitive_relation | # Transitive relation
In mathematics, a binary relation R over a set X is transitive if whenever an element a is related to an element b, and b is in turn related to an element c, then a is also related to c. Transitivity (or transitiveness) is a key property of both partial order relations and equivalence relations.
## Formal definition
In terms of set theory, the transitive relation can be defined as:
${\displaystyle \forall a,b,c\in X:(aRb\wedge bRc)\Rightarrow aRc}$
## Examples
For example, "is greater than", "is at least as great as," and "is equal to" (equality) are transitive relations:
whenever A > B and B > C, then also A > C
whenever A ≥ B and B ≥ C, then also A ≥ C
whenever A = B and B = C, then also A = C.
On the other hand, "is the mother of" is not a transitive relation, because if Alice is the mother of Brenda, and Brenda is the mother of Claire, then Alice is not the mother of Claire. What is more, it is antitransitive: Alice can never be the mother of Claire.
Then again, in biology we often need to consider motherhood over an arbitrary number of generations: the relation "is a matrilinear ancestor of". This is a transitive relation. More precisely, it is the transitive closure of the relation "is the mother of".
More examples of transitive relations: "is a subset of" (set inclusion), "divides" (divisibility), and "implies" (implication).
## Properties
### Closure properties
The converse of a transitive relation is always transitive: e.g. knowing that "is a subset of" is transitive and "is a superset of" is its converse, we can conclude that the latter is transitive as well.
The intersection of two transitive relations is always transitive: knowing that "was born before" and "has the same first name as" are transitive, we can conclude that "was born before and also has the same first name as" is also transitive.
The union of two transitive relations is not always transitive. For instance "was born before or has the same first name as" is not generally a transitive relation.
The complement of a transitive relation is not always transitive. For instance, while "equal to" is transitive, "not equal to" is only transitive on sets with at most one element.
### Other properties
A transitive relation is asymmetric if and only if it is irreflexive.[1]
## Counting transitive relations
No general formula that counts the number of transitive relations on a finite set (sequence A006905 in the OEIS) is known.[2] However, there is a formula for finding the number of relations that are simultaneously reflexive, symmetric, and transitive – in other words, equivalence relations – (sequence A000110 in the OEIS), those that are symmetric and transitive, those that are symmetric, transitive, and antisymmetric, and those that are total, transitive, and antisymmetric. Pfeiffer[3] has made some progress in this direction, expressing relations with combinations of these properties in terms of each other, but still calculating any one is difficult. See also.[4]
Number of n-element binary relations of different types:

| n | all | transitive | reflexive | preorder | partial order | total preorder | total order | equivalence relation |
|------|---------|------------|-----------|----------|---------------|----------------|-------------|----------------------|
| 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 1 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 |
| 2 | 16 | 13 | 4 | 4 | 3 | 3 | 2 | 2 |
| 3 | 512 | 171 | 64 | 29 | 19 | 13 | 6 | 5 |
| 4 | 65536 | 3994 | 4096 | 355 | 219 | 75 | 24 | 15 |
| n | 2^(n^2) | | 2^(n^2-n) | | | sum_{k=0}^{n} k! S(n,k) | n! | sum_{k=0}^{n} S(n,k) |
| OEIS | A002416 | A006905 | A053763 | A000798 | A001035 | A000670 | A000142 | A000110 |
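The values in the transitive column can be verified by brute force for small n (a quick sketch enumerating all binary relations on an n-element set):

```python
from itertools import product

def is_transitive(rel):
    # rel is a set of ordered pairs; check (a,b), (b,c) in rel => (a,c) in rel
    return all((a, c) in rel for (a, b) in rel for (b2, c) in rel if b2 == b)

def count_transitive(n):
    pairs = [(i, j) for i in range(n) for j in range(n)]
    return sum(
        is_transitive({p for p, bit in zip(pairs, bits) if bit})
        for bits in product((0, 1), repeat=n * n)
    )

print([count_transitive(n) for n in range(4)])  # -> [1, 2, 13, 171]
```

This matches sequence A006905 for n = 0..3; n = 4 (65536^... relations) is already out of reach for this naive enumeration beyond small sizes.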
https://cs.stackexchange.com/questions/89822/is-global-non-convex-optimization-np-complete/89825 | # Is global non-convex optimization NP-complete?
Assume I have some non-convex function $f(x_1, x_2, ...)$ and I want to optimize it to find a global minimum. I feel like it is easy to show that this problem is in the class NP with the decision problem
Is there a set of points such that f < C?
Where C is some constant. However, I am not sure if these problems are in the class of NP-Complete, and if so, what would you say the size of the input is? Complexity of the function?
Thanks!
• It's not straightforward to figure out how to formalize this in terms where NP-completeness is applicable. What are the inputs, and what are the desired outputs? Is $f$ fixed, or part of the input? If $f$ is fixed, please specify the function $f$ in the question. If it's part of the input, how is the function $f$ specified? What's the type signature of $f$? Is it continuous ($f:\mathbb{R} \to \mathbb{R}$) or discrete? If it is discrete and specified as a truth table, that takes exponential space, which is problematic. If it id continuous, it can't be specified as a truth table. – D.W. Mar 26 '18 at 17:57
• To show that your problem is NP-hard, try encoding SAT as a non-convex optimization problem. – Yuval Filmus Mar 26 '18 at 18:07
• Even a QP problem with one negative eigenvalue is $\mathcal{NP}$-hard, see link.springer.com/article/10.1007/BF00120662 – Eugene Mar 26 '18 at 19:31
• However, the answer depends on your function. There are nonconvex functions easy to optimize. – Eugene Mar 26 '18 at 19:32
Yes, non-convex optimization is NP-hard. For a simple proof, consider the following reduction from Subset-Sum. The Subset-Sum problem asks whether there is a subset of the input integers $a_1, \dots, a_n$ which sums to zero. To reduce to non-convex programming, let $x_1, \dots, x_n$ be variables encoding the subset and consider the following non-convex program:
\begin{align*} \text{minimize }\quad&(a\cdot x)^2 + \sum_{i=1}^n x_i^2(1 - x_i)^2\\ \text{subject to}\quad& \sum_{i=1}^n x_i \ge 1. \end{align*}
• A different way to obtain an unconstrained problem, replace $(a\cdot x)^2$ with $\left(a\cdot x-\frac{a_1+a_2+\dots + a_n}2\right)^2$ and you obtain a reduction from the (bi-)partition problem. – Daniel Porumbel Oct 5 at 14:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8762180209159851, "perplexity": 406.8473589484189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660323.32/warc/CC-MAIN-20191015205352-20191015232852-00149.warc.gz"} |
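To see the reduction on a toy instance (a sketch; the instance and values below are made up): the objective is zero exactly when x is a 0/1 indicator vector of a nonempty subset summing to zero.

```python
# Evaluate the reduction's objective for a tiny Subset-Sum instance.
def objective(a, x):
    dot = sum(ai * xi for ai, xi in zip(a, x))
    # (a.x)^2 forces the selected subset to sum to zero;
    # sum x_i^2 (1-x_i)^2 forces each x_i toward {0, 1}
    return dot ** 2 + sum(xi ** 2 * (1 - xi) ** 2 for xi in x)

a = [3, -1, -2]                          # the whole set {3, -1, -2} sums to zero
assert objective(a, [1, 1, 1]) == 0      # feasible (sum x_i >= 1), objective 0
assert objective(a, [1, 1, 0]) > 0       # subset {3, -1} does not sum to zero
assert objective(a, [0.5, 0.5, 0.5]) > 0 # fractional x is penalized
print("objective is 0 exactly on zero-sum indicator vectors")
```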
http://math.stackexchange.com/questions/297488/why-is-a-positive-definite-quantity-plotted-as-a-negative-number | # Why is a positive definite quantity plotted as a negative number?
I have two boxes, $A$ and $B$ that exchange mass.
I will call the flux of mass between $A$ and $B$ $ab$; $ba$ is the reverse.
$X = ab -ba$
I am trying to understand the following statement:
$dA/dt$, $dB/dt$, $ab$, and $ba$ are positive definite since they represent a one-way process. Whereas $X$ is a vector with a sign that implies the direction of transfer.
The origin of my confusion is that when the terms $X$, $ab$, and $ba$ are plotted together over time, $ba$ is plotted with negative values. I imagine that this is justified by rewriting the equation as $X=ab + (-ba)$.
I have never seen the term "positive definite".
Here is a diagram of the problem:
Postive definite is a term commonly used in the context of matrix or operator algebra, and also in dynamical systems (although I am only familiar with the former 2). I am confident someone will post a competent answer, but in the meantime, have a look if you like: en.wikipedia.org/wiki/Positive_definiteness. – gnometorule Feb 7 '13 at 20:26
A real valued function $f: X \rightarrow \mathbb{R}$ on an arbitrary set $X$ is called positive-definite if $f(x)>0, \forall x \in X$. The flux is in general not a scalar quantity, because it is described by the magnitude and the direction as well. Since $ab$ denotes the flux from $A$ to $B$, the information about direction is encoded in the ordering of the characters $a$ and $b$. Hence there is no reason for a sign. Consequently, any value of $ab$ will only be positive and will denote the magnitude. The same is true for $ba$. If values of $ba$ are plotted with negative sign, that might just be an abuse of notation, and it's not rigorous. I believe your understanding is correct. In fact, you can see that the diagram is not very precise: instead of $ba$ they plot $-ba$, since if $ba<0$ and $ab>0$, then $X>0$, which is not the case.
When the set $X$ has an algebraic structure, e.g. when it is a monoid with respect to addition with identity element $e$, then we must have $f(x)\ge 0, \forall x \in X$ and $f(x)=0 \Leftrightarrow x=e$.
http://umj-old.imath.kiev.ua/authors/name/?lang=en&author_id=2646 | 2019
# Fedyashev M. S.
Articles: 1
Article (Russian)
### Periodic solutions of a parabolic equation with homogeneous Dirichlet boundary condition and linearly increasing discontinuous nonlinearity
Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1080-1088
We consider a resonance problem of the existence of periodic solutions of parabolic equations with discontinuous nonlinearities and a homogeneous Dirichlet boundary condition. It is assumed that the coefficients of the differential operator do not depend on time, and the growth of the nonlinearity at infinity is linear. The operator formulation of the problem reduces it to the problem of the existence of a fixed point of a convex compact mapping. A theorem on the existence of generalized and strong periodic solutions is proved.
https://www.physicsforums.com/threads/kirchhoffs-rule-finding-unknown-resistances-and-voltages.178474/ | # Kirchhoff's Rule / finding unknown resistances and voltages
1. Jul 27, 2007
### exi
1. The problem statement, all variables and given/known data
Find the current passing through R1 and the voltage passing through the cell to the immediate left of R1.
R1 is 2 Ω, and R2 is 6.5 Ω.
2. Relevant equations
Kirchhoff's Rule; Ohm's Law
3. The attempt at a solution
Not sure if I'm approaching this the correct way. What I had in mind was to do some mesh analysis while considering I(1) to be 3 A, doing Kirchhoff's for the bottom half of the circuit, finding I(2), and using Ohm's to find I through that 2 Ω resistor.
Little unsure about finding the voltage of that mystery cell, though.
2. Jul 27, 2007
### Staff: Mentor
I'd first combine R2 and the 4 Ohm resistor -- no need to keep them separate for this problem. Then ground the right side, at the - side of the 24V voltage source. You then have 2 unknown node voltages that you can write the KCL equations for, and once you solve for them, you have the solutions for the question.
BTW, instead of saying "the voltage passing through the cell to the immediate left of R1", it would be better to say "the voltage across the cell". Current passes through an element in response to the voltage placed across the element.
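Since the attached circuit diagram is not reproduced here, the two-node KCL approach the mentor describes can only be sketched with placeholder values. In the snippet below every conductance and source term is hypothetical; the point is only the mechanics of assembling the node-voltage system G·v = I and back-solving a branch current with Ohm's law.

```python
import numpy as np

# Hypothetical node-voltage setup: the real diagram is not shown above, so the
# conductance matrix G and the source-injection vector I are placeholders.
# KCL at each unknown node yields one row of the linear system G @ v = I.
R1, R2eq = 2.0, 10.5          # R1 = 2 ohm; R2 (6.5 ohm) combined with the 4 ohm
G = np.array([[1/R1 + 1/R2eq, -1/R2eq],
              [-1/R2eq,        1/R2eq + 1/R1]])
I = np.array([3.0, -3.0])     # placeholder current injections at the two nodes
v = np.linalg.solve(G, I)     # the two unknown node voltages
i_R1 = v[0] / R1              # current through R1 via Ohm's law
```

Once the node voltages are known, every branch quantity (including the voltage across the mystery cell) follows from differences of node voltages.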
3. Jul 27, 2007
### exi
Well, that worked beautifully. I kept thinking I couldn't combine terms in the first part of the Kirchhoff's work for some reason; took me a second to realize that a voltage and the product of a current and a resistance value (Ohm's, anyone?) definitely are combinable.
My "duurrrrr" moment for the day, I suppose.
Thanks. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.813385009765625, "perplexity": 1009.7191425741343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00506-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://mathoverflow.net/questions/374742/deformations-of-hopf-manifolds | # Deformations of Hopf manifolds
Recall that a Hopf manifold is a quotient of $$\mathbb C^n\setminus 0$$ by a free action of $$\mathbb Z$$ where the generator acts by a holomorphic contraction.
Question 1. Is it true that any deformation of a Hopf manifold (as a complex manifold) is again a Hopf manifold for $$n\ge 3$$?
Question 2. Is there some kind of classification of Hopf manifolds and their deformations in dimension $$\ge 3$$ (for example $$n=3$$).
Note that for $$n=2$$ the answer to both questions is positive https://en.wikipedia.org/wiki/Hopf_surface | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9500338435173035, "perplexity": 151.44273744375988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305052.56/warc/CC-MAIN-20220127012750-20220127042750-00385.warc.gz"} |
https://www.physicsforums.com/threads/show-companion-matrix-is-similar-to-the-following-matrix.739474/ | # Show companion matrix is similar to the following matrix
1. Feb 20, 2014
1. The problem statement, all variables and given/known data
need to show companion matrix is similar to the following matrix
(here is the picture of the matrix)
2. Relevant equations
here is the companion matrix
http://en.wikipedia.org/wiki/Companion_matrix
information on matrix similarity
http://en.wikipedia.org/wiki/Matrix_similarity
3. The attempt at a solution
say given matrix is A and companion matrix is C then need to show
A = P^-1 * C * P for some invertible matrix P
i guess i could reduce guesswork by rewriting as
P*A = C*P
but even then it does not seem to be the ideal way to go about things.
2. Feb 20, 2014
### kduna
I don't think you put in the right link to the matrix...
3. Feb 20, 2014
oh lol fixed
4. Feb 20, 2014
### kduna
What is the characteristic and minimal polynomial of that matrix? Can you construct the rational canonical form based off of elementary divisors?
5. Feb 20, 2014
I know the characteristic polynomial for the companion matrix, but that is all.
I also know:
if same minimal poly then similar
if same Frobenius canonical form then similar
but no clue how to go about finding them
6. Feb 20, 2014
### kduna
Since your matrix is upper triangular, finding the characteristic polynomial is easy: it is just $\prod_i (x-\lambda_i)$. If you can show that this is the minimal polynomial as well, then you are done.
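To make that hint concrete, here is a small SymPy check. The matrix below is hypothetical (the problem's actual matrix image is not shown in the thread); it just stands in for any upper triangular matrix with distinct diagonal entries. For such a matrix $\prod_i(A-\lambda_i I)=0$ while no proper sub-product can vanish, so the characteristic polynomial equals the minimal polynomial and $A$ is similar to its companion matrix.

```python
import sympy as sp

# Hypothetical upper triangular matrix standing in for the one in the image.
A = sp.Matrix([[1, 2, 3],
               [0, 2, 4],
               [0, 0, 3]])
lam = [1, 2, 3]                      # distinct eigenvalues on the diagonal

# For a triangular matrix the characteristic polynomial is prod (x - lam_i).
x = sp.symbols('x')
charpoly = A.charpoly(x).as_expr()
assert sp.expand(charpoly - sp.Mul(*[(x - l) for l in lam])) == 0

# prod (A - lam_i I) = 0; with distinct eigenvalues every factor is needed,
# so charpoly is also the minimal polynomial, hence A is similar to the
# companion matrix of charpoly.
P = sp.eye(3)
for l in lam:
    P = P * (A - l * sp.eye(3))
assert P == sp.zeros(3, 3)
```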
https://cn.maplesoft.com/support/help/view.aspx?path=ImageTools/ColorTransform&L=C | ColorTransform - Maple Help
ImageTools
ColorTransform
apply a linear transform to the colors of an image
Calling Sequence ColorTransform( img, mat, opts )
Parameters
img - ColorImage or ColorAImage; input image
mat - Matrix; 3 x 3 transformation matrix
opts - (optional) equation(s) of the form option = value; specify options for the ColorTransform command
Options
• inplace = truefalse
Specifies whether the operation is performed in-place. This can be used to avoid allocating memory. The default is false.
• output = Image
Specifies a data structure into which the output is written. This can be used to avoid allocating memory. The size and number of layers must match that of the input. The dimensions of the output image are adjusted so that the row and column indices match the input. The default is NULL.
Description
• The ColorTransform command transforms the color of each pixel of an image using the linear transformation c2=m*c1, where m is a 3 x 3 matrix, c1 is the input color vector of a pixel and c2 is the output color vector of a pixel.
• The img parameter is the input image and must be of type ColorImage or ColorAImage.
• The mat parameter specifies the transformation matrix. It must be a 3 x 3 Matrix.
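For readers outside Maple, the per-pixel transform c2 = m*c1 described above can be sketched in NumPy. This is an illustration of the math, not part of the ImageTools API; the matrix is the RGB-to-YUV matrix used in the Examples section.

```python
import numpy as np

def color_transform(img, mat):
    """Apply c2 = mat @ c1 to every pixel of an H x W x 3 float image."""
    return np.einsum('ij,hwj->hwi', mat, img)

# RGB -> YUV matrix from the Examples section; its inverse maps YUV back to RGB.
M = np.array([[ 0.299,  0.587,  0.114],
              [-0.147, -0.289,  0.436],
              [ 0.615, -0.515, -0.100]])
img_yuv = color_transform(np.random.rand(4, 5, 3), M)     # RGB -> YUV
img_rgb = color_transform(img_yuv, np.linalg.inv(M))      # YUV -> RGB round trip
```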
Examples
> with(ImageTools):
> img_y := Create(100, 200, (r, c) -> evalf(sin(0.0025*(c^2 + r^2)))):
> img_y := 0.2 + 0.4*FitIntensity(img_y):
> img_u := Create(100, 200, (r, c) -> c):
> img_u := -0.436 + 2*0.436*FitIntensity(img_u):
> img_v := Create(100, 200, (r, c) -> r):
> img_v := -0.615 + 2*0.615*FitIntensity(img_v):
> img_yuv := CombineLayers(img_y, img_u, img_v):
> img_rgb1 := YUVtoRGB(img_yuv):
> M := Matrix([[0.299, 0.587, 0.114], [-0.147, -0.289, 0.436], [0.615, -0.515, -0.100]]):
> img_rgb2 := ColorTransform(img_yuv, M^(-1)):
> Embed([img_rgb1, img_rgb2])
https://indico.desy.de/event/12482/contributions/8935/ | # XXIV International Workshop on Deep-Inelastic Scattering and Related Subjects (DIS16)
Apr 11 – 15, 2016
DESY Hamburg
Europe/Berlin timezone
## Recent developments in APFEL
Apr 13, 2016, 4:50 PM
15m
Auditorium (DESY Hamburg)
Structure Functions and Parton Densities
### Speaker
Dr Valerio Bertone (University of Oxford)
### Description
APFEL is a numerical code specialized for PDF fits that provides a fast and accurate solution of the DGLAP equations up to NNLO in QCD and LO in QED. In addition to PDF evolution, APFEL also provides a module that computes deep-inelastic scattering cross sections in several mass schemes up to NNLO in QCD. In this contribution I will present the most recent developments carried out in the APFEL framework. They include: the implementation of the intrinsic charm contributions to the FONLL structure functions, the computation of the polarized evolution up to NNLO in QCD, the small-x resummed evolution up to NLL, the implementation of the single-inclusive cross sections needed for the determination of fragmentation functions (FFs). APFEL is currently used by the NNPDF collaboration and is interfaced to the xFitter public code and thus all these developments are or will be used to improve the determination of PDFs and FFs.
http://gmatclub.com/forum/if-x-y-x-y-then-which-of-the-following-must-be-true-141077.html?fl=similar | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 17 Sep 2014, 10:03
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# If |x|-|y|=|x+y|, then which of the following must be true?
Marcab (Verbal Forum Moderator)
If |x|-|y|=|x+y|, then which of the following must be true? [#permalink] 21 Oct 2012, 22:42
If |x|-|y|=|x+y|, then which of the following must be true?
A. x-y>0
B. x-y<0
C. x+y>0
D. xy>0
E. xy<0
I was unable to find its answer. Hence after trying, I guess the answer is x+y>0.
Please correct me if I am wrong.
Source: Jamboree
MacFauz (Moderator)
Re: if |x|-|y|=|x+y|, then which of the following must be true? [#permalink] 21 Oct 2012, 23:18
If x and y are both equal to 0, then none of the options is true. So if we want to find which MUST be true, then the answer is none. The question should be missing some part, I guess.
If the question states that x and y are non-zero, then we can see that x and y should be of opposite polarity to satisfy the equation.
Illustration :
x = 5, y = -1
1)true 2)false 3)true 4) false 5) true
x= -5, y = 1
1)false 2)true 3)false 4) false 5)true
xy<0
Re: if |x|-|y|=|x+y|, then which of the following must be true? [#permalink] 22 Oct 2012, 01:34
Thanks for the reply, MacFauz. I agree with your illustration, but it would be great if you could go with the algebraic method.
Such modulus questions are painful if one doesn't know the correct approach.
EvaJager (Director)
Re: if |x|-|y|=|x+y|, then which of the following must be true? [#permalink] 22 Oct 2012, 02:24
Marcab wrote:
if |x|-|y|=|x+y|, then which of the following must be true?
1) x-y>0
2) x-y<0
3) x+y>0
4) xy>0
5) xy<0
I was unable to find its answer. Hence after trying, I guess the answer is x+y>0.
Please correct me if I am wrong.
Source: Jamboree
The correct answer is E, but it should be $xy\leq 0$ and not $xy<0$. Otherwise, none of the answers is correct.
The given equality holds for $x=y=0$, for which none of the given answers is correct.
The given equality can be rewritten as $|x| = |y| + |x + y|$.
If $y=0$, the equality becomes $|x|=|x|$, obviously true.
From the given answers, D cannot hold, and A, B or C holds, depending on the value of $x$. The corrected E holds.
If $y>0$, then necessarily $x$ must be negative, because if $x>0$, then $|x+y|>|x|$ (since $x+y>x$), and the given equality cannot hold.
If $y<0$, then necessarily $x$ must be positive, because if $x<0$, then again $|x+y|>|x|$ (since $-x-y>-x$) and the given equality cannot hold.
It follows that $x$ and $y$ must have opposite signs, or $y=0$.
Answer: the corrected version of E, $xy\leq 0$.
Re: if |x|-|y|=|x+y|, then which of the following must be true? [#permalink] 22 Oct 2012, 02:28
Squaring both sides we get:
$(|x| - |y|)^2 = (|x + y|)^2$
$|x|^2 + |y|^2 - 2|x||y| = x^2 + y^2 + 2xy$
So,
$|x||y| = -xy$
So $-xy$ is positive (since a modulus cannot be negative) and hence $xy$ should be negative.
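As a quick numerical sanity check (a sketch, not part of the original thread): brute-forcing integer pairs confirms that the equality forces $xy\leq 0$, and that $xy=0$ can actually occur (e.g. $x=5$, $y=0$), which is why answer E really needs $xy\leq 0$ rather than $xy<0$.

```python
import itertools

# Brute-force check over a grid of integers: whenever |x| - |y| == |x + y|,
# the product xy is never positive, and xy == 0 does occur (e.g. x = 5, y = 0).
pairs = [(x, y) for x, y in itertools.product(range(-10, 11), repeat=2)
         if abs(x) - abs(y) == abs(x + y)]
assert all(x * y <= 0 for x, y in pairs)
assert any(x * y == 0 for x, y in pairs)   # strict xy < 0 would fail here
```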
Re: if |x|-|y|=|x+y|, then which of the following must be true? [#permalink] 22 Oct 2012, 02:57
MacFauz, the solution was perfect. Thanks.
Re: if |x|-|y|=|x+y|, then which of the following must be true? [#permalink] 22 Oct 2012, 03:02
Many thanks for the explanation.
It would be great if you could elaborate on how to solve split modulus questions such as the one given above.
Re: if |x|-|y|=|x+y|, then which of the following must be true? [#permalink] 22 Oct 2012, 03:26
MacFauz wrote:
So -xy is positive (since modulus cannot be negative) and hence xy should be negative.
OR 0, that's why the correct answer should be $xy\leq 0$.
Otherwise, very nice solution.
Re: if |x|-|y|=|x+y|, then which of the following must be true? [#permalink] 22 Oct 2012, 04:01
EvaJager wrote:
OR 0, that's why the correct answer should be $xy\leq 0$.
I was solving on the basis of my previous comment where I had just added the phrase "where x and y are non zero" to the question.
But seeing as how it is much more probable to leave out a <= sign than an entire sentence, I guess the question frame is right and the answer should be xy <= 0
http://math.stackexchange.com/questions/292680/why-does-the-term-frac1n-1-2n-4-choose-n-2-counts-the-number-of-possi | # Why does the term ${\frac{1}{n-1}} {2n-4\choose n-2}$ counts the number of possible triangulations in a polygon?
In the given picture below, the number of different triangulations of a polygon is counted. How do they get to this expression? Why is it:
$${2n-4\choose n-2}$$
and why do we multiply it by $${\frac{1}{n-1}}$$
In this book they don't explain this issue.
I took the counting expression from here:
Every triangulation of an $n$-sided polygon can be associated with a binary tree with $(n-2)$ nodes, since there are $(n-2)$ triangles in the triangulation. Fix a side of the polygon, then consider the triangle on that side. It can have one or two neighbours, left or right. The triangles with only one neighbour are the ones that share a side (or two, in the case of the starting triangle) with the perimeter of the polygon, and the triangles with no neighbours are the ones that share two sides with the perimeter of the polygon.

On the other hand, every rooted binary tree with $(n-2)$ nodes (where a right child differs from a left child) gives a different triangulation. Moreover, every rooted binary tree with $(n-2)$ nodes can be encoded as a string of $(2n-4)$ balanced parentheses: assign the encoding "()" to every leaf, then recursively assign the encoding "(encoding-of-the-left-child) encoding-of-the-right-child" to every parent, until the root is reached. A pair of parentheses is added every time an edge of the tree is traversed. Moreover, every string of $(2n-4)$ balanced parentheses can be uniquely "decoded" into a rooted binary left/right tree with $(n-2)$ nodes.

The last bijection is between the strings of $(2n-4)$ balanced parentheses and the up-or-right paths in a $(n-2)\times (n-2)$ square grid: starting from the lower left corner, we move up every time we encounter a "(" and to the right every time we encounter a ")"; the balancing condition is equivalent to the fact that we never cross the diagonal of the grid. Now see the second proof on Wikipedia about Catalan numbers and everything is done.
Draw a regular $n$-gon, and pick one of the sides. Once you triangulate it, this side will be on one of the triangles. If we remove this triangle, we are left with two triagulated polygons. The triangulations we find on them are arbitrary, so this gives us a recurrence. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8209584355354309, "perplexity": 115.57669729077075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701149548.13/warc/CC-MAIN-20160205193909-00094-ip-10-236-182-209.ec2.internal.warc.gz"} |
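The two answers above can be cross-checked numerically: the closed form $\frac{1}{n-1}\binom{2n-4}{n-2}$ is the Catalan number $C_{n-2}$, and it agrees with the remove-a-triangle recurrence. A small sketch:

```python
from math import comb

def triangulations_closed(n):
    """Closed form: C_{n-2} = (1/(n-1)) * binom(2n-4, n-2)."""
    return comb(2 * n - 4, n - 2) // (n - 1)

def triangulations_rec(n, memo={2: 1}):
    """Remove the triangle on a fixed side; its apex splits the n-gon into
    a k-gon and an (n - k + 1)-gon, giving the Catalan recurrence."""
    if n not in memo:
        memo[n] = sum(triangulations_rec(k) * triangulations_rec(n - k + 1)
                      for k in range(2, n))
    return memo[n]

assert all(triangulations_closed(n) == triangulations_rec(n) for n in range(3, 15))
```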
https://www.physicsforums.com/threads/how-do-i-calculate-the-volume-of-this.156563/ | # How do I calculate the volume of this?
1. Feb 15, 2007
### -EquinoX-
What is the volume of the region bounded by y = x^2, y = 1, and the y-axis rotated around the y -axis
2. Feb 15, 2007
### mathwonk
Define a volume function, as a function of y, by letting V(y) be that portion of the volume lying below height y. Then the derivative of this function is the area of the circular face of this portion of volume, i.e. dV/dy = πr^2 where r = x = sqrt(y), so dV/dy = πy. So you guess a formula for V(y) and then plug in y = 1.
3. Feb 15, 2007
### -EquinoX-
I should do this using an integral. I was thinking of slicing this region vertically, but then how do I represent this in integrals? Do I have to take the integral from -1 to 1?
I would evaluate this as the integral from 0 to 1 of (1 - square root of y)^2 dy.
Is that right?
Last edited: Feb 15, 2007
4. Feb 16, 2007
### HallsofIvy
Why in the world would you slice it vertically? Since it is rotated around the y-axis, slices horizontally will be disks with radius x. Each would have area $\pi x^2= \pi y$ and each infinitesmal disc will have volume $\pi y dy$. Integrate that.
5. Feb 16, 2007
### -EquinoX-
I am sorry, that's my mistake. I would slice it horizontally and take the integral from 0 to 1. And shouldn't the radius be 1 - square root of y? Because it's the intersection of y = x^2 and y = x.
6. Feb 16, 2007
### HallsofIvy
Where did you get y= x from? Your original post was:
Every horizontal "slice" is a circle with center at the y-axis, x= 0, and the end of a radius at $x= \sqrt{y}$. Of course, the area of the disk is $\pi x^2= \pi y$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9145525097846985, "perplexity": 924.3753851341868}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645405.20/warc/CC-MAIN-20180317233618-20180318013618-00062.warc.gz"} |
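The disk setup in the last post integrates to $V=\int_0^1 \pi y\,dy = \pi/2$; a quick midpoint-rule check (a sketch, not from the thread) confirms the value:

```python
import math

# Disk method for the region bounded by y = x^2, y = 1 and the y-axis,
# rotated about the y-axis: each horizontal slice is a disk of area
# pi * x^2 = pi * y, so V = integral_0^1 pi * y dy = pi / 2.
n = 100_000
dy = 1.0 / n
V = sum(math.pi * ((i + 0.5) * dy) * dy for i in range(n))   # midpoint rule
assert abs(V - math.pi / 2) < 1e-9
```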
http://rpg.stackexchange.com/questions/28533/how-long-does-a-binding-last | How long does a Binding last?
The Binding advantage in GURPS allows a character to hold another character in place, in a way similar to being grappled by a creature with ST equal to the level of your Binding.
My problem is that the advantage seems annoyingly non-specific about exactly how long the binding lasts. If I have a strong enough Binding, can I keep someone held forever? It seems like that's what the advantage is implying, but it seems kind of weird that Binding would be effectively permanent, given a high enough level of Binding.
http://www.dadisp.com/webhelp/mergedProjects/refman2/FncrefSZ/SPECTRUM.htm

# SPECTRUM
## Purpose:
Returns the normalized magnitude of the FFT.
## Syntax:
SPECTRUM(series, len)
series - Any series or multi-column table.

len - Optional. An integer, the FFT length. Defaults to the length of the input series. If len > length(series), the series is padded with zeros.
## Returns:
A real series or table.
## Example:
W1: gsin(100, 0.01, 4)*5;setvunits("V")
W2: spectrum(W1)
Max(W2) occurs at 4 Hz with an amplitude of 5. The length of W2 is 51 points.
## Example:
W3: spectrum(W1, 2048)
Same as above except a 2048 point FFT is used to calculate the Spectrum, resulting in a 1025 point series.
## Example:
fn := 1.0
W1: gsin(100,.01,fn);label(sprintf("Frequency: %g", fn))
W2: spectrum(W1, 1024)
fn:=1;while(fn<=100, fn++)
W2 displays a remarkably simple demonstration of aliasing errors due to undersampling the sinewave in W1.
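The aliasing in the loop above has a simple arithmetic core: at a 100 Hz sampling rate, a 99 Hz sine produces exactly the same samples as a negated 1 Hz sine, since sin(2*pi*99*n/100) = sin(2*pi*n - 2*pi*n/100) = -sin(2*pi*1*n/100). A small sketch of that identity in plain Python (not DADiSP), assuming the same 100 Hz rate as W1:

```python
import math

fs = 100.0                                               # sampling rate, as in gsin(100, .01, fn)
s99 = [math.sin(2 * math.pi * 99 * k / fs) for k in range(100)]  # 99 Hz sine, sampled
s01 = [math.sin(2 * math.pi * 1 * k / fs) for k in range(100)]   # 1 Hz sine, sampled

# Sample for sample, the 99 Hz tone is indistinguishable from a negated 1 Hz tone,
# so its magnitude spectrum peaks at 1 Hz rather than 99 Hz.
max_diff = max(abs(a + b) for a, b in zip(s99, s01))
print(max_diff)  # floating-point noise only
```

Any frequency fn above fs/2 folds back to |fs - fn| in the same way, which is what the sweep over fn makes visible.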
## Remarks:
The SPECTRUM is normalized so that a sinewave of amplitude A and frequency F yields a SPECTRUM of amplitude A at frequency F. If the input series is in Volts, the resulting SPECTRUM has units of Volts.
If len is larger than the length of series, the series is padded with zeros to length len before calculating the SPECTRUM. If len is less than the series length, the series is truncated to length len. If not specified, len defaults to the length of series.
The length of the final result is int(fftlen/2) + 1 where the last sample represents the Nyquist frequency.
The SPECTRUM is calculated by the FFT and has the following form:
spectrum(s) = 2*mag(fft(s))/length(s)
with frequency values from 0 to Fs/2 Hz., where Fs is the sampling rate of the data (i.e. rate(s)). The first value (DC component) and the last value (at Fs/2, the Nyquist frequency) are not scaled by 2 to preserve Parseval's theorem.
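The normalization can be sanity-checked outside DADiSP with a short pure-Python DFT. This sketch mirrors the formula 2*mag(fft(s))/length(s); leaving the DC and Nyquist bins unscaled follows the convention described here, and no claim is made about DADiSP's actual internals. It reproduces the first example: a 5 V sine at 4 Hz sampled at 100 Hz gives a 51-point spectrum peaking at bin 4 with amplitude 5.

```python
import cmath
import math

def spectrum(s):
    """Normalized magnitude spectrum: 2*|DFT(s)[k]|/N for interior bins,
    |DFT(s)[k]|/N for the DC and Nyquist bins."""
    n = len(s)
    half = n // 2
    out = []
    for k in range(half + 1):
        x = sum(s[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        scale = 1.0 if k in (0, half) else 2.0
        out.append(scale * abs(x) / n)
    return out

fs, amp, f0 = 100.0, 5.0, 4.0
s = [amp * math.sin(2 * math.pi * f0 * t / fs) for t in range(100)]
spec = spectrum(s)
# bin spacing is fs/N = 1 Hz, so the peak sits at index 4 with amplitude 5
print(len(spec), spec.index(max(spec)))
```

A production version would use an FFT rather than this O(N^2) direct DFT; the direct form is only meant to make the scaling explicit.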
See AMPSPEC to compute the normalized complex amplitude spectrum.
See MAGSPEC to compute the magnitude of the normalized complex amplitude spectrum.
See POWSPEC to compute the power spectrum.
See PSD to compute the Power Spectral Density.
See SPECGRAM to compute a joint time-frequency distribution.
See WINFUNC for a list of windowing functions and amplitude correction schemes.
See NSPECTRUM to compute a N point spectrum by zero padding or time aliasing.
See DADiSP/FFTXL to optimize the underlying FFT computation.
AMPSPEC
DFT
FFT
MAGSPEC
NSPECTRUM
PHASESPEC
POWSPEC
PSD
SPECGRAM
http://vkr.rubesz.cz/nealgebraicke-rovnice-a-nerovnice/logaritmicke-rovnice-2/logaritmicke-rovnice-iii/

# Logarithmic Equations III
Solve the equations for $x\in\mathbb R$:
(a) $x^{\sqrt{x}}=x^{\frac x2}$ (b) $x^{\sqrt{x}}=(\sqrt{x})^x$
Solve the equations for $x\in\mathbb R$:
(a) $x^{\log_3x}=3$ (b) $x^{\log(x)-1}=100$ (c) $x^{\log(x)+1}=100$ (d) $\displaystyle\left(\sqrt{x}\right)^{\log x}=100$ (e) $x^{\frac38\log^3x-\frac34\log x}=1000$ (f) $\displaystyle x^{2\log^3x-\frac{3}{2}\log x} =\sqrt{10}$ (g) $(10+x)^{-\log(10+x)}=10^{-4}$ (h) $10(x^{\log x}+x^{-\log x})=101$
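The original worked solutions are not reproduced here, but part (a), $x^{\log_3x}=3$, can be solved by taking $\log_3$ of both sides: $(\log_3 x)^2=1$, so $\log_3 x=\pm1$ and $x=3$ or $x=\tfrac13$. These roots are my own working; a quick numeric check in Python:

```python
import math

def lhs(x):
    # left-hand side of (a): x ** log_3(x)
    return x ** math.log(x, 3)

# Taking log_3 of both sides gives (log_3 x)^2 = 1, i.e. log_3 x = +-1,
# so the candidate roots are x = 3 and x = 1/3.
for x in (3.0, 1.0 / 3.0):
    print(x, lhs(x))  # both evaluate to 3 up to floating-point error
```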
Solve the equations for $x\in\mathbb R$:
(a) $\mathrm e^2\cdot x^{\ln x}=x^3$ (b) $x^{1+\log_{\frac12}x}=\frac x4$ (c) $x^{2\log_{9}x}=9x^{-1}$ (d) $100^{\log(x+7)}= (x+7)^2$ (e) $x^{\log x} - 100x = 0$ (f) $x^{\log x^4}=\dfrac{x^8}{1000}$ (g) $x^{\log_x{\frac15}} = \dfrac{x^2}{125}$ (h) $x^x-x^{-x}=3(1-x^{-x})$
Solve the equations for $x\in\mathbb R$:
(a) $5^{\log x}+x^{\log5}=50$ (b) $5^{\log_2x}+2\cdot x^{\log_25}=15$ (c) $3^{\log_3^2x}+x^{\log_3x}=162$ (d) $2^{\sqrt{\log_2x}}+x^{\sqrt{\log_x2}}=4$
Solve the equations for $x\in\mathbb R$:
(a) $\log_2\left(2^x-\dfrac72\right)=1-x$ (b) $\log_2(5-2^x)=2+2x$ (c) $\log_2(9-2^x)=3-x$ (d) $\log2+\log(4^{x-2}+9)=1+\log(2^{x-2}+1)$ (e) $\log\left( 2^{x}+1\right) +\log \left( 2^{x+1}-1\right) =2\log 3$ (f) $\log10+\dfrac13\log(3^{2\sqrt x}+271)=2$
Solve the equations for $x\in\mathbb R$:
(a) $x^{x^2+x-6}=1$ (b) $x^{2x-4} =x^{x^2-3x+2}$ (c) $(x^2-x-1)^{x+2}=1$
http://www.ibase.com.sg/aay7srw/f7c644-arithmetic-mean-meaning
# Arithmetic Mean: Meaning
The arithmetic mean (or simply mean) of a list of numbers is the sum of all of the numbers divided by how many numbers there are. The mean of a sample x1, x2, . . . , xn, usually denoted x̄, is

x̄ = (x1 + x2 + · · · + xn)/n

For example, the arithmetic mean of the five values 4, 36, 45, 50, 75 is (4 + 36 + 45 + 50 + 75)/5 = 210/5 = 42.

Example: for the eleven wage figures 100, 120, 250, 90, 110, 40, 50, 150, 70, 100, 10, the mean is (100 + 120 + 250 + 90 + 110 + 40 + 50 + 150 + 70 + 100 + 10)/11 = 1090/11 ≈ 99.1.

Example (grouped data): in a class of 45 students, 3 students score 3, 9 score 4, 18 score 6, 12 score 7 and 3 score 9 on a science test. The average score is ((3 × 3) + (4 × 9) + (6 × 18) + (7 × 12) + (9 × 3))/45 = 264/45 ≈ 5.87. This is a weighted arithmetic mean: instead of each data point contributing equally to the final average, each score is weighted by how often it occurs.

Arithmetic mean versus geometric mean. The geometric mean is the multiplicative counterpart of the (additive) arithmetic mean. For a series of n returns it is [(1 + Return1) × (1 + Return2) × · · · × (1 + Returnn)]^(1/n) - 1, as against the arithmetic (Return1 + Return2 + · · · + Returnn)/n. Because it takes the compounding effect into account, the geometric mean is never higher than the arithmetic mean. Most companies report returns as an arithmetic average because it is usually the highest average that can be announced, but the arithmetic return is misleading unless the return earned is fixed for the entire investment period.

Arithmetic mean of two quantities. If a, A, b are in arithmetic progression, the middle term A is the arithmetic mean of a and b: b - A = A - a, each difference equal to the common difference, so A = (a + b)/2. For example, 2, 8, 14 are in arithmetic progression, and the middle term 8 is the arithmetic mean of 2 and 14.
The arithmetic mean is the most commonly used and readily understood measure of central tendency in a data set, and it lets investors and analysts summarize stock prices, economic data and other information with a single representative value.

Example: to find the mean of 2, 7 and 9, add the numbers (2 + 7 + 9 = 18) and divide by how many there are (we added 3 numbers): 18 ÷ 3 = 6. In this case the mean is a whole number, but that is not always the case.

Properties. The arithmetic mean always lies between the smallest and the largest of the numbers in the set. If all the observations taken by a variable are equal to a constant, say k, then the arithmetic mean is also k. In statistics, the analogous quantity for a random variable (the sum of its values divided by the number of values) is called the expectation, expected value, or first moment.
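Both means map directly onto Python's standard library statistics module (statistics.geometric_mean requires Python 3.8+). Reusing the five-value example from the definition above:

```python
import statistics

values = [4, 36, 45, 50, 75]
am = statistics.mean(values)            # (4 + 36 + 45 + 50 + 75) / 5
gm = statistics.geometric_mean(values)  # (4 * 36 * 45 * 50 * 75) ** (1 / 5)
print(am, round(gm, 2))

# The geometric mean never exceeds the arithmetic mean (the AM-GM inequality).
assert gm <= am
```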
https://www.coursehero.com/file/5661687/52/ | {[ promptMessage ]}
Bookmark it
{[ promptMessage ]}
# 5.2 - Dr Doom Lin Alg MATHl850/2050 Section 5.2 Page 1 of 8...
This preview shows pages 1–8. Sign up to view the full content.
This preview has intentionally blurred sections. Sign up to view the full version.
View Full Document
This preview has intentionally blurred sections. Sign up to view the full version.
View Full Document
This preview has intentionally blurred sections. Sign up to view the full version.
View Full Document
This preview has intentionally blurred sections. Sign up to view the full version.
View Full Document
This is the end of the preview. Sign up to access the rest of the document.
Section 5.2. Subspaces.

Definition: Subspace. A subset W of V is called a subspace of V if W is itself a vector space under the addition and scalar multiplication defined on V.

In general, given a subset W of a vector space V, to show that W is a subspace of V, each of the 10 vector space axioms would have to be shown to hold for W. However, because W is a subset of V and uses the same rules of addition and scalar multiplication, most of the axioms are "inherited" from V; that is, most of them don't have to be checked. The next theorem states that, in fact, only two things have to be checked.

Theorem 5.2.1: Subspaces. If W is a set of one or more vectors from a vector space V, then W is a subspace of V if and only if the following conditions hold.
(a) If u and v are vectors in W, then u + v is in W.
(b) If k is any scalar and u is any vector in W, then ku is in W.

Example: Let W consist of the points on the line y = x + 1, so W is a subset of R2. Show that W is NOT a subspace of R2.
The vectors u = (0, 1) and v = (1, 2) lie on the line, but their sum u + v = (1, 3) does not, since 3 ≠ 1 + 1. So W is not closed under addition, and W is NOT a subspace of R2.

Example: The set of points W that lie on a line through the origin is a subset of R3. To show that W is a subspace of R3, we need to show the conditions of Theorem 5.2.1 are satisfied. Write W = {t(a, b, c) : t in R}, where (a, b, c) is a direction vector for the line.
Closure under addition: if u = s(a, b, c) and v = t(a, b, c) are in W, then u + v = (s + t)(a, b, c) is in W.
Closure under scalar multiplication: if u = s(a, b, c) is in W and k is any scalar, then ku = (ks)(a, b, c) is in W.
So W is a subspace of R3.

Example: The set of all points on a plane is a subset of R3. From Example 6 on page 225 of the text, every plane through the origin is a vector space.
Thus, every plane through the origin is a subspace of R3.

Example: In the previous section we saw that the set that contains only the zero vector is a vector space (called the zero vector space). Because every vector space has a zero vector, the zero vector space is a subspace of every vector space. In addition, the vector space V is a subspace of itself.

Subspaces of R2:
- the zero subspace
- lines through the origin
- R2
It turns out that these are the only subspaces of R2.

Subspaces of R3:
- the zero subspace
- lines through the origin
- planes through the origin
- R3
It turns out that these are the only subspaces of R3.

Example: The set P2 of polynomials of degree ≤ 2 (p = p(x) = c0 + c1x + c2x^2) is a subset of F(-∞, ∞), the set of real-valued functions on (-∞, ∞). Show that P2 is a subspace of F(-∞, ∞).
Closure under addition: if p = c0 + c1x + c2x^2 and q = d0 + d1x + d2x^2 are in P2, then p + q = (c0 + d0) + (c1 + d1)x + (c2 + d2)x^2 is again a polynomial of degree ≤ 2, so p + q is in P2.
Closure under scalar multiplication: if p = c0 + c1x + c2x^2 is in P2 and k is any scalar, then kp = (kc0) + (kc1)x + (kc2)x^2 is in P2.
So P2 is a subspace of F(-∞, ∞).

Theorem 5.2.2: Solution space of homogeneous systems. If AX = 0 is a homogeneous linear system of m equations in n unknowns, then the set of solution vectors is a subspace of R^n.

Example: Consider AX = 0 for the 3 × 3 coefficient matrix worked in the notes. Row reduction yields a one-parameter family of solutions X = tv for a fixed vector v, which is a line through the origin, so the solution set is a subspace of R3.

Proof of Theorem 5.2.2. Let W be the solution set of AX = 0.
Closure under addition: if u and v are in W (that is, u and v are solutions, so Au = 0 and Av = 0), then A(u + v) = Au + Av = 0 + 0 = 0, so u + v is in W.
Closure under scalar multiplication: if k is any scalar and u in W, then A(ku) = k(Au) = k0 = 0, so ku in W. So the solution set is a subspace of R^n.

Definition: Linear Combination. A vector w is called a linear combination of the vectors v1, v2, ..., vr if it can be expressed in the form

w = k1 v1 + k2 v2 + ... + kr vr

where k1, k2, ..., kr are scalars.

Examples:
1. In R^n, with the standard unit vectors e1, e2, ..., en, every vector x = (x1, x2, ..., xn) = x1 e1 + x2 e2 + ... + xn en.
2. Consider the two vectors u = (1, -1, 1) and v = (-1, 2, 1) in R3. Show that w = (1, 0, 3) is a linear combination of u and v and that x = (1, 0, 0) is not.

For w: we want scalars k1, k2 (if possible) such that w = k1 u + k2 v, i.e. (1, 0, 3) = k1(1, -1, 1) + k2(-1, 2, 1). Equating entries:
entry 1: k1 - k2 = 1
entry 2: -k1 + 2k2 = 0
entry 3: k1 + k2 = 3
Row reducing the augmented matrix gives k1 = 2, k2 = 1, so w = 2u + v.

For x: we want scalars k1, k2 (if possible) such that x = k1 u + k2 v, giving k1 - k2 = 1, -k1 + 2k2 = 0, k1 + k2 = 0. Row reduction produces an inconsistent row, so there is no solution: x is not a linear combination of u and v.

Theorem 5.2.3. If v1, v2, ..., vr are vectors in a vector space V, then:
(a) The set W of all linear combinations of v1, v2, ..., vr is a subspace of V.
(b) W is the smallest subspace of V that contains v1, v2, ..., vr in the sense that every other subspace of V that contains v1, v2, ..., vr must contain W.

Proof of (a). Check closure under addition: if u, w in W, say u = k1 v1 + k2 v2 + ... + kr vr and w = c1 v1 + c2 v2 + ... + cr vr, then u + w = (k1 + c1) v1 + (k2 + c2) v2 + ... + (kr + cr) vr, so u + w in W. Check closure under scalar multiplication: if u in W and k is any scalar, then ku = (k k1) v1 + (k k2) v2 + ... + (k kr) vr, so ku in W.

Definition: Spanning. If S = {v1, v2, ...
, vr} is a set of vectors in a vector space V, then the subspace W of V consisting of all the linear combinations of the vectors in S is called the space spanned by v1, v2, ..., vr, and we say that the vectors v1, v2, ..., vr span W. To indicate that W is the space spanned by the vectors in the set S, we write

W = span(S) or W = span{v1, v2, ..., vr}

Example:
1. R3 = span{e1, e2, e3}
2. If W denotes the xy-plane in R3, W = span{e1, e2}

Dr. Doom Lin Alg MATH1850/2050 Section 5.2 Page 8 of 8

Theorem 5.2.4. If S = {v1, v2, ..., vr} and S' = {w1, w2, ..., wk} are two sets of vectors in a vector space V, then span{v1, v2, ..., vr} = span{w1, w2, ..., wk} if and only if each vector in S is a linear combination of those in S' and each vector in S' is a linear combination of those in S.

Example: Say S = {e1, e2}, S' = {(1, 2, 0), (1, 1, 0), (3, 1, 0)}. Then span(S) = span(S') if every vector in S' is a linear combination of those in S and vice versa. Each vector of S' has third entry 0, so it is a linear combination of e1 and e2 (for instance, (1, 2, 0) = e1 + 2 e2). Conversely, e2 = (1, 2, 0) - (1, 1, 0) and e1 = -(1, 2, 0) + 2(1, 1, 0), so every vector in S can be expressed as a linear combination of vectors in S'. Hence span(S) = span(S').
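The linear-combination example above (set up equations from the entries, row reduce, check consistency) mechanizes directly. A small sketch in plain Python (the helper names are mine): it solves for k1, k2 from the first two coordinates by Cramer's rule and then verifies the third coordinate, assuming the 2x2 subsystem is nonsingular (it is here, with determinant 1):

```python
def solve_2x2(a11, a12, b1, a21, a22, b2):
    # Cramer's rule for  a11*k1 + a12*k2 = b1,  a21*k1 + a22*k2 = b2
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

def as_combination(u, v, target):
    # Try target = k1*u + k2*v: solve from the first two coordinates,
    # then check the third. Returns (k1, k2), or None if inconsistent.
    k1, k2 = solve_2x2(u[0], v[0], target[0], u[1], v[1], target[1])
    if k1 * u[2] + k2 * v[2] == target[2]:
        return (k1, k2)
    return None

u, v = (1, -1, 1), (-1, 2, 1)
print(as_combination(u, v, (1, 0, 3)))   # (2.0, 1.0), i.e. w = 2u + v
print(as_combination(u, v, (1, 0, 0)))   # None: the system is inconsistent
```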
https://www.physicsforums.com/threads/finding-mass-based-on-amu-and-abundance.259001/ | Finding mass based on amu and % abundance
1. chemstudent
1. The problem statement, all variables and given/known data
Oxygen has 3 isotopes. Oxygen-16 has a mass of 15.995 amu and its natural percent abundance is 99.759. Oxygen-17 has a mass of 16.995 amu and its natural percent abundance is 0.037. Oxygen-18 has a mass of 17.999 amu and its natural percent abundance is 0.204. What is the average atomic mass of oxygen?
2. Relevant equations
I think I forgot these...
3. The attempt at a solution
I couldn't remember what the process is supposed to look like to figure this sort of problem out, so I tried to multiply the mass by the percent abundance, but the answers made very little sense. If somebody could help me figure out where I messed up, PLEASE RESPOND!
2. symbolipoint
The average atomic mass of an element is much like a simple mixture problem from first-year algebra, so you can go straight to a percentage-contributions formula. Convert each percent abundance to a decimal fraction, and weight the isotopic masses given in the problem (not the whole-number mass numbers):

AverageAMU = 0.99759*15.995 + 0.00037*16.995 + 0.00204*17.999 ≈ 16.00
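As a quick check of the arithmetic, here is the same weighted average in a few lines of Python, using the isotopic masses and percent abundances given in the problem:

```python
masses = [15.995, 16.995, 17.999]      # isotopic masses in amu
percents = [99.759, 0.037, 0.204]      # natural percent abundances

# percent abundances must be converted to decimal fractions
avg = sum(m * p / 100 for m, p in zip(masses, percents))
print(round(avg, 4))   # 15.9995, i.e. about 16.00 amu
```

Note the abundances sum to 100%, as they should.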
3. chemstudent
Thank you so much! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9415268301963806, "perplexity": 1036.2119009954147}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446230.43/warc/CC-MAIN-20151124205406-00022-ip-10-71-132-137.ec2.internal.warc.gz"} |
http://mathhelpforum.com/algebra/198647-algebra-simplification-don-t-understand.html | # Math Help - Algebra simplification - don't understand
1. ## Algebra simplification - don't understand
Hi
I'm working on a textbook mathematical induction problem - part of which involves an algebra simplification that I don't understand.
Just wondering if anyone can explain how they get to the right hand side?
((k-1)((k-1)+1)(2(k-1)+1))/6 = ((k-1)k(2k-1))/6
I don't understand how they managed to ditch so many ks and other numbers.
I don't get algebra at all so the most basic explanation would be good.
Thanks
2. ## Re: Algebra simplification - don't understand
Originally Posted by jooby
Hi
I'm working on a textbook mathematical induction problem - part of which involves an algebra simplification that I don't understand.
Just wondering if anyone can explain how they get to the right hand side?
((k-1)((k-1)+1)(2(k-1)+1))/6 = ((k-1)k(2k-1))/6
It's largely arithmetic. For example, -1 + 1 = 0, so the middle factor is k - 1 + 1 = k. And 2(k - 1) = 2k - 2, so 2(k - 1) + 1 = 2k - 2 + 1 = 2k - 1.
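A quick numeric sanity check can also settle doubts about a simplification like this. Both sides are cubic polynomials in k, so if they agree at more than four points they agree identically; the plain-Python sketch below compares the numerators (the common /6 is dropped). Incidentally, both sides divided by 6 give the sum 1^2 + 2^2 + ... + (k-1)^2, which is presumably why the expression shows up in an induction problem.

```python
def lhs(k):
    # numerator of ((k-1)((k-1)+1)(2(k-1)+1))/6
    return (k - 1) * ((k - 1) + 1) * (2 * (k - 1) + 1)

def rhs(k):
    # numerator of ((k-1)k(2k-1))/6
    return (k - 1) * k * (2 * k - 1)

print(all(lhs(k) == rhs(k) for k in range(-10, 11)))   # True
print(rhs(4) // 6, 1 + 4 + 9)                          # 14 14
```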
http://math.stackexchange.com/questions/223356/math-software-for-associative-algebras-usual-not-commutative | # math software for associative algebras (usual not commutative)
Does anyone know if there exists a math program that can work with free associative algebras? By "work with free associative algebras", I mean:
• It can define any ideal $I$, whether left-, right-, or two-sided.
• It can define quotient rings $R/I$ in which I can do addition and multiplication (if it can work only with ideals generated by homogeneous elements, that's OK).
• It can calculate elements of a given degree in the center of an associative algebra (= a quotient of a free algebra); I don't need to find the complete center, just the elements of a given degree.
Additionally, it would be nice if it were free (open source or something like that) and worked on Windows.
Does it do all those things? The only examples I find give finite-dimensional algebras, and $R/I$ only works in the case $I$ is a power ideal (if I copy the code from the sagemath.org quotient-rings page). – KevinDL Oct 29 '12 at 8:38
http://math.stackexchange.com/questions/129777/what-is-the-fastest-most-efficient-algorithm-for-estimating-eulers-constant-g/129808 | # What is the fastest/most efficient algorithm for estimating Euler's Constant $\gamma$?
What is the fastest algorithm for estimating Euler's Constant $\gamma \approx0.57721$?
Using the definition:
$$\lim_{n\to\infty} \sum_{x=1}^{n}\frac{1}{x}-\log n=\gamma$$
I finally get $2$ decimal places of accuracy when $n\geq180$. The third correct decimal place only comes when $n \geq638$. Clearly, this method is not very efficient (it can be expensive to compute $\log$).
What is the best method to use to numerically estimate $\gamma$ efficiently?
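For reference, here is that definition as code (plain Python, my own sketch). It makes the slow convergence concrete: the error behaves like 1/(2n), so every extra correct digit costs roughly ten times as many terms.

```python
import math

GAMMA = 0.5772156649015329   # reference value of Euler's constant

def naive_gamma(n):
    # partial harmonic sum minus log n; error ~ 1/(2n)
    return sum(1.0 / x for x in range(1, n + 1)) - math.log(n)

for n in (10, 100, 1000, 10000):
    print(n, abs(naive_gamma(n) - GAMMA))   # error shrinks ~10x per row
```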
-
The paper "On the computation of the Euler constant $\gamma$" by Ekatharine A. Karatsuba, in Numerical Algorithms 24(2000) 83-97, has a lot to say about this. This link might work for you.
In particular, the author shows that for $k\ge 1$, $$\gamma= 1-\log k \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1}}{(r-1)!(r+1)} + \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1} }{(r-1)! (r+1)^2}+\mbox{O}(2^{-k})$$
and more explicitly \begin{align*} -\frac{2}{(12k)!} - 2k^2 e^{-k} \le \gamma -1+&\log k \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1}}{(r-1)!(r+1)} - \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1} }{(r-1)! (r+1)^2}\\ &\le \frac{2}{(12k)!} + 2k^2 e^{-k}\end{align*} for $k\ge 1$.
Since the series has fast convergence, you can use these to get good approximations to $\gamma$ fairly quickly.
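In double precision, the truncated series is only a few lines; here is a plain-Python transcription (mine, not from the paper). Because the alternating terms grow large before cancelling, float64 round-off limits this version to modest k (around 15 or so); larger k needs exact or multi-precision arithmetic.

```python
import math

def karatsuba_gamma(k):
    # truncated Karatsuba series; the remaining error decays rapidly in k
    s1 = s2 = 0.0
    for r in range(1, 12 * k + 2):
        t = (-1) ** (r - 1) * k ** (r + 1) / math.factorial(r - 1)
        s1 += t / (r + 1)
        s2 += t / (r + 1) ** 2
    return 1.0 - math.log(k) * s1 + s2

print(karatsuba_gamma(15))   # 0.57722..., error below 1e-4
```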
-
... not to be confused with Karatsuba of Karatsuba algorithm. – user2468 Apr 9 '12 at 21:39
Thanks! Here are the results from this series for those who are interested:
k=1: sum=0.7965995992978246, error=0.21938393439629178
k=5: sum=0.5892082678451087, error=0.011992602943575847
k=10: sum=0.5773243590712589, error=1.086941697260313E-4
k=15: sum=0.5772165124955206, error=8.47593987773898E-7
Great series! – Argon Apr 10 '12 at 2:06
Great answer Matthew! – GarouDan Apr 10 '12 at 19:04
@user2468: Ekatherina Karatsuba is Anatolii Karatsuba's daughter. – A. Rex Mar 19 '13 at 4:50
I like $$\gamma = \lim_{n \rightarrow \infty} \; \; \left( \; \; 1 + \frac{1}{2} + \cdots + \frac{1}{n} - \frac{1}{n + 1 } - \cdots - \frac{1}{n^2 } - \frac{1}{n^2 + 1 } - \cdots - \frac{1}{n^2 + n} \; \; \right)$$ because it needs no logarithm and the error is comparable to the final term used.
n sum error n^2 * error
1 0.5 0.07721566490153287 0.07721566490153287
10 0.5757019096925315 0.001513755209001322 0.1513755209001322
100 0.5771991634147917 1.650148674114948e-05 0.1650148674114948
1000 0.5772154984013406 1.665001923001341e-07 0.1665001923001341
10000 0.5772156632363485 1.665184323762503e-09 0.1665184323762503
I found this formula on page 82, the January 2012 issue (volume 119, number 1) of the M. A. A. American Mathematical Monthly. It was sent in by someone named Jouzas Juvencijus Macys, possibly for the Problems and Solutions section. He stopped the sum at $-1/n^2.$ I noticed that the error would be minimized by continuing the sum to $-1/(n^2 + n).$ If you want, you can add a single term $1/(6 n^2)$ to get the error down to $n^{-3}.$
$$\gamma = \lim_{n \rightarrow \infty} \; \; \frac{1}{6n^2} + \left( \; \; 1 + \frac{1}{2} + \cdots + \frac{1}{n} - \frac{1}{n + 1 } - \cdots - \frac{1}{n^2 } - \frac{1}{n^2 + 1 } - \cdots - \frac{1}{n^2 + n} \; \; \right)$$
n sum error
1 0.6666666666666666 -0.08945100176513376
10 0.5773685763591982 -0.0001529114576653834
100 0.5772158300814584 -1.651799255153463e-07
1000 0.5772156650680073 -1.664743898288634e-10
10000 0.5772156649030152 -1.482369782479509e-12
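Both versions are one-liners to test; here is a plain-Python transcription (note that the tail touches n^2 + n terms, so n = 1000 already sums about a million reciprocals):

```python
def macys_gamma(n, corrected=False):
    # 1 + 1/2 + ... + 1/n  minus  1/(n+1) + ... + 1/(n^2 + n),
    # optionally plus the 1/(6 n^2) correction term
    head = sum(1.0 / j for j in range(1, n + 1))
    tail = sum(1.0 / j for j in range(n + 1, n * n + n + 1))
    return head - tail + (1.0 / (6 * n * n) if corrected else 0.0)

print(macys_gamma(100))                  # 0.57719916..., error ~ 1.7e-5
print(macys_gamma(100, corrected=True))  # 0.57721583..., error ~ -1.7e-7
```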
EDIT, December 2013. I just got a nice note, with English preprint, from Prof. Macys. The original article is in Lithuanian in 2008. A Russian version and matching English translation are both 2013: the Springer website is not quite up to Volume 94, number 5, pages 45-50. The English language journal is called Mathematical Notes. Oh, the title is "On the Euler-Mascheroni constant."
If desired, you can put two correction terms to get the error down to $n^{-4}.$
$$\gamma = \lim_{n \rightarrow \infty} \; \; \frac{-1}{6n^3} +\frac{1}{6n^2} + \left( \; \; 1 + \frac{1}{2} + \cdots + \frac{1}{n} - \frac{1}{n + 1 } - \cdots - \frac{1}{n^2 } - \cdots - \frac{1}{n^2 + n} \; \; \right)$$
n sum error
10 0.5772019096925316 1.375520900126492e-05
100 0.5772156634147917 1.486741174616668e-09
600 0.5772156649003506 1.182276498923329e-12
-
The behavior of the approximation error for Macys's series makes it a good candidate for extrapolation methods. Richardson worked nicely (as in the answer I gave), but maybe other methods can do better... – Guess who it is. Apr 10 '12 at 11:38
Finch's Mathematical Constants discusses several papers on computing $\gamma$.
-
A good place for fast evaluation of constants is Gourdon and Sebah's 'Numbers, constants and computation'.
They got $108\cdot 10^6$ digits for $\gamma$ in 1999 (see the end of their 2004 article 'The Euler constant') and propose a free program for high precision evaluation of various constants 'PiFast'.
On his page of constants Simon Plouffe has Euler's constants to 10^6 digits (the file looks much smaller sorry...) using Brent's splitting algorithm (see the 1980 paper of Brent 'Some new algorithms for high-precision computation of Euler’s constant' or more recently 3.1 in Haible and Papanikolaou's 'Fast multiprecision evaluation of series of rational numbers').
It seems that the 1999 record was broken in 2009 by A. Yee & R. Chan with 29,844,489,545 digits 'Mathematical Constants - Billions of Digits' (warning: the torrent file proposed there is more than 11Gb large! An earlier 52Mb file of 'only' 116 million digits is available here using the method proposed by Gourdon and Sebah).
-
(N.B. The previous version of this answer featured both the Brent-McMillan algorithm and the acceleration of Macys's series; I have decided to move the Brent-McMillan material into a new answer in the interest of having only one method per answer.)
The convergence properties of Macys's series in Will's answer can be improved a fair bit, if you're willing to devote some amount of computational effort; due to the $n^{-2}$ behavior of the error, one obvious choice for a convergence acceleration method is Richardson extrapolation.
Skipping some hairy details (which I might include later if I find time, but see Marchuk/Shaidurov if you must), the working formula is
$$\gamma=\lim_{n\to\infty} G_n=2\lim_{n\to\infty} \sum_{i=1}^{n+1} \frac{(-1)^{n-i} i^{2n+2}}{(n+i+1)!(n-i+1)!}\left(\sum_{k=i+1}^{i(i+1)} \frac1{k}-\sum_{k=1}^i \frac1{k}\right)$$
Here are some sample results:
$$\begin{array}{ccc}n&G_n&\gamma-G_n\\10&0.577210083083&5.581818\times10^{-6}\\50&0.577215659731&5.170456\times10^{-9}\\100&0.577215664665&2.362333\times10^{-10}\\200&0.577215664891&1.061648\times10^{-11}\\250&0.577215664898&3.902515\times10^{-12}\\300&0.577215664900&1.721878\times10^{-12}\\350&0.577215664901&8.618620\times10^{-13}\end{array}$$
For higher precision, there isn't much of an improvement; I would still recommend Brent-McMillan if one needs many digits of $\gamma$.
-
On another note: I also like using convergence acceleration methods (e.g. Cohen-Rodriguez Villegas-Zagier or Levin) on the following alternating series for the Stieltjes constants: $$\gamma_n=\frac{(\log\,2)^n}{n+1}\sum_{k=1}^\infty \frac{(-1)^k}{k}B_{n+1}(\log_2 k)$$ where $B_n(x)$ is a Bernoulli polynomial, and $\gamma=\gamma_0$ – Guess who it is. Apr 10 '12 at 5:48
The Macys acceleration is very nice. How soon do we get error below $1.532860 \times 10^{−12},$ so that we see $G_n$ beginning with $0.5772156649...?$ – Will Jagy Apr 10 '12 at 19:06
@Will: I expanded my results a bit... – Guess who it is. Apr 14 '12 at 6:45
I just got a note from Prof. Macys. I think he would like this version; his interest seems to be in keeping to rational functions of $n,$ which I rather like as well. – Will Jagy Dec 2 '13 at 20:42
As it turns out, the convergence of the Karatsuba series presented in Matthew's answer can be improved. This time, the geometric behavior of the error (as can be ascertained from the bounds presented) can be exploited through the use of the Shanks transformation. (Richardson can be made to work here as well, but the results are not as spectacular.)
Letting
$$\varepsilon_0^{(k)}=1-\log(k+1) \sum_{r=1}^{12k+13} \frac{ (-k)^{r+1}}{(r-1)!(r+1)} + \sum_{r=1}^{12k+13} \frac{ (-k)^{r+1} }{(r-1)!(r+1)^2}$$
Wynn's version of the Shanks transformation uses the recursion
$$\varepsilon_{k+1}^{(n)}=\varepsilon_{k-1}^{(n+1)}+\frac1{\varepsilon_{k}^{(n+1)}-\varepsilon_k^{(n)}}$$
It would seem that a two-dimensional array would be required for implementation, but one can arrange things such that only a one-dimensional array is required, through clever overwriting. Here is a Mathematica routine to demonstrate:
wynnEpsilon[seq_?VectorQ] := Module[{n = Length[seq], ep, res, v, w},
res = {};
Do[
ep[k] = seq[[k]];
w = 0;
Do[
v = w; w = ep[j];
ep[j] =
v + (If[Abs[ep[j + 1] - w] > 10^-(Precision[w]), ep[j + 1] - w,
10^-(Precision[w])])^-1;
, {j, k - 1, 1, -1}];
res = {res, ep[If[OddQ[k], 1, 2]]};
, {k, n}];
Flatten[res]
]
(actually the same as the routine presented in this answer).
Here's a comparison of Karatsuba's series, with and without Shanks transformation:
gamprox = Table[N[1 - Log[k]*Sum[(-k)^(r + 1)/((r + 1)*(r - 1)!),
{r, 1, 12*k + 1}] + Sum[(-k)^(r + 1)/((r + 1)^2*(r - 1)!),
{r, 1, 12*k + 1}], 50], {k, 30}];
trans = wynnEpsilon[gamprox];
gamprox[[20]] - EulerGamma // N
1.31827*10^-7
trans[[20]] - EulerGamma // N
6.49869*10^-18
Last[gamprox] - EulerGamma // N
9.96301*10^-12
Last[trans] - EulerGamma // N
2.07059*10^-27
Not too shabby, in my humble opinion...
-
I do not know about the best method, however numerically evaluating the integral $$\gamma = - \int_0^1\!dx\,\ln \ln x^{-1}$$ seems to be pretty efficient.
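As a crude experiment (my own, in plain Python): applying the composite midpoint rule directly to the integrand already gives a few digits, since both endpoint singularities are only logarithmic; but at about 10^5 integrand evaluations for roughly four digits, it is hard to call competitive.

```python
import math

def gamma_by_midpoint(n):
    # gamma = -Integral_0^1 ln(ln(1/x)) dx, composite midpoint rule;
    # midpoints avoid the (integrable) singularities at x = 0 and x = 1
    h = 1.0 / n
    return -h * sum(math.log(math.log(1.0 / ((i + 0.5) * h)))
                    for i in range(n))

print(gamma_by_midpoint(100000))   # 0.5772..., a handful of correct digits
```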
-
Do you mean $$- \int_0^1\!\, \ln \ln x^{-1} dx$$ – Argon Apr 9 '12 at 20:47
@Argon: yes. You can use your favorite numerical method to obtain an approximation to the integral. – Fabian Apr 9 '12 at 20:49
I would then need to compute $\ln \ln \frac{1}{x}$ at several places between $x=0$ and $x=1$ to approximate the definite integral, which again may be quite costly. – Argon Apr 9 '12 at 20:49
@Argon: I'm not sure I understand your concern... – Fabian Apr 9 '12 at 20:52
It takes lots of computational effort simply to find the value of $\log$ to a sufficient accuracy. To do this twice for each summation is very costly and requires much effort. – Argon Apr 9 '12 at 21:44
Hmm, I don't know whether it is actually competing. The Euler-gamma can also be seen as the "regularized" sum of all zetas at the nonpositive integer arguments (mostly expressed in a sum-formula using the Bernoulli numbers). If I do a convergence-acceleration method (in the sense of a Nörlund-matrix summation), I get the following approximations, where the partial sums are documented in steps of 5.
$\qquad \small \begin{array} {ll|ll} k & \text{approx partial sum to k'th term}& k & \text{approx partial sum to k'th term}\\ \hline \\ 1&1/2&6&0.576161647582377561685517908649\\ 11&0.577642454055878876964082277383&16&0.577256945328427287289300010076\\ 21&0.577203007376005733835733501905&26&0.577213676374423017168422213469\\ 31&0.577216385568992428628821604406&36&0.577215824990983093761408431095\\ 41&0.577215639855185823618977575460&46&0.577215658304198651397646593838\\ 51&0.577215664821529245660187000460&56&0.577215664719517597256388852446\\ 61&0.577215665026720261633726731324&66&0.577215664986600655216189453626\\ 71&0.577215664916609466905818220446&76&0.577215664902581218436870655519\\ 81&0.577215664899673837349687879474&86&0.577215664900634540946733895948\\ 91&0.577215664901420597291612350155&96&0.577215664901605693627171524946\\ 101&0.577215664901606197813305786106&106&0.577215664901564816031433598865\\ 111&0.577215664901542872603251577435&116&0.577215664901534551921030743308\\ 121&0.577215664901532778454660696838&126&0.577215664901532679657316339069\\ 131&0.577215664901532833003775498032&136&0.577215664901532904864265239897\\ 141&0.577215664901532914818902560099&146&0.577215664901532899695081822359\\ 151&0.577215664901532883268517660911&156&0.577215664901532871664134738564\\ 161&0.577215664901532865398778282147&166&0.577215664901532862462629396963\\ 171&0.577215664901532861321338522582&176&0.577215664901532860909786316139\\ 181&0.577215664901532860775151258773&186&0.577215664901532860715949714001\\ 191&0.577215664901532860680839731393&196&0.577215664901532860654520816630\\ 201&0.577215664901532860635577825538&206&0.577215664901532860623198773464\\ 211&0.577215664901532860615556602981&216&0.577215664901532860611334824284\\ 221&0.577215664901532860609026420469&226&0.577215664901532860607850525370\\ 231&0.577215664901532860607246781354&236&0.577215664901532860606925091229\\ 241&0.577215664901532860606758778390&246&0.577215664901532860606658539051 \\ \ldots \\ 
\hline &&&0.577215664901532860606512090082 \\ &&&\text{(final value as given by Pari/GP)} \end{array}$
Well, this might not compete because of the computation effort for the coefficients of the Noerlund summation, and it also seems as if the rate/quality of convergence decreases as the number of steps increases, so this should possibly be seen only as a side note.
(Reminder as to how to reproduce the behaviour:
\\Pari/GP, using user-defined procedures
NoerlundSum(1.7,1.0)*ZETA[,1] \\matrix-function NoerlundSum and ZETA-matrix
-
I quite like the Brent-McMillan algorithm myself (which is based on the relationships between the Euler-Mascheroni constant and modified Bessel functions):
$$\gamma=\lim_{n\to\infty}\mathscr{G}_n=\lim_{n\to\infty}\frac{\sum\limits_{k=0}^\infty \left(\frac{n^k}{k!}\right)^2 (H_k-\log\,n)}{\sum\limits_{k=0}^\infty \left(\frac{n^k}{k!}\right)^2}$$
where $H_k=\sum\limits_{j=1}^k \frac1{j}$ is a harmonic number.
It requires the use of a logarithm, but the algorithm is quite simple and reasonably efficient (in particular, we have the inequality $0 < \mathscr{G}_n-\gamma < \pi\exp(-4n)$).
Here's some Mathematica code for the Brent-McMillan algorithm (which should be easily translatable to your language of choice):
n = 50;
a = u = N[-Log[n], n]; b = v = 1;
i = 1;
While[True,
k = (n/i)^2;
a *= k; b *= k;
a += b/i;
If[u + a == u || v + b == v, Break[]];
u += a; v += b;
i++
];
u/v
The integer parameter n controls the accuracy; very roughly, the algorithm will yield n-2 or so correct digits.
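Taking up the invitation to translate: here is the same iteration in plain Python (double precision, so unlike the arbitrary-precision Mathematica version, the attainable accuracy bottoms out near 1e-15; the bound pi*exp(-4n) already drops below that around n = 9).

```python
import math

def brent_mcmillan_gamma(n):
    # basic Brent-McMillan iteration; 0 < G_n - gamma < pi*exp(-4n),
    # up to double-precision round-off
    a = u = -math.log(n)
    b = v = 1.0
    i = 1
    while True:
        r = (n / i) ** 2
        a *= r
        b *= r
        a += b / i
        if u + a == u or v + b == v:
            break
        u += a
        v += b
        i += 1
    return u / v

print(brent_mcmillan_gamma(12))   # error ~ 1e-14 (double-precision limited)
```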
The Brent-McMillan paper also presents more elaborate schemes for computing $\gamma$, such as
$$\gamma=\lim_{n\to\infty}\frac{\sum\limits_{k=0}^\infty \left(\frac{n^k}{k!}\right)^2 (H_k-\log\,n)}{\sum\limits_{k=0}^\infty \left(\frac{n^k}{k!}\right)^2}-\frac{\frac1{4n}\sum\limits_{k=0}^\infty \frac{(2k)!^3}{k!^4 (16n)^{2k}}}{\left(\sum\limits_{k=0}^\infty \left(\frac{n^k}{k!}\right)^2\right)^2}$$
but I have no experience in using them.
-
$\pi e^{-4n}$ is a pretty fast rate of convergence. (+1) Have you investigated the computational efficiency in terms of how many multiplications and additions would be needed to get a given accuracy? – robjohn Jul 26 '12 at 0:26
Not in full, but there's something to be said about it being the method of choice for arbitrary-precision computation of $\gamma$ in computing environments like Maple and Mathematica. – Guess who it is. Jul 26 '12 at 1:27
There is another interesting formula $$\small 1- \gamma = \sum_{k=2}^\infty {\zeta(k)-1\over k}$$ found in mathworld (see eq 123) .
If we simply use approximations to the zetas by truncating their series, and write this in an array
$\small \begin{array} {lll} 1 & 1 & 1 & 1 & \cdots & 1 \\ {1 \over 2^2} & {1 \over 2^3} & {1 \over 2^4} & {1 \over 2^5} & \cdots&{1 \over 2^c}\\ {1 \over 3^2} & {1 \over 3^3} & {1 \over 3^4} & {1 \over 3^5} & \cdots&{1 \over 3^c}\\ \cdots & \cdots & \cdots & \cdots & \cdots & &\\ {1 \over r^2} & {1 \over r^3} & {1 \over r^4} & {1 \over r^5} & \cdots&{1 \over r^c}\\ \hline \zeta_r(2)&\zeta_r(3)&\zeta_r(4)&\zeta_r(5)&\cdots&\zeta_r(c)& \end{array}$
then we can write an approximation-formula for the Euler $\small \gamma$ $$\small 1-\gamma_{r,c} = \sum_{k=2}^c {\zeta_r(k)-1\over k}$$ which depends on the number of rows r and the number of columns c . Now to reduce the number of coefficients needed to arrive at a good approximation
1. we can use the alternating (column-)sums and convert by the eta/zeta-conversion term
2. additionally we can use Eulersummation for convergence acceleration for the (now alternating) $\small \zeta_r(c)$
3. we can even introduce Euler-summation of (small) negative order to accelerate convergence of the sum of zetas (which itself is non-alternating).
If we use all three accelerations, we get a double sum $$\small 1-\gamma_{r,c} = \sum_{k=2}^c \sum_{j=1}^r a_{j,k}{ 1 \over j^k}$$ where the $\small a_{j,k}$ contain the factors due to the denominator in the $\small \gamma$-formula and due to the threefold convergence-acceleration.
I did actually implement this in Pari/GP, and the surprising result was that the best approximations came (using order 0.5 in the Euler summation for the columns and -0.25 for the Euler summation of the approximated zetas) when roughly r=c. Then the number of correct digits was about r/2; so with r=64 and c=64 we get $\small \gamma$ to 31 digits accuracy.
So the effort comes out to be $$\small \text{ # of correct digits} \sim r/2 \qquad \text{ if } r \sim c$$
The cost of computation of the complete array of zeta-terms is thus in principle quadratic in d (the required number of correct digits); for the Euler-sums a vector for the column-acceleration and another vector for the row-acceleration is required whose values can recursively be computed and are thus linear with the number of rows resp the number of columns and thus also linear with d. (The convergence-acceleration (1.) by using the alternating sums costs nearly nothing)
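For a concrete baseline, here is the naive truncated double sum in plain Python with no acceleration at all (my own sketch). It shows why the acceleration machinery above earns its keep: the k = 2 column's zeta tail decays only like 1/r, so the plain truncation's error is about 1/(2r) no matter how many columns are used.

```python
def gamma_zeta_truncated(rows, cols):
    # 1 - sum_{k=2}^{cols} (zeta(k) - 1)/k, truncating each zeta at `rows`
    # terms; the error is dominated by the k = 2 tail, about 1/(2*rows)
    total = 0.0
    for k in range(2, cols + 1):
        zeta_minus_1 = sum((1.0 / j) ** k for j in range(2, rows + 1))
        total += zeta_minus_1 / k
    return 1.0 - total

print(gamma_zeta_truncated(20000, 60))   # 0.57724..., only ~4 correct digits
```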
-
For small enough values of $\zeta$/$\eta$, there is this algorithm described by Borwein. The Euler-transformed $\eta$ series is a special case of this algorithm; the general method is equivalent to performing Cohen-Rodriguez Villegas-Zagier acceleration to the $\eta$ series (which PARI/GP supports as sumalt()). – Guess who it is. Apr 14 '12 at 13:35
@J.M. :true; however it is unknown to me how many operations (terms for the sum) are used by sumalt (I only know it is roughly related to the current float-precision). In the light of some other answers I wanted an explicite description and enumeration of operations which are required for some required correct digits. – Gottfried Helms Apr 14 '12 at 13:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8811352252960205, "perplexity": 1598.1480059174435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988922.24/warc/CC-MAIN-20150728002308-00315-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://www.hh-ri.com/2022/02/04/properties-of-noncommutative-renyi-and-augustin-information/ | # Properties of Noncommutative Renyi and Augustin Information
Hao-Chung Cheng, Li Gao and Min-Hsiu Hsieh
communications in Mathematical Physics 390, pages 501–544 (2022)
ABSTRACT
Rényi and Augustin information are generalizations of mutual information defined via the Rényi divergence, playing a significant role in evaluating the performance of information-processing tasks by virtue of their connection to error exponent analysis. In quantum information theory, there are three generalizations of the classical Rényi divergence, namely the Petz, sandwiched, and log-Euclidean versions, that possess meaningful operational interpretations. However, the associated quantum Rényi and Augustin information are much less explored than their classical counterparts, and the lack of crucial properties hinders applications of these quantities to error exponent analysis in the quantum regime.

The goal of this paper is to analyze fundamental properties of the Rényi and Augustin information from a noncommutative measure-theoretic perspective. Firstly, we prove uniform equicontinuity for all three quantum versions of the Rényi and Augustin information; this yields the joint continuity of these quantities in the order and the prior input distribution. Secondly, we establish the concavity of the scaled Rényi and Augustin information in the region s ∈ (−1, 0) for both the Petz and the sandwiched versions. This settles the open questions raised by Holevo [IEEE Trans. Inf. Theory, 46(6):2256–2261, 2000] and by Mosonyi and Ogawa [Commun. Math. Phys., 355(1):373–426, 2017]. As applications, we show that the strong converse exponent in classical-quantum channel coding satisfies a minimax identity, which means that the strong converse exponent can be attained by the best constant-composition code. The established concavity is further employed to prove an entropic duality between classical data compression with quantum side information and classical-quantum channel coding, and a Fenchel duality in joint source-channel coding with quantum side information.
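The Petz and sandwiched Rényi divergences the abstract refers to are easy to evaluate for finite-dimensional states. A small numpy sketch of the two definitions (definitions only; the paper's Rényi and Augustin *information* additionally optimize over input distributions):

```python
import numpy as np

def mpow(A, p):
    """Fractional power of a Hermitian PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0.0, None) ** p) @ V.conj().T

def petz_renyi(rho, sigma, alpha):
    # Petz version: (alpha-1)^{-1} log Tr[rho^alpha sigma^(1-alpha)]
    return np.log(np.trace(mpow(rho, alpha) @ mpow(sigma, 1.0 - alpha)).real) / (alpha - 1.0)

def sandwiched_renyi(rho, sigma, alpha):
    # Sandwiched version: (alpha-1)^{-1} log Tr[(sigma^((1-alpha)/(2 alpha))
    # rho sigma^((1-alpha)/(2 alpha)))^alpha]; sigma must be full rank for alpha > 1.
    s = mpow(sigma, (1.0 - alpha) / (2.0 * alpha))
    return np.log(np.trace(mpow(s @ rho @ s, alpha)).real) / (alpha - 1.0)

# For commuting (diagonal) states both reduce to the classical Renyi divergence:
rho = np.diag([0.7, 0.3])
sigma = np.diag([0.5, 0.5])
print(petz_renyi(rho, sigma, 2.0), sandwiched_renyi(rho, sigma, 2.0))  # both log(1.16)
```

For noncommuting states the two versions generally differ, which is exactly why the paper must treat them separately.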
https://iffgit.fz-juelich.de/spex/spex-docs/-/commit/9b776d6a7d8cc297a2207f399293bcdbe55fbcf0?expanded=1

Commit 9b776d6a by Anoop Chandran
### Merge branch 'master' into fix-labels
parents accfc0bc 108761af
span.eqno { float: right; }
... ... @@ -33,10 +33,10 @@

GCUT (MBASIS)
-------------

If :math:`{N}` is the number of LAPW basis functions, one would naively expect the number of product functions to be roughly :math:`{N^2}`. In the case of the interstitial plane waves, this is not so, since, with a cutoff :math:`{g_\mathrm{max}}`, the maximum momentum of the product would be :math:`{2g_\mathrm{max}}`, leading to :math:`{8N}` as the number of product plane waves. Fortunately, it turns out that the basis size can be much smaller in practice. Therefore, we introduce a reciprocal cutoff radius :math:`{G_\mathrm{max}}` for the interstitial plane waves and find that, instead of :math:`{G_\mathrm{max}=2g_\mathrm{max}}`, good convergence is achieved already with :math:`{G_\mathrm{max}=0.75g_\mathrm{max}}`, the default value. The parameter :math:`{G_\mathrm{max}}` can be set to a different value with the keyword GCUT in the section MBASIS of the input file.

+---------+------------+----------------------------------------------------------------+
| Example | GCUT 2.9   | Use a reciprocal cutoff radius of 2.9 :math:`Bohr^{-1}`        |
|         |            | for the product plane waves.                                   |
+---------+------------+----------------------------------------------------------------+

.. _lcut:

... ... @@ -54,14 +54,17 @@ In the LAPW basis, the matching conditions at the MT boundaries require large :m

+----------+--------------------------+------------------------------------------------------------------+
| Examples | SELECT 2;3               | Use products of :math:`{u_{lp} u_{l'p'}}` with :math:`{l\le 2}`  |
|          |                          | and :math:`{l'\le 3}` for :math:`{p=0}` (so-called :math:`{u}`)  |
|          |                          | and no function of :math:`{p=1}` (so-called :math:`{\dot{u}}`).  |
+----------+--------------------------+------------------------------------------------------------------+
|          | SELECT 2,1;3,2           | Same as before but also include the :math:`{\dot{u}}` functions  |
|          |                          | with :math:`{l\le 2}` and :math:`{l'\le 3}`.                     |
+----------+--------------------------+------------------------------------------------------------------+
|          | SELECT 2,,1100;3,,1111   | Same as first line but also include the local orbitals           |
|          |                          | :math:`{p\ge 2}`, which are selected (deselected) by "1" ("0"):  |
|          |                          | here, the first two and all four LOs, respectively. The default  |
|          |                          | behavior is to include semicore LOs but to exclude the ones at   |
|          |                          | higher energies.                                                 |
+----------+--------------------------+------------------------------------------------------------------+
|          | SELECT 2,1,1100;3,2,1111 | Same as second line with the LOs.                                |
+----------+--------------------------+------------------------------------------------------------------+

... ... @@ -73,21 +76,25 @@

TOL (MBASIS)
------------

(*) The set of MT products selected by SELECT can still be highly linearly dependent. Therefore, in a subsequent optimization step one diagonalizes the MT overlap matrix and retains only those eigenfunctions whose eigenvalues exceed a predefined tolerance value. This tolerance is 0.0001 by default and can be changed with the keyword TOL in the input file.

+---------+---------------+--------------------------------------------------------------+
| Example | TOL 0.00001   | Remove linear dependencies that fall below a tolerance       |
|         |               | of 0.00001 (see text).                                       |
+---------+---------------+--------------------------------------------------------------+

OPTIMIZE (MBASIS)
-----------------

The mixed product basis can still be quite large. In the calculation of the screened interaction, each matrix element, when represented in the basis of Coulomb eigenfunctions, is multiplied by :math:`{\sqrt{v_\mu v_\nu}}` with the Coulomb eigenvalues :math:`{\{v_\mu\}}`. This gives an opportunity for reducing the basis-set size further by introducing a Coulomb cutoff :math:`{v_\mathrm{min}}`. The reduced basis set is then used for the polarization function, the dielectric function, and the screened interaction. The parameter :math:`{v_\mathrm{min}}` can be specified after the keyword OPTIMIZE MB in three ways: first, as a "pseudo" reciprocal cutoff radius :math:`{\sqrt{4\pi/v_\mathrm{min}}}` (which derives from the plane-wave Coulomb eigenvalues :math:`{v_\mathbf{G}=4\pi/G^2}`), second, directly as the parameter :math:`{v_\mathrm{min}}` by using a negative real number, and, finally, as the number of basis functions that should be retained when given as an integer. The so-defined basis functions are mathematically close to plane waves. For testing purposes, one can also enforce the usage of plane waves (or rather projections onto plane waves) with the keyword OPTIMIZE PW, in which case the Coulomb matrix is known analytically. No optimization of the basis is applied if OPTIMIZE is omitted.

+----------+---------------------+--------------------------------------------------------------+
| Examples | OPTIMIZE MB 4.0     | Optimize the mixed product basis by removing                 |
|          |                     | eigenfunctions with eigenvalues below :math:`{4\pi/4.0^2}`.  |
+----------+---------------------+--------------------------------------------------------------+
|          | OPTIMIZE MB -0.05   | Optimize the mixed product basis by removing                 |
|          |                     | eigenfunctions with eigenvalues below 0.05.                  |
+----------+---------------------+--------------------------------------------------------------+
|          | OPTIMIZE MB 80      | Retain only the eigenfunctions with the 80 largest           |
|          |                     | eigenvalues.                                                 |
+----------+---------------------+--------------------------------------------------------------+
|          | OPTIMIZE PW 4.5     | Use projected plane waves with the cutoff                    |
|          |                     | :math:`{4.5\,\mathrm{Bohr}^{-1}}` (for testing only, can be  |
|          |                     | quite slow).                                                 |
+----------+---------------------+--------------------------------------------------------------+

In summary, there are a number of parameters that influence the accuracy of the basis set. Whenever a new physical system is investigated, it is recommendable to converge the basis set for that system. The parameters to consider in this respect are GCUT, LCUT, SELECT, and OPTIMIZE.

... ... @@ -124,7 +131,7 @@

FFT (WFPROD)
------------

When the interaction potential is represented in the mixed product basis, the coupling to the single-particle states involves projections of the form :math:`{\langle M_{\mathbf{k}\mu} \phi_{\mathbf{q}n} | \phi_{\mathbf{k+q}n'} \rangle}`. The calculation of these projections can be quite expensive. Therefore, there are a number of keywords that can be used for acceleration. Most of them are, by now, somewhat obsolete. An important keyword, though, is FFT in the section WFPROD of the input file.
When used, the interstitial terms are evaluated using Fast Fourier Transformations (FFTs), i.e., by transforming into real space (where the convolutions turn into products), instead of by explicit convolutions in reciprocal space. For small systems the latter is faster, but for large systems it is recommendable to use FFTs because of a better scaling with system size. A run with FFTs can be made to yield results identical to the explicit summation. This requires an FFT reciprocal cutoff radius of :math:`{2G_\mathrm{max}+g_\mathrm{max}}`, which can be achieved by setting FFT EXACT, but such a calculation is quite costly. It is, therefore, advisable to use smaller cutoff radii, thereby sacrificing a bit of accuracy but speeding up the computations a lot. If given without an argument, Spex will use 2/3 of the above *exact* cutoff. One can also specify a cutoff by a real-valued argument explicitly; good compromises between accuracy and speed are values between 6 and 8 :math:`\mathrm{Bohr}^{-1}`.

+----------+---------+------------------------------------------------------------+
| Examples | FFT 6   | Use FFTs with the cutoff 6 :math:`\mathrm{Bohr}^{-1}`.     |
+----------+---------+------------------------------------------------------------+
... ...
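The real-space trick described here is just the convolution theorem. A generic numpy sketch (not Spex code) showing that pointwise products after an FFT reproduce an explicit convolution, at O(n log n) instead of O(n^2) cost:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
a, b = rng.random(n), rng.random(n)

# Explicit circular convolution, O(n^2)
conv = np.array([sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)])

# Same result via FFT: transform, multiply pointwise, transform back, O(n log n)
via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

print(np.allclose(conv, via_fft))  # True
```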
... ... @@ -41,13 +41,14 @@

Each Spex run needs a job definition, which defines what Spex should do. Details of these jobs are explained in subsequent sections. The job definition must not be omitted but may be empty: JOB, in which case Spex will just read the wavefunctions and energies, perform some checks and some elemental calculations (e.g., Wannier interpolation), and stop. In principle, Spex supports multiple jobs such as JOB GW 1:(1-5) DIELEC 1:{0:1,0.01}. This feature is, however, seldom used and is not guaranteed to work correctly in all versions.

+----------+------------------------------------------+--------------------------------------------+
| Examples | JOB COSX 1:(1-5)                         | Perform COHSEX calculation.                |
+----------+------------------------------------------+--------------------------------------------+
|          | JOB COSX 1:(1-5) SCREEN 2:{0:1,0.01}     | Subsequently, calculate the screened       |
|          |                                          | interaction on a frequency mesh.           |
+----------+------------------------------------------+--------------------------------------------+
|          | JOB                                      | Just perform some checks and stop.         |
+----------+------------------------------------------+--------------------------------------------+

BZ
-----

... ...
@@ -68,6 +69,7 @@ This is an important keyword. It enables (a) continuing a calculation that has,

+----------+---------------+----------------------------------------------+
| Examples | RESTART       | Read/write restart file                      |
+----------+---------------+----------------------------------------------+
|          | RESTART 2     | Try to reuse data from standard output files |
+----------+---------------+----------------------------------------------+

... ... @@ -104,7 +106,7 @@

CHKMISM

MTACCUR
-------

(*) The LAPW method relies on a partitioning of space into MT spheres and the interstitial region. The basis functions are defined differently in the two regions: interstitial plane waves in the latter, and numerical functions in the spheres with radial parts :math:`{u(r)}`, :math:`{\dot{u}(r)=\partial u(r)/\partial\epsilon}`, :math:`{u^\mathrm{LO}(r)}` and spherical harmonics :math:`{Y_{lm}(\hat{\mathbf{r}})}`. The plane waves and the angular part of the MT functions can be converged straightforwardly with the reciprocal cutoff radius :math:`{g_\mathrm{max}}` and the maximal :math:`l` quantum number :math:`{l_\mathrm{max}}`, respectively, whereas the radial part of the MT functions is not converged as easily. The standard LAPW basis is restricted to the functions :math:`{u}` and :math:`{\dot{u}}`. Local orbitals :math:`{u^\mathrm{LO}}` can be used to extend the basis set, to enable the description of semicore and high-lying conduction states. The accuracy of the radial MT basis can be analyzed with the keyword MTACCUR e1 e2, which gives the MT representation error [Phys. Rev. B 83, 081101] in the energy range between e1 and e2. (If unspecified, e1 and e2 are chosen automatically.) The results are written to the output files spex.mt.t, where t is the atom type index, or spex.mt.s.t with the spin index s (=1 or 2) for spin-polarized calculations. The files contain sets of data for all :math:`l` quantum numbers, which can be plotted separately with gnuplot (e.g., plot "spex.mt.1" i 3 for :math:`{l=3}`).

+----------+------------------+------------------------------------------------------------+
| Examples | MTACCUR -1 2     | Calculate MT representation error between -1 and 2 Hartree |
+----------+------------------+------------------------------------------------------------+
... ...
@@ -116,19 +118,19 @@

BANDINFO
--------

(*) In some cases, it may be necessary to replace the energy eigenvalues, provided by the mean-field (DFT) code, by energies (e.g., GW quasiparticle energies) obtained in a previous Spex calculation, for example, to determine the GW Fermi energy or to perform energy-only self-consistent calculations. This can be achieved with the keyword ENERGY file, where file contains the new energies in eV. The format of file corresponds to the output of the spex.extr utility: spex.extr g spex.out > file. It must be made sure that file contains energy values for the whole irreducible Brillouin zone. Band energies not contained in file will be adjusted so that the energies are in ascending order (provided that there is at least one energy value for the particular k point).

+---------+-----------------------+---------------------------------------------------------+
| Example | ENERGY energy.inp     | Replace the mean-field energy eigenvalues by the        |
|         |                       | energies provided in the file energy.inp                |
+---------+-----------------------+---------------------------------------------------------+

DELTAEX
-------

(*) This keyword modifies the exchange splitting of a collinear magnetic system, i.e., it shifts spin-up and spin-down energies relative to each other so as to increase or decrease the exchange splitting. With DELTAEX x, the spin-up (spin-down) energies are lowered (elevated) by x/2. The parameter x can be used to enforce the Goldstone condition in spin-wave calculations [Phys. Rev. B 94, 064433 (2016)].

+---------+-------------------+---------------------------------------------------------------+
| Example | DELTAEX 0.2eV     | Increase the exchange splitting by 0.2eV                      |
|         |                   | (spin-up/down energies are decreased/increased by 0.1eV)      |
+---------+-------------------+---------------------------------------------------------------+

PLUSSOC
-------

... ... @@ -138,17 +140,18 @@

ITERATE
-------

(*) If specified, Spex only reads the LAPW basis set from the input data, provided by the mean-field (DFT) code, but performs the diagonalization of the Hamiltonian at the k points itself. This calculation effectively replaces the second run of the DFT code. In this sense, the name of the keyword is a bit misleading, as the calculation is non-iterative. The keyword ITERATE is mostly intended for testing and debugging. It is not available for executables compiled with -DLOAD (configured with --enable-load).

+----------+---------------------+-------------------------------------------------------------------+
| Examples | ITERATE NR          | Diagonalize a non-relativistic Hamiltonian                        |
+----------+---------------------+-------------------------------------------------------------------+
|          | ITERATE SR          | Use scalar-relativity                                             |
+----------+---------------------+-------------------------------------------------------------------+
|          | ITERATE FR          | Also include SOC                                                  |
+----------+---------------------+-------------------------------------------------------------------+
|          | ITERATE SR -1       | Diagonalize scalar-relativistic Hamiltonian and neglect           |
|          |                     | eigenvalues below -1 htr                                          |
+----------+---------------------+-------------------------------------------------------------------+
|          | ITERATE FR STOP     | Diagonalize relativistic Hamiltonian (including SOC), then stop   |
+----------+---------------------+-------------------------------------------------------------------+

Parallelized version
====================

... ...
... ... @@ -15,6 +15,8 @@

# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))

def setup(app):
    app.add_stylesheet('css/custom.css')

# -- Project information -----------------------------------------------------

... ...
... ... @@ -13,7 +13,7 @@

It needs input from a converged DFT calculation, which can be generated by Fleur. If you use SPEX for your research, please cite the following work:

.. highlights:: Christoph Friedrich, Stefan Blügel, Arno Schindlmayr, "Efficient implementation of the GW approximation within the all-electron FLAPW method", *Phys. Rev. B 81, 125102 (2010)*.

.. toctree::
   :maxdepth: 2

... ...
... ... @@ -57,17 +57,17 @@

The two GW values for the quasiparticle energy follow two common methods to approximately solve the quasiparticle equation

.. math::
   :label: qpeq

   \left\{-\frac{\nabla^2}{2}+v^\mathrm{ext}(\mathbf{r})+v^\mathrm{H}(\mathbf{r})\right\}\psi_{\mathbf{k}n}(\mathbf{r})+\int \Sigma^\mathrm{xc}(\mathbf{r},\mathbf{r}';E_{\mathbf{k}n})\psi_{\mathbf{k}n}(\mathbf{r}')\,d^3 r'=E_{\mathbf{k}n}\psi_{\mathbf{k}n}(\mathbf{r})

where :math:`{v^\mathrm{ext}}`, :math:`{v^\mathrm{H}}`, :math:`{\Sigma^\mathrm{xc}}`, :math:`{\psi_{\mathbf{k}n}}`, and :math:`{E_{\mathbf{k}n}}` are the external and Hartree potentials, the GW self-energy, and the quasiparticle wavefunction and energy, respectively. Taking the difference :math:`{\Sigma^\mathrm{xc}-v^\mathrm{xc}}` as a small perturbation, we can write the quasiparticle energy as a correction on the mean-field eigenvalue

.. math::
   :label: qppert

   E_{\mathbf{k}n}=\epsilon_{\mathbf{k}n}+\langle\phi_{\mathbf{k}n}|\Sigma^\mathrm{xc}(E_{\mathbf{k}n})-v^\mathrm{xc}|\phi_{\mathbf{k}n}\rangle\approx\epsilon_{\mathbf{k}n}+Z_{\mathbf{k}n}\langle\phi_{\mathbf{k}n}|\Sigma^\mathrm{xc}(\epsilon_{\mathbf{k}n})-v^\mathrm{xc}|\phi_{\mathbf{k}n}\rangle

with the single-particle wavefunction :math:`{\phi_{\mathbf{k}n}}` and the frequency-independent potential :math:`{v^{\mathrm{xc}}}`, which in the case of a KS solution would correspond to the local exchange-correlation potential; the nonlocal Hartree-Fock exchange potential and the *hermitianized* self-energy of QSGW (see below) are other examples. :math:`{Z_{\mathbf{k}n}=[1-\partial\Sigma^{\mathrm{xc}}/\partial\omega(\epsilon_{\mathbf{k}n})]^{-1}}` is called the renormalization factor. The two expressions on the right-hand side correspond to the "linearized" and "direct" (iterative) solutions given in the output. The direct solution takes into account the non-linearity of the quasiparticle equation and is thus considered the more accurate result. However, there is usually only little difference between the two values.
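The difference between the linearized and the direct solution can be reproduced with a scalar toy model. The one-pole self-energy below is purely hypothetical (it is not Spex's actual Σ), and the subtraction of v^xc is absorbed into the model:

```python
import math

# Toy model of the quasiparticle equation E = eps + Sigma(E); the
# (hypothetical) one-pole self-energy already includes the -vxc subtraction.
eps, a, c = 0.0, 0.5, 2.0
sigma = lambda w: a / (w - c)
dsigma = lambda w: -a / (w - c) ** 2   # analytic derivative of sigma

# "Linearized" solution: E = eps + Z*sigma(eps) with Z = [1 - dsigma(eps)]^{-1}
Z = 1.0 / (1.0 - dsigma(eps))
E_lin = eps + Z * sigma(eps)

# "Direct" (iterative) solution of E = eps + sigma(E), here by bisection
f = lambda E: E - eps - sigma(E)
lo, hi = -1.0, 0.5                     # bracket around the quasiparticle root
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
E_dir = 0.5 * (lo + hi)

print(E_lin, E_dir)  # -0.2222... vs the exact root 1 - sqrt(1.5) = -0.2247...
```

As in the text, the two values are close but not identical; the gap grows when the self-energy varies strongly near the eigenvalue.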
Up to this point, the job syntax for Hartree-Fock (JOB HF), PBE0 (JOB PBE0), screened exchange (JOB SX), COHSEX (JOB COSX), and *GT* (JOB GT and JOB GWT) calculations is identical to the one of GW calculations, e.g., JOB HF FULL X:(1-10). Except for the latter (GT), all of these methods are mean-field approaches, so one only gets one single-particle energy (instead of a *linearized* and a *direct* solution) for each band.

.. _spectral:

... ... @@ -77,11 +77,13 @@

SPECTRAL (SENERGY)
------------------

(*) It should be pointed out that the quasiparticle energies given in the output rely on the quasiparticle approximation. The more fundamental equation, as it were, is the Dyson equation

.. math::
   G(\omega)=G_0(\omega)+G_0(\omega)[\Sigma^{\mathrm{xc}}(\omega)-v^{\mathrm{xc}}]G(\omega)

which links the interacting Green function :math:`G` to the non-interacting KS one :math:`G_0` and which, in principle, requires the self-energy to be known on the complete :math:`{\omega}` axis. The spectral function measured in photoelectron spectroscopy is directly related to the Green function by

.. math::
   A(\mathbf{k},\omega)=\pi^{-1}\,\text{sgn}(\omega-\epsilon_\mathrm{F})\,\,\text{tr}\left\{\text{Im}[\omega I-H^\mathrm{KS}(\mathbf{k})-\Sigma^{\mathrm{xc}}(\mathbf{k},\omega)]^{-1}\right\}\,,

where the trace (tr) is over the eigenstates and :math:`{\epsilon_\mathrm{F}}` is the Fermi energy.
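For a single level the trace collapses and the spectral function reduces to A(ω) = π⁻¹ |Im [ω − ε − Σ(ω)]⁻¹|. A small numpy sketch with a hypothetical one-pole self-energy (illustration only, not Spex's Σ):

```python
import numpy as np

# Hypothetical scalar one-pole self-energy with broadening eta (model units)
eps, a, c, eta = 0.0, 0.5, 2.0, 0.05
w = np.arange(-1.0, 1.0, 1e-3)             # frequency grid
sigma = a / (w - c + 1j * eta)

# Scalar spectral function A = pi^{-1} |Im [w - eps - Sigma(w)]^{-1}|
A = np.abs((1.0 / (w - eps - sigma)).imag) / np.pi

w_peak = w[np.argmax(A)]                   # quasiparticle peak position
print(w_peak)                              # near 1 - sqrt(1.5) ~ -0.2247
```

The peak sits where the real part of ω − ε − Σ(ω) vanishes, i.e. at the quasiparticle energy, with a width set by Im Σ; this is why bounding the imaginary part from below (see SPECTRAL) keeps sharp peaks visible on a finite mesh.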
The spectral function can be evaluated using the line

... ... @@ -91,25 +93,25 @@

in the section SENERGY of the input file. The frequency mesh is given after the keyword SPECTRAL in the section SENERGY, in the examples below from -10 eV to 1 eV in steps of 0.01 eV. If there is no explicit mesh, Spex chooses one automatically. The spectral function is then written to the file "spectral", one block of data for each k point given in the job definition. There is another optional parameter given at the end of the line (second example below), which can be used to bound the imaginary part of the self-energy and, thus, the quasiparticle peak widths from below by this value (unset if not given). This can be helpful in the case of very sharp quasiparticle peaks that are otherwise hard to catch with the frequency mesh.

+----------+---------------------------------------+-----------------------------------------------------------------+
| Examples | SPECTRAL {-10eV:1eV,0.01eV}           | Write spectral function :math:`{\text{Im}G(\omega)}` on the     |
|          |                                       | specified frequency mesh to the file "spectral".                |
+----------+---------------------------------------+-----------------------------------------------------------------+
|          | SPECTRAL {-10eV:1eV,0.01eV} 0.002     | Bound imaginary part from below by 0.002, preventing            |
|          |                                       | peak widths from becoming too small to be plotted.              |
+----------+---------------------------------------+-----------------------------------------------------------------+

Although GW calculations can be readily performed with the default settings, the user should be familiar to some degree with the details of the computation and how he/she can influence each step of the program run. Also note that the default settings might work well for one physical system but be unsuitable for another. The GW self-energy

.. math::
   :label: selfene

   \Sigma^{\mathrm{xc}}(\mathbf{r},\mathbf{r}';\omega)=\frac{i}{2\pi}\int_{-\infty}^\infty G(\mathbf{r},\mathbf{r}';\omega+\omega')\,W(\mathbf{r},\mathbf{r}';\omega')e^{i\eta\omega'}\,d\omega'

can be understood as a scattering potential that contains the exact exchange potential and correlation effects through the inclusion of *W*, the dynamically screened interaction, which incorporates the screening of the many-electron system into an effective dynamical potential, obtained from the dielectric function :math:`{\varepsilon}` through

.. math::
   W(\mathbf{r},\mathbf{r}';\omega)=\int \varepsilon^{-1}(\mathbf{r},\mathbf{r}'';\omega)\, v(\mathbf{r}'',\mathbf{r}')\,d^3 r''\,.

This integral equation turns into a matrix equation

... ...
This integral equation turns into a matrix equation if the quantities :math:{W}, :math:{\varepsilon}, and :math:{v} are represented in the :ref:mbp, which thus has to be converged properly. The dielectric function, in turn, describes the change of the internal potential through screening processes and is related to the polarization matrix by :math:{\varepsilon(\mathbf{k},\omega)=1-P(\mathbf{k},\omega)v(\mathbf{k})} in matrix notation. The polarization function is one of the main quantities in the GW scheme. Its evaluation is described in the section :ref:polar.

The self-energy can be written as the sum of two terms, the first of which is the exact non-local exchange potential of Hartree-Fock theory; the remainder can be interpreted as a correlation self-energy and has the mathematical form of the [[#Eq:Selfene|self-energy]] with :math:{W(\omega)} replaced by :math:{W^\mathrm{c}(\omega)=W(\omega)-v}. The frequency integration is carried out analytically for the exchange part (by summing over the residues). The correlation part is more complex to evaluate because of the frequency dependence of the interaction. (Fast electrons experience a different potential than slow electrons.)

There are several ways to represent the self-energy as a function of frequency.
The default method is analytic continuation, in which the screened interaction and the self-energy are evaluated on a mesh of purely imaginary frequencies. The self-energy is then analytically continued to the complete complex frequency plane (:math:{\Sigma^\mathrm{xc}(i\omega)\rightarrow\Sigma^\mathrm{xc}(z)}, :math:{\omega\in\cal{R}}, :math:{z\in\cal{C}}). This has several advantages over the usage of the real frequency axis. First, :math:{W(\omega)} is a hermitian (or real-symmetric) matrix if :math:{\omega} is purely imaginary. Second, W and :math:{\Sigma^{\mathrm{xc}}} show a lot of structure along the real axis, whereas they are much smoother on the imaginary axis, thereby making it easier to sample and interpolate these functions. Third, after the analytic continuation the self-energy is known, in principle, as an analytic function on the complete complex plane. And fourth, the method requires only a few parameters and is, therefore, easy to handle. The main disadvantage lies in the badly controlled extrapolation of the Pade approximants, which can sometimes produce "outlier values", with a potential adverse effect on the accuracy and reliability of the method. Therefore, there is a more sophisticated but also more complex method called contour integration, in which the frequency convolution is performed explicitly, yielding the self-energy directly for selected frequencies on the real axis. In this method, we also mostly integrate along the imaginary frequency axis.

MESH (SENERGY)
--------------

All methods employ a mesh of purely imaginary frequencies, which extends from 0 to some maximal :math:{i\omega_\mathrm{max}}. The number of mesh points :math:{N} and the maximal frequency must be provided as parameters, for example MESH 10 10.0, which is the default. The mesh is defined by :math:{i\omega_n=i\omega_\mathrm{max}f_n/f_N} with :math:{f_n=\{(N-1)/[0.9(n-1)]-1\}^{-1}}, :math:{n=1,2,...}. It is fine for small :math:{\omega}, where the quantities have a lot of structure, and coarse for large :math:{\omega}. Sometimes it is helpful to make the mesh even finer for small :math:{\omega}. This is possible by specifying, for example, 10+3, which would yield three, two, and one extra equidistant frequencies in the ranges [:math:{\omega_1,\omega_2}], [:math:{\omega_2,\omega_3}], and [:math:{\omega_3,\omega_4}], respectively. If the second argument is defined negative (:math:{-\omega_\mathrm{max}}), then :math:{f_n=\{N/(n-1)-1\}^{-1}}. The latter definition is rarely used. One can also employ a user-defined mesh provided in a file (one frequency per line, comments #... are allowed).
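The mesh formula above can be written down directly (a sketch of the stated definition, not taken from the Spex source):

```python
import numpy as np

def imag_freq_mesh(N, w_max):
    """Mesh w_n = w_max * f_n/f_N with f_n = {(N-1)/[0.9(n-1)] - 1}^(-1).

    For n = 1 the braced expression diverges, so f_1 = 0 and the mesh
    starts exactly at zero; f_N = 9, so the last point is exactly w_max.
    """
    n = np.arange(1, N + 1, dtype=float)
    with np.errstate(divide="ignore"):
        f = 1.0 / ((N - 1) / (0.9 * (n - 1)) - 1.0)
    f[0] = 0.0                      # limit n -> 1
    return w_max * f / f[-1]        # frequencies on the imaginary axis (htr)

mesh = imag_freq_mesh(10, 10.0)     # the default MESH 10 10.0
```

The spacing grows monotonically with n, which is exactly the "fine for small, coarse for large" behavior described above.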
+----------+--------------------+--------------------------------------------------------------+
| Examples | MESH 12 15.0       | Use a mesh containing twelve frequencies for [0,i15] htr     |
+----------+--------------------+--------------------------------------------------------------+

CONTINUE (SENERGY)
------------------

The keyword CONTINUE in the section SENERGY chooses the analytic continuation method. It can have optional parameters. If given without parameters (or if not specified at all), Pade approximants are used. An integer number, such as CONTINUE 2, lets Spex perform a fit to an n-pole function (here, n=2) with the Newton method. The latter approach is somewhat obsolete by now and recommended only for test calculations. The argument can have additional flags +c*. Any combination is possible. They are explained in the examples.

+----------+-----------------+-------------------------------------------------------------------------------------+
| Examples | CONTINUE        | Use Pade approximants (Default).                                                    |
+----------+-----------------+-------------------------------------------------------------------------------------+
|          | CONTINUE 2      | Fit to the two-pole function :math:{a_1/(\omega-b_1)+a_2/(\omega-b_2)}              |
+----------+-----------------+-------------------------------------------------------------------------------------+
|          | CONTINUE 2+     | Include a constant in the fit function :math:{a_1/(\omega-b_1)+a_2/(\omega-b_2)+c}  |
+----------+-----------------+-------------------------------------------------------------------------------------+
|          | CONTINUE 2c     | | Take constraints (continuity of value and gradient at :math:{\omega=0})           |
|          |                 | | into account when fitting                                                         |
+----------+-----------------+-------------------------------------------------------------------------------------+
|          | CONTINUE 2*     | | Allow parameters :math:{b_i} with positive imaginary parts                        |
|          |                 | | (should be negative) to contribute. (Default with Pade method.)                   |
+----------+-----------------+-------------------------------------------------------------------------------------+

The second method is contour integration, in which the frequency integration is performed explicitly, however not along the real frequency axis but on a deformed integration contour that avoids the real frequency axis as well as possible. This integration contour starts from :math:{-\infty}, describes an infinite quarter circle to :math:{-i\infty}, then runs along the imaginary frequency axis to :math:{i\infty}, and finishes, again with an infinite quarter circle, at :math:{\infty}. The two quarter circles do not contribute to the integral (because the integrand behaves as :math:{\propto \omega^{-2}}).
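Analytic continuation from the imaginary to the real axis is commonly done with Pade approximants in continued-fraction (Thiele) form. The following is a generic sketch of that technique, not Spex's actual routine, tested on a model two-pole function sampled on the imaginary axis:

```python
import numpy as np

def thiele_coeffs(z, f):
    """Continued-fraction (Thiele) coefficients interpolating f at nodes z."""
    n = len(z)
    g = np.zeros((n, n), dtype=complex)
    g[0] = f
    for p in range(1, n):
        g[p, p:] = (g[p - 1, p - 1] - g[p - 1, p:]) / (
            (z[p:] - z[p - 1]) * g[p - 1, p:])
    return g.diagonal().copy()          # a_p = g_p(z_p)

def pade_eval(z_nodes, a, z):
    """Evaluate a0/(1 + a1(z-z0)/(1 + a2(z-z1)/(1 + ...))) at z."""
    acc = 0.0
    for p in range(len(a) - 1, 0, -1):
        acc = a[p] * (z - z_nodes[p - 1]) / (1.0 + acc)
    return a[0] / (1.0 + acc)

# Model "self-energy" with two poles on the real axis, sampled on the
# imaginary axis and then continued back to real frequencies.
model = lambda z: 1.0 / (z + 1.0) + 0.5 / (z + 2.0)
z_im = 1j * np.array([0.5, 1.0, 1.5, 2.0])
a = thiele_coeffs(z_im, model(z_im))
```

Since the model is itself a rational function, the four-point approximant reproduces it essentially exactly on the real axis; for a true self-energy the extrapolation is only approximate, which is the "badly controlled" aspect mentioned above.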
Furthermore, depending on the frequency argument :math:{\omega} of the self-energy, one has to add a few residues coming from the poles of the Green function in the interval [:math:{0,\omega-\epsilon_\mathrm{F}}] if :math:{\omega>\epsilon_\mathrm{F}} and [:math:{\epsilon_\mathrm{F}-\omega,0}] otherwise, which requires the correlation screened interaction :math:{W^\mathrm{c}(\omega)} to be evaluated on this interval of the real axis. As a consequence, the calculations are more demanding in terms of computational complexity and cost (time and memory). Also, contour integration requires additional input parameters and is therefore somewhat more difficult to apply. However, the results are more accurate. In particular, they are not affected by the "ill-definedness" of the analytic continuation.

CONTOUR (SENERGY)
-----------------

The corresponding keyword is called CONTOUR and belongs to the section SENERGY. Obviously, CONTOUR and CONTINUE must not be given at the same time. The keyword CONTOUR expects two arguments. The first defines the frequencies :math:{\omega}, for which the self-energy :math:{\Sigma^\mathrm{xc}(\omega)} is to be evaluated. At least two frequencies are needed to approximate the self-energy as a linear function in :math:{\omega} and, thus, to calculate the *linearized* solution of the quasiparticle equation (see above). For this, a single value suffices (example 0.01), with which the self-energy for a state :math:{\mathbf{k}n} is evaluated at two frequencies (:math:{\epsilon_{\mathbf{k}n}-0.01} and :math:{\epsilon_{\mathbf{k}n}+0.01}). The more accurate *direct* solution is only available if we specify a range of frequencies for the self-energy instead of a single number. This is possible by an argument such as {-0.1:0.15,0.01}. Here, the range of values is relative to :math:{\epsilon_{\mathbf{k}n}}. Note that the range is flipped (to {-0.15:0.1,0.01} in the example) for occupied states :math:{\mathbf{k}n} to reflect the fact that occupied and unoccupied states tend to shift in opposite directions by the renormalization. One can also specify an absolute frequency mesh by [...], i.e., relative to the Fermi energy. This is mandatory for FULL calculations. It is sometimes a bit inconvenient to determine suitable values for the upper and lower bound of [...]. Therefore, Spex allows the usage of wildcards for one of the boundaries or both (see below). The second argument to CONTOUR gives the increment of the (equidistant) real-frequency mesh for :math:{W(\omega')}. The lower and upper boundaries of this mesh are predetermined already by the first argument. As a third method, Spex also allows omitting the second argument altogether. Then, it uses a *hybrid* method where the screened interaction is analytically continued from the imaginary axis (where it has to be known for the integral along this axis, see above) to the real axis, thereby obviating the need of calculating and storing W on a real-frequency mesh. The disadvantage is that the badly controlled Pade extrapolation introduces an element of randomness (also see keyword SMOOTH below). Our experience so far is that the *hybrid* method is in-between the two other methods in terms of both computational cost and numerical accuracy.

+----------+------------------------------------+---------------------------------------------------------------------------------------------+
| Examples | CONTOUR 0.01 0.005                 | | Use contour integration to obtain the two                                                 |
|          |                                    | | values :math:{\Sigma^\mathrm{xc}(\epsilon\pm 0.01)}, giving :math:{\Sigma^\mathrm{xc}}    |
|          |                                    | | as a linear function.                                                                     |
+----------+------------------------------------+---------------------------------------------------------------------------------------------+
|          | CONTOUR {-0.1:0.15,0.01} 0.005     | | Calculate :math:{\Sigma^\mathrm{xc}(\omega)} on a frequency mesh relative to              |
|          |                                    | | the KS energy :math:{\epsilon}.                                                           |
+----------+------------------------------------+---------------------------------------------------------------------------------------------+
|          | CONTOUR [{*:*,0.01}] 0.005         | | Use an absolute frequency mesh                                                            |
|          |                                    | | (relative to the Fermi energy) instead.                                                   |
|          |                                    | | Wildcards are used for the upper and lower bounds.                                        |
+----------+------------------------------------+---------------------------------------------------------------------------------------------+
|          | CONTOUR [{*:*,0.01}]               | Use hybrid method.                                                                          |
+----------+------------------------------------+---------------------------------------------------------------------------------------------+

FREQINT (SENERGY)
-----------------

(*) Independently of whether :math:{\Sigma^\mathrm{xc}(i\omega_n)} (CONTINUE) or :math:{\Sigma^\mathrm{xc}(\omega_n)} (CONTOUR) is evaluated, an important step in the calculation of the self-energy is to perform the frequency convolution :math:{\int_{-\infty}^\infty G(z+i\omega')W(i\omega')d\omega'} with :math:{z=i\omega} or :math:{z=\omega}. For this frequency integration, we interpolate W and then perform the convolution with the Green function analytically. The keyword FREQINT determines how the interpolation should be done. It can take two values, SPLINE and PADE, for spline [after the transformation :math:{\omega\rightarrow \omega/(1+\omega)}] and Pade interpolation, respectively. The default is SPLINE. In the case of GT calculations, there is a similar frequency integration with the T matrix replacing W. There, the default is PADE.

ALIGNVXC (SENERGY)
------------------

+----------+--------------------+---------------------------------------------------------------------------------+
| Examples | ALIGNVXC           | | Align exchange-correlation potential in such a way that the                   |
|          |                    | | ionization potential remains unchanged by the quasiparticle                   |
|          |                    | | correction.                                                                   |
+----------+--------------------+---------------------------------------------------------------------------------+
|          | ALIGNVXC 0.2eV     | | Apply a constant positive shift of 0.2 eV to the                              |
|          |                    | | exchange-correlation potential.                                               |
+----------+--------------------+---------------------------------------------------------------------------------+

RESTART
-------

Spex can reuse data from a previous GW run that has finished successfully, crashed, or has been stopped by the user. A GW calculation consists mainly of a loop over the irreducible k points. For each k point, Spex (a) calculates the matrix W(k) and (b) updates the self-energy matrix (or expectation values) :math:{\Sigma^{xc}} with the contribution of the current k point (and its symmetry-equivalent k points). After completion of step (b), the current self-energy is always written to the (binary) files "spex.sigx" and "spex.sigc". If the RESTART option is specified (independently of its argument), Spex also writes the matrix W(k) (in addition to some other data) to the (HDF5) file "spex.cor" unless it is already present in that file. If it is present, the corresponding data is read instead of being calculated. In this way, the keyword RESTART enables reusage of the calculated W(k) from a previous run. The matrix W(k) does not depend on the k points and band indices defined after JOB. So, these parameters can be changed before a run with RESTART, in which the W data is then reused. For example, band structures can be calculated efficiently in this way (see below). Especially for long calculations, it is recommended to use the RESTART option.

Spex can also restart a calculation using self-energy data contained in "spex.sigx" and "spex.sigc". To this end, an argument is added: RESTART 2. Spex then starts with the k point, at which the calculation was interrupted before.
In contrast to "spex.cor", the files "spex.sigx" and "spex.sigc" do depend on the job definition, which must therefore not be changed before a run with RESTART 2. However, there are a few parameters (see below) that may be changed before a rerun with RESTART 2. These concern details of solving the quasiparticle equation, which follows after completion of the self-energy calculation. The following logical table gives an overview.

+---------------+--------------+-----------------+
|               | spex.cor     | spex.sigx/c     |
+===============+==============+=================+
| --            | --           | write           |
+---------------+--------------+-----------------+
| RESTART       | read-write   | write           |
+---------------+--------------+-----------------+
| RESTART 2     | read-write   | read-write      |
+---------------+--------------+-----------------+

The different rules for "spex.cor" and "spex.sigx/c" are motivated by the facts that (a) the file "spex.cor" is much bigger than "spex.sigx/c" (so, writing of "spex.cor" to harddisc should not be the default), and (b) the files "spex.sigx/c" include the updated self-energy (requiring more computation than for W, thus representing "more valuable" data).

One can also go beyond the perturbative solution of the [[#Eq:qpeq|quasiparticle equation]].

[[#QSGW]] If the job definition contains FULL and [[#IBZlabel|IBZ]], the full GW self-energy matrix is evaluated for the whole IBZ, which enables self-consistent calculations in the framework of the quasiparticle self-consistent GW (QSGW) approach.
In this approach, one creates a mean-field system from the GW self-energy whose single-particle energies are as close as possible to the quasiparticle energies. This mean-field system is subsequently solved to self-consistency in a DFT code. The resulting solution can then serve as a starting point for a new one-shot GW calculation, which constitutes the second iteration, and so on until the quasiparticle energies are converged. The construction of the mean-field system is, to some extent, arbitrary. We use the following definition, which is slightly modified from the original work [PRL 93, 126406]:

.. math:: \displaystyle A_{\mathbf{k}nn}=Z_{\mathbf{k}n}^{-1} \langle \phi_{\mathbf{k}n} | \Sigma^\mathrm{xc}(\epsilon_{\mathbf{k}n}) | \phi_{\mathbf{k}n} \rangle

for diagonal elements and

.. math:: \displaystyle A_{\mathbf{k}nn'}=\langle \phi_{\mathbf{k}n} | \Sigma^\mathrm{xc}(\epsilon_{\mathbf{k}n})+\Sigma^\mathrm{xc}(\epsilon_{\mathbf{k}n'}) | \phi_{\mathbf{k}n'} \rangle

for off-diagonal elements. The *hermitianized* QSGW operator is then obtained from :math:{\Sigma^\mathrm{xc,H}=(A+A^\dagger)/2}.
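In matrix form, this construction can be sketched as follows (the model self-energy, KS energies, and Z factors are invented for illustration; only the diagonal/off-diagonal rules and the hermitianization mirror the definition above):

```python
import numpy as np

rng = np.random.default_rng(0)
nb = 4                                    # number of bands (model)
eps = np.array([-0.3, -0.1, 0.2, 0.5])    # model KS energies (htr)
Z = np.full(nb, 0.8)                      # model renormalization factors

# Model frequency-dependent self-energy matrix (non-hermitian in general).
M0 = rng.normal(size=(nb, nb)) + 1j * rng.normal(size=(nb, nb))

def sigma(w):
    return M0 / (w - 1.0 + 0.1j)          # one-pole model, matrix-valued

A = np.empty((nb, nb), dtype=complex)
for n in range(nb):
    for m in range(nb):
        if n == m:
            A[n, n] = sigma(eps[n])[n, n] / Z[n]                  # diagonal rule
        else:
            A[n, m] = sigma(eps[n])[n, m] + sigma(eps[m])[n, m]   # off-diagonal rule

sigma_h = (A + A.conj().T) / 2            # hermitianized QSGW operator
```

The hermitianization guarantees real eigenvalues even though A itself is not hermitian.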
The difference to the original definition is the inclusion of the renormalization factor to better reproduce the GW quasiparticle energies. The *hermitianized* matrix, or rather the difference :math:{\Sigma^{\mathrm{xc,H}}-v^\mathrm{xc}}, is written to the file "spex.qsgw", which is later read by the DFT code. In Fleur, the following steps are required:

* rm fleur.qsgw - remove any previous version of the *hermitianized* matrix.
* rm broyd* - remove Broyden information about previous iterations because this information is inconsistent with the new Hamiltonian (the SCF calculation does not converge otherwise).
* Set gw=3 in the Fleur input file.
* Run Fleur.

Polarization function
=====================

The polarization function gives the linear change in the electronic density of a non-interacting system with respect to changes in the effective potential. It is, thus, a fundamental quantity in the calculation of screening properties of a many-electron system. For example, the dielectric function, instrumental in the calculation of spectroscopic quantities (e.g. EELS) and the screened interaction needed in GW, is related to the polarization matrix through :math:{\varepsilon(\mathbf{k},\omega)=1-P(\mathbf{k},\omega)v(\mathbf{k})}, here given in matrix notation.
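The matrix chain from P over :math:{\varepsilon} to W can be written in a few lines; here random hermitian matrices stand in for the actual basis representations (purely illustrative, not Spex code):

```python
import numpy as np

rng = np.random.default_rng(1)
nb = 6   # size of the (model) product basis

def herm(n):
    """Random hermitian matrix as a stand-in for a basis representation."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

v = herm(nb) + 10.0 * np.eye(nb)       # stand-in Coulomb matrix (positive definite)
B = herm(nb)
P = -0.1 * B @ B                       # stand-in polarization (negative semidefinite)

eps_m = np.eye(nb) - P @ v             # dielectric matrix: eps = 1 - P v
W = np.linalg.solve(eps_m, v)          # screened interaction: W = eps^{-1} v
```

With P negative semidefinite and v positive definite, eps_m is guaranteed invertible; for vanishing P the screened interaction reduces to the bare v.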
The corresponding explicit formula for matrix elements of P in the mixed product basis is

.. math::
   :label: eqP

   \scriptstyle P_{\mu\nu}(\mathbf{k},\omega)=2\sum_{\mathbf{q}}^{\mathrm{BZ}}\sum_{n}^{\mathrm{occ}}\sum_{n'}^{\mathrm{unocc}}\langle M_{\mathbf{k}\mu} \phi_{\mathbf{q}n} | \phi_{\mathbf{k+q}n'} \rangle\langle \phi_{\mathbf{k+q}n'} | \phi_{\mathbf{q}n} M_{\mathbf{k}\nu} \rangle \cdot\left(\frac{1}{\omega+\epsilon_{\mathbf{q}n}-\epsilon_{\mathbf{q}+\mathbf{k}n'}+i\eta}-\frac{1}{\omega-\epsilon_{\mathbf{q}n}+\epsilon_{\mathbf{q}+\mathbf{k}n'}-i\eta}\right) =\int_{-\infty}^\infty \frac{S_{\mu\nu}(\mathbf{k},\omega')}{\omega-\omega'+i\eta\mathrm{sgn}(\omega')}d\omega'\,.

We have implicitly defined the spectral function S in the last equation, an explicit expression for which is basically given by the formula in the middle with the :math:{1/(\omega...)} replaced by :math:{\delta(\omega...)}. (Technically, the :math:{M_{\mathbf{k}\mu}(\mathbf{r})} form the eigenbasis of the Coulomb matrix, so they are linear combinations of the mixed product basis functions.)

The most important keyword in the calculation of the polarization function is HILBERT.
+----------+------------------------+--------------------------------------------------------------------------------+
|          | HILBERT 0.01 1.05      | | Use Hilbert mesh with a first step size of :math:{\omega_2-\omega_1=} 0.01   |
|          |                        | | htr and a stretching factor of :math:{a=1.05}.                               |
|          |                        | | (First argument is real-valued.)                                             |
+----------+------------------------+--------------------------------------------------------------------------------+

MULTDIFF (SUSCEP)
-----------------

(*) In the limit :math:{k\rightarrow 0}, the projections in the numerator of [[#Eq:P|P]] approach linearly to zero. However, when calculating the dielectric function, one has to multiply with :math:{\sqrt{4\pi}/k} (square root of Coulomb matrix) in this limit. So, the first order of :math:{\langle e^{i\mathbf{kr}} \phi_{\mathbf{q}n} | \phi_{\mathbf{k+q}n'} \rangle} (corresponding to :math:{\mu=1}) in :math:{k} becomes relevant. Using k·p perturbation theory, one can show that the linear term is proportional to :math:{(\epsilon_{\mathbf{q}n'}-\epsilon_{\mathbf{q}n})^{-1}}. This can lead to numerical problems if the two energies are very close to each other. Therefore, when treating the Γ point (k=0), Spex multiplies the linear term with this energy difference, resulting in smooth and non-divergent values, and takes the energy difference into account by replacing :math:{S(\omega)\rightarrow S(\omega)/\omega} in the [[#Eq:P|frequency integration]], thereby avoiding the numerical difficulties. (As an alternative, the energy differences can also be incorporated into the integration weights, which is arguably even more stable numerically; see option INT below.) By default, Spex does that only for k=0. The behavior can be changed with the keyword MULTDIFF in the section SUSCEP.

+-------------------+--------------------+------------------------------------------------------------------------------+
| Examples          | MULTDIFF OFF       | Never separate (divergent) energy difference.                                |
+-------------------+--------------------+------------------------------------------------------------------------------+
|                   | MULTDIFF INT       | | Use default behavior (separate for k=0, do not for k≠0)                    |
|                   |                    | | but stick energy difference into integration weights.                      |
+-------------------+--------------------+------------------------------------------------------------------------------+
|                   | MULTDIFF INTON     | | Always separate energy difference by sticking them into                    |
|                   |                    | | integration weights.                                                       |
+-------------------+--------------------+------------------------------------------------------------------------------+

PLASMA (SUSCEP)
---------------
https://worldwidescience.org/topicpages/h/hadron+collider+project.html | #### Sample records for hadron collider project
1. CERN's Large Hadron Collider project
Science.gov (United States)
Fearnley, Tom A.
1997-03-01
The paper gives a brief overview of CERN's Large Hadron Collider (LHC) project. After an outline of the physics motivation, we describe the LHC machine, interaction rates, experimental challenges, and some important physics channels to be studied. Finally we discuss the four experiments planned at the LHC: ATLAS, CMS, ALICE and LHC-B.
2. CERN's Large Hadron Collider project
International Nuclear Information System (INIS)
Fearnley, Tom A.
1997-01-01
The paper gives a brief overview of CERN's Large Hadron Collider (LHC) project. After an outline of the physics motivation, we describe the LHC machine, interaction rates, experimental challenges, and some important physics channels to be studied. Finally we discuss the four experiments planned at the LHC: ATLAS, CMS, ALICE and LHC-B
3. The large hadron collider project
International Nuclear Information System (INIS)
Maiani, L.
1999-01-01
Knowledge of the fundamental constituents of matter has greatly advanced over the last decades. The standard theory of fundamental interactions presents us with a theoretically sound picture, which describes with great accuracy known physical phenomena on most diverse energy and distance scales. These range from 10⁻¹⁶ cm, inside the nucleons, up to large-scale astrophysical bodies, including the early Universe at some nanoseconds after the Big Bang and temperatures of the order of 10² GeV. The picture is not yet completed, however, as we lack the observation of the Higgs boson, predicted in the 100-500 GeV range - a particle associated with the generation of particle masses and with the quantum fluctuations in the primordial Universe. In addition, the standard theory is expected to undergo a change of regime in the 10³ GeV region, with the appearance of new families of particles, most likely associated with the onset of a new symmetry (supersymmetry). In 1994, the CERN Council approved the construction of the large hadron collider (LHC), a proton-proton collider of a new design to be installed in the existing LEP tunnel, with an energy of 7 TeV per beam and an extremely large luminosity of ~10³⁴ cm⁻²s⁻¹. Construction was started in 1996, with the additional support of the US, Japan, Russia, Canada and other European countries, making the LHC a really global project, the first one in particle physics. After a short review of the physics scenario, I report on the present status of the LHC construction. Special attention is given to technological problems such as the realization of the super-conducting dipoles, following an extensive R and D program with European industries. The construction of the large LHC detectors has required a vast R and D program by a large international community, to overcome the problems posed by the complexity of the collisions and by the large luminosity of the machine. (orig.)
4. Naming Conventions for the Large Hadron Collider Project
CERN Document Server
Faugeras, Paul E
1997-01-01
This report gives the procedures for defining standard abbreviations for the various machine components of the Large Hadron Collider (LHC) Project, as well as for the surface buildings and the underground Civil Engineering works of the LHC. The contents of this report have been approved by the LHC Project Leader and are published in the form of a Project Report in order to allow their immediate implementation. They will be incorporated later in the Quality Assurance Plan of the LHC Project which is under preparation.
International Nuclear Information System (INIS)
Month, M.; Weng, W.T.
1983-01-01
The objective is to investigate whether existing technology might be extrapolated to provide the conceptual framework for a major hadron-hadron collider facility for high energy physics experimentation for the remainder of this century. One contribution to this large effort is to formalize the methods and mathematical tools necessary. In this report, the main purpose is to introduce the student to basic design procedures. From these follow the fundamental characteristics of the facility: its performance capability, its size, and the nature and operating requirements on the accelerator components, and with this knowledge, we can determine the technology and resources needed to build the new facility
6. In the loop Large Hadron Collider project - UK engineering firms
CERN Document Server
Wilks, N
2004-01-01
This paper presents the latest measures being taken to boost the level of UK engineering firms' involvement in research at CERN (the European Organization for Nuclear Research), including its 27 km circular Large Hadron Collider (LHC) project. Virtually all of the components on this complex project have had to be custom-made, usually in the form of a collaboration. It is as part of these collaborations that some UK firms have proved they can shine. However, despite their proven capabilities, the financial return continues to be less than the government's funding. Each of the 20 CERN member states provides funds in proportion to its GDP, and the UK is the second largest financial contributor. UK firms become price-competitive where a contract calls for a degree of customisation or product development, project management and tight quality control. Development of the Particle Physics Grid, for dissemination and analysis of data from the LHC, continues to provide major supply opportunities for UK manufacturers.
7. Large hadron collider (LHC) project quality assurance plan
Energy Technology Data Exchange (ETDEWEB)
Gullo, Lisa; Karpenko, Victor; Robinson, Kem; Turner, William; Wong, Otis
2002-09-30
The LHC Quality Assurance Plan is a set of operating principles, requirements, and practices used to support Berkeley Lab's participation in the Large Hadron Collider Project. The LHC/QAP is intended to achieve reliable, safe, and quality performance in the LHC project activities. The LHC/QAP is also designed to fulfill the following objectives: (1) The LHC/QAP is Berkeley Lab's QA program document that describes the elements necessary to integrate quality assurance, safety management, and conduct of operations into the Berkeley Lab's portion of the LHC operations. (2) The LHC/QAP provides the framework for Berkeley Lab LHC Project administrators, managers, supervisors, and staff to plan, manage, perform, and assess their Laboratory work. (3) The LHC/QAP is the compliance document that conforms to the requirements of the Laboratory's Work Smart Standards for quality assurance (DOE O 414.1, 10 CFR 830.120), facility operations (DOE O 5480.19), and safety management (DOE P 450.4).
8. Large hadron collider (LHC) project quality assurance plan
International Nuclear Information System (INIS)
Gullo, Lisa; Karpenko, Victor; Robinson, Kem; Turner, William; Wong, Otis
2002-01-01
The LHC Quality Assurance Plan is a set of operating principles, requirements, and practices used to support Berkeley Lab's participation in the Large Hadron Collider Project. The LHC/QAP is intended to achieve reliable, safe, and quality performance in the LHC project activities. The LHC/QAP is also designed to fulfill the following objectives: (1) The LHC/QAP is Berkeley Lab's QA program document that describes the elements necessary to integrate quality assurance, safety management, and conduct of operations into the Berkeley Lab's portion of the LHC operations. (2) The LHC/QAP provides the framework for Berkeley Lab LHC Project administrators, managers, supervisors, and staff to plan, manage, perform, and assess their Laboratory work. (3) The LHC/QAP is the compliance document that conforms to the requirements of the Laboratory's Work Smart Standards for quality assurance (DOE O 414.1, 10 CFR 830.120), facility operations (DOE O 5480.19), and safety management (DOE P 450.4)
CERN Document Server
Keil, Eberhard
1998-01-01
Plans for future hadron colliders are presented, and accelerator physics and engineering aspects common to these machines are discussed. The Tevatron is presented first, starting with a summary of the achievements in Run IB which finished in 1995, followed by performance predictions for Run II which will start in 1999, and the TeV33 project, aiming for a peak luminosity $L \sim 1~(\mathrm{nb\,s})^{-1}$. The next machine is the Large Hadron Collider LHC at CERN, planned to come into operation in 2005. The last set of machines are Very Large Hadron Colliders which might be constructed after the LHC. Three variants are presented: two machines with a beam energy of 50 TeV, and dipole fields of 1.8 and 12.6 T in the arcs, and a machine with 100 TeV and 12 T. The discussion of accelerator physics aspects includes the beam-beam effect, bunch spacing and parasitic collisions, and the crossing angle. The discussion of the engineering aspects covers synchrotron radiation and stored energy in the beams, the power in the debris of the p...
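The TeV33 luminosity in the abstract above is quoted in inverse nanobarns per second; converting it to the more common cm⁻²s⁻¹ is a one-line unit exercise. A minimal sketch (illustrative only, not taken from the paper):

```python
# Convert a luminosity quoted in (nb*s)^-1 to cm^-2 s^-1.
# 1 barn = 1e-24 cm^2, so 1 nanobarn = 1e-33 cm^2.
NB_TO_CM2 = 1e-33

def lumi_cm2_per_s(lumi_per_nb_per_s):
    """Luminosity in cm^-2 s^-1 from a value given in (nb*s)^-1."""
    return lumi_per_nb_per_s / NB_TO_CM2

# TeV33 target: L ~ 1 (nb*s)^-1, i.e. ~1e33 cm^-2 s^-1
print(f"L = {lumi_cm2_per_s(1.0):.2e} cm^-2 s^-1")
```

This is the conversion behind comparing the TeV33 goal with the LHC design figure of 10³⁴ cm⁻²s⁻¹ quoted elsewhere in these records.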
Energy Technology Data Exchange (ETDEWEB)
Pondrom, L.
1991-10-03
An introduction to the techniques of analysis of hadron collider events is presented in the context of the quark-parton model. Production and decay of W and Z intermediate vector bosons are used as examples. The structure of the Electroweak theory is outlined. Three simple FORTRAN programs are introduced, to illustrate Monte Carlo calculation techniques. 25 refs.
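The Monte Carlo calculation technique mentioned in the abstract can be illustrated with a short sketch in Python rather than FORTRAN (the integrand below is a toy, steeply falling distribution chosen for illustration; it is not one of the original three programs):

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=42):
    """Plain Monte Carlo estimate of the integral of f over [a, b]:
    average f at uniform random points and scale by the interval length."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Toy example: integrate a "structure-function-like" falling shape
# f(x) = 6*(1 - x)^5 on [0, 1]; the exact integral is 1.
est = mc_integrate(lambda x: 6.0 * (1.0 - x) ** 5, 0.0, 1.0)
print(f"MC estimate: {est:.3f}  (exact: 1.000)")
```

The statistical error shrinks as 1/√n, which is why such sampling methods scale well to the multi-dimensional phase-space integrals of collider event generation.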
CERN Multimedia
2007-01-01
"In the spring 2008, the Large Hadron Collider (LHC) machine at CERN (the European Particle Physics laboratory) will be switched on for the first time. The huge machine is housed in a circular tunnel, 27 km long, excavated deep under the French-Swiss border near Geneva." (1,5 page)
International Nuclear Information System (INIS)
Pondrom, L.
1991-01-01
An introduction to the techniques of analysis of hadron collider events is presented in the context of the quark-parton model. Production and decay of W and Z intermediate vector bosons are used as examples. The structure of the Electroweak theory is outlined. Three simple FORTRAN programs are introduced, to illustrate Monte Carlo calculation techniques. 25 refs
CERN Document Server
Lavender, Gemma
2018-01-01
What is the universe made of? How did it start? This Manual tells the story of how physicists are seeking answers to these questions using the worlds largest particle smasher the Large Hadron Collider at the CERN laboratory on the Franco-Swiss border. Beginning with the first tentative steps taken to build the machine, the digestible text, supported by color photographs of the hardware involved, along with annotated schematic diagrams of the physics experiments, covers the particle accelerators greatest discoveries from both the perspective of the writer and the scientists who work there. The Large Hadron Collider Manual is a full, comprehensive guide to the most famous, record-breaking physics experiment in the world, which continues to capture the public imagination as it provides new insight into the fundamental laws of nature.
CERN Document Server
Juettner Fernandes, Bonnie
2014-01-01
What really happened during the Big Bang? Why did matter form? Why do particles have mass? To answer these questions, scientists and engineers have worked together to build the largest and most powerful particle accelerator in the world: the Large Hadron Collider. Includes glossary, websites, and bibliography for further reading. Perfect for STEM connections. Aligns to the Common Core State Standards for Language Arts. Teachers' Notes available online.
15. Hadron collider physics at UCR
International Nuclear Information System (INIS)
Kernan, A.; Shen, B.C.
1997-01-01
This paper describes the research work in high energy physics by the group at the University of California, Riverside. Work has been divided between hadron collider physics and e⁺-e⁻ collider physics, and theoretical work. The hadron effort has been heavily involved in the startup activities of the D-Zero detector, commissioning and ongoing redesign. The lepton collider work has included work on TPC/2γ at PEP and the OPAL detector at LEP, as well as efforts on hadron machines
16. B factory with hadron colliders
International Nuclear Information System (INIS)
Lockyer, N.S.
1990-01-01
The opportunities to study B physics in a hadron collider are discussed. Emphasis is placed on the technological developments necessary for these experiments. The R and D program of the Bottom Collider Detector group is reviewed. (author)
17. Hadron collider physics 2005. Proceedings
International Nuclear Information System (INIS)
Campanelli, M.; Clark, A.; Wu, X.
2006-01-01
The Hadron Collider Physics Symposia (HCP) are a new series of conferences that follow the merger of the Hadron Collider Conferences with the LHC Symposia series, with the goal of maximizing the shared experience of the Tevatron and LHC communities. This book gathers the proceedings of the first symposium, HCP2005, and reviews the state of the art in the key physics directions of experimental hadron collider research:
- QCD physics
- precision electroweak physics
- c-, b-, and t-quark physics
- physics beyond the Standard Model
- heavy ion physics
The present volume will serve as a reference for everyone working in the field of accelerator-based high-energy physics. (orig.)
18. Physics at Future Hadron Colliders
CERN Document Server
Baur, U.; Parsons, J.; Albrow, M.; Denisov, D.; Han, T.; Kotwal, A.; Olness, F.; Qian, J.; Belyaev, S.; Bosman, M.; Brooijmans, G.; Gaines, I.; Godfrey, S.; Hansen, J.B.; Hauser, J.; Heintz, U.; Hinchliffe, I.; Kao, C.; Landsberg, G.; Maltoni, F.; Oleari, C.; Pagliarone, C.; Paige, F.; Plehn, T.; Rainwater, D.; Reina, L.; Rizzo, T.; Su, S.; Tait, T.; Wackeroth, D.; Vataga, E.; Zeppenfeld, D.
2001-01-01
We discuss the physics opportunities and detector challenges at future hadron colliders. As guidelines for energies and luminosities we use the proposed luminosity and/or energy upgrade of the LHC (SLHC), and the Fermilab design of a Very Large Hadron Collider (VLHC). We illustrate the physics capabilities of future hadron colliders for a variety of new physics scenarios (supersymmetry, strong electroweak symmetry breaking, new gauge bosons, compositeness and extra dimensions). We also investigate the prospects of doing precision Higgs physics studies at such a machine, and list selected Standard Model physics rates.
19. Hadron collider physics at UCR
Energy Technology Data Exchange (ETDEWEB)
Kernan, A.; Shen, B.C.
1997-07-01
This paper describes the research work in high energy physics by the group at the University of California, Riverside. Work has been divided between hadron collider physics and e⁺-e⁻ collider physics, and theoretical work. The hadron effort has been heavily involved in the startup activities of the D-Zero detector, commissioning and ongoing redesign. The lepton collider work has included work on TPC/2γ at PEP and the OPAL detector at LEP, as well as efforts on hadron machines.
20. The Large Hadron Collider project: organizational and financial matters (of physics at the terascale)
NARCIS (Netherlands)
Engelen, J.
2012-01-01
In this paper, I present a view of organizational and financial matters relevant for the successful construction and operation of the experimental set-ups at the Large Hadron Collider of CERN, the European Laboratory for Particle Physics in Geneva. Construction of these experiments was particularly
1. Large Hadron Collider nears completion
CERN Multimedia
2008-01-01
Installation of the final component of the Large Hadron Collider particle accelerator is under way along the Franco-Swiss border near Geneva, Switzerland. When completed this summer, the LHC will be the world's largest and most complex scientific instrument.
Science.gov (United States)
Kotchetkov, Dmitri
2017-01-01
Rapid growth of the high energy physics program in the USSR during the 1960s-1970s culminated with a decision to build the Accelerating and Storage Complex (UNK) to carry out fixed target and colliding beam experiments. The UNK was to have three rings. One ring was to be built with conventional magnets to accelerate protons up to the energy of 600 GeV. The other two rings were to be made from superconducting magnets, each ring was supposed to accelerate protons up to the energy of 3 TeV. The accelerating rings were to be placed in an underground tunnel with a circumference of 21 km. As a 3 × 3 TeV collider, the UNK would make proton-proton collisions with a luminosity of 4 × 10³⁴ cm⁻²s⁻¹. Institute for High Energy Physics in Protvino was the project's leading institution and the site of the UNK. Accelerator and detector research and development studies were commenced in the second half of the 1970s. The State Committee for Utilization of Atomic Energy of the USSR approved the project in 1980, and the construction of the UNK started in 1983. Political turmoil in the Soviet Union during the late 1980s and early 1990s resulted in disintegration of the USSR and subsequent collapse of the Russian economy. As a result of drastic reduction of funding for the UNK, in 1993 the project was restructured to be a 600 GeV fixed target accelerator only. While the ring tunnel and proton injection line were completed by 1995, and 70% of all magnets and associated accelerator equipment were fabricated, lack of Russian federal funding for high energy physics halted the project at the end of the 1990s.
3. Heavy leptons at hadron colliders
International Nuclear Information System (INIS)
Ohnemus, J.E.
1987-01-01
The recent advent of high energy hadron colliders capable of producing weak bosons has opened new vistas for particle physics research, including the search for a possible fourth generation heavy charged lepton, which is the primary topic of the thesis. Signals for identifying a new heavy lepton have been calculated and compared to Standard Model backgrounds. Results are presented for signals at the CERN collider, the Fermilab collider, and the proposed Superconducting Supercollider
4. Design optimization of 600 A-13 kA current leads for the Large Hadron Collider project at CERN
CERN Document Server
Spiller, D M; Al-Mosawl, M K; Friend, C M; Thacker, P; Ballarino, A
2001-01-01
The requirements of the Large Hadron Collider project at CERN for high-temperature superconducting (HTS) current leads have been widely publicized. CERN require hybrid current leads of resistive and HTS materials with current ratings of 600 A, 6 kA and 13 kA. BICC General Superconductors, in collaboration with the University of Southampton, have developed and manufactured prototype current leads for the Large Hadron Collider project. The resistive section consists of a phosphorus de-oxidized copper conductor and heat exchanger and the HTS section is constructed from BICC General's (Pb, Bi)2223 tapes with a reduced thermal conductivity Ag alloy sheath. We present the results of the materials optimization studies for the resistive and the HTS sections. Some results of the acceptance tests at CERN are discussed. (9 refs).
CERN Document Server
Evans, Lyndon R
1992-01-01
The three colliders operated to date have taught us a great deal about the behaviour of both bunched and debunched beams in storage rings. The main luminosity limitations are now well enough understood that most of them can be strongly attenuated or eliminated by appropriate design precautions. Experience with the beam-beam interaction in both the SPS and the Tevatron allows us to predict the performance of the new generation of colliders with some degree of confidence. One of the main challenges that the accelerator physicist faces is the problem of the dynamic aperture limitations due to the lower field quality expected, imposed by economic and other constraints.
6. Physics at hadron colliders: Experimental view
International Nuclear Information System (INIS)
Siegrist, J.L.
1987-08-01
The physics of the hadron-hadron collider experiment is considered from an experimental point of view. The problems encountered in determination of how well the standard model describes collider results are discussed. 53 refs., 58 figs
CERN Multimedia
't Hooft, Gerardus; Llewellyn Smith, Christopher Hubert; Brüning, Oliver Sim; Collier, Paul; Stapnes, Steinar; Ellis, Jonathan Richard; Braun-Munzinger, Peter; Stachel, Johanna; Lederman, Leon Max
2007-01-01
Several articles about the LHC: The Making of the standard model; High-energy colliders and the rise of the standard model; How the LHC came to be; Building a behemoth; Detector challenges at the LHC; Beyond the standard model with the LHC; The quest for the quark-gluon plasma; The God particle et al. (42 pages)
8. Hard QCD at hadron colliders
Energy Technology Data Exchange (ETDEWEB)
Moch, S
2008-02-15
We review the status of QCD at hadron colliders with emphasis on precision predictions and the latest theoretical developments for cross section calculations to higher orders. We include an overview of our current information on parton distributions and discuss various Standard Model reactions such as W±/Z-boson, Higgs boson or top quark production. (orig.)
9. Hard QCD at hadron colliders
International Nuclear Information System (INIS)
Moch, S.
2008-02-01
We review the status of QCD at hadron colliders with emphasis on precision predictions and the latest theoretical developments for cross section calculations to higher orders. We include an overview of our current information on parton distributions and discuss various Standard Model reactions such as W±/Z-boson, Higgs boson or top quark production. (orig.)
10. Top production at hadron colliders
New results on top quark production are presented from four hadron collider experiments: CDF and D0 at the Tevatron, and ATLAS and CMS at the LHC. Cross-sections for single top and top pair production are discussed, as well as results on the top–antitop production asymmetry and searches for new physics including ...
11. Electroweak results from hadron colliders
International Nuclear Information System (INIS)
Demarteau, Marcel
1997-01-01
A review of recent electroweak results from hadron colliders is given. Properties of the W± and Z⁰ gauge bosons using final states containing electrons and muons based on large integrated luminosities are presented. The emphasis is placed on the measurement of the mass of the W boson and the measurement of trilinear gauge boson couplings
12. Recent results from hadron colliders
International Nuclear Information System (INIS)
Frisch, H.J.
1990-01-01
This is a summary of some of the many recent results from the CERN and Fermilab colliders, presented for an audience of nuclear, medium-energy, and elementary particle physicists. The topics are jets and QCD at very high energies, precision measurements of electroweak parameters, the remarkably heavy top quark, and new results on the detection of the large flux of B mesons produced at these machines. A summary and some comments on the bright prospects for the future of hadron colliders conclude the talk. 39 refs., 44 figs., 3 tabs
13. Flavorful leptoquarks at hadron colliders
Science.gov (United States)
Hiller, Gudrun; Loose, Dennis; Nišandžić, Ivan
2018-04-01
B-physics data and flavor symmetries suggest that leptoquarks can have masses as low as a few O(TeV), predominantly decay to third generation quarks, and highlight pp → bμμ signatures from single production and pp → bbμμ from pair production. Abandoning flavor symmetries could allow for inverted quark hierarchies and cause sizable pp → jμμ and jjμμ cross sections, induced by second generation couplings. Final states with leptons other than muons, including lepton flavor violating (LFV) ones, can also arise. The corresponding couplings can also be probed by precision studies of the B → (X_s, K*, φ)ee distribution and LFV searches in B decays. We demonstrate sensitivity in single leptoquark production for the Large Hadron Collider (LHC) and extrapolate to the high luminosity LHC. Exploration of the bulk of the parameter space requires a hadron collider beyond the reach of the LHC, with b-identification capabilities.
14. The Tevatron Hadron Collider: A short history
International Nuclear Information System (INIS)
Tollestrup, A.V.
1994-11-01
The subject of this presentation was intended to cover the history of hadron colliders. However this broad topic is probably better left to historians. I will cover a much smaller portion of this subject and specialize my subject to the history of the Tevatron. As we will see, the Tevatron project is tightly entwined with the progress in collider technology. It occupies a unique place among accelerators in that it was the first to make use of superconducting magnets and indeed the basic design now forms a template for all machines using this technology. It was spawned in an incredibly productive era when new ideas were being generated almost monthly and it has matured into our highest energy collider complete with two large detectors that provide the major facility in the US for probing high-p_T physics for the coming decade
15. Very large hadron collider (VLHC)
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-09-01
A VLHC informal study group started to come together at Fermilab in the fall of 1995, and at the 1996 Snowmass Study the parameters of this machine took form. The VLHC as now conceived would be a 100 TeV hadron collider. It would use the Fermilab Main Injector (now nearing completion) to inject protons at 150 GeV into a new 3 TeV Booster and then into a superconducting pp collider ring producing 100 TeV c.m. interactions. A luminosity of ~10³⁴ cm⁻²s⁻¹ is planned. Our plans were presented to the Subpanel on the Planning for the Future of US High-Energy Physics (the successor to the Drell committee), and in February 1998 their report stated: "The Subpanel recommends an expanded program of R&D on cost reduction strategies, enabling technologies, and accelerator physics issues for a VLHC. These efforts should be coordinated across laboratory and university groups with the aim of identifying design concepts for an economically and technically viable facility." The coordination has been started with the inclusion of physicists from Brookhaven National Laboratory (BNL), Lawrence Berkeley National Laboratory (LBNL), and Cornell University. Clearly, this collaboration must be expanded internationally as well as nationally. The phrase "economically and technically viable facility" presents the real challenge.
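For scale, a luminosity of 10³⁴ cm⁻²s⁻¹ translates into many pp interactions per bunch crossing. A back-of-the-envelope estimate (the 100 mb inelastic cross section and 40 MHz crossing rate are assumed, LHC-like illustrative figures, not VLHC design numbers):

```python
MB_TO_CM2 = 1e-27  # 1 millibarn in cm^2

def interactions_per_crossing(lumi_cm2s, sigma_cm2, crossing_rate_hz):
    """Mean number of inelastic interactions per bunch crossing: L * sigma / f."""
    return lumi_cm2s * sigma_cm2 / crossing_rate_hz

mu = interactions_per_crossing(1e34, 100 * MB_TO_CM2, 40e6)
print(f"mean interactions per crossing ≈ {mu:.0f}")  # ≈ 25
```

This event "pile-up" is one of the main detector challenges at high-luminosity hadron colliders discussed throughout these records.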
16. Energy Extraction in the CERN Large Hadron Collider a Project Overview
CERN Document Server
Dahlerup-Petersen, K; Kazmine, B; Medvedko, A S; Sytchev, V V; Vasilev, L B
2001-01-01
In case of a resistive transition (quench), fast and reliable extraction of the magnetic energy, stored in the superconducting coils of the electromagnets of a particle collider, represents an important part of its magnet protection system. In general, the quench detectors, the quench heaters and the cold by-pass diodes across each magnet, together with the energy extraction facilities provide the required protection of the quenching superconductors against damage due to local energy dissipation. In CERN's LHC machine the energy stored in each of its eight superconducting dipole chains exceeds 1300 MJ. Following an opening of the extraction switches this energy will be absorbed in large extraction resistors located in the underground collider tunnel or adjacent galleries, during the exponential current decay. Also the sixteen, 13 kA quadrupole chains (QF, QD) and more than one hundred and fifty, 600 A circuits of the corrector magnets will be equipped with extraction systems. The extraction switch-gear is bas...
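The numbers in the abstract fix the scale of the extraction problem: a stored energy above 1300 MJ at the 13 kA operating current implies a chain inductance of roughly 15 H via E = ½LI², and the extraction resistance then sets the exponential decay time τ = L/R. A minimal sketch (the 0.15 Ω resistance is an illustrative assumption, not the LHC value):

```python
import math

def chain_inductance(E_joules, I_amps):
    """Inductance implied by the stored magnetic energy E = 0.5 * L * I^2."""
    return 2.0 * E_joules / I_amps ** 2

def current_after(t, I0, L, R):
    """Exponential decay I(t) = I0 * exp(-R*t/L) once the extraction switch opens."""
    return I0 * math.exp(-R * t / L)

E_chain = 1.3e9   # J, stored energy of one dipole chain (">1300 MJ")
I_nom = 13e3      # A, nominal current of the 13 kA circuits
L = chain_inductance(E_chain, I_nom)   # ~15.4 H
R_dump = 0.15     # ohm, assumed total extraction resistance (illustrative)
tau = L / R_dump  # decay time constant, ~100 s
print(f"L ≈ {L:.1f} H, tau = L/R ≈ {tau:.0f} s")
print(f"current one tau after extraction: {current_after(tau, I_nom, L, R_dump):.0f} A")
```

A decay constant of this order means the extraction resistors must absorb gigajoule-scale energy over minutes, which is why they are large units located in the tunnel or adjacent galleries.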
17. Large hadron collider workshop. Proceedings. Vol. 2
International Nuclear Information System (INIS)
Jarlskog, G.; Rein, D.
1990-01-01
The aim of the LHC workshop at Aachen was to discuss the 'discovery potential' of a high-luminosity hadron collider (the Large Hadron Collider) and to define the requirements of the detectors. Of central interest was whether a Higgs particle with mass below 1 TeV could be seen using detectors potentially available within a few years from now. Other topics included supersymmetry, heavy quarks, excited gauge bosons, and exotica in proton-proton collisions, as well as physics to be observed in electron-proton and heavy-ion collisions. A large part of the workshop was devoted to the discussion of instrumental and detector concepts, including simulation, signal processing, data acquisition, tracking, calorimetry, lepton identification and radiation hardness. The workshop began with parallel sessions of working groups on physics and instrumentation and continued, in the second half, with plenary talks giving overviews of the LHC project and the SSC, RHIC, and HERA programmes, summaries of the working groups, presentations from industry, and conclusions. Vol.1 of these proceedings contains the papers presented at the plenary sessions, Vol.2 the individual contributions to the physics sessions, and Vol.3 those to the instrumentation sessions. (orig.)
18. Large hadron collider workshop. Proceedings. Vol. 3
International Nuclear Information System (INIS)
Jarlskog, G.; Rein, D.
1990-01-01
The aim of the LHC workshop at Aachen was to discuss the 'discovery potential' of a high-luminosity hadron collider (the Large Hadron Collider) and to define the requirements of the detectors. Of central interest was whether a Higgs particle with mass below 1 TeV could be seen using detectors potentially available within a few years from now. Other topics included supersymmetry, heavy quarks, excited gauge bosons, and exotica in proton-proton collisions, as well as physics to be observed in electron-proton and heavy-ion collisions. A large part of the workshop was devoted to the discussion of instrumental and detector concepts, including simulation, signal processing, data acquisition, tracking, calorimetry, lepton identification and radiation hardness. The workshop began with parallel sessions of working groups on physics and instrumentation and continued, in the second half, with plenary talks giving overviews of the LHC project and the SSC, RHIC, and HERA programmes, summaries of the working groups, presentations from industry, and conclusions. Vol. 1 of these proceedings contains the papers presented at the plenary sessions, Vol. 2 the individual contributions to the physics sessions, and Vol. 3 those to the instrumentation sessions. (orig.)
19. Large hadron collider workshop. Proceedings. Vol. 1
International Nuclear Information System (INIS)
Jarlskog, G.; Rein, D.
1990-01-01
The aim of the LHC workshop at Aachen was to discuss the 'discovery potential' of a high-luminosity hadron collider (the Large Hadron Collider) and to define the requirements of the detectors. Of central interest was whether a Higgs particle with mass below 1 TeV could be seen using detectors potentially available within a few years from now. Other topics included supersymmetry, heavy quarks, excited gauge bosons, and exotica in proton-proton collisions, as well as physics to be observed in electron-proton and heavy-ion collisions. A large part of the workshop was devoted to the discussion of instrumental and detector concepts, including simulation, signal processing, data acquisition, tracking, calorimetry, lepton identification and radiation hardness. The workshop began with parallel sessions of working groups on physics and instrumentation and continued, in the second half, with plenary talks giving overviews of the LHC project and the SSC, RHIC, and HERA programmes, summaries of the working groups, presentations from industry, and conclusions. Vol. 1 of these proceedings contains the papers presented at the plenary sessions, Vol. 2 the individual contributions to the physics sessions, and Vol. 3 those to the instrumentation sessions. (orig.)
20. Physics at Hadronic Colliders (4/4)
CERN Multimedia
CERN. Geneva
2008-01-01
Hadron colliders are often called "discovery machines" since they produce the highest-mass particles and thus often give the best chance to discover new high-mass particles. Currently they are particularly topical since the Large Hadron Collider will start operating later this year, increasing the centre-of-mass energy by a factor of seven compared to the current highest-energy collider, the Tevatron. I will review the benefits and challenges of hadron colliders, present some of the current physics results from the Tevatron, and give an outlook to the future results we are hoping for at the LHC. Prerequisite knowledge: Introduction to Particle Physics (F. Close), Detectors (W. Riegler, at least mostly) and The Standard Model (A. Pich)
1. Physics at Hadronic Colliders (1/4)
CERN Multimedia
CERN. Geneva
2008-01-01
Hadron colliders are often called "discovery machines" since they produce the highest-mass particles and thus often give the best chance to discover new high-mass particles. Currently they are particularly topical since the Large Hadron Collider will start operating later this year, increasing the centre-of-mass energy by a factor of seven compared to the current highest-energy collider, the Tevatron. I will review the benefits and challenges of hadron colliders, present some of the current physics results from the Tevatron, and give an outlook to the future results we are hoping for at the LHC. Prerequisite knowledge: Introduction to Particle Physics (F. Close), Detectors (W. Riegler, at least mostly) and The Standard Model (A. Pich)
2. Physics at Hadronic Colliders (2/4)
CERN Multimedia
CERN. Geneva
2008-01-01
Hadron colliders are often called "discovery machines" since they produce the highest-mass particles and thus often give the best chance to discover new high-mass particles. Currently they are particularly topical since the Large Hadron Collider will start operating later this year, increasing the centre-of-mass energy by a factor of seven compared to the current highest-energy collider, the Tevatron. I will review the benefits and challenges of hadron colliders, present some of the current physics results from the Tevatron, and give an outlook to the future results we are hoping for at the LHC. Prerequisite knowledge: Introduction to Particle Physics (F. Close), Detectors (W. Riegler, at least mostly) and The Standard Model (A. Pich)
3. Physics at Hadronic Colliders (3/4)
CERN Multimedia
CERN. Geneva
2008-01-01
Hadron colliders are often called "discovery machines" since they produce the highest-mass particles and thus often give the best chance to discover new high-mass particles. Currently they are particularly topical since the Large Hadron Collider will start operating later this year, increasing the centre-of-mass energy by a factor of seven compared to the current highest-energy collider, the Tevatron. I will review the benefits and challenges of hadron colliders, present some of the current physics results from the Tevatron, and give an outlook to the future results we are hoping for at the LHC. Prerequisite knowledge: Introduction to Particle Physics (F. Close), Detectors (W. Riegler, at least mostly) and The Standard Model (A. Pich)
4. Top quark studies at hadron colliders
Energy Technology Data Exchange (ETDEWEB)
Sinervo, P.K. [Univ. of Toronto, Ontario (Canada)
1997-01-01
The techniques used to study top quarks at hadron colliders are presented. The analyses that discovered the top quark are described, with emphasis on the techniques used to tag b quark jets in candidate events. The most recent measurements of top quark properties by the CDF and D0 Collaborations are reviewed, including the top quark cross section, mass, branching fractions, and production properties. Future top quark studies at hadron colliders are discussed, and predictions for event yields and uncertainties in the measurements of top quark properties are presented.
5. Top quark studies at hadron colliders
International Nuclear Information System (INIS)
Sinervo, P.K.
1997-01-01
The techniques used to study top quarks at hadron colliders are presented. The analyses that discovered the top quark are described, with emphasis on the techniques used to tag b quark jets in candidate events. The most recent measurements of top quark properties by the CDF and D0 Collaborations are reviewed, including the top quark cross section, mass, branching fractions, and production properties. Future top quark studies at hadron colliders are discussed, and predictions for event yields and uncertainties in the measurements of top quark properties are presented
6. Top quark studies at hadron colliders
International Nuclear Information System (INIS)
Sinervo, P.K.
1996-08-01
The techniques used to study top quarks at hadron colliders are presented. The analyses that discovered the top quark are described, with emphasis on the techniques used to tag b quark jets in candidate events. The most recent measurements of top quark properties by the CDF and D0 collaborations are reviewed, including the top quark cross section, mass, branching fractions and production properties. Future top quark studies at hadron colliders are discussed, and predictions for event yields and uncertainties in the measurements of top quark properties are presented
7. Excited quark production at hadron colliders
International Nuclear Information System (INIS)
Baur, U.; Hinchliffe, I.; Zeppenfeld, D.
1987-06-01
Composite models generally predict the existence of excited quark and lepton states. We consider the production and experimental signatures of excited quarks Q* of spin and isospin 1/2 at hadron colliders and estimate the background for those channels which are most promising for Q* identification. Multi-TeV pp-colliders will give access to such particles with masses up to several TeV
8. Physics possibilities of lepton and hadron colliders
International Nuclear Information System (INIS)
Peccei, R.D.
1985-05-01
After a brief introduction to lepton and hadron colliders presently being planned, I give some examples of the nice standard physics which is expected to be seen in them. The bulk of the discussion, however, is centered on signals for new physics. Higgs searches at the new colliders are discussed, as well as signatures and prospects for detecting effects of supersymmetry, compositeness and dynamical symmetry breakdown. (orig.)
9. Black Holes and the Large Hadron Collider
Science.gov (United States)
Roy, Arunava
2011-01-01
The European Center for Nuclear Research or CERN's Large Hadron Collider (LHC) has caught our attention partly due to the film "Angels and Demons." In the movie, an antimatter bomb attack on the Vatican is foiled by the protagonist. Perhaps just as controversial is the formation of mini black holes (BHs). Recently, the American Physical Society…
10. Higgs physics at the Large Hadron Collider
Higgs boson; Large Hadron Collider; electroweak symmetry; spin and CP of the Higgs boson ... I shall then give a short description of the pre-LHC constraints on the Higgs mass and the theoretical predictions for the LHC along with a discussion of the current experimental results, ending with prospects in the near future at ...
11. Experiments at future hadron colliders
International Nuclear Information System (INIS)
Paige, F.E.
1991-01-01
This report summarizes signatures and backgrounds for processes in high-energy hadronic collisions, particularly at the SSC. It includes both signatures for new particles -- t quarks, Higgs bosons, new Z' bosons, supersymmetric particles, and technicolor particles -- and other experiments which might be done. It is based on the 1990 Snowmass Workshop and on work contained in the Expressions of Interest submitted to the SSC. 46 refs., 19 figs., 1 tab
12. ERL-BASED LEPTON-HADRON COLLIDERS: eRHIC AND LHeC
CERN Document Server
Zimmermann, F
2013-01-01
Two hadron-ERL colliders are being proposed. The Large Hadron electron Collider (LHeC) plans to collide the high-energy protons and heavy ions in the Large Hadron Collider (LHC) at CERN with 60-GeV polarized electrons or positrons. The baseline scheme for this facility adds to the LHC a separate recirculating superconducting (SC) lepton linac with energy recovery, delivering a lepton current of 6.4 mA. The electron-hadron collider project eRHIC aims to collide polarized (and unpolarized) electrons with a current of 50 (220) mA and energies in the range 5–30 GeV with a variety of hadron beams (heavy ions as well as polarized light ions) stored in the existing Relativistic Heavy Ion Collider (RHIC) at BNL. The eRHIC electron beam will be generated in an energy recovery linac (ERL) installed inside the RHIC tunnel.
13. Status of the Large Hadron Collider (LHC)
International Nuclear Information System (INIS)
Evans, Lyndon R.
2004-01-01
The Large Hadron Collider (LHC), due to be commissioned in 2007, will provide particle physics with the first laboratory tool to access the energy frontier above 1 TeV. In order to achieve this, protons must be accelerated and stored at 7 TeV, colliding with an unprecedented luminosity of 10³⁴ cm⁻²s⁻¹. The 8.3 Tesla guide field is obtained using conventional NbTi technology cooled to below the lambda point of helium. The machine is now well into its installation phase, with first beam injection foreseen for spring 2007. A brief status report is given and future prospects are discussed. (orig.)
14. 1st Large Hadron Collider Physics Conference
CERN Document Server
Juste, A; Martínez, M; Riu, I; Sorin, V
2013-01-01
The conference is the result of merging two series of international conferences, "Physics at Large Hadron Collider" (PLHC2012) and "Hadron Collider Physics Symposium" (HCP2012). With a program devoted to topics such as the Standard Model and Beyond, the Higgs Boson, Supersymmetry, Beauty and Heavy Ion Physics, the conference aims at providing a lively forum for discussion between experimenters and theorists of the latest results and of new ideas. LHCP 2013 will be hosted by IFAE (Institut de Fisica d'Altes Energies) in Barcelona (Spain), and will take place from May 13 to 18, 2013. The venue will be the Hotel Catalonia Plaza, Plaza España (Barcelona). More information will be posted soon. For questions, please contact [email protected].
15. String Resonances at Hadron Colliders
CERN Document Server
Anchordoqui, Luis A; Dai, De-Chang; Feng, Wan-Zhe; Goldberg, Haim; Huang, Xing; Lust, Dieter; Stojkovic, Dejan; Taylor, Tomasz R
2014-01-01
[Abridged] We consider extensions of the standard model based on open strings ending on D-branes. Assuming that the fundamental string mass scale M_s is in the TeV range and that the theory is weakly coupled, we discuss possible signals of string physics at the upcoming HL-LHC run (3000 fb⁻¹) with √s = 14 TeV, and at potential future pp colliders, HE-LHC and VLHC, operating at √s = 33 and 100 TeV, respectively. In such D-brane constructions, the dominant contributions to full-fledged string amplitudes for all the common QCD parton subprocesses leading to dijets and γ + jet are completely independent of the details of compactification, and can be evaluated in a parameter-free manner. We make use of these amplitudes evaluated near the first (n=1) and second (n=2) resonant poles to determine the discovery potential for Regge excitations of the quark, the gluon, and the color singlet living on the QCD stack. We show that for string scales as large as 7.1 TeV (6.1 TeV), lowest massive Regge exc...
16. Top Quark Production at Hadron Colliders
Energy Technology Data Exchange (ETDEWEB)
Phaf, Lukas Kaj [Univ. of Amsterdam (Netherlands)
2004-03-01
This thesis describes both theoretical and experimental research into top quark production. The theoretical part contains a calculation of the single-top quark production cross section at hadron colliders, at Next to Leading Order (NLO) accuracy. The experimental part describes a measurement of the top quark pair production cross section in proton-antiproton collisions, at a center of mass energy of 1.96 TeV.
17. Large Hadron Collider commissioning and first operation.
Science.gov (United States)
Myers, S
2012-02-28
A history of the commissioning and the very successful early operation of the Large Hadron Collider (LHC) is described. The accident that interrupted the first commissioning, its repair and the enhanced protection system put in place are fully described. The LHC beam commissioning and operational performance are reviewed for the period from 2010 to mid-2011. Preliminary plans for operation and future upgrades for the LHC are given for the short and medium term.
18. Really large hadron collider working group summary
International Nuclear Information System (INIS)
Dugan, G.; Limon, P.; Syphers, M.
1996-01-01
A summary is presented of preliminary studies of three 100 TeV center-of-mass hadron colliders made with magnets of different field strengths, 1.8 T, 9.5 T and 12.6 T. Descriptions of the machines, and some of the major and most challenging subsystems, are presented, along with parameter lists and the major issues for future study
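The field strengths quoted in this record map onto ring size through the standard bending relation p [GeV/c] ≈ 0.3 · B [T] · ρ [m]. A back-of-envelope sketch of that trade-off (illustrative only; it ignores the dipole packing fraction and other details of the actual designs):

```python
import math

def bending_radius_m(beam_momentum_gev, field_tesla):
    """Bending radius for a proton beam: p [GeV/c] = 0.3 * B [T] * rho [m]."""
    return beam_momentum_gev / (0.3 * field_tesla)

# A 100 TeV centre-of-mass collider stores 50 TeV (50000 GeV) per beam.
for b in (1.8, 9.5, 12.6):
    rho = bending_radius_m(50000.0, b)
    # circumference if the ring were dipoles only (real rings are larger)
    print(f"B = {b:4.1f} T -> rho = {rho / 1000:6.1f} km, "
          f"2*pi*rho = {2 * math.pi * rho / 1000:6.0f} km")
```

The order-of-magnitude spread (roughly 13 km bending radius at 12.6 T versus roughly 93 km at 1.8 T) is why the three options in the summary correspond to very different machines.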
19. Ntuples for NLO Events at Hadron Colliders
CERN Document Server
Bern, Z.; Febres Cordero, F.; Höche, S.; Ita, H.; Kosower, D.A.; Maitre, D.
2014-01-01
We present an event-file format for the dissemination of next-to-leading-order (NLO) predictions for QCD processes at hadron colliders. The files contain all information required to compute generic jet-based infrared-safe observables at fixed order (without showering or hadronization), and to recompute observables with different factorization and renormalization scales. The files also make it possible to evaluate cross sections and distributions with different parton distribution functions. This in turn makes it possible to estimate uncertainties in NLO predictions of a wide variety of observables without recomputing the short-distance matrix elements. The event files allow a user to choose among a wide range of commonly-used jet algorithms and jet-size parameters. We provide event files for a W or Z boson accompanied by up to four jets, and for pure-jet events with up to four jets. The files are for the Large Hadron Collider with a center of mass energy of 7 or 8 TeV. A C++ library along with a Python in...
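The scale recomputation this record describes generally relies on storing, per event, the coefficients of the logarithmic scale dependence so that weights can be rebuilt without re-evaluating matrix elements. A minimal sketch of that idea (the field names w0, w1, w2 are hypothetical and do not reflect the published file layout):

```python
import math

def reweight_to_scale(event, mu_new, mu_ref):
    """Recompute an NLO event weight at a new renormalization scale.

    Assumes the event record stores the coefficients of the quadratic
    polynomial in L = log(mu_new^2 / mu_ref^2); the keys w0, w1, w2
    are illustrative stand-ins, not the actual ntuple fields.
    """
    L = math.log(mu_new**2 / mu_ref**2)
    return event["w0"] + event["w1"] * L + event["w2"] * L**2

# At the reference scale L = 0, so the stored central weight is recovered.
event = {"w0": 1.7, "w1": 0.3, "w2": 0.05}
print(reweight_to_scale(event, 80.4, 80.4))
```

Summing such reweighted event weights over the file is what allows scale-uncertainty bands to be estimated without rerunning the short-distance calculation.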
20. The large hadron collider beauty experiment calorimeters
International Nuclear Information System (INIS)
Martens, A.; LHCb Collaboration; Martens, A.
2010-01-01
The Large Hadron Collider beauty experiment (LHCb), one of the four largest experiments at the LHC at CERN, is dedicated to precision studies of CP violation and other rare effects, in particular in the b and c quark sectors. It aims at precisely measuring the Standard Model parameters and searching for effects inconsistent with this picture. The LHCb calorimeter system comprises a scintillating pad detector, a pre-shower (PS), electromagnetic (ECAL) and hadronic calorimeters, all of these employing the principle of transporting the light from scintillating layers with wavelength-shifting fibers to photomultipliers. The fast response of the calorimeters ensures their key role in the LHCb trigger, which has to cope with the LHC collision rate of 40 MHz. After discussing the design and expected performance of the LHCb calorimeter system, we address the time and energy calibration issues. The results obtained with the calorimeter system from the first LHC data will be shown.
1. Hadron collider searches for diboson resonances
Science.gov (United States)
Dorigo, Tommaso
2018-05-01
This review covers results of searches for new elementary particles that decay into boson pairs (dibosons), performed at the CERN Large Hadron Collider in proton-proton collision data collected by the ATLAS and CMS experiments at 7-, 8-, and 13-TeV center-of-mass energy until the year 2017. The available experimental results of the analysis of final states including most of the possible two-object combinations of W and Z bosons, photons, Higgs bosons, and gluons place stringent constraints on a variety of theoretical ideas that extend the standard model, pushing into the multi-TeV region the scale of allowed new physics phenomena.
2. SEARCHING FOR HIGGS BOSONS AND NEW PHYSICS AT HADRON COLLIDERS
International Nuclear Information System (INIS)
Chung Kao
2007-01-01
The objectives of research activities in particle theory are predicting the production cross section and decay branching fractions of Higgs bosons and new particles at hadron colliders, developing techniques and computer software to discover these particles and to measure their properties, and searching for new phenomena and new interactions at the Fermilab Tevatron and the CERN Large Hadron Collider. The results of our project could lead to the discovery of Higgs bosons, new particles, and signatures for new physics, or we will be able to set meaningful limits on important parameters in particle physics. We investigated the prospects for the discovery at the CERN Large Hadron Collider of Higgs bosons and supersymmetric particles. Promising results are found for the CP-odd pseudoscalar (A⁰) and the heavier CP-even scalar (H⁰) Higgs bosons with masses up to 800 GeV. Furthermore, we study properties of the lightest neutralino (χ⁰) and calculate its cosmological relic density in a supersymmetric U(1)′ model, as well as the muon anomalous magnetic moment aμ = (gμ − 2)/2 in a supersymmetric U(1)′ model. We found that there are regions of the parameter space that can explain the experimental deviation of aμ from the Standard Model calculation and yield an acceptable cold dark matter relic density without conflict with collider experimental constraints. Recently, we presented a complete next-to-leading order (NLO) calculation for the total cross section of inclusive Higgs pair production via bottom-quark fusion (bb̄ → hh) at the CERN Large Hadron Collider (LHC) in the Standard Model and the minimal supersymmetric model. We plan to predict the Higgs pair production rate and to study the trilinear coupling among the Higgs bosons. In addition, we have made significant contributions in B physics, single top production, and the charged Higgs search at the Fermilab Tevatron, as well as in grid computing for both D0 and ATLAS
3. Luminosity Tuning at the Large Hadron Collider
CERN Document Server
Wittmer, W
2006-01-01
By measuring and adjusting the beta-functions at the interaction point (IP), the luminosity is optimized. In LEP (Large Electron Positron Collider) this was done with the two closest doublet magnets. This approach is not applicable for the LHC (Large Hadron Collider) and RHIC (Relativistic Heavy Ion Collider) due to the asymmetric lattice. In addition, in the LHC both beams share a common beam pipe through the inner triplet magnets (in this region changes of the magnetic field act on both beams). To control and adjust the beta-functions without perturbation of other optics functions, quadrupole groups situated on both sides further away from the IP have to be used, where the two beams are already separated. The quadrupoles are excited in specific linear combinations, forming the so-called "tuning knobs" for the IP beta-functions. For a specific correction one of these knobs is scaled by a common multiplier. The different methods which were used to compute such knobs are discussed: (1) matching in MAD, (2)i...
4. 10th joint CERN-Fermilab Hadron Collider Physics Summer School
CERN Document Server
2015-01-01
The CERN-Fermilab Hadron Collider Physics Summer Schools are targeted particularly at young postdocs and senior PhD students working towards the completion of their thesis project, in both experimental High Energy Physics (HEP) and phenomenology.
5. A Large Hadron Electron Collider at CERN, Physics, Machine, Detector
CERN Document Server
2011-01-01
The physics programme and the design are described of a new electron-hadron collider, the LHeC, in which electrons of 60 to possibly 140 GeV collide with LHC protons of 7000 GeV. With an ep design luminosity of about 10³³ cm⁻²s⁻¹, the Large Hadron Electron Collider exceeds the integrated luminosity collected at HERA by two orders of magnitude and the kinematic range by a factor of twenty in the four-momentum squared, Q², and in the inverse Bjorken x. The physics programme is devoted to an exploration of the energy frontier, complementing the LHC and its discovery potential for physics beyond the Standard Model with high precision deep inelastic scattering (DIS) measurements. These are projected to solve a variety of fundamental questions in strong and electroweak interactions. The LHeC thus becomes the world's cleanest high resolution microscope, designed to continue the path of deep inelastic lepton-hadron scattering into unknown areas of physics and kinematics. The physics ...
6. 12th CERN-Fermilab Hadron Collider Physics Summer School
CERN Document Server
2017-01-01
CERN and Fermilab are jointly offering a series of "Hadron Collider Physics Summer Schools", to prepare young researchers for these exciting times. The school has alternated between CERN and Fermilab, and will return to CERN for the twelfth edition, from 28th August to 6th September 2017. The CERN-Fermilab Hadron Collider Physics Summer School is an advanced school targeted particularly at young postdocs and senior PhD students working towards the completion of their thesis project, in both Experimental High Energy Physics (HEP) and phenomenology. Other schools, such as the CERN European School of High Energy Physics, may provide more appropriate training for students in experimental HEP who are still working towards their PhDs. Mark your calendar for 28 August - 6 September 2017, when CERN will welcome students to the twelfth CERN-Fermilab Hadron Collider Physics Summer School. The School will include nine days of lectures and discussions, and one free day in the middle of the period. Limited scholarship ...
7. QCD and Jets at Hadron Colliders
CERN Document Server
Sapeta, Sebastian
2016-01-01
We review various aspects of jet physics in the context of hadron colliders. We start by discussing the definitions and properties of jets and recent developments in this area. We then consider the question of factorization for processes with jets, in particular for cases in which jets are produced in special configurations, for example in the region of forward rapidities. We review numerous perturbative methods for calculating predictions for jet processes, including fixed-order calculations as well as various matching and merging techniques. We also discuss questions related to non-perturbative effects and the role they play in precision jet studies. We describe the status of calculations for processes with jet vetoes, and we also elaborate on the production of jets in the forward direction. Throughout the article, we present selected comparisons between state-of-the-art theoretical predictions and the data from the LHC.
8. Signatures of massive sgoldstinos at hadron colliders
International Nuclear Information System (INIS)
Perazzi, Elena; Ridolfi, Giovanni; Zwirner, Fabio
2000-01-01
In supersymmetric extensions of the Standard Model with a very light gravitino, the effective theory at the weak scale should contain not only the goldstino G-tilde, but also its supersymmetric partners, the sgoldstinos. In the simplest case, the goldstino is a gauge-singlet and its superpartners are two neutral spin-0 particles, S and P. We study possible signals of massive sgoldstinos at hadron colliders, focusing on those that are most relevant for the Tevatron. We show that inclusive production of sgoldstinos, followed by their decay into two photons, can lead to observable signals or to stringent combined bounds on the gravitino and sgoldstino masses. Sgoldstino decays into two gluon jets may provide a useful complementary signature
9. Helicity antenna showers for hadron colliders
Energy Technology Data Exchange (ETDEWEB)
Fischer, Nadine; Skands, Peter [Monash University, School of Physics and Astronomy, Clayton, VIC (Australia); Lifson, Andrew [Monash University, School of Physics and Astronomy, Clayton, VIC (Australia); ETH Zuerich, Zurich (Switzerland)
2017-10-15
We present a complete set of helicity-dependent 2 → 3 antenna functions for QCD initial- and final-state radiation. The functions are implemented in the Vincia shower Monte Carlo framework and are used to generate showers for hadron-collider processes in which helicities are explicitly sampled (and conserved) at each step of the evolution. Although not capturing the full effects of spin correlations, the explicit helicity sampling does permit a significantly faster evaluation of fixed-order matrix-element corrections. A further speed increase is achieved via the implementation of a new fast library of analytical MHV amplitudes, while matrix elements from Madgraph are used for non-MHV configurations. A few examples of applications to QCD 2 → 2 processes are given, comparing the newly released Vincia 2.200 to Pythia 8.226. (orig.)
10. A Large Hadron Electron Collider at CERN
CERN Document Server
Abelleira Fernandez, J L; Adzic, P; Akay, A N; Aksakal, H; Albacete, J L; Allanach, B; Alekhin, S; Allport, P; Andreev, V; Appleby, R B; Arikan, E; Armesto, N; Azuelos, G; Bai, M; Barber, D; Bartels, J; Behnke, O; Behr, J; Belyaev, A S; Ben-Zvi, I; Bernard, N; Bertolucci, S; Bettoni, S; Biswal, S; Blumlein, J; Bottcher, H; Bogacz, A; Bracco, C; Bracinik, J; Brandt, G; Braun, H; Brodsky, S; Bruning, O; Bulyak, E; Buniatyan, A; Burkhardt, H; Cakir, I T; Cakir, O; Calaga, R; Caldwell, A; Cetinkaya, V; Chekelian, V; Ciapala, E; Ciftci, R; Ciftci, A K; Cole, B A; Collins, J C; Dadoun, O; Dainton, J; Roeck, A.De; d'Enterria, D; DiNezza, P; Dudarev, A; Eide, A; Enberg, R; Eroglu, E; Eskola, K J; Favart, L; Fitterer, M; Forte, S; Gaddi, A; Gambino, P; Garcia Morales, H; Gehrmann, T; Gladkikh, P; Glasman, C; Glazov, A; Godbole, R; Goddard, B; Greenshaw, T; Guffanti, A; Guzey, V; Gwenlan, C; Han, T; Hao, Y; Haug, F; Herr, W; Herve, A; Holzer, B J; Ishitsuka, M; Jacquet, M; Jeanneret, B; Jensen, E; Jimenez, J M; Jowett, J M; Jung, H; Karadeniz, H; Kayran, D; Kilic, A; Kimura, K; Klees, R; Klein, M; Klein, U; Kluge, T; Kocak, F; Korostelev, M; Kosmicki, A; Kostka, P; Kowalski, H; Kraemer, M; Kramer, G; Kuchler, D; Kuze, M; Lappi, T; Laycock, P; Levichev, E; Levonian, S; Litvinenko, V N; Lombardi, A; Maeda, J; Marquet, C; Mellado, B; Mess, K H; Milanese, A; Milhano, J G; Moch, S; Morozov, I I; Muttoni, Y; Myers, S; Nandi, S; Nergiz, Z; Newman, P R; Omori, T; Osborne, J; Paoloni, E; Papaphilippou, Y; Pascaud, C; Paukkunen, H; Perez, E; Pieloni, T; Pilicer, E; Pire, B; Placakyte, R; Polini, A; Ptitsyn, V; Pupkov, Y; Radescu, V; Raychaudhuri, S; Rinolfi, L; Rizvi, E; Rohini, R; Rojo, J; Russenschuck, S; Sahin, M; Salgado, C A; Sampei, K; Sassot, R; Sauvan, E; Schaefer, M; Schneekloth, U; Schorner-Sadenius, T; Schulte, D; Senol, A; Seryi, A; Sievers, P; Skrinsky, A N; Smith, W; South, D; Spiesberger, H; Stasto, A M; Strikman, M; Sullivan, M; Sultansoy, S; Sun, Y P; Surrow, B; 
Szymanowski, L; Taels, P; Tapan, I; Tasci, T; Tassi, E; Kate, H.Ten; Terron, J; Thiesen, H; Thompson, L; Thompson, P; Tokushuku, K; Tomas Garcia, R; Tommasini, D; Trbojevic, D; Tsoupas, N; Tuckmantel, J; Turkoz, S; Trinh, T N; Tywoniuk, K; Unel, G; Ullrich, T; Urakawa, J; VanMechelen, P; Variola, A; Veness, R; Vivoli, A; Vobly, P; Wagner, J; Wallny, R; Wallon, S; Watt, G; Weiss, C; Wiedemann, U A; Wienands, U; Willeke, F; Xiao, B W; Yakimenko, V; Zarnecki, A F; Zhang, Z; Zimmermann, F; Zlebcik, R; Zomer, F; CERN. Geneva. LHeC Department
2012-01-01
This document provides a brief overview of the recently published report on the design of the Large Hadron Electron Collider (LHeC), which comprises its physics programme, accelerator physics, technology and main detector concepts. The LHeC exploits and develops challenging, though principally existing, accelerator and detector technologies. This summary is complemented by brief illustrations of some of the highlights of the physics programme, which relies on a vastly extended kinematic range, luminosity and unprecedented precision in deep inelastic scattering. Illustrations are provided regarding high precision QCD, new physics (Higgs, SUSY) and electron-ion physics. The LHeC is designed to run synchronously with the LHC in the twenties and to achieve an integrated luminosity of O(100) fb⁻¹. It will become the cleanest high resolution microscope of mankind and will substantially extend as well as complement the investigation of the physics of the TeV energy scale, which has been enabled by the LHC.
11. Stop Lepton Associated Production at Hadron Colliders
CERN Document Server
Alves, A; Plehn, Tilman
2003-01-01
At hadron colliders, the search for R-parity violating supersymmetry can probe scalar masses beyond what is covered by pair production processes. We evaluate the next-to-leading order SUSY-QCD corrections to the associated stop or sbottom production with a lepton through R-parity violating interactions. We show that higher order corrections render the theoretical predictions more stable with respect to variations of the renormalization and factorization scales and that the total cross section is enhanced by a factor up to 70% at the Tevatron and 50% at the LHC. We investigate in detail how the heavy supersymmetric states decouple from the next-to-leading order process, which gives rise to a theory with an additional scalar leptoquark. In this scenario the inclusion of higher order QCD corrections increases the Tevatron reach on leptoquark masses by up to 40 GeV and the LHC reach by up to 200 GeV.
12. Aperture meter for the Large Hadron Collider
International Nuclear Information System (INIS)
Mueller, G.J.; Fuchsberger, K.; Redaelli, S.
2012-01-01
The control of the high intensity beams of the CERN Large Hadron Collider (LHC) is particularly challenging and requires good modeling of the machine and monitoring of various machine parameters. During operation it is crucial to ensure a minimal distance between the beam edge and the aperture of sensitive equipment, e.g. the superconducting magnets, which in all cases must be in the shadow of the collimators that protect the machine. Possible dangerous situations must be detected as soon as possible. In order to provide the operator with information about the current machine bottlenecks, an aperture meter application was developed based on the LHC online modeling tool-chain. The calculation of available free aperture takes into account the best available optics and aperture model as well as the relevant beam measurements. This paper describes the design and integration of this application into the control environment and presents results of the usage in daily operation and from validation measurements. (authors)
13. Helicity antenna showers for hadron colliders
Science.gov (United States)
Fischer, Nadine; Lifson, Andrew; Skands, Peter
2017-10-01
We present a complete set of helicity-dependent 2→ 3 antenna functions for QCD initial- and final-state radiation. The functions are implemented in the Vincia shower Monte Carlo framework and are used to generate showers for hadron-collider processes in which helicities are explicitly sampled (and conserved) at each step of the evolution. Although not capturing the full effects of spin correlations, the explicit helicity sampling does permit a significantly faster evaluation of fixed-order matrix-element corrections. A further speed increase is achieved via the implementation of a new fast library of analytical MHV amplitudes, while matrix elements from Madgraph are used for non-MHV configurations. A few examples of applications to QCD 2→ 2 processes are given, comparing the newly released Vincia 2.200 to Pythia 8.226.
14. QCD studies at the hadron colliders
International Nuclear Information System (INIS)
Flaugher, B.L.
1990-01-01
Two hadron collider experiments are actively pursuing QCD jet analyses. They are CDF, with √s = 1800 GeV, and UA2, with √s = 630 GeV. Recent results from these collaborations are discussed. The inclusive jet spectrum, dijet mass and angular distribution are compared to QCD predictions and used to set limits on quark substructure. Data from both experiments are compared to the O(α_s³) calculations for the inclusive jet cross section. Studies of 3-jet, 4-jet and 5-jet events are described. A limit is set on the cross section for double parton scattering from the UA2 4-jet analysis. The inclusive photon cross section has been measured by both CDF and UA2 and is compared to theoretical predictions. 13 refs., 17 figs., 1 tab
15. Weak mixing angle measurements at hadron colliders
CERN Document Server
Di Simone, Andrea; The ATLAS collaboration
2015-01-01
The talk will cover weak mixing angle measurements at hadron colliders, ATLAS and CMS in particular. ATLAS has measured the forward-backward asymmetry for the neutral current Drell-Yan process in a wide mass range around the Z resonance region using dielectron and dimuon final states with √s = 7 TeV data. For the dielectron channel, the measurement includes electrons detected in the forward calorimeter, which extends the covered phase space. The result is then used to extract a measurement of the effective weak mixing angle. Uncertainties from the limited knowledge of the parton distribution functions in the proton constitute a significant part of the uncertainty, and a dedicated study is performed to obtain a PDF set describing W and Z data measured previously by ATLAS. Similar studies from CMS will be reported.
16. A feedback microprocessor for hadron colliders
International Nuclear Information System (INIS)
Herrup, D.A.; Chapman, L.; Franck, A.; Groves, T.; Lublinsky, B.
1992-12-01
A feedback microprocessor has been built for the TEVATRON. It has been constructed to be applicable to hadron colliders in general. Its inputs are realtime accelerator measurements, data describing the state of the TEVATRON, and ramp tables. The microprocessor software includes a finite state machine. Each state corresponds to a specific TEVATRON operation and has a state-specific TEVATRON model. Transitions between states are initiated by the global TEVATRON clock. Each state includes a cyclic routine which is called periodically and where all calculations are performed. The output corrections are inserted onto a fast TEVATRON-wide link from which the power supplies will read the realtime corrections. We also store all of the input data and output corrections in a set of buffers which can easily be retrieved for diagnostic analysis. In this paper we will describe this device and its use to control the TEVATRON tunes as well as other possible applications
17. Protection of the CERN Large Hadron Collider
Science.gov (United States)
Schmidt, R.; Assmann, R.; Carlier, E.; Dehning, B.; Denz, R.; Goddard, B.; Holzer, E. B.; Kain, V.; Puccio, B.; Todd, B.; Uythoven, J.; Wenninger, J.; Zerlauth, M.
2006-11-01
The Large Hadron Collider (LHC) at CERN will collide two counter-rotating proton beams, each with an energy of 7 TeV. The energy stored in the superconducting magnet system will exceed 10 GJ, and each beam has a stored energy of 362 MJ which could cause major damage to accelerator equipment in the case of uncontrolled beam loss. Safe operation of the LHC will therefore rely on a complex system for equipment protection. The systems for protection of the superconducting magnets in case of quench must be fully operational before powering the magnets. For safe injection of the 450 GeV beam into the LHC, beam absorbers must be in their correct positions and specific procedures must be applied. Requirements for safe operation throughout the cycle necessitate early detection of failures within the equipment, and active monitoring of the beam with fast and reliable beam instrumentation, mainly beam loss monitors (BLM). When operating with circulating beams, the time constant for beam loss after a failure extends from a few ms to a few minutes; failures must be detected sufficiently early and transmitted to the beam interlock system that triggers a beam dump. It is essential that the beams are properly extracted on to the dump blocks at the end of a fill and in case of emergency, since the beam dump blocks are the only elements of the LHC that can withstand the impact of the full beam.
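The 362 MJ stored-energy figure quoted in this abstract can be reproduced with a one-line calculation. The sketch below assumes the nominal LHC fill pattern (2808 bunches of 1.15×10¹¹ protons), which is a standard design value and is not stated in the abstract itself:

```python
# Back-of-envelope check of the 362 MJ stored beam energy quoted above.
# The fill parameters are assumed (nominal LHC design values), not taken
# from the abstract.
E_PROTON_J = 7e12 * 1.602e-19   # 7 TeV per proton, converted to joules
N_PROTONS = 2808 * 1.15e11      # protons per beam: bunches x bunch population

def stored_energy_mj(energy_per_proton_j=E_PROTON_J, n_protons=N_PROTONS):
    """Total kinetic energy stored in one beam, in megajoules."""
    return energy_per_proton_j * n_protons / 1e6

print(round(stored_energy_mj()))  # -> 362
```

The result matches the 362 MJ per beam quoted in the abstract to within rounding.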
18. The Large Hadron Collider, a personal recollection
CERN Document Server
Evans, L
2014-01-01
The construction of the Large Hadron Collider (LHC) has been a massive endeavor spanning almost 30 years from conception to commissioning. Building the machine with the highest possible energy (7 TeV) in the existing LEP tunnel of 27 km circumference and with a tunnel diameter of only 3.8 m has required considerable innovation. The first was the development of an idea first proposed by Bob Palmer at Brookhaven National Laboratory in 1978, where the two rings are integrated into a single magnetic structure. This compact 2-in-1 structure was essential for the LHC due to both the limited space available in the existing Large Electron-Positron collider tunnel and the cost. The second innovation was the bold move to use superfluid helium cooling on a massive scale, which was imposed by the need to achieve a high (8.3 T) magnetic field using an affordable Nb-Ti superconductor. In this article, no attempt is made to give a comprehensive review of the machine design. This can be found in the LHC Design Report [1], w...
19. Prospects for heavy flavor physics at hadron colliders
International Nuclear Information System (INIS)
Butler, J.N.
1997-09-01
The role of hadron colliders in the observation and study of CP violation in B decays is discussed. We show that hadron collider experiments can play a significant role in the early studies of these phenomena and will play an increasingly dominant role as the effort turns towards difficult-to-measure decays, especially those of the B_s meson, and sensitive searches for rare decays and subtle deviations from Standard Model predictions. We conclude with a discussion of the relative merits of hadron collider detectors with 'forward' vs 'central' rapidity coverage
20. The Large Hadron Collider: Present Status and Prospects
CERN Document Server
Evans, Lyndon R
2000-01-01
The Large Hadron Collider (LHC), due to be commissioned in 2005, will provide particle physics with the first laboratory tool to access the energy frontier above 1 TeV. In order to achieve this, protons must be accelerated and stored at 7 TeV, colliding with an unprecedented luminosity of 10³⁴ cm⁻² s⁻¹. The 8.3 Tesla guide field is obtained using conventional NbTi technology cooled to below the lambda point of helium. Considerable modification of the infrastructure around the existing LEP tunnel is needed to house the LHC machine and detectors. The project is advancing according to schedule with most of the major hardware systems including cryogenics and magnets under construction. A brief status report is given and future prospects are discussed.
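The parameters quoted in this abstract (7 TeV beams, an 8.3 T guide field, the existing 26.7 km LEP tunnel) can be cross-checked with the standard bending-radius relation p[GeV/c] ≈ 0.3·B[T]·ρ[m]. The sketch below is only an order-of-magnitude consistency check, not a lattice design:

```python
import math

# Consistency check: can an 8.3 T dipole field bend a 7 TeV proton beam
# around a ring that fits inside the 26.7 km LEP tunnel?
P_GEV = 7000.0  # beam momentum in GeV/c (~beam energy for ultrarelativistic protons)
B_T = 8.3       # dipole field quoted in the abstract

def bending_radius_m(p_gev=P_GEV, b_t=B_T):
    """Magnetic bending radius from p[GeV/c] = 0.2998 * B[T] * rho[m]."""
    return p_gev / (0.2998 * b_t)

dipole_path_km = 2 * math.pi * bending_radius_m() / 1000  # bending path needed
print(round(dipole_path_km, 1))  # -> 17.7, comfortably inside 26.7 km
```

The remaining roughly 9 km of circumference is available for quadrupoles, insertions and straight sections, consistent with a realistic collider layout.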
International Nuclear Information System (INIS)
Potter, K.M.; Hoefert, M.; Stevenson, G.R.
1996-01-01
After a brief description of the Large Hadron Collider (LHC), which will produce 7 TeV on 7 TeV proton collisions, some of the radiological questions it raises will be discussed. The machine will be built in the 27 km circumference ring-tunnel of an existing collider at CERN. It aims to achieve collision rates of 10⁹ per second in two of its high-energy particle detectors. This requires two high-intensity beams of more than 10¹⁴ protons each. Shielding, access control and activation in addition to the high power in the proton-proton collisions must be taken into account. The detectors and local electronics of the particle physics experiments, which will surround these collisions, will have to be radiation resistant. Some of the environmental issues raised by the project will be discussed. (author)
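The quoted 10⁹ collisions per second follows directly from multiplying luminosity by the inelastic proton-proton cross section. Both numbers in the sketch below are assumed round design values (not taken from this abstract):

```python
# Event rate = luminosity x cross section.
LUMINOSITY = 1e34         # cm^-2 s^-1, LHC design luminosity (assumed)
SIGMA_INEL_CM2 = 100e-27  # ~100 mb inelastic pp cross section, in cm^2 (assumed)

def event_rate_hz(lumi=LUMINOSITY, sigma=SIGMA_INEL_CM2):
    """Inelastic collision rate per interaction point, in Hz."""
    return lumi * sigma

print(f"{event_rate_hz():.0e}")  # -> 1e+09
```

This is the origin of the "10⁹ per second" figure in the abstract: with ~10¹¹ protons per bunch and thousands of bunches, the rate is set by luminosity, not by beam population.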
2. The Large Hadron Collider Present Status and Prospects
CERN Document Server
Evans, Lyndon R
2001-01-01
The Large Hadron Collider (LHC), due to be commissioned in 2005, will provide particle physics with the first laboratory tool to access the energy frontier above 1 TeV. In order to achieve this, protons must be accelerated and stored at 7 TeV, colliding with an unprecedented luminosity of 10³⁴ cm⁻² s⁻¹. The 8.3 Tesla guide field is obtained using conventional NbTi technology cooled to below the lambda point of helium. Considerable modification of the infrastructure around the existing LEP tunnel is needed to house the LHC machine and detectors. The project is advancing according to schedule with most of the major hardware systems including cryogenics and magnets under construction. A brief status report is given and future prospects are discussed.
3. Higgs Boson and the Large Hadron Collider
International Nuclear Information System (INIS)
Banerjee, Sunanda
2014-01-01
The Standard Model of particle physics has been extremely successful in explaining all the precision data collected during the past few decades. The model, however, was incomplete, with one of its key particles still not experimentally observed until 2012. This particle is predicted by the theory in the context of providing mass to the fundamental constituents as well as to the exchange particles, the W and Z bosons. In the recent past, two experiments operating at the Large Hadron Collider at CERN, ATLAS and CMS, have observed evidence of a new state. The search for this object has been guided by the Standard Model Higgs boson hypothesis. These results have been consolidated with newer data, and some effort has gone into determining the properties of this newly observed state. Some of the most important recent results in this context are presented in this lecture. Several groups from India have participated in the LHC program and contributed to various aspects such as the machine, the computing grid and the experiments. In particular, 3 institutes and 2 university groups have been members of the CMS collaboration and took part in the discovery of the new state. The participation of the Indian groups is also highlighted. (author)
4. Weak boson emission in hadron collider processes
International Nuclear Information System (INIS)
Baur, U.
2007-01-01
The O(α) virtual weak radiative corrections to many hadron collider processes are known to become large and negative at high energies, due to the appearance of Sudakov-like logarithms. At the same order in perturbation theory, weak boson emission diagrams contribute. Since the W and Z bosons are massive, the O(α) virtual weak radiative corrections and the contributions from weak boson emission are separately finite. Thus, unlike in QED or QCD calculations, there is no technical reason for including gauge boson emission diagrams in calculations of electroweak radiative corrections. In most calculations of the O(α) electroweak radiative corrections, weak boson emission diagrams are therefore not taken into account. Another reason for not including these diagrams is that they lead to final states which differ from that of the original process. However, in experiment, one usually considers partially inclusive final states. Weak boson emission diagrams thus should be included in calculations of electroweak radiative corrections. In this paper, I examine the role of weak boson emission in those processes at the Fermilab Tevatron and the CERN LHC for which the one-loop electroweak radiative corrections are known to become large at high energies (inclusive jet, isolated photon, Z+1 jet, Drell-Yan, di-boson, tt̄, and single top production). In general, I find that the cross section for weak boson emission is substantial at high energies and that weak boson emission and the O(α) virtual weak radiative corrections partially cancel.
5. Cryogenics for the Large Hadron Collider
CERN Document Server
Lebrun, P
2000-01-01
The Large Hadron Collider (LHC), a 26.7 km circumference superconducting accelerator equipped with high-field magnets operating in superfluid helium below 1.9 K, has now fully entered construction at CERN, the European Laboratory for Particle Physics. The heart of the LHC cryogenic system is the quasi-isothermal magnet cooling scheme, in which flowing two-phase saturated superfluid helium removes the heat load from the 36000 ton cold mass, immersed in some 400 m³ of static pressurised superfluid helium. The LHC also makes use of supercritical helium for nonisothermal cooling of the beam screens which intercept most of the dynamic heat loads at higher temperature. Although not used in normal operation, liquid nitrogen will provide the source of refrigeration for precooling the machine. Refrigeration for the LHC is produced in eight large refrigerators, each with an equivalent capacity of about 18 kW at 4.5 K, completed by 1.8 K refrigeration units making use of several stages of hydrodynamic cold compressor...
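A short thermodynamic estimate shows why refrigeration at 1.9 K dominates the cryogenic effort described in this abstract. The Carnot figure below is an ideal lower bound on the work required, and real plants perform several times worse; the ambient temperature is an assumed round value:

```python
# Ideal (Carnot) work needed at room temperature per watt of heat removed
# at the cold temperature: W/Q = (T_warm - T_cold) / T_cold.
T_WARM = 300.0  # K, ambient temperature (assumed)
T_COLD = 1.9    # K, superfluid-helium bath temperature from the abstract

def carnot_work_per_watt(t_warm=T_WARM, t_cold=T_COLD):
    """Minimum watts of compressor work per watt extracted at t_cold."""
    return (t_warm - t_cold) / t_cold

print(round(carnot_work_per_watt()))  # -> 157
```

Even in this ideal limit, every watt deposited in the 1.9 K bath costs more than 150 W of room-temperature work, which is why beam screens intercept the dynamic heat loads at higher temperature.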
6. Recent results from the Large Hadron Collider
CERN Document Server
Alcaraz Maestre, J
2013-01-01
We present an overview of the physics results obtained by experiments at the Large Hadron Collider (LHC) in 2009–2010, for an integrated luminosity of L ≈ 40 pb⁻¹, collected mostly at a centre-of-mass energy of √s = 7 TeV. After an introduction to the physics environment at the LHC and the current performance of the accelerator and detectors, we will discuss quantum chromodynamics and B-physics analyses, W and Z production, the first results in the top sector, and searches for new physics, with particular emphasis on supersymmetry and Higgs studies. While most of the presented results are in remarkable agreement with Standard Model predictions, the excellent performance of the LHC machine and experiments, the prompt analysis of all data within just a few months after the end of data taking, and the high quality of the results obtained constitute an encouraging step towards unique measurements and exciting discoveries in the 2011–2012 period and beyond.
7. The ATLAS experiment at the CERN Large Hadron Collider
NARCIS (Netherlands)
Aad, G.; et al., [Unknown; Bentvelsen, S.; Bobbink, G.J.; Bos, K.; Boterenbrood, H.; Brouwer, G.; Buis, E.J.; Buskop, J.J.F.; Colijn, A.P.; Dankers, R.; Daum, C.; de Boer, R.; de Jong, P.; Ennes, P.; Gosselink, M.; Groenstege, H.; Hart, R.G.G.; Hartjes, F.; Hendriks, P.J.; Hessey, N.P.; Jansweijer, P.P.M.; Kieft, G.; Klok, P.F.; Klous, S.; Kluit, P.; Koffeman, E.; Koutsman, A.; Liebig, W.; Limper, M.; Linde, F.; Luijckx, G.; Massaro, G.; Muijs, A.; Peeters, S.J.M.; Reichold, A.; Rewiersma, P.; Rijpstra, M.; Scholte, R.C.; Schuijlenburg, H.W.; Snuverink, J.; van der Graaf, H.; van der Kraaij, E.; van Eijk, B.; van Kesteren, Z.; van Vulpen, I.; Verkerke, W.; Vermeulen, J.C.; Vreeswijk, M.; Werneke, P.
2008-01-01
The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper. A brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.
8. Dijet physics with CMS detector at the Large Hadron Collider
2012-10-06
Oct 6, 2012 ... Hadron Collider, at a proton–proton collision energy of √ ... generator predicts less azimuthal decorrelation than observed in data [8]. ... The dijet mass spectrum predicted by quantum chromodynamics (QCD) falls smoothly.
9. Computing and data handling requirements for SSC [Superconducting Super Collider] and LHC [Large Hadron Collider] experiments
International Nuclear Information System (INIS)
Lankford, A.J.
1990-05-01
A number of issues for computing and data handling in the online environment at future high-luminosity, high-energy colliders, such as the Superconducting Super Collider (SSC) and the Large Hadron Collider (LHC), are outlined. Requirements for trigger processing, data acquisition, and online processing are discussed. Some aspects of possible solutions are sketched. 6 refs., 3 figs
10. Flat beams in a 50 TeV hadron collider
International Nuclear Information System (INIS)
Peggs, S.; Harrison, M.; Pilat, F.; Syphers, M.
1997-01-01
The basic beam dynamics of a next generation 50 x 50 TeV hadron collider based on a high field magnet approach have been outlined over the past several years. Radiation damping not only produces small emittances, but also flat beams, just as in electron machines. Based on "Snowmass 96" parameters, we investigate the issues associated with flat beams in very high energy hadron colliders
11. Working group report: Physics at the Large Hadron Collider
cally viable physics issues at two hadron colliders currently under operation, the p¯p collider ... corrections to different SM processes are very important. ... Keeping all these in mind and the available skills and interests of the ... relation involving the masses of the Standard Model particles as well as the masses of any.
12. Tolerable systematic errors in Really Large Hadron Collider dipoles
International Nuclear Information System (INIS)
Peggs, S.; Dell, F.
1996-01-01
Maximum allowable systematic harmonics for arc dipoles in a Really Large Hadron Collider are derived. The possibility of half cell lengths much greater than 100 meters is justified. A convenient analytical model evaluating horizontal tune shifts is developed, and tested against a sample high field collider
13. Detectors and luminosity for hadron colliders
International Nuclear Information System (INIS)
Diebold, R.
1983-01-01
Three types of very high energy hadron-hadron colliders are discussed in terms of the trade-off between energy and luminosity. The usable luminosity depends on both the physics under study and the rate capabilities of the detector
14. Challenges for MSSM Higgs searches at hadron colliders
Energy Technology Data Exchange (ETDEWEB)
Carena, Marcela S.; /Fermilab; Menon, A.; /Argonne /Chicago U., EFI; Wagner, C.E.M.; /Argonne /Chicago U., EFI /KICP, Chicago /Chicago U.
2007-04-01
In this article we analyze the impact of B-physics and Higgs physics at LEP on standard and non-standard Higgs boson searches at the Tevatron and the LHC, within the framework of minimal flavor violating supersymmetric models. The B-physics constraints we consider come from the experimental measurements of the rare B-decays b → sγ and B_u → τν and the experimental limit on the B_s → μ⁺μ⁻ branching ratio. We show that these constraints are severe for large values of the trilinear soft breaking parameter A_t, rendering the non-standard Higgs searches at hadron colliders less promising. On the contrary, these bounds are relaxed for small values of A_t and large values of the Higgsino mass parameter μ, enhancing the prospects for the direct detection of non-standard Higgs bosons at both colliders. We also consider the available ATLAS and CMS projected sensitivities in the standard model Higgs search channels, and we discuss the LHC's ability in probing the whole MSSM parameter space. In addition we also consider the expected Tevatron collider sensitivities in the standard model Higgs h → bb̄ channel to show that it may be able to find 3σ evidence in the B-physics allowed regions for small or moderate values of the stop mixing parameter.
15. Design of the large hadron electron collider interaction region
Science.gov (United States)
Cruz-Alaniz, E.; Newton, D.; Tomás, R.; Korostelev, M.
2015-11-01
The large hadron electron collider (LHeC) is a proposed upgrade of the Large Hadron Collider (LHC) within the high luminosity LHC (HL-LHC) project, to provide electron-nucleon collisions and explore a new regime of energy and luminosity for deep inelastic scattering. The design of an interaction region for any collider is always a challenging task given that the beams are brought into crossing with the smallest beam sizes in a region where there are tight detector constraints. In this case integrating the LHeC into the existing HL-LHC lattice, to allow simultaneous proton-proton and electron-proton collisions, increases the difficulty of the task. A nominal design was presented in the LHeC conceptual design report in 2012 featuring an optical configuration that focuses one of the proton beams of the LHC to β* = 10 cm in the LHeC interaction point to reach the desired luminosity of L = 10³³ cm⁻² s⁻¹. This value is achieved with the aid of a new inner triplet of quadrupoles at a distance L* = 10 m from the interaction point. However the chromatic beta beating was found intolerable regarding machine protection issues. An advanced chromatic correction scheme was required. This paper explores the feasibility of the extension of a novel optical technique called the achromatic telescopic squeezing scheme and the flexibility of the interaction region design, in order to find the optimal solution that would produce the highest luminosity while controlling the chromaticity, minimizing the synchrotron radiation power and maintaining the dynamic aperture required for stability.
16. Design of the large hadron electron collider interaction region
Directory of Open Access Journals (Sweden)
E. Cruz-Alaniz
2015-11-01
Full Text Available The large hadron electron collider (LHeC) is a proposed upgrade of the Large Hadron Collider (LHC) within the high luminosity LHC (HL-LHC) project, to provide electron-nucleon collisions and explore a new regime of energy and luminosity for deep inelastic scattering. The design of an interaction region for any collider is always a challenging task given that the beams are brought into crossing with the smallest beam sizes in a region where there are tight detector constraints. In this case integrating the LHeC into the existing HL-LHC lattice, to allow simultaneous proton-proton and electron-proton collisions, increases the difficulty of the task. A nominal design was presented in the LHeC conceptual design report in 2012 featuring an optical configuration that focuses one of the proton beams of the LHC to β* = 10 cm in the LHeC interaction point to reach the desired luminosity of L = 10³³ cm⁻² s⁻¹. This value is achieved with the aid of a new inner triplet of quadrupoles at a distance L* = 10 m from the interaction point. However the chromatic beta beating was found intolerable regarding machine protection issues. An advanced chromatic correction scheme was required. This paper explores the feasibility of the extension of a novel optical technique called the achromatic telescopic squeezing scheme and the flexibility of the interaction region design, in order to find the optimal solution that would produce the highest luminosity while controlling the chromaticity, minimizing the synchrotron radiation power and maintaining the dynamic aperture required for stability.
17. The Very Large Hadron Collider: The farthest energy frontier
International Nuclear Information System (INIS)
Barletta, William A.
2001-01-01
The Very Large Hadron Collider (or Eloisatron) represents what may well be the final step on the energy frontier of accelerator-based high energy physics. While an extremely high luminosity proton collider at 100-200 TeV center of mass energy can probably be built in one step with LHC technology, that machine would cost more than what is presently politically acceptable. This talk summarizes the strategies of collider design including staged deployment, comparison with electron-positron colliders, opportunities for major innovation, and the technical challenges of reducing costs to manageable proportions. It also presents the priorities for relevant R and D for the next few years
18. Overview of the Insertable B-Layer (IBL) Project of the ATLAS Experiment at the Large Hadron Collider at CERN
International Nuclear Information System (INIS)
Flick, Tobias
2013-06-01
The ATLAS experiment will upgrade its Pixel Detector with the installation of a new pixel layer in 2013/14. The new sub-detector, named Insertable B-Layer (IBL), will be installed between the existing Pixel Detector and a new smaller diameter beam-pipe at a radius of 33 mm. To cope with the high radiation and hit occupancy due to the proximity to the interaction point, a new read-out chip and two different silicon sensor technologies (planar and 3D) have been developed and are currently under investigation and production for the IBL. Furthermore, the physics performance should be improved through the reduction of the pixel size, while targeting a low material budget with a new mechanical support based on lightweight staves and a CO₂-based cooling system. An overview of the IBL project will be presented, together with the results of beam tests on different sensor technologies and the testing of pre-series staves assembled before production in order to qualify the assembly procedure, the electrical integrity of the loaded modules, and the read-out chain. (authors)
CERN Multimedia
HR Department
2010-01-01
Regular Programme 21, 22, 23 & 24 June 2010 from 11:00 to 12:00 - Main Auditorium, Bldg. 500-1-001 Higgs Boson Searches at Hadron Colliders by Dr. Karl Jakobs (University of Freiburg) In these Academic Training lectures, the phenomenology of Higgs bosons and search strategies at hadron colliders are discussed. After a brief introduction on Higgs bosons in the Standard Model and a discussion of present direct and indirect constraints on its mass the status of the theoretical cross section calculations for Higgs boson production at hadron colliders is reviewed. In the following lectures important experimental issues relevant for Higgs boson searches (trigger, measurements of leptons, jets and missing transverse energy) are presented. This is followed by a detailed discussion of the discovery potential for the Standard Model Higgs boson for both the Tevatron and the LHC experiments. In addition, various scenarios beyond the Standard Model, primarily the MSSM, are considered. Finally, the potential and ...
20. Electron Lenses for the Large Hadron Collider
Energy Technology Data Exchange (ETDEWEB)
Stancari, Giulio [Fermilab; Valishev, Alexander [Fermilab; Bruce, Roderik [CERN; Redaelli, Stefano [CERN; Rossi, Adriana [CERN; Salvachua, Belen [CERN
2014-07-01
Electron lenses are pulsed, magnetically confined electron beams whose current-density profile is shaped to obtain the desired effect on the circulating beam. Electron lenses were used in the Fermilab Tevatron collider for bunch-by-bunch compensation of long-range beam-beam tune shifts, for removal of uncaptured particles in the abort gap, for preliminary experiments on head-on beam-beam compensation, and for the demonstration of halo scraping with hollow electron beams. Electron lenses for beam-beam compensation are being commissioned in RHIC at BNL. Within the US LHC Accelerator Research Program and the European HiLumi LHC Design Study, hollow electron beam collimation was studied as an option to complement the collimation system for the LHC upgrades. This project is moving towards a technical design in 2014, with the goal to build the devices in 2015-2017, after resuming LHC operations and re-assessing needs and requirements at 6.5 TeV. Because of their electric charge and the absence of materials close to the proton beam, electron lenses may also provide an alternative to wires for long-range beam-beam compensation in LHC luminosity upgrade scenarios with small crossing angles.
1. Electron lenses for the large hadron collider
CERN Document Server
Stancari, G; Bruce, R; Redaelli, S; Rossi, A; Salvachua Ferrando, B
2014-01-01
Electron lenses are pulsed, magnetically confined electron beams whose current-density profile is shaped to obtain the desired effect on the circulating beam. Electron lenses were used in the Fermilab Tevatron collider for bunch-by-bunch compensation of long-range beam-beam tune shifts, for removal of uncaptured particles in the abort gap, for preliminary experiments on head-on beam-beam compensation, and for the demonstration of halo scraping with hollow electron beams. Electron lenses for beam-beam compensation are being commissioned in RHIC at BNL. Within the US LHC Accelerator Research Program and the European HiLumi LHC Design Study, hollow electron beam collimation was studied as an option to complement the collimation system for the LHC upgrades. A conceptual design was recently completed, and the project is moving towards a technical design in 2014–2015 for construction in 2015–2017, if needed, after resuming LHC operations and re-assessing collimation needs and requirements at 6.5 TeV. Because of the...
International Nuclear Information System (INIS)
Yamazaki, Toshimitsu
1990-01-01
The Japanese Hadron Project (JHP) is aimed at producing various kinds of unstable secondary beams based on high-intensity protons from a new accelerator complex. The 1 GeV protons, first produced from a 1 GeV linac, are transferred to a compressor/stretcher ring, where a sharply-pulsed beam or a stretched continuous beam will be produced. The pulsed beam will be used for a pulsed muon source (M arena) and a spallation neutron source (N arena). A part of the proton beam will be used to produce unstable nuclei, which will be accelerated to several MeV/nucleon (E arena). The purpose and impact of JHP will be described in view of future applications of hadronic beams to nuclear energy and material science. (author)
3. The Large Hadron Collider project
CERN Document Server
Maiani, Luciano
1999-01-01
Knowledge of the fundamental constituents of matter has greatly advanced over the last decades. The standard theory of fundamental interactions presents us with a theoretically sound picture, which describes with great accuracy known physical phenomena on most diverse energy and distance scales. These range from 10⁻¹⁶ cm, inside the nucleons, up to large-scale astrophysical bodies, including the early Universe at some nanosecond after the Big Bang and temperatures of the order of 10² GeV. The picture is not yet completed, however, as we lack the observation of the Higgs boson, predicted in the 100-500 GeV range, a particle associated with the generation of particle masses and with the quantum fluctuations in the primordial Universe. In addition, the standard theory is expected to undergo a change of regime in the 10³ GeV region, with the appearance of new families of particles, most likely associated with the onset of a new symmetry (supersymmetry). In 1994, the CERN Council approved the con...
4. Il Collisore LHC (Large Hadron Collider)
CERN Multimedia
Brianti, Giorgio
2004-01-01
In 2007, in a new collider built in the 27 km tunnel, collisions will take place between very powerful beams of protons and ions. The energies will be very high, in order to probe the tiniest constituents of matter (1 page)
5. Production of electroweak bosons at hadron colliders: theoretical aspects
CERN Document Server
Mangano, Michelangelo L.
2016-01-01
Since the W and Z discovery, hadron colliders have provided a fertile ground, in which continuously improving measurements and theoretical predictions allow to precisely determine the gauge boson properties, and to probe the dynamics of electroweak and strong interactions. This article will review, from a theoretical perspective, the role played by the study, at hadron colliders, of electroweak boson production properties, from the better understanding of the proton structure, to the discovery and studies of the top quark and of the Higgs, to the searches for new phenomena beyond the Standard Model.
6. 2nd CERN-Fermilab Hadron Collider Physics Summer School
CERN Document Server
Gian Giudice; Ellis, Nick; Jakobs, Karl; Mage, Patricia; Seymour, Michael H; Spiropulu, Maria; Wilkinson, Guy; CERN-FNAL Summer School; Hadron Collider Physics Summer School
2007-01-01
For the past few years, experiments at the Fermilab Tevatron Collider have once again been exploring uncharted territory at the current energy frontier of particle physics. With CERN's LHC operations to start in 2007, a new era in the exploration of the fundamental laws of nature will begin. In anticipation of this era of discovery, Fermilab and CERN are jointly organizing a series of "Hadron Collider Physics Summer Schools", whose main goal is to offer a complete picture of both the theoretical and experimental aspects of hadron collider physics. Preparing young researchers to tackle the current and anticipated challenges at hadron colliders, and spreading the global knowledge required for a timely and competent exploitation of the LHC physics potential, are concerns equally shared by CERN, the LHC host laboratory, and by Fermilab, the home of the Tevatron and host of CMS's LHC Physics Center in the U.S. The CERN-Fermilab Hadron Collider Physics Summer School is targeted particularly at young postdocs in exp...
7. Department of Energy assessment of the Large Hadron Collider
International Nuclear Information System (INIS)
1996-06-01
This report summarizes the conclusions of the committee that assessed the cost estimate for the Large Hadron Collider (LHC). This proton-proton collider will be built at CERN, the European Laboratory for Particle Physics near Geneva, Switzerland. The committee found the accelerator-project cost estimate of 2.3 billion in 1995 Swiss francs, or about $2 billion US, to be adequate and reasonable. The planned project completion date of 2005 also appears achievable, assuming the resources are available when needed. The cost estimate was made using established European accounting procedures. In particular, the cost estimate does not include R and D, prototyping and testing, spare parts, and most of the engineering labor. Also excluded are costs for decommissioning the Large Electron-Positron collider (LEP) that now occupies the tunnel, modifications to the injector system, the experimental areas, preoperations costs, and CERN manpower. All these items are assumed by CERN to be included in the normal annual operations budget rather than the construction budget. Finally, contingency is built into the base estimate, in contrast to Department of Energy (DOE) estimates that explicitly identify contingency. The committee's charge, given by Dr. James F. Decker, Deputy Director of the DOE Office of Energy Research, was to understand the basis for the LHC cost estimate, identify uncertainties, and judge the overall validity of the estimate, proposed schedule, and related issues. The committee met at CERN April 22-26, 1996. The assessment was based on the October 1995 LHC Conceptual Design Report or "Yellow Book," cost estimates and formal presentations made by the CERN staff, site inspection, detailed discussions with LHC technical experts, and the committee members' considerable experience.
8. First Considerations on Beam Optics and Lattice Design for the Future Hadron-Hadron Collider FCC
CERN Document Server
Alemany Fernandez, R
2014-01-01
The present document explains the steps carried out to make the first design of the Future Hadron-Hadron Collider (FCC-hh), following the baseline parameters that can be found in [1]. Two lattice layouts are presented: a ring collider with 12 arcs and 12 straight sections, four of them designed as interaction points, and a racetrack-like collider with two arcs and two straight sections, each of them equipped with two interaction points. The lattice design presented in the paper is modular, allowing the same modules to be used for both layouts. The present document also addresses the beta-star reach at the interaction points.

9. Effective models of new physics at the Large Hadron Collider
International Nuclear Information System (INIS)
Llodra-Perez, J.
2011-07-01
With the start of the Large Hadron Collider runs in 2010, particle physicists will soon be able to gain a better understanding of electroweak symmetry breaking. They might also answer many experimental and theoretical open questions raised by the Standard Model. Building on this very favorable situation, we first present in this thesis a highly model-independent parametrization in order to characterize the new physics effects on the mechanisms of production and decay of the Higgs boson. This original tool will be easily and directly usable in the data analyses of CMS and ATLAS, the large general-purpose experiments at the LHC. It will indeed help to exclude or validate significantly some new theories beyond the Standard Model. In another approach, based on model building, we considered a scenario of new physics where the Standard Model fields can propagate in a flat six-dimensional space. The new spatial extra dimensions will be compactified on a Real Projective Plane.
This orbifold is the unique six-dimensional geometry which possesses chiral fermions and a natural Dark Matter candidate. The scalar photon, which is the lightest particle of the first Kaluza-Klein tier, is stabilized by a symmetry relic of the six-dimensional Lorentz invariance. Using the current constraints from cosmological observations and our first analytical calculation, we derived a characteristic mass range around a few hundred GeV for the Kaluza-Klein scalar photon. Therefore the new states of our Universal Extra-Dimension model are light enough to be produced through clear signatures at the Large Hadron Collider. We therefore used a more sophisticated analysis of the particle mass spectrum and couplings, including radiative corrections at one loop, in order to establish our first predictions and constraints on the expected LHC phenomenology. (author)

10. The Large Hadron Collider: unraveling the mysteries of the universe
CERN Document Server
Beech, Martin
2010-01-01
The Large Hadron Collider (LHC) is the largest engineering project ever undertaken, and one of the most expensive. Why are physicists around the world so excited about it? What secrets of the universe does this gargantuan piece of machinery hope to reveal? What risks are there in operating it? Could the exotic particles produced in the collisions between subatomic particles, including tiny black holes that should wink into and out of existence, be a threat not only to humankind but to the planet itself? In this thorough and engaging review of cutting-edge physics and cosmology, you will learn why the collider was built and how it works. You will find out what scientists are hoping to find out and what current aspects of the Standard Model might need to be revised. You will even learn about the quest to identify so-called dark matter and dark energy, which many now feel make up most of what's out there. This is a wild ride into some very unfamiliar and strange territory, but it is well worth your t...
11. TOP AND HIGGS PHYSICS AT THE HADRON COLLIDERS
Energy Technology Data Exchange (ETDEWEB)
Jabeen, Shabnam
2013-10-20
This review summarizes the recent results for top quark and Higgs boson measurements from experiments at the Tevatron, a proton-antiproton collider at a center-of-mass energy of √s = 1.96 TeV, and the Large Hadron Collider, a proton-proton collider at a center-of-mass energy of √s = 7 TeV. These results include the discovery of a Higgs-like boson and measurements of its various properties, and measurements in the top quark sector, e.g. the top quark mass, spin, charge asymmetry and production of single top quarks.

12. The future of the Large Hadron Collider and CERN.
Science.gov (United States)
Heuer, Rolf-Dieter
2012-02-28
This paper presents the Large Hadron Collider (LHC) and its current scientific programme and outlines options for high-energy colliders at the energy frontier for the years to come. The immediate plans include the exploitation of the LHC at its design luminosity and energy, as well as upgrades to the LHC and its injectors. This may be followed by a linear electron-positron collider, based on the technology being developed by the Compact Linear Collider and the International Linear Collider collaborations, or by a high-energy electron-proton machine. This contribution describes the past, present and future directions, all of which have a unique value to add to experimental particle physics, and concludes by outlining key messages for the way forward.

13. Model independent spin determination at hadron colliders
International Nuclear Information System (INIS)
Edelhaeuser, Lisa
2012-01-01
By the end of the year 2011, both the CMS and ATLAS experiments at the Large Hadron Collider have recorded around 5 inverse femtobarns of data at an energy of 7 TeV. There are only vague hints from the already analysed data towards new physics at the TeV scale.
However, one knows that around this scale new physics should show up, so that theoretical issues of the standard model of particle physics can be cured. During the last decades, extensions to the standard model that are supposed to solve its problems have been constructed, and the corresponding phenomenology has been worked out. As soon as new physics is discovered, one has to deal with the problem of determining the nature of the underlying model. A first hint is of course given by the mass spectrum and quantum numbers such as the electric and colour charges of the new particles. However, there are two popular model classes, supersymmetric models and extra-dimensional models, which can exhibit almost equal properties in the accessible energy range. Both introduce partners to the standard model particles with the same charges, and thus one needs an extended discrimination method. From the origin of these partners arises a relevant difference: the partners constructed in extra-dimensional models have the same spin as their standard model partners, while in supersymmetry they differ by spin 1/2. These different spins have an impact on the phenomenology of the two models. For example, one can exploit the fact that the total cross sections are affected, but this requires a very good knowledge of the couplings and masses involved. Another approach uses angular distributions depending on the particle spins. A prevailing method based on this idea uses the invariant mass distribution of the visible particles in decay chains. One can relate these distributions to the spin of the particle mediating the decay, since it reflects itself in the highest power of the invariant mass s_ff of the adjacent particles. In this thesis we ...
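As a toy illustration of the method summarized in the abstract above, competing spin hypotheses can be modelled as distributions dN/dx ∝ x^n in a normalized invariant-mass variable x, with the exponent n (the "highest power") recovered from pseudo-data by maximum likelihood. The power-law family, sample size, and seed below are illustrative assumptions, not the actual matrix elements used in the thesis:

```python
# Toy sketch (not the thesis's analysis): competing spin hypotheses are
# modelled as dN/dx ∝ x^n on (0, 1], where x stands for a normalized
# invariant mass of adjacent decay products and n encodes the hypothesis.
# We generate pseudo-data for one hypothesis and recover n by maximum
# likelihood; hypotheses whose leading powers differ by one are then
# separated by comparing the fitted exponent.
import math
import random

def sample_power_law(n, size, rng):
    """Draw from the pdf f(x) = (n+1) x^n on (0, 1] via inverse-CDF sampling."""
    return [rng.random() ** (1.0 / (n + 1)) for _ in range(size)]

def mle_exponent(xs):
    """Maximum-likelihood estimate of n for f(x) = (n+1) x^n.

    From d/dn [N ln(n+1) + n sum(ln x_i)] = 0 one gets
    n_hat = -N / sum(ln x_i) - 1.
    """
    return -len(xs) / sum(math.log(x) for x in xs) - 1.0

rng = random.Random(12345)
data = sample_power_law(2, 50_000, rng)   # pseudo-data from the "n = 2" hypothesis
n_hat = mle_exponent(data)
print(f"estimated exponent: {n_hat:.2f}")  # close to the injected value 2
```

In this toy setup the statistical error on n scales like (n+1)/√N, so with 50,000 events the estimate pins the exponent far more tightly than the unit spacing between hypotheses; the real analysis, of course, must fold in detector effects and the actual decay-chain matrix elements.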
16. The 20th Hadron Collider Physics Symposium in Evian
CERN Multimedia
Ludwik Dobrzynski and Emmanuel Tsesmelis
The 20th Hadron Collider Physics Symposium took place in Evian from 16 to 20 November 2009. The Hadron Collider Physics Symposium series has been a major forum for presentations of physics at the Tevatron over the past two decades. The merger of the former Topical Conference on Hadron Collider Physics with the LHC Symposium in 2005 brought together the Tevatron and LHC communities in a single forum. The 20th Hadron Collider Physics Symposium took place in Evian, on the shores of Lake Geneva, from 16-20 November 2009, some 17 years after the historic ECFA-CERN Evian meeting in March 1992, when Expressions of Interest for LHC detectors were presented for the first time. The 2009 event was organized jointly by CERN and the French high-energy physics community (CNRS-IN2P3 and CEA-IRFU). More than 170 people registered for this symposium. This year's symposium was held at an important time for both the Tevatron and the LHC. It stimulated the completion of analyses for a significant Tevatron data sam...

17. Parton Distributions at a 100 TeV Hadron Collider
NARCIS (Netherlands)
Rojo, Juan
2016-01-01
The determination of the parton distribution functions (PDFs) of the proton will be an essential input for the physics program of a future 100 TeV hadron collider. The unprecedented center-of-mass energy will require knowledge of PDFs in currently unexplored kinematical regions such as the ultra...
18. CERN to start Large Hadron Collider November 2007
CERN Multimedia
2006-01-01
"The Large Hadron Collider (LHC) is expected to provide its first collisions in November 2007, CERN has announced. A two-month run at 0.9 TeV is planned for 2007 to test the accelerating and detecting equipment, and a full power run at 14 TeV is expected in the spring of 2008."

19. Search for invisibly decaying Higgs boson at Large Hadron Collider
Indian Academy of Sciences (India)
In several scenarios of Beyond Standard Model physics, the invisible decay mode of the Higgs boson is an interesting possibility. The search strategy for an invisible Higgs boson at the Large Hadron Collider (LHC), using the weak boson fusion process, has been studied in detail, by taking into account all possible ...

20. Charged Hadron Multiplicity Distribution at Relativistic Heavy-Ion Colliders
Directory of Open Access Journals (Sweden)
Ashwini Kumar
2013-01-01
The present paper reviews facts and problems concerning charged hadron production in high energy collisions. Main emphasis is laid on the qualitative and quantitative description of general characteristics and properties observed for charged hadrons produced in such high energy collisions. Various features of available experimental data, for example the variations of charged hadron multiplicity and pseudorapidity density with the mass number of colliding nuclei, center-of-mass energies, and the collision centrality obtained from heavy-ion collider experiments, are interpreted in the context of various theoretical concepts and their implications. Finally, several important scaling features observed in the measurements, mainly at the RHIC and LHC experiments, are highlighted in the view of these models to draw some insight regarding the particle production mechanism in heavy-ion collisions.

1. Hadron Collider Physics with Real Time Trajectory Reconstruction
Energy Technology Data Exchange (ETDEWEB)
Annovi, Alberto [Univ.
of Pisa (Italy)]
2005-01-01
During the last century, experiments with accelerators have been extensively used to improve our understanding of matter. They are now the most common tool used to search for new phenomena in high energy physics. In the process of probing smaller distances and searching for new particles, the center of mass energy has been steadily increased. The need for higher center of mass energy made hadron colliders the natural tool for discovery physics. Hadron colliders have a major drawback with respect to electron-positron colliders. As shown in fig. 1, the total cross section is several orders of magnitude larger than the cross section of interesting processes such as top or Higgs production. This means that, in order to observe interesting processes, it is necessary to have collisions at very high rates, and it becomes necessary to reject on-line most of the "non-interesting" events. In this thesis I have described the wide range of SVT applications within CDF.

2. Jet shapes in hadron and electron colliders
International Nuclear Information System (INIS)
Wainer, N.
1993-05-01
High energy jets are observed both in hadronic machines like the Tevatron and electron machines like LEP. These jets have an extended structure in phase space which can be measured. This distribution is usually called the jet shape. There is an intrinsic relation between jet variables, like energy and direction, the jet algorithm used, and the jet shape. Jet shape differences can be used to separate quark and gluon jets.

3. Large Hadron Collider: The Discovery Machine
CERN Multimedia
2008-01-01
The mammoth machine, after a nine-year construction period, is scheduled (touch wood) to begin producing its beams of particles later this year.
The commissioning process is planned to proceed from one beam to two beams to colliding beams; from lower energies to the terascale; from weaker test intensities to stronger ones suitable for producing data at useful rates but more difficult to control.

4. The Compact Muon Solenoid Experiment at the Large Hadron Collider
Directory of Open Access Journals (Sweden)
David Delepine
2012-02-01
The Compact Muon Solenoid experiment at the CERN Large Hadron Collider will study proton-proton collisions at unprecedented energies and luminosities. In this article we provide first a brief general introduction to particle physics. We then explain what CERN is. Then we describe the Large Hadron Collider at CERN, the most powerful particle accelerator ever built. Finally we describe the Compact Muon Solenoid experiment, its physics goals, construction details, and current status.

5. Physics Opportunities at the Large Hadron Collider
International Nuclear Information System (INIS)
Roeck, Albert de
2006-01-01
In about two years' time the LHC is scheduled to deliver its first pp collisions at a centre of mass energy of 14 TeV. The LHC is expected to open up the discovery of new physics at the TeV scale, and give the final answer on the Standard Model Higgs. The LHC will however also be a tool for precision physics. Furthermore, the LHC is also a pA and AA collider.
This report summarizes some of the physics opportunities of the LHC.

6. CERN-Fermilab Hadron Collider Physics Summer School
CERN Multimedia
2007-01-01
Applications are now open for the 2nd CERN-Fermilab Hadron Collider Physics Summer School, which will take place at CERN from 6 to 15 June 2007. The school web site is http://cern.ch/hcpss with links to the academic program and application procedure. The application deadline is 9 March 2007. The results of the selection process will be announced shortly thereafter. The goal of the CERN-Fermilab Hadron Collider Physics Summer Schools is to offer students and young researchers in high energy physics a concentrated syllabus on the theory and experimental challenges of hadron collider physics. The first school in the series, held last summer at Fermilab, covered extensively the physics at the Tevatron collider experiments. The second school, to be held at CERN, will focus on the technology and physics of the LHC experiments. Emphasis will be given to the first years of data-taking at the LHC and to the discovery potential of the programme. The series of lectures will be supported by in-depth discussion sess...

7. Physics at the Large Hadron Collider
CERN Document Server
Mukhopadhyaya, Biswarup; Raychaudhari, Amitava
2009-01-01
In an epoch when particle physics is awaiting a major step forward, the Large Hadron Collider (LHC) at CERN, Geneva will soon be operational. It will collide a beam of high energy protons with another similar beam circulating in the same 27 km tunnel but in the opposite direction, resulting in the production of many elementary particles, some never created in the laboratory before. It is widely expected that the LHC will discover the Higgs boson, the particle which supposedly lends masses to all other fundamental particles. In addition, the question as to whether there is some new law of physics at such high energy is likely to be answered through this experiment.
The present volume contains a collection of articles written by international experts, both theoreticians and experimentalists, from India and abroad, which aims to acquaint a non-specialist with some basic issues related to the LHC. At the same time, it is expected to be a useful, rudimentary companion of introductory exposition and technical expert...

8. Superconductive technologies for the Large Hadron Collider at CERN
CERN Document Server
Rossi, L
2000-01-01
The Large Hadron Collider (LHC) project is the largest plant based on superconductivity and cryogenics: 27 km of tunnel filled with superconducting magnets and other equipment that will be kept at 1.9 K. The dipole magnets have to generate a minimum magnetic field of 8.3 T to allow collisions of proton beams at an energy of 14 TeV in the centre of mass. The construction of the LHC started in 1997 at CERN in Geneva and required 10 years of research and development on fine-filament NbTi superconducting wires and cables, on magnet technology and on He-II refrigerators. In particular, the project needs the production of about 1000 tons of high-homogeneity NbTi with current densities of more than 2000 A mm^-2 at 9 T and 1.9 K, with tight control also of all other cable properties such as magnetization, interstrand resistance and copper resistivity. The paper describes the main dipole magnets and reviews the most significant steps in the research and development, focusing on the issues related to the conductor, to...

9. The Large Hadron Collider - Expectations and Reality
International Nuclear Information System (INIS)
Litov, Leandar
2010-01-01
The Large Hadron Collider (LHC) is the biggest particle accelerator in the world, designed to accelerate protons and heavy ions to extremely high energies.
The four detector complexes installed around the beam crossing points are expected to shed light on some of the most fundamental questions about our Universe ever asked: what are the fundamental constituents of matter, what are the forces controlling their behavior, and what is the structure of space-time? In November, the LHC will be restarted and the detector complexes are expected to commence taking the first collision data. The physical motivation for the LHC experimental program and some open questions of the Standard Model of strong and electroweak interactions (SM) are discussed. Special attention is paid to observation of signatures for physics beyond the SM, and the discovery potential of the LHC experiments is commented on. One of the two general-purpose detector complexes (CMS, the Compact Muon Solenoid) is described briefly.

10. Monotop phenomenology at the Large Hadron Collider
CERN Document Server
Agram, Jean-Laurent; Buttignol, Michael; Conte, Eric; Fuks, Benjamin
2014-01-01
We investigate new physics scenarios where systems comprised of a single top quark accompanied by missing transverse energy, dubbed monotops, can be produced at the LHC. Following a simplified model approach, we describe all possible monotop production modes via an effective theory and estimate the sensitivity of the LHC, assuming 20 fb^-1 of collisions at a center-of-mass energy of 8 TeV, to the observation of a monotop state. Considering both leptonic and hadronic top quark decays, we show that large fractions of the parameter space are reachable and that new physics particles with masses ranging up to 1.5 TeV can leave hints within the 2012 LHC dataset, assuming moderate new physics coupling strengths.

11. Tracking study of hadron collider boosters
Energy Technology Data Exchange (ETDEWEB)
Machida, S.; Bourianoff, G.; Huang, Y.; Mahale, N.
1992-07-01
A simulation code SIMPSONS (previously called 6D-TEASE T) for single- and multi-particle tracking has been developed for proton synchrotrons. The 6D phase space coordinates are calculated at each time step, including acceleration with an arbitrary ramping curve by integration of the rf phase. Space-charge effects are modelled by means of the Particle In Cell (PIC) method. We observed the transverse emittance growth around the injection energy of the Low Energy Booster (LEB) of the Superconducting Super Collider (SSC) with and without second harmonic rf cavities, which reduce peak line density. We also employed the code to see the possible transverse emittance deterioration around the transition energy in the Medium Energy Booster (MEB) and to estimate the emittance dilution due to an injection error of the MEB.

12. QCD threshold corrections for gluino pair production at hadron colliders
Energy Technology Data Exchange (ETDEWEB)
Langenfeld, Ulrich [Wuerzburg Univ. (Germany)]; Moch, Sven-Olaf; Pfoh, Torsten [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]
2012-11-15
We present the complete threshold-enhanced predictions in QCD for the total cross section of gluino pair production at hadron colliders at next-to-next-to-leading order. Thanks to the computation of the required one-loop hard matching coefficients, our results are accurate to the next-to-next-to-leading logarithm. In a brief phenomenological study we provide predictions for the total hadronic cross sections at the LHC, and we discuss the uncertainties arising from scale variations and the parton distribution functions.

13. Superconducting Super Collider project
International Nuclear Information System (INIS)
Perl, M.L.
1986-04-01
The scientific need for the Superconducting Super Collider (SSC) is outlined, along with the history of the development of the SSC concept. A brief technical description is given of each of the main points of the SSC conceptual design.
The construction cost and construction schedule are discussed, followed by issues associated with the realization of the SSC. 8 refs., 3 figs., 3 tabs.

14. An investigation of triply heavy baryon production at hadron colliders
CERN Document Server
Gomshi Nobary, M A
2006-01-01
The triply heavy baryons have a rather diverse mass range. While some of them possess considerable production rates at existing facilities, others need to be produced at future high energy colliders. Here we study the direct fragmentation production of the Ω_ccc and Ω_bbb baryons as the prototypes of triply heavy baryons at hadron colliders with different √s. We present and compare the transverse momentum distributions of the differential cross sections, distributions of total cross sections and the integrated total cross sections of these states at the RHIC, the Tevatron Run II and the CERN LHC.

15. An investigation of triply heavy baryon production at hadron colliders
Energy Technology Data Exchange (ETDEWEB)
Gomshi Nobary, M.A. [Department of Physics, Faculty of Science, Razi University, Kermanshah (Iran, Islamic Republic of) and Center for Theoretical Physics and Mathematics, AEOI, Roosbeh Building, PO Box 11365-8486, Tehran (Iran, Islamic Republic of)]. E-mail: [email protected]; Sepahvand, R. [Department of Physics, Faculty of Science, Razi University, Kermanshah (Iran, Islamic Republic of)]
2006-05-01
The triply heavy baryons have a rather diverse mass range. While some of them possess considerable production rates at existing facilities, others need to be produced at future high energy colliders. Here we study the direct fragmentation production of the Ω_ccc and Ω_bbb baryons as the prototypes of triply heavy baryons at hadron colliders with different √s.
We present and compare the transverse momentum distributions of the differential cross sections, the p_T^min distributions of the total cross sections, and the integrated total cross sections of these states at the RHIC, the Tevatron Run II and the CERN LHC.

16. Supersymmetric Higgs pair discovery prospects at hadron colliders
CERN Document Server
Belyaev, A; Éboli, Oscar J P; Mizukoshi, J K; Novaes, S F
2000-01-01
We study the potential of hadron colliders in the search for the pair production of neutral Higgs bosons in the framework of the Minimal Supersymmetric Standard Model. Using analytical expressions for the relevant amplitudes, we perform a detailed signal and background analysis, working out efficient kinematical cuts for the extraction of the signal. The important role of squark loop contributions to the signal is emphasised. If the signal is sufficiently enhanced by these contributions, it could even be observable at the next run of the upgraded Tevatron collider in the near future. At the LHC the pair production of light and heavy Higgs bosons might be detectable simultaneously.

17. Beyond the Large Hadron Collider: A First Look at Cryogenics for CERN Future Circular Colliders
Science.gov (United States)
Lebrun, Philippe; Tavian, Laurent
Following the first experimental discoveries at the Large Hadron Collider (LHC) and the recent update of the European strategy in particle physics, CERN has undertaken an international study of possible future circular colliders beyond the LHC. The study, conducted with the collaborative participation of interested institutes world-wide, considers several options for very high energy hadron-hadron, electron-positron and hadron-electron colliders to be installed in a quasi-circular underground tunnel in the Geneva basin, with a circumference of 80 km to 100 km. All these machines would make intensive use of advanced superconducting devices, i.e.
high-field bending and focusing magnets and/or accelerating RF cavities, thus requiring large helium cryogenic systems operating at 4.5 K or below. Based on preliminary sets of parameters and layouts for the particle colliders under study, we discuss the main challenges of their cryogenic systems and present first estimates of the cryogenic refrigeration capacities required, with emphasis on the qualitative and quantitative steps to be accomplished with respect to the present state-of-the-art. 18. Large hadron collider in the LEP tunnel. Proceedings. Vol. 2 International Nuclear Information System (INIS) 1984-01-01 A Workshop, jointly organized by ECFA and CERN, took place at Lausanne and at CERN in March 1984 to study various options for a pp (or p anti-p) collider which might be installed at a later date alongside LEP in the LEP tunnel. Following the exploration of e+e- physics up to the highest energy now foreseeable, this would open up the opportunity to investigate hadron collisions in the new energy range of 10 to 20 TeV in the centre of mass. These proceedings put together the documents prepared in connection with this Workshop. They cover possible options for a Large Hadron Collider (LHC) in the LEP tunnel, the physics case as it stands at present, and studies of experimental possibilities in this energy range with luminosities as now considered. See hints under the relevant topics. (orig./HSI) 19. Large hadron collider in the LEP tunnel. Proceedings. Vol. 1 International Nuclear Information System (INIS) 1984-01-01 A Workshop, jointly organized by ECFA and CERN, took place at Lausanne and at CERN in March 1984 to study various options for a pp (or p anti-p) collider which might be installed at a later date alongside LEP in the LEP tunnel. Following the exploration of e+e- physics up to the highest energy now foreseeable, this would open up the opportunity to investigate hadron collisions in the new energy range of 10 to 20 TeV in the centre of mass.
These proceedings put together the documents prepared in connection with this Workshop. They cover possible options for a Large Hadron Collider (LHC) in the LEP tunnel, the physics case as it stands at present, and studies of experimental possibilities in this energy range with luminosities as now considered. See hints under the relevant topics. (orig.) 20. Higgs Boson Searches at Hadron Colliders (1/4) CERN Multimedia CERN. Geneva 2010-01-01 In these Academic Training lectures, the phenomenology of Higgs bosons and search strategies at hadron colliders are discussed. After a brief introduction on Higgs bosons in the Standard Model and a discussion of present direct and indirect constraints on its mass, the status of the theoretical cross section calculations for Higgs boson production at hadron colliders is reviewed. In the following lectures important experimental issues relevant for Higgs boson searches (trigger, measurements of leptons, jets and missing transverse energy) are presented. This is followed by a detailed discussion of the discovery potential for the Standard Model Higgs boson for both the Tevatron and the LHC experiments. In addition, various scenarios beyond the Standard Model, primarily the MSSM, are considered. Finally, the potential and strategies to measure Higgs boson parameters and the investigation of alternative symmetry breaking scenarios are addressed. 1. The Initial Stages of Colliding Nuclei and Hadrons International Nuclear Information System (INIS) Tribedy, Prithwish 2017-01-01 The final day of the Hot Quarks 2016 conference was focused on the discussions of the initial stages of colliding nuclei and hadrons. In this conference proceedings we give a brief overview of a few selective topics discussed at the conference that include latest developments in the theoretical description of the initial state towards understanding a number of recent experimental results from RHIC and LHC. (paper) 2.
Next to leading order three jet production at hadron colliders International Nuclear Information System (INIS) Kilgore, W. 1997-01-01 Results from a next-to-leading order event generator of purely gluonic jet production are presented. This calculation is the first step in the construction of a full next-to-leading order calculation of three jet production at hadron colliders. Several jet algorithms commonly used in experiments are implemented and their numerical stability is investigated. A numerical instability is found in the iterative cone algorithm, which makes it inappropriate for use in fixed order calculations beyond leading order. (author) 3. Higgs-photon associated production at hadron colliders International Nuclear Information System (INIS) Abbasabadi, A.; Repko, W.W. 1997-01-01 The authors present cross sections for the reactions p anti-p → Hγ and pp → Hγ arising from the subprocess q anti-q → Hγ. The calculation includes the complete one-loop contribution from all light quarks and is the main source of Higgs-photon associated production in hadron colliders. At Tevatron energies, the cross section is typically 0.1 fb or less, while at LHC energies it can exceed 1.0 fb. 4. Theory Overview of Electroweak Physics at Hadron Colliders Energy Technology Data Exchange (ETDEWEB) Campbell, John M. [Fermilab 2016-09-03 This contribution summarizes some of the important theoretical progress that has been made in the arena of electroweak physics at hadron colliders. The focus is on developments that have sharpened theoretical predictions for final states produced through electroweak processes. Special attention is paid to new results that have been presented in the last year, since LHCP2015, as well as on key issues for future measurements at the LHC. 5. The Large Hadron Collider: lessons learned and summary CERN Document Server Llewellyn Smith, Chris 2012-01-01 The Large Hadron Collider (LHC) machine and detectors are now working superbly.
There are good reasons to hope and expect that the new domain that the LHC is already exploring, operating at 7 TeV with a luminosity of 10^33 cm^-2 s^-1, or the much bigger domain that will be opened up as the luminosity increases to over 10^34 and the energy to 14 TeV, will provide clues that will usher in a new era in particle physics. The arguments that new phenomena will be found in the energy range that will be explored by the LHC have become stronger since they were first seriously analysed in 1984, although their essence has changed little. I will review the evolution of these arguments in a historical context, the development of the LHC project since 1984, and the outlook in the light of reports on the performance of the machine and detectors presented at this meeting. 6. 3rd CERN-Fermilab Hadron Collider Physics Summer School CERN Multimedia 2008-01-01 August 12-22, 2008, Fermilab The school web site is http://cern.ch/hcpss with links to the academic programme and the application procedure. The APPLICATION DEADLINE IS 29 FEBRUARY 2008. The goal of the CERN-Fermilab Hadron Collider Physics Summer Schools is to offer students and young researchers in high-energy physics a concentrated syllabus on the theory and experimental challenges of hadron collider physics. The third session of the summer school will focus on exposing young post-docs and advanced graduate students to broader theories and real data beyond what they’ve learned at their home institutions. Experts from across the globe will lecture on the theoretical and experimental foundations of hadron collider physics, host parallel discussion sessions and answer students’ questions. This year’s school will also have a greater focus on physics beyond the Standard Model, as well as more time for questions at the end of each lecture. The 2008 School will be held at ... 7.
2nd CERN-Fermilab Hadron Collider Physics Summer School CERN Document Server 2007-01-01 June 6-15, 2007, CERN The school web site is http://cern.ch/hcpss with links to the academic programme and the application procedure. The APPLICATION DEADLINE IS 9 MARCH 2007 The results of the selection process will be announced shortly thereafter. The goal of the CERN-Fermilab Hadron Collider Physics Summer Schools is to offer students and young researchers in high energy physics a concentrated syllabus on the theory and experimental challenges of hadron collider physics. The first school in the series, held last summer at Fermilab, extensively covered the physics at the Tevatron collider experiments. The second school, to be held at CERN, will focus on the technology and physics of the LHC experiments. Emphasis will be placed on the first years of data-taking at the LHC and on the discovery potential of the programme. The series of lectures will be supported by in-depth discussion sessions and will include the theory and phenomenology of hadron collisions, discovery physics topics, detector and analysis t... 8. Hunting electroweakinos at future hadron colliders and direct detection experiments Energy Technology Data Exchange (ETDEWEB) Cortona, Giovanni Grilli di [SISSA - International School for Advanced Studies,Via Bonomea 265, I-34136 Trieste (Italy); INFN - Sezione di Trieste,via Valerio 2, I-34127 Trieste (Italy) 2015-05-07 We analyse the mass reach for electroweakinos at future hadron colliders and their interplay with direct detection experiments. Motivated by the LHC data, we focus on split supersymmetry models with different electroweakino spectra. We find for example that a 100 TeV collider may explore Winos up to ∼7 TeV in low scale gauge mediation models or thermal Wino dark matter around 3 TeV in models of anomaly mediation with long-lived Winos. 
We show moreover how collider searches and direct detection experiments have the potential to cover a large part of the parameter space even in scenarios where the lightest neutralino does not contribute to the whole dark matter relic density. 9. The Large Hadron Collider in the LEP tunnel International Nuclear Information System (INIS) Brianti, G.; Huebner, K. 1987-01-01 The status of the studies for the CERN Large Hadron Collider (LHC) is described. This collider will provide proton-proton collisions with 16 TeV centre-of-mass energy and a luminosity exceeding 10^33 cm^-2 s^-1 per interaction point. It can be installed in the tunnel of the Large Electron-Positron Storage Ring (LEP) above the LEP elements. It will use superconducting magnets of a novel, compact design, having two horizontally separated channels for the two counter-rotating bunched proton beams, which can collide in a maximum of seven interaction points. Collisions between protons of the LHC and electrons of LEP are also possible with a centre-of-mass energy of up to 1.8 TeV and a luminosity of up to 2 × 10^32 cm^-2 s^-1. (orig.) 10. Online track reconstruction at hadron colliders International Nuclear Information System (INIS) Amerio, Silvia; Bettini, Marco; Nicoletto, Marino; Crescioli, Francesco; Bucciantonio, Martina; DELL'ORSO, Mauro; Piendibene, Marco; VOLPI, Guido; Annovi, Alberto; Catastini, Pierluigi; Giannetti, Paola; Lucchesi, Donatella 2010-01-01 Real time event reconstruction plays a fundamental role in High Energy Physics experiments. Reducing the rate of data to be saved on tape from millions to hundreds per second is critical. In order to increase the purity of the collected samples, rate reduction has to be coupled with the capability to simultaneously perform a first selection of the most interesting events. A fast and efficient online track reconstruction is important to effectively trigger on leptons and/or displaced tracks from b-quark decays.
This talk will be an overview of online tracking techniques in different HEP environments: we will show how H1 experiment at HERA faced the challenges of online track reconstruction implementing pattern matching and track linking algorithms on CAMs and FPGAs in the Fast Track Processor (FTT). The pattern recognition technique is also at the basis of the Silicon Vertex Trigger (SVT) at the CDF experiment at Tevatron: coupled to a very fast fitting phase, SVT allows to trigger on displaced tracks, thus greatly increasing the efficiency for the hadronic B decay modes. A recent upgrade of the SVT track fitter, the Giga-fitter, can perform more than 1 fit/ns and further improves the CDF online trigger capabilities at high luminosity. At SLHC, where luminosities will be 2 orders of magnitude greater than Tevatron, online tracking will be much more challenging: we will describe CMS future plans for a Level-1 track trigger and the Fast Tracker (FTK) processor at the ATLAS experiment, based on the Giga-fitter architecture and designed to provide high quality tracks reconstructed over the entire detector in time for a Level-2 trigger decision. 11. Japan Hadron Facility (JHF) project International Nuclear Information System (INIS) Nagamiya, S. 1999-01-01 The Japan Hadron Facility (JHF) is the next accelerator project proposed at KEK to promote exciting sciences by utilising high-intensity proton beams.
The project is characterised by three unique features: hadronic beams of the world's highest intensity; a variety of beams from one accelerator complex; frontier sciences to cover a broad research area including nuclear physics, particle physics, material sciences and life sciences by utilising a common accelerator complex. (author) 12. Physics and Analysis at a Hadron Collider - An Introduction (1/3) CERN Multimedia CERN. Geneva 2010-01-01 This is the first lecture of three which together discuss the physics of hadron colliders with an emphasis on experimental techniques used for data analysis. This first lecture provides a brief introduction to hadron collider physics and collider detector experiments as well as offers some analysis guidelines. The lectures are aimed at graduate students. 13. Design Study for a Staged Very Large Hadron Collider Energy Technology Data Exchange (ETDEWEB) Chao, Alex W. 2002-02-27 Particle physics makes its greatest advances with experiments at the highest energy. The only sure way to advance to a higher-energy regime is through hadron colliders--the Tevatron, the LHC, and then, beyond that, a Very Large Hadron Collider. At Snowmass-1996 [1], investigators explored the best way to build a VLHC, which they defined as a 100 TeV collider. The goals in this study are different. The current study seeks to identify the best and cheapest way to arrive at frontier-energy physics, while simultaneously starting down a path that will eventually lead to the highest-energy collisions technologically possible in any accelerator using presently conceivable technology. This study takes the first steps toward understanding the accelerator physics issues, the technological possibilities and the approximate cost of a particular model of the VLHC. It describes a staged approach that offers exciting physics at each stage for the least cost, and finally reaches an energy one-hundred times the highest energy currently achievable. 14. 
A high granularity plastic scintillator tile hadronic calorimeter with APD readout for a linear collider detector Czech Academy of Sciences Publication Activity Database Andreev, V.; Cvach, Jaroslav; Danilov, M.; Devitsin, E.; Dodonov, V.; Eigen, G.; Garutti, E.; Gilitzky, Yu.; Groll, M.; Heuer, R.D.; Janata, Milan; Kacl, Ivan; Korbel, V.; Kozlov, V. Yu; Meyer, H.; Morgunov, V.; Němeček, Stanislav; Pöschl, R.; Polák, Ivo; Raspereza, A.; Reiche, S.; Rusinov, V.; Sefkow, F.; Smirnov, P.; Terkulov, A.; Valkár, Š.; Weichert, Jan; Zálešák, Jaroslav 2006-01-01 Roč. 564, - (2006), s. 144-154 ISSN 0168-9002 R&D Projects: GA MŠk(CZ) LC527; GA MŠk(CZ) 1P05LA259; GA ČR(CZ) GA202/05/0653 Institutional research plan: CEZ:AV0Z10100502 Keywords : hadronic calorimeter * plastic scintillator tile * APD readout * linear collider detector Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 1.185, year: 2006 15. Updates on the optics of the future hadron-hadron collider FCC-hh CERN Document Server AUTHOR|(CDS)2093721; Boutin, David Jean Henri; Dalena, Barbara; Holzer, Bernhard; Langner, Andy Sven; Schulte, Daniel 2017-01-01 The FCC-hh (Future Hadron-Hadron Circular Collider) is one of the three options considered for the next generation accelerator in high-energy physics as recommended by the European Strategy Group. The layout of FCC-hh has been optimized to a more compact design following recommendations from civil engineering aspects. The updates on the first order and second order optics of the ring will be shown for collisions at the required centre-of-mass energy of 100 TeV. Special emphasis is put on the dispersion suppressors and general beam cleaning sections as well as first considerations of injection and extraction sections. 16. Design considerations and expectations of a very large hadron collider International Nuclear Information System (INIS) Ruggiero, A.G. 
1996-01-01 The ELOISATRON Project is a proton-proton collider at very high energy and very large luminosity. The main goal is to determine the ultimate performance that is possible to achieve with reasonable extrapolation of the present accelerator technology. A complete study and design of the collider requires that several steps of investigations are undertaken. The authors count five such steps, as outlined in the report. 17. Large Hadron Collider (LHC) phenomenology, operational challenges and theoretical predictions CERN Document Server Gilles, Abelin R 2013-01-01 The Large Hadron Collider (LHC) is the highest-energy particle collider ever constructed and is considered "one of the great engineering milestones of mankind." It was built by the European Organization for Nuclear Research (CERN) from 1998 to 2008, with the aim of allowing physicists to test the predictions of different theories of particle physics and high-energy physics, and particularly prove or disprove the existence of the theorized Higgs boson and of the large family of new particles predicted by supersymmetric theories. In this book, the authors study the phenomenology, operational challenges and theoretical predictions of LHC. Topics discussed include neutral and charged black hole remnants at the LHC; the modified statistics approach for the thermodynamical model of multiparticle production; and astroparticle physics and cosmology in the LHC era. 18. Probing the $WW\gamma$ vertex at hadron colliders CERN Document Server Papavassiliou, J 1999-01-01 We present a new, model independent method for extracting bounds for the anomalous $\gamma WW$ couplings from hadron collider experiments. At the partonic level we introduce a set of three observables which are constructed from the unpolarized differential cross-section for the process $d\bar{u}\to W^{-}\gamma$ by appropriate convolution with a set of simple polynomials depending only on the center-of-mass angle.
One of these observables allows for the direct determination of one of the anomalous couplings by exploiting the presence of a radiation zero. The other two observables impose two sum rules on the remaining three anomalous couplings. The inclusion of the structure functions is discussed in detail for both $p\bar{p}$ and $pp$ colliders. We show that, whilst for $p\bar{p}$ experiments this can be accomplished straightforwardly, in the $pp$ case one has to resort to somewhat more elaborate techniques, such as the binning of events according to their longitudinal momenta. 19. Unveiling the top secrets with the Large Hadron Collider Science.gov (United States) Chierici, R. 2013-12-01 Top quark physics is one of the pillars of fundamental research in the field of high energy physics. It not only gives access to precision measurements for constraining the Standard Model of particles and interactions but also represents a privileged domain for new physics searches. This contribution summarizes the main results in top quark physics obtained with the two general-purpose detectors ATLAS and CMS during the first two years of operations of the Large Hadron Collider (LHC) at CERN. It covers the 2010 and 2011 data taking periods, where the LHC ran at a centre-of-mass energy of 7 TeV.
It covers the 2010 and 2011 data taking periods, where the LHC ran at a centre-of-mass energy of 7 TeV. (paper) 1. News Teaching: The epiSTEMe project: KS3 maths and science improvement Field trip: Pupils learn physics in a stately home Conference: ShowPhysics welcomes fun in Europe Student numbers: Physics numbers increase in UK Tournament: Physics tournament travels to Singapore Particle physics: Hadron Collider sets new record Astronomy: Take your classroom into space Forthcoming Events Science.gov (United States) 2010-05-01 Teaching: The epiSTEMe project: KS3 maths and science improvement Field trip: Pupils learn physics in a stately home Conference: ShowPhysics welcomes fun in Europe Student numbers: Physics numbers increase in UK Tournament: Physics tournament travels to Singapore Particle physics: Hadron Collider sets new record Astronomy: Take your classroom into space Forthcoming Events 2. Detector development for the High Luminosity Large Hadron Collider CERN Document Server AUTHOR|(INSPIRE)INSPIRE-00367854; Gößling, Claus To maximise the discovery potential of the Large Hadron Collider, it will be upgraded to the High Luminosity Large Hadron Collider in 2024. New detector challenges arise from the higher instantaneous luminosity and the higher particle flux. The new ATLAS Inner Tracker will replace the current tracking detector to be able to cope with these challenges. Many pixel detector technologies exist for particle tracking, but their suitability for the ATLAS Inner Tracker needs to be studied. Active high-voltage CMOS sensors, which are produced in industrialised processes, offer a fast readout and radiation tolerance. In this thesis the HV2FEI4v2 sensor, which is capacitively coupled to the ATLAS Pixel FE-I4 readout chip, is characterised for the usage in the outer layers of the ATLAS Inner Tracker. Key quantities of this prototype module are studied, such as the hit efficiency and the subpixel encoding. 
The early HV2FEI4v2 prototype shows promising results as a starting point for further module developments. Active CMO... 3. The higgsino-singlino world at the large hadron collider Energy Technology Data Exchange (ETDEWEB) Kim, Jong Soo [Universidad Autonoma de Madrid, Instituto de Fisica Teorica UAM/CSIC, Madrid (Spain); Ray, Tirtha Sankar [University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, Melbourne, VIC (Australia) 2015-02-01 We consider light higgsinos and singlinos in the next-to-minimal supersymmetric standard model at the large hadron collider. We assume that the singlino is the lightest supersymmetric particle and that the higgsino is the next-to-lightest supersymmetric particle with the remaining supersymmetric particles in the multi-TeV range. This scenario, which is motivated by the flavor and CP issues, provides a phenomenologically viable dark matter candidate and an improved electroweak fit consistent with the measured Higgs mass. Here, the higgsinos decay into an on (off)-shell gauge boson and the singlino. We consider the leptonic decay modes and the resulting signature is three isolated leptons and missing transverse energy, which is known as the trilepton signal. We simulate the signal and the Standard Model backgrounds and present the exclusion region in the higgsino-singlino mass plane at the large hadron collider at √(s) = 14 TeV for an integrated luminosity of 300 fb^-1. (orig.) 4.
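The exclusion regions quoted in the higgsino-singlino entry above come from a standard counting-experiment argument: expected signal and background yields are cross section × integrated luminosity × selection efficiency, and sensitivity is gauged with a significance measure. The sketch below illustrates only that logic; every cross section and efficiency in it is an invented placeholder, not a number from the paper:

```python
import math

def expected_events(xsec_fb, lumi_fb, efficiency):
    """Expected yield: cross section [fb] x integrated luminosity [fb^-1] x efficiency."""
    return xsec_fb * lumi_fb * efficiency

def significance(s, b):
    """Crude S/sqrt(B) significance; adequate for illustration when B is not tiny."""
    return s / math.sqrt(b)

lumi = 300.0  # fb^-1, the integrated luminosity quoted for the 14 TeV scenario

# Placeholder trilepton signal and (e.g. WZ-dominated) background assumptions:
s = expected_events(xsec_fb=0.5, lumi_fb=lumi, efficiency=0.10)  # ~15 signal events
b = expected_events(xsec_fb=5.0, lumi_fb=lumi, efficiency=0.02)  # ~30 background events

print(round(significance(s, b), 2))  # -> 2.74, i.e. near the edge of exclusion sensitivity
```

A point in the higgsino-singlino mass plane is excluded when its expected significance (computed properly, with systematic uncertainties and a likelihood-based test rather than S/√B) exceeds the chosen threshold.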
High luminosity electron-hadron collider eRHIC Energy Technology Data Exchange (ETDEWEB) Ptitsyn, V.; Aschenauer, E.; Bai, M.; Beebe-Wang, J.; Belomestnykh, S.; Ben-Zvi, I.; Blaskiewicz, M.; Calaga, R.; Chang, X.; Fedotov, A.; Gassner, D.; Hammons, L.; Hahn, H.; Hammons, L.; He, P.; Hao, Y.; Jackson, W.; Jain, A.; Johnson, E.C.; Kayran, D.; Kewisch, J.; Litvinenko, V.N.; Luo, Y.; Mahler, G.; McIntyre, G.; Meng, W.; Minty, M.; Parker, B.; Pikin, A.; Rao, T.; Roser, T.; Skaritka, J.; Sheehy, B.; Skaritka, J.; Tepikian, S.; Than, Y.; Trbojevic, D.; Tsoupas, N.; Tuozzolo, J.; Wang, G.; Webb, S.; Wu, Q.; Xu, W.; Pozdeyev, E.; Tsentalovich, E. 2011-03-28 We present the design of a future high-energy high-luminosity electron-hadron collider at RHIC called eRHIC. We plan on adding 20 (potentially 30) GeV energy recovery linacs to accelerate and to collide polarized and unpolarized electrons with hadrons in RHIC. The center-of-mass energy of eRHIC will range from 30 to 200 GeV. A luminosity exceeding 10^34 cm^-2 s^-1 can be achieved in eRHIC using the low-beta interaction region with a 10 mrad crab crossing. We report on the progress of important eRHIC R&D such as the high-current polarized electron source, the coherent electron cooling, ERL test facility and the compact magnets for recirculation passes. A natural staging scenario of step-by-step increases of the electron beam energy by building-up of eRHIC's SRF linacs is presented. 5. FCC-hh Hadron Collider - Parameter Scenarios and Staging Options CERN Document Server Benedikt, Michael; Schulte, Daniel; Zimmermann, F; Syphers, M J 2015-01-01 FCC-hh is a proposed future energy-frontier hadron collider, based on dipole magnets with a field around 16 T installed in a new tunnel with a circumference of about 100 km, which would provide proton collisions at a centre-of-mass energy of 100 TeV, as well as heavy-ion collisions at the equivalent energy.
The FCC-hh should deliver a high integrated proton-proton luminosity at the level of several 100 fb^-1 per year, or more. The challenges for operating FCC-hh with high beam current and at high luminosity include the heat load from synchrotron radiation in a cold environment, the radiation from collision debris around the interaction region, and machine protection. In this paper, starting from the FCC-hh design baseline parameters we explore different approaches for increasing the integrated luminosity, and discuss the impact of key individual parameters, such as the turnaround time. We also present some injector considerations and options for early hadron-collider operation. 6. Hadron collider tests of neutrino mass-generating mechanisms Science.gov (United States) Ruiz, Richard Efrain The Standard Model of particle physics (SM) is presently the best description of nature at small distances and high energies. However, with tiny but nonzero neutrino masses, a Higgs boson mass unstable under radiative corrections, and little guidance on understanding the hierarchy of fermion masses, the SM remains an unsatisfactory description of nature. Well-motivated scenarios that resolve these issues exist but also predict extended gauge (e.g., Left-Right Symmetric Models), scalar (e.g., Supersymmetry), and/or fermion sectors (e.g., Seesaw Models). Hence, discovering such new states would have far-reaching implications. After reviewing basic tenets of the SM and collider physics, several beyond the SM (BSM) scenarios that alleviate these shortcomings are investigated. Emphasis is placed on the production of heavy Majorana neutrinos at hadron colliders in the context of low-energy, effective theories that simultaneously explain the origin of neutrino masses and their smallness compared to other elementary fermions, the so-called Seesaw Mechanisms.
As probes of new physics, rare top quark decays to Higgs bosons in the context of the SM, the Types I and II Two Higgs Doublet Model (2HDM), and the semi-model independent framework of Effective Field Theory (EFT) have also been investigated. Observation prospects and discovery potentials of these models at current and future collider experiments are quantified. 7. Summary of the very large hadron collider physics and detector workshop International Nuclear Information System (INIS) Anderson, G.; Berger, M.; Brandt, A.; Eno, S. 1997-01-01 One of the options for an accelerator beyond the LHC is a hadron collider with higher energy. Work is going on to explore accelerator technologies that would make such a machine feasible. This workshop concentrated on the physics and detector issues associated with a hadron collider with an energy in the center of mass of the order of 100 to 200 TeV 8. Extra dimension searches at hadron colliders to next-to-leading ... Indian Academy of Sciences (India) The quantitative impact of NLO-QCD corrections for searches of large and warped extra dimensions at hadron colliders are investigated for the Drell-Yan process. The K-factor for various observables at hadron colliders are presented. Factorisation, renormalisation scale dependence and uncertainties due to various parton ... 9. Forward-central jet correlations at the Large Hadron Collider Energy Technology Data Exchange (ETDEWEB) Deak, M. [Univ. Autonoma de Madrid, Cantoblanco (Spain). Inst. de Fisica Teorica UAM/CSIC; Hautmann, F. [Oxford Univ. (United Kingdom). Theoretical Physics Dept.; Jung, H. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Antwerpen Univ. (Belgium). Elementaire Deeltjes Fysics; Kutak, K. [Antwerpen Univ. (Belgium). 
Elementaire Deeltjes Fysics 2010-12-15 For high-p_T forward processes at the Large Hadron Collider (LHC), QCD logarithmic corrections in the hard transverse momentum and in the large rapidity interval may both be quantitatively significant. The theoretical framework to resum consistently both kinds of logarithmic corrections to higher orders in perturbation theory is based on QCD high-energy factorization. We present numerical Monte Carlo applications of this method to final-state observables associated with production of one forward and one central jet. By computing jet correlations in rapidity and azimuth, we analyze the role of corrections to the parton-showering chain from large-angle gluon radiation, and discuss this in relationship with Monte Carlo results modeling interactions due to multiple parton chains. (orig.) 10. Fast symplectic map tracking for the CERN Large Hadron Collider Directory of Open Access Journals (Sweden) Dan T. Abell 2003-06-01 Full Text Available Tracking simulations remain the essential tool for evaluating how multipolar imperfections in ring magnets restrict the domain of stable phase-space motion. In the Large Hadron Collider (LHC) at CERN, particles circulate at the injection energy, when multipole errors are most significant, for more than 10^{7} turns, but systematic tracking studies are limited to a small fraction of this total time, even on modern computers. A considerable speedup is expected by replacing element-by-element tracking with the use of a symplectified one-turn map. We have applied this method to the realistic LHC lattice, version 6, and report here our results for various map orders, with special emphasis on precision and speed. 11.
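The one-turn-map idea in the tracking entry above (replace thousands of element-by-element kicks per turn with a single symplectic map, then iterate) can be illustrated with the Hénon map, the textbook area-preserving toy model of a linear lattice plus one sextupole-like kick. This is a schematic stand-in, not the actual LHC map:

```python
import math

def henon_turn(x, px, nu=0.205):
    """One 'turn': a sextupole-like kick followed by a linear rotation by tune nu.
    The map is area-preserving (symplectic), like a real one-turn lattice map."""
    c, s = math.cos(2 * math.pi * nu), math.sin(2 * math.pi * nu)
    px = px + x * x                      # nonlinear kick
    return c * x + s * px, -s * x + c * px

def survives(x0, px0, turns=10_000, aperture=1.0):
    """Iterate the one-turn map; report whether the particle stays inside the
    aperture. This is a crude dynamic-aperture probe, as in long-term tracking studies."""
    x, px = x0, px0
    for _ in range(turns):
        x, px = henon_turn(x, px)
        if x * x + px * px > aperture * aperture:
            return False
    return True

print(survives(0.01, 0.0))  # True: small amplitudes lie on invariant curves
print(survives(2.0, 0.0))   # False: large amplitudes are lost within a few turns
```

Iterating one map per turn instead of thousands of lattice elements is what makes 10^7-turn stability studies tractable; the price, as the entry notes, is constructing and symplectifying the map to sufficient order.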
Threshold resummation for slepton-pair production at hadron colliders International Nuclear Information System (INIS) Bozzi, Giuseppe; Fuks, Benjamin; Klasen, Michael 2007-01-01 We present a first and extensive study of threshold resummation effects for supersymmetric (SUSY) particle production at hadron colliders, focusing on Drell-Yan like slepton-pair and slepton-sneutrino associated production. After confirming the known next-to-leading order (NLO) QCD corrections and generalizing the NLO SUSY-QCD corrections to the case of mixing squarks in the virtual loop contributions, we employ the usual Mellin N-space resummation formalism with the minimal prescription for the inverse Mellin-transform and improve it by resumming 1/N-suppressed and a class of N-independent universal contributions. Numerically, our results increase the theoretical cross sections by 5 to 15% with respect to the NLO predictions and stabilize them by reducing the scale dependence from up to 20% at NLO to less than 10% with threshold resummation. 12. Precision Muon Tracking Detectors for High-Energy Hadron Colliders CERN Document Server Gadow, Philipp; Kroha, Hubert; Richter, Robert 2016-01-01 Small-diameter muon drift tube (sMDT) chambers with 15 mm tube diameter are a cost-effective technology for high-precision muon tracking over large areas at high background rates as expected at future high-energy hadron colliders including HL-LHC. The chamber design and construction procedures have been optimized for mass production and provide sense wire positioning accuracy of better than 10 μm. The rate capability of the sMDT chambers has been extensively tested at the CERN Gamma Irradiation Facility. It exceeds that of the ATLAS muon drift tube (MDT) chambers, which are operated at unprecedentedly high background rates of neutrons and gamma-rays, by an order of magnitude, which is sufficient for almost the whole muon detector acceptance at FCC-hh at maximum luminosity.
sMDT operational and construction experience exists from the ATLAS muon spectrometer upgrades which are in progress or under preparation for LHC Phases 1 and 2.

A real-time tracker for hadronic collider experiments. International Nuclear Information System (INIS). Bardi, A.; Belforte, S.; Galeotti, S.; Giannetti, P.; Morsani, F.; Spinella, F.; Dell'Orso, M.; Meschi, E. 1999-01-01.
In this paper the authors propose highly parallel dedicated processors, able to provide precise on-line track reconstruction for future hadronic collider experiments. The processors, organized in a two-level pipelined architecture, execute very fast algorithms based on the use of a large bank of pre-stored patterns of trajectory points. An associative memory implements the first stage by recognizing track candidates at low resolution, to match the demanding task of tracking at the detector readout rate. Alternative technological implementations for the associative memory are compared. The second stage receives track candidates and high-resolution hits to refine pattern recognition at the associative memory output rate. A parallel and pipelined hardware implements a binary search strategy inside a hierarchically structured pattern bank, stored in high-density commercial RAMs.

The fast tracker processor for hadron collider triggers. CERN Document Server. Annovi, A.; Bardi, A.; Carosi, R.; Dell'Orso, Mauro; D'Onofrio, M.; Giannetti, P.; Iannaccone, G.; Morsani, F.; Pietri, M.; Varotto, G. 2001-01-01.
Perspectives for precise and fast track reconstruction in future hadron collider experiments are addressed. We discuss the feasibility of a pipelined, highly parallel processor dedicated to the implementation of a very fast tracking algorithm. The algorithm is based on the use of a large bank of pre-stored combinations of trajectory points, called patterns, for extremely complex tracking systems. The CMS experiment at the LHC is used as a benchmark.
Tracking data from the events selected by the level-1 trigger are sorted and filtered by the Fast Tracker processor at an input rate of 100 kHz. This data organization allows the level-2 trigger logic to reconstruct full-resolution tracks with transverse momentum above a few GeV and search for secondary vertices within typical level-2 times. (15 refs.)

Cooldown and Warmup Studies for the Large Hadron Collider. CERN Document Server. Lebrun, P.; Tavian, L.; Wagner, U. 1998-01-01.
The Large Hadron Collider (LHC), currently under construction at CERN, will make use of superconducting magnets operating in superfluid helium below 2 K.
The LHC ring is divided into 8 sectors, each of them cooled by a refrigerator of 18 kW at 4.5 K equivalent cooling power. For the cooldown and warmup of a 3.3 km long LHC sector, the flow available above 80 K per refrigerator is 770 g/s and the corresponding capacity is 600 kW. This paper presents the results of cooldown and warmup simulations, as concerns time delays, temperature differences across magnets, available power and flow rates, and estimates of energy and liquid nitrogen consumption.

Radial scaling in inclusive jet production at hadron colliders. Science.gov (United States). Taylor, Frank E. 2018-03-01.
Inclusive jet production in p-p and p̄-p collisions shows many of the same kinematic systematics as observed in single-particle inclusive production at much lower energies. In an earlier study (1974) a phenomenology, called radial scaling, was developed for single-particle inclusive cross sections that attempted to capture the essential underlying physics of pointlike parton scattering and the fragmentation of partons into hadrons suppressed by the kinematic boundary. The phenomenology was successful in emphasizing the underlying systematics of inclusive particle production. Here we demonstrate that inclusive jet production at the Large Hadron Collider (LHC) in high-energy p-p collisions and at the Tevatron in p̄-p inelastic scattering shows similar behavior. ATLAS inclusive jet production plotted as a function of this scaling variable is studied for √s of 2.76, 7 and 13 TeV and is compared to p̄-p inclusive jet production at 1.96 TeV measured by CDF and D0 at the Tevatron, and to p-Pb inclusive jet production at √sNN = 5.02 TeV measured by ATLAS at the LHC. Inclusive single-particle production at Fermi National Accelerator Laboratory fixed-target and Intersecting Storage Rings energies is compared to inclusive J/ψ production at the LHC measured in ATLAS, CMS and LHCb. Striking common features of the data are discussed.
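The radial scaling variable referred to in the entry above is commonly written x_R = 2E*/√s, the particle (or jet) energy scaled by its kinematic maximum in the centre-of-mass frame. A minimal sketch follows; it is illustrative, not code from the cited work, and assumes a near-massless jet whose energy is approximated as E = pT·cosh(η).

```python
import math

def x_radial(pt, eta, sqrt_s):
    """Radial scaling variable x_R = 2E/sqrt(s) for a (near-massless) jet,
    with the jet energy approximated as E = pT*cosh(eta)."""
    energy = pt * math.cosh(eta)
    return 2.0 * energy / sqrt_s

# A central (eta = 0) 100 GeV jet at sqrt(s) = 13 TeV sits deep inside the
# kinematic boundary x_R = 1:
print(x_radial(100.0, 0.0, 13000.0))  # ~0.0154
```

Plotting invariant cross sections against x_R, instead of energy alone, is what exposes the common suppression near the kinematic boundary across very different √s.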
Undergraduate Laboratory Experiment: Measuring Matter-Antimatter Asymmetries at the Large Hadron Collider. CERN Document Server. Parkes, Chris; Gutierrez, J. 2015-01-01.
This document is the student manual for a third-year undergraduate laboratory experiment at the University of Manchester. This project aims to measure a fundamental difference between the behaviour of matter and antimatter through the analysis of data collected by the LHCb experiment at the Large Hadron Collider. The three-body decays $B^\pm \rightarrow h^\pm h^+ h^-$, where $h^\pm$ is a $\pi^\pm$ or $K^\pm$, are studied. The inclusive matter-antimatter asymmetry is calculated, and larger asymmetries are searched for in localized regions of the phase space.

Jets in hadron colliders at order α_s^3. International Nuclear Information System (INIS). Ellis, S.D.; Kunszt, Z.; Soper, D.E. 1991-10-01.
Recent results from the study of hadronic jets in hadron-hadron collisions at order α_s^3 in perturbation theory are presented. The numerical results are in good agreement with data, and this agreement is illustrated where possible.

Seismic studies for Fermilab future collider projects. International Nuclear Information System (INIS). Lauh, J.; Shiltsev, V. 1997-11-01.
Ground motion can cause significant beam emittance growth and orbit oscillations in large hadron colliders due to vibration of numerous focusing magnets. A larger accelerator ring circumference leads to a smaller revolution frequency and, e.g. for the Fermilab Very Large Hadron Collider (VLHC), 50-150 Hz vibrations are of particular interest as they are resonant with the beam betatron frequency. Seismic measurements at an existing large accelerator under operation can help to estimate the vibrations generated by the technical systems in future machines. Comparison of noisy and quiet microseismic conditions might be useful for proper choice of technical solutions for future colliders.
This article presents results of wide-band seismic measurements at the Fermilab site, namely, in the tunnel of the Tevatron and on the surface nearby, and in two deep tunnels in the Illinois dolomite, which is thought to be a possible geological environment of the future accelerators.

NLO corrections to production of heavy particles at hadron colliders. International Nuclear Information System (INIS). Pagani, Davide. 2013-01-01.
In this thesis we study specific aspects of the production of heavy particles at hadron colliders, with emphasis on precision predictions including next-to-leading order (NLO) corrections from the strong and electroweak interactions. In the first part of the thesis we consider the top quark charge asymmetry. In particular, we discuss in detail the calculation of the electroweak contributions from the asymmetric part of the top quark pair production cross section at O(α_s^2 α) and O(α^2) and their numerical impact on predictions for the asymmetry measurements at the Tevatron. These electroweak contributions provide a non-negligible addition to the QCD-induced asymmetry with the same overall sign and, in general, enlarge the Standard Model predictions by a factor of around 1.2, diminishing the deviations from experimental measurements. In the second part of the thesis we consider the production of squarks, the supersymmetric partners of quarks, at the Large Hadron Collider (LHC). We discuss the calculation of the contribution of factorizable NLO QCD corrections to the production of squark-squark pairs, combined at the fully differential level with squark decays. Combining the production process with two different configurations for the squark decays, our calculation is used to provide precise phenomenological predictions for two different experimental signatures that are important for the search for supersymmetry at the LHC.
We focus, for one signature, on the impact of our results on important physical differential distributions and on cut-and-count searches performed by the ATLAS and CMS collaborations. Considering the other signature, we analyze the effects from NLO QCD corrections and from the combination of production and decays on distributions relevant for parameter determination. In general, factorizable NLO QCD corrections have to be taken into account to obtain precise phenomenological predictions for the analyzed distributions and inclusive quantities. Moreover

Resummation for supersymmetric particle production at hadron colliders. Energy Technology Data Exchange (ETDEWEB). Brensing, Silja Christine. 2011-05-10.
The search for supersymmetry is among the most important tasks at current and future colliders. Especially the production of coloured supersymmetric particles would occur copiously in hadronic collisions. Since these production processes are of high relevance for experimental searches, accurate theoretical predictions are needed. Higher-order corrections in quantum chromodynamics (QCD) to these processes are dominated by large logarithmic terms due to the emission of soft gluons from initial-state and final-state particles. A systematic treatment of these logarithms to all orders in perturbation theory is provided by resummation methods. We perform the resummation of soft gluons at next-to-leading-logarithmic (NLL) accuracy for all possible production processes in the framework of the Minimal Supersymmetric Standard Model. In particular, we consider pair production of mass-degenerate light-flavour squarks and gluinos as well as pair production of top squarks and non-mass-degenerate bottom squarks. We present analytical results for all considered processes, including the soft anomalous dimensions.
Moreover, numerical predictions for total cross sections and transverse-momentum distributions for both the Large Hadron Collider (LHC) and the Tevatron are presented. We provide an estimate of the theoretical uncertainty due to scale variation and the parton distribution functions. The inclusion of NLL corrections leads to a considerable reduction of the theoretical uncertainty due to scale variation and to an enhancement of the next-to-leading order (NLO) cross section predictions. The size of the soft-gluon corrections and the reduction in the scale uncertainty are most significant for processes involving gluino production. At the LHC, where the sensitivity to squark and gluino masses ranges up to 3 TeV, the corrections due to NLL resummation over and above the NLO predictions can be as high as 35% in the case of gluino-pair production, whereas at the

The Hunt for New Physics at the Large Hadron Collider. CERN Document Server. Nath, Pran; Davoudiasl, Hooman; Dutta, Bhaskar; Feldman, Daniel; Liu, Zuowei; Han, Tao; Langacker, Paul; Mohapatra, Rabi; Valle, Jose; Pilaftsis, Apostolos; Zerwas, Dirk; AbdusSalam, Shehu; Adam-Bourdarios, Claire; Aguilar-Saavedra, J. A.; Allanach, Benjamin; Altunkaynak, B.; Anchordoqui, Luis A.; Baer, Howard; Bajc, Borut; Buchmueller, O.; Carena, M.; Cavanaugh, R.; Chang, S.; Choi, Kiwoon; Csaki, C.; Dawson, S.; de Campos, F.; De Roeck, A.; Duhrssen, M.; Eboli, O. J. P.; Ellis, J. R.; Flacher, H.; Goldberg, H.; Grimus, W.; Haisch, U.; Heinemeyer, S.; Hirsch, M.; Holmes, M.; Ibrahim, Tarek; Isidori, G.; Kane, Gordon; Kong, K.; Lafaye, Remi; Landsberg, G.; Lavoura, L.; Lee, Jae Sik; Lee, Seung J.; Lisanti, M.; Lust, Dieter; Magro, M. B.; Mahbubani, R.; Malinsky, M.; Maltoni, Fabio; Morisi, S.; Muhlleitner, M. M.; Mukhopadhyaya, B.; Neubert, M.; Olive, K. A.; Perez, Gilad; Perez, Pavel Fileviez; Plehn, T.; Ponton, E.; Porod, Werner; Quevedo, F.; Rauch, M.; Restrepo, D.; Rizzo, T. G.; Romao, J. C.; Ronga, F. J.; Santiago, Jose; Schechter, J.; Senjanovic, G.; Shao, J.; Spira, M.; Stieberger, S.;
Sullivan, Zack; Tait, Tim M. P.; Tata, Xerxes; Taylor, T. R.; Toharia, M.; Wacker, J.; Wagner, C. E. M.; Wang, Lian-Tao; Weiglein, G.; Zeppenfeld, D.; Zurek, K. 2010-01-01.
The Large Hadron Collider presents an unprecedented opportunity to probe the realm of new physics in the TeV region and shed light on some of the core unresolved issues of particle physics. These include the nature of electroweak symmetry breaking, the origin of mass, the possible constituent of cold dark matter, new sources of CP violation needed to explain the baryon excess in the universe, the possible existence of extra gauge groups and extra matter, and importantly the path Nature chooses to resolve the hierarchy problem - is it supersymmetry or extra dimensions. Many models of new physics beyond the standard model contain a hidden sector which can be probed at the LHC. Additionally, the LHC will be a top factory, and accurate measurements of the properties of the top and its rare decays will provide a window to new physics. Further, the LHC could shed light on the origin of neutralino masses if the new physics associated with their generation lies in the TeV region. Finally, the LHC is also a laboratory to test the hypothesis of TeV scale strings and D brane models.

A Search for Technicolor at The Large Hadron Collider. CERN Document Server. Love, Jeremy R.
The ATLAS detector has been used in this analysis to search for Technihadrons, predicted by Technicolor theories, decaying to two muons. These new states can be produced by the Large Hadron Collider in proton-proton collisions with a center-of-mass energy of 7 TeV. The Low-Scale Technicolor model predicts the phenomenology of the new $\rho_T$ and $\omega_T$. The dimuon invariant mass spectrum is analyzed above 130 GeV to test the consistency of the observed data with the Standard Model prediction. We observe excellent agreement between our data and the background-only hypothesis, and proceed to set limits on the cross section times branching ratio of the $\rho_T$ and $\omega_T$ as a function of their mass.
We combine the dielectron and dimuon channels to exclude masses of the $\rho_T$ and $\omega_T$ between 130 GeV - 480 GeV at 95% Confidence Level for masses of the $\pi_T$ between 50 GeV - 480 GeV. In addition, for the parameter choice m($\pi_T$) = m($\rho_T$/$\omega_T$) - 100 GeV, 95% Confidence Level l...

Development of superconducting links for the Large Hadron Collider machine. CERN Document Server. Ballarino, A. 2014-01-01.
In the framework of the upgrade of the Large Hadron Collider (LHC) machine, new superconducting lines are being developed for the feeding of the LHC magnets. The proposed electrical layout envisages the location of the power converters in surface buildings, and the transfer of the current from the surface to the LHC tunnel, where the magnets are located, via superconducting links containing tens of cables feeding different circuits and transferring altogether more than 150 kA. Depending on the location, the links will have a length ranging from 300 m to 500 m, and they will span a vertical distance of about 80 m. An overview of the R&D program that has been launched by CERN is presented, with special attention to the development of novel types of cables made from MgB2 and high-temperature superconductors (Bi-2223 and REBCO) and to the results of the tests performed on prototype links. Plans for future activities are presented, together with a timeline for potential future integration in the LHC machine.

Using Data from the Large Hadron Collider in the Classroom. Science.gov (United States). Smith, Jeremy. 2017-01-01.
Now is an exciting time for physics students, because they have access to technology and experiments all over the world that were unthinkable a generation ago. Therefore, now is also the ideal time to bring these experiments into the classroom, so students can see what cutting-edge science looks like, both in terms of the underlying physics and in terms of the technology used to gather data. With the continued running of the Large Hadron Collider at CERN, and the lab's continued dedication to providing open, worldwide access to their data, there is a unique opportunity for students to use these data in a manner very similar to how it's done in the particle physics community. In this session, we will explore ways for students to analyze real data from the CMS experiment at the LHC, plot these data to discover patterns and signals, and use these plots to determine quantities such as the invariant masses of the W, Z and Higgs bosons.
Furthermore, we will show how such activities already fit well into standard introductory physics classes, and can in fact enhance already-existing lessons in the topics of momentum, kinematics, energy and electromagnetism.

Dissecting multi-photon resonances at the Large Hadron Collider. Energy Technology Data Exchange (ETDEWEB). Allanach, B.C. (University of Cambridge, Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, Cambridge, United Kingdom); Bhatia, D.; Iyer, Abhishek M. (Tata Institute of Fundamental Research, Department of Theoretical Physics, Mumbai, India). 2017-09-15.
We examine the phenomenology of the production, at the 13 TeV Large Hadron Collider (LHC), of a heavy resonance X, which decays via other new on-shell particles n into multi- (i.e. three or more) photon final states. In the limit that n has a much smaller mass than X, the multi-photon final state may dominantly appear as a two-photon final state because the γs from the n decay are highly collinear and remain unresolved. We discuss how to discriminate this scenario from X → γγ: rather than discarding non-isolated photons, it is better to relax the isolation criteria and instead form photon-jet substructure variables. The spins of X and n leave their imprint upon the distribution of the pseudo-rapidity gap Δη between the apparent two-photon states. Depending on the total integrated luminosity, this can be used in many cases to claim discrimination between the possible spin choices of X and n, although the case where X and n are both scalar particles cannot be discriminated from the direct X → γγ decay in this manner. Information on the mass of n can be gained by considering the mass of each photon jet. (orig.)
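Several entries above (the classroom exercise and the dimuon resonance searches) rely on the same basic reconstruction: the invariant mass M = sqrt((E1+E2)² − |p1+p2|²) of a lepton pair. A minimal sketch of computing it from detector variables follows; the helper names are illustrative, not code from any cited work.

```python
import math

def four_vector(pt, eta, phi, mass=0.1057):
    """(E, px, py, pz) for a particle from detector variables pT, eta, phi.
    mass defaults to the muon mass in GeV."""
    px = pt * math.cos(phi)
    py = pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px * px + py * py + pz * pz + mass * mass)
    return e, px, py, pz

def invariant_mass(p1, p2):
    """Invariant mass M = sqrt((E1+E2)^2 - |p1+p2|^2) of a particle pair,
    the quantity histogrammed in dimuon resonance searches."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

# Two back-to-back 45.6 GeV muons reconstruct to roughly the Z mass:
m = invariant_mass(four_vector(45.6, 0.0, 0.0), four_vector(45.6, 0.0, math.pi))
print(m)  # ~91.2 GeV
```

Histogramming this quantity over many events is what produces the dimuon mass spectrum in which a resonance (Z, Higgs, or a hypothetical ρT/Z') would appear as a peak over the Drell-Yan continuum.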
First electron-cloud studies at the Large Hadron Collider. CERN Document Server. Dominguez, O.; Arduini, G.; Metral, E.; Rumolo, G.; Zimmermann, F.; Maury Cuna, H. 2013-01-01.
During the beam commissioning of the Large Hadron Collider (LHC) with 150, 75, 50, and 25-ns bunch spacing, important electron-cloud effects, like pressure rise, cryogenic heat load, beam instabilities, or emittance growth, were observed. Methods have been developed to infer different key beam-pipe surface parameters by benchmarking simulations and pressure-rise as well as heat-load observations. These methods allow us to monitor the scrubbing process, i.e., the reduction of the secondary emission yield as a function of time, in order to decide on the most appropriate strategies for machine operation. To better understand the influence of electron clouds on the beam dynamics, simulations have been carried out to examine both the coherent and the incoherent effects on the beam. In this paper we present the methodology and first results for the scrubbing monitoring process at the LHC. We also review simulated instability thresholds and tune footprints for beams of different emittance, interacting with an electr...

Longitudinal emittance blowup in the Large Hadron Collider. CERN Document Server. Baudrenghien, P. 2013-01-01.
The Large Hadron Collider (LHC) relies on Landau damping for longitudinal stability. To avoid decreasing the stability margin at high energy, the longitudinal emittance must be continuously increased during the acceleration ramp. Longitudinal blowup provides the required emittance growth. The method was implemented through the summer of 2010. Band-limited RF phase noise is injected in the main accelerating cavities during the whole ramp of about 11 min. Synchrotron frequencies change along the energy ramp, but the digitally created noise tracks the frequency change.
The position of the noise band, relative to the nominal synchrotron frequency, and the bandwidth of the spectrum are set by pre-defined constants, making the diffusion stop at the edges of the demanded distribution. The noise amplitude is controlled by feedback using the measurement of the average bunch length. This algorithm reproducibly achieves the programmed bunch length of about 1.2 ns at flat top with low bunch-to-bunch scatter and provides a...

Sextupole correction magnets for the Large Hadron Collider. CERN Document Server. Meinke, R. B.; Senti, M.; Op de Beeck, W. J.; De Ryck, C.; MacKay, W. W. 1999.
About 2500 superconducting sextupole corrector magnets (MCS) are needed for the Large Hadron Collider (LHC) at CERN to compensate persistent-current sextupole fields of the main dipoles. The MCS is a cold-bore magnet with an iron yoke. The coils are made from a NbTi conductor, which is cooled to 1.9 K. In the original CERN design, 6 individual sub-coils, made from a monolithic composite conductor, are assembled and spliced together to form the sextupole. The coils are individually wound around precision-machined central islands and stabilized with matching saddle pieces at both ends. The Advanced Magnet Lab, Inc. (AML) has produced an alternative design, which gives improved performance and reliability at reduced manufacturing cost. In the AML design, the magnet consists of three splice-free sub-coils, which are placed with an automated winding process into pockets of prefabricated G-11 support cylinders. Any assembly process of sub-coils with potential misalignment is eliminated. The AML magnet uses a Kapton-wra...

CP violation in supersymmetry, Higgs sector and large hadron collider. International Nuclear Information System (INIS). Godbole, Rohini M. 2006-01-01.
In this talk I discuss some aspects of CP violation (CPV) in supersymmetry (SUSY) as well as in the Higgs sector. Further, I discuss ways in which these may be probed at hadronic colliders.
In particular, I will point out the ways in which studies in the $\tilde\chi^\pm$, $\tilde\chi_2^0$ sector at the Tevatron may be used to provide information on this, and how the search can be extended to the LHC. I will then follow this by a discussion of the CP mixing induced in the Higgs sector due to the above-mentioned CPV in the soft SUSY-breaking parameters and its effects on the Higgs phenomenology at the LHC. I would then point out some interesting aspects of the phenomenology of a moderately light charged Higgs boson, consistent with the LEP constraints, in this scenario. Decay of such a charged Higgs boson would also allow a probe of a light (≤ 50 GeV), CP-violating (CPV) Higgs boson. Such a light neutral Higgs boson might have escaped detection at LEP and could also be missed at the LHC in the usual search channels. (author)

Unparticle self-interactions at the Large Hadron Collider. International Nuclear Information System (INIS). Bergstroem, Johannes; Ohlsson, Tommy. 2009-01-01.
We investigate the effect of unparticle self-interactions at the Large Hadron Collider (LHC). Especially, we discuss the three-point correlation function, which is determined by conformal symmetry up to a constant, and study its relation to processes with four-particle final states. These processes could be used as a favorable way to look for unparticle physics, and for weak enough couplings to the Standard Model, even the only way. We find updated upper bounds on the cross sections for unparticle-mediated 4γ final states at the LHC and novel upper bounds for the corresponding 2γ2l and 4l final states. The sizes of the allowed cross sections obtained are comparably large for large values of the scaling dimension of the unparticle sector, but they decrease with decreasing values of this parameter.
In addition, we present relevant distributions for the different final states, enabling the possible identification of the unparticle scaling dimension if there were to be a large number of events of such final states at the LHC.

Supersymmetric dark matter in the harsh light of the Large Hadron Collider. Science.gov (United States). Peskin, Michael E. 2015-01-01.
I review the status of the model of dark matter as the neutralino of supersymmetry in the light of constraints on supersymmetry given by the 7- to 8-TeV data from the Large Hadron Collider (LHC). PMID:25331902

The case for future hadron colliders from B → K(*) μ+μ− decays. Science.gov (United States). Allanach, B. C.; Gripaios, Ben; You, Tevong. 2018-03-01.
Recent measurements in B → K(*) μ+μ− decays are somewhat discrepant with Standard Model predictions. They may be harbingers of new physics at an energy scale potentially accessible to direct discovery. We estimate the sensitivity of future hadron colliders to the possible new particles that may be responsible for the anomalies at tree level: leptoquarks or Z's. We consider luminosity upgrades for a 14 TeV LHC, a 33 TeV LHC, and a 100 TeV pp collider such as the FCC-hh. In the most conservative and pessimistic models, for narrow particles with perturbative couplings, Z' masses up to 20 TeV and leptoquark masses up to 41 TeV may in principle explain the anomalies. Coverage of Z' models is excellent: a 33 TeV 1 ab^-1 LHC is expected to cover most of the parameter space up to 8 TeV in mass, whereas the 100 TeV FCC-hh with 10 ab^-1 will cover all of it. A smaller portion of the leptoquark parameter space is covered by future colliders: for example, in a μ+μ− jj di-leptoquark search, a 100 TeV 10 ab^-1 collider has a projected sensitivity up to leptoquark masses of 12 TeV (extendable to 21 TeV with a strong coupling for single leptoquark production).
Hadronic cross-sections in two-photon processes at a future linear collider. International Nuclear Information System (INIS). Godbole, Rohini M.; Roeck, Albert de; Grau, Agnes; Pancheri, Giulia. 2003-01-01.
In this note we address the issue of the measurability of the hadronic cross-sections at a future photon collider, as well as for the two-photon processes at a future high-energy linear e+e− collider. We extend, to higher energy, our previous estimates of the accuracy with which the γγ cross-section needs to be measured in order to distinguish between different theoretical models of the energy dependence of the total cross-sections. We show that the necessary precision to discriminate among these models is indeed possible at future linear colliders in the Photon Collider option. Further, we note that even in the e+e− option a measurement of the hadron production cross-section via γγ processes, with an accuracy necessary to allow discrimination between different theoretical models, should be possible. We also comment briefly on the implications of these predictions for hadronic backgrounds at the future TeV-energy e+e− collider CLIC. (author)

K factor for Higgs boson production via the gluon fusion process at hadron colliders. International Nuclear Information System (INIS). Tanaka, H. 1992-01-01.
In this paper soft-gluon corrections for Higgs boson production at hadron colliders are calculated. It is found that the soft contributions for Higgs boson production via the gluon fusion process are large and cannot be neglected even at SSC energy. Some qualitative discussions of the QCD corrections to Higgs boson production at hadron colliders and their background processes are presented for various Higgs boson mass cases.

Extra dimension searches at hadron colliders to next-to-leading order-QCD. Science.gov (United States). Kumar, M. C.; Mathews, Prakash; Ravindran, V.
2007-11-01
The quantitative impact of NLO-QCD corrections for searches of large and warped extra dimensions at hadron colliders is investigated for the Drell-Yan process. The K-factors for various observables at hadron colliders are presented. Factorisation and renormalisation scale dependence, and the uncertainties due to various parton distribution functions, are studied. Uncertainties arising from the error on experimental data are estimated using the MRST parton distribution functions.

2. Destination Universe: The Incredible Journey of a Proton in the Large Hadron Collider
CERN Multimedia
Lefevre, C
2008-01-01
This brochure illustrates the incredible journey of a proton as he winds his way through the CERN accelerator chain and ends up inside the Large Hadron Collider (LHC). The LHC is CERN's flagship particle accelerator, which can collide protons together at close to the speed of light, creating circumstances like those just seconds after the Big Bang.

3. Destination Universe: The Incredible Journey of a Proton in the Large Hadron Collider (English version)
CERN Multimedia
Lefevre, C
2008-01-01
This brochure illustrates the incredible journey of a proton as he winds his way through the CERN accelerator chain and ends up inside the Large Hadron Collider (LHC). The LHC is CERN's flagship particle accelerator, which can collide protons together at close to the speed of light, creating circumstances like those just seconds after the Big Bang.

4. A search for technicolor at the large hadron collider
Science.gov (United States)
Love, Jeremy R.
The Standard Model of particle physics provides an accurate description of all experimental data to date. The only unobserved piece of the Standard Model is the Higgs boson, a consequence of the spontaneous breaking of electroweak symmetry by the Higgs mechanism. An alternative to the Higgs mechanism is proposed by Technicolor theories, which break electroweak symmetry dynamically through a new force. Technicolor predicts many new particles, called Technihadrons, that could be observed by experiments at hadron colliders. This thesis presents a search for two of the lightest Technihadrons, the ρT and ωT. The Low-Scale Technicolor model predicts the phenomenology of these new states. The ρT and ωT are produced through qq annihilation and couple to Standard Model fermions through the Drell-Yan process, which can result in the dimuon final state. The ρT and ωT preferentially decay to the πT and a Standard Model gauge boson if kinematically allowed. Changing the mass of the πT relative to that of the ρT and ωT affects the cross section times branching fraction to dimuons. The ρT and ωT are expected to have masses below about 1 TeV. The Large Hadron Collider (LHC) at CERN outside of Geneva, Switzerland, produces proton-proton collisions with a center-of-mass energy of 7 TeV. ATLAS, a general-purpose high energy physics detector, has been used in this analysis to search for Technihadrons decaying to two muons. We use the ATLAS detector to reconstruct the tracks of muons with high transverse momentum coming from these proton-proton collisions. The dimuon invariant mass spectrum is analyzed above 130 GeV to test the consistency of the observed data with the Standard Model prediction. We observe excellent agreement between our data and the background-only hypothesis, and proceed to set limits on the cross section times branching ratio of the ρT and ωT as a function of their mass using the Low-Scale Technicolor model. We combine the dielectron and dimuon channels

5. CERN Library | Mario Campanelli presents "Inside CERN's Large Hadron Collider" | 16 March
CERN Multimedia
CERN Library
2016-01-01
"Inside CERN's Large Hadron Collider" by Mario Campanelli. Presentation on Wednesday, 16 March at 4 p.m. in the Library (bldg 52-1-052). The book aims to explain the historical development of particle physics, with special emphasis on CERN and collider physics.
It describes in detail the LHC accelerator and its detectors, describing the science involved as well as the sociology of big collaborations, culminating with the discovery of the Higgs boson.
Inside CERN's Large Hadron Collider, Mario Campanelli, World Scientific Publishing, 2015, ISBN 9789814656641

6. Quench testing of HTS sub-elements for 13 kA and 600 A leads designed to the specifications for the CERN Large Hadron Collider project
CERN Document Server
Cowey, L; Krischel, D; Bock, J J
2000-01-01
The ability to safely withstand and survive self-quench conditions is an important consideration in the design and utilisation of HTS current leads. The provision of a non-superconducting shunt path allows current to be diverted in the event of a transition to the normal state. This shunt should allow very rapid transfer of current out of the HTS material and be able to safely support the full load current for the time required to detect the fault and reduce the current to zero. However, the shunt should also be designed to minimise the increased heat load which will result from its addition to the lead. Tests of leads based on melt-cast BSCCO 2212 utilising a fully integrated silver-gold alloy sheath are described. The HTS sub-elements form part of a full 13 kA lead, designed to the specifications of CERN for the LHC project. The sub-elements proved able to fully comply with and exceed the quench performance required by CERN. The HTS module was quenched at the full design current and continued to maintain this ...

7. SLAC-Linac-Collider (SLC) Project
International Nuclear Information System (INIS)
Wiedemann, H.
1981-02-01
The proposed SLAC Linear Collider Project (SLC) and its features are described in this paper. In times of ever-increasing costs for energy, the electron storage ring principle is about to reach its practical limit. A new class of colliding-beam facilities, the Linear Colliders, is becoming more and more attractive and affordable at very high centre-of-mass energies. The SLC is designed to be a pioneer of this new class of colliding-beam facilities and at the same time will serve as a valuable tool to explore high energy physics at the level of 100 GeV in the centre-of-mass system

8. For information - Université de Genève: Accelerator Physics Challenges for the Large Hadron Collider at CERN
CERN Multimedia
Université de Genève
2005-01-01
UNIVERSITE DE GENEVE, Faculté des sciences, Section de physique - Département de physique nucléaire et corpusculaire, 24, quai Ernest-Ansermet, 1211 GENEVE 4. Tel: (022) 379 62 73, Fax: (022) 379 69 92.
Wednesday 16 March, PARTICLE PHYSICS SEMINAR at 5 p.m., Auditoire Stückelberg: "Accelerator Physics Challenges for the Large Hadron Collider at CERN", Prof. Olivier Bruning / CERN.
The Large Hadron Collider project at CERN will bring the energy frontier of high energy particle physics back to Europe and with it push accelerator technology into uncharted territory. The talk presents the LHC project in the context of past CERN accelerator developments and addresses the main challenges in terms of technology and accelerator physics. Information: http://dpnc.unige.ch/seminaire/annonce.html Organizer: A. Cervera Villanueva

9. Radiation protection considerations in the design of the LHC, CERN's Large Hadron Collider
International Nuclear Information System (INIS)
Hoefert, M.; Huhtinen, M.; Moritz, L.E.; Nakashima, H.; Potter, K.M.; Rollet, S.; Stevenson, G.R.; Zazula, J.M.
1996-01-01
This paper describes the radiological concerns which are being taken into account in the design of the LHC (CERN's future Large Hadron Collider). The machine will be built in the 27 km circumference ring tunnel of the existing LEP collider at CERN. The high intensity of the circulating beams (each containing more than 10^14 protons at 7 TeV) determines the thickness specification of the shielding of the main-ring tunnel, the precautions to be taken in the design of the beam dumps and their associated caverns, and the radioactivity induced by the loss of protons in the main ring through inelastic beam-gas interactions. The high luminosity of the collider is designed to provide inelastic collision rates of 10^9 per second in each of the two principal detector installations, ATLAS and CMS. These collisions determine the shielding of the experimental areas, the radioactivity induced both in the detectors and in the machine components on either side of the experimental installations and, to some extent, the radioactivity induced in the beam-cleaning (scraper) systems. Some of the environmental issues raised by the project will be discussed. (author)

10. Investigation of hadronic matter at the Fermilab Tevatron Collider: Technical progress report, 1986 October-1987 October
International Nuclear Information System (INIS)
Anderson, E.W.
1987-01-01
An investigation of hadronic matter at very high energy densities is reported. The present experiment, E-735, is a search for a deconfined quark-gluon plasma phase of matter expected to occur when temperatures of 240 MeV are achieved. Preliminary data have been obtained during the first operation of the Fermilab Tevatron Collider during the period January to May 1987. The collaboration is about to publish first results on the charged particle multiplicity and transverse momentum distributions. In addition, we have data on the particle identification of the produced secondaries. Both measurements are regarded on theoretical grounds as sensitive indicators of the formation of a high-temperature plasma. The capital project funded under this contract was a 240-element trigger hodoscope array, with associated electronics and monitor.
The hodoscope was completed and performed to design expectations in the high-rate and high-radiation environment of the Collider. Scientific personnel supported under this contract were also responsible for the implementation of the data acquisition system used for E-735. Although the system underwent several unanticipated modifications in response to changing schedules, the required service was provided. Preparations are currently under way for the principal data acquisition during the spring of 1988. At that time we will have in place the central tracking chamber and the remainder of the spectrometer chambers. Tests will also be made on backgrounds and detector materials appropriate to our proposal, P-787, to measure leptons and photons in the third Collider running period

11. A conceptual solution for a beam halo collimation system for the Future Circular hadron-hadron Collider (FCC-hh)
Science.gov (United States)
Fiascaris, M.; Bruce, R.; Redaelli, S.
2018-06-01
We present the first conceptual solution for a collimation system for the hadron-hadron option of the Future Circular Collider (FCC-hh). The collimation layout is based on the scaling of the present Large Hadron Collider collimation system to the FCC-hh energy, and it includes betatron and momentum cleaning, as well as dump protection collimators and collimators in the experimental insertions for protection of the final-focus triplet magnets. An aperture model for the FCC-hh is defined and the geometrical acceptance is calculated at injection and collision energy, taking into account mechanical and optics imperfections. The performance of the system is then assessed through the analysis of normalized halo distributions and complete loss maps for an ideal lattice. The performance limitations are discussed and a solution to improve the system performance with the addition of dispersion suppression collimators around the betatron cleaning insertion is presented.
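Several records above quote beam parameters: more than 10^14 protons per beam at 7 TeV (the radiation-protection paper) and 10^9 inelastic collisions per second at design luminosity. Both numbers, which motivate the collimation and machine-protection work catalogued here, follow from one-line arithmetic, sketched below; the ~100 mb inelastic cross-section is an assumed round number, not a figure taken from the abstracts:

```python
# Back-of-envelope check of figures quoted in the records above: the energy
# stored in one LHC beam and the inelastic collision rate at design luminosity.

EV_TO_J = 1.602176634e-19  # joules per electronvolt

def stored_energy_mj(n_protons, energy_tev):
    """Kinetic energy stored in one beam, in megajoules."""
    return n_protons * energy_tev * 1e12 * EV_TO_J / 1e6

def collision_rate(luminosity_cm2_s, sigma_mb):
    """Event rate = luminosity * cross-section (1 mb = 1e-27 cm^2)."""
    return luminosity_cm2_s * sigma_mb * 1e-27

# 1e14 protons at 7 TeV (radiation-protection record above):
print(stored_energy_mj(1e14, 7.0))   # roughly 1e2 MJ per beam
# Design luminosity 1e34 cm^-2 s^-1 with an assumed ~100 mb cross-section:
print(collision_rate(1e34, 100.0))   # roughly 1e9 collisions per second
```

The second result reproduces the 10^9 collisions per second quoted in the radiation-protection abstract, and the first shows why beam dumps and collimators are treated as safety-critical systems throughout this listing.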
12. Accelerator physics and technology challenges of very high energy hadron colliders
Science.gov (United States)
Shiltsev, Vladimir D.
2015-08-01
High energy hadron colliders have been at the forefront of particle physics for more than three decades. At present, the international particle physics community is considering several options for a 100 TeV proton-proton collider as a possible post-LHC energy-frontier facility. The method of colliding beams has not fully exhausted its potential but has slowed down considerably in its progress. This paper briefly reviews the accelerator physics and technology challenges of future very high energy colliders and outlines the areas of research and development required for their technical and financial feasibility.

13. Trading studies of a very large hadron collider
International Nuclear Information System (INIS)
Ruggiero, A.G.
1996-01-01
The authors have shown that the design of the ELOISATRON can be approached in five separate steps. In this report they deal with the two major issues of the collider: the size and the strength of the superconducting magnets. The reference design of the SSC calls for a collider circumference of 86 km. It represents the largest size that until recently was judged feasible. The reference design of the LHC requires a bending field of 9 Tesla, which industry is presently determined to demonstrate. Clearly the large size of the project presents problems with magnet tolerances and with collider operation and management. The high field of the superconducting magnets needs to be demonstrated, and the high-field option in excess of 9 Tesla requires extensive research and development. It is obvious from the start that, if the ELOISATRON is to allow large beam energies, the circumference also has to be larger than that of the SSC, probably of a few hundred kilometers.
On the other hand, Tevatron, RHIC and SSC types of superconducting magnets have been built and demonstrated on a large scale and proven to be cost-effective and reliable. Their field, nevertheless, can hardly exceed a value of 7.5 Tesla without major modifications that need to be studied. The LHC type of magnet may be capable of 9 Tesla, but it is presently being investigated by European industry. If one wants to keep the size of the ring within reasonable limits, a somewhat higher bending field is required for the ELOISATRON, especially if one also wants to take advantage of synchrotron radiation effects. A field value of 13 Tesla, twice the value of the SSC superconducting magnets, has recently been proposed, but it clearly needs a robust program of research and development. This magnet will probably not be of the RHIC/SSC type, nor even of the LHC type. It will have to be designed and conceived anew. In the following they examine two possible approaches. In the first approach

14. Industrial Technology for Unprecedented Energy and Luminosity: The Large Hadron Collider
CERN Document Server
Lebrun, P
2004-01-01
With over 3 billion Swiss francs of procurement contracts under execution in industry and the installation of major technical systems in its first 3.3 km sector, the Large Hadron Collider (LHC) construction is now in full swing at CERN, the European Organization for Nuclear Research. The LHC is not only the most challenging particle accelerator, it is also the largest global project ever for a scientific instrument based on advanced technology. Starting from accelerator performance requirements, we recall how these can be met by an appropriate combination of technologies, such as high-field superconducting magnets, superfluid helium cryogenics and power electronics, with particular emphasis on the developments required to meet demanding specifications, and on the industrialization issues which had to be solved to achieve series production of precision components under tight quality assurance and within limited resources. This provides the opportunity for reviewing the production status of the main systems and the progress ...

15. Manufacturing and Installation of the Compound Cryogenic Distribution Line for the Large Hadron Collider
CERN Document Server
Riddone, G; Bouillot, A; Brodzinski, K; Dupont, M; Fathallah, M; Fournel, JL; Gitton, E; Junker, S; Moussavi, H; Parente, C
2007-01-01
The Large Hadron Collider (LHC) [1] currently under construction at CERN will make use of superconducting magnets operating in superfluid helium below 2 K. A compound cryogenic distribution line (QRL) will feed the local elementary cooling loops in the cryomagnet strings with helium at different temperatures and pressures. Low heat inleak at all temperature levels is essential for the overall LHC cryogenic performance. Following competitive tendering, CERN adjudicated the contract for the series line to Air Liquide (France) in 2001. This paper recalls the main features of the technical specification and shows the project status. The basic choices and achievements of the industrialization phase of the series production are also presented, as well as the installation issues and status.

16. Cost-Benefit Analysis of the Large Hadron Collider to 2025 and beyond
CERN Document Server
Florio, Massimo; Sirtori, Emanuela
2015-01-01
Social cost-benefit analysis (CBA) of projects has been successfully applied in different fields such as transport, energy, health, education, and the environment, including climate change.
It is often argued that it is impossible to extend the CBA approach to the evaluation of the social impact of research infrastructures, because the final benefit to society of scientific discovery is generally unpredictable. Here, we propose a quantitative approach to this problem, we use it to design an empirically testable CBA model, and we apply it to the Large Hadron Collider (LHC), the highest-energy accelerator in the world, currently operating at CERN. We show that the evaluation of benefits can be made quantitative by determining their value to users (scientists, early-stage researchers, firms, visitors) and non-users (the general public). Four classes of contributions to users are identified: knowledge output, human capital development, technological spillovers, and cultural effects. Benefits for non-users can be ...

17. Simulations of fast crab cavity failures in the high luminosity Large Hadron Collider
Science.gov (United States)
Yee-Rendon, Bruce; Lopez-Fernandez, Ricardo; Barranco, Javier; Calaga, Rama; Marsili, Aurelien; Tomás, Rogelio; Zimmermann, Frank; Bouly, Frédéric
2014-05-01
Crab cavities (CCs) are a key ingredient of the high luminosity Large Hadron Collider (HL-LHC) project for increasing the luminosity of the LHC. At KEKB, CCs have exhibited abrupt changes of phase and voltage during a time period of the order of a few LHC turns, and considering the significant stored energy in the HL-LHC beam, CC failures represent a serious threat to LHC machine protection. In this paper, we discuss the effect of CC voltage or phase changes on a time interval similar to, or longer than, the one needed to dump the beam. The simulations assume a quasistationary-state distribution to assess the particle losses for the HL-LHC. These distributions produce beam losses below the safe-operation threshold for Gaussian tails, while for non-Gaussian tails the losses are of the same order as the limit. Additionally, some mitigation strategies are studied for reducing the damage caused by CC failures.

18. Study for a failsafe trigger generation system for the Large Hadron Collider beam dump kicker magnets
CERN Document Server
Rampl, M
1999-01-01
The 27 km particle accelerator Large Hadron Collider (LHC), which will be completed at the European Laboratory for Particle Physics (CERN) in 2005, will operate with extremely high stored beam energies (~334 MJ per beam). Since the equipment, and in particular the superconducting magnets, must be protected from damage caused by these high-energy beams, the beam dump must be able to absorb this energy very reliably at every stage of operation. The kicker magnets that extract the particles from the accelerator are synchronised with the beam by the trigger generation system. This thesis is a first study of this electronic module and its functions. A special synchronisation circuit and a very reliable electronic switch were developed. Most functions were implemented in a gate array to improve reliability and to facilitate modifications during the test stage. This study also comprises the complete concept for the prototype of the trigger generation system. During all project stages reliability was always the main determin...

19. Effects of bulk viscosity and hadronic rescattering in heavy ion collisions at energies available at the BNL Relativistic Heavy Ion Collider and at the CERN Large Hadron Collider
Science.gov (United States)
Ryu, Sangwook; Paquet, Jean-François; Shen, Chun; Denicol, Gabriel; Schenke, Björn; Jeon, Sangyong; Gale, Charles
2018-03-01
We describe ultrarelativistic heavy ion collisions at the BNL Relativistic Heavy Ion Collider and the CERN Large Hadron Collider with a hybrid model using the IP-Glasma model for the earliest stage and viscous hydrodynamics and microscopic transport for the later stages of the collision.
We demonstrate that within this framework the bulk viscosity of the plasma plays an important role in describing the experimentally observed radial flow and azimuthal anisotropy simultaneously. We further investigate the dependence of observables on the temperature below which we employ the microscopic transport description.

20. Higgs boson production at hadron colliders at N3LO in QCD
Science.gov (United States)
Mistlberger, Bernhard
2018-05-01
We present the Higgs boson production cross section at hadron colliders in the gluon fusion production mode through N3LO in perturbative QCD. Specifically, we work in an effective theory in which the top quark is assumed to be infinitely heavy and all other quarks are considered massless. Our result is the first exact formula for a partonic hadron collider cross section at N3LO in perturbative QCD, and it is an analytic computation of a hadron collider cross section involving elliptic integrals. We derive numerical predictions for the Higgs boson cross section at the LHC. Previously this result was approximated by an expansion of the cross section around the production threshold of the Higgs boson, and we compare our findings. Finally, we study the impact of our new result on the state-of-the-art prediction for the Higgs boson cross section at the LHC.

1. Controls for the CERN large hadron collider (LHC)
International Nuclear Information System (INIS)
Kissler, K.H.; Perriollat, F.; Rabany, M.; Shering, G.
1992-01-01
CERN's planned large superconducting collider project presents several new challenges to the control system. These are discussed along with current thinking as to how they can be met. The high-field superconducting magnets are subject to 'persistent currents' which will require real-time measurements and control using a mathematical model on a 2-10 second time interval. This may be realized using direct links, multiplexed using TDM, between the field equipment and central servers. Quench control and avoidance will make new demands on speed of response, reliability and surveillance. The integration of large quantities of industrially controlled equipment will be important. Much of the controls will be in common with LEP, so a seamless integration of LHC and LEP controls will be sought. A very large amount of new high-tech equipment will have to be tested, assembled and installed in the LEP tunnel in a short time. The manpower and cost constraints will be much tighter than previously. New approaches will have to be found to solve many of these problems, with the additional constraint of integrating them into an existing framework. (author)

2. Top-quark pair production at hadron colliders
Energy Technology Data Exchange (ETDEWEB)
Ahrens, Valentin
2011-12-08
In this thesis we investigate several phenomenologically important properties of top-quark pair production at hadron colliders. We calculate double differential cross sections in two different kinematical setups, pair invariant-mass (PIM) and single-particle inclusive (1PI) kinematics. In pair invariant-mass kinematics we are able to present results for the double differential cross section with respect to the invariant mass of the top-quark pair and the top-quark scattering angle. Working in the threshold region, where the pair invariant mass M is close to the partonic center-of-mass energy √s, we are able to factorize the partonic cross section into different energy regions. We use renormalization-group (RG) methods to resum large threshold logarithms to next-to-next-to-leading-logarithmic (NNLL) accuracy. On a technical level this is done using effective field theories, such as heavy-quark effective theory (HQET) and soft-collinear effective theory (SCET). The same techniques are applied when working in 1PI kinematics, leading to a calculation of the double differential cross section with respect to the transverse momentum pT and the rapidity of the top quark.
We restrict the phase space such that only soft emission of gluons is possible, and perform an NNLL resummation of threshold logarithms. The obtained analytical expressions enable us to precisely predict several observables, and a substantial part of this thesis is devoted to their detailed phenomenological analysis. Matching our results in the threshold regions to the exact ones at next-to-leading order (NLO) in fixed-order perturbation theory allows us to make predictions at NLO+NNLL order in RG-improved perturbation theory, and at approximate next-to-next-to-leading order (NNLO) in fixed-order perturbation theory. We give numerical results for the invariant mass distribution of the top-quark pair, and for the top-quark transverse-momentum and rapidity spectra. We predict the total cross section, separately for both

3. Possibilities of polarized protons in the Spp̄S and other high energy hadron colliders
International Nuclear Information System (INIS)
Courant, E.D.
1984-01-01
The requirements for collisions with polarized protons in hadron colliders above 200 GeV are listed and briefly discussed. Particular attention is given to the use of the 'Siberian snake' to eliminate depolarizing resonances, which occur when the spin precession frequency equals a frequency contained in the spectrum of the field seen by the beam. The Siberian snake is a device which makes the spin precession frequency essentially constant by using spin rotators, which precess the spin by 180° about either the longitudinal or the transverse horizontal axis. It is concluded that operation with polarized protons should be possible at all the high energy hadron colliders

4. Heavy-Ion Collimation at the Large Hadron Collider: Simulations and Measurements
OpenAIRE
Hermes, Pascal Dominik; Wessels, Johannes Peter; Bruce, Roderik
2017-01-01
The CERN Large Hadron Collider (LHC) stores and collides proton and $^{208}$Pb$^{82+}$ beams of unprecedented energy and intensity. Thousands of superconducting magnets, operated at 1.9 K, guide the very intense and energetic particle beams, which have a large potential for destruction. This implies the demand for a multi-stage collimation system to provide protection from beam-induced quenches or even hardware damage. In heavy-ion operation, ion fragments with significant rigidity offsets ca...

5. High-Luminosity Large Hadron Collider (HL-LHC): Preliminary Design Report
International Nuclear Information System (INIS)
Apollinari, G.; Béjar Alonso, I.; Brüning, O.; Lamont, M.; Rossi, L.
2015-01-01
The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor of ten. The LHC is already a highly complex and exquisitely optimised machine, so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cavities for beam rotation with ultra-precise phase control, new technology and physical processes for beam collimation, and 300 metre-long high-power superconducting links with negligible energy dissipation. The present document describes the technologies and components that will be used to realise the project and is intended to serve as the basis for the detailed engineering design of HL-LHC.
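The design-report abstract above quotes a factor-of-ten increase in integrated luminosity. Integrated luminosity translates directly into event counts via N = σ · ∫L dt, which is why this figure drives the physics case. A minimal sketch of that conversion; the ~50 pb gluon-fusion Higgs cross-section and the 3000 fb⁻¹ HL-LHC target are assumed round numbers for illustration, not figures taken from the abstracts:

```python
def expected_events(sigma_pb, integrated_lumi_fb):
    """N = cross-section * integrated luminosity, with 1 pb = 1000 fb."""
    return sigma_pb * 1000.0 * integrated_lumi_fb

# Assumed round numbers: ~50 pb Higgs production cross-section and a
# ~3000 fb^-1 HL-LHC dataset (ten times a nominal ~300 fb^-1 LHC run).
print(f"{expected_events(50.0, 3000.0):.1e}")  # ~1.5e8 Higgs bosons produced
```

Event yields of this size are what make the percent-level cross-section predictions of the N3LO record above experimentally relevant.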
6. Study of high muon multiplicity cosmic ray events with ALICE at the CERN Large Hadron Collider
CERN Document Server
Rodriguez Cahuantzi, Mario
2015-01-01
ALICE is one of four large experiments at the CERN Large Hadron Collider. Located 52 meters underground with 28 meters of overburden rock, it has also been used to detect atmospheric muons produced by cosmic-ray interactions in the upper atmosphere. We present the muon multiplicity distribution of these cosmic-ray events and their comparison with Monte Carlo simulation. This analysis exploits the large size and excellent tracking capability of the ALICE Time Projection Chamber. A special emphasis is given to the study of high multiplicity events containing more than 100 reconstructed muons and corresponding to a muon areal density larger than 5.9 m$^{-2}$. The measured rate of these events shows that they stem from primary cosmic-rays with energies above 10$^{16}$ eV. The frequency of these events can be successfully described by assuming a heavy mass composition of primary cosmic-rays in this energy range and using the most recent hadronic interaction models to simulate the development of the resulting air sh...

7. High-Luminosity Large Hadron Collider (HL-LHC): Preliminary Design Report
Energy Technology Data Exchange (ETDEWEB)
Apollinari, G. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)]; Béjar Alonso, I. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)]; Brüning, O. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)]; Lamont, M. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)]; Rossi, L. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)]
2015-12-17

8. Diffractive dissociation of top hadrons at collider energies
International Nuclear Information System (INIS)
Di Bitonto, D.
1983-01-01
In an optical model in which heavy-flavour production is described as coherent scattering within the nucleon, the total heavy-flavour cross-section is expected to obey an $m_q^{-2}$ dependence, while the partial differential cross-section dσ/dp_T for top hadrons is expected to show a broad plateau above approx. 20 GeV/c in transverse momentum. The expected top hadron signal dσ/dp_T is 60 nb for m_t = 20 GeV/c², appearing in the approximate angular range θ = 10°-20°. (author)
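The optical-model abstract above states that the total heavy-flavour cross-section obeys an $m_q^{-2}$ dependence. A minimal sketch of that scaling law; the 1000 nb reference value at m_q = 5 GeV is an invented number for illustration only, with the 20 GeV target mass taken from the abstract:

```python
def scaled_cross_section(sigma_ref, m_ref, m_q):
    """Optical-model scaling sigma ~ m_q**-2, normalised to a reference mass."""
    return sigma_ref * (m_ref / m_q) ** 2

# Illustration only: scale a hypothetical 1000 nb cross-section at
# m_q = 5 GeV (b quark) to m_q = 20 GeV (the top mass assumed in the abstract).
print(scaled_cross_section(1000.0, 5.0, 20.0))  # 62.5 nb
```

The quadratic suppression is the whole content of the scaling claim: a fourfold increase in quark mass costs a factor of sixteen in cross-section.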
Computer simulation of the emittance growth due to noise in large hadron colliders International Nuclear Information System (INIS) Lebedev, V. 1993-03-01 The problem of emittance growth due to random fluctuations of the magnetic field in a hadron collider is considered. The results of computer simulations are compared with the analytical theory developed earlier. A good agreement was found between the analytical theory predictions and the computer simulations for the collider tunes located far enough from high order betatron resonances. The dependencies of the emittance growth rate on noise spectral density, beam separation at the Interaction Point (IP) and value of beam separation at long range collisions are studied. The results are applicable to the Superconducting Super Collider (SSC) 10. Recognizing Critical Behavior amidst Minijets at the Large Hadron Collider Directory of Open Access Journals (Sweden) Rudolph C. Hwa 2015-01-01 Full Text Available The transition from quarks to hadrons in a heavy-ion collision at high energy is usually studied in two different contexts that involve very different transverse scales: local and nonlocal. Models that are concerned with the pT spectra and azimuthal anisotropy belong to the former, that is, hadronization at a local point in (η,ϕ space, such as the recombination model. The nonlocal problem has to do with quark-hadron phase transition where collective behavior through near-neighbor interaction can generate patterns of varying sizes in the (η,ϕ space. The two types of problems are put together in this paper both as brief reviews separately and to discuss how they are related to each other. In particular, we ask how minijets produced at LHC can affect the investigation of multiplicity fluctuations as signals of critical behavior. 
It is suggested that the existing data from the LHC have sufficient multiplicities in small p_T intervals to make feasible the observation of the distinctive clustering of soft particles, as well as voids, that characterizes critical behavior at the phase transition from quarks to hadrons, without any ambiguity posed by the clustering of jet particles.

11. Working group report: Dictionary of Large Hadron Collider signatures Indian Academy of Sciences (India) Centre for High Energy Physics, Indian Institute of Science, Bangalore 560 012, ... of 14 TeV will shed light on the origin of electroweak symmetry breaking and are expected to provide collider signatures of dark matter (DM), thus directly ... SUSY superpartners have a different spin compared to their partners, while LHT.

12. Vector-like fermion and standard Higgs production at hadron colliders International Nuclear Information System (INIS) Aguila, F. del; Ametller, L.; Kane, G.L.; Vidal, J.; Centro Mixto Valencia Univ./CSIC, Valencia 1990-01-01 Vector-like fermions are characterized by large neutral-current decay rates, in particular into Higgs bosons. If they exist, their clear signals at hadron colliders open a window to Higgs detection, especially in the intermediate Higgs mass region. We discuss in some detail rates and signatures for simple cases. (orig.)

13. Physics and Analysis at a Hadron Collider - Searching for New Physics (2/3) CERN Multimedia CERN. Geneva 2010-01-01 This is the second of three lectures which together discuss the physics of hadron colliders, with an emphasis on experimental techniques used for data analysis. This second lecture discusses techniques important for analyses searching for new physics, using the CDF B_s --> mu+ mu- search as a specific example. The lectures are aimed at graduate students.

14.
Taking Energy to the Physics Classroom from the Large Hadron Collider at CERN Science.gov (United States) Cid, Xabier; Cid, Ramon 2009-01-01 In 2008, the greatest experiment in history began. When in full operation, the Large Hadron Collider (LHC) at CERN will generate the greatest amount of information that has ever been produced in an experiment before. It will also reveal some of the most fundamental secrets of nature. Despite the enormous amount of information available on this…

15. Discovering a Light Scalar or Pseudoscalar at The Large Hadron Collider DEFF Research Database (Denmark) Frandsen, Mads Toudal; Sannino, Francesco 2012-01-01 The allowed standard model Higgs mass range has been reduced to a region between 114 and 130 GeV or above 500 GeV, at the 99% confidence level, since the Large Hadron Collider (LHC) program started. Furthermore, some of the experiments at the Tevatron and LHC observe excesses that could arise from...

16. Status of the 16 T dipole development program for a future hadron collider NARCIS (Netherlands) Tommasini, Davide; Arbelaez, Diego; Auchmann, Bernhard; Bajas, Hugues; Bajko, Marta; Ballarino, Amalia; Barzi, Emanuela; Bellomo, Giovanni; Benedikt, Michael; Izquierdo Bermudez, Susana; Bordini, Bernardo; Bottura, Luca; Brouwer, Lucas; Buzio, Marco; Caiffi, Barbara; Caspi, Shlomo; Dhalle, Marc; Durante, Maria; De Rijk, Gijs; Fabbricatore, Pasquale; Farinon, Stefania; Ferracin, Paolo; Gao, Peng; Gourlay, Steve; Juchno, Mariusz; Kashikhin, Vadim; Lackner, Friedrich; Lorin, Clement; Marchevsky, Maxim; Marinozzi, Vittorio; Martinez, Teresa; Munilla, Javier; Novitski, Igor; Ogitsu, Toru; Ortwein, Rafal; Perez, Juan Carlos; Petrone, Carlo; Prestemon, Soren; Prioli, Marco; Rifflet, Jean Michel; Rochepault, Etienne; Russenschuck, Stephan; Salmi, Tiina; Savary, Frederic; Schoerling, Daniel; Segreti, Michel; Senatore, Carmine; Sorbi, Massimo; Stenvall, Antti; Todesco, Ezio; Toral, Fernando; Verweij, Arjan P.; Wessel, W.A.J.; Wolf, Felix; Zlobin, Alexander. A next step of energy increase of hadron colliders beyond the LHC requires high-field superconducting magnets capable of providing a dipolar field in the range of 16 T in a 50 mm aperture with accelerator quality. These characteristics could meet the requirements for an upgrade of the LHC to twice

17. Summary of the Very Large Hadron Collider Physics and Detector subgroup International Nuclear Information System (INIS) Denisov, D.; Keller, S. 1996-01-01 We summarize the activity of the Very Large Hadron Collider Physics and Detector subgroup during Snowmass 96. Members of the group: M. Albrow, R. Diebold, S. Feher, L. Jones, R. Harris, D. Hedin, W. Kilgore, J. Lykken, F. Olness, T. Rizzo, V. Sirotenko, and J. Womersley. 9 refs

18. Large Hadron particle collider may not have its run this November CERN Multimedia 2007-01-01 "The Large Hadron Collider (LHC), based at CERN in Geneva, Switzerland, will not run in November this year as scheduled. The LHC was supposed to have a test run this year, before switching on the scientific search for the Higgs boson in 2008." (1 page)

19. Improved squark and gluino mass limits from searches for supersymmetry at hadron colliders NARCIS (Netherlands) Beenakker, W.; Brensing, S.; D'Onofrio, M.; Krämer, M.; Kulesza, A.; Laenen, E.; Martinzez, M.; Niessen, I. 2012-01-01 Squarks and gluinos have been searched for at hadron colliders in events with multiple jets and missing transverse energy. No excess has been observed to date, and from a comparison of experimental cross-section limits and theoretical cross-section predictions one can deduce lower bounds on the

20. Smash! exploring the mysteries of the Universe with the Large Hadron Collider CERN Document Server Latta, Sara 2017-01-01 What is the universe made of? At CERN, the European Organization for Nuclear Research, scientists have searched for answers to this question using the largest machine in the world: the Large Hadron Collider.
It speeds up tiny particles, then smashes them together, and the collision gives researchers a look at the building blocks of the universe.

1. CERN-Fermilab Hadron Collider Physics Summer School 2013 open for applications CERN Multimedia 2013-01-01 Mark your calendar for 28 August - 6 September 2013, when CERN will welcome students to the eighth CERN-Fermilab Hadron Collider Physics Summer School. Experiments at hadron colliders will continue to provide our best tools for exploring physics at the TeV scale for some time. With the completion of the 7-8 TeV runs of the LHC, and the final results from the full Tevatron data sample becoming available, a new era in particle physics is beginning, heralded by the Higgs-like particle recently discovered at 125 GeV. To realize the full potential of these developments, CERN and Fermilab are jointly offering a series of "Hadron Collider Physics Summer Schools", to prepare young researchers for these exciting times. The school has alternated between CERN and Fermilab, and will return to CERN for the eighth edition, from 28 August to 6 September 2013. The CERN-Fermilab Hadron Collider Physics Summer School is an advanced school which particularly targets young postdocs in exper...

2. CERN celebrating the lowering of the final detector element for the Large Hadron Collider CERN Multimedia 2008-01-01 In the early hours of the morning the final element of the Compact Muon Solenoid (CMS) detector began the descent into its underground experimental cavern in preparation for the start-up of CERN's Large Hadron Collider (LHC) this summer. This is a pivotal moment for the CMS collaboration.

3. One-loop helicity amplitudes for t anti-t production at hadron colliders International Nuclear Information System (INIS) Badger, Simon; Yundin, Valery 2011-01-01 We present compact analytic expressions for all one-loop helicity amplitudes contributing to t anti-t production at hadron colliders. Using recently developed generalised unitarity methods and a traditional Feynman-based approach, we produce a fast and flexible implementation. (orig.)

4. One-loop helicity amplitudes for t anti-t production at hadron colliders Energy Technology Data Exchange (ETDEWEB) Badger, Simon [The Niels Bohr International Academy and Discovery Center, Copenhagen (Denmark). Niels Bohr Inst.]; Sattler, Ralf [Humboldt Univ. Berlin (Germany). Inst. fuer Physik; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]; Yundin, Valery [Silesia Univ., Katowice (Poland). Inst. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)] 2011-01-15 We present compact analytic expressions for all one-loop helicity amplitudes contributing to t anti-t production at hadron colliders. Using recently developed generalised unitarity methods and a traditional Feynman-based approach, we produce a fast and flexible implementation. (orig.)

5. Preliminary design of the beam screen cooling for the Future Circular Collider of hadron beams Science.gov (United States) Kotnig, C.; Tavian, L. 2015-12-01 Following recommendations of the recent update of the European strategy in particle physics, CERN has undertaken an international study of possible future circular colliders beyond the LHC. This study considers an option for a very high energy (100 TeV) hadron-hadron collider located in a quasi-circular underground tunnel having a circumference of 80 to 100 km. The synchrotron radiation emitted by the high-energy hadron beam increases by more than two orders of magnitude compared to the LHC. To reduce the entropic load on the superconducting magnets' refrigeration system, beam screens are indispensable to extract the heat load at a higher temperature level.
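The entropic argument in the beam-screen abstract above has a simple quantitative core: the ideal (Carnot) compressor work needed to remove one watt of heat grows as T_ambient/T_cold - 1, so intercepting the synchrotron-radiation load on a screen at elevated temperature is far cheaper than letting it reach the magnet cold mass. A minimal sketch in Python; the temperatures are illustrative round numbers, not values taken from the cited study:

```python
def carnot_work_per_watt(t_cold_k, t_warm_k=300.0):
    """Minimum (Carnot) compressor work needed to extract 1 W of heat at
    t_cold_k, rejecting it to the environment at t_warm_k."""
    return t_warm_k / t_cold_k - 1.0

# Extracting 1 W of synchrotron-radiation heat directly at a 1.9 K magnet
# cold mass vs. intercepting it on a beam screen at 50 K (temperatures
# are illustrative, not taken from the FCC study):
w_cold_mass = carnot_work_per_watt(1.9)   # ~157 W of ideal work per watt
w_screen = carnot_work_per_watt(50.0)     # ~5 W of ideal work per watt
print(f"{w_cold_mass:.0f} W vs {w_screen:.0f} W per watt extracted")
```

With these numbers, extraction at 1.9 K costs roughly thirty times more ideal work per watt than extraction at 50 K, which is the motivation for beam screens that remove the heat load at a higher temperature level.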
After illustrating the decisive constraints of the beam screen's refrigeration design, this paper presents a preliminary design of the length of a continuous cooling loop, comparing helium and neon for different cooling-channel geometries, with emphasis on the cooling length limitations and the exergetic efficiency.

6. Preliminary design of the beam screen cooling for the Future Circular Collider of hadron beams CERN Document Server Kotnig, C 2015-01-01 Following recommendations of the recent update of the European strategy in particle physics, CERN has undertaken an international study of possible future circular colliders beyond the LHC. This study considers an option for a very high energy (100 TeV) hadron-hadron collider located in a quasi-circular underground tunnel having a circumference of 80 to 100 km. The synchrotron radiation emitted by the high-energy hadron beam increases by more than two orders of magnitude compared to the LHC. To reduce the entropic load on the superconducting magnets' refrigeration system, beam screens are indispensable to extract the heat load at a higher temperature level. After illustrating the decisive constraints of the beam screen's refrigeration design, this paper presents a preliminary design of the length of a continuous cooling loop, comparing helium and neon for different cooling-channel geometries, with emphasis on the cooling length limitations and the exergetic efficiency.

7. Signals of doubly-charged Higgsinos at the CERN Large Hadron Collider International Nuclear Information System (INIS) Demir, Durmus A.; Frank, Mariana; Turan, Ismail; Huitu, Katri; Rai, Santosh Kumar 2008-01-01 Several supersymmetric models with extended gauge structures, motivated either by grand unification or by neutrino mass generation, predict light doubly-charged Higgsinos. In this work we study the production and decay of doubly-charged Higgsinos present in left-right supersymmetric models, and show that they invariably lead to novel collider signals not found in the minimal supersymmetric model, in any of its extensions motivated by the μ problem, or even in extra-dimensional theories. We investigate their distinctive signatures at the Large Hadron Collider in both pair- and single-production modes, and show that they are powerful tools in determining the underlying model via measurements at the Large Hadron Collider experiments.

8. Unintegrated parton distributions and electroweak boson production at hadron colliders CERN Document Server Watt, G; Kimber, M A; Ryskin, M G 2004-01-01 We describe the use of doubly-unintegrated parton distributions in hadron-hadron collisions, using the (z,k_t)-factorisation prescription where the transverse momentum of the incoming parton is generated in the last evolution step. We apply this formalism to calculate the transverse momentum (P_T) distributions of produced W and Z bosons and compare the predictions to Tevatron Run 1 data. We find that the observed P_T distributions can be generated almost entirely by the leading-order q_1 q_2 -> W,Z subprocesses, using known and universal doubly-unintegrated quark distributions. We also calculate the P_T distribution of the Standard Model Higgs boson at the LHC, where the dominant production mechanism is gluon-gluon fusion.

9. Ultra-high-field magnets for future hadron colliders International Nuclear Information System (INIS) McIntyre, P.M.; Shen, W. 1997-01-01 Several new concepts in magnetic design and coil fabrication are being incorporated into designs for ultra-high-field collider magnets: a 16 Tesla block-coil dual dipole, also using Nb3Sn cable, featuring simple pancake coil construction and face-loaded prestress geometry; a 330 T/m block-coil quadrupole; and a ~20 Tesla pipe-geometry dual dipole, using A15 or BSCCO tape.
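The connection between the 16 T dipoles discussed above and the 100 TeV, 80-100 km collider option is fixed by magnetic rigidity: p ≈ 0.3·B·ρ for a singly charged particle (p in GeV/c, B in tesla, ρ in metres). A rough cross-check in Python, with numbers rounded for illustration:

```python
import math

def bending_radius_m(p_gev, b_tesla):
    """Magnetic rigidity: p [GeV/c] ~= 0.3 * B [T] * rho [m] for a singly
    charged particle, so rho = p / (0.3 * B)."""
    return p_gev / (0.29979 * b_tesla)

# 100 TeV centre-of-mass pp collisions -> 50 TeV (50 000 GeV) per beam.
rho = bending_radius_m(50_000, 16.0)    # ~10.4 km
arc_km = 2 * math.pi * rho / 1000       # ~65 km of pure bending
print(f"rho = {rho / 1000:.1f} km, minimum bending circumference = {arc_km:.0f} km")
# A real ring also needs straight sections and gaps between magnets,
# which pushes the tunnel toward the 80-100 km quoted above.
```

The bending-only circumference of about 65 km is consistent with the 80-100 km tunnel once the dipole filling factor and insertions are accounted for.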
Field design and fabrication issues are discussed for each magnet.

10. Low-cost hadron colliders at Fermilab: A discussion paper International Nuclear Information System (INIS) Foster, G.W.; Malamud, E. 1996-01-01 New, more economic approaches are required to continue the dramatic exponential rise in collider energies represented by the well-known Livingston plot. The old idea of low-cost, low-field iron-dominated magnets in a small-diameter pipe may become feasible in the next decade with dramatic recent advances in technology: (1) advanced tunneling technologies for small-diameter, non-human-accessible tunnels; (2) accurate remote guidance systems for tunnel survey and boring-machine steering; (3) high-T_c superconductors operating at liquid N2 or liquid H2 temperatures; (4) industrial applications of remote manipulation and robotics; (5) digitally multiplexed electronics to minimize cables; (6) achievement of high luminosities in p-p and p-p̄ colliders. The goal of this paper is to stimulate continuing discussions on approaches to this new collider and to identify critical areas needing calculations, construction of models, proof-of-principle experiments, and full-scale prototypes in order to determine feasibility and arrive at cost estimates.

11. Status of the SLAC Linear Collider Project International Nuclear Information System (INIS) Stiening, R. 1983-01-01 The SLAC Linear Collider Project has two principal goals. The first is to serve as a prototype for a future very high energy linear electron-positron collider. The second is to quickly, at low cost, achieve sufficient luminosity at 100 GeV center-of-mass energy to explore the physics of the Z^0. The first goal is important to the future of electron-positron physics because the rapid increase of synchrotron radiation with energy causes the cost of circular storage-ring colliders to rise steeply with energy, whereas the cost of linear colliders increases only in proportion to the center-of-mass energy.
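The cost argument for circular electron-positron machines in the SLC entry rests on the steep growth of synchrotron radiation: an electron's energy loss per turn scales as E^4/ρ, roughly U0 [GeV] ≈ 8.85 × 10^-5 · E^4 [GeV^4] / ρ [m]. A small illustration in Python; the bending radius is a LEP-sized round number chosen for the example, not a value from the abstract:

```python
def sr_loss_per_turn_gev(e_gev, rho_m):
    """Energy an electron radiates per turn in a circular machine:
    U0 [GeV] ~= 8.85e-5 * E^4 [GeV^4] / rho [m]."""
    return 8.85e-5 * e_gev**4 / rho_m

# A LEP-sized ring (bending radius ~3100 m, value illustrative):
u_100 = sr_loss_per_turn_gev(100.0, 3100.0)   # ~2.9 GeV lost per turn
u_200 = sr_loss_per_turn_gev(200.0, 3100.0)   # 16x worse: E^4 scaling
print(f"{u_100:.2f} GeV/turn at 100 GeV, {u_200:.1f} GeV/turn at 200 GeV")
```

Doubling the beam energy at fixed radius multiplies the loss per turn by 16, which is why circular e+e- collider costs climb so much faster than linear-collider costs that grow only in proportion to the energy.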
The second is important because the existence at SLAC of a linear accelerator which can be converted at low cost to collider operation makes possible a unique opportunity to quickly achieve 100 GeV center-of-mass collisions. At the design luminosity of 6.0 × 10^30, many thousands of Z^0 decays should be observed in each day of operation.

12. For Information: CERN-Fermilab 2006 Hadron Collider Physics Summer School CERN Multimedia 2006-01-01 Applications are now open for the CERN-Fermilab 2006 Hadron Collider Physics Summer School, August 9-18, 2006. Please go to the school web site http://hcpss.fnal.gov/ and follow the links to the application process. The APPLICATION DEADLINE IS APRIL 8, 2006. Successful applicants and support awards will be announced shortly thereafter. Also available on the web is the tentative academic program of the school. The main goal of the CERN-Fermilab Hadron Collider Physics Summer Schools is to offer students and young researchers a broad picture of both the theoretical and experimental aspects of hadron collider physics. The emphasis of the first school will be on the physics potential of the first years of data taking at the LHC, and on the experimental and theoretical tools needed to exploit that potential. A series of lectures and informal discussions will include an introduction to the theoretical and phenomenological framework of hadron collisions, and current theoretical models of frontier physics, as...

13. Rapidity correlations in Wγ production at hadron colliders International Nuclear Information System (INIS) Baur, U.; Errede, S.; Landsberg, G. 1994-01-01 We study the correlation of photon and charged-lepton pseudorapidities, η(γ) and η(l), l = e, μ, in pp̄ → W^±γ + X → l^± p̸_T γ + X. In the standard model, the Δη(γ,l) = η(γ) - η(l) differential cross section is found to exhibit a pronounced dip at Δη(γ,l) ∼ ∓0.3 (= 0) in pp̄ (pp) collisions, which originates from the radiation zero present in qq̄′ → Wγ. The sensitivity of the Δη(γ,l) distribution to higher-order QCD corrections, nonstandard WWγ couplings, the W + jet "fake" background, and the cuts imposed is explored. At hadron supercolliders, next-to-leading order QCD corrections are found to considerably obscure the radiation zero. The advantages of the Δη(γ,l) distribution over other quantities which are sensitive to the radiation zero are discussed. We conclude that photon-lepton rapidity correlations at the Fermilab Tevatron offer a unique opportunity to search for the standard model radiation zero in hadronic Wγ production.

14. The Quest for High Luminosity in Hadron Colliders (413th Brookhaven Lecture) International Nuclear Information System (INIS) Fischer, Wolfram 2006-01-01 In 1909, by bombarding a gold foil with alpha particles from a radioactive source, Ernest Rutherford and coworkers learned that the atom is made of a nucleus surrounded by an electron cloud. Ever since, scientists have been probing deeper and deeper into the structure of matter using the same technique. With increasingly powerful machines, they accelerate beams of particles to higher and higher energies, to penetrate more forcefully into the matter being investigated and reveal more about the contents and behavior of the unknown particle world. To achieve the highest collision energies, projectile particles must be as heavy as possible, and collide not with a fixed target but with another beam traveling in the opposite direction. These experiments are done in machines called hadron colliders, which are some of the largest and most complex research tools in science. Five such machines have been built and operated, with Brookhaven's Relativistic Heavy Ion Collider (RHIC) currently the record holder for the total collision energy. One more such machine is under construction.
Colliders have two vital performance parameters on which their success depends: one is their collision energy, and the other the number of particle collisions they can produce, which is proportional to a quantity known as the luminosity. One of the tremendous achievements in the world's latest collider, RHIC, is the amazing luminosity that it produces in addition to its high energy. To learn about the performance evolution of these colliders and the way almost insurmountable difficulties can be overcome, especially in RHIC, join Wolfram Fischer, a physicist in the Collider-Accelerator (C-A) Department, who will give the next Brookhaven Lecture, on 'The Quest for High Luminosity in Hadron Colliders.'

15. A Large Hadron Electron Collider at CERN: Report on the Physics and Design Concepts for Machine and Detector CERN Document Server Abelleira Fernandez, J.L.; Akay, A.N.; Aksakal, H.; Albacete, J.L.; Alekhin, S.; Allport, P.; Andreev, V.; Appleby, R.B.; Arikan, E.; Armesto, N.; Azuelos, G.; Bai, M.; Barber, D.; Bartels, J.; Behnke, O.; Behr, J.; Belyaev, A.S.; Ben-Zvi, I.; Bernard, N.; Bertolucci, S.; Bettoni, S.; Biswal, S.; Blumlein, J.; Bottcher, H.; Bogacz, A.; Bracco, C.; Brandt, G.; Braun, H.; Brodsky, S.; Buning, O.; Bulyak, E.; Buniatyan, A.; Burkhardt, H.; Cakir, I.T.; Cakir, O.; Calaga, R.; Cetinkaya, V.; Ciapala, E.; Ciftci, R.; Ciftci, A.K.; Cole, B.A.; Collins, J.C.; Dadoun, O.; Dainton, J.; De Roeck, A.; d'Enterria, D.; Dudarev, A.; Eide, A.; Enberg, R.; Eroglu, E.; Eskola, K.J.; Favart, L.; Fitterer, M.; Forte, S.; Gaddi, A.; Gambino, P.; Garcia Morales, H.; Gehrmann, T.; Gladkikh, P.; Glasman, C.; Godbole, R.; Goddard, B.; Greenshaw, T.; Guffanti, A.; Guzey, V.; Gwenlan, C.; Han, T.; Hao, Y.; Haug, F.; Herr, W.; Herve, A.; Holzer, B.J.; Ishitsuka, M.; Jacquet, M.; Jeanneret, B.; Jimenez, J.M.; Jowett, J.M.; Jung, H.; Karadeniz, H.; Kayran, D.; Kilic, A.; Kimura, K.; Klein, M.; Klein, U.; Kluge, T.; Kocak, F.; Korostelev, M.; Kosmicki, A.; Kostka, P.; Kowalski, H.; Kramer, G.; Kuchler, D.; Kuze, M.; Lappi, T.; Laycock, P.; Levichev, E.; Levonian, S.; Litvinenko, V.N.; Lombardi, A.; Maeda, J.; Marquet, C.; Mellado, B.; Mess, K.H.; Milanese, A.; Moch, S.; Morozov, I.I.; Muttoni, Y.; Myers, S.; Nandi, S.; Nergiz, Z.; Newman, P.R.; Omori, T.; Osborne, J.; Paoloni, E.; Papaphilippou, Y.; Pascaud, C.; Paukkunen, H.; Perez, E.; Pieloni, T.; Pilicer, E.; Pire, B.; Placakyte, R.; Polini, A.; Ptitsyn, V.; Pupkov, Y.; Radescu, V.; Raychaudhuri, S.; Rinol, L.; Rohini, R.; Rojo, J.; Russenschuck, S.; Sahin, M.; Salgado, C.A.; Sampei, K.; Sassot, R.; Sauvan, E.; Schneekloth, U.; Schorner-Sadenius, T.; Schulte, D.; Senol, A.; Seryi, A.; Sievers, P.; Skrinsky, A.N.; Smith, W.; Spiesberger, H.; Stasto, A.M.; Strikman, M.; Sullivan, M.; Sultansoy, S.; Sun, Y.P.; Surrow, B.; Szymanowski, L.; Taels, P.; Tapan, I.; Tasci, T.; Tassi, E.; Ten Kate, H.; Terron, J.; Thiesen, H.; Thompson, L.; Tokushuku, K.; Tomas Garcia, R.; Tommasini, D.; Trbojevic, D.; Tsoupas, N.; Tuckmantel, J.; Turkoz, S.; Trinh, T.N.; Tywoniuk, K.; Unel, G.; Urakawa, J.; VanMechelen, P.; Variola, A.; Veness, R.; Vivoli, A.; Vobly, P.; Wagner, J.; Wallny, R.; Wallon, S.; Watt, G.; Weiss, C.; Wiedemann, U.A.; Wienands, U.; Willeke, F.; Xiao, B.W.; Yakimenko, V.; Zarnecki, A.F.; Zhang, Z.; Zimmermann, F.; Zlebcik, R.; Zomer, F. 2012-01-01 The physics programme and the design are described of a new collider for particle and nuclear physics, the Large Hadron Electron Collider (LHeC), in which a newly built electron beam of 60 GeV, up to possibly 140 GeV, energy collides with the intense hadron beams of the LHC. Compared to HERA, the kinematic range covered is extended by a factor of twenty in the negative four-momentum squared, $Q^2$, and in the inverse Bjorken $x$, while with the design luminosity of $10^{33}$ cm$^{-2}$ s$^{-1}$ the LHeC is projected to exceed the integrated HERA luminosity by two orders of magnitude.
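The quoted factor-of-twenty extension in kinematic range can be checked from the beam energies alone, since the maximum Q^2 in deep inelastic ep scattering is s ≈ 4·E_e·E_p when the particle masses are neglected. A quick check in Python for the 60 GeV electron baseline (the 140 GeV option extends the reach further):

```python
def s_max_gev2(e_lepton_gev, e_hadron_gev):
    """Maximum negative four-momentum transfer squared in ep collisions:
    Q^2_max = s ~= 4 * E_e * E_p (lepton and hadron masses neglected)."""
    return 4.0 * e_lepton_gev * e_hadron_gev

s_hera = s_max_gev2(27.5, 920)   # HERA: 27.5 GeV electrons on 920 GeV protons
s_lhec = s_max_gev2(60, 7000)    # LHeC baseline: 60 GeV electrons on LHC protons
print(f"kinematic reach extended by a factor {s_lhec / s_hera:.1f}")
```

The ratio comes out near 17 for the baseline, consistent with the "factor of twenty" quoted once the higher-energy electron option is taken into account.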
The physics programme is devoted to an exploration of the energy frontier, complementing the LHC and its discovery potential for physics beyond the Standard Model with high-precision deep inelastic scattering measurements. These are designed to investigate a variety of fundamental questions in strong and electroweak interactions. The physics programme also includes electron-deuteron and electron-ion scattering in a $(Q^2, 1/x)$ ran...

16. Hadron collider physics. Final report, February 1, 1991--January 31, 1994 International Nuclear Information System (INIS) 1994-01-01 This report contains summaries of work accomplished for Tasks A1 and A2 (Hadron Collider physics) and Task B. During the first half of the contract period, work for Task A1 was focused on the design and implementation of both the D0 detector high-voltage system and the Level 1 muon trigger. During the second half the emphasis shifted to data analysis. For the major project of Task A2, OPAL, they have recorded and analyzed over one million decays of the Z^0 boson. They began participating in the RD5 experiment at the CERN SPS to study muon tracking in high-energy collisions. The LSND experiment at LAMPF recorded physics data in the fall of 1993 and expects to report analysis results at upcoming conferences. In this three-year period, the theory task, Task B, completed a number of projects, resulting in over 40 publications. The main emphasis of the research is on a better understanding of the fundamental interactions of quarks and leptons, and the possibility of physics beyond the standard model.

17. Benchmarking the Particle Background in the Large Hadron Collider Experiments CERN Document Server Gschwendtner, Edda; Fabjan, Christian Wolfgang; Hessey, N P; Otto, Thomas 2002-01-01 Background benchmarking measurements have been made to check the low-energy processes which will contribute via nuclear reactions to the radiation background in the LHC experiments at CERN. Previously these processes were only evaluated with Monte Carlo simulations, estimated to be reliable within an uncertainty factor of 2.5. Measurements were carried out in an experimental set-up comparable to the shielding of ATLAS, one of the general-purpose experiments at the LHC. The absolute yield and spectral measurements of photons and neutrons emanating from the final stages of the hadronic showers were made with a Bi_4Ge_3O_{12} (BGO) detector. The particle transport code FLUKA was used for detailed simulations. Comparisons between measurements and simulations show that they agree within 20%, and hence the uncertainty factor resulting from the shower processes can be reduced to 1.2.

18. Exploring Higher Dimensional Black Holes at the Large Hadron Collider CERN Document Server Harris, C M; Parker, M A; Richardson, P; Sabetfakhri, A; Webber, Bryan R 2005-01-01 In some extra dimension theories with a TeV fundamental Planck scale, black holes could be produced in future collider experiments. Although cross sections can be large, measuring the model parameters is difficult due to the many theoretical uncertainties. Here we discuss those uncertainties and then we study the experimental characteristics of black hole production and decay at a typical detector, using the ATLAS detector as a guide. We present a new technique for measuring the temperature of black holes that applies to many models. We apply this technique to a test case with four extra dimensions and, using an estimate of the parton-level production cross section error of 20%, determine the Planck mass to 15% and the number of extra dimensions to ±0.75.

19. Exploring higher dimensional black holes at the Large Hadron Collider International Nuclear Information System (INIS) Harris, Christopher M.; Palmer, Matthew J.; Parker, Michael A.; Richardson, Peter; Sabetfakhri, Ali; Webber, Bryan R. 2005-01-01 In some extra dimension theories with a TeV fundamental Planck scale, black holes could be produced in future collider experiments. Although cross sections can be large, measuring the model parameters is difficult due to the many theoretical uncertainties. Here we discuss those uncertainties and then we study the experimental characteristics of black hole production and decay at a typical detector, using the ATLAS detector as a guide. We present a new technique for measuring the temperature of black holes that applies to many models. We apply this technique to a test case with four extra dimensions and, using an estimate of the parton-level production cross section error of 20%, determine the Planck mass to 15% and the number of extra dimensions to ±0.75.

20. Electroweak and flavor dynamics at hadron colliders - I International Nuclear Information System (INIS) Eichten, E.; Lane, K. 1998-02-01 This is the first of two reports cataloging the principal signatures of electroweak and flavor dynamics at p̄p and pp colliders. Here, we discuss some of the signatures of dynamical electroweak and flavor symmetry breaking. The framework for dynamical symmetry breaking we assume is technicolor, with a walking coupling α_TC, and extended technicolor. The reactions discussed occur mainly at subprocess energies √s ≲ 1 TeV. They include production of color-singlet and octet technirhos and their decay into pairs of technipions, longitudinal weak bosons, or jets. Technipions, in turn, decay predominantly into heavy fermions. This report will appear in the Proceedings of the 1996 DPF/DPB Summer Study on New Directions for High Energy Physics (Snowmass 96).

1. Calorimeter based detectors for high energy hadron colliders International Nuclear Information System (INIS) 1993-01-01 The work was directed in two complementary directions, the D0 experiment at Fermilab and the GEM detector for the SSC.
Efforts have been towards the data taking and analysis with the newly commissioned D0 detector at Fermilab in the bar pp Collider run that started in May 1992 and ended on June 1, 1993. We involved running and calibration of the calorimeter and tracking chambers, the second level trigger development, and various parts of the data analysis, as well as studies for the D0 upgrade planned in the second half of this decade. Another major accomplishment was the ''delivery'' of the Technical Design Report for the GEM SSC detector. Efforts to the overall detector and magnet design, design of the facilities, installation studies, muon system coordination, muon chamber design and tests, muon system simulation studies, and physics simulation studies. In this document we describe these activities separately 2. Analysis of possible free quarks production process at hadron colliders International Nuclear Information System (INIS) Boos, E.E.; Ermolov, P.F.; Golubkov, Yu.A. 1990-01-01 The authors regard the process of free b-quark production in proton-antiproton collisions at energies of new colliders. It is suggested to use the pair of unlike sign with transverse momenta in the range p tr >5 GeV/c to trigger this process. Additionally it is suggested to measure a weak ionization signal from free s-quark from b-quark decay. The calculations of free bb-quarks production cross-sections have been made taking into account their energy losses in strong colour field. It is shown that the most effective range of lepton transverse momenta for observation of the process does not depend on threshold energy and is approximately equal to one for usual b mesons. 16 refs.; 10 figs 3. Inside CERN's Large Hadron Collider from the proton to the Higgs boson CERN Document Server AUTHOR|(CDS)2051256 2016-01-01 The book aims to explain the historical development of particle physics, with special emphasis on CERN and collider physics. 
It describes in detail the LHC accelerator and its detectors, describing the science involved as well as the sociology of big collaborations, culminating with the discovery of the Higgs boson. Readers are led step-by-step to understanding why we do particle physics, as well as the tools and problems involved in the field. It provides an insider's view on the experiments at the Large Hadron Collider. 4. NLO production of W' bosons at hadron colliders using the MCatNLO and POWHEG methods International Nuclear Information System (INIS) Papaefstathiou, A.; Latunde-Dada, O. 2009-01-01 We present a next-to-leading order (NLO) treatment of the production of a new charged heavy vector boson, generically called W', at hadron colliders via the Drell-Yan process. We fully consider the interference effects with the Standard Model W boson and allow for arbitrary chiral couplings to quarks and leptons. We present results at both leading order (LO) and NLO in QCD using the MCatNLO/Herwig++ and POWHEG methods. We derive theoretical observation curves on the mass-width plane for both the LO and NLO cases at different collider luminosities. The event generator used, Wpnlo, is fully customisable and publicly available. 5. Emittance growth due to noise and its suppression with the Feedback system in large hadron colliders International Nuclear Information System (INIS) Lebedev, V.; Parkhomchuk, V.; Shiltsev, V.; Stupakov, G. 1993-03-01 The problem of emittance growth due to random fluctuation of the magnetic field in hadron colliders is considered. Based on a simple one-dimensional linear model, a formula for an emittance growth rate as a function of the noise spectrum is derived. Different sources of the noise are analyzed and their role is estimated for the Superconducting Super Collider (SSC). A theory of feedback suppression of the emittance growth is developed which predicts the residual growth of the emittance in the accelerator with a feedback system 6. 
How to Find a Hidden World at the Large Hadron Collider CERN Document Server Wells, James D. 2008-01-01 I discuss how the Large Hadron Collider era should broaden our view of particle physics research, and apply this thinking to the case of Hidden Worlds. I focus on one of the simplest representative cases of a Hidden World, and detail the rich implications it has for LHC physics, including universal suppression of Higgs boson production, trans-TeV heavy Higgs boson signatures, heavy-to-light Higgs boson decays, weakly coupled exotic gauge bosons, and Higgs boson decays to four fermions via light exotic gauge bosons. Some signatures may be accessible in the very early stages of collider operation, whereas others motivate a later high-luminosity upgrade. 7. Higgs Bosons, Electroweak Symmetry Breaking, and the Physics of the Large Hadron Collider CERN Document Server Quigg, Chris 2007-01-01 The Large Hadron Collider, a 7 + 7 TeV proton-proton collider under construction at CERN (the European Laboratory for Particle Physics in Geneva), will take experiments squarely into a new energy domain where mysteries of the electroweak interaction will be unveiled. What marks the 1-TeV scale as an important target? Why is understanding how the electroweak symmetry is hidden important to our conception of the world around us? What expectations do we have for the agent that hides the electroweak symmetry? Why do particle physicists anticipate a great harvest of discoveries within reach of the LHC? 8. Processes with weak gauge boson pairs at hadron colliders. Precise predictions and future prospects Energy Technology Data Exchange (ETDEWEB) Salfelder, Lukas 2017-02-08 percent.
Due to the comparably small production cross sections and the generally large QCD backgrounds, studying VBS reactions at hadron colliders is an intricate task, and even with the target luminosity of several 100 fb^-1 presumably collected at the end of LHC Run II, dedicated differential analyses will hardly be realizable. In our analysis we therefore investigate the opportunities of a potential follow-up project of the LHC which is proposed to operate at a center-of-mass energy of 100 TeV and assumed to deliver a total integrated luminosity of 30 ab^-1. For several decay modes we perform a detailed signal-to-background analysis, revealing the excellent possibilities for future measurements of VBS processes at yet unprecedented energy scales that such a machine facilitates. With process-specific event-selection criteria we manage to significantly reduce the background contribution, while due to the deep energy reach definitely sufficient events of the VBS signal remain for a detailed examination at the differential level. 9. TRISTAN, electron-positron colliding beam project International Nuclear Information System (INIS) 1987-03-01 In this report the e+e- colliding beam program, which is now referred to as the TRISTAN Project, is described. A brief chronology and outline of the TRISTAN Project is given in Chapter 1. Chapter 2 of this article gives a discussion of physics objectives at TRISTAN. Chapter 3 treats the overall description of the accelerators. Chapter 4 describes the design of each of the accelerator systems. In Chapter 5, detector facilities are discussed in some detail. A description of accelerator tunnels, experimental areas, and utilities is given in Chapter 6. In the Appendix, the publications on the TRISTAN Project are listed. (author) 10.
CERN's large hadron collider: Radiation protection aspects of design and commissioning International Nuclear Information System (INIS) Forkel-Wirth, Doris; Brugger, Markus; Menzel, Hans; Roesler, Stefan; Vincke, Heinz; Vincke, Helmut 2008-01-01 Full text: CERN, the world's largest particle physics laboratory, provides high energy hadron beams for experiments exploring matter. For this purpose various accelerators are operated, and in 2008 the last link will be added to the accelerator chain: beam will be injected into CERN's new 'flagship', the Large Hadron Collider (LHC). From then on high energy physics experiments will exploit the LHC's colliding beams of protons and lead ions with a center of mass energy of 14 TeV and 1150 TeV, respectively. Radiation protection aspects were taken into account during the whole duration of the design phase. Conservative design constraints were defined in 1996; some years later some of them, in particular with respect to the dose to occupationally exposed workers, had to be readjusted to account for the latest development in CERN's radiation protection rules and regulations. Numerous radiation protection studies had been performed to ensure a layout of the machine and its experiments in compliance with these constraints. These studies assessed all radiation risks related to the various beam-operation modes of the accelerator. In all cases external exposure was identified as the major risk: due to high-energy mixed radiation fields during beam-on, and due to beta and gamma radiation fields caused by induced radioactivity during beam-off. Countermeasures were implemented, such as an optimized beam operation to limit beam losses, installation of thick shielding, prohibition of access to the major part of the LHC underground areas during beam-operation, and optimization of the equipment and its handling during maintenance and repair.
Detailed Monte Carlo simulations were performed to derive from the various beam loss scenarios the dose rates the workers will be exposed to. Individual and collective doses were projected based on the calculations and the maintenance scenarios provided by the teams concerned. In an iterative way the layout of the various regions was optimized 11. Probing gauge-phobic heavy Higgs bosons at high energy hadron colliders Directory of Open Access Journals (Sweden) Yu-Ping Kuang 2015-07-01 Full Text Available We study probes of the gauge-phobic (or nearly gauge-phobic) heavy Higgs bosons (GPHB) at high energy hadron colliders including the 14 TeV LHC and the 50 TeV Super Proton–Proton Collider (SppC). We take the process pp→tt¯tt¯, and study it at the hadron level, including simulating the jet formation and top quark tagging (with jet substructure). We show that, for a GPHB with M_H < 800 GeV, M_H can be determined by adjusting the value of M_H in the theoretical pT(b1) distribution to fit the observed pT(b1) distribution, and the resonance peak can be seen at the SppC for M_H = 800 GeV and 1 TeV. 12. Accelerator physics and technology limitations to ultimate energy and luminosity in very large hadron colliders Energy Technology Data Exchange (ETDEWEB) P. Bauer et al. 2002-12-05 The following presents a study of the accelerator physics and technology limitations to ultimate energy and luminosity in very large hadron colliders (VLHCs). The main accelerator physics limitations to ultimate energy and luminosity in future energy frontier hadron colliders are synchrotron radiation (SR) power, proton-collision debris power in the interaction regions (IR), number of events per crossing, stored energy per beam, and beam stability [1]. Quantitative estimates of these limits were made and translated into scaling laws that could be inscribed into the particle energy versus machine size plane to delimit the boundaries for possible VLHCs.
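The synchrotron-radiation limit listed in the VLHC study above follows from the classical radiated-energy formula. A rough numerical sketch (the formula is standard; the beam energy, bending radius, and the 88.5-coefficient convention are illustrative assumptions, not values from the abstract):

```python
def sr_loss_per_turn_kev(energy_gev, bending_radius_m, mass_gev):
    """Energy lost to synchrotron radiation per particle per turn, in keV.

    Uses the familiar electron formula U0[keV] = 88.5 * E[GeV]^4 / rho[m],
    rescaled by (m_e / m)^4 for a particle of arbitrary mass.
    """
    electron_mass_gev = 0.000511
    return 88.5 * energy_gev**4 / bending_radius_m * (electron_mass_gev / mass_gev) ** 4


# LHC-like protons: 7 TeV beam energy, ~2804 m magnetic bending radius (assumed values)
u0 = sr_loss_per_turn_kev(7000.0, 2804.0, 0.938272)
print(f"SR loss per proton per turn: {u0:.1f} keV")  # ~6.7 keV
```

The steep E^4 scaling is what makes SR power a dominant design limit for a 100+ TeV machine even though the per-proton loss at the LHC is only a few keV per turn.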
Eventually, accelerator simulations were performed to obtain the maximum achievable luminosities within these boundaries. Although this study aimed at investigating a general VLHC, it was unavoidable to refer in some instances to the recently studied, [2], 200 TeV center-of-mass energy VLHC stage-2 design (VLHC-2). A more thorough rendering of this work can be found in [3]. 13. Stop decay into right-handed sneutrino LSP at hadron colliders International Nuclear Information System (INIS) Gouvea, Andre de; Gopalakrishna, Shrihari; Porod, Werner 2006-01-01 Right-handed neutrinos offer us the possibility of accommodating neutrino masses. In a supersymmetric model, this implies the existence of right-handed sneutrinos. Right-handed sneutrinos are expected to be as light as other supersymmetric particles if the neutrinos are Dirac fermions or if the lepton-number breaking scale is at (or below) the supersymmetry (SUSY) breaking scale, assumed to be around the electroweak scale. Depending on the mechanism of SUSY breaking, the lightest right-handed sneutrino may be the lightest supersymmetric particle (LSP). We consider the unique hadron collider signatures of a weak scale right-handed sneutrino LSP, assuming R-parity conservation. For concreteness, we concentrate on stop pair-production and decay at the Tevatron and the Large Hadron Collider, and briefly comment on the production and decay of other supersymmetric particles 14. Prospects for heavy charged Higgs search at hadron Colliders CERN Document Server Belyaev, A S; Guasch, J; Solà, J; Belyaev, Alexander; Garcia, David; Guasch, Jaume; Sola, Joan 2002-01-01 We investigate the prospects for heavy charged Higgs boson production through the mechanisms pp̄(pp) → tbH+ + X at the upgraded Fermilab Tevatron and at the upcoming LHC collider at CERN, respectively. We focus on the MSSM case at high values of tan β > m_top/m_bot and include the leading SUSY quantum corrections.
A detailed study is performed for all important production modes and basic background processes for the "ttbb" signature. At the upgraded Tevatron a charged Higgs signal is potentially viable in the 220-250 GeV range or excluded at 95% CL up to 300 GeV. At the LHC, an H+ of mass up to 800 GeV can be discovered at 5 sigma or else be excluded up to a mass of ~ 1.5 TeV. The presence of SUSY quantum effects may highly influence the discovery potential in both machines and can typically shift these limits by 200 GeV at the LHC. 15. Calorimeter based detectors for high energy hadron colliders International Nuclear Information System (INIS) 1992-01-01 This document provides a progress report on research that has been conducted under DOE Grant DEFG0292ER40697 for the past year, and describes proposed work for the second year of this 8-year grant starting November 15, 1992. Personnel supported by the contract include 4 faculty, 1 research faculty, 4 postdocs, and 9 graduate students. The work under this grant has in the past been directed in two complementary directions -- D0 at Fermilab, and the second SSC detector GEM. A major effort has been towards the construction and commissioning of the new Fermilab Collider detector D0, including design, construction, testing, and the commissioning of the central tracking and the central calorimeters. The first D0 run is now underway, with data taking and analysis of the first events. Trigger algorithms, data acquisition, calibration of tracking and calorimetry, data scanning and analysis, and planning for future upgrades of the D0 detector with the advent of the FNAL Main Injector are all involved. The other effort supported by this grant has been towards the design of GEM, a large and general-purpose SSC detector with special emphasis on accurate muon measurement over a large solid angle. This effort will culminate this year in the presentation to the SSC laboratory of the GEM Technical Design Report.
Contributions are being made to the detector design, coordination, and physics simulation studies with special emphasis on muon final states. Collaboration with the RD5 group at CERN to study muon punch through and to test cathode strip chamber prototypes was begun 16. Physics and Analysis at a Hadron Collider - Making Measurements (3/3) CERN Multimedia CERN. Geneva 2010-01-01 This is the third lecture of three which together discuss the physics of hadron colliders with an emphasis on experimental techniques used for data analysis. This third lecture discusses techniques important for analyses making a measurement (e.g. determining a cross section or a particle property such as its mass or lifetime) using some CDF top-quark analyses as specific examples. The lectures are aimed at graduate students. 17. Environmental monitoring at CERN: present status and future plans for the Large Hadron Collider (LHC) International Nuclear Information System (INIS) Hoefert, M.; Stevenson, G.R.; Vojtyla, P.; Wittekind, D. 1998-01-01 The present radiological impact of CERN on the environment is negligible. It is assessed that this will also be the case after the Large Hadron Collider starts operation in 2005. Nevertheless, the environmental monitoring programme at CERN will be further extended, so as to demonstrate that the Organization fully complies with standards and limits for environmental impact of nuclear installations as laid down by authorities in the CERN host countries. (P.A.) 18. 
Nucleon Decay and Neutrino Experiments, Experiments at High Energy Hadron Colliders, and String Theory Energy Technology Data Exchange (ETDEWEB) Jung, Chang Kee [State University of New York at Stony Brook; Douglas, Michael [State University of New York at Stony Brook; Hobbs, John [State University of New York at Stony Brook; McGrew, Clark [State University of New York at Stony Brook; Rijssenbeek, Michael [State University of New York at Stony Brook 2013-07-29 This is the final report of the DOE grant DEFG0292ER40697 that supported the research activities of the Stony Brook High Energy Physics Group from November 15, 1991 to April 30, 2013. During the grant period, the grant supported the research of three Stony Brook particle physics research groups: The Nucleon Decay and Neutrino group, the Hadron Collider Group, and the Theory Group. 19. Study of vector boson decay and determination of the Standard Model parameters at hadronic colliders International Nuclear Information System (INIS) Amidei, D. 1991-01-01 The power of the detectors and the datasets at hadronic colliders begins to allow measurement of the electroweak parameters with a precision that confronts the perturbative corrections to the theory. Recent measurements of M_Z, M_W, and sin θ_W by CDF and UA2 are reviewed, with some emphasis on how experimental precision is achieved, and some discussion of the import for the specifications of the Standard Model. 14 refs., 10 figs., 4 tabs 20. Operational Experience with and Performance of the ATLAS Pixel Detector at the Large Hadron Collider CERN Document Server Grummer, Aidan; The ATLAS collaboration 2018-01-01 The operational experience and requirements to ensure optimum data quality and data taking efficiency with the 4-layer ATLAS Pixel Detector are discussed.
The detector has undergone significant hardware and software upgrades to meet the challenges imposed by the fact that the Large Hadron Collider is exceeding expectations for instantaneous luminosity by more than a factor of two (more than 2×10^34 cm^-2 s^-1). Emphasizing radiation damage effects, the key status and performance metrics are described. 1. Production and decay channels of charged Higgs boson at high energy hadron colliders Science.gov (United States) Demirci, Alev Ezgi; Çakır, Orhan 2018-02-01 We have studied charged Higgs boson interactions and production cross sections within the framework of the two-Higgs-doublet model, an extension of the Standard Model, and the decay processes of the charged Higgs boson have been calculated. There are different scenarios which have been studied in this work and these parameters have been transferred to the event generator, and the cross-section calculations for different center-of-mass energies of hadron colliders have been performed. 2. University of Tennessee deploys force10 C-series to analyze data from CERN's Large Hadron Collider CERN Multimedia 2007-01-01 "Force10 networks, the pioneer in building and securing reliable networks, today announced that the University of Tennessee physics department has deployed the C300 resilient switch to analyze data from CERN's Large Hadron Collider." (1 page) 3. A central rapidity straw tracker and measurements on cryogenic components for the large hadron collider Energy Technology Data Exchange (ETDEWEB) Danielsson, Hans 1997-04-01 The thesis is divided into two parts in which two different aspects of the Large Hadron Collider (LHC) project are discussed. The first part describes the design of a transition radiation tracker (TRT) for the inner detector in ATLAS. In particular, the barrel part was studied in detail. The barrel TRT consists of 52544 1.5 m long proportional tubes (straws), parallel to the beam axis and each with a diameter of 4 mm.
The detector is divided into three module layers with 32 modules in each layer. The preparatory study comprises: module size optimization, mechanical and thermal calculations, tracking performance and material budget studies. The second part deals with the cryogenic system for the LHC superconducting magnets. They will work at a temperature below 2 K and it is essential to understand the thermal behaviour of the individual cryogenic components in order to assess the insulating properties of the magnet cryostat. The work involves the design of two dedicated heat-inlet measuring benches for cryogenic components, and the results from heat-inlet measurements on two different types of cryogenic components are reported. 54 refs., 79 figs., 14 tabs. 5. Simulations of fast crab cavity failures in the high luminosity Large Hadron Collider Directory of Open Access Journals (Sweden) Bruce Yee-Rendon 2014-05-01 Full Text Available Crab cavities (CCs) are a key ingredient of the high luminosity Large Hadron Collider (HL-LHC) project for increasing the luminosity of the LHC. At KEKB, CCs have exhibited abrupt changes of phase and voltage during a time period of the order of a few LHC turns; considering the significant stored energy in the HL-LHC beam, CC failures therefore represent a serious threat to LHC machine protection. In this paper, we discuss the effect of CC voltage or phase changes on a time interval similar to, or longer than, the one needed to dump the beam. The simulations assume a quasistationary-state distribution to assess the particle losses for the HL-LHC. These distributions produce beam losses below the safe operation threshold for Gaussian tails, while for non-Gaussian tails the losses are of the same order as the limit. Additionally, some mitigation strategies are studied for reducing the damage caused by the CC failures. 6. The adventures of the Large Hadron Collider from the Big Bang to the Higgs boson CERN Document Server Denegri, Daniel; Hoecker, Andreas; Roos, Lydia 2018-01-01 An introduction to the world of quarks and leptons, and of their interactions governed by fundamental symmetries of nature, as well as an introduction to the connection that exists between the worlds of the infinitesimally small and the infinitely large. The book starts with a simple presentation of the theoretical framework, the so-called Standard Model, which has evolved gradually since the 1960s. This is followed by its main experimental successes, and its weaknesses and incompleteness.
We proceed then with the incredible story of the Large Hadron Collider at CERN — the largest purely scientific project ever realized. What follows is the discussion of the conception, design and construction of the detectors of size and complexity without precedent in scientific history. The book summarizes the main physics results obtained firstly during the initial phase of operation of the LHC, which culminated in the discovery of the Higgs boson in 2012 (the Nobel Prize in Physics in 2013). This is followed by the results o... 7. Instrumentation status of the low-β magnet systems at the Large Hadron Collider (LHC) CERN Document Server Darve, C.; Casas-Cubillos, J.; Perin, A.; Vauthier, N. 2011-01-01 The low-beta magnet systems are located in the Large Hadron Collider (LHC) insertion regions around the four interaction points. They are the key elements in the beam focusing/defocusing process allowing proton collisions at luminosities up to 10^34 cm^-2 s^-1. Those systems are a contribution of the US-LHC Accelerator project. The systems are mainly composed of the quadrupole magnets (triplets), the separation dipoles and their respective electrical feed-boxes (DFBX). The low-beta magnet systems operate in an environment of extreme radiation, high gradient magnetic field and high heat load to the cryogenic system due to the beam dynamic effect. Due to the severe environment, the robustness of the diagnostics is primordial for the operation of the triplets. The hardware commissioning phase of the LHC was completed in February 2010. For the sake of a safer and more user-friendly operation, several consolidations and instrumentation modifications were implemented during this commissioning phase. This paper presents ... 9. Development of cost-effective Nb3Sn conductors for the next generation hadron colliders International Nuclear Information System (INIS) Scanlan, R.M.; Dietderich, D.R.; Zeitlin, B.A. 2001-01-01 Significant progress has been made in demonstrating that reliable, efficient high field dipole magnets can be made with Nb3Sn superconductors. A key factor in determining whether these magnets will be a cost-effective solution for the next generation hadron collider is the conductor cost. Consequently, DOE initiated a conductor development program to demonstrate that Nb3Sn can be improved to reach a cost/performance value of $1.50/kA-m at 12 T, 4.2 K.
The first phase of this program was initiated in Jan 2000, with the goal of improving the key properties of interest for accelerator dipole magnets--high critical current density and low magnetization. New world record critical current densities have been reported recently, and it appears that significant potential exists for further improvement. Although new techniques for compensating for magnetization effects have reduced the requirements somewhat, techniques for lowering the effective filament size while maintaining these high Jc values are a program priority. The next phase of this program is focused on reducing the conductor cost through substitution of lower cost raw materials and through process improvements. The cost drivers for materials and fabrication have been identified, and projects are being initiated to demonstrate cost reductions
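The $/kA-m cost/performance figure used as the program target above can be computed from basic strand parameters. A minimal sketch; the strand price, diameter, copper fraction, and Jc below are illustrative assumptions, not numbers from the abstract:

```python
import math


def cost_per_ka_m(price_per_m_usd, strand_diameter_mm, cu_fraction, jc_a_mm2):
    """Cost/performance of a superconducting strand in $/(kA*m).

    Ic = Jc * non-copper cross section; cost per kA-m = price per meter / Ic[kA].
    """
    area_mm2 = math.pi * (strand_diameter_mm / 2.0) ** 2
    ic_ka = jc_a_mm2 * area_mm2 * (1.0 - cu_fraction) / 1000.0
    return price_per_m_usd / ic_ka


# Illustrative Nb3Sn-like strand: $2/m, 0.8 mm diameter, 50% Cu, Jc = 3000 A/mm^2 at 12 T
print(f"{cost_per_ka_m(2.0, 0.8, 0.5, 3000.0):.2f} $/kA-m")  # ~2.65 $/kA-m
```

The formula makes the two levers of the program explicit: raising Jc at fixed price, or cutting the price per meter at fixed Jc, both lower the $/kA-m figure of merit.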
10. The High Luminosity Large Hadron Collider the new machine for illuminating the mysteries of Universe
CERN Document Server
Brüning, Oliver
2015-01-01
This book provides a broad introduction to the physics and technology of the High Luminosity Large Hadron Collider (HL-LHC). This new configuration of the LHC is one of the major accelerator projects for the next 15 years and will give new life to the LHC after its first 15-year operation. Not only will it allow more precise measurements of the Higgs boson and of any new particles that might be discovered in the next LHC run, but also extend the mass limit reach for detecting new particles. The HL-LHC is based on the innovative accelerator magnet technologies capable of generating 11–13 Tesla fields, with effectiveness enhanced by use of the new Achromatic Telescopic Squeezing scheme, and other state-of-the-art accelerator technologies, such as superconducting compact RF crab cavities, advanced collimation concepts, and novel power technology based on high temperature superconducting links. The book consists of a series of chapters touching on all issues of technology and design, and each chapter can be re...
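The luminosity goals described for the HL-LHC translate directly into pileup, the mean number of inelastic proton-proton interactions per bunch crossing, which drives much of the detector upgrade program. A rough sketch; the inelastic cross section, bunch count, and revolution frequency are assumed illustrative values:

```python
def mean_pileup(lumi_cm2_s, sigma_inel_mb, n_bunches, f_rev_hz):
    """Mean interactions per bunch crossing: mu = sigma_inel * L / (n_b * f_rev)."""
    sigma_cm2 = sigma_inel_mb * 1e-27  # 1 mb = 1e-27 cm^2
    return sigma_cm2 * lumi_cm2_s / (n_bunches * f_rev_hz)


# HL-LHC-like numbers: leveled L = 5e34 cm^-2 s^-1, sigma_inel ~ 80 mb,
# 2748 colliding bunches, 11245 Hz revolution frequency (assumed values)
mu = mean_pileup(5e34, 80.0, 2748, 11245)
print(f"mean pileup ~ {mu:.0f}")
```

With these inputs the estimate lands well above 100 interactions per crossing, an order of magnitude beyond the conditions the original LHC detectors were designed for.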
11. 3rd CERN-Fermilab Hadron Collider Physics Summer School
CERN Multimedia
EP Department
2008-01-01
August 12-22, 2008, Fermilab The school web site is http://cern.ch/hcpss with links to the academic programme and the application procedure. The APPLICATION DEADLINE IS 29 FEBRUARY 2008. The goal of the CERN-Fermilab Hadron Collider Physics Summer Schools is to offer students and young researchers in high-energy physics a concentrated syllabus on the theory and experimental challenges of hadron collider physics. The third session of the summer school will focus on exposing young post-docs and advanced graduate students to broader theories and real data beyond what they’ve learned at their home institutions. Experts from across the globe will lecture on the theoretical and experimental foundations of hadron collider physics, host parallel discussion sessions and answer students’ questions. This year’s school will also have a greater focus on physics beyond the Standard Model, as well as more time for questions at the end of each lecture. The 2008 School will be held at Fermilab. Further enquiries should ...
12. 2nd CERN-Fermilab Hadron Collider Physics Summer School, June 6-15, 2007, CERN
CERN Multimedia
2007-01-01
The school web site is http://cern.ch/hcpss with links to the academic programme and the application procedure. The APPLICATION DEADLINE IS 9 MARCH 2007. The results of the selection process will be announced shortly thereafter. The goal of the CERN-Fermilab Hadron Collider Physics Summer Schools is to offer students and young researchers in high energy physics a concentrated syllabus on the theory and experimental challenges of hadron collider physics. The first school in the series, held last summer at Fermilab, covered extensively the physics at the Tevatron collider experiments. The second school, to be held at CERN, will focus on the technology and physics of the LHC experiments. Emphasis will be placed on the first years of data-taking at the LHC and on the discovery potential of the programme. The series of lectures will be supported by in-depth discussion sessions and will include the theory and phenomenology of hadron collisions, discovery physics topics, detector and analysis techniques and tools...
13. Extracting the top-quark running mass using t anti t+1-jet events produced at the Large Hadron Collider
Energy Technology Data Exchange (ETDEWEB)
Fuster, J.; Vos, M. [Valencia Univ. and CSIC, Paterna (Spain). IFIC; Irles, A. [Paris-Sud XI Univ., CNRS/IN2P3, Orsay (France). Lab. de l' Accelerateur Lineaire; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Melini, D. [Valencia Univ. and CSIC, Paterna (Spain). IFIC; Granada Univ. (Spain). Dept. de Fisica Teorica y del Cosmos; Uwer, P. [Humboldt-Univ., Berlin (Germany)
2017-04-04
We present the calculation of the next-to-leading order QCD corrections for top quark pair production in association with an additional jet at hadron colliders, using the modified minimal subtraction scheme to renormalize the top-quark mass. The results are compared to measurements at the Large Hadron Collider run I. In particular, we determine the top-quark running mass from a fit of the theoretical results presented here to the LHC data.
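The running mass extracted in the analysis above is defined in the modified minimal subtraction (MSbar) scheme, where the mass depends on the renormalization scale through the QCD renormalization group. A one-loop sketch of that scale dependence (one-loop coefficients with a fixed nf = 6 are assumed throughout; the actual extraction uses higher-order running and measured inputs):

```python
import math

NF = 6
B0 = 11.0 - 2.0 * NF / 3.0  # one-loop beta-function coefficient
GAMMA0 = 8.0                # one-loop mass anomalous dimension (alpha_s/4pi convention)


def alpha_s(mu_gev, alpha_mz=0.118, mz_gev=91.19):
    """One-loop running coupling: 1/a(mu) = 1/a(MZ) + (b0 / 2 pi) ln(mu/MZ)."""
    return 1.0 / (1.0 / alpha_mz + B0 / (2.0 * math.pi) * math.log(mu_gev / mz_gev))


def run_mass(m_at_mu1, mu1_gev, mu2_gev):
    """One-loop MSbar running: m(mu2) = m(mu1) * (a(mu2)/a(mu1))^(gamma0 / 2 b0)."""
    return m_at_mu1 * (alpha_s(mu2_gev) / alpha_s(mu1_gev)) ** (GAMMA0 / (2.0 * B0))


# Evolve an assumed m(m_t) = 163 GeV from mu = 163 GeV up to 1 TeV
m_1tev = run_mass(163.0, 163.0, 1000.0)
print(f"m(1 TeV) ~ {m_1tev:.1f} GeV")  # decreases by roughly 10% over this range
```

Because alpha_s decreases with scale, the MSbar mass also decreases as the renormalization scale is raised, which is why quoting "the" top-quark mass requires stating the scheme and scale.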
15. Revealing Partons in Hadrons: From the ISR to the SPS Collider
CERN Document Server
Darriulat, Pierre
2015-01-01
Our understanding of the structure of hadrons has developed during the seventies and early eighties from a few vague ideas to a precise theory, Quantum Chromodynamics, that describes hadrons as made of elementary partons (quarks and gluons). Deep inelastic scattering of electrons and neutrinos on nucleons and electron–positron collisions have played a major role in this development. Less well known is the role played by hadron collisions in revealing the parton structure, studying the dynamics of interactions between partons and offering an exclusive laboratory for the direct study of gluon interactions. The present article recalls the decisive contributions made by the CERN Intersecting Storage Rings and, later, the proton–antiproton SPS Collider to this chapter of physics.
16. Calorimeter based detectors for high energy hadron colliders
International Nuclear Information System (INIS)
Marx, M.D.; Rijssenbeek, M.
1990-01-01
This report discusses the following topics: the central calorimeter: installation, commissioning, and calorimeter beam tests; the central drift chamber: cosmic ray and beam tests, chamber installation and commissioning, and software development; and SSC activities: the EMPACT project
17. Secondary particle background levels and effects on detectors at future hadron colliders
International Nuclear Information System (INIS)
Pal, T.
1993-01-01
The next generation of hadron colliders, the Superconducting Super Collider (SSC) and the Large Hadron Collider (LHC), will operate at high center-of-mass energies and luminosities. Namely, for the SSC (LHC), √s = 40 TeV (√s = 16 TeV) and L = 10^33 cm^-2 s^-1 (L = 3×10^34 cm^-2 s^-1). These conditions will result in the production of large backgrounds as well as radiation environments. Ascertaining the backgrounds, in terms of the production of secondary charged and neutral particles, and the radiation environments are important considerations for the detectors proposed for these colliders. An initial investigation of the radiation levels in the SSC detectors was undertaken by D. Groom and colleagues, in the context of the ''task force on radiation levels in the SSC interaction regions.'' The method consisted essentially of an analytic approach, using standard descriptions of average events in conjunction with simulations of secondary processes
18. Secondary particle in background levels and effects on detectors at future hadron colliders
International Nuclear Information System (INIS)
Pal, T.
1993-06-01
The next generation of hadron colliders, the Superconducting Super Collider (SSC) and the Large Hadron Collider (LHC), will operate at high center-of-mass energies and luminosities. Namely, for the SSC (LHC), √s = 40 TeV (√s = 16 TeV) and L = 10^33 cm^-2 s^-1 (L = 3×10^34 cm^-2 s^-1). These conditions will result in the production of large backgrounds as well as radiation environments. Ascertaining the backgrounds, in terms of the production of secondary charged and neutral particles, and the radiation environments are important considerations for the detectors proposed for these colliders. An initial investigation of the radiation levels in the SSC detectors was undertaken by D. Groom and colleagues, in the context of the ''task force on radiation levels in the SSC interaction regions.'' The method consisted essentially of an analytic approach, using standard descriptions of average events in conjunction with simulations of secondary processes. Following Groom's work, extensive Monte Carlo simulations were performed to address the issues of backgrounds and radiation environments for the GEM and SDC experiments proposed at the SSC, and for the ATLAS and CMS experiments planned for the LHC. The purpose of the present article is to give a brief summary of some aspects of the methods, assumptions, and calculations performed to date (principally for the SSC detectors), and to stress the relevance of such calculations to the detectors proposed for the study of B-physics in particular
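The beam conditions quoted in this record fix the raw interaction rate that such background studies must contend with, via R = L·σ. A quick sanity check (σ_inel ≈ 100 mb is an assumed round number for the inelastic pp cross section at these energies, not a value from the record):

```python
# Event rate R = L * sigma for the collider conditions quoted above.
# sigma_inel ~ 100 mb is an assumed round number, not a figure from the record.
MB_TO_CM2 = 1e-27  # 1 millibarn = 1e-27 cm^2

def interaction_rate(luminosity_cm2_s: float, sigma_mb: float) -> float:
    """Interactions per second for instantaneous luminosity L and cross section sigma."""
    return luminosity_cm2_s * sigma_mb * MB_TO_CM2

print(interaction_rate(1e33, 100.0))  # SSC design figure: ~1e8 interactions/s
print(interaction_rate(3e34, 100.0))  # LHC figure quoted here: ~3e9 interactions/s
```

At the quoted LHC luminosity this implies several billion inelastic collisions per second, which is why the secondary-particle fluxes and radiation environments discussed in these records dominate detector design.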
19. Probing gluon number fluctuation effects in future electron–hadron colliders
Energy Technology Data Exchange (ETDEWEB)
Amaral, J.T.; Gonçalves, V.P. [Instituto de Física e Matemática, Universidade Federal de Pelotas, Caixa Postal 354, CEP 96010-900, Pelotas, RS (Brazil); Kugeratski, M.S. [Universidade Federal de Santa Catarina, Campus Joinville, Rua Presidente Prudente de Moraes, 406, CEP 89218-000, Joinville, SC (Brazil)
2014-10-15
The description of the QCD dynamics in the kinematical range which will be probed in the future electron–hadron colliders is still an open question. Although phenomenological studies indicate that the gluon number fluctuations, which are related to discreteness in the QCD evolution, are negligible at HERA, the magnitude of these effects for the next generation of colliders still should be estimated. In this paper we investigate inclusive and diffractive ep observables considering a model for the physical scattering amplitude which describes the HERA data. Moreover, we estimate, for the first time, the contribution of the fluctuation effects for the nuclear structure functions. Our results indicate that the study of these observables in the future colliders can be useful to constrain the presence of gluon number fluctuations.
20. Large Hadron Collider in crisis as magnet costs spiral upwards
CERN Multimedia
2001-01-01
Managers of the LHC project admitted this week that it faces cost overruns of several hundred million dollars. CERN will face years of budget cuts but this will cover only a fraction of the extra costs - the 20 member states will be asked to cover the rest (1 page).
1. The ATLAS Experiment at the CERN Large Hadron Collider
Czech Academy of Sciences Publication Activity Database
Aad, G.; Abat, E.; Abdallah, J.; Bazalová, Magdalena; Böhm, Jan; Chudoba, Jiří; Gunther, J.; Hruška, I.; Jahoda, M.; Jež, J.; Juránek, Vojtěch; Kepka, Oldřich; Kupčo, Alexander; Kus, V.; Kvasnička, O.; Lokajíček, Miloš; Marčišovský, Michal; Mikeštíková, Marcela; Myška, Miroslav; Němeček, Stanislav; Panušková, M.; Polák, Ivo; Popule, Jiří; Přibyl, Lukáš; Šícho, Petr; Staroba, Pavel; Šťastný, Jan; Taševský, Marek; Tic, Tomáš; Tomášek, Lukáš; Tomášek, Michal; Valenta, Jan; Vrba, Václav
2008-01-01
Roč. 3, - (2008), S08003/1-S08003/437 ISSN 1748-0221 R&D Projects: GA MŠk LA08032; GA MŠk 1P04LA212 Institutional research plan: CEZ:AV0Z10100502 Keywords : ATLAS * LHC * CERN * accelerator * proton-proton collisions * heavy-ion collisions * minimum-bias events * bunch-crossings * pile-up * superconducting magnets Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 0.333, year: 2008
2. Investigation of hadronic matter at the Fermilab Tevatron Collider. Technical progress report
International Nuclear Information System (INIS)
Anderson, E.W.
1985-01-01
Hadronic matter at very high energy densities is investigated. The present experimental effort is focused on a search for a new quark-gluon plasma phase expected to occur when temperatures of 240 MeV are achieved. Instrumentation for several unique signatures is being developed to exploit the first operation of the Fermilab Tevatron Collider in 1986. The capital projects funded under this contract are a 240-element trigger hodoscope array, and in phase II a segmented photon detector. For these projects $172K are requested for the period 1986 February 1 through 1987 January 31 to complete the trigger hodoscope, and $160K for the period 1987 February 1 through 1988 January 31 to construct a portion of the photon detector. These figures are as presented in the original proposal. Due to budget constraints on the Fermilab experimental support program, we will not be able to receive the full complement of necessary electronics from the Fermilab PREP pool in the required period. Consequently, an additional $35K is requested for the period 1986 February 1 through 1987 January 31 for a portion of the electronics for the 240-channel trigger hodoscope. For the same reasons, Fermilab cannot provide the required magnet on schedule; a one year delay is proposed. As this would seriously impact our physics goals, the collaboration is attempting to fund the magnet without delay through the universities. Efforts to date have concentrated on the design and testing of the hodoscope. Extensive measurements on the radiation levels and effects during the various accelerator cycles have been made. These data are essential to the proper selection of scintillator and design of electronics. These tests are now complete, and final construction is beginning. 11 refs
3. Literature in focus - The Large Hadron Collider: A Marvel of Technology
CERN Document Server
Cecile Noels
2009-01-01
Inside an insulating vacuum chamber in a tunnel about 100 metres below the surface of the Franco-Swiss plain near Geneva, packets of protons whirl around the 27-km circumference of the Large Hadron Collider (LHC) at a speed close to that of light, colliding every 25 nanoseconds at four beam crossing points. The products of these collisions, of which hundreds of billions will be produced each second, are observed and measured with the most advanced particle-detection technology, capable of tracking individual particles as they generate a signature track during their passage through the detectors. All this information is captured, filtered and piped to huge networks of microprocessors for analysis and study by an international team of physicists. When the Large Hadron Collider (LHC) comes on line in 2009, it will be the largest scientific experiment ever constructed, and the data it produces will lead to a new understanding of our Universe. Many thousands of scientists and engineers were behind the planning...
5. Quantum chromodynamics at high energy, theory and phenomenology at hadron colliders; Chromodynamique quantique a haute energie, theorie et phenomenologie appliquee aux collisions de hadrons
Energy Technology Data Exchange (ETDEWEB)
Marquet, C
2006-09-15
When probing small distances inside a hadron, one can resolve its partonic constituents: quarks and gluons that obey the laws of perturbative Quantum Chromodynamics (QCD). This substructure reveals itself in hadronic collisions characterized by a large momentum transfer: in such collisions, a hadron acts like a collection of partons whose interactions can be described in QCD. In a collision at moderate energy, a hadron looks dilute and the partons interact incoherently. As the collision energy increases, the parton density inside the hadron grows. Eventually, at some energy much bigger than the momentum transfer, one enters the saturation regime of QCD: the gluon density has become so large that collective effects are important. We introduce a formalism suitable to study hadronic collisions in the high-energy limit in QCD, and the transition to the saturation regime. In this framework, we derive known results that are needed to present our personal contributions and we compute different cross-sections in the context of hard diffraction and particle production. We study the transition to the saturation regime as given by the Balitsky-Kovchegov equation. In particular we derive properties of its solutions. We apply our results to deep inelastic scattering and show that, in the energy range of the HERA collider, the predictions of high-energy QCD are in good agreement with the data. We also consider jet production in hadronic collisions and discuss the possibility to test saturation at the Large Hadron Collider. (author)
6. Radioactivation of silicon tracker modules in high-luminosity hadron collider radiation environments
CERN Document Server
Dawson, I; Buttar, C; Cindro, V; Mandic, I
2003-01-01
One of the consequences of operating detector systems in harsh radiation environments will be radioactivation of the components. This will certainly be true in experiments such as ATLAS and CMS, which are currently being built to exploit the physics potential at CERN's Large Hadron Collider. If the levels of radioactivity and corresponding dose rates are significant, then there will be implications for any access or maintenance operations. This paper presents predictions for the radioactivation of ATLAS's Semi-Conductor Tracker (SCT) barrel system, based on both calculations and measurements. It is shown that both neutron capture and high-energy hadron reactions must be taken into account. The predictions also show that the SCT barrel-module should not pose any serious radiological problems after operation in high radiation environments.
7. Theoretical studies of hadronic calorimetry for high luminosity, high energy colliders
Energy Technology Data Exchange (ETDEWEB)
Brau, J.E.; Gabriel, T.A.
1989-01-01
Experiments at the high luminosity, high energy colliders of the future are going to demand optimization of the state of the art of calorimetry design and construction. During the past few years, the understanding of the basic phenomenology of hadron calorimeters has advanced through paralleled theoretical and experimental investigations. The important underlying processes are reviewed to set the framework for the presentation of recent calculations of the expected performance of silicon detector based hadron calorimeters. Such devices employing uranium are expected to achieve the compensation condition (that is, e/h ≈ 1.0) based on the understanding that has been derived from the uranium-liquid argon and uranium-plastic scintillator systems. In fact, even lead-silicon calorimeters are found to achieve the attractive value for the e/h ratio of 1.16 at 10 GeV. 62 refs., 22 figs., 3 tabs.
9. Heavy-ion collisions at the dawn of the large hadron collider era
International Nuclear Information System (INIS)
Takahashi, J.
2011-01-01
In this paper I present a review of the main topics associated with the study of heavy-ion collisions, intended for students starting or interested in the field. It is impossible to summarize in a few pages the large amount of information that is available today, after a decade of operations of the Relativistic Heavy Ion Collider and the beginning of operations at the Large Hadron Collider. Thus, I had to choose some of the results and theories in order to present the main ideas and goals. All results presented here are from publicly available references, but some of the discussions and opinions are my personal view, where I have made that clear in the text (author)
10. Test of Relativistic Gravity for Propulsion at the Large Hadron Collider
Science.gov (United States)
Felber, Franklin
2010-01-01
A design is presented of a laboratory experiment that could test the suitability of relativistic gravity for propulsion of spacecraft to relativistic speeds. An exact time-dependent solution of Einstein's gravitational field equation confirms that even the weak field of a mass moving at relativistic speeds could serve as a driver to accelerate a much lighter payload from rest to a good fraction of the speed of light. The time-dependent field of ultrarelativistic particles in a collider ring is calculated. An experiment is proposed as the first test of the predictions of general relativity in the ultrarelativistic limit by measuring the repulsive gravitational field of bunches of protons in the Large Hadron Collider (LHC). The estimated ''antigravity beam'' signal strength at a resonant detector of each proton bunch is 3 nm/s^2 for 2 ns during each revolution of the LHC. This experiment can be performed off-line, without interfering with the normal operations of the LHC.
11. Heavy-Ion Collimation at the Large Hadron Collider Simulations and Measurements
CERN Document Server
AUTHOR|(CDS)2083002; Wessels, Johannes Peter; Bruce, Roderik
The CERN Large Hadron Collider (LHC) stores and collides proton and ²⁰⁸Pb⁸²⁺ beams of unprecedented energy and intensity. Thousands of superconducting magnets, operated at 1.9 K, guide the very intense and energetic particle beams, which have a large potential for destruction. This implies the demand for a multi-stage collimation system to provide protection from beam-induced quenches or even hardware damage. In heavy-ion operation, ion fragments with significant rigidity offsets can still scatter out of the collimation system. When they irradiate the superconducting LHC magnets, the latter risk to quench (lose their superconducting property). These secondary collimation losses can potentially impose a limitation for the stored heavy-ion beam energy. Therefore, their distribution in the LHC needs to be understood by sophisticated simulations. Such simulation tools must accurately simulate the particle motion of many different nuclides in the magnetic LHC lattice and simulate their interaction with t...
12. Large Hadron Collider Physics (LHCP2017) conference | 15-20 May 2017 | Shanghai
CERN Multimedia
2016-01-01
The fifth annual Large Hadron Collider Physics conference will be held in Shanghai, hosted by Shanghai Jiao Tong University, from May 15-20, 2017. The main goal of the conference is to provide intense and lively discussions between experimenters and theorists in research areas such as Standard Model physics and beyond, the Higgs boson, supersymmetry, heavy-quark physics and heavy-ion physics, as well as to share recent progress on the high-luminosity upgrades and future collider developments. The LHCP2017 website: http://lhcp2017.physics.sjtu.edu.cn/ Event date: 15 - 20 May 2017 Location: Shanghai, China
13. The feasibility of experiments at high luminosity at the large hadron collider
International Nuclear Information System (INIS)
Mulvey, J.H.
1988-01-01
The studies reported in this volume extend some of those made during the Workshop on Physics at Future Accelerators held at La Thuile and CERN in January 1987 (CERN 87-07, Vol. 1 and 2). They consider the feasibility of performing experiments with a 16 TeV proton-proton collider, the Large Hadron Collider (LHC), at luminosities as high as 5×10^34 cm^-2 s^-1. To illustrate the difficulties and the extent to which the potential for discovery at the LHC might be improved by such a step, three specific topics were chosen: searches for a) a massive Higgs boson, b) SUSY gluinos and squarks, and c) a new Z'. Following the Summary Report of the High Luminosity Study Group are papers discussing a possible detector system, radiation levels, and the analyses leading to estimated mass-limits for the searches. (orig.)
14. Electroweak corrections to top quark pair production in association with a hard photon at hadron colliders
International Nuclear Information System (INIS)
Duan, Peng-Fei; Zhang, Yu; Wang, Yong; Song, Mao; Li, Gang
2017-01-01
We present the next-to-leading order (NLO) electroweak (EW) corrections to top quark pair production in association with a hard photon at current and future hadron colliders. The dependence of the leading order (LO) and NLO EW corrected cross sections on the photon transverse momentum cut is investigated. We also provide the LO and NLO EW corrected distributions of the transverse momentum of the final top quark and photon and the invariant mass of the top-quark pair and the top-antitop-photon system. The results show that the NLO EW corrections are significant in high energy regions due to the EW Sudakov effect.
15. 25th anniversary of the Large Hadron Collider (LHC) experimental programme
CERN Multimedia
AUTHOR|(CDS)2094367
2017-01-01
On Friday 15 December 2017, CERN celebrated the 25th anniversary of the Large Hadron Collider (LHC) experimental programme. The occasion was marked with a special scientific symposium looking at the LHC's history, the physics landscape into which the LHC experiments were born, and the challenging path that led to the very successful LHC programme we know today. The anniversary was linked to a meeting that took place in 1992, in Evian, entitled "Towards the LHC Experimental Programme", marking a crucial milestone in the design and development of the LHC experiments.
16. Polar Coding for the Large Hadron Collider: Challenges in Code Concatenation
CERN Document Server
AUTHOR|(CDS)2238544; Podzorny, Tomasz; Uythoven, Jan
2018-01-01
In this work, we present a concatenated repetition-polar coding scheme that is aimed at applications requiring highly unbalanced unequal bit-error protection, such as the Beam Interlock System of the Large Hadron Collider at CERN. Even though this concatenation scheme is simple, it reveals significant challenges that may be encountered when designing a concatenated scheme that uses a polar code as an inner code, such as error correlation and unusual decision log-likelihood ratio distributions. We explain and analyze these challenges and we propose two ways to overcome them.
17. The Fermi motion contribution to J/ψ production at the hadron colliders
Energy Technology Data Exchange (ETDEWEB)
Gomshi Nobary, M.A. [Department of Physics, Faculty of Science, Razi University, Kermanshah (Iran, Islamic Republic of) and Center for Theoretical Physics and Mathematics, AEOI, Roosbeh Building, P.O. Box 11365-8486 Tehran (Iran, Islamic Republic of)]. E-mail: [email protected]; Nikoobakht, B. [Department of Physics, Faculty of Science, Razi University, Kermanshah (Iran, Islamic Republic of)
2006-08-17
We investigate the relativistic Fermi motion effect in the case of J/ψ production in various hadron colliders. A light-cone wave function is adopted to represent the J/ψ final state. The change in the confinement parameter, which sets a scale for the size of the final state, allows one to see the effect in an explicit manner. While the effect has considerable influence on the fragmentation probabilities and the differential cross sections, the total cross sections essentially are left unchanged. Such a feature is in agreement with the momentum sum rule which the fragmentation functions should satisfy.
18. Beam-related machine protection for the CERN Large Hadron Collider experiments
Directory of Open Access Journals (Sweden)
R. B. Appleby
2010-06-01
Full Text Available The Large Hadron Collider at CERN, Geneva stores 360 MJ per beam of protons at the top machine energy. This amount of stored energy presents a considerable challenge to the machine protection systems designed to protect both the machine and the six LHC experiments. This paper provides an overview of the machine protection systems relevant to the protection of the experiments, and demonstrates their operation and level of protection through a series of injection and stored beam failure scenarios. We conclude that the systems provide sufficient coverage for the protection of the experiments as far as reasonably possible.
19. Modeling of random geometric errors in superconducting magnets with applications to the CERN Large Hadron Collider
Directory of Open Access Journals (Sweden)
P. Ferracin
2000-12-01
Full Text Available Estimates of random field-shape errors induced by cable mispositioning in superconducting magnets are presented and specific applications to the Large Hadron Collider (LHC) main dipoles and quadrupoles are extensively discussed. Numerical simulations obtained with Monte Carlo methods are compared to analytic estimates and are used to interpret the experimental data for the LHC dipole and quadrupole prototypes. The proposed approach can predict the effect of magnet tolerances on geometric components of random field-shape errors, and it is a useful tool to monitor the obtained tolerances during magnet production.
20. NNLO QCD corrections to jet production at hadron colliders from gluon scattering
International Nuclear Information System (INIS)
Currie, James; Ridder, Aude Gehrmann-De; Glover, E.W.N.; Pires, João
2014-01-01
We present the next-to-next-to-leading order (NNLO) QCD corrections to dijet production in the purely gluonic channel retaining the full dependence on the number of colours. The sub-leading colour contribution in this channel first appears at NNLO, increases the NNLO correction by around 10%, and exhibits a p_T dependence, rising from 8% at low p_T to 15% at high p_T. The present calculation demonstrates the utility of the antenna subtraction method for computing the full colour NNLO corrections to dijet production at the Large Hadron Collider
1. SUSY-QCD corrections to Higgs boson production at hadron colliders
International Nuclear Information System (INIS)
Djouadi, A.; Spira, M.
1999-12-01
We analyze the next-to-leading order SUSY-QCD corrections to the production of Higgs particles at hadron colliders in supersymmetric extensions of the standard model. Besides the standard QCD corrections due to gluon exchange and emission, genuine supersymmetric corrections due to the virtual exchange of squarks and gluinos are present. At both the Tevatron and the LHC, these corrections are found to be small in the Higgs-strahlung, Drell-Yan-like Higgs pair production and vector boson fusion processes. (orig.)
2. Design and Installation Challenges of the Neutral Beam Absorbers for the Large Hadron Collider at CERN
OpenAIRE
Fernández Vélez, Óscar
2005-01-01
CERN (the European Council for Nuclear Research) is building its new particle accelerator on the Franco-Swiss border. Currently in the installation phase, the Large Hadron Collider (LHC), 26.7 kilometres long and 100 metres underground, will be the largest and most powerful particle accelerator ever built. On its arrival at CERN, each of the nearly 2000 superconducting magnets that will form part of the accelerator must be verified, assembled and transported to ...
3. Dijet asymmetry at the energies available at the CERN Large Hadron Collider
International Nuclear Information System (INIS)
Young, Clint; Jeon, Sangyong; Gale, Charles; Schenke, Bjoern
2011-01-01
The martini numerical simulation allows for direct comparison of theoretical model calculations and the latest results for dijet asymmetry from the ATLAS and CMS collaborations. In this paper, partons are simulated as undergoing radiative and collisional processes throughout the evolution of central lead-lead collisions at the Large Hadron Collider. Using hydrodynamical background evolution determined by a simulation which fits well with the data on charged particle multiplicities from ALICE and a value of α_s ≈ 0.25-0.3, the dijet asymmetry is found to be consistent with partonic energy loss in a hot, strongly interacting medium.
4. Estimates of Hadronic Backgrounds in a 5 TeV e+e- Linear Collider
International Nuclear Information System (INIS)
Murayama, H.; Ohgaki, Tomomi; Xie, M.
1998-01-01
We have estimated hadronic backgrounds by γγ collisions in an e+e- linear collider at a center-of-mass energy of 5 TeV. We introduce a simple ansatz, namely that a total γγ cross section of σ_γγ = (σ_γp)^2/σ_pp shall be saturated by minijet production, whose rate is controlled by p_{t,min}(√s). We show by simulation that the background yields are small and the energy deposits are far below the collision energy of the initial electron and positron beams
5. Beam dynamics aspects of crab cavities in the CERN Large Hadron Collider
CERN Document Server
Sun, Y P; Barranco, J; Tomás, R; Weiler, T; Zimmermann, F; Calaga, R; Morita, A
2009-01-01
Modern colliders bring into collision a large number of bunches to achieve a high luminosity. The long-range beam-beam effects arising from parasitic encounters at such colliders are mitigated by introducing a crossing angle. Under these conditions, crab cavities (CC) can be used to restore effective head-on collisions and thereby to increase the geometric luminosity. Such crab cavities have been proposed for both linear and circular colliders. The crab cavities are rf cavities operated in a transverse dipole mode, which imparts on the beam particles a transverse kick that varies with the longitudinal position along the bunch. The use of crab cavities in the Large Hadron Collider (LHC) may not only raise the luminosity, but it could also complicate the beam dynamics, e.g., crab cavities might not only cancel synchrobetatron resonances excited by the crossing angle but they could also excite new ones, they could reduce the dynamic aperture for off-momentum particles, they could influence the aperture and orbit...
6. Study of cosmic ray events with high muon multiplicity using the ALICE detector at the CERN Large Hadron Collider
CERN Document Server
Adam, Jaroslav; Aggarwal, Madan Mohan; Aglieri Rinella, Gianluca; Agnello, Michelangelo; Agrawal, Neelima; Ahammed, Zubayer; Ahn, Sang Un; Aiola, Salvatore; Akindinov, Alexander; Alam, Sk Noor; Aleksandrov, Dmitry; Alessandro, Bruno; Alexandre, Didier; Alfaro Molina, Jose Ruben; Alici, Andrea; Alkin, Anton; Millan Almaraz, Jesus Roberto; Alme, Johan; Alt, Torsten; Altinpinar, Sedat; Altsybeev, Igor; Alves Garcia Prado, Caio; Andrei, Cristian; Andronic, Anton; Anguelov, Venelin; Anielski, Jonas; Anticic, Tome; Antinori, Federico; Antonioli, Pietro; Aphecetche, Laurent Bernard; Appelshaeuser, Harald; Arcelli, Silvia; Armesto Perez, Nestor; Arnaldi, Roberta; Arsene, Ionut Cristian; Arslandok, Mesut; Audurier, Benjamin; Augustinus, Andre; Averbeck, Ralf Peter; Azmi, Mohd Danish; Bach, Matthias Jakob; Badala, Angela; Baek, Yong Wook; Bagnasco, Stefano; Bailhache, Raphaelle Marie; Bala, Renu; Baldisseri, Alberto;
Baltasar Dos Santos Pedrosa, Fernando; Baral, Rama Chandra; Barbano, Anastasia Maria; Barbera, Roberto; Barile, Francesco; Barnafoldi, Gergely Gabor; Barnby, Lee Stuart; Ramillien Barret, Valerie; Bartalini, Paolo; Barth, Klaus; Bartke, Jerzy Gustaw; Bartsch, Esther; Basile, Maurizio; Bastid, Nicole; Basu, Sumit; Bathen, Bastian; Batigne, Guillaume; Batista Camejo, Arianna; Batyunya, Boris; Batzing, Paul Christoph; Bearden, Ian Gardner; Beck, Hans; Bedda, Cristina; Belikov, Iouri; Bellini, Francesca; Bello Martinez, Hector; Bellwied, Rene; Belmont Iii, Ronald John; Belmont Moreno, Ernesto; Belyaev, Vladimir; Bencedi, Gyula; Beole, Stefania; Berceanu, Ionela; Bercuci, Alexandru; Berdnikov, Yaroslav; Berenyi, Daniel; Bertens, Redmer Alexander; Berzano, Dario; Betev, Latchezar; Bhasin, Anju; Bhat, Inayat Rasool; Bhati, Ashok Kumar; Bhattacharjee, Buddhadeb; Bhom, Jihyun; Bianchi, Livio; Bianchi, Nicola; Bianchin, Chiara; Bielcik, Jaroslav; Bielcikova, Jana; Bilandzic, Ante; Biswas, Rathijit; Biswas, Saikat; Bjelogrlic, Sandro; Blair, Justin Thomas; Blanco, Fernando; Blau, Dmitry; Blume, Christoph; Bock, Friederike; Bogdanov, Alexey; Boggild, Hans; Boldizsar, Laszlo; Bombara, Marek; Book, Julian Heinz; Borel, Herve; Borissov, Alexander; Borri, Marcello; Bossu, Francesco; Botta, Elena; Boettger, Stefan; Braun-Munzinger, Peter; Bregant, Marco; Breitner, Timo Gunther; Broker, Theo Alexander; Browning, Tyler Allen; Broz, Michal; Brucken, Erik Jens; Bruna, Elena; Bruno, Giuseppe Eugenio; Budnikov, Dmitry; Buesching, Henner; Bufalino, Stefania; Buncic, Predrag; Busch, Oliver; Buthelezi, Edith Zinhle; Bashir Butt, Jamila; Buxton, Jesse Thomas; Caffarri, Davide; Cai, Xu; Caines, Helen Louise; Calero Diaz, Liliet; Caliva, Alberto; Calvo Villar, Ernesto; Camerini, Paolo; Carena, Francesco; Carena, Wisla; Carnesecchi, Francesca; Castillo Castellanos, Javier Ernesto; Castro, Andrew John; Casula, Ester Anna Rita; Cavicchioli, Costanza; Ceballos Sanchez, Cesar; Cepila, Jan; Cerello, 
Piergiorgio; Cerkala, Jakub; Chang, Beomsu; Chapeland, Sylvain; Chartier, Marielle; Charvet, Jean-Luc Fernand; Chattopadhyay, Subhasis; Chattopadhyay, Sukalyan; Chelnokov, Volodymyr; Cherney, Michael Gerard; Cheshkov, Cvetan Valeriev; Cheynis, Brigitte; Chibante Barroso, Vasco Miguel; Dobrigkeit Chinellato, David; Cho, Soyeon; Chochula, Peter; Choi, Kyungeon; Chojnacki, Marek; Choudhury, Subikash; Christakoglou, Panagiotis; Christensen, Christian Holm; Christiansen, Peter; Chujo, Tatsuya; Chung, Suh-Urk; Zhang, Chunhui; Cicalo, Corrado; Cifarelli, Luisa; Cindolo, Federico; Cleymans, Jean Willy Andre; Colamaria, Fabio Filippo; Colella, Domenico; Collu, Alberto; Colocci, Manuel; Conesa Balbastre, Gustavo; Conesa Del Valle, Zaida; Connors, Megan Elizabeth; Contreras Nuno, Jesus Guillermo; Cormier, Thomas Michael; Corrales Morales, Yasser; Cortes Maldonado, Ismael; Cortese, Pietro; Cosentino, Mauro Rogerio; Costa, Filippo; Crochet, Philippe; Cruz Albino, Rigoberto; Cuautle Flores, Eleazar; Cunqueiro Mendez, Leticia; Dahms, Torsten; Dainese, Andrea; Danu, Andrea; Das, Debasish; Das, Indranil; Das, Supriya; Dash, Ajay Kumar; Dash, Sadhana; De, Sudipan; De Caro, Annalisa; De Cataldo, Giacinto; De Cuveland, Jan; De Falco, Alessandro; De Gruttola, Daniele; De Marco, Nora; De Pasquale, Salvatore; Deisting, Alexander; Deloff, Andrzej; Denes, Ervin Sandor; D'Erasmo, Ginevra; Dhankher, Preeti; Di Bari, Domenico; Di Mauro, Antonio; Di Nezza, Pasquale; Diaz Corchero, Miguel Angel; Dietel, Thomas; Dillenseger, Pascal; Divia, Roberto; Djuvsland, Oeystein; Dobrin, Alexandru Florin; Dobrowolski, Tadeusz Antoni; Domenicis Gimenez, Diogenes; Donigus, Benjamin; Dordic, Olja; Drozhzhova, Tatiana; Dubey, Anand Kumar; Dubla, Andrea; Ducroux, Laurent; Dupieux, Pascal; Ehlers Iii, Raymond James; Elia, Domenico; Engel, Heiko; Epple, Eliane; Erazmus, Barbara Ewa; Erdemir, Irem; Erhardt, Filip; Espagnon, Bruno; Estienne, Magali Danielle; Esumi, Shinichi; Eum, Jongsik; Evans, David; Evdokimov, 
Sergey; Eyyubova, Gyulnara; Fabbietti, Laura; Fabris, Daniela; Faivre, Julien; Fantoni, Alessandra; Fasel, Markus; Feldkamp, Linus; Felea, Daniel; Feliciello, Alessandro; Feofilov, Grigorii; Ferencei, Jozef; Fernandez Tellez, Arturo; Gonzalez Ferreiro, Elena; Ferretti, Alessandro; Festanti, Andrea; Feuillard, Victor Jose Gaston; Figiel, Jan; Araujo Silva Figueredo, Marcel; Filchagin, Sergey; Finogeev, Dmitry; Fionda, Fiorella; Fiore, Enrichetta Maria; Fleck, Martin Gabriel; Floris, Michele; Foertsch, Siegfried Valentin; Foka, Panagiota; Fokin, Sergey; Fragiacomo, Enrico; Francescon, Andrea; Frankenfeld, Ulrich Michael; Fuchs, Ulrich; Furget, Christophe; Furs, Artur; Fusco Girard, Mario; Gaardhoeje, Jens Joergen; Gagliardi, Martino; Gago Medina, Alberto Martin; Gallio, Mauro; Gangadharan, Dhevan Raja; Ganoti, Paraskevi; Gao, Chaosong; Garabatos Cuadrado, Jose; Garcia-Solis, Edmundo Javier; Gargiulo, Corrado; Gasik, Piotr Jan; Gauger, Erin Frances; Germain, Marie; Gheata, Andrei George; Gheata, Mihaela; Ghosh, Premomoy; Ghosh, Sanjay Kumar; Gianotti, Paola; Giubellino, Paolo; Giubilato, Piero; Gladysz-Dziadus, Ewa; Glassel, Peter; Gomez Coral, Diego Mauricio; Gomez Ramirez, Andres; Gonzalez Zamora, Pedro; Gorbunov, Sergey; Gorlich, Lidia Maria; Gotovac, Sven; Grabski, Varlen; Graczykowski, Lukasz Kamil; Graham, Katie Leanne; Grelli, Alessandro; Grigoras, Alina Gabriela; Grigoras, Costin; Grigoryev, Vladislav; Grigoryan, Ara; Grigoryan, Smbat; Grynyov, Borys; Grion, Nevio; Grosse-Oetringhaus, Jan Fiete; Grossiord, Jean-Yves; Grosso, Raffaele; Guber, Fedor; Guernane, Rachid; Guerzoni, Barbara; Gulbrandsen, Kristjan Herlache; Gulkanyan, Hrant; Gunji, Taku; Gupta, Anik; Gupta, Ramni; Haake, Rudiger; Haaland, Oystein Senneset; Hadjidakis, Cynthia Marie; Haiduc, Maria; Hamagaki, Hideki; Hamar, Gergoe; Harris, John William; Harton, Austin Vincent; Hatzifotiadou, Despina; Hayashi, Shinichi; Heckel, Stefan Thomas; Heide, Markus Ansgar; Helstrup, Haavard; Herghelegiu, Andrei 
Ionut; Herrera Corral, Gerardo Antonio; Hess, Benjamin Andreas; Hetland, Kristin Fanebust; Hilden, Timo Eero; Hillemanns, Hartmut; Hippolyte, Boris; Hosokawa, Ritsuya; Hristov, Peter Zahariev; Huang, Meidana; Humanic, Thomas; Hussain, Nur; Hussain, Tahir; Hutter, Dirk; Hwang, Dae Sung; Ilkaev, Radiy; Ilkiv, Iryna; Inaba, Motoi; Ippolitov, Mikhail; Irfan, Muhammad; Ivanov, Marian; Ivanov, Vladimir; Izucheev, Vladimir; Jacobs, Peter Martin; Jadhav, Manoj Bhanudas; Jadlovska, Slavka; Jahnke, Cristiane; Jang, Haeng Jin; Janik, Malgorzata Anna; Pahula Hewage, Sandun; Jena, Chitrasen; Jena, Satyajit; Jimenez Bustamante, Raul Tonatiuh; Jones, Peter Graham; Jung, Hyungtaik; Jusko, Anton; Kalinak, Peter; Kalweit, Alexander Philipp; Kamin, Jason Adrian; Kang, Ju Hwan; Kaplin, Vladimir; Kar, Somnath; Karasu Uysal, Ayben; Karavichev, Oleg; Karavicheva, Tatiana; Karayan, Lilit; Karpechev, Evgeny; Kebschull, Udo Wolfgang; Keidel, Ralf; Keijdener, Darius Laurens; Keil, Markus; Khan, Mohammed Mohisin; Khan, Palash; Khan, Shuaib Ahmad; Khanzadeev, Alexei; Kharlov, Yury; Kileng, Bjarte; Kim, Beomkyu; Kim, Do Won; Kim, Dong Jo; Kim, Hyeonjoong; Kim, Jinsook; Kim, Mimae; Kim, Minwoo; Kim, Se Yong; Kim, Taesoo; Kirsch, Stefan; Kisel, Ivan; Kiselev, Sergey; Kisiel, Adam Ryszard; Kiss, Gabor; Klay, Jennifer Lynn; Klein, Carsten; Klein, Jochen; Klein-Boesing, Christian; Kluge, Alexander; Knichel, Michael Linus; Knospe, Anders Garritt; Kobayashi, Taiyo; Kobdaj, Chinorat; Kofarago, Monika; Kollegger, Thorsten; Kolozhvari, Anatoly; Kondratev, Valerii; Kondratyeva, Natalia; Kondratyuk, Evgeny; Konevskikh, Artem; Kopcik, Michal; Kour, Mandeep; Kouzinopoulos, Charalampos; Kovalenko, Oleksandr; Kovalenko, Vladimir; Kowalski, Marek; Koyithatta Meethaleveedu, Greeshma; Kral, Jiri; Kralik, Ivan; Kravcakova, Adela; Kretz, Matthias; Krivda, Marian; Krizek, Filip; Kryshen, Evgeny; Krzewicki, Mikolaj; Kubera, Andrew Michael; Kucera, Vit; Kugathasan, Thanushan; Kuhn, Christian Claude; Kuijer, Paulus 
Gerardus; Kumar, Ajay; Kumar, Jitendra; Lokesh, Kumar; Kumar, Shyam; Kurashvili, Podist; Kurepin, Alexander; Kurepin, Alexey; Kuryakin, Alexey; Kushpil, Svetlana; Kweon, Min Jung; Kwon, Youngil; La Pointe, Sarah Louise; La Rocca, Paola; Lagana Fernandes, Caio; Lakomov, Igor; Langoy, Rune; Lara Martinez, Camilo Ernesto; Lardeux, Antoine Xavier; Lattuca, Alessandra; Laudi, Elisa; Lea, Ramona; Leardini, Lucia; Lee, Graham Richard; Lee, Seongjoo; Legrand, Iosif; Lehas, Fatiha; Lemmon, Roy Crawford; Lenti, Vito; Leogrande, Emilia; Leon Monzon, Ildefonso; Leoncino, Marco; Levai, Peter; Li, Shuang; Li, Xiaomei; Lien, Jorgen Andre; Lietava, Roman; Lindal, Svein; Lindenstruth, Volker; Lippmann, Christian; Lisa, Michael Annan; Ljunggren, Hans Martin; Lodato, Davide Francesco; Lonne, Per-Ivar; Loginov, Vitaly; Loizides, Constantinos; Lopez, Xavier Bernard; Lopez Torres, Ernesto; Lowe, Andrew John; Luettig, Philipp Johannes; Lunardon, Marcello; Luparello, Grazia; Ferreira Natal Da Luz, Pedro Hugo; Maevskaya, Alla; Mager, Magnus; Mahajan, Sanjay; Mahmood, Sohail Musa; Maire, Antonin; Majka, Richard Daniel; Malaev, Mikhail; Maldonado Cervantes, Ivonne Alicia; Malinina, Liudmila; Mal'Kevich, Dmitry; Malzacher, Peter; Mamonov, Alexander; Manko, Vladislav; Manso, Franck; Manzari, Vito; Marchisone, Massimiliano; Mares, Jiri; Margagliotti, Giacomo Vito; Margotti, Anselmo; Margutti, Jacopo; Marin, Ana Maria; Markert, Christina; Marquard, Marco; Martin, Nicole Alice; Martin Blanco, Javier; Martinengo, Paolo; Martinez Hernandez, Mario Ivan; Martinez-Garcia, Gines; Martinez Pedreira, Miguel; Martynov, Yevgen; Mas, Alexis Jean-Michel; Masciocchi, Silvia; Masera, Massimo; Masoni, Alberto; Massacrier, Laure Marie; Mastroserio, Annalisa; Masui, Hiroshi; Matyja, Adam Tomasz; Mayer, Christoph; Mazer, Joel Anthony; Mazzoni, Alessandra Maria; Mcdonald, Daniel; Meddi, Franco; Melikyan, Yuri; Menchaca-Rocha, Arturo Alejandro; Meninno, Elisa; Mercado-Perez, Jorge; Meres, Michal; Miake, Yasuo; 
Mieskolainen, Matti Mikael; Mikhaylov, Konstantin; Milano, Leonardo; Milosevic, Jovan; Minervini, Lazzaro Manlio; Mischke, Andre; Mishra, Aditya Nath; Miskowiec, Dariusz Czeslaw; Mitra, Jubin; Mitu, Ciprian Mihai; Mohammadi, Naghmeh; Mohanty, Bedangadas; Molnar, Levente; Montano Zetina, Luis Manuel; Montes Prado, Esther; Morando, Maurizio; Moreira De Godoy, Denise Aparecida; Perez Moreno, Luis Alberto; Moretto, Sandra; Morreale, Astrid; Morsch, Andreas; Muccifora, Valeria; Mudnic, Eugen; Muhlheim, Daniel Michael; Muhuri, Sanjib; Mukherjee, Maitreyee; Mulligan, James Declan; Gameiro Munhoz, Marcelo; Munzer, Robert Helmut; Murray, Sean; Musa, Luciano; Musinsky, Jan; Naik, Bharati; Nair, Rahul; Nandi, Basanta Kumar; Nania, Rosario; Nappi, Eugenio; Naru, Muhammad Umair; Nattrass, Christine; Nayak, Kishora; Nayak, Tapan Kumar; Nazarenko, Sergey; Nedosekin, Alexander; Nellen, Lukas; Ng, Fabian; Nicassio, Maria; Niculescu, Mihai; Niedziela, Jeremi; Nielsen, Borge Svane; Nikolaev, Sergey; Nikulin, Sergey; Nikulin, Vladimir; Noferini, Francesco; Nomokonov, Petr; Nooren, Gerardus; Cabanillas Noris, Juan Carlos; Norman, Jaime; Nyanin, Alexander; Nystrand, Joakim Ingemar; Oeschler, Helmut Oskar; Oh, Saehanseul; Oh, Sun Kun; Ohlson, Alice Elisabeth; Okatan, Ali; Okubo, Tsubasa; Olah, Laszlo; Oleniacz, Janusz; Oliveira Da Silva, Antonio Carlos; Oliver, Michael Henry; Onderwaater, Jacobus; Oppedisano, Chiara; Orava, Risto; Ortiz Velasquez, Antonio; Oskarsson, Anders Nils Erik; Otwinowski, Jacek Tomasz; Oyama, Ken; Ozdemir, Mahmut; Pachmayer, Yvonne Chiara; Pagano, Paola; Paic, Guy; Pajares Vales, Carlos; Pal, Susanta Kumar; Pan, Jinjin; Pandey, Ashutosh Kumar; Pant, Divyash; Papcun, Peter; Papikyan, Vardanush; Pappalardo, Giuseppe; Pareek, Pooja; Park, Woojin; Parmar, Sonia; Passfeld, Annika; Paticchio, Vincenzo; Patra, Rajendra Nath; Paul, Biswarup; Peitzmann, Thomas; Pereira Da Costa, Hugo Denis Antonio; Pereira De Oliveira Filho, Elienos; Peresunko, Dmitry Yurevich; Perez 
Lara, Carlos Eugenio; Perez Lezama, Edgar; Peskov, Vladimir; Pestov, Yury; Petracek, Vojtech; Petrov, Viacheslav; Petrovici, Mihai; Petta, Catia; Piano, Stefano; Pikna, Miroslav; Pillot, Philippe; Pinazza, Ombretta; Pinsky, Lawrence; Piyarathna, Danthasinghe; Ploskon, Mateusz Andrzej; Planinic, Mirko; Pluta, Jan Marian; Pochybova, Sona; Podesta Lerma, Pedro Luis Manuel; Poghosyan, Martin; Polishchuk, Boris; Poljak, Nikola; Poonsawat, Wanchaloem; Pop, Amalia; Porteboeuf, Sarah Julie; Porter, R Jefferson; Pospisil, Jan; Prasad, Sidharth Kumar; Preghenella, Roberto; Prino, Francesco; Pruneau, Claude Andre; Pshenichnov, Igor; Puccio, Maximiliano; Puddu, Giovanna; Pujahari, Prabhat Ranjan; Punin, Valery; Putschke, Jorn Henning; Qvigstad, Henrik; Rachevski, Alexandre; Raha, Sibaji; Rajput, Sonia; Rak, Jan; Rakotozafindrabe, Andry Malala; Ramello, Luciano; Rami, Fouad; Raniwala, Rashmi; Raniwala, Sudhir; Rasanen, Sami Sakari; Rascanu, Bogdan Theodor; Rathee, Deepika; Read, Kenneth Francis; Real, Jean-Sebastien; Redlich, Krzysztof; Reed, Rosi Jan; Rehman, Attiq Ur; Reichelt, Patrick Simon; Reidt, Felix; Ren, Xiaowen; Renfordt, Rainer Arno Ernst; Reolon, Anna Rita; Reshetin, Andrey; Rettig, Felix Vincenz; Revol, Jean-Pierre; Reygers, Klaus Johannes; Riabov, Viktor; Ricci, Renato Angelo; Richert, Tuva Ora Herenui; Richter, Matthias Rudolph; Riedler, Petra; Riegler, Werner; Riggi, Francesco; Ristea, Catalin-Lucian; Rivetti, Angelo; Rocco, Elena; Rodriguez Cahuantzi, Mario; Rodriguez Manso, Alis; Roeed, Ketil; Rogochaya, Elena; Rohr, David Michael; Roehrich, Dieter; Romita, Rosa; Ronchetti, Federico; Ronflette, Lucile; Rosnet, Philippe; Rossi, Andrea; Roukoutakis, Filimon; Roy, Ankhi; Roy, Christelle Sophie; Roy, Pradip Kumar; Rubio Montero, Antonio Juan; Rui, Rinaldo; Russo, Riccardo; Ryabinkin, Evgeny; Ryabov, Yury; Rybicki, Andrzej; Sadovskiy, Sergey; Safarik, Karel; Sahlmuller, Baldo; Sahoo, Pragati; Sahoo, Raghunath; Sahoo, Sarita; Sahu, Pradip Kumar; Saini, Jogender; 
Sakai, Shingo; Saleh, Mohammad Ahmad; Salgado Lopez, Carlos Alberto; Salzwedel, Jai Samuel Nielsen; Sambyal, Sanjeev Singh; Samsonov, Vladimir; Sandor, Ladislav; Sandoval, Andres; Sano, Masato; Sarkar, Debojit; Scapparone, Eugenio; Scarlassara, Fernando; Scharenberg, Rolf Paul; Schiaua, Claudiu Cornel; Schicker, Rainer Martin; Schmidt, Christian Joachim; Schmidt, Hans Rudolf; Schuchmann, Simone; Schukraft, Jurgen; Schulc, Martin; Schuster, Tim Robin; Schutz, Yves Roland; Schwarz, Kilian Eberhard; Schweda, Kai Oliver; Scioli, Gilda; Scomparin, Enrico; Scott, Rebecca Michelle; Seger, Janet Elizabeth; Sekiguchi, Yuko; Sekihata, Daiki; Selyuzhenkov, Ilya; Senosi, Kgotlaesele; Seo, Jeewon; Serradilla Rodriguez, Eulogio; Sevcenco, Adrian; Shabanov, Arseniy; Shabetai, Alexandre; Shadura, Oksana; Shahoyan, Ruben; Shangaraev, Artem; Sharma, Ankita; Sharma, Mona; Sharma, Monika; Sharma, Natasha; Shigaki, Kenta; Shtejer Diaz, Katherin; Sibiryak, Yury; Siddhanta, Sabyasachi; Sielewicz, Krzysztof Marek; Siemiarczuk, Teodor; Silvermyr, David Olle Rickard; Silvestre, Catherine Micaela; Simatovic, Goran; Simonetti, Giuseppe; Singaraju, Rama Narayana; Singh, Ranbir; Singha, Subhash; Singhal, Vikas; Sinha, Bikash; Sarkar - Sinha, Tinku; Sitar, Branislav; Sitta, Mario; Skaali, Bernhard; Slupecki, Maciej; Smirnov, Nikolai; Snellings, Raimond; Snellman, Tomas Wilhelm; Soegaard, Carsten; Soltz, Ron Ariel; Song, Jihye; Song, Myunggeun; Song, Zixuan; Soramel, Francesca; Sorensen, Soren Pontoppidan; Spacek, Michal; Spiriti, Eleuterio; Sputowska, Iwona Anna; Spyropoulou-Stassinaki, Martha; Srivastava, Brijesh Kumar; Stachel, Johanna; Stan, Ionel; Stefanek, Grzegorz; Stenlund, Evert Anders; Steyn, Gideon Francois; Stiller, Johannes Hendrik; Stocco, Diego; Strmen, Peter; Alarcon Do Passo Suaide, Alexandre; Sugitate, Toru; Suire, Christophe Pierre; Suleymanov, Mais Kazim Oglu; Suljic, Miljenko; Sultanov, Rishat; Sumbera, Michal; Symons, Timothy; Szabo, Alexander; Szanto De Toledo, Alejandro; 
Szarka, Imrich; Szczepankiewicz, Adam; Szymanski, Maciej Pawel; Tabassam, Uzma; Takahashi, Jun; Tambave, Ganesh Jagannath; Tanaka, Naoto; Tangaro, Marco-Antonio; Tapia Takaki, Daniel Jesus; Tarantola Peloni, Attilio; Tarhini, Mohamad; Tariq, Mohammad; Tarzila, Madalina-Gabriela; Tauro, Arturo; Tejeda Munoz, Guillermo; Telesca, Adriana; Terasaki, Kohei; Terrevoli, Cristina; Teyssier, Boris; Thaeder, Jochen Mathias; Thomas, Deepa; Tieulent, Raphael Noel; Timmins, Anthony Robert; Toia, Alberica; Trogolo, Stefano; Trubnikov, Victor; Trzaska, Wladyslaw Henryk; Tsuji, Tomoya; Tumkin, Alexandr; Turrisi, Rosario; Tveter, Trine Spedstad; Ullaland, Kjetil; Uras, Antonio; Usai, Gianluca; Utrobicic, Antonija; Vajzer, Michal; Valencia Palomo, Lizardo; Vallero, Sara; Van Der Maarel, Jasper; Van Hoorne, Jacobus Willem; Van Leeuwen, Marco; Vanat, Tomas; Vande Vyvre, Pierre; Varga, Dezso; Diozcora Vargas Trevino, Aurora; Vargyas, Marton; Varma, Raghava; Vasileiou, Maria; Vasiliev, Andrey; Vauthier, Astrid; Vechernin, Vladimir; Veen, Annelies Marianne; Veldhoen, Misha; Velure, Arild; Venaruzzo, Massimo; Vercellin, Ermanno; Vergara Limon, Sergio; Vernet, Renaud; Verweij, Marta; Vickovic, Linda; Viesti, Giuseppe; Viinikainen, Jussi Samuli; Vilakazi, Zabulon; Villalobos Baillie, Orlando; Villatoro Tello, Abraham; Vinogradov, Alexander; Vinogradov, Leonid; Vinogradov, Yury; Virgili, Tiziano; Vislavicius, Vytautas; Viyogi, Yogendra; Vodopyanov, Alexander; Volkl, Martin Andreas; Voloshin, Kirill; Voloshin, Sergey; Volpe, Giacomo; Von Haller, Barthelemy; Vorobyev, Ivan; Vranic, Danilo; Vrlakova, Janka; Vulpescu, Bogdan; Vyushin, Alexey; Wagner, Boris; Wagner, Jan; Wang, Hongkai; Wang, Mengliang; Watanabe, Daisuke; Watanabe, Yosuke; Weber, Michael; Weber, Steffen Georg; Wessels, Johannes Peter; Westerhoff, Uwe; Wiechula, Jens; Wikne, Jon; Wilde, Martin Rudolf; Wilk, Grzegorz Andrzej; Wilkinson, Jeremy John; Williams, Crispin; Windelband, Bernd Stefan; Winn, Michael Andreas; Yaldo, Chris G; 
Yang, Hongyan; Yang, Ping; Yano, Satoshi; Yasar, Cigdem; Yin, Zhongbao; Yokoyama, Hiroki; Yoo, In-Kwon; Yurchenko, Volodymyr; Yushmanov, Igor; Zaborowska, Anna; Zaccolo, Valentina; Zaman, Ali; Zampolli, Chiara; Correia Zanoli, Henrique Jose; Zaporozhets, Sergey; Zardoshti, Nima; Zarochentsev, Andrey; Zavada, Petr; Zavyalov, Nikolay; Zbroszczyk, Hanna Paulina; Zgura, Sorin Ion; Zhalov, Mikhail; Zhang, Haitao; Zhang, Xiaoming; Zhang, Yonghong; Zhang, Zuman; Zhao, Chengxin; Zhigareva, Natalia; Zhou, Daicui; Zhou, You; Zhou, Zhuo; Zhu, Hongsheng; Zhu, Jianhui; Zichichi, Antonino; Zimmermann, Alice; Zimmermann, Markus Bernhard; Zinovjev, Gennady; Zyzak, Maksym 2016-01-19 ALICE is one of four large experiments at the CERN Large Hadron Collider near Geneva, specially designed to study particle production in ultra-relativistic heavy-ion collisions. Located 52 meters underground with 28 meters of overburden rock, it has also been used to detect muons produced by cosmic ray interactions in the upper atmosphere. In this paper, we present the multiplicity distribution of these atmospheric muons and its comparison with Monte Carlo simulations. This analysis exploits the large size and excellent tracking capability of the ALICE Time Projection Chamber. A special emphasis is given to the study of high multiplicity events containing more than 100 reconstructed muons and corresponding to a muon areal density $\rho_{\mu} > 5.9$ m$^{-2}$. Similar events have been studied in previous underground experiments such as ALEPH and DELPHI at LEP. While these experiments were able to reproduce the measured muon multiplicity distribution with Monte Carlo simulations at low and intermediate multiplic... 7. On measuring the masses of pair-produced semi-invisibly decaying particles at hadron colliders International Nuclear Information System (INIS) Tovey, Daniel R.
2008-01-01 A straightforward new technique is introduced which enables measurement at hadron colliders of an analytical combination of the masses of pair-produced semi-invisibly decaying particles and their invisible decay products. The new technique makes use of the invariance under contra-linear Lorentz boosts of a simple combination of the transverse momentum components of the aggregate visible products of each decay chain. In the general case where the invariant masses of the visible decay products are non-zero it is shown that in principle the masses of both the initial particles from the hard scattering and the invisible particles produced in the decay chains can be determined independently. This application is likely to be difficult to realise in practice however due to the contamination of the final state with ISR jets. The technique may be of most use for measurements of SUSY particle masses at the LHC, however the technique should be applicable to any class of hadron collider events in which heavy particles of unknown mass are pair-produced and decay to semi-invisible final states 8. The Large Hadron Collider of CERN and the roadmap toward higher performance CERN Document Server Rossi, L 2012-01-01 The Large Hadron Collider is exploring the new frontier of particle physics. It is the largest and most ambitious scientific instrument ever built and 100 years after the Rutherford experiment it continues that tradition of “smashing atoms” to unveil the secret of the infinitely small. LHC makes use of all what we learnt in 40 years of hadron colliders, in particular of ISR and Sp-pbarS at CERN and Tevatron at Fermilab, and it is based on Superconductivity, discovered also 100 years ago. Designing, developing the technology, building and finally commissioning the LHC took more than twenty years. While LHC is now successfully running, we are already preparing the future for the next step. 
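The boost-invariant combination of transverse-momentum components described in the Tovey abstract above is known in later literature as the contransverse mass $M_{CT}$; a minimal illustrative sketch (the function name and interface are my own, and the formula is taken from the contransverse-mass literature rather than from this record):

```python
import math

def contransverse_mass(m1, pt1, m2, pt2):
    """Contransverse mass of two visible decay systems, given their
    invariant masses and transverse-momentum 2-vectors (px, py).
    M_CT^2 = m1^2 + m2^2 + 2*(ET1*ET2 + pT1.pT2); the + sign in the
    dot product makes the quantity invariant under equal-and-opposite
    (contra-linear) transverse boosts of the two decay chains."""
    et1 = math.sqrt(m1**2 + pt1[0]**2 + pt1[1]**2)
    et2 = math.sqrt(m2**2 + pt2[0]**2 + pt2[1]**2)
    dot = pt1[0] * pt2[0] + pt1[1] * pt2[1]
    mct2 = m1**2 + m2**2 + 2.0 * (et1 * et2 + dot)
    return math.sqrt(max(mct2, 0.0))
```

For massless visible systems the observable vanishes for back-to-back transverse momenta and is maximal for collinear ones; over many events, its endpoint encodes the combination of parent and invisible-particle masses discussed in the abstract.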
First, by increasing the LHC luminosity by a factor of five within the next ten years, and then by increasing its energy by a factor of two or more over the following twenty years. These LHC upgrades, in luminosity and energy, will be the super-exploitation of the CERN infrastructure and are the best investment that the HEP... 9. A new micro-strip tracker for the new generation of experiments at hadron colliders International Nuclear Information System (INIS) Dinardo, Mauro E.; Milan U. 2005-01-01 This thesis concerns the development and characterization of a prototype silicon micro-strip detector that can be used in the forward (high-rapidity) region of a hadron collider. These detectors must operate in a high-radiation environment without any significant degradation of their performance. The innovative feature of these detectors is the readout electronics, which, being completely data-driven, allows for the direct use of the detector information at the lowest level of the trigger. All the particle hits on the detector can be read out in real time without any external trigger and without any particular limitation due to dead time. In this way, all the detector information is available to elaborate a very selective trigger decision based on a fast reconstruction of tracks and vertex topology. These detectors, together with the new approach to the trigger, have been developed in the context of the BTeV R&D program; our aim was to define the features and the design parameters of an optimal experiment for heavy-flavour physics at hadron colliders 10. Vector-like quarks coupling discrimination at the LHC and future hadron colliders Science.gov (United States) Barducci, D.; Panizzi, L. 2017-12-01 The existence of new coloured states with spin one-half, i.e. extra-quarks, is a striking prediction of various classes of new physics models.
Should one of these states be discovered during the 13 TeV runs of the LHC or at future high-energy hadron colliders, understanding its properties will be crucial in order to shed light on the underlying model structure. Depending on the extra-quarks' quantum numbers under SU(2)$_L$, their couplings to Standard Model quarks and bosons have either a dominant left- or right-handed chiral component. By exploiting the polarisation properties of the top quarks arising from the decay of pair-produced extra quarks, we show how it is possible to discriminate between the two hypotheses in the whole discovery range currently accessible at the LHC, thus effectively narrowing down the possible interpretations of a discovered state in terms of new physics scenarios. Moreover, we estimate the discovery and discrimination power of future prototype hadron colliders with centre-of-mass energies of 33 and 100 TeV. 11. Measuring CP nature of top-Higgs couplings at the future Large Hadron electron Collider Directory of Open Access Journals (Sweden) Baradhwaj Coleppa 2017-07-01 Full Text Available We investigate the sensitivity of the top-Higgs coupling by considering the associated vertex as CP-phase ($\zeta_t$) dependent through the process $pe^- \to \bar{t}h\nu_e$ at the future Large Hadron electron Collider. In particular, the decay modes are taken to be $h \to b\bar{b}$ and $\bar{t} \to$ leptonic mode. Several distinct $\zeta_t$-dependent features are demonstrated by considering observables like cross sections, top-quark polarisation, the rapidity difference between $h$ and $\bar{t}$, and different angular asymmetries. Luminosity ($L$) dependent exclusion limits are obtained for $\zeta_t$ by considering significance based on fiducial cross sections at different $\sigma$-levels. For electron and proton beam energies of 60 GeV and 7 TeV respectively, at $L = 100$ fb$^{-1}$, the regions above $\pi/5 < \zeta_t \leq \pi$ are excluded at the $2\sigma$ confidence level, which reflects better sensitivity expected at the Large Hadron Collider.
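Luminosity-dependent exclusion limits of the kind quoted above are built from expected event counts; a schematic sketch, assuming a simple Gaussian $s/\sqrt{s+b}$ significance (the function name and all inputs are illustrative placeholders, not values from the paper):

```python
import math

def expected_significance(sigma_fb, lumi_fb_inv, efficiency, background):
    """Schematic Gaussian significance s / sqrt(s + b) for a signal
    with fiducial cross section sigma_fb (in fb), integrated
    luminosity lumi_fb_inv (in fb^-1), a selection efficiency, and
    an expected background count b."""
    s = sigma_fb * lumi_fb_inv * efficiency
    return s / math.sqrt(s + background)
```

Scanning such a significance over the model parameter (here the CP phase, which changes the fiducial cross section) and over luminosity is what produces exclusion contours at chosen $\sigma$-levels.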
With an appropriate error-fitting methodology we find that the SM top-Higgs coupling could be measured to an accuracy of $\kappa = 1.00 \pm 0.17\,(0.08)$ at $\sqrt{s} = 1.3\,(1.8)$ TeV for an ultimate $L = 1$ ab$^{-1}$. 12. Conceptual design of hollow electron lenses for beam halo control in the Large Hadron Collider Energy Technology Data Exchange (ETDEWEB) Stancari, Giulio [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Previtali, Valentina [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Valishev, Alexander [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Bruce, Roderik [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Redaelli, Stefano [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Rossi, Adriana [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Salvachua Ferrando, Belen [European Organization for Nuclear Research (CERN), Geneva (Switzerland)
Hardware specifications were based on the Tevatron devices and on preliminary engineering integration studies in the LHC machine. Required resources and a possible timeline were also outlined, together with a brief discussion of alternative halo-removal schemes and of other possible uses of electron lenses to improve the performance of the LHC. 13. Azimuthal coil size and field quality in the main CERN Large Hadron Collider dipoles Directory of Open Access Journals (Sweden) P. Ferracin 2002-06-01 Full Text Available Field quality in superconducting magnets strongly depends on the geometry of the coil. Fiberglass spacers (shims placed between the coil and the collars have been used to optimize magnetic and mechanical performances of superconducting magnets in large accelerators. A change in the shim thickness affects both the geometry of the coil and its state of compression (prestress under operational conditions. In this paper we develop a coupled magnetomechanical model of the main Large Hadron Collider dipole. This model allows us to evaluate the prestress dependence on the shim thickness and the map of deformations of the coil and the collars. Results of the model are compared to experimental measurements carried out in a dedicated experiment, where a magnet model has been reassembled 5 times with different shims. A good agreement is found between simulations and experimental data both on the mechanical behavior and on the field quality. We show that this approach allows us to improve this agreement with respect to models previously used in the literature. We finally evaluate the range of tunability that will be provided by shims during the production of the Large Hadron Collider main dipoles. 14. 
submitter Training Behavior of the Main Dipoles in the Large Hadron Collider CERN Document Server Todesco, Ezio; Bajko, Marta; Bottura, Luca; Bruning, Oliver; De Rijk, Gijs; Fessia, Paolo; Hagen, Per; Naour, Sandrine Le; Modena, Michele; Perez, Juan Carlos; Rossi, Lucio; Schmidt, Rudiger; Siemko, Andrzej; Tock, Jean-Philippe; Tommasini, Davide; Verweij, Arjan; Willering, Gerard 2017-01-01 In 2015, the 1232 Nb-Ti dipole magnets in the Large Hadron Collider (LHC) have been commissioned to 7.8 T operational field, with 172 quenches. More than 80% of these quenches occurred in the magnets of one of the three cold mass assemblers (3000 series), confirming what was already observed in 2008. In this paper, the recent analysis carried out on the quench performance of the Large Hadron Collider dipole magnets is reported, including the individual reception tests and the 2008 and 2015 commissioning campaigns, to better understand the above-mentioned anomaly and give an outlook for future operation and possible increase of the operational field. The lower part of the quench probability spectrum is compatible with Gaussian distributions; therefore, the training curve can be fit through error functions. An essential ingredient in this analysis is the estimate of the error to be associated with the training data due to sampling of rare events, allowing to test different hypothesis. Using this approach, an es... 15. Heavy-Quark Associated Production with One Hard Photon at Hadron Colliders Energy Technology Data Exchange (ETDEWEB) Hartanto, Heribertus Bayu [Florida State Univ., Tallahassee, FL (United States) 2013-01-01 We present the calculation of heavy-quark associated production with a hard photon at hadron colliders, namely$pp(p\\bar p) → Q\\bar Q +X$γ (for$Q=t,b$), at Next-to-Leading Order (NLO) in Quantum Chromodynamics (QCD). 
We study the impact of NLO QCD corrections on the total cross section and several differential distributions at both the Tevatron and the Large Hadron Collider (LHC). For$t\\bar t$γ production we observe a sizeable reduction of the renormalization and factorization scale dependence when the NLO QCD corrections are included, while for$b\\bar b$γ production a considerable scale dependence still persists at NLO in QCD. This is consistent with what emerges in similar processes involving$b$quarks and vector bosons and we explain its origin in detail. For$b\\bar b$γ production we study both the case in which at least one$b$jet and the case in which at least two$b$jets are observed. We perform the$b\\bar b$γ calculation using the Four Flavor Number Scheme (4FNS) and compare the case where at least one$b$jet is observed with the corresponding results from the Five Flavor Number Scheme (5FNS) calculation. Finally we compare our results for$p\\bar p →+b+X$γ with the Tevatron data. 16. The large Hadron Collider (LHC) and the search for the divine particle International Nuclear Information System (INIS) Sanchez, G. 2008-01-01 The large Hadron Collider (LHC) is a particle circular accelerator of 27 km of circumference. I t will be used to study the smallest known particles. Two beams of subatomic particles called hadrons either protons or lead ion- will travel in opposite directions inside the circular accelerator gaining energy with every lap. Physicists will use the LHC to recreate the conditions just after the Big Bang, by colliding the two beams had-on at very high energy. There are many theories as to what will result from these collisions, but what's for sure is that a brave new world of physics will emerge from the new accelerator, as knowledge in particle physics goes on to describe the working of the Universe. 
for decades, the Standard Model of particle physics has served physicists well as a means of understanding the fundamental laws of Nature, but it does not tell the whole story. Only experimental data using the higher energies reached by the LHC can push knowledge forward, challenging those who seek confirmation of established knowledge, and those who dare to dream beyond the paradigm. The Higgs boson, that complete the standard model, is waited to be found. (Author) 17. Conceptual design of hollow electron lenses for beam halo control in the Large Hadron Collider CERN Document Server Stancari, Giulio; Valishev, Alexander; Bruce, Roderik; Redaelli, Stefano; Rossi, Adriana; Salvachua Ferrando, Belen 2014-01-01 Collimation with hollow electron beams is a technique for halo control in high-power hadron beams. It is based on an electron beam (possibly pulsed or modulated in intensity) guided by strong axial magnetic fields which overlaps with the circulating beam in a short section of the ring. The concept was tested experimentally at the Fermilab Tevatron collider using a hollow electron gun installed in one of the Tevatron electron lenses. Within the US LHC Accelerator Research Program (LARP) and the European FP7 HiLumi LHC Design Study, we are proposing a conceptual design for applying this technique to the Large Hadron Collider at CERN. A prototype hollow electron gun for the LHC was built and tested. The expected performance of the hollow electron beam collimator was based on Tevatron experiments and on numerical tracking simulations. Halo removal rates and enhancements of halo diffusivity were estimated as a function of beam and lattice parameters. Proton beam core lifetimes and emittance growth rates were check... 18. 
Production of $HHH$ and $HHV$ ($V = \gamma, Z$) at the hadron colliders
Science.gov (United States)
Agrawal, Pankaj; Saha, Debashis; Shivaji, Ambresh
2018-02-01
We consider the production of two Higgs bosons in association with a gauge boson or another Higgs boson at the hadron colliders. We compute the cross sections and distributions for the processes $pp \to HHH$ and $HHZ$ within the standard model. In particular, we compute the gluon-gluon fusion one-loop contributions mediated via heavy quarks in the loop. This is the leading-order contribution to the $pp \to HHH$ process. For the $pp \to HHZ$ process, it is a next-to-next-to-leading-order (NNLO) contribution in the QCD coupling. We also compare this contribution to the next-to-leading-order (NLO) QCD contribution to this process. The NNLO contribution can be similar to the NLO contribution at the Large Hadron Collider (LHC), and significantly larger at machines with higher center-of-mass energy. We also study new-physics effects in these processes by treating the $ttH$, $HHH$, $HHHH$, $HZZ$, and $HHZZ$ interactions as anomalous. The anomalous couplings can enhance the cross sections significantly. The $gg \to HHH$ process is especially sensitive to the anomalous trilinear Higgs boson self-coupling. For the $gg \to HHZ$ process, there is some modest dependence on anomalous $HZZ$ couplings.

19. Phenomenology of the Higgs at the hadron colliders: from the standard model to supersymmetry
International Nuclear Information System (INIS)
Baglio, J.
2011-10-01
This thesis has been conducted in the context of one of the most important searches at current hadron colliders, the search for the Higgs boson, the remnant of electroweak symmetry breaking. We wish to study the phenomenology of the Higgs boson in both the Standard Model (SM) framework and its minimal supersymmetric extension (MSSM).
After a review of the Standard Model in the first part, and of the key reasons and ingredients for supersymmetry in general and the MSSM in particular in the third part, we present the calculation of the inclusive production cross sections of the Higgs boson in the main channels at the two current hadron colliders, the Fermilab Tevatron collider and the CERN Large Hadron Collider (LHC), starting with the SM case in the second part and presenting the MSSM results, where there are five Higgs bosons, focusing on the two main production channels, gluon-gluon fusion and bottom-quark fusion, in the fourth part. The main output of this calculation is the extensive study of the various theoretical uncertainties that affect the predictions: the scale uncertainties, which probe our ignorance of the higher-order terms in a fixed-order perturbative calculation; the parton distribution function (PDF) uncertainties and the related uncertainties from the value of the strong coupling constant; and the uncertainties coming from the use of an effective field theory to simplify the hard calculation. We then move on to the study of the Higgs decay branching ratios, which are also affected by diverse uncertainties. We present the combination of the production cross sections and decay branching fractions in some specific cases, which shows interesting consequences for the total theoretical uncertainties. We then confront the results with experiment and show that the theoretical uncertainties have a significant impact on the inferred limits, either in the SM search for the Higgs boson or on the MSSM

20. Particle production at energies available at the CERN Large Hadron Collider within an evolutionary model
Science.gov (United States)
Sinyukov, Yu. M.; Shapoval, V. M.
2018-06-01
The particle yields and particle number ratios in Pb+Pb collisions at the CERN Large Hadron Collider (LHC) energy $\sqrt{s_{NN}} = 2.76$ TeV are described within the integrated hydrokinetic model (iHKM) at two different equations of state (EoS) for quark-gluon matter and the two corresponding hadronization temperatures, T = 165 MeV and T = 156 MeV. The role of particle interactions at the final afterburner stage of the collision in particle production is investigated by comparing the results of full iHKM simulations with those where annihilation and other inelastic processes (except for resonance decays) are switched off after hadronization/particlization, similarly as in thermal models. The analysis supports the picture of continuous chemical freeze-out, in the sense that the corrections to the sudden chemical freeze-out results, which arise because of the inelastic reactions at subsequent evolution times, are noticeable and improve the description of particle number ratios. An important observation is that, although the particle number ratios with switched-off inelastic reactions are quite different at the different particlization temperatures adopted for the different equations of state to reproduce experimental data, the complete iHKM calculations bring very close results in both cases.

1. Governance of the International Linear Collider Project
Energy Technology Data Exchange (ETDEWEB)
Foster, B.; /Oxford U.; Barish, B.; /Caltech; Delahaye, J.P.; /CERN; Dosselli, U.; /INFN, Padua; Elsen, E.; /DESY; Harrison, M.; /Brookhaven; Mnich, J.; /DESY; Paterson, J.M.; /SLAC; Richard, F.; /Orsay, LAL; Stapnes, S.; /CERN; Suzuki, A.; /KEK, Tsukuba; Wormser, G.; /Orsay, LAL; Yamada, S.; /KEK, Tsukuba
2012-05-31
Recommendations for one path which could be followed to realize the ILC successfully are outlined. The International Linear Collider (ILC) is a unique endeavour in particle physics; fully international from the outset, it has no "host laboratory" to provide infrastructure and support. The realization of this project therefore presents unique challenges in the scientific, technical and political arenas. This document outlines the main questions that need to be answered if the ILC is to become a reality. It describes the methodology used to harness the wisdom displayed, and the lessons learned, in current and previous large international projects. From this basis, it suggests general principles and outlines a specific model to realize the ILC. It recognizes that there is no unique model for such a laboratory and that there are often several solutions to a particular problem. Nevertheless it proposes concrete solutions that the authors believe are currently the best choices, in order to stimulate discussion and catalyze proposals as to how to bring the ILC project to fruition. The ILC Laboratory would be set up by international treaty and be governed by a strong Council to whom a Director General and an associated Directorate would report. Council would empower the Director General to give strong management to the project. It would take its decisions in a timely manner, giving appropriate weight to the financial contributions of the member states. The ILC Laboratory would be set up for a fixed term, capable of extension by agreement of all the partners. The construction of the machine would be based on a Work Breakdown Structure and value engineering, and would have a common cash fund sufficiently large to allow the management flexibility to optimize the project's construction. Appropriate contingency

2.
Development of radiation-tolerant components for the quench detection system at the CERN Large Hadron Collider
Energy Technology Data Exchange (ETDEWEB)
Bitterling, Oliver
2017-04-03
This work describes the results of a three-year project to improve the radiation tolerance of the Quench Protection System of the CERN Large Hadron Collider. Radiation-induced premature beam aborts have been a limiting factor for accelerator availability in recent years. Furthermore, the future upgrade of the Large Hadron Collider to its High Luminosity phase will further increase the radiation load and has higher requirements for the overall machine availability. Therefore equipment groups such as the quench protection group have spent the last years redesigning many of their systems to fulfill those requirements. In support of the development of radiation-tolerant systems, several proton beam irradiation campaigns were conducted to determine the inherent radiation tolerance of a varied selection of electronic components. Using components from this selection, a new Quench Protection System for the 600 A corrector magnets was developed. The radiation tolerance of this system was further improved by developing a filter and error-correction system for all discovered failure modes. Furthermore, compliance of the new system with the specification was shown by simulating the behavior of the system using data taken from the irradiation campaigns. The resulting system has been operational since the beginning of 2016 and in the first 9 months of operation has not shown a single radiation-induced failure. Using results from simulations and irradiation campaigns, the predicted failure cross section for the full new 600 A Quench Protection System is (4.358 ± 0.564) × 10^-10 cm^2, which is one order of magnitude lower than the target set during the development of this system.

3.
Development of radiation-tolerant components for the quench detection system at the CERN Large Hadron Collider
International Nuclear Information System (INIS)
Bitterling, Oliver
2017-01-01

4.
Model independent particle mass measurements in missing energy events at hadron colliders
Science.gov (United States)
Park, Myeonghun
2011-12-01
This dissertation describes several new kinematic methods to measure the masses of new particles in events with missing transverse energy at hadron colliders. Each method relies on the measurement of some feature (a peak or an endpoint) in the distribution of a suitable kinematic variable. The first method makes use of the "Gator" variable $s_{min}$, whose peak provides a global and fully inclusive measure of the production scale of the new particles. In the early stage of the LHC, this variable can be used both as an estimator of, and a discriminator for, new physics over the Standard Model backgrounds. The next method studies the invariant mass distributions of the visible decay products from a cascade decay chain and the shapes and endpoints of those distributions. Given a sufficient number of endpoint measurements, one could in principle attempt to invert and solve for the mass spectrum. However, the non-linear character of the relevant coupled quadratic equations often leads to multiple solutions. In addition, there is a combinatorial ambiguity related to the ordering of the decay products from the cascade decay chain. We propose a new set of invariant mass variables which are less sensitive to these problems. We demonstrate how the new-particle mass spectrum can be extracted from the measurement of their kinematic endpoints. The remaining methods described in the dissertation are based on "transverse" invariant mass variables like the "Cambridge" transverse mass $M_{T2}$, the "Sheffield" contransverse mass $M_{CT}$ and their corresponding one-dimensional projections $M_{T2\perp}$, $M_{T2\parallel}$, $M_{CT\perp}$, and $M_{CT\parallel}$ with respect to the upstream transverse momentum $\vec{U}_T$. The main advantage of all these methods is that they can be applied to very short (single-stage) decay topologies, as well as to a subsystem of the observed event.
The methods can also be generalized to the case of non-identical missing particles, as demonstrated in Chapter 7. A complete set of analytical results for the

5. Top quark threshold scan and study of detectors for highly granular hadron calorimeters at future linear colliders
International Nuclear Information System (INIS)
Tesar, Michal
2014-01-01
Two major projects for future linear electron-positron colliders, the International Linear Collider (ILC) and the Compact Linear Collider (CLIC), are currently under development. These projects can be seen as complementary machines to the Large Hadron Collider (LHC) which permit further progress in high-energy physics research. They overlap considerably and share the same technological approaches. To meet the ambitious goals of precise measurements, new detector concepts like very finely segmented calorimeters are required. We study the precision of the top quark mass measurement achievable at CLIC and the ILC. The employed method was a $t\bar{t}$ pair production threshold scan. In this technique, simulated measurement points of the $t\bar{t}$ production cross section around the threshold are fitted with theoretical curves calculated at next-to-next-to-leading order. Detector effects, the influence of the beam energy spectrum and initial-state radiation of the colliding particles are taken into account. Assuming a total integrated luminosity of 100 fb^-1, our results show that the top quark mass in a theoretically well-defined 1S mass scheme can be extracted with a combined statistical and systematic uncertainty of less than 50 MeV. The other part of this work concerns experimental studies of highly granular hadron calorimeter (HCAL) elements. To meet the required high jet energy resolution at the future linear colliders, a large and finely segmented detector is needed. One option is to assemble a sandwich calorimeter out of many low-cost scintillators read out by silicon photomultipliers (SiPM).
We characterize the areal homogeneity of SiPM response with the help of a highly collimated beam of pulsed visible light. The spatial resolution of the experiment reaches the order of 1 μm and allows the study of the active-area structures within single SiPM microcells. Several SiPM models are characterized in terms of relative photon detection efficiency and crosstalk probability

6. Projects for ultra-high-energy circular colliders at CERN
CERN Document Server
Bogomyagkov, A V; Levichev, E B; Piminov, P A; Sinyatkin, S V; Shatilov, D N; Benedict, M; Oide, K; Zimmermann, F
2016-01-01
Within the Future Circular Collider (FCC) design study launched at CERN in 2014, it is envisaged to construct hadron (FCC-hh) and lepton (FCC-ee) ultra-high-energy machines aimed to replace the LHC upon the conclusion of its research program. The Budker Institute of Nuclear Physics is actively involved in the development of the FCC-ee electron-positron collider. The Crab Waist (CW) scheme of the collision region that has been proposed by INP and will be implemented at FCC-ee is expected to provide high luminosity over a broad energy range. The status and development of the FCC project are described, and its parameters and limitations are discussed for the lepton collider in particular.

7. Exotic nuclei arena in Japanese Hadron Project
International Nuclear Information System (INIS)
Nomura, T.
1990-04-01
A description is given of the radioactive beam facility proposed as one of the research arenas in the Japanese Hadron Project. The facility consists of a 1 GeV proton linac, an isotope separator on-line (ISOL) and a series of heavy-ion (HI) linacs. Various exotic nuclei, produced by the 1 GeV proton beam mainly via spallation processes in a thick target, are mass-separated by the ISOL with a high mass-resolving power and are injected into the HI linac with an energy of 1 keV/u. The acceleration is made in three stages using different types of linacs, i.e., split-coaxial RFQ,
Interdigital-H, and Alvarez, the maximum energy in each stage being 0.17, 1.4 and 6.5 MeV/u, respectively. A few examples of the scientific interest realized in this facility are briefly discussed. (author)

8. NLO supersymmetric QCD corrections to $t\bar{t}h^0$ associated production at hadron colliders
International Nuclear Information System (INIS)
Wu Peng; Ma Wengan; Hou Hongsheng; Zhang Renyou; Han Liang; Jiang Yi
2005-01-01
We calculate NLO QCD corrections to the production of the lightest neutral Higgs boson in association with a top quark pair at hadron colliders in the minimal supersymmetric standard model (MSSM). Our calculation shows that the total QCD correction significantly reduces the dependence on the renormalization/factorization scale. The relative correction from the SUSY QCD part approaches a constant if either $M_S$ or $m_{\tilde{g}}$ is heavy enough. The corrections are generally moderate (in the range of a few percent to 20%) and under control in most of the SUSY parameter space. The relative correction is clearly related to $m_{\tilde{g}}$, $A_t$ and $\mu$, but not very sensitive to $\tan\beta$ and $M_S$, at both the Tevatron and the LHC with our specified parameters.

9. The ERL-based Design of Electron-Hadron Collider eRHIC
Energy Technology Data Exchange (ETDEWEB)
Ptitsyn, Vadim; et al.
2016-06-01
Recent developments of the ERL-based design of the future high-luminosity electron-hadron collider eRHIC have focused on balancing the technological risks present in the design against the design cost. As a result, a lower-risk design has been adopted at a moderate cost increase. The modifications include a change of the main linac RF frequency, a reduced number of SRF cavity types and a modified electron spin transport using a spin rotator.
A luminosity-staged approach is being explored, with a Nominal design ($L \sim 10^{33}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$) that employs reduced electron current and could possibly be based on classical electron cooling, and then an Ultimate design ($L > 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$) that uses higher electron current and an innovative cooling technique (CeC). The paper describes the recent design modifications and presents the full status of the eRHIC ERL-based design.

10. Geometrical position of the Large Hadron Collider main dipole inside the cryostat
CERN Document Server
La China, M; Gubello, G; Hauviller, Claude; Scandale, Walter; Todesco, Ezio
2002-01-01
The superconducting dipole of the Large Hadron Collider (LHC) is a cylindrical structure made of a shrinking cylinder containing iron laminations and collared coils. This 15 m long structure, weighing about 28 t, is horizontally bent by 5 mrad. Its geometrical shape should be preserved from the assembly phase to the operational condition at cryogenic temperature. When inserted in its cryostat, the dipole cold mass is supported by three posts which also provide the thermal insulation. Sliding interfaces should minimize the interference between the dipole and the cryostat during cooldown and warm-up. Indeed, a possible non-linear response of the sliding interface can detrimentally affect the final dipole shape. This paper presents the results of dedicated tests investigating interferences and of specific simulations with a 3D finite element model (FEM) describing the mechanical behaviour of the dipole inside the cryostat. A comparison between measurements and FEM simulations is also discussed.

11.
Electron cloud buildup driving spontaneous vertical instabilities of stored beams in the Large Hadron Collider
Directory of Open Access Journals (Sweden)
Annalisa Romano
2018-06-01
At the beginning of the 2016 run, an anomalous beam instability was systematically observed at the CERN Large Hadron Collider (LHC). Its main characteristic was that it appeared spontaneously after the beams had been stored for several hours in collision at 6.5 TeV to provide data for the experiments, despite large chromaticity values and the high strength of the Landau-damping octupole magnets. The instability exhibited several features characteristic of those induced by the electron cloud (EC). Indeed, when the LHC operates with 25 ns bunch spacing, an EC builds up in a large fraction of the beam chambers, as revealed by several independent indicators. Numerical simulations have been carried out in order to investigate the role of the EC in the observed instabilities. It has been found that the beam intensity decay is unfavorable for beam stability when the LHC operates in a strong EC regime.

12. Photoproduction of vector mesons in proton-proton ultraperipheral collisions at the CERN Large Hadron Collider
Science.gov (United States)
Xie, Ya-Ping; Chen, Xurong
2018-05-01
Photoproduction of vector mesons is computed with the dipole model in proton-proton ultraperipheral collisions (UPCs) at the CERN Large Hadron Collider (LHC). The dipole model framework is employed in the calculations of vector meson production in diffractive processes. Parameters of the bCGC model are refitted to the latest inclusive deep inelastic scattering experimental data. Employing the bCGC model and the boosted Gaussian light-cone wave function for the vector mesons, we obtain predictions for the rapidity distributions of J/ψ and ψ(2S) mesons in proton-proton ultraperipheral collisions at the LHC. The predictions give a good description of the experimental data of LHCb.
Predictions for ϕ and ω mesons are also evaluated in this paper.

13. Grid computing in Pakistan and: opening to Large Hadron Collider experiments
International Nuclear Information System (INIS)
Batool, N.; Osman, A.; Mahmood, A.; Rana, M.A.
2009-01-01
A grid computing facility was developed at the sister institutes Pakistan Institute of Nuclear Science and Technology (PINSTECH) and Pakistan Institute of Engineering and Applied Sciences (PIEAS) in collaboration with the Large Hadron Collider (LHC) Computing Grid during the early years of the present decade. The grid facility PAKGRID-LCG2, one of the grid nodes in Pakistan, was developed employing mainly local means and is capable of supporting local and international research and computational tasks in the domain of the LHC Computing Grid. The functional status of the facility is presented in terms of the number of jobs performed. The facility provides a forum for local researchers in the field of high energy physics to participate in the LHC experiments and related activities at the European particle physics research laboratory (CERN), which is one of the best physics laboratories in the world. It also provides a platform for an emerging computing technology (CT). (author)

14. Search for Supersymmetry using Heavy Flavour Jets with the ATLAS Detector at the Large Hadron Collider
CERN Document Server
Tua, Alan
The Standard Model of particle physics, despite being extremely successful, is not the ultimate description of physics. The nature of dark matter is not well described, unification of the forces is not achieved, and the theory is plagued by a hierarchy problem. One of the proposed solutions to these issues is supersymmetry. This thesis describes numerous searches for supersymmetry carried out using the ATLAS detector at the Large Hadron Collider. In scenarios where R-parity is conserved, supersymmetric final states contain large amounts of missing transverse energy.
Furthermore, should supersymmetry correctly describe Nature, the scalar partners of the third-generation quarks might be the lightest scalar quarks. The searches reported here exploit these possibilities and make use of signatures which are rich in missing transverse energy and jets coming from heavy-flavour quarks. Searches are carried out for direct pair production of third-generation scalar quarks as well as gluino-mediated production of these p...

15. Electromigration driven failures on miniature silver fuses at the Large Hadron Collider
CERN Document Server
Trikoupis, Nikolaos; Perez Fontenla, Ana Teresa
2017-01-01
Spurious faults were observed on the miniature silver fuses of electronic cards used for the cryogenics instrumentation in the LHC (Large Hadron Collider) accelerator at CERN. By applying analytical tools and techniques such as scanning electron microscopy, spectrometry and Weibull reliability calculations, and with knowledge of the operating temperatures and operational time of each unit, the origin of the problem has now been understood and can be attributed to electromigration. The selected fuse was operated at moderate temperature and load conditions and was considered a "lifetime" component. However, it turned out to have a smaller than expected MTTF, with failures following a Weibull distribution with $\beta = 3.91$ and $\eta = 2323$. The literature describes the effects of electromigration extensively, but there are only limited references on the impact of this phenomenon on miniature silver fuses for electronic circuits.

16. EXERGY ANALYSIS OF THE CRYOGENIC HELIUM DISTRIBUTION SYSTEM FOR THE LARGE HADRON COLLIDER (LHC)
International Nuclear Information System (INIS)
Claudet, S.; Lebrun, Ph.; Tavian, L.; Wagner, U.
2010-01-01
The Large Hadron Collider (LHC) at CERN features the world's largest helium cryogenic system, spreading over the 26.7 km circumference of the superconducting accelerator.
With a total equivalent capacity of 145 kW at 4.5 K, including 18 kW at 1.8 K, the LHC refrigerators produce an unprecedented exergetic load, which must be distributed efficiently to the magnets in the tunnel over the 3.3 km length of each of the eight independent sectors of the machine. We recall the main features of the LHC cryogenic helium distribution system at the different temperature levels and present its exergy analysis, thus making it possible to quantify second-principle efficiency and to identify the main remaining sources of irreversibility.

17. The data acquisition and reduction challenge at the Large Hadron Collider
Science.gov (United States)
Cittolin, Sergio
2012-02-28
The Large Hadron Collider detectors are technological marvels which resemble, in functionality, three-dimensional digital cameras with 100 Mpixels, capable of observing proton-proton (pp) collisions at the crossing rate of 40 MHz. Data-handling limitations at the recording end imply the selection of only one pp event out of each 10^5. The readout and processing of this huge amount of information, along with the selection of the best approximately 200 events every second, is carried out by a trigger and data acquisition system, supplemented by a sophisticated control and monitoring system. This paper presents an overview of the challenges that the development of these systems has presented over the past 15 years. It concludes with a short historical perspective, some lessons learnt and a few thoughts on the future.

18. Slip-Stick Mechanism in Training the Superconducting Magnets in the Large Hadron Collider
CERN Document Server
Granieri, P P; Todesco, E
2011-01-01
Superconducting magnets can exhibit training quenches during successive powerings before reaching nominal performance. The slip-stick motion of the conductors is considered to be one of the mechanisms of training.
In this paper, we present a simple quantitative model in which training is described as a discrete dynamical system matching the equilibrium between the energy margin of the superconducting cable and the frictional energy released during the conductor motion. The model can be solved explicitly in the linearized case, showing that the short-sample limit is reached via a power law. Training phenomena have a large random component. A large set of data from the Large Hadron Collider magnet tests is postprocessed according to previously defined methods to extract an average training curve for dipoles and quadrupoles. These curves show the asymptotic power law predicted by the model. The curves are then fit through the model, which has two free parameters. The model shows good agreement over a large range, but ...

19. Slip-Stick Mechanism in Training the Superconducting Magnets in the Large Hadron Collider
CERN Document Server
Granieri, P P; Lorin, C
2011-01-01

20.
Quench protection diodes for the Large Hadron Collider LHC at CERN
International Nuclear Information System (INIS)
Hagedorn, D.; Naegele, W.
1992-01-01
For the quench protection of the main ring dipole and quadrupole magnets of the proposed Large Hadron Collider at CERN, two lines of approach have been pursued for the realization of a suitable high-current by-pass element at liquid helium temperature. Two commercially available diodes of the HERA type connected in parallel can easily meet the requirements if sufficiently good current sharing is imposed by current-balancing elements. Design criteria for these current-balancing elements are derived from individual diode characteristics. Single diode elements with a thin base region, newly developed in industry, have been successfully tested. The results are promising and, if the diodes can be made with reproducible characteristics, they will provide the preferred solution, especially in view of radiation hardness.

1. Precision Muon Tracking at Future Hadron Colliders with sMDT Chambers
CERN Document Server
Kortner, Oliver; Müller, Felix; Nowak, Sebastian; Richter, Robert
2016-01-01
Small-diameter muon drift tube (sMDT) chambers are a cost-effective technology for high-precision muon tracking. The rate capability of the sMDT chambers has been extensively tested at the Gamma Irradiation Facility at CERN in view of the rates expected at future high-energy hadron colliders. Results show that it fulfills the requirements over most of the acceptance of the muon detectors. The optimization of the read-out electronics to further increase the rate capability of the detectors is discussed. Chambers of this type are under construction for upgrades of the muon spectrometer of the ATLAS detector at high LHC luminosities. Design and construction procedures have been optimized for mass production while providing a precision of better than 10 micrometers in the sense-wire positions and the mechanical stability required to cover large areas.

2.
Search for excited electrons using the CMS detector at the Large Hadron Collider International Nuclear Information System (INIS) Jain, Shilpi 2013-01-01 The start of the Large Hadron Collider (LHC) opened a new window to the energy scale far beyond 1 TeV. There are different theories that predict new physics, and hence it is not clear what signature to expect in the data and which of the theories will describe it properly. However, new physics could as well manifest itself in ways no one has yet thought of. Thus we have implemented a Model Unspecific Search in CMS (MUSiC). This approach has been applied to the CMS data and we have obtained preliminary results. I will talk about the details of the analysis techniques, their implementation in analysing CMS data, the results obtained and the discussion of the discrepancies observed 3. Operational Experience and Performance with the ATLAS Pixel Detector at the Large Hadron Collider CERN Document Server Grummer, Aidan; The ATLAS collaboration 2018-01-01 The tracking performance of the ATLAS detector relies critically on its 4-layer Pixel Detector, which has undergone significant hardware and software upgrades to meet the challenges imposed by the higher collision energy, pileup and luminosity being delivered by the Large Hadron Collider, with record-breaking instantaneous luminosities of 2 × 10^34 cm^-2 s^-1 recently surpassed. The key status and performance metrics of the ATLAS Pixel Detector are summarised, and the operational experience and requirements to ensure optimum data quality and data taking efficiency will be described, with special emphasis on radiation damage experience. In particular, radiation damage effects will be shown, along with signs of degradation which are visible but do not yet impact the tracking performance (though they will): dE/dx, occupancy reduction with integrated luminosity, under-depletion effects with the IBL in 2016, and effects of annealing, which are not insignificant for the inner-most layers.
Therefore the offline software strat... 4. Exergy Analysis of the Cryogenic Helium Distribution System for the Large Hadron Collider (LHC) CERN Document Server Claudet, S; Tavian, L; Wagner, U 2010-01-01 The Large Hadron Collider (LHC) at CERN features the world’s largest helium cryogenic system, spreading over the 26.7 km circumference of the superconducting accelerator. With a total equivalent capacity of 145 kW at 4.5 K including 18 kW at 1.8 K, the LHC refrigerators produce an unprecedented exergetic load, which must be distributed efficiently to the magnets in the tunnel over the 3.3 km length of each of the eight independent sectors of the machine. We recall the main features of the LHC cryogenic helium distribution system at different temperature levels and present its exergy analysis, thus enabling us to qualify the second-principle efficiency and identify the main remaining sources of irreversibility. 5. Design, construction, and performance of superconducting magnet support posts for the Large Hadron Collider International Nuclear Information System (INIS) Blin, M.; Danielsson, H.; Evans, B.; Mathieu, M. 1994-01-01 Different support posts for the Large Hadron Collider (LHC) prototype superconducting magnets have been designed and manufactured. They have been evaluated both mechanically and thermally. The posts are made of a tubular section in composite materials, i.e. glass- or carbon-fibre and epoxy resin, with glued metallic heat intercepts and connections. Mechanical tests have been carried out with both radial and axial loads, before and after cooldown to working temperature. The design considerations and future developments concerning dimensions and other materials are also discussed in this paper. Thermal performance has been evaluated at 1.8 K, 5 K and 80 K in a precision heat leak measuring bench. The measurements have been carried out using calibrated thermal conductances ("heatmeters") and boil-off methods.
The measured performances of the posts have been compared with analytical predictions 6. First β-beating measurement and optics analysis for the CERN Large Hadron Collider Directory of Open Access Journals (Sweden) M. Aiba 2009-08-01 Full Text Available Proton beams were successfully steered through the entire ring of the CERN Large Hadron Collider (LHC) on 10 September 2008. A reasonable lifetime was achieved for the counterclockwise beam, namely beam 2, after the radiofrequency capture of the particle bunch was established. This provided the unique opportunity of acquiring turn-by-turn betatron oscillations for a maximum of 90 turns right at injection. Transverse coupling was not corrected and chromaticity was estimated to be large. Despite this largely constrained scenario, reliable optics measurements have been accomplished. These measurements together with the application of new algorithms for the reconstruction of optics errors have led to the identification of a dominant error source. 7. A novel technique for studying the Z boson transverse momentum distribution at hadron colliders International Nuclear Information System (INIS) Vesterinen, M.; Wyatt, T.R. 2009-01-01 We present a novel method for studying the shape of the Z boson transverse momentum distribution, Q_T, at hadron colliders in pp̄/pp → Z/γ* → l⁺l⁻. The Q_T is decomposed into two orthogonal components, one transverse and the other parallel to the di-lepton thrust axis. We show that the transverse component is almost insensitive to the momentum resolution of the individual leptons and is thus more precisely determined on an event-by-event basis than the Q_T. Furthermore, we demonstrate that a measurement of the distribution of this transverse component is substantially less sensitive to the dominant experimental systematics (resolution unfolding and Q_T dependence of event selection efficiencies) reported in previous measurements of the Q_T distribution. 8.
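The decomposition described in the Vesterinen-Wyatt record above can be sketched in the transverse plane: build the di-lepton thrust axis from the two lepton transverse momenta, then project Q_T onto and perpendicular to it. The exact axis convention used in the paper may differ; this follows the construction as summarised in the abstract:

```python
import numpy as np

def qt_components(pt_lep1, pt_lep2):
    """Split the dilepton transverse momentum Q_T into a component parallel
    (a_L) and perpendicular (a_T) to the dilepton thrust axis.

    pt_lep1, pt_lep2: 2D transverse-momentum vectors of the two leptons.
    """
    p1 = np.asarray(pt_lep1, dtype=float)
    p2 = np.asarray(pt_lep2, dtype=float)
    qt = p1 + p2
    diff = p1 - p2
    axis = diff / np.linalg.norm(diff)                    # unit thrust axis
    a_l = float(qt @ axis)                                # along the axis
    a_t = abs(float(qt[0] * axis[1] - qt[1] * axis[0]))   # transverse to it
    return a_t, a_l

# Nearly back-to-back leptons: Q_T is small and mostly transverse to the axis.
a_t, a_l = qt_components([30.0, 1.0], [-29.0, 1.5])
```

By construction a_T² + a_L² = |Q_T|², so the two components are an exact orthogonal decomposition; the abstract's point is that a_T is the experimentally better-measured one.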
Field and structural analysis of 56 mm aperture dipole model magnets for the Large Hadron Collider International Nuclear Information System (INIS) Song, Naihao; Yamamoto, Akira; Shintomi, Takakazu; Hirabayashi, Hiromi; Yamaoka, Hiroshi; Terashima, A. 1996-01-01 A new dipole model magnet design has been made with an aperture of 56 mm according to re-optimization of the accelerator design for the Large Hadron Collider (LHC) to be built at CERN. A feature of the symmetric/separate collar configuration in the new design proposed by KEK has been evaluated in terms of field quality and mechanical stability through the process of magnet fabrication, cool-down and excitation. The analysis has been carried out using the finite element analysis code ANSYS, coupling the field analysis with the structural analysis. The effect of the deformation due to the electromagnetic force on the field quality has also been investigated. Results of the analysis will be presented 9. Electron reconstruction and electroweak processes as tools to achieve precision measurements at a hadron collider: From CDF to CMS Energy Technology Data Exchange (ETDEWEB) Giolo-Nicollerat, Anne-Sylvie [Univ. of Lausanne (Switzerland) 2004-01-01 Precision measurements are an important aspect of the physics program at hadron colliders. This thesis describes a method, together with a first application, of how to achieve and use precision measurements at the LHC. The idea is to use reference processes to control the detector systematics and to constrain the theoretical predictions. 10. Design Concept and Parameters of a 15 T Nb3Sn Dipole Demonstrator for a 100 TeV Hadron Collider Energy Technology Data Exchange (ETDEWEB) Zlobin, A. V. [Fermilab; Andreev, N. [Fermilab; Barzi, E. [Fermilab; Kashikhin, V. V. [Fermilab; Novitski, I. [Fermilab 2015-06-01 FNAL has started the development of a 15 T Nb3Sn dipole demonstrator for a 100 TeV scale hadron collider.
This paper describes the design concept and parameters of the 15 T Nb3Sn dipole demonstrator. The dipole magnetic, mechanical and quench protection concepts and parameters are presented and discussed. 11. Head-On Beam-Beam Interactions in High-Energy Hadron Colliders. GPU-Powered Modelling of Nonlinear Effects CERN Document Server Støvneng, Jon Andreas 2017-08-15 The performance of high-energy circular hadron colliders, such as the Large Hadron Collider, is limited by beam-beam interactions. The strength of the beam-beam interactions will be higher after the upgrade to the High-Luminosity Large Hadron Collider, and also in the next generation of machines, such as the Future Circular Hadron Collider. The strongly nonlinear force between the two opposing beams causes diverging Hamiltonians and drives resonances, which can lead to a reduction of the lifetime of the beams. The nonlinearity makes the effect of the force difficult to study analytically, even at first order. Numerical models are therefore needed to evaluate the overall effect of different configurations of the machines. For this thesis, a new code named CABIN (Cuda-Accelerated Beam-beam Interaction) has been developed to study the limitations caused by the impact of strong beam-beam interactions. In particular, the evolution of the beam emittance and beam intensity has been monitored to study the impact quantitatively... 12. Probing two-photon decay widths of mesons at energies available at the CERN Large Hadron Collider (LHC) International Nuclear Information System (INIS) Bertulani, C. A. 2009-01-01 Meson production cross sections in ultraperipheral relativistic heavy ion collisions at the CERN Large Hadron Collider are revisited. The relevance of meson models and of exotic QCD states is discussed. This study includes states that have not been considered before in the literature. 13. Viewpoint: the End of the World at the Large Hadron Collider?
International Nuclear Information System (INIS) Peskin, Michael E. 2008-01-01 New arguments based on astrophysical phenomena constrain the possibility that dangerous black holes will be produced at the CERN Large Hadron Collider. On 8 August, the Large Hadron Collider (LHC) at CERN injected its first beams, beginning an experimental program that will produce proton-proton collisions at an energy of 14 TeV. Particle physicists are waiting expectantly. The reason is that the Standard Model of strong, weak, and electromagnetic interactions, despite its many successes, is clearly incomplete. Theory says that the holes in the model should be filled by new physics in the energy region that will be studied by the LHC. Some candidate theories are simple quick fixes, but the most interesting ones involve new concepts of spacetime waiting to be discovered. Look up the LHC on Wikipedia, however, and you will find considerable space devoted to safety concerns. At the LHC, we will probe energies beyond those explored at any previous accelerator, and we hope to create particles that have never been observed. Couldn't we, then, create particles that would actually be dangerous, for example, ones that would eat normal matter and eventually turn the earth into a blob of unpleasantness? It is morbid fun to speculate about such things, and candidates for such dangerous particles have been suggested. These suggestions have been analyzed in an article in Reviews of Modern Physics by Jaffe, Busza, Wilczek, and Sandweiss and excluded on the basis of constraints from observation and from the known laws of physics. These conclusions have been upheld by subsequent studies conducted at CERN. 14. 
Applications of SCET to the pair production of supersymmetric particles at hadron colliders Energy Technology Data Exchange (ETDEWEB) Broggio, Alessandro 2013-02-04 In this thesis we investigate the phenomenology of supersymmetric particles at hadron colliders beyond next-to-leading order (NLO) in perturbation theory. We discuss the foundations of Soft-Collinear Effective Theory (SCET) and, in particular, we explicitly construct the SCET Lagrangian for QCD. As an example, we discuss factorization and resummation for the Drell-Yan process in SCET. We use techniques from SCET to improve existing calculations of the production cross sections for slepton-pair production and top-squark-pair production at hadron colliders. As a first application, we implement soft-gluon resummation at next-to-next-to-next-to-leading logarithmic order (NNNLL) for slepton-pair production in the minimal supersymmetric extension of the Standard Model (MSSM). This approach resums large logarithmic corrections arising from the dynamical enhancement of the partonic threshold region caused by steeply falling parton luminosities. We evaluate the resummed invariant-mass distribution and total cross section for slepton-pair production at the Tevatron and LHC and we match these results, in the threshold region, onto NLO fixed-order calculations. As a second application we present the most precise predictions available for top-squark-pair production total cross sections at the LHC. These results are based on approximate NNLO formulas in fixed-order perturbation theory, which completely determine the coefficients multiplying the singular plus distributions. The analysis of the threshold region is carried out in pair invariant mass (PIM) kinematics and in single-particle inclusive (1PI) kinematics. We then match our results in the threshold region onto the exact fixed-order NLO results and perform a detailed numerical analysis of the total cross section. 15. 
arXiv Proceedings of the Sixth International Workshop on Multiple Partonic Interactions at the Large Hadron Collider CERN Document Server Astalos, R.; Bartalini, P.; Belyaev, I.; Bierlich, Ch.; Blok, B.; Buckley, A.; Ceccopieri, F.A.; Cherednikov, I.; Christiansen, J.R.; Ciangottini, D.; Deak, M.; Ducloue, B.; Field, R.; Gaunt, J.R.; Golec-Biernat, K.; Goerlich, L.; Grebenyuk, A.; Gueta, O.; Gunnellini, P.; Helenius, I.; Jung, H.; Kar, D.; Kepka, O.; Klusek-Gawenda, M.; Knutsson, A.; Kotko, P.; Krasny, M.W.; Kutak, K.; Lewandowska, E.; Lykasov, G.; Maciula, R.; Moraes, A.M.; Martin, T.; Mitsuka, G.; Motyka, L.; Myska, M.; Otwinowski, J.; Pierog, T.; Pleskot, V.; Rinaldi, M.; Schafer, W.; Siodmok, A.; Sjostrand, T.; Snigirev, A.; Stasto, A.; Staszewski, R.; Stebel, T.; Strikman, M.; Szczurek, A.; Treleani, D.; Trzebinski, M.; van Haevermaet, H.; van Hameren, A.; van Mechelen, P.; Waalewijn, W.; Wang, W.Y.; MPI@LHC 2014 2014-01-01 Multiple Partonic Interactions are often crucial for interpreting results obtained at the Large Hadron Collider (LHC). The quest for a sound understanding of the dynamics behind MPI - particularly at this time when the LHC is due to start its "Run II" operations - has focused the aim of this workshop. MPI@LHC2014 concentrated mainly on the phenomenology of LHC measurements whilst keeping in perspective those results obtained at previous hadron colliders. The workshop has also debated some of the state-of-the-art theoretical considerations and the modeling of MPI in Monte Carlo event generators. The topics debated in the workshop included: Phenomenology of MPI processes and multiparton distributions; Considerations for the description of MPI in Quantum Chromodynamics (QCD); Measuring multiple partonic interactions; Experimental results on inelastic hadronic collisions: underlying event, minimum bias, forward energy flow; Monte Carlo generator development and tuning; Connections with low-x phenomena, diffractio... 16. 
Forward-backward asymmetries of lepton pairs in events with a large-transverse-momentum jet at hadron colliders International Nuclear Information System (INIS) Aguila, F. del; Ametller, Ll.; Talavera, P. 2002-01-01 We discuss forward-backward charge asymmetries for lepton-pair production in association with a large-transverse-momentum jet at hadron colliders. The lepton charge asymmetry relative to the jet direction, A_FB^j, gives a new determination of the effective weak mixing angle sin²θ_eff^lept(M_Z²) with a statistical precision after cuts of ∼10⁻³ (8×10⁻³) at the LHC (Tevatron). This is to be compared with the current uncertainty at LEP and SLD from the asymmetries alone, 2×10⁻⁴. The identification of b jets also allows for the measurement of the bottom-quark Z asymmetry A_FB^b at hadron colliders, the resulting statistical precision for sin²θ_eff^lept(M_Z²) being ∼9×10⁻⁴ (2×10⁻² at the Tevatron), also lower than the reported precision at e⁺e⁻ colliders, 3×10⁻⁴ 17. High-Luminosity Large Hadron Collider (HL-LHC) Technical Design Report V. 0.1 CERN Document Server Béjar Alonso I.; Brüning O.; Fessia P.; Lamont M.; Rossi L.; Tavian L. 2017-01-01 The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its instantaneous luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor of ten. The LHC is already a highly complex and exquisitely optimised machine, so this upgrade must be carefully conceived and will require about ten years to implement.
The new configuration, known as High Luminosity LHC (HL-LHC), relies on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconduc... 18. High-Luminosity Large Hadron Collider (HL-LHC) Preliminary Design Report CERN Document Server Apollinari, G; Béjar Alonso, I; Brüning, O; Lamont, M; Rossi, L 2015-01-01 The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor of ten. The LHC is already a highly complex and exquisitely optimised machine so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cav... 19. Science and the Large Hadron Collider: a probe into instrumentation, periodization and classification CERN Document Server Roy, Arpita 2012-01-01 On September 10, 2008, the Large Hadron Collider (LHC) at CERN, Switzerland, began the world’s highest energy experiments as a probe into the structure of matter and forces of nature. Just nine days after the gala start-up, an explosion occurred in the LHC tunnel that brought the epic collider to a complete standstill.
In light of the catastrophic incident that disrupted the operation of the LHC, the paper investigates the relation of temporality to the cycle of work in science, and raises the question: What kind of methodological value should we ascribe to events such as crises or breakdowns? Drawing upon and integrating classical anthropological themes with two and a half years of fieldwork at the LHC particle accelerator complex, the paper explores how the incident in September, which affected the instrument, acquaints us with the distribution of work in the laboratory. The incident discloses that the organization of science is not a homogeneous ensemble, but marked by an enormous diversity of tasks and p... 20. Heavy-ion physics with the ALICE experiment at the CERN Large Hadron Collider. Science.gov (United States) Schukraft, J 2012-02-28 After close to 20 years of preparation, the dedicated heavy-ion experiment ALICE (A Large Ion Collider Experiment) took first data at the CERN Large Hadron Collider (LHC) accelerator with proton collisions at the end of 2009 and with lead nuclei at the end of 2010. After a short introduction to the physics of ultra-relativistic heavy-ion collisions, this article recalls the main design choices made for the detector and summarizes the initial operation and performance of ALICE. Physics results from this first year of operation concentrate on characterizing the global properties of typical, average collisions, both in proton-proton (pp) and nucleus-nucleus reactions, in the new energy regime of the LHC. The pp results differ, to a varying degree, from most quantum chromodynamics-inspired phenomenological models and provide the input needed to fine-tune their parameters. First results from Pb-Pb are broadly consistent with expectations based on lower energy data, indicating that the high-density matter created at the LHC, while much hotter and larger, still behaves like a very strongly interacting, almost perfect liquid. 1.
Drell-Yan and diphoton production at hadron colliders and low scale gravity model International Nuclear Information System (INIS) Cheung, Kingman; Landsberg, Greg 2000-01-01 In the model of Arkani-Hamed, Dimopoulos, and Dvali, where gravity is allowed to propagate in extra dimensions of very large size, virtual graviton exchange between the standard model particles can give rise to signatures that can be tested in collider experiments. We study these effects in dilepton and diphoton production at hadron colliders. Specifically, we examine the double differential cross section in the invariant mass and scattering angle, which is found to be useful in separating the gravity effects from the standard model. In this work, the sensitivity obtained using the double differential cross section is higher than that in previous studies based on single differential distributions. Assuming no excess of events over the standard model predictions, we obtain the following 95% confidence level lower limits on the effective Planck scale: 0.9-1.5 TeV in the Fermilab Tevatron run I, 1.3-2.5 TeV in run IIa, 1.7-3.5 TeV in run IIb, and 6.5-12.8 TeV at the CERN LHC. The range of numbers corresponds to the number of extra dimensions n ranging from 7 down to 2. © 2000 The American Physical Society 2. Minimum Bias Measurements with the ATLAS Detector at the CERN Large Hadron Collider CERN Document Server Leyton, M 2009-01-01 The Large Hadron Collider (LHC) at CERN will collide bunches of protons (p) at a center-of-mass energy of sqrt(s) = 14 TeV and a rate of 40 MHz. The unprecedented collision energy and interaction rate at the LHC will allow us to explore the TeV mass scale and take a major step forward in our understanding of the fundamental nature of matter. The initial physics run of the LHC is expected to start in November 2009 and continue until the end of 2010, with collisions at sqrt(s) = 900 GeV, 7 TeV and 10 TeV.
ATLAS (A Toroidal LHC ApparatuS) is a 4π general-purpose detector designed for studying LHC collisions at the particle level. The design and layout of ATLAS are intended to cover the wide spectrum of physics signatures that are possible at the TeV mass scale. Construction and installation of the ATLAS detector at CERN are now complete. This dissertation focuses on measuring the properties of inelastic pp interactions at the LHC with the ATLAS detector. A method for measuring the central pseudorapidity den... 3. Advanced superconducting technology for global science: The Large Hadron Collider at CERN International Nuclear Information System (INIS) Lebrun, Ph. 2002-01-01 The Large Hadron Collider (LHC), presently under construction at CERN, the European Organization for Nuclear Research near Geneva (Switzerland), will be, upon its completion in 2005 and for the next twenty years, the most advanced research instrument of the world's high-energy physics community, providing access to the energy frontier above 1 TeV per elementary constituent. Re-using the 26.7 km circumference tunnel and infrastructure of the past LEP electron-positron collider, operated until 2000, the LHC will make use of advanced superconducting technology - high-field Nb-Ti superconducting magnets operated in superfluid helium and a cryogenic ultra-high vacuum system - to bring into collision intense beams of protons and ions at unprecedented values of center-of-mass energy and luminosity (14 TeV and 10^34 cm^-2 s^-1, respectively, with protons). After some ten years of focussed R&D, the LHC components are presently series-built in industry and procured through world-wide collaboration. After briefly recalling the physics goals, performance challenges and design choices of the machine, we describe its major technical systems, with particular emphasis on relevant advances in the key technologies of superconductivity and cryogenics, and report on its construction progress. 5. Simulations and measurements of beam loss patterns at the CERN Large Hadron Collider CERN Document Server Bruce, R.; Boccone, V.; Bracco, C.; Brugger, M.; Cauchi, M.; Cerutti, F.; Deboy, D.; Ferrari, A.; Lari, L.; Marsili, A.; Mereghetti, A.; Mirarchi, D.; Quaranta, E.; Redaelli, S.; Robert-Demolaize, G.; Rossi, A.; Salvachua, B.; Skordis, E.; Tambasco, C.; Valentino, G.; Weiler, T.; Vlachoudis, V.; Wollmann, D.
2014-08-21 The CERN Large Hadron Collider (LHC) is designed to collide proton beams of unprecedented energy, in order to extend the frontiers of high-energy particle physics. During the first very successful running period in 2010-2013, the LHC was routinely storing protons at 3.5-4 TeV with a total beam energy of up to 146 MJ, and even higher stored energies are foreseen in the future. This puts extraordinary demands on the control of beam losses. An uncontrolled loss of even a tiny fraction of the beam could cause a superconducting magnet to undergo a transition into a normal-conducting state, or in the worst case cause material damage. Hence a multi-stage collimation system has been installed in order to safely intercept high-amplitude beam protons before they are lost elsewhere. To guarantee adequate protection from the collimators, a detailed theoretical understanding is needed. This article presents results of numerical simulations of the distribution of beam losses around the LHC that have leaked out of the co... 7. Fault Tracking of the Superconducting Magnet System at the CERN Large Hadron Collider CERN Document Server Griesemer, Tobias 2016-03-25 The Large Hadron Collider (LHC) at CERN is one of the most complex machines ever built. It is used to explore the mysteries of the universe by reproducing conditions of the big bang. High-energy particles are collided in particle detectors and, as a result of the collision process, secondary particles are created. New particles could be discovered during this process. The operation of such a machine is not straightforward and is subject to many different types of failures. A model of LHC operation needs to be defined in order to understand the impact of the various failures on availability. As an example, a typical operational cycle is described: the beams are first injected, then accelerated, and finally brought into collisions. Under nominal conditions, beams should be in collision (the so-called 'stable beams' period) for about 10 hours and then extracted onto a beam dump block. In case of a failure, the Machine Protection Systems ensure safe extraction of the beams. From the experience in LHC Run 1 (2009 - 20... 8.
VUV photoemission studies of candidate Large Hadron Collider vacuum chamber materials CERN Document Server Cimino, R; Baglin, V 1999-01-01 In the context of future accelerators and, in particular, the beam vacuum of the Large Hadron Collider (LHC), a 27 km circumference proton collider to be built at CERN, VUV synchrotron radiation (SR) has been used to study both qualitatively and quantitatively candidate vacuum chamber materials. Emphasis is given to showing that angle- and energy-resolved photoemission is an extremely powerful tool to address important issues relevant to the LHC, such as the emission of electrons that contributes to the creation of an electron cloud which may cause serious beam instabilities and unmanageable heat loads on the cryogenic system. Here we present not only the measured photoelectron yields from the proposed materials, prepared on an industrial scale, but also the energy and in some cases the angular dependence of the emitted electrons when excited with either a white light (WL) spectrum, simulating that in the arcs of the LHC, or monochromatic light in the photon energy range of interest. The effects on the materials ... 9. Cryogenic Studies for the Proposed CERN Large Hadron Electron Collider (LHeC) CERN Document Server Haug, F 2011-01-01 The LHeC (Large Hadron electron Collider) is a proposed future colliding beam facility for lepton-nucleon scattering particle physics at CERN. A new 60 GeV electron accelerator will be added to the existing 27 km circumference 7 TeV LHC for collisions of electrons with protons and heavy ions. Two basic design options are being pursued. The first is a circular accelerator housed in the existing LHC tunnel, which is referred to as the "Ring-Ring" version. Low-field normal conducting magnets guide the particle beam while superconducting (SC) RF cavities cooled to 2 K are installed at two opposite locations in the LHC tunnel to accelerate the beams.
For this version in addition a 10 GeV re-circulating SC injector will be installed. In total four refrigerators with cooling capacities between 1.2 kW and 3 kW @ 4.5 K are needed. The second option, referred to as the "Linac-Ring" version, consists of a race-track re-circulating energy-recovery type machine with two 1 km long straight acceleration sections. The 944 hi... 10. Supersymmetry phenomenology in the context of neutrino physics and the large hadron collider LHC Energy Technology Data Exchange (ETDEWEB) Hanussek, Marja 2012-05-15 Experimentally, it is well established that the Standard Model of particle physics requires an extension to accommodate the neutrino oscillation data, which indicates that at least two neutrinos are massive and that two of the neutrino mixing angles are large. Massive neutrinos are naturally present in a supersymmetric extension of the Standard Model which includes lepton-number violating terms (the B3 MSSM). Furthermore, supersymmetry stabilizes the hierarchy between the electroweak scale and the scale of unified theories or the Planck scale. In this thesis, we study in detail how neutrino masses are generated in the B3 MSSM. We present a mechanism by which the experimental neutrino oscillation data can be realized in this framework. Then we discuss how recently published data from the Large Hadron Collider (LHC) can be used to constrain the parameter space of this model. Furthermore, we present work on supersymmetric models where R-parity is conserved, considering scenarios with light stops in the light of collider physics and scenarios with near-massless neutralinos in connection with cosmological restrictions. 12. Importance of beam-beam tune spread to collective beam-beam instability in hadron colliders International Nuclear Information System (INIS) Jin Lihui; Shi Jicong 2004-01-01 In hadron colliders, electron-beam compensation of beam-beam tune spread has been explored for a reduction of beam-beam effects. In this paper, effects of the tune-spread compensation on beam-beam instabilities were studied with a self-consistent beam-beam simulation in model lattices of the Tevatron and the Large Hadron Collider. It was found that the reduction of the tune spread with the electron-beam compensation could induce a coherent beam-beam instability.
The merit of the compensation with different degrees of tune-spread reduction was evaluated based on beam-size growth. When two beams have a same betatron tune, the compensation could do more harm than good to the beams when only beam-beam effects are considered. If a tune split between two beams is large enough, the compensation with a small reduction of the tune spread could benefit beams as Landau damping suppresses the coherent beam-beam instability. The result indicates that nonlinear (nonintegrable) beam-beam effects could dominate beam dynamics and a reduction of beam-beam tune spread by introducing additional beam-beam interactions and reducing Landau damping may not improve the stability of beams 13. Indications of Conical Emission of Charged Hadrons at the BNL Relativistic Heavy Ion Collider Czech Academy of Sciences Publication Activity Database Abelev, B. I.; Aggarwal, M. M.; Ahammed, Z.; Anderson, B. D.; Arkhipkin, D.; Averichev, G. S.; Balewski, J.; Barannikova, O.; Barnby, L. S.; Baudot, J.; Baumgart, S.; Beavis, D.R.; Bellwied, R.; Benedosso, F.; Betancourt, M.J.; Betts, R. R.; Bhasin, A.; Bhati, A.K.; Bichsel, H.; Bielčík, Jaroslav; Bielčíková, Jana; Biritz, B.; Bland, L.C.; Bombara, M.; Bonner, B. E.; Botje, M.; Bouchet, J.; Braidot, E.; Brandin, A. V.; Bruna, E.; Bueltmann, S.; Burton, T. P.; Bysterský, Michal; Cai, X.Z.; Caines, H.; Sanchez, M.C.D.; Catu, O.; Cebra, D.; Cendejas, R.; Cervantes, M.C.; Chajecki, Z.; Chaloupka, Petr; Chattopadhyay, S.; Chen, H.F.; Chen, J.H.; Cheng, J.; Cherney, M.; Chikanian, A.; Choi, K.E.; Christie, W.; Clarke, R.F.; Codrington, M.J.M.; Corliss, R.; Cormier, T.M.; Coserea, R. M.; Cramer, J. G.; Crawford, H. J.; Das, D.; Dash, S.; Daugherity, M.; De Silva, L.C.; Dedovich, T. 
G.; DePhillips, M.; Derevschikov, A.A.; de Souza, R.D.; Didenko, L.; Djawotho, P.; Dunlop, J.C.; Mazumdar, M.R.D.; Edwards, W.R.; Efimov, L.G.; Elhalhuli, E.; Elnimr, M.; Emelianov, V.; Engelage, J.; Eppley, G.; Erazmus, B.; Estienne, M.; Eun, L.; Fachini, P.; Fatemi, R.; Fedorisin, J.; Feng, A.; Filip, P.; Finch, E.; Fine, V.; Fisyak, Y.; Gagliardi, C. A.; Gaillard, L.; Ganti, M. S.; Gangaharan, D.R.; Garcia-Solis, E.J.; Geromitsos, A.; Geurts, F.; Ghazikhanian, V.; Ghosh, P.; Gorbunov, Y.N.; Gordon, A.; Grebenyuk, O.; Grosnick, D.; Grube, B.; Guertin, S.M.; Guimaraes, K.S.F.F.; Gupta, A.; Gupta, N.; Guryn, W.; Haag, B.; Hallman, T.J.; Hamed, A.; Harris, J.W.; He, W.; Heinz, M.; Heppelmann, S.; Hippolyte, B.; Hirsch, A.; Hjort, E.; Hoffman, A.M.; Hoffmann, G.W.; Hofman, D.J.; Hollis, R.S.; Huang, H.Z.; Humanic, T.J.; Igo, G.; Iordanova, A.; Jacobs, P.; Jacobs, W.W.; Jakl, Pavel; Jena, C.; Jin, F.; Jones, C.L.; Jones, P.G.; Joseph, J.; Judd, E.G.; Kabana, S.; Kajimoto, K.; Kang, K.; Kapitán, Jan; Keane, D.; Kechechyan, A.; Kettler, D.; Khodyrev, V.Yu.; Kikola, D.P.; Kiryluk, J.; Kisiel, A.; Klein, S.R.; Knospe, A.G.; Kocoloski, A.; Koetke, D.D.; Kopytine, M.; Korsch, W.; Kotchenda, L.; Kushpil, Vasilij; Kravtsov, P.; Kravtsov, V.I.; Krueger, K.; Krus, M.; Kuhn, C.; Kumar, L.; Kurnadi, P.; Lamont, M.A.C.; Landgraf, J.M.; LaPointe, S.; Lauret, J.; Lebedev, A.; Lednický, Richard; Lee, Ch.; Lee, J.H.; Leight, W.; LeVine, M.J.; Li, N.; Li, C.; Li, Y.; Lin, G.; Lindenbaum, S.J.; Lisa, M.A.; Liu, F.; Liu, J.; Liu, L.; Ljubicic, T.; Llope, W.J.; Longacre, R.S.; Love, W.A.; Lu, Y.; Ludlam, T.; Ma, G.L.; Ma, Y.G.; Mahapatra, D.P.; Majka, R.; Mall, O.I.; Mangotra, L.K.; Manweiler, R.; Margetis, S.; Markert, C.; Matis, H.S.; Matulenko, Yu.A.; McShane, T.S.; Meschanin, A.; Milner, R.; Minaev, N.G.; Mioduszewski, S.; Mischke, A.; Mitchell, J.; Mohanty, B.; Morozov, D.A.; Munhoz, M. G.; Nandi, B.K.; Nattrass, C.; Nayak, T. 
K.; Nelson, J.M.; Netrakanti, P.K.; Ng, M.J.; Nogach, L.V.; Nurushev, S.B.; Odyniec, G.; Ogawa, A.; Okada, H.; Okorokov, V.; Olson, D.; Pachr, M.; Page, B.S.; Pal, S.K.; Pandit, Y.; Panebratsev, Y.; Panitkin, S.Y.; Pawlak, T.; Peitzmann, T.; Perevoztchikov, V.; Perkins, C.; Peryt, W.; Phatak, S.C.; Poljak, N.; Poskanzer, A.M.; Potukuchi, B.V.K.S.; Prindle, D.; Pruneau, C.; Pruthi, N.K.; Putschke, J.; Raniwala, R.; Raniwala, S.; Ray, R.L.; Redwine, R.; Reed, R.; Ridiger, A.; Ritter, H.G.; Roberts, J.B.; Rogachevskiy, O.V.; Romero, J.L.; Rose, A.; Roy, C.; Ruan, L.; Russcher, M.J.; Sahoo, R.; Sakrejda, I.; Sakuma, T.; Salur, S.; Sandweiss, J.; Sarsour, M.; Schambach, J.; Scharenberg, R.P.; Schmitz, N.; Seger, J.; Selyuzhenkov, I.; Seyboth, P.; Shabetai, A.; Shahaliev, E.; Shao, M.; Sharma, M.; Shi, S.S.; Shi, X.H.; Sichtermann, E.P.; Simon, F.; Singaraju, R.N.; Skoby, M.J.; Smirnov, N.; Snellings, R.; Sorensen, P.; Sowinski, J.; Spinka, H.M.; Srivastava, B.; Stadnik, A.; Stanislaus, T.D.S.; Staszak, D.; Strikhanov, M.; Stringfellow, B.; Suaide, A.A.P.; Suarez, M.C.; Subba, N.L.; Šumbera, Michal; Sun, X.M.; Sun, Y.; Sun, Z.; Surrow, B.; Symons, T.J.M.; de Toledo, A. S.; Takahashi, J.; Tang, A.H.; Tang, Z.; Tarnowsky, T.; Thein, D.; Thomas, J.H.; Tian, J.; Timmins, A.R.; Timoshenko, S.; Tokarev, M. V.; Trainor, T.A.; Tram, V.N.; Trattner, A.L.; Trentalange, S.; Tribble, R. E.; Tsai, O.D.; Ulery, J.; Ullrich, T.; Underwood, D.G.; Van Buren, G.; van Leeuwen, M.; Vander Molen, A.M.; Vanfossen, J.A.; Varma, R.; Vasconcelos, G.S.M.; Vasilevski, I.M.; Vasiliev, A. N.; Videbaek, F.; Vigdor, S.E.; Viyogi, Y. 
P.; Vokal, S.; Voloshin, S.A.; Wada, M.; Walker, M.; Wang, F.; Wang, G.; Wang, J.S.; Wang, Q.; Wang, X.; Wang, X.L.; Wang, Y.; Webb, G.; Webb, J.C.; Westfall, G.D.; Whitten, C.; Wieman, H.; Wissink, S.W.; Witt, R.; Wu, Y.; Tlustý, David; Xie, W.; Xu, N.; Xu, Q.H.; Xu, Y.; Xu, Z.; Yang, P.; Yepes, P.; Yip, K.; Yoo, I.K.; Yue, Q.; Zawisza, M.; Zbroszczyk, H.; Zhan, W.; Zhang, S.; Zhang, W.M.; Zhang, X.P.; Zhang, Y.; Zhang, Z.; Zhao, Y.; Zhong, C.; Zhou, J.; Zoulkarneev, R.; Zoulkarneeva, Y.; Zuo, J.X. 2009-01-01 Roč. 102, č. 5 (2009), 052302/1-052302/7 ISSN 0031-9007 R&D Projects: GA ČR GA202/07/0079 Institutional research plan: CEZ:AV0Z10480505; CEZ:AV0Z10100502 Keywords : PARTICLE CORRELATIONS * QCD MATTER * CONICAL EMISSION Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 7.328, year: 2009 14. Calculations of safe collimator settings and β^{*} at the CERN Large Hadron Collider Directory of Open Access Journals (Sweden) R. Bruce 2015-06-01 Full Text Available The first run of the Large Hadron Collider (LHC at CERN was very successful and resulted in important physics discoveries. One way of increasing the luminosity in a collider, which gave a very significant contribution to the LHC performance in the first run and can be used even if the beam intensity cannot be increased, is to decrease the transverse beam size at the interaction points by reducing the optical function β^{*}. However, when doing so, the beam becomes larger in the final focusing system, which could expose its aperture to beam losses. For the LHC, which is designed to store beams with a total energy of 362 MJ, this is critical, since the loss of even a small fraction of the beam could cause a magnet quench or even damage. Therefore, the machine aperture has to be protected by the collimation system. The settings of the collimators constrain the maximum beam size that can be tolerated and therefore impose a lower limit on β^{*}. 
In this paper, we present calculations to determine safe collimator settings and the resulting limit on β^{*}, based on available aperture and operational stability of the machine. Our model was used to determine the LHC configurations in 2011 and 2012, and it was found that β^{*} could be decreased significantly compared to the conservative model used in 2010. The gain in luminosity resulting from the decreased margins between collimators was more than a factor 2, and a further contribution from the use of realistic aperture estimates based on measurements was almost as large. This has played an essential role in the rapid and successful accumulation of experimental data in the LHC. 16. Simulations and measurements of beam loss patterns at the CERN Large Hadron Collider Science.gov (United States) Bruce, R.; Assmann, R. W.; Boccone, V.; Bracco, C.; Brugger, M.; Cauchi, M.; Cerutti, F.; Deboy, D.; Ferrari, A.; Lari, L.; Marsili, A.; Mereghetti, A.; Mirarchi, D.; Quaranta, E.; Redaelli, S.; Robert-Demolaize, G.; Rossi, A.; Salvachua, B.; Skordis, E.; Tambasco, C.; Valentino, G.; Weiler, T.; Vlachoudis, V.; Wollmann, D. 2014-08-01 The CERN Large Hadron Collider (LHC) is designed to collide proton beams of unprecedented energy, in order to extend the frontiers of high-energy particle physics. During the first very successful running period in 2010-2013, the LHC was routinely storing protons at 3.5-4 TeV with a total beam energy of up to 146 MJ, and even higher stored energies are foreseen in the future. This puts extraordinary demands on the control of beam losses. An uncontrolled loss of even a tiny fraction of the beam could cause a superconducting magnet to undergo a transition into a normal-conducting state, or in the worst case cause material damage. Hence a multistage collimation system has been installed in order to safely intercept high-amplitude beam protons before they are lost elsewhere. To guarantee adequate protection from the collimators, a detailed theoretical understanding is needed. This article presents results of numerical simulations of the distribution of beam losses around the LHC that have leaked out of the collimation system.
The studies include tracking of protons through the fields of more than 5000 magnets in the 27 km LHC ring over hundreds of revolutions, and Monte Carlo simulations of particle-matter interactions both in collimators and machine elements being hit by escaping particles. The simulation results agree typically within a factor 2 with measurements of beam loss distributions from the previous LHC run. Considering the complex simulation, which must account for a very large number of unknown imperfections, and in view of the total losses around the ring spanning over 7 orders of magnitude, we consider this an excellent agreement. Our results give confidence in the simulation tools, which are used also for the design of future accelerators. 18. Searches for Lorentz Violation in Top-Quark Production and Decay at Hadron Colliders Energy Technology Data Exchange (ETDEWEB) Whittington, Denver Wade [Indiana Univ., Bloomington, IN (United States)] 2012-07-01 We present a first-of-its-kind confirmation that the most massive known elementary particle obeys the special theory of relativity. Lorentz symmetry is a fundamental aspect of special relativity which posits that the laws of physics are invariant regardless of the orientation and velocity of the reference frame in which they are measured. Because this symmetry is a fundamental tenet of physics, it is important to test its validity in all processes. We quantify violation of this symmetry using the Standard-Model Extension framework, which predicts the effects that Lorentz violation would have on elementary particles and their interactions. The top quark is the most massive known elementary particle and has remained inaccessible to tests of Lorentz invariance until now. This model predicts a dependence of the production cross section for top and antitop quark pairs on sidereal time as the orientation of the experiment in which these events are produced changes with the rotation of the Earth.
Using data collected with the DØ detector at the Fermilab Tevatron Collider, we search for violation of Lorentz invariance in events involving the production of a $t\bar{t}$ pair. Within the experimental precision, we find no evidence for such a violation and set upper limits on parameters describing its possible strength within the Standard-Model Extension. We also investigate the prospects for extending this analysis using the ATLAS detector at the Large Hadron Collider which, because of the higher rate of $t\bar{t}$ events at that experiment, has the potential to improve the limits presented here. 19. Cryogenic studies for the proposed CERN large hadron electron collider (LHeC) Science.gov (United States) Haug, F.; LHeC Study Team, The 2012-06-01 The LHeC (Large Hadron electron Collider) is a proposed future colliding beam facility for lepton-nucleon scattering particle physics at CERN. A new 60 GeV electron accelerator will be added to the existing 27 km circumference 7 TeV LHC for collisions of electrons with protons and heavy ions. Two basic design options are being pursued. The first is a circular accelerator housed in the existing LHC tunnel which is referred to as the "Ring-Ring" version. Low field normal conducting magnets guide the particle beam while superconducting (SC) RF cavities cooled to 2 K are installed at two opposite locations at the LHC tunnel to accelerate the beams. For this version in addition a 10 GeV re-circulating SC injector will be installed. In total four refrigerators with cooling capacities between 1.2 kW and 3 kW @ 4.5 K are needed. The second option, referred to as the "Linac-Ring" version, consists of a race-track re-circulating energy-recovery type machine with two 1 km long straight acceleration sections. The 944 high field 2 K SC cavities dissipate 30 kW at CW operation. Eight 10 kW @ 4.5 K refrigerators are proposed.
The particle detector contains a combined SC solenoid and dipole forming the cold mass and an independent liquid argon calorimeter. Cooling is done with two individual small-sized cryoplants: a 4.5 K helium plant and an 87 K liquid nitrogen plant. 20. Study of cosmic ray events with high muon multiplicity using the ALICE detector at the CERN Large Hadron Collider Energy Technology Data Exchange (ETDEWEB) Collaboration: ALICE Collaboration 2016-01-01 ALICE is one of four large experiments at the CERN Large Hadron Collider near Geneva, specially designed to study particle production in ultra-relativistic heavy-ion collisions. Located 52 meters underground with 28 meters of overburden rock, it has also been used to detect muons produced by cosmic ray interactions in the upper atmosphere. In this paper, we present the multiplicity distribution of these atmospheric muons and its comparison with Monte Carlo simulations. This analysis exploits the large size and excellent tracking capability of the ALICE Time Projection Chamber. A special emphasis is given to the study of high multiplicity events containing more than 100 reconstructed muons and corresponding to a muon areal density ρ_μ > 5.9 m^{−2}. Similar events have been studied in previous underground experiments such as ALEPH and DELPHI at LEP. While these experiments were able to reproduce the measured muon multiplicity distribution with Monte Carlo simulations at low and intermediate multiplicities, their simulations failed to describe the frequency of the highest multiplicity events. In this work we show that the high multiplicity events observed in ALICE stem from primary cosmic rays with energies above 10^{16} eV and that the frequency of these events can be successfully described by assuming a heavy mass composition of primary cosmic rays in this energy range. The development of the resulting air showers was simulated using the latest version of QGSJET to model hadronic interactions.
This observation places significant constraints on alternative, more exotic, production mechanisms for these events. 1. MEKS: A program for computation of inclusive jet cross sections at hadron colliders Science.gov (United States) Gao, Jun; Liang, Zhihua; Soper, Davison E.; Lai, Hung-Liang; Nadolsky, Pavel M.; Yuan, C.-P. 2013-06-01 EKS is a numerical program that predicts differential cross sections for production of single-inclusive hadronic jets and jet pairs at next-to-leading order (NLO) accuracy in a perturbative QCD calculation. We describe MEKS 1.0, an upgraded EKS program with increased numerical precision, suitable for comparisons to the latest experimental data from the Large Hadron Collider and Tevatron. The program integrates the regularized parton-level matrix elements over the kinematical phase space for production of two and three partons using the VEGAS algorithm. It stores the generated weighted events in finely binned two-dimensional histograms for fast offline analysis. A user interface allows one to customize computation of inclusive jet observables. Results of a benchmark comparison of the MEKS program and the commonly used FastNLO program are also documented. Program Summary Program title: MEKS 1.0 Catalogue identifier: AEOX_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9234 No. of bytes in distributed program, including test data, etc.: 51997 Distribution format: tar.gz Programming language: Fortran (main program), C (CUBA library and analysis program). Computer: All. Operating system: Any UNIX-like system. RAM: ~300 MB Classification: 11.1.
External routines: LHAPDF (https://lhapdf.hepforge.org/) Nature of problem: Computation of differential cross sections for inclusive production of single hadronic jets and jet pairs at next-to-leading order accuracy in perturbative quantum chromodynamics. Solution method: Upon subtraction of infrared singularities, the hard-scattering matrix elements are integrated over available phase space using an optimized VEGAS algorithm. Weighted events are generated and filled 2. Large Hadron Collider at CERN: Beams Generating High-Energy-Density Matter CERN Document Server Tahir, N A; Shutov, A; Lomonosov, IV; Piriz, A R; Hoffmann, D H H; Deutsch, C; Fortov, V E 2009-01-01 This paper presents numerical simulations that have been carried out to study the thermodynamic and hydrodynamic response of a solid copper cylindrical target that is facially irradiated along the axis by one of the two Large Hadron Collider (LHC) 7 TeV/c proton beams. The energy deposition by protons in solid copper has been calculated using an established particle interaction and Monte Carlo code, FLUKA, which is capable of simulating all components of the particle cascades in matter, up to multi-TeV energies. This data has been used as input to a sophisticated two-dimensional hydrodynamic computer code, BIG2, which has been employed to study this problem. The prime purpose of these investigations was to assess the damage caused to the equipment if the entire LHC beam is lost at a single place. The FLUKA calculations show that the energy of protons will be deposited in solid copper within about 1 m assuming constant material parameters. Nevertheless, our hydrodynamic simulations have shown that the energy de... 3. Reliability of the Beam Loss Monitors System for the Large Hadron Collider at CERN CERN Document Server Guaglio, G; Santoni, C 2005-01-01 The energy stored in the Large Hadron Collider is unprecedented.
The impact of the beam particles can cause severe damage to the superconductive magnets, resulting in significant downtime for repairs. The Beam Loss Monitors System (BLMS) detects the secondary particle shower of the lost beam particles and initiates the extraction of the beam before any serious damage to the equipment can occur. This thesis defines the BLMS specifications in terms of reliability. The main goal is the design of a system minimizing both the probability of not detecting a dangerous loss and the number of false alarms generated. The reliability theory and techniques utilized are described. The prediction of the hazard rates, the testing procedures, the Failure Modes Effects and Criticalities Analysis and the Fault Tree Analysis have been used to provide an estimation of the probability to damage a magnet, of the number of false alarms and of the number of generated warnings. The weakest components in the BLMS have been pointed out.... 4. Prospects for Charged Higgs Boson Searches at the Large Hadron Collider with Early ATLAS Data CERN Document Server Lane, Jenna Louise; Jones, Roger; Yang, Un-Ki In some theories beyond the Standard Model, such as Supersymmetry, the two complex scalar doublets required for electro-weak symmetry breaking result in, amongst other new particles, two charged Higgs bosons, H±. This thesis presents the expected sensitivity to the H±, assuming proton-proton collisions at a centre-of-mass energy √s = 10 TeV provided by the Large Hadron Collider and recorded by the ATLAS experiment. At this centre-of-mass energy, top-quark pairs are produced with a predicted cross section of 401.6 pb, and the H± are potentially produced in the top quark decay t → bH+, which replaces the Standard Model decay t → bW+. The H± were assumed to decay to the quark pairs cs or sc, and the presence of the H± was inferred from a secondary peak in the W-boson mass distribution.
A kinematic fitting method was used to gain better separation between the W-boson and H± mass peaks, and a maximum likelihood method was used to set the expected upper limits on the branching ratio B ... 5. Thermomechanical response of Large Hadron Collider collimators to proton and ion beam impacts Directory of Open Access Journals (Sweden) Marija Cauchi 2015-04-01 Full Text Available The CERN Large Hadron Collider (LHC) is designed to accelerate and bring into collision high-energy protons as well as heavy ions. Accidents involving direct beam impacts on collimators can happen in both cases. The LHC collimation system is designed to handle the demanding requirements of high-intensity proton beams. Although proton beams have 100 times higher beam power than the nominal LHC lead ion beams, specific problems might arise in case of ion losses due to different particle-collimator interaction mechanisms when compared to protons. This paper investigates and compares direct ion and proton beam impacts on collimators, in particular tertiary collimators (TCTs), made of the tungsten heavy alloy INERMET® 180. Recent measurements of the mechanical behavior of this alloy under static and dynamic loading conditions at different temperatures have been done and used for realistic estimates of the collimator response to beam impact. Using these new measurements, a numerical finite element method (FEM) approach is presented in this paper. Sequential fast-transient thermostructural analyses are performed in the elastic-plastic domain in order to evaluate and compare the thermomechanical response of TCTs in case of critical beam load cases involving proton and heavy ion beam impacts. 6.
Probing high scale physics with top quarks at the Large Hadron Collider Science.gov (United States) Dong, Zhe With the Large Hadron Collider (LHC) running at the TeV scale, we expect to find deviations from the Standard Model in the experiments, and to understand the origin of these deviations. Being the heaviest elementary particle observed so far in the experiments, with a mass at the electroweak scale, the top quark is a powerful probe for new phenomena of high scale physics at the LHC. Therefore, we concentrate on studying high scale physics phenomena with top quark pair production or decay at the LHC. In this thesis, we study the discovery potential of string resonances decaying to the t/tbar final state, and examine the possibility of observing baryon-number-violating top-quark production or decay, at the LHC. We point out that string resonances for a string scale below 4 TeV can be detected via the t/tbar channel, by reconstructing center-of-mass frame kinematics of the resonances from either the t/tbar semi-leptonic decay or recent techniques of identifying highly boosted tops. For the study of baryon-number-violating processes, by a model independent effective approach and focusing on operators with minimal mass-dimension, we find that the corresponding effective coefficients could be directly probed at the LHC already with an integrated luminosity of 1 inverse femtobarn at 7 TeV, and further constrained with 30 (100) inverse femtobarns at 7 (14) TeV. 7. The CERN Large Hadron Collider as a tool to study high-energy density matter. Science.gov (United States) Tahir, N A; Kain, V; Schmidt, R; Shutov, A; Lomonosov, I V; Gryaznov, V; Piriz, A R; Temporal, M; Hoffmann, D H H; Fortov, V E 2005-04-08 The Large Hadron Collider (LHC) at CERN will generate two extremely powerful 7 TeV proton beams.
Each beam will consist of 2808 bunches with an intensity per bunch of 1.15×10^{11} protons, so that the total number of protons in one beam will be about 3×10^{14} and the total energy will be 362 MJ. Each bunch will have a duration of 0.5 ns and two successive bunches will be separated by 25 ns, while the power distribution in the radial direction will be Gaussian with a standard deviation σ = 0.2 mm. The total duration of the beam will be about 89 μs. Using a 2D hydrodynamic code, we have carried out numerical simulations of the thermodynamic and hydrodynamic response of a solid copper target that is irradiated with one of the LHC beams. These calculations show that only the first few hundred proton bunches will deposit a high specific energy of 400 kJ/g that will induce exotic states of high energy density in matter. 8. Jet signals for low mass strings at the large hadron collider. Science.gov (United States) Anchordoqui, Luis A; Goldberg, Haim; Nawata, Satoshi; Taylor, Tomasz R 2008-05-02 The mass scale M_s of superstring theory is an arbitrary parameter that can be as low as a few TeV if the Universe contains large extra dimensions. We propose a search for the effects of Regge excitations of fundamental strings at the CERN Large Hadron Collider (LHC), in the process pp → γ + jet. The underlying parton process is dominantly the single photon production in gluon fusion, gg → γg, with open string states propagating in intermediate channels. If the photon mixes with the gauge boson of the baryon number, which is a common feature of D-brane quivers, the amplitude appears already at the string disk level. It is completely determined by the mixing parameter, and it is otherwise model (compactification) independent. Even for relatively small mixing, 100 fb^{-1} of LHC data could probe deviations from standard model physics, at a 5σ significance, for M_s as large as 3.3 TeV. 9.
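The stored-energy figures quoted repeatedly in these records (2808 bunches of 1.15×10^11 protons per beam at 7 TeV, giving about 3×10^14 protons and 362 MJ per beam) are easy to cross-check. The short Python sketch below redoes the arithmetic; only the bunch and energy numbers from the abstracts and the standard eV-to-joule conversion are used.

```python
# Cross-check of the LHC beam parameters quoted in the abstracts above.
EV_TO_J = 1.602176634e-19      # joules per electronvolt (CODATA, exact)

bunches_per_beam = 2808        # nominal LHC filling scheme
protons_per_bunch = 1.15e11    # nominal bunch intensity
proton_energy_eV = 7e12        # 7 TeV per proton

protons_per_beam = bunches_per_beam * protons_per_bunch
stored_energy_J = protons_per_beam * proton_energy_eV * EV_TO_J

print(f"protons per beam: {protons_per_beam:.2e}")        # ~3.2e14
print(f"stored energy:    {stored_energy_J / 1e6:.0f} MJ")  # ~362 MJ
```

The result reproduces both quoted values, which is a useful sanity check when comparing records that cite slightly different run conditions (e.g. the 146 MJ figure for 3.5-4 TeV operation in Run 1).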
ηc Hadroproduction at Large Hadron Collider Challenges NRQCD Factorization
Directory of Open Access Journals (Sweden)
Butenschoen, Mathias
2017-01-01
We report on our analysis [1] of prompt ηc meson production, measured by the LHCb Collaboration at the Large Hadron Collider, within the framework of non-relativistic QCD (NRQCD) factorization up to the sub-leading order in both the QCD coupling constant αs and the relative velocity v of the bound heavy quarks. We thereby convert various sets of J/ψ and χc,J long-distance matrix elements (LDMEs), determined by different groups in J/ψ and χc,J yield and polarization fits, to ηc and hc production LDMEs, making use of the NRQCD heavy quark spin symmetry. The resulting predictions for ηc hadroproduction in all cases greatly overshoot the LHCb data, while the color-singlet model contributions alone would indeed be sufficient. We investigate the consequences for the universality of the LDMEs, and show how the observed tensions remain in follow-up works by other groups.

10. Final implementation, commissioning, and performance of embedded collimator beam position monitors in the Large Hadron Collider
Directory of Open Access Journals (Sweden)
Gianluca Valentino
2017-08-01
During Long Shutdown 1, 18 Large Hadron Collider (LHC) collimators were replaced with a new design, in which beam position monitor (BPM) pick-up buttons are embedded in the collimator jaws. The BPMs provide a direct measurement of the beam orbit at the collimators, and therefore can be used to align the collimators more quickly than with the standard technique, which relies on feedback from beam losses. Online orbit measurements also allow for reducing operational margins in the collimation hierarchy placed specifically to cater for unknown orbit drifts, therefore decreasing the β* and increasing the luminosity reach of the LHC.
In this paper, the results from the commissioning of the embedded BPMs in the LHC are presented. The data acquisition and control software architectures are reviewed. A comparison with the standard alignment technique is provided, together with a fill-to-fill analysis of the measured orbit in different machine modes, which will also be used to determine suitable beam interlocks for a tighter collimation hierarchy.

11. Performance Analysis of the Ironless Inductive Position Sensor in the Large Hadron Collider Collimators Environment
CERN Document Server
Danisi, Alessandro; Losito, Roberto
2015-01-01
The Ironless Inductive Position Sensor (I2PS) has been introduced as a valid alternative to Linear Variable Differential Transformers (LVDTs) when external magnetic fields are present. Potential applications of this linear position sensor can be found in critical systems such as nuclear plants, tokamaks, satellites and particle accelerators. This paper analyzes the performance of the I2PS in the harsh environment of the collimators of the Large Hadron Collider (LHC), where position uncertainties of less than 20 μm are demanded in the presence of nuclear radiation and external magnetic fields. The I2PS has been targeted for installation for LHC Run 2, in order to solve the magnetic interference problem which standard LVDTs are experiencing. The paper describes in detail the chain of systems which belong to the new I2PS measurement task, their impact on the sensor performance and their possible further optimization. The I2PS performance is analyzed evaluating the position uncertainty (on 30 s), the magnetic im...

12. Soft functions for generic jet algorithms and observables at hadron colliders
Energy Technology Data Exchange (ETDEWEB)
Bertolini, Daniele [Lawrence Berkeley National Laboratory, Berkeley, CA (United States). Theoretical Physics Group; California Univ., Berkeley, CA (United States). Berkeley Center for Theoretical Physics]; Kolodrubetz, Daniel; Stewart, Iain W.
[Massachusetts Institute of Technology, Cambridge, MA (United States). Center for Theoretical Physics]; Duff, Neill [Los Alamos National Laboratory, NM (United States). Theoretical Div.; Massachusetts Institute of Technology, Cambridge, MA (United States). Center for Theoretical Physics]; Pietrulewicz, Piotr; Tackmann, Frank J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group]; Waalewijn, Wouter J. [NIKHEF, Amsterdam (Netherlands). Theory Group; Amsterdam Univ. (Netherlands). Inst. for Theoretical Physics Amsterdam and Delta Inst. for Theoretical Physics]
2017-07-15
We introduce a method to compute one-loop soft functions for exclusive N-jet processes at hadron colliders, allowing for different definitions of the algorithm that determines the jet regions and of the measurements in those regions. In particular, we generalize the N-jettiness hemisphere decomposition of T. T. Joutennus et al. (2011) in a manner that separates the dependence on the jet boundary from the observables measured inside the jet and beam regions. Results are given for several factorizable jet definitions, including anti-k_T, XCone, and other geometric partitionings. We calculate explicitly the soft functions for angularity measurements, including jet mass and jet broadening, in pp → L + 1 jet, and explore the differences for various jet vetoes and algorithms. This includes a consistent treatment of rapidity divergences when applicable. We also compute analytic results for these soft functions in an expansion for a small jet radius R. We find that the small-R results, including corrections up to O(R^2), accurately capture the full behavior over a large range of R.

13. The upgraded Pixel Detector of the ATLAS Experiment for Run 2 at the Large Hadron Collider
Energy Technology Data Exchange (ETDEWEB)
Backhaus, M., E-mail: [email protected]
2016-09-21
During Run 1 of the Large Hadron Collider (LHC), the ATLAS Pixel Detector has shown excellent performance.
The ATLAS collaboration took advantage of the first long shutdown of the LHC during 2013 and 2014 and extracted the ATLAS Pixel Detector from the experiment, brought it to the surface and maintained the services. This included the installation of new service quarter panels, the repair of cables, and the installation of the new Diamond Beam Monitor (DBM). Additionally, a completely new innermost pixel detector layer, the Insertable B-Layer (IBL), was constructed and installed in May 2014 between a new smaller beam pipe and the existing Pixel Detector. With a radius of 3.3 cm, the IBL is located extremely close to the interaction point. Therefore, a new readout chip and two new sensor technologies (planar and 3D) are used in the IBL. In order to achieve the best possible physics performance, the material budget was improved with respect to the existing Pixel Detector. This is realized using lightweight staves for mechanical support and a CO2-based cooling system. This paper describes the improvements achieved during the maintenance of the existing Pixel Detector as well as the performance of the IBL during the construction and commissioning phase. Additionally, first results obtained during LHC Run 2, demonstrating the distinguished tracking performance of the new four-layer ATLAS Pixel Detector, are presented.

14. Development of N+ in P pixel sensors for a high-luminosity large hadron collider
Science.gov (United States)
Kamada, Shintaro; Yamamura, Kazuhisa; Unno, Yoshinobu; Ikegami, Yoichi
2014-11-01
Hamamatsu Photonics K. K. is developing an N+ in a p planar pixel sensor with high radiation tolerance for the high-luminosity large hadron collider (HL-LHC). The N+ in the p planar pixel sensor is a candidate for the HL-LHC and offers the advantages of high radiation tolerance at a reasonable price compared with the N+ in an n planar sensor, the three-dimensional sensor, and the diamond sensor.
However, the N+ in the p planar pixel sensor still presents some problems that need to be solved, such as its slim edge and the danger of sparks between the sensor and readout integrated circuit. We are now attempting to solve these problems with wafer-level processes, which is important for mass production. To date, we have obtained a 250-μm edge with an applied bias voltage of 1000 V. To protect against high-voltage sparks from the edge, we suggest some possible designs for the N+ edge.

15. Development of N+ in P pixel sensors for a high-luminosity large hadron collider
International Nuclear Information System (INIS)
Kamada, Shintaro; Yamamura, Kazuhisa; Unno, Yoshinobu; Ikegami, Yoichi
2014-01-01
Highlights: • We achieved a tolerance of 1000 V with a 250-μm edge by Al2O3 side-wall passivation. • The above is a wafer-level process and suitable for mass production. • For edge-spark protection, we suggest an N+ edge with an isolation

16.
Mathematical formulation to predict the harmonics of the superconducting Large Hadron Collider magnets
Directory of Open Access Journals (Sweden)
Nicholas Sammut
2006-01-01
CERN is currently assembling the LHC (Large Hadron Collider) that will accelerate and bring into collision 7 TeV protons for high energy physics. Such a superconducting magnet-based accelerator can be controlled only when the field errors of production and installation of all magnetic elements are known to the required accuracy. The ideal way to compensate the field errors obviously is to have direct diagnostics on the beam. For the LHC, however, a system solely based on beam feedback may be too demanding. The present baseline for the LHC control system hence requires an accurate forecast of the magnetic field and the multipole field errors to reduce the burden on the beam-based feedback. The field model is the core of this magnetic prediction system, which we call the field description for the LHC (FIDEL). The model will provide the forecast of the magnetic field at a given time, magnet operating current, magnet ramp rate, magnet temperature, and magnet powering history. The model is based on the identification and physical decomposition of the effects that contribute to the total field in the magnet aperture of the LHC dipoles. Each effect is quantified using data obtained from series measurements, and modeled theoretically or empirically depending on the complexity of the physical phenomena involved. This paper presents the developments of the new finely tuned magnetic field model and, using the data accumulated through series tests to date, evaluates its accuracy and predictive capabilities over a sector of the machine.

17. Correlation between magnetic field quality and mechanical components of the Large Hadron Collider main dipoles
International Nuclear Information System (INIS)
Bellesia, B.
2006-12-01
The 1234 superconducting dipoles of the Large Hadron Collider, working at a cryogenic temperature of 1.9 K, must guarantee a high quality magnetic field to steer the particles inside the beam pipe. Magnetic field measurements are a powerful way to detect assembly faults that could limit magnet performance. The aim of this thesis is the analysis of these measurements performed at room temperature during the production of the dipoles. In a large scale production the ideal situation would be that all the magnets produced are identical. However, all the components constituting a magnet are produced with certain tolerances and the assembly procedures are optimized during the production; because of this, reality drifts away from the ideal situation. We collected geometrical data on the main components (superconducting cables, coil copper wedges and austenitic steel coil collars) and, coupling them with adequate electro-magnetic models, we reconstructed a multipolar field representation of the LHC dipoles, defining their critical components and assembly procedures. This thesis is composed of three main parts: 1) the influence of the geometry and of the assembly procedures of the dipoles on the quality of the magnetic field; 2) the use of measurements performed on the dipoles in the assembly step in order to solve production issues and to understand the behaviour of coils during assembly; and 3) a theoretical study of the uncertain harmonic components of the magnetic field in order to assess the dipole production.

18. Finite-width effects in unstable-particle production at hadron colliders
International Nuclear Information System (INIS)
Falgari, P.; Signer, A.; Zuerich Univ.
2013-03-01
We present a general formalism for the calculation of finite-width contributions to the differential production cross sections of unstable particles at hadron colliders.
In this formalism, which employs an effective-theory description of unstable-particle production and decay, the matrix element computation is organized as a gauge-invariant expansion in powers of Γ_X/m_X, with Γ_X and m_X the width and mass of the unstable particle. This framework allows for a systematic inclusion of off-shell and non-factorizable effects whilst at the same time keeping the computational effort minimal compared to a full calculation in the complex-mass scheme. As a proof-of-concept example, we give results for an NLO calculation of top-antitop production in the q q̄ partonic channel. As already found in a similar calculation of single-top production, the finite-width effects are small for the total cross section, as expected from the naive counting ∼ Γ_t/m_t ∼ 1%. However, they can be sizeable, in excess of 10%, close to the edges of certain kinematical distributions. The dependence of the results on the mass renormalization scheme, and its implication for a precise extraction of the top-quark mass, is also discussed.

19. Finite-width effects in unstable-particle production at hadron colliders
Energy Technology Data Exchange (ETDEWEB)
Falgari, P. [Utrecht Univ. (Netherlands). Inst. for Theoretical Physics; Utrecht Univ. (Netherlands). Spinoza Inst.]; Papanastasiou, A.S. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany)]; Signer, A. [Paul Scherrer Institut, Villigen (Switzerland); Zuerich Univ. (Switzerland). Inst. for Theoretical Physics]
2013-03-15

20. Search for Microscopic Black Hole Signatures at the Large Hadron Collider
Energy Technology Data Exchange (ETDEWEB)
Tsang, Ka Vang [Brown Univ., Providence, RI (United States)]
2011-05-01
A search for microscopic black hole production and decay in proton-proton collisions at a center-of-mass energy of 7 TeV has been conducted using the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider. A data sample corresponding to a total integrated luminosity of 35 pb⁻¹, taken by the CMS Collaboration in 2010, has been analyzed. A novel background estimation for multi-jet events beyond the TeV scale has been developed. Good agreement with standard model backgrounds, dominated by multi-jet production, is observed for various final-state multiplicities. Using a semi-classical approximation, upper limits on the minimum black hole mass at 95% confidence level are set in the range of 3.5–4.5 TeV for values of the Planck scale up to 3 TeV. Model-independent limits are provided to further constrain microscopic black hole models with additional regions of parameter space, as well as new physics models with multiple energetic final states.
These are the first limits on microscopic black hole production at a particle accelerator.

1. Busca por dimensões extras no detector CMS do Large Hadron Collider [Search for extra dimensions in the CMS detector of the Large Hadron Collider]
CERN Document Server
Fernandez Perez Tomei, T R
We present the results of a search for experimental evidence of extra space dimensions in proton-proton collisions at a center-of-mass energy of 7 TeV, provided by the Large Hadron Collider accelerator. We analyzed the data taken by the Compact Muon Solenoid experiment during 2011, which total an integrated luminosity of 4.7 fb⁻¹. The Randall-Sundrum warped extra dimensions model was used as a standard benchmark for the experimental signatures which could be observed in the data in the presence of extra dimensions. The studied reaction is pp → G* → ZZ → qqνν, where G* is the first Randall-Sundrum graviton resonance. The observations agree with the Standard Model predictions. In the absence of experimental signals of extra dimensions, we put limits on the parameters of the Randall-Sundrum model. Upper limits, at 95% confidence, for the cross-section of processes which would raise the event yield in the channel considered are in the [0.047–0.021] pb range, for resonance masses in the [1000...

2. The new Level-1 Topological Trigger for the ATLAS experiment at the Large Hadron Collider
CERN Document Server
AUTHOR|(INSPIRE)INSPIRE-00047907; The ATLAS collaboration
2017-01-01
At the CERN Large Hadron Collider, the world's most powerful particle accelerator, the ATLAS experiment records high-energy proton collisions to investigate the properties of fundamental particles. These collisions take place at a rate of 40 MHz, and the ATLAS trigger system selects the interesting ones, reducing the rate to 1 kHz and allowing for their storage and subsequent offline analysis. The ATLAS trigger system is organized in two levels, with increasing degrees of detail and accuracy.
The first level trigger reduces the event rate to 100 kHz with a decision latency of less than 2.5 microseconds. It is composed of the calorimeter trigger, the muon trigger and the central trigger processor. A new component of the first-level trigger was introduced in 2015: the Topological Processor (L1Topo). It allows the use of detailed real-time information from the Level-1 calorimeter and muon systems, to compute advanced kinematic quantities using state-of-the-art FPGA processors, and to select interesting events based on several com...

3. Superconductivity: Its Role, Its Success and Its Setbacks in the Large Hadron Collider of CERN
CERN Document Server
Rossi, L
2010-01-01
The Large Hadron Collider (LHC), the particle accelerator at CERN, Geneva, is the largest and probably the most complex scientific instrument ever built. Superconductivity plays a key role because the accelerator is based on the reliable operation of almost 10,000 superconducting magnets cooled by 130 tonnes of helium at 1.9 and 4.2 K and containing a total stored magnetic energy of about 15,000 MJ (including detector magnets). The characteristics of the 1200 tonnes of high quality Nb-Ti cables have met the severe requirements in terms of critical currents, magnetization and inter-strand resistance; the magnets are built with an unprecedented uniformity, about 0.01% variation in field quality among the 1232 main dipoles, which are 15 m in length and 30 tonnes in weight. The results of this 20-year-long enterprise will be discussed together with problems faced during construction and commissioning and their remedies. Particular reference is made to the severe incident which occurred nine days after the spectacul...

4.
The Local Helium Compound Transfer Lines for the Large Hadron Collider Cryogenic System
CERN Document Server
Parente, C; Munday, A; Wiggins, P
2006-01-01
The cryogenic system for the Large Hadron Collider (LHC) under construction at CERN will include twelve new local helium transfer lines distributed among five LHC points in underground caverns. These lines, being manufactured and installed by industry, will connect the cold boxes of the 4.5-K refrigerators and the 1.8-K refrigeration units to the cryogenic interconnection boxes. The lines have a maximum length of 30 m and may possess either small or large re-distribution units to allow connection to the interface ports. Due to space restrictions the lines may have complex routings and require several elbowed sections. The lines consist of a vacuum jacket, a thermal shield and either three or four helium process pipes. Specific internal and external supporting and compensation systems were designed for each line to allow for thermal contraction of the process pipes (or of the vacuum jacket, in case of a break in the insulation vacuum) and to minimise the forces applied to the interface equipment. Whenever possible, f...

5. Operational Experience and Consolidations for the Current Lead Control Valves of the Large Hadron Collider
CERN Document Server
Perin, A; Pirotte, O; Krieger, B; Widmer, A
2012-01-01
The Large Hadron Collider superconducting magnets are powered by more than 1400 gas-cooled current leads ranging from 120 A to 13000 A. The gas flow required by the leads is controlled by solenoid proportional valves with dimensions from DN 1.8 mm to DN 10 mm. During the first months of operation, signs of premature wear were found in the active parts of the valves. This created major problems for the functioning of the current leads, threatening the availability of the LHC.
Following the detection of the problems, a series of measures was implemented to keep the LHC running, to launch a development program to solve the premature wear problem and to prepare for a global consolidation of the gas flow control system. This article first describes the difficulties encountered and the measures taken to ensure continuous operation of the LHC during the first year of operation. The development of new friction-free valves is then presented, along with the consolidation program and the test equipment developed to val...

6. Development of an abort gap monitor for the large hadron collider
International Nuclear Information System (INIS)
Beche, J.-F.; Byrd, J.; De Santis, S.; Placidi, M.; Turner, W.; Zolotorev, M.
2004-01-01
The Large Hadron Collider (LHC), presently under construction at CERN, requires monitoring of the parasitic charge in the 3.3 ms long gap in the machine fill structure. This gap, referred to as the abort gap, corresponds to the rise time of the abort kicker magnets. Any circulating particle present in the abort gap at the time of the kicker firing is lost inside the ring, rather than in the beam dump, and can potentially damage a number of the LHC components. CERN specifications indicate a linear density of 6 × 10^6 protons over a 100 ns interval as the maximum charge safely allowed to accumulate in the abort gap at 7 TeV. We present a study of an abort gap monitor, based on a photomultiplier tube with a gated microchannel plate, which would allow for detecting such low charge densities by monitoring the synchrotron radiation emitted in the dedicated diagnostics port. We show results of beam test experiments at the Advanced Light Source (ALS) using a Hamamatsu 5961U MCP-PMT, which indicate that such an instrument has the required sensitivity to meet LHC specifications.

7.
The Thermosiphon Cooling System of the ATLAS Experiment at the CERN Large Hadron Collider
CERN Document Server
Battistin, M; Bitadze, A; Bonneau, P; Botelho-Direito, J; Boyd, G; Corbaz, F; Crespo-Lopez, O; Da Riva, E; Degeorge, C; Deterre, C; DiGirolamo, B; Doubek, M; Favre, G; Godlewski, J; Hallewell, G; Katunin, S; Lefils, D; Lombard, D; McMahon, S; Nagai, K; Robinson, D; Rossi, C; Rozanov, A; Vacek, V; Zwalinski, L
2015-01-01
The silicon tracker of the ATLAS experiment at the CERN Large Hadron Collider will operate around −15°C to minimize the effects of radiation damage. The present cooling system is based on a conventional evaporative circuit, removing around 60 kW of heat dissipated by the silicon sensors and their local electronics. The compressors in the present circuit have proved less reliable than originally hoped, and will be replaced with a thermosiphon. The working principle of the thermosiphon uses gravity to circulate the coolant without any mechanical components (compressors or pumps) in the primary coolant circuit. The fluorocarbon coolant will be condensed at a temperature and pressure lower than those in the on-detector evaporators, but at a higher altitude, taking advantage of the 92 m height difference between the underground experiment and the services located on the surface. An extensive campaign of tests, detailed in this paper, was performed using two small-scale thermosiphon systems. These tests confirmed th...

8. Investigation of collimator materials for the High Luminosity Large Hadron Collider
CERN Document Server
AUTHOR|(CDS)2085459; Bertarelli, Alessandro; Redaelli, Stefano
This PhD thesis work has been carried out at the European Organisation for Nuclear Research (CERN), Geneva, Switzerland, in the framework of the High Luminosity (HL) upgrade of the Large Hadron Collider (LHC).
The HL-LHC upgrade will bring the accelerator beyond the nominal performance: it is planned to reach a stored beam energy of up to 700 MJ, through more intense proton beams. The present multi-stage LHC collimation system was designed to handle 360 MJ of stored beam energy and withstand realistic losses only for this nominal beam. Therefore, the challenging HL-LHC beam parameters pose strong concerns for beam collimation, which call for important upgrades of the present system. The objective of this thesis is to provide a solid basis for optimum choices of materials for the different collimators that will be upgraded for the baseline layout of the HL-LHC collimation system. To achieve this goal, material-related limitations of the present system are identified and novel advanced composite materials are se...

9. Leptonic signals from off-shell Z boson pairs at hadron colliders
International Nuclear Information System (INIS)
Zecher, C.; Matsuura, T.; Bij, J.J. van der
1994-04-01
We study the gluon fusion into pairs of off-shell Z bosons and their subsequent decay into charged lepton pairs at hadron colliders: gg → ZZ → 4l± (l±: charged lepton). Throughout this paper we do not restrict the intermediate state Z bosons to the narrow width approximation but allow for arbitrary invariant masses. We compare the strength of this process with the known leading order results for qq̄ → ZZ → 4l± and for gg → H → ZZ → 4l±. At LHC energies (√s = 14 TeV) the contribution from the gluon fusion background is around 20% of the contribution from quark-antiquark annihilation. These two processes do not form a severe irreducible background to the Higgs signal. At Higgs masses below 120 GeV the final state interference for the decay channel H → ZZ → 4μ± is increasingly constructive. This has no effect on the Higgs search, as in this mass region the signal remains too small. One can extend the intermediate mass Higgs search via off-shell Z boson pairs at the LHC down to about a 130 GeV Higgs mass.
However, careful study of the reducible background is needed for definite conclusions. (orig.)

10. Study of Drell-Yan process in CMS experiment at Large Hadron Collider
CERN Document Server
Jindal, Monika
The proton-proton collisions at the Large Hadron Collider (LHC) are the beginning of a new era in high energy physics. They enable the possibility of discoveries at the high-energy frontier and also allow the study of Standard Model physics with high precision. The new physics discoveries and the precision measurements can be achieved with highly efficient and accurate detectors like the Compact Muon Solenoid. In this thesis, we report the measurement of the differential production cross-section of the Drell-Yan process, $q\bar{q} \rightarrow Z/\gamma^{*} \rightarrow \mu^{+}\mu^{-}$, in proton-proton collisions at the center-of-mass energy $\sqrt{s} = 7$ TeV using the CMS experiment at the LHC. This measurement is based on the analysis of data corresponding to an integrated luminosity of $\int \mathcal{L}\,dt = 36.0 \pm 1.4$ pb$^{-1}$. The measurement of the production cross-section of the Drell-Yan process provides a first test of the Standard Model in a new energy domain and may reveal exotic physics processes. The Drell...

11. Precision measurements of W and Z boson production and their decays to electrons at hadron colliders
CERN Document Server
Ehlers, Jans H Hermann; Pauss, Felicitas
For many measurements at hadron colliders, such as cross sections and branching ratios, the uncertainty of the integrated luminosity is an important contribution to the error of the final result. In 1997, the ETH Zürich group proposed a new approach to determine the integrated luminosity via a counting measurement of the W and Z bosons through their decays to leptons. In this thesis this proposal has been applied to real data as well as to simulations for a future experiment.
The first part of this thesis describes a dedicated data analysis to precisely measure the luminosity at the CDF experiment at the Tevatron collider (USA) through the production of Z bosons and their decay to electrons. An integrated pp̄ luminosity of L_counting = 221.7 ± 2.8 (stat.) ± 11.2 (sys.) pb⁻¹ has been measured for the data-taking period from March 2002 to February 2004. This is in very good agreement with the traditional measurement at CDF of L_CLC ≈ 222.2 ± 12.9 pb⁻¹, using Cherenkov Luminosity Counters at large angles. Both mea...

12. High Luminosity Large Hadron Collider: A description for the European Strategy Preparatory Group
CERN Document Server
Rossi, L
2012-01-01
The Large Hadron Collider (LHC) is the largest scientific instrument ever built. It has been exploring the new energy frontier since 2009, gathering a global user community of 7,000 scientists. It will remain the most powerful accelerator in the world for at least two decades, and its full exploitation is the highest priority in the European Strategy for Particle Physics, adopted by the CERN Council and integrated into the ESFRI Roadmap. To extend its discovery potential, the LHC will need a major upgrade around 2020 to increase its luminosity (rate of collisions) by a factor of 10 beyond its design value. As a highly complex and optimized machine, such an upgrade of the LHC must be carefully studied and requires about 10 years to implement. The novel machine configuration, called High Luminosity LHC (HL-LHC), will rely on a number of key innovative technologies, representing exceptional technological challenges, such as cutting-edge 13 tesla superconducting magnets, very compact and ultra-precise superconduc...

13.
Studies of supersymmetry models for the ATLAS experiment at the Large Hadron Collider CERN Document Server Barr, A J 2002-01-01 This thesis demonstrates that supersymmetry can be discovered with the ATLAS experiment even if nature conspires to choose one of two rather difficult cases. In the first case, where baryon number is weakly violated, the lightest supersymmetric particle decays into three quarks. This leads to events with a very large multiplicity of jets, which presents a difficult combinatorial problem at a hadron collider. The distinctive property of the second class of model -- anomaly-mediation -- is the near degeneracy of the super-partners of the SU(2) weak bosons. The heavier charged wino decays, producing its invisible neutral partner, the presence of which must be inferred from the apparent non-conservation of transverse momentum, as well as secondary particle(s) with low transverse momentum which must be extracted from a large background. Monte-Carlo simulations are employed to show that for the models examined not only can the distinctive signature of the model be extracted, but also that a variety of measurements (... 14. Superconducting Magnet with the Reduced Barrel Yoke for the Hadron Future Circular Collider CERN Document Server Klyukhin, V.I.; Berriaud, C.; Curé, B.; Dudarev, A.; Gaddi, A.; Gerwig, H.; Hervé, A.; Mentink, M.; Rolando, G.; Pais Da Silva, H.F.; Wagner, U.; ten Kate, H. H. J. 2015-01-01 The conceptual design study of a hadron Future Circular Collider (FCC-hh) with a center-of-mass energy of the order of 100 TeV in a new tunnel of 80-100 km circumference assumes the determination of the basic requirements for its detectors. A superconducting solenoid magnet of 12 m diameter inner bore with a central magnetic flux density of 6 T is proposed for a FCC-hh experimental setup. The coil, 24.518 m long, has seven 3.5 m long modules included in one cryostat.
The steel yoke, with a mass of 21 kt, consists of two barrel layers of 0.5 m radial thickness and, on each side, a 0.7 m thick nose disk, four 0.6 m thick end-cap disks, and three 0.8 m thick muon toroid disks. The outer diameter of the yoke is 17.7 m; the length without the forward muon toroids is 33 m. The air gaps between the end-cap disks allow the installation of the muon chambers up to a pseudorapidity of ±3.5. The conventional forward muon spectrometer provides the measurement of the muon momenta in the pseudorapidity region from ±2.7... 15. Superconducting Magnet with the Minimum Steel Yoke for the Hadron Future Circular Collider Detector CERN Document Server Klyukhin, V I; Ball, A.; Curé, B.; Dudarev, A.; Gaddi, A.; Gerwig, H.; Mentink, M.; Da Silva, H. Pais; Rolando, G.; ten Kate, H. H. J.; Berriaud, C.P. 2016-01-01 The conceptual design study of a hadron Future Circular Collider (FCC-hh) with a center-of-mass energy of the order of 100 TeV in a new tunnel of 80-100 km circumference assumes the determination of the basic requirements for its detectors. A superconducting solenoid magnet of 12 m diameter inner bore with a central magnetic flux density of 6 T in combination with two superconducting dipole and two conventional toroid magnets is proposed for a FCC-hh experimental setup. The coil, 23.468 m long, has seven 3.35 m long modules included in one cryostat. The steel yoke, with a mass of 22.6 kt, consists of two barrel layers of 0.5 m radial thickness and, on each side, a 0.7 m thick nose disk and four 0.6 m thick end-cap disks. The maximum outer diameter of the yoke is 17.7 m; the length is 62.6 m. The air gaps between the end-cap disks allow the installation of the muon chambers up to a pseudorapidity of about ±2.7. The superconducting dipole magnets allow the measurement of the charged-particle momenta in the pseudora... 16.
The CERN Large Hadron Collider as a tool to study high-energy density matter CERN Document Server Tahir, N A; Gryaznov, V; Hoffmann, Dieter H H; Kain, V; Lomonosov, I V; Piriz, A R; Schmidt, R; Shutov, A; Temporal, M 2005-01-01 The Large Hadron Collider (LHC) at CERN will generate two extremely powerful 7 TeV proton beams. Each beam will consist of 2808 bunches with an intensity per bunch of 1.15×10^11 protons, so that the total number of protons in one beam will be about 3×10^14 and the total energy will be 362 MJ. Each bunch will have a duration of 0.5 ns and two successive bunches will be separated by 25 ns, while the power distribution in the radial direction will be Gaussian with a standard deviation σ = 0.2 mm. The total duration of the beam will be about 89 μs. Using a 2D hydrodynamic code, we have carried out numerical simulations of the thermodynamic and hydrodynamic response of a solid copper target that is irradiated with one of the LHC beams. These calculations show that only the first few hundred proton bunches will deposit a high specific energy of 400 kJ/g that will induce exotic states of high energy density in matter. 17. Calibration of the hadronic calorimeter prototype for a future lepton collider Energy Technology Data Exchange (ETDEWEB) Schroeder, Sarah; Garutti, Erika [Institute for Experimental Physics, Hamburg University, Luruper Chaussee 149, D-22761 Hamburg (Germany); Collaboration: CALICE-D-Collaboration 2016-07-01 The CALICE AHCAL technological prototype is a hadronic calorimeter prototype for a future e⁺e⁻ collider. It is designed as a sampling calorimeter alternating steel absorber plates and active readout layers, segmented into single plastic scintillator tiles of 3 × 3 × 0.3 cm³ volume. Each tile is individually coupled to a silicon photomultiplier, read out by a dedicated ASIC with energy measurement and time-stamping capability.
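As a quick aside, the beam-parameter arithmetic quoted in entry 16 above (2808 bunches of 1.15×10^11 protons at 7 TeV each, giving roughly 3×10^14 protons and 362 MJ per beam) can be verified in a few lines. The sketch uses only numbers from that abstract plus the elementary charge; variable names are my own.

```python
# Back-of-envelope check of the LHC beam parameters quoted in entry 16:
# 2808 bunches x 1.15e11 protons/bunch, each proton carrying 7 TeV.
EV_TO_JOULE = 1.602176634e-19  # elementary charge in coulombs = J per eV

bunches = 2808
protons_per_bunch = 1.15e11
proton_energy_ev = 7e12  # 7 TeV

protons_per_beam = bunches * protons_per_bunch              # ~3.2e14 protons
stored_energy_j = protons_per_beam * proton_energy_ev * EV_TO_JOULE

print(f"protons per beam: {protons_per_beam:.2e}")
print(f"stored energy:    {stored_energy_j / 1e6:.0f} MJ")  # ~362 MJ
```

The result, about 362 MJ per beam, matches the figure in the abstract and explains the concern about a solid target irradiated by even a fraction of the bunch train.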
The high granularity is meant to enable imaging and separation of single showers, for a Particle Flow approach to the jet energy measurement. The prototype aims to establish a scalable solution for an ILC detector. A total of 3456 calorimeter cells need to be inter-calibrated; for this, the response to muons is used. The calibration procedure is presented, and the statistical and systematic uncertainties are discussed, which have a direct impact on the constant term of the calorimeter energy resolution. Additionally, the MIP yield in number of fired SiPM pixels can be compared between the muon calibration and test-bench calibrations obtained using a Sr source on the single tiles before the assembly of the calorimeter. A good correlation would enable pre-calibration of the single channels on the test bench to be portable to the assembled detector. This hypothesis is checked in the present work. 18. A Possible 1.8 K Refrigeration Cycle for the Large Hadron Collider CERN Document Server Millet, F; Tavian, L; Wagner, U 1998-01-01 The Large Hadron Collider (LHC) under construction at the European Laboratory for Particle Physics, CERN, will make use of superconducting magnets operating below 2.0 K. This requires, for each of the eight future cryogenic installations, an isothermal cooling capacity of up to 2.4 kW obtained by vaporisation of helium II at 1.6 kPa and 1.8 K. The process design for this cooling duty has to satisfy several demands. It has to be adapted to four already existing as well as to four new refrigerators. It must cover a dynamic range of one to three, and it must allow continuous pump-down from 4.5 K to 1.8 K. A possible solution, as presented in this paper, includes a combination of cold centrifugal and warm volumetric compressors. It is characterised by a low thermal load on the refrigerator, and a large range of adaptability to different operation modes.
The expected power factor for 1.8 K cooling is given, and the proposed control strategy is explained. 19. Modelling of flexibles for structural analysis of short straight section of Large Hadron Collider International Nuclear Information System (INIS) Abhay Kumar; Dutta, Subhajit; Dwivedi, Jishnu; Soni, H.C. 2003-01-01 The Short Straight Section (SSS) of the Large Hadron Collider (LHC) is an 8-meter-long structure with a diameter of 1 meter, and it houses a twin quadrupole. The cryogens are fed to the SSS through a jumper connection between the Cryogenic Distribution Line (QRL) and the SSS. The bus bars travel through interconnection bellows to adjoining magnets. CAT is studying the structural behaviour of the cold mass and the cryostat when subjected to the various forces imposed on the SSS under the various operating conditions of the LHC machine, including the realignment required to compensate for local sinking of the tunnel floor during the machine's lifetime. Last year, after the experimental verification of the finite element model at CERN, CAT calculated the reaction forces and moments on the Short Straight Section due to the presence of the jumper connection. Subsequently, a unified FE model consisting of cold mass, cold feet, vacuum vessel, main vacuum vessel bellows (large sleeves), magnet interconnects, jumper connection, service module and precision motion jacks is being developed for studying the structural behaviour. (author) 20. High precision tools for slepton pair production processes at hadron colliders International Nuclear Information System (INIS) Thier, Stephan Christoph 2015-01-01 In this thesis, we develop high precision tools for the simulation of slepton pair production processes at hadron colliders and apply them to phenomenological studies at the LHC. Our approach is based on the POWHEG method for the matching of next-to-leading order results in perturbation theory to parton showers.
We calculate matrix elements for slepton pair production and for the production of a slepton pair in association with a jet perturbatively at next-to-leading order in supersymmetric quantum chromodynamics. Both processes are subsequently implemented in the POWHEG BOX, a publicly available software tool that contains general parts of the POWHEG matching scheme. We investigate phenomenological consequences of our calculations in several setups that respect experimental exclusion limits for supersymmetric particles and provide precise predictions for slepton signatures at the LHC. The inclusion of QCD emissions in the partonic matrix elements allows for an accurate description of hard jets. Interfacing our codes to the multi-purpose Monte-Carlo event generator PYTHIA, we simulate parton showers and slepton decays in fully exclusive events. Advanced kinematical variables and specific search strategies are examined as means for slepton discovery in experimentally challenging setups. 1. Correlation between magnetic field quality and mechanical components of the Large Hadron Collider main dipoles Energy Technology Data Exchange (ETDEWEB) Bellesia, B 2006-12-15 The 1234 superconducting dipoles of the Large Hadron Collider, working at a cryogenic temperature of 1.9 K, must guarantee a high quality magnetic field to steer the particles inside the beam pipe. Magnetic field measurements are a powerful way to detect assembly faults that could limit magnet performance. The aim of the thesis is the analysis of these measurements, performed at room temperature during the production of the dipoles. In a large-scale production, the ideal situation would be that all the magnets produced are identical. However, all the components constituting a magnet are produced within certain tolerances, and the assembly procedures are optimized during the production; because of this, reality drifts away from the ideal situation.
We collected geometrical data on the main components (superconducting cables, coil copper wedges and austenitic steel coil collars) and, coupling them with adequate electromagnetic models, reconstructed a multipolar field representation of the LHC dipoles, identifying their critical components and assembly procedures. This thesis is composed of three main parts: 1) the influence of the geometry and the assembly procedures of the dipoles on the quality of the magnetic field, 2) the use of measurements performed on the dipoles during assembly to solve production issues and to understand the behaviour of the coils, and 3) a theoretical study of the uncertain harmonic components of the magnetic field in order to assess the dipole production. 2. QCD and low-x physics at a Large Hadron electron Collider CERN Document Server Laycock, Paul 2012-01-01 The Large Hadron electron Collider (LHeC) is a proposed facility which will exploit the new world of energy and intensity offered by the LHC for electron-proton scattering, through the addition of a new electron accelerator. This contribution, which is derived from the draft CERN-ECFA-NuPECC Conceptual Design Report (due for release in 2012), addresses the expected impact of the LHeC precision and extended kinematic range for low Bjorken-x and diffractive physics, and detailed simulation studies and prospects for high precision QCD and electroweak fits. Numerous observables which are sensitive to the expected low-x saturation of the parton densities are explored. These include the inclusive electron-proton scattering cross section and the related structure functions $F_2$ and $F_L$, as well as exclusive processes such as deeply-virtual Compton scattering and quasi-elastic heavy vector meson production and diffractive virtual photon dissociation. With a hundred times the luminosity that was achieved at HERA, s... 3.
Final implementation, commissioning, and performance of embedded collimator beam position monitors in the Large Hadron Collider Science.gov (United States) Valentino, Gianluca; Baud, Guillaume; Bruce, Roderik; Gasior, Marek; Mereghetti, Alessio; Mirarchi, Daniele; Olexa, Jakub; Redaelli, Stefano; Salvachua, Belen; Valloni, Alessandra; Wenninger, Jorg 2017-08-01 During Long Shutdown 1, 18 Large Hadron Collider (LHC) collimators were replaced with a new design, in which beam position monitor (BPM) pick-up buttons are embedded in the collimator jaws. The BPMs provide a direct measurement of the beam orbit at the collimators, and can therefore be used to align the collimators more quickly than with the standard technique, which relies on feedback from beam losses. Online orbit measurements also allow for reducing operational margins in the collimation hierarchy placed specifically to cater for unknown orbit drifts, thereby decreasing the β* and increasing the luminosity reach of the LHC. In this paper, the results from the commissioning of the embedded BPMs in the LHC are presented. The data acquisition and control software architectures are reviewed. A comparison with the standard alignment technique is provided, together with a fill-to-fill analysis of the measured orbit in different machine modes, which will also be used to determine suitable beam interlocks for a tighter collimation hierarchy. 4. Advances in elementary particle physics with applied superconductivity. Contribution of superconducting technology to the CERN Large Hadron Collider accelerator International Nuclear Information System (INIS) Yamamoto, Akira 2011-01-01 The construction of the Large Hadron Collider (LHC) was started in 1994 and completed in 2008. The LHC consists of more than seven thousand superconducting magnets and cavities, which play an essential role in elementary particle physics at the energy frontier.
Since 2010, physics experiments at the new energy frontier have been carried out to investigate the history of, and elementary particle phenomena in, the early universe. The superconducting technology applied in the energy frontier physics experiments is briefly introduced. (author) 5. American superconductor technology to help CERN to explore the mysteries of matter: company's high temperature superconductor wire to be used in CERN's Large Hadron Collider CERN Multimedia 2003-01-01 American Superconductor Corporation has been selected by CERN to provide 14,000 meters of high temperature superconductor (HTS) wire for current lead devices that will be used in CERN's Large Hadron Collider (1 page). 6. QCD-resummation and non-minimal flavour-violation for supersymmetric particle production at hadron colliders International Nuclear Information System (INIS) Fuks, B. 2007-06-01 Cross sections for supersymmetric particle production at hadron colliders have been extensively studied in the past at leading order and also at next-to-leading order of perturbative QCD. The radiative corrections include large logarithms which have to be resummed to all orders in the strong coupling constant in order to get reliable perturbative results. In this work, we perform a first and extensive study of the resummation effects for supersymmetric particle pair production at hadron colliders. We focus on Drell-Yan-like slepton-pair and slepton-sneutrino associated production in minimal supergravity and gauge-mediated supersymmetry-breaking scenarios, and present accurate transverse-momentum and invariant-mass distributions, as well as total cross sections. In non-minimal supersymmetric models, novel effects of flavour violation may occur. In this case, the flavour structure in the squark sector cannot be directly deduced from the trilinear Yukawa couplings of the fermion and Higgs supermultiplets.
We perform a precise numerical analysis of the experimentally allowed parameter space in the case of minimal supergravity scenarios with non-minimal flavour violation, looking for regions allowed by low-energy, electroweak precision, and cosmological data. Leading order cross sections for the production of squarks and gauginos at hadron colliders are implemented in a flexible computer program, allowing us to study in detail the dependence of these cross sections on flavour violation. (author) 7. Beam losses from ultraperipheral nuclear collisions between ^{208}Pb^{82+} ions in the Large Hadron Collider and their alleviation Directory of Open Access Journals (Sweden) R. Bruce 2009-07-01 Full Text Available Electromagnetic interactions between colliding heavy ions at the Large Hadron Collider (LHC at CERN will give rise to localized beam losses that may quench superconducting magnets, apart from contributing significantly to the luminosity decay. To quantify their impact on the operation of the collider, we have used a three-step simulation approach, which consists of optical tracking, a Monte Carlo shower simulation, and a thermal network model of the heat flow inside a magnet. We present simulation results for the case of ^{208}Pb^{82+} ion operation in the LHC, with focus on the ALICE interaction region, and show that the expected heat load during nominal ^{208}Pb^{82+} operation is 40% above the quench level. This limits the maximum achievable luminosity. Furthermore, we discuss methods of monitoring the losses and possible ways to alleviate their effect. 8. VUV photoemission studies of candidate Large Hadron Collider vacuum chamber materials Directory of Open Access Journals (Sweden) R. 
Cimino 1999-06-01 Full Text Available In the context of future accelerators and, in particular, the beam vacuum of the Large Hadron Collider (LHC), a 27 km circumference proton collider to be built at CERN, VUV synchrotron radiation (SR) has been used to study both qualitatively and quantitatively candidate vacuum chamber materials. Emphasis is given to showing that angle- and energy-resolved photoemission is an extremely powerful tool to address important issues relevant to the LHC, such as the emission of electrons that contributes to the creation of an electron cloud which may cause serious beam instabilities and unmanageable heat loads on the cryogenic system. Here we present not only the measured photoelectron yields from the proposed materials, prepared on an industrial scale, but also the energy and in some cases the angular dependence of the emitted electrons when excited with either a white light (WL) spectrum, simulating that in the arcs of the LHC, or monochromatic light in the photon energy range of interest. The effects on the materials examined of WL irradiation and/or ion sputtering, simulating the SR and ion bombardment expected in the LHC, were investigated. The studied samples exhibited significant modifications, in terms of electron emission, when exposed to the WL spectrum from the BESSY Toroidal Grating Monochromator beam line. Moreover, annealing and ion bombardment also induce substantial changes to the surface, thereby indicating that such surfaces would not have a constant electron emission during machine operation. Such characteristics may be an important issue in defining the surface properties of the LHC vacuum chamber material and are presented in detail for the various samples analyzed. It should be noted that all the measurements presented here were recorded at room temperature, whereas the majority of the LHC vacuum system will be maintained at temperatures below 20 K.
The results cannot therefore be directly applied to these sections of the machine until... 9. Fundamental cavity impedance and longitudinal coupled-bunch instabilities at the High Luminosity Large Hadron Collider Directory of Open Access Journals (Sweden) P. Baudrenghien 2017-01-01 Full Text Available The interaction between beam dynamics and the radio frequency (rf) station in circular colliders is complex and can lead to longitudinal coupled-bunch instabilities at high beam currents. The excitation of the cavity higher order modes is traditionally damped using passive devices. But the wakefield developed at the cavity fundamental frequency falls in the frequency range of the rf power system and can, in theory, be compensated by modulating the generator drive. Such a regulation is the responsibility of the low-level rf (llrf) system that measures the cavity field (or beam current) and generates the rf power drive. The Large Hadron Collider (LHC) rf was designed for the nominal LHC parameter of 0.55 A DC beam current. At 7 TeV the synchrotron radiation damping time is 13 hours. Damping of the instability growth rates due to the cavity fundamental (400.789 MHz) can only come from the synchrotron tune spread (Landau damping) and will be very small (time constant of the order of 0.1 s). In this work, the ability of the present llrf compensation to prevent coupled-bunch instabilities with the planned High Luminosity LHC (HiLumi LHC) doubling of the beam current to 1.1 A DC is investigated. The paper's conclusions are based on the measured performance of the present llrf system. Models of the rf and llrf systems were developed at the LHC start-up. Following comparisons with measurements, the system was parametrized using these models. The parametric model then provides a more realistic estimation of the instability growth rates than an ideal model of the rf blocks.
With this modeling approach, the key rf settings can be varied around their set value, allowing for a sensitivity analysis (growth rate sensitivity to rf and llrf parameters). Finally, preliminary measurements from the LHC at 0.44 A DC are presented to support the conclusions of this work. 10. Heavy-ion collimation at the Large Hadron Collider. Simulations and measurements International Nuclear Information System (INIS) Hermes, Pascal Dominik 2016 The CERN Large Hadron Collider (LHC) stores and collides proton and ^{208}Pb^{82+} beams of unprecedented energy and intensity. Thousands of superconducting magnets, operated at 1.9 K, guide the very intense and energetic particle beams, which have a large potential for destruction. This implies the demand for a multi-stage collimation system to provide protection from beam-induced quenches or even hardware damage. In heavy-ion operation, ion fragments with significant rigidity offsets can still scatter out of the collimation system. When they irradiate the superconducting LHC magnets, the latter risk quenching (losing their superconducting properties). These secondary collimation losses can potentially impose a limitation on the stored heavy-ion beam energy. Therefore, their distribution in the LHC needs to be understood by sophisticated simulations. Such simulation tools must accurately simulate the particle motion of many different nuclides in the magnetic LHC lattice and simulate their interaction with the collimators. Previous simulation tools used simplified models for the simulation of particle-matter interaction and showed discrepancies compared to the measured loss patterns. This thesis describes the development and application of improved heavy-ion collimation simulation tools. Two different approaches are presented to provide these functionalities. In the first presented tool, called STIER, fragmentation at the primary collimator is simulated with the Monte-Carlo event generator FLUKA.
The ion fragments scattered out of the primary collimator are subsequently tracked as protons with ion-equivalent rigidities in the existing proton tracking tool SixTrack. This approach was used to prepare the collimator settings for the 2015 LHC heavy-ion run, and its predictions allowed reducing undesired losses. More accurate simulation results are obtained with the second presented simulation tool, in which SixTrack is extended to track arbitrary heavy ions. This new tracking... 12. Prompt D*+ production in proton-proton and lead-lead collisions, measured with the ALICE experiment at the CERN Large Hadron Collider NARCIS (Netherlands) de Rooij, R. S. 2013-01-01 In this thesis, the results are presented of the first measurements of the D*+ meson nuclear modification factor RAA in heavy-ion collisions at the Large Hadron Collider (LHC) using the ALICE (A Large Ion Collider Experiment) detector at CERN. These open charmed mesons are a useful tool to... 13. Reliability of the beam loss monitors system for the large hadron collider at CERN International Nuclear Information System (INIS) Guaglio, G. 2005-12-01 The energy stored in the Large Hadron Collider is unprecedented. The impact of the beam particles can cause severe damage to the superconducting magnets, resulting in significant downtime for repairs. The Beam Loss Monitors System (BLMS) detects the secondary particle shower of the lost beam particles and initiates the extraction of the beam before any serious damage to the equipment can occur. This thesis defines the BLMS specifications in terms of reliability. The main goal is the design of a system minimizing both the probability of not detecting a dangerous loss and the number of false alarms generated. The reliability theory and techniques utilized are described.
The prediction of the hazard rates, the testing procedures, the Failure Modes, Effects and Criticality Analysis and the Fault Tree Analysis have been used to provide an estimation of the probability of damaging a magnet, of the number of false alarms and of the number of generated warnings. The weakest components in the BLMS have been pointed out. The reliability figures of the BLMS have been calculated using a commercial software package (Isograph). The effect of the variation of the parameters on the obtained results has been evaluated with a sensitivity analysis. The reliability model has been extended by the results of radiation tests. Design improvements, like redundant optical transmission, have been implemented in an iterative process. The proposed system is compliant with the reliability requirements. The model uncertainties are given by the limited knowledge of the threshold levels of the superconducting magnets and of the locations of the losses along the ring. The implemented model allows modifications of the system, following measurement of the hazard rates during the LHC lifetime. It can also provide reference numbers to other accelerators that will implement similar technologies. (author) 14. Design and implementation of a crystal collimation test stand at the Large Hadron Collider International Nuclear Information System (INIS) Mirarchi, D.; Redaelli, S.; Scandale, W.; Hall, G. 2017-01-01 Future upgrades of the CERN Large Hadron Collider (LHC) demand improved cleaning performance of its collimation system. Very efficient collimation is required during regular operations at high intensities, because even a small amount of energy deposited on superconducting magnets can cause an abrupt loss of superconducting conditions (quench). The possibility to use a crystal-based collimation system represents an option for improving both cleaning performance and impedance compared to the present system.
Before relying on crystal collimation for the LHC, a demonstration under LHC conditions (energy, beam parameters, etc.) and a comparison against the present system is considered mandatory. Thus, a prototype crystal collimation system has been designed and installed in the LHC during the Long Shutdown 1 (LS1), to perform feasibility tests during the Run 2 at energies up to 6.5 TeV. The layout is suitable for operation with proton as well as heavy ion beams. In this paper, the design constraints and the solutions proposed for this test stand for feasibility demonstration of crystal collimation at the LHC are presented. The expected cleaning performance achievable with this test stand, as assessed in simulations, is presented and compared to that of the present LHC collimation system. The first experimental observation of crystal channeling in the LHC at the record beam energy of 6.5 TeV has been obtained in 2015 using the layout presented (Scandale et al., Phys Lett B 758:129, 2016). First tests to measure the cleaning performance of this test stand have been carried out in 2016 and the detailed data analysis is still on-going. (orig.) 16. Chromaticity decay due to superconducting dipoles on the injection plateau of the Large Hadron Collider Directory of Open Access Journals (Sweden) N. Aquilina 2012-03-01 It is well known that in a superconducting accelerator a significant chromaticity drift can be induced by the decay of the sextupolar component of the main dipoles. In this paper we give a brief overview of what was expected for the Large Hadron Collider (LHC) on the grounds of magnetic measurements of individual dipoles carried out during the production. According to this analysis, the decay time constants were of the order of 200 s: since the injection in the LHC starts at least 30 minutes after the magnets are at constant current, the dynamic correction of this effect was not considered to be necessary. The first beam measurements of chromaticity showed significant decay even after a few hours.
For this reason, a dynamic correction of decay on the injection plateau was implemented based on beam measurements. This means that during the injection plateau the sextupole correctors are powered with a varying current to cancel out the decay of the dipoles. This strategy has been implemented successfully. A similar phenomenon has been observed for the dependence of the decay amplitude on the powering history of the dipoles: according to magnetic measurements, also in this case time constants are of the order of 200 s and therefore no difference is expected between a one-hour and a ten-hour flattop. On the other hand, the beam measurements show a significant change of decay for these two conditions. For the moment there is no clue as to the origin of these discrepancies. We give a complete overview of the two effects, and the modifications that have been done to the field model parameters to be able to obtain a final chromaticity correction within a few units. 17. Design and implementation of a crystal collimation test stand at the Large Hadron Collider Energy Technology Data Exchange (ETDEWEB) Mirarchi, D.; Redaelli, S.; Scandale, W. [CERN, European Organization for Nuclear Research, Geneva 23 (Switzerland)]; Hall, G. [Imperial College, Blackett Laboratory, London (United Kingdom)] 2017-06-15 Future upgrades of the CERN Large Hadron Collider (LHC) demand improved cleaning performance of its collimation system. Very efficient collimation is required during regular operations at high intensities, because even a small amount of energy deposited on superconducting magnets can cause an abrupt loss of superconducting conditions (quench). The possibility to use a crystal-based collimation system represents an option for improving both cleaning performance and impedance compared to the present system. Before relying on crystal collimation for the LHC, a demonstration under LHC conditions (energy, beam parameters, etc.)
and a comparison against the present system is considered mandatory. Thus, a prototype crystal collimation system has been designed and installed in the LHC during the Long Shutdown 1 (LS1), to perform feasibility tests during the Run 2 at energies up to 6.5 TeV. The layout is suitable for operation with proton as well as heavy ion beams. In this paper, the design constraints and the solutions proposed for this test stand for feasibility demonstration of crystal collimation at the LHC are presented. The expected cleaning performance achievable with this test stand, as assessed in simulations, is presented and compared to that of the present LHC collimation system. The first experimental observation of crystal channeling in the LHC at the record beam energy of 6.5 TeV has been obtained in 2015 using the layout presented (Scandale et al., Phys Lett B 758:129, 2016). First tests to measure the cleaning performance of this test stand have been carried out in 2016 and the detailed data analysis is still on-going. (orig.) 18. Measured and simulated heavy-ion beam loss patterns at the CERN Large Hadron Collider Science.gov (United States) Hermes, P. D.; Bruce, R.; Jowett, J. M.; Redaelli, S.; Salvachua Ferrando, B.; Valentino, G.; Wollmann, D. 2016-05-01 The Large Hadron Collider (LHC) at CERN pushes forward to new regimes in terms of beam energy and intensity. In view of the combination of very energetic and intense beams together with sensitive machine components, in particular the superconducting magnets, the LHC is equipped with a collimation system to provide protection and intercept uncontrolled beam losses. Beam losses could cause a superconducting magnet to quench, or in the worst case, damage the hardware. The collimation system, which is optimized to provide a good protection with proton beams, has shown a cleaning efficiency with heavy-ion beams which is worse by up to two orders of magnitude. 
The reason for this reduced cleaning efficiency is the fragmentation of heavy-ion beams into isotopes with different mass-to-charge ratios because of the interaction with the collimator material. In order to ensure sufficient collimation performance in future ion runs, a detailed theoretical understanding of ion collimation is needed. The simulation of heavy-ion collimation must include processes in which 208Pb82+ ions fragment into dozens of new isotopes. The ions and their fragments must be tracked inside the magnetic lattice of the LHC to determine their loss positions. This paper gives an overview of physical processes important for the description of heavy-ion loss patterns. Loss maps simulated by means of the two tools ICOSIM [1,2] and the newly developed STIER (SixTrack with Ion-Equivalent Rigidities) are compared with experimental data measured during LHC operation. The comparison shows that the tool STIER is in better agreement. 19. Towards future circular colliders Science.gov (United States) Benedikt, Michael; Zimmermann, Frank 2016-09-01 The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) presently provides proton-proton collisions at a center-of-mass (c.m.) energy of 13 TeV. The LHC design was started more than 30 years ago, and its physics program will extend through the second half of the 2030's. The global Future Circular Collider (FCC) study is now preparing for a post-LHC project. The FCC study focuses on the design of a 100-TeV hadron collider (FCC-hh) in a new ~100 km tunnel. It also includes the design of a high-luminosity electron-positron collider (FCC-ee) as a potential intermediate step, and a lepton-hadron collider option (FCC-he). The scope of the FCC study comprises accelerators, technology, infrastructure, detectors, physics, concepts for worldwide data services, international governance models, and implementation scenarios.
Among the FCC core technologies figure 16-T dipole magnets, based on Nb3Sn superconductor, for the FCC-hh hadron collider, and a highly-efficient superconducting radiofrequency system for the FCC-ee lepton collider. Following the FCC concept, the Institute of High Energy Physics (IHEP) in Beijing has initiated a parallel design study for an e+e− Higgs factory in China (CEPC), which is to be succeeded by a high-energy hadron collider (SPPC). At present a tunnel circumference of 54 km and a hadron collider c.m. energy of about 70 TeV are being considered. After a brief look at the LHC, this article reports the motivation and the present status of the FCC study, some of the primary design challenges and R&D subjects, as well as the emerging global collaboration. 20. Colliders CERN Document Server Chou, Weiren 2014-01-01 The idea of colliding two particle beams to fully exploit the energy of accelerated particles was first proposed by Rolf Wideröe, who in 1943 applied for a patent on the collider concept and was awarded the patent in 1953. The first three colliders — AdA in Italy, CBX in the US, and VEP-1 in the then Soviet Union — came to operation about 50 years ago in the mid-1960s. A number of other colliders followed. Over the past decades, colliders defined the energy frontier in particle physics. Different types of colliders — proton–proton, proton–antiproton, electron–positron, electron–proton, electron-ion and ion-ion colliders — have played complementary roles in fully mapping out the constituents and forces in the Standard Model (SM). We are now at a point where all predicted SM constituents of matter and forces have been found, and all the latest ones were found at colliders. Colliders also play a critical role in advancing beam physics, accelerator research and technology development. It is timel... 1.
Design considerations for the semi-digital hadronic calorimeter (SDHCAL) for future leptonic colliders CERN Document Server Pingault, Antoine 2016-07-29 The first technological SDHCAL prototype having been successfully tested, a new phase of R&D, to validate completely the SDHCAL option for the International Linear Detector (ILD) project of the International Linear Collider (ILC), has started with the conception and the realisation of a new prototype. The new one is intended to host few but large active layers of the future SDHCAL. The new active layers, made of Glass Resistive Plate Chambers (GRPC) with sizes larger than 2 m^2 will be equipped with a new version of the electronic readout, fulfilling the requirements of the future ILD detector. The new GRPC are conceived to improve the homogeneity with a new gas distribution scheme. Finally the mechanical structure will be achieved using the electron beam welding technique. The progress realised will be presented and future steps will be discussed. 2. Design considerations for the semi-digital hadronic calorimeter (SDHCAL) for future leptonic colliders International Nuclear Information System (INIS) Pingault, A. 2016-01-01 The first technological SDHCAL prototype having been successfully tested, a new phase of R and D, to validate completely the SDHCAL option for the International Linear Detector (ILD) project of the International Linear Collider (ILC), has started with the conception and the realisation of a new prototype. The new one is intended to host few but large active layers of the future SDHCAL. The new active layers, made of Glass Resistive Plate Chambers (GRPC) with sizes larger than 2 m^2 will be equipped with a new version of the electronic readout, fulfilling the requirements of the future ILD detector. The new GRPC are conceived to improve the homogeneity with a new gas distribution scheme. Finally the mechanical structure will be achieved using the electron beam welding technique.
The progress realised will be presented and future steps will be discussed. 3. Determination of elemental impurities in polymer materials of electrical cables for use in safety systems of nuclear power plants and for data transfer in the Large Hadron Collider by instrumental neutron activation analysis Czech Academy of Sciences Publication Activity Database Kučera, Jan; Cabalka, M.; Ferencei, Jozef; Kubešová, Marie; Strunga, Vladimír 2016-01-01 Roč. 309, č. 3 (2016), s. 1341-1348 ISSN 0236-5731 R&D Projects: GA TA ČR TA02010218; GA MŠk(CZ) LM2011019 Institutional support: RVO:61389005 Keywords: instrumental neutron activation analysis * polymer materials * undesired elements * nuclear power plant * Large Hadron Collider Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 1.282, year: 2016 4. Hadrons in a highly granular silicon-tungsten electromagnetic calorimeter - Top quark production at the International Linear Collider International Nuclear Information System (INIS) Doublet, P. 2011-10-01 The International Linear Collider (ILC) is a proposed e+e− collider with a center-of-mass energy of 500 GeV or more, aimed at precision measurements, e.g. of a light Higgs boson that could be discovered soon at the Large Hadron Collider. Its detectors foresee the use of fine-grained calorimeters to achieve the desired accuracy. This thesis presents the study of the response to hadrons of a highly granular silicon-tungsten electromagnetic calorimeter (SiW ECAL), and the study of top quark pair production at the ILC. The SiW ECAL prototype developed by the CALICE collaboration was tested with beams of charged particles at FNAL in May and July 2008. After selecting single negatively charged pions entering the ECAL, its fine granularity is used to introduce a classification among four types of events, used to describe hadronic interactions.
Motivated by extra-dimensional models which may explain the A_FB^b LEP anomaly by modifying the couplings of third generation quarks to the Z boson, the semileptonic decay of the top quark is studied with a full simulation of the proposed ILD detector for the ILC at a center-of-mass energy of √(s) = 500 GeV and integrated luminosity L = 500 fb−1. Detector performances permit to reach efficiencies larger than 70% in finding those events with a purity larger than 95%. This translates into a relative accuracy of about 1% on both the left-right asymmetry of top production A_LR^{0,t} and the top forward-backward asymmetry A_FB^t with electrons polarized at 80% and no polarization of the positrons. The relative uncertainties in the left and right couplings of the top quark to the Z boson could be as good as 0.9% and 1.5%. (author) 5. A high-granularity scintillator hadronic-calorimeter with SiPM readout for a linear collider detector International Nuclear Information System (INIS) Andreev, V.; Balagura, V.; Bobchenko, B. 2004-01-01 We report upon the design, construction and operation of a prototype for a high-granularity tile hadronic calorimeter for a future international linear collider (ILC) detector. Scintillating tiles are read out via wavelength-shifting fibers which guide the scintillation light to a novel photodetector, the Silicon Photomultiplier. The prototype has been tested at DESY using a positron test beam. The results are compared with a reference prototype equipped with multichannel vacuum photomultipliers. Detector calibration, noise, linearity and stability are discussed, and the energy response in a 1-6 GeV positron beam is compared with simulation. The work presented serves to establish the application of SiPM for calorimetry, and leads to the choice of this device for the construction of a 1 m^3 calorimeter prototype for tests in hadron beams. (orig.) 6.
Probing electroweak gauge boson scattering with the ATLAS detector at the large hadron collider International Nuclear Information System (INIS) Anger, Philipp 2014-01-01 Electroweak gauge bosons as central components of the Standard Model of particle physics are well understood theoretically and have been studied with high precision at past and present collider experiments. The electroweak theory predicts the existence of a scattering process of these particles consisting of contributions from triple and quartic bosonic couplings as well as Higgs boson mediated interactions. These contributions are not separable in a gauge invariant way and are only unitarized in the case of a Higgs boson as it is described by the Standard Model. The process is tied to the electroweak symmetry breaking which introduces the longitudinal modes for the massive electroweak gauge bosons. A study of this interaction is also a direct verification of the local gauge symmetry as one of the fundamental axioms of the Standard Model. With the start of the Large Hadron Collider and after collecting proton-proton collision data with an integrated luminosity of 20.3 fb−1 at a center-of-mass energy of √(s) = 8 TeV with the ATLAS detector, first-ever evidence for this process could be achieved in the context of this work. A study of leptonically decaying W±W±jj, same-electric-charge diboson production in association with two jets resulted in an observation of the electroweak W±W±jj production with same electric charge of the W bosons, inseparably comprising W±W± → W±W± electroweak gauge boson scattering contributions, with a significance of 3.6 standard deviations. The measured production cross section is in agreement with the Standard Model prediction. In the course of a study for leptonically decaying WZ productions, methods for background estimation, the extraction of systematic uncertainties and cross section measurements were developed.
They were extended and applied to the WZjj final state whereof the purely electroweakly mediated contribution is intrinsically tied to the scattering of all Standard Model electroweak gauge bosons: W 7. Comparison of electric dipole moments and the Large Hadron Collider for probing CP violation in triple boson vertices CERN Document Server Jung, Sunghoon 2009-01-01 CP violation from physics beyond the Standard Model may reside in triple boson vertices of the electroweak theory. We review the effective theory description and discuss how CP violating contributions to these vertices might be discerned by electric dipole moments (EDM) or diboson production at the Large Hadron Collider (LHC). Despite triple boson CP violating interactions entering EDMs only at the two-loop level, we find that EDM experiments are generally more powerful than the diboson processes. To give example to these general considerations we perform the comparison between EDMs and collider observables within supersymmetric theories that have heavy sfermions, such that substantive EDMs at the one-loop level are disallowed. EDMs generally remain more powerful probes, and next-generation EDM experiments may surpass even the most optimistic assumptions for LHC sensitivities. 8. Large Hadron Collider at CERN: Beams generating high-energy-density matter. Science.gov (United States) Tahir, N A; Schmidt, R; Shutov, A; Lomonosov, I V; Piriz, A R; Hoffmann, D H H; Deutsch, C; Fortov, V E 2009-04-01 This paper presents numerical simulations that have been carried out to study the thermodynamic and hydrodynamic responses of a solid copper cylindrical target that is facially irradiated along the axis by one of the two Large Hadron Collider (LHC) 7 TeV/c proton beams. The energy deposition by protons in solid copper has been calculated using an established particle interaction and Monte Carlo code, FLUKA, which is capable of simulating all components of the particle cascades in matter, up to multi-TeV energies.
These data have been used as input to a sophisticated two-dimensional hydrodynamic computer code BIG2 that has been employed to study this problem. The prime purpose of these investigations was to assess the damage caused to the equipment if the entire LHC beam is lost at a single place. The FLUKA calculations show that the energy of protons will be deposited in solid copper within about 1 m assuming constant material parameters. Nevertheless, our hydrodynamic simulations have shown that the energy deposition region will extend to a length of about 35 m over the beam duration. This is due to the fact that the first few tens of bunches deposit sufficient energy that leads to high pressure that generates an outgoing radial shock wave. Shock propagation leads to continuous reduction in the density at the target center that allows the protons delivered in subsequent bunches to penetrate deeper and deeper into the target. This phenomenon has also been seen in the case of heavy-ion heated targets [N. A. Tahir, A. Kozyreva, P. Spiller, D. H. H. Hoffmann, and A. Shutov, Phys. Rev. E 63, 036407 (2001)]. This effect needs to be considered in the design of a sacrificial beam stopper. These simulations have also shown that the target is severely damaged and is converted into a huge sample of high-energy density (HED) matter. In fact, the inner part of the target is transformed into a strongly coupled plasma with fairly uniform physical conditions. This work, therefore, has 9. Comprehending particle production in proton+proton and heavy-ion collisions at the Large Hadron Collider International Nuclear Information System (INIS) Sahoo, Raghunath 2017-01-01 In the extreme conditions of temperature and energy density, nuclear matter undergoes a transition to a new phase, which is governed by partonic degrees of freedom. This phase is called Quark-Gluon Plasma (QGP). The transition to QGP phase was conjectured to take place in central nucleus-nucleus collisions.
With the advent of unprecedented collision energy at the Large Hadron Collider (LHC), at CERN, it has been possible to create energy densities higher than what was predicted by lattice QCD for a deconfinement transition 10. Electromagnetic Design and Optimization of Directivity of Stripline Beam Position Monitors for the High Luminosity Large Hadron Collider CERN Document Server Draskovic, Drasko; Jones, Owain Rhodri; Lefèvre, Thibaut; Wendt, Manfred 2015-01-01 This paper presents the preliminary electromagnetic design of a stripline Beam Position Monitor (BPM) for the High Luminosity program of the Large Hadron Collider (HL-LHC) at CERN. The design is fitted into a new octagonal shielded Beam Screen for the low-beta triplets and is optimized for high directivity. It also includes internal Tungsten absorbers, required to reduce the energy deposition in the superconducting magnets. The achieved broadband directivity in wakefield solver simulations presents significant improvement over the directivity of the current stripline BPMs installed in the LHC. 11. How hadron collider experiments contributed to the development of QCD: from hard-scattering to the perfect liquid Science.gov (United States) Tannenbaum, M. J. 2018-05-01 A revolution in elementary particle physics occurred during the period from the ICHEP1968 to the ICHEP1982 with the advent of the parton model from discoveries in Deeply Inelastic electron-proton Scattering at SLAC, neutrino experiments, hard-scattering observed in p+p collisions at the CERN ISR, the development of QCD, the discovery of the J/Ψ at BNL and SLAC and the clear observation of high transverse momentum jets at the CERN SPS p̄+p collider. These and other discoveries in this period led to the acceptance of QCD as the theory of the strong interactions.
The desire to understand nuclear physics at high density such as in neutron stars led to the application of QCD to this problem and to the prediction of a Quark-Gluon Plasma (QGP) in nuclei at high energy density and temperatures. This eventually led to the construction of the Relativistic Heavy Ion Collider (RHIC) at BNL to observe superdense nuclear matter in the laboratory. This article discusses how experimental methods and results which confirmed QCD at the first hadron collider, the CERN ISR, played an important role in experiments at the first heavy ion collider, RHIC, leading to the discovery of the QGP as a perfect liquid as well as discoveries at RHIC and the LHC which continue to the present day. 12. Integrated analysis of particle interactions at hadron colliders Report of research activities in 2010-2015 Energy Technology Data Exchange (ETDEWEB) Nadolsky, Pavel M. [Southern Methodist Univ., Dallas, TX (United States)] 2015-08-31 The report summarizes research activities of the project "Integrated analysis of particle interactions" at Southern Methodist University, funded by 2010 DOE Early Career Research Award DE-SC0003870. The goal of the project is to provide state-of-the-art predictions in quantum chromodynamics in order to achieve objectives of the LHC program for studies of electroweak symmetry breaking and new physics searches. We published 19 journal papers focusing on in-depth studies of proton structure and integration of advanced calculations from different areas of particle phenomenology: multi-loop calculations, accurate long-distance hadronic functions, and precise numerical programs. Methods for factorization of QCD cross sections were advanced in order to develop new generations of CTEQ parton distribution functions (PDFs), CT10 and CT14. These distributions provide the core theoretical input for multi-loop perturbative calculations by LHC experimental collaborations.
A novel "PDF meta-analysis" technique was invented to streamline applications of PDFs in numerous LHC simulations and to combine PDFs from various groups using multivariate stochastic sampling of PDF parameters. The meta-analysis will help to bring the LHC perturbative calculations to the new level of accuracy, while reducing computational efforts. The work on parton distributions was complemented by development of advanced perturbative techniques to predict observables dependent on several momentum scales, including production of massive quarks and transverse momentum resummation at the next-to-next-to-leading order in QCD. 13. The Evolution of the Control System for the Electromagnetic Calorimeter of the Compact Muon Solenoid Experiment at the Large Hadron Collider CERN Multimedia Holme, Oliver; Dissertori, Günther; Lustermann, Werner; Zelepoukine, Serguei 2011-01-01 This paper discusses the evolution of the Detector Control System (DCS) designed and implemented for the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) as well as the operational experience acquired during the LHC physics data taking periods of 2010 and 2011. The current implementation in terms of functionality and planned hardware upgrades are presented. Furthermore, a project for reducing the long-term software maintenance, including a year-long detailed analysis of the existing applications, is put forward and the current outcomes which have informed the design decisions for the next CMS ECAL DCS software generation are described. The main goals for the new version are to minimize external dependencies enabling smooth migration to new hardware and software platforms and to maintain the existing functionality whilst substantially reducing support and maintenance effort through homogenization, simplification and standardization of the contr... 14.
A Silicon Strip Detector for the Phase II High Luminosity Upgrade of the ATLAS Detector at the Large Hadron Collider CERN Document Server INSPIRE-00425747; McMahon, Stephen J 2015-01-01 ATLAS is a particle physics experiment at the Large Hadron Collider (LHC) that detects proton-proton collisions at a centre of mass energy of 14 TeV. The Semiconductor Tracker is part of the Inner Detector, implemented using silicon microstrip detectors with binary read-out, providing momentum measurement of charged particles with excellent resolution. The operation of the LHC and the ATLAS experiment started in 2010, with ten years of operation expected until major upgrades are needed in the accelerator and the experiments. The ATLAS tracker will need to be completely replaced due to the radiation damage and occupancy of some detector elements and the data links at high luminosities. These upgrades after the first ten years of operation are named the Phase-II Upgrade and involve a re-design of the LHC, resulting in the High Luminosity Large Hadron Collider (HL-LHC). This thesis presents the work carried out in the testing of the ATLAS Phase-II Upgrade electronic systems in the future strips tracker a... 15. ENLIGHT and other EU-funded projects in hadron therapy CERN Document Server Dosanjh, M; Meyer, R 2010-01-01 Following impressive results from early phase trials in Japan and Germany, there is a current expansion in European hadron therapy. This article summarises present European Union-funded projects for research and co-ordination of hadron therapy across Europe. Our primary focus will be on the research questions associated with carbon ion treatment of cancer, but these considerations are also applicable to treatments using proton beams and other light ions. The challenges inherent in this new form of radiotherapy require maximum interdisciplinary co-ordination. 
On the basis of its successful track record in particle and accelerator physics, the internationally funded CERN laboratories (otherwise known as the European Organisation for Nuclear Research) have been instrumental in promoting collaborations for research purposes in this area of radiation oncology. There will soon be increased opportunities for referral of patients across Europe for hadron therapy. Oncologists should be aware of these developments, whi... 16. Towards Future Circular Colliders CERN Document Server AUTHOR|(CDS)2108454; Zimmermann, Frank 2016-01-01 The Large Hadron Collider (LHC) at CERN presently provides proton-proton collisions at a centre-of-mass (c.m.) energy of 13 TeV. The LHC design was started more than 30 years ago, and its physics programme will extend through the second half of the 2030's. The global Future Circular Collider (FCC) study is now preparing for a post-LHC project. The FCC study focuses on the design of a 100-TeV hadron collider (FCC-hh) in a new ~100 km tunnel. It also includes the design of a high-luminosity electron-positron collider (FCC-ee) as a potential intermediate step, and a lepton-hadron collider option (FCC-he). The scope of the FCC study comprises accelerators, technology, infrastructure, detectors, physics, concepts for worldwide data services, international governance models, and implementation scenarios. Among the FCC core technologies figure 16-T dipole magnets, based on Nb3Sn superconductor, for the FCC-hh hadron collider, and a highly efficient superconducting radiofrequency system for the FCC-ee lepton c... 17.
Discovery and measurement of excited b hadrons at the Collider Detector at Fermilab Energy Technology Data Exchange (ETDEWEB) Pursley, Jennifer Marie [Johns Hopkins Univ., Baltimore, MD (United States)] 2007-08-01 This thesis presents evidence for the B**0 and Σb^(*)± hadrons in proton-antiproton collisions at a center of mass energy of 1.96 TeV, using data collected by the Collider Detector at Fermilab. In the search for B**0 → B±π, two B± decay modes are reconstructed: B± → J/ψK±, where J/ψ → μ+μ−, and B± → D̄0π±, where D̄0 → K±π±. Both modes are reconstructed using 370 ± 20 pb−1 of data. Combining the B± meson with a charged pion to reconstruct B**0 led to the observation and measurement of the masses of the two narrow B**0 states, B1^0 and B2^*0, of m(B1^0) = 5734 ± 3(stat.) ± 2(syst.) MeV/c^2; m(B2^*0) = 5738 ± 5(stat.) ± 1(syst.) MeV/c^2. In the search for Σb^(*)± → Λb^0 π±, the Λb^0 is reconstructed in the decay mode Λb^0 → Λc^+ π−, where Λc^+ → pK−π+, using 1070 ± 60 pb−1 of data. Upon combining the Λb^0 candidate with a charged pion, all four of the Σb^(*)± states are observed and their masses measured to be: m(Σb^+) = 5807.8 +2.0/−2.2(stat.) ± 1.7(syst.) MeV/c^2; m(Σb^−) = 5815.2 ± 1.0(stat.) ± 1.7(syst.) MeV/c^2; m(Σb^*+) = 5829.0 +1.6/−1.8(stat.) +1.7/−1.8(syst.) MeV/c^2; m(Σb^*−) = 5836.4 ± 2.0(stat.) +1.8/−1.7(syst.) MeV/c^2. This is the first observation of Σb^(*)± baryons. 18. Measurement of the production of hadrons composed of light quarks and of anti-nuclei at the Large Hadron Collider (original title: Messung der Produktion von aus leichten Quarks zusammengesetzten Hadronen und Anti-Kernen am Large Hadron Collider) CERN Document Server Kalweit, Alexander; Wambach, Jochen With the recording of the first collisions of the Large Hadron Collider (LHC) in November 2009, a new era in the domain of high energy and relativistic heavy-ion physics has started.
As one of the early observables which can be addressed, the measurement of light quark flavor production is presented in this thesis. Hadrons that consist only of u, d, and s quarks constitute the majority of the produced particles in pp and Pb–Pb collisions. Their measurement forms the basis for a detailed understanding of the collision and for answering the question of whether hadronic matter undergoes a phase transition to the deconfined quark-gluon plasma at high temperatures. The basics of ultra-relativistic heavy-ion physics are briefly introduced in the first chapter, followed by a short description of the ALICE experiment. A particular focus is put on the unique particle identification (PID) capabilities as they provide the basis of the measurements which are presented in the following chapters. The particle identification vi... 19. Large Area Silicon Tracking Detectors with Fast Signal Readout for the Large Hadron Collider (LHC) at CERN CERN Document Server Köstner, S 2005-01-01 The Standard Model of elementary particles, which is summarized briefly in the second chapter, incorporates a number of successful theories to explain the nature and consistency of matter. However, not all building blocks of this model have yet been tested by experiment. To confirm existing theories and to improve the present understanding of matter, a new machine is currently being built at CERN, the Large Hadron Collider (LHC), described in the third chapter. The LHC is a proton-proton collider which will reach unprecedented luminosities and center-of-mass energies. Five experiments are attached to it to give answers to questions like the existence of the Higgs boson, which would explain the mass content of matter, and the origin of CP-violation, which plays an important role in the baryogenesis of the universe. Supersymmetric theories, proposing a bosonic superpartner for each fermion and vice versa, will be tested.
By colliding heavy ions, high energy and particle densities can be achieved and probed. This stat... 20. Heavy flavor at the Large Hadron Collider in a strong coupling approach Energy Technology Data Exchange (ETDEWEB) He, Min [Department of Applied Physics, Nanjing University of Science and Technology, Nanjing 210094 (China); Fries, Rainer J.; Rapp, Ralf [Cyclotron Institute and Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843-3366 (United States) 2014-07-30 Employing nonperturbative transport coefficients for heavy-flavor (HF) diffusion through quark–gluon plasma (QGP), hadronization and hadronic matter, we compute D- and B-meson observables in Pb+Pb (√s_NN = 2.76 TeV) collisions at the LHC. Elastic heavy-quark scattering in the QGP is evaluated within a thermodynamic T-matrix approach, generating resonances close to the critical temperature which are utilized for recombination into D and B mesons, followed by hadronic diffusion using effective hadronic scattering amplitudes. The transport coefficients are implemented via Fokker–Planck Langevin dynamics within hydrodynamic simulations of the bulk medium in nuclear collisions. The hydro expansion is quantitatively constrained by transverse-momentum spectra and elliptic flow of light hadrons. Our approach thus incorporates the paradigm of a strongly coupled medium in both bulk and HF dynamics throughout the thermal evolution of the system. At low and intermediate pT, HF observables at the LHC are reasonably well accounted for, while discrepancies at high pT are indicative of radiative mechanisms not included in our approach. 1.
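The Fokker–Planck Langevin dynamics mentioned in the heavy-flavor record above can be caricatured in one dimension. In this sketch the drag coefficient, temperature, time step, and initial momentum are illustrative placeholders, and the noise strength is tied to the drag by a simple Einstein-type relation rather than the momentum-dependent T-matrix coefficients of the actual calculation:

```python
import math
import random

def langevin_step(p, mass, drag, temperature, dt, rng):
    """One Euler step of a 1-D relativistic Langevin update:
    dp = -drag * p * dt + sqrt(2 * drag * E * T * dt) * xi,
    with xi a unit Gaussian and E = sqrt(m^2 + p^2). The noise
    amplitude is fixed from the drag (fluctuation-dissipation)."""
    energy = math.sqrt(mass**2 + p**2)
    noise = math.sqrt(2.0 * drag * energy * temperature * dt) * rng.gauss(0.0, 1.0)
    return p - drag * p * dt + noise

# Illustrative evolution of a charm-like quark (energies/momenta in MeV).
rng = random.Random(42)
p = 5000.0  # initial momentum
for _ in range(10_000):
    p = langevin_step(p, mass=1500.0, drag=0.01, temperature=300.0, dt=0.1, rng=rng)
# After many relaxation times the drag has degraded the initial momentum
# toward a thermal spread of order sqrt(E * T).
```

Repeating the loop over an ensemble of test particles and binning the final momenta is what turns this update rule into the spectra and elliptic flow quoted in the abstract.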
A particle consistent with the Higgs Boson observed with the ATLAS detector at the Large Hadron Collider Czech Academy of Sciences Publication Activity Database Aad, G.; Abajyan, T.; Abbott, B.; Böhm, Jan; Chudoba, Jiří; Gallus, Petr; Gunther, Jaroslav; Havránek, Miroslav; Jakoubek, Tomáš; Juránek, Vojtěch; Kepka, Oldřich; Kupčo, Alexander; Kůs, Vlastimil; Lokajíček, Miloš; Marčišovský, Michal; Mikeštíková, Marcela; Myška, Miroslav; Němeček, Stanislav; Růžička, Pavel; Schovancová, Jaroslava; Šícho, Petr; Staroba, Pavel; Svatoš, Michal; Taševský, Marek; Tic, Tomáš; Vrba, Václav; Valenta, J.; Zeman, Martin 2012-01-01 Vol. 338, No. 6114 (2012), pp. 1576-1582 ISSN 0036-8075 R&D Projects: GA MŠk LA08032 Institutional support: RVO:68378271 Keywords: Higgs particle * mass * ATLAS * CERN LHC Coll * interpretation of experiments Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 31.027, year: 2012 2. A high granularity scintillator hadronic calorimeter with SiPM readout for a linear collider detector Czech Academy of Sciences Publication Activity Database Andreev, V.; Balagura, V.; Bobchenko, B.; Cvach, Jaroslav; Janata, Milan; Kacl, Ivan; Němeček, Stanislav; Polák, Ivo; Valkár, Š.; Weichert, Jan; Zálešák, Jaroslav 2005-01-01 Vol. 540 (2005), pp. 368-380 ISSN 0168-9002 R&D Projects: GA MŠk(CZ) LN00A006 Institutional research plan: CEZ:AV0Z10100502 Keywords: linear collider detector * analog calorimeter * semiconductor detectors * scintillator * high granularity Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 1.224, year: 2005 3. Confronting fragmentation function universality with single hadron inclusive production at HERA and e+e- colliders International Nuclear Information System (INIS) Albino, S.; Kniehl, B.A.; Kramer, G.; Sandoval, C.
2006-11-01 Predictions for light charged hadron production data in the current fragmentation region of deeply inelastic scattering from the H1 and ZEUS experiments are calculated using perturbative Quantum Chromodynamics at next-to-leading order, and using fragmentation functions obtained by fitting to similar data from e+e− reactions. Generally good agreement is found when the magnitude Q² of the hard photon's virtuality is sufficiently large. The discrepancy at low Q and small scaled momentum x_p is reduced by incorporating mass effects of the detected hadron. By performing quark tagging, the contributions to the overall fragmentation from the various quark flavours in the ep reactions are studied and compared to the contributions in e+e− reactions. The yields of the various hadron species are also calculated. (orig.) 4. Comment on "Polarized window for left-right symmetry and a right-handed neutrino at the Large Hadron-Electron Collider" Science.gov (United States) Queiroz, Farinaldo S. 2016-06-01 Reference [1] (S. Mondal and S. K. Rai, Phys. Rev. D 93, 011702 (2016)) recently argued that the projected Large Hadron Electron Collider (LHeC) presents a unique opportunity to discover a left-right symmetry, since the LHeC can provide polarized electrons. In particular, the authors apply some basic pT cuts on the jets and claim that the on-shell production of right-handed neutrinos at the LHeC, which violates lepton number by two units, has practically no standard model background and, therefore, that the right-handed nature of WR interactions that are intrinsic to left-right symmetric models can be confirmed by using colliding beams consisting of an 80% polarized electron and a 7 TeV proton. In this Comment, we show that their findings, as presented, have vastly underestimated the SM background, which prevents a left-right symmetry signal from being seen at the LHeC. 5.
Les Houches guidebook to Monte Carlo generators for hadron collider physics International Nuclear Information System (INIS) Dobbs, Matt A.; Frixione, Stefano; Laenen, Eric; Tollefson, Kirsten 2004-01-01 Recently the collider physics community has seen significant advances in the formalisms and implementations of event generators. This review is a primer of the methods commonly used for the simulation of high energy physics events at particle colliders. We provide brief descriptions, references, and links to the specific computer codes which implement the methods. The aim is to provide an overview of the available tools, allowing the reader to ascertain which tool is best for a particular application, but also making clear the limitations of each tool 6. Les Houches guidebook to Monte Carlo generators for hadron collider physics CERN Document Server Dobbs, M.A.; Laenen, Eric; Tollefson, K.; Baer, H.; Boos, E.; Cox, B.; Engel, R.; Giele, W.; Huston, J.; Ilyin, S.; Kersevan, B.; Krauss, F.; Kurihara, Y.; Lonnblad, L.; Maltoni, F.; Mangano, M.; Odaka, S.; Richardson, P.; Ryd, A.; Sjostrand, T.; Skands, Peter Z.; Was, Z.; Webber, B.R.; Zeppenfeld, D. 2005-01-01 Recently the collider physics community has seen significant advances in the formalisms and implementations of event generators. This review is a primer of the methods commonly used for the simulation of high energy physics events at particle colliders. We provide brief descriptions, references, and links to the specific computer codes which implement the methods. The aim is to provide an overview of the available tools, allowing the reader to ascertain which tool is best for a particular application, but also making clear the limitations of each tool. 7. 
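The event-generator methods surveyed in the Les Houches guidebook above ultimately rest on a few sampling primitives. The simplest is hit-or-miss (rejection) sampling, sketched here for an invented falling spectrum; the density, bounds, and sample size are illustrative and not taken from any particular generator:

```python
import math
import random

def rejection_sample(pdf, x_min, x_max, pdf_max, rng):
    """Draw one value from an unnormalized 1-D density by hit-or-miss:
    propose x uniformly on [x_min, x_max], accept with probability pdf(x)/pdf_max."""
    while True:
        x = rng.uniform(x_min, x_max)
        if rng.uniform(0.0, pdf_max) <= pdf(x):
            return x

# Illustrative falling spectrum, loosely shaped like a transverse-momentum tail.
spectrum = lambda pt: pt * math.exp(-pt / 2.0)

rng = random.Random(7)
pdf_max = 2.0 * math.exp(-1.0)  # maximum of pt*exp(-pt/2), attained at pt = 2
samples = [rejection_sample(spectrum, 0.0, 20.0, pdf_max, rng) for _ in range(5000)]
mean_pt = sum(samples) / len(samples)
```

Real generators layer importance sampling, phase-space mappings, and unweighting efficiency tricks on top of this idea, but the accept/reject loop is the common core.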
Simulation study of electron cloud induced instabilities and emittance growth for the CERN Large Hadron Collider proton beam CERN Document Server Benedetto, Elena; Schulte, Daniel; Rumolo, Giovanni 2005-01-01 The electron cloud may cause transverse single-bunch instabilities of proton beams such as those in the Large Hadron Collider (LHC) and the CERN Super Proton Synchrotron (SPS). We simulate these instabilities and the consequent emittance growth with the code HEADTAIL, which models the turn-by-turn interaction between the cloud and the beam. Recently, some new features were added to the code, in particular electrically conducting boundary conditions at the chamber wall, transverse feedback, and variable beta functions. The sensitivity to several numerical parameters has been studied by varying the number of interaction points between the bunch and the cloud, the phase advance between them, and the number of macroparticles used to represent the protons and the electrons. We present simulation results for both the LHC at injection and the SPS with an LHC-type beam, for different electron-cloud density levels, chromaticities, and bunch intensities. Two regimes with qualitatively different emittance growth are observed: above th... 8. Model-independent description and Large Hadron Collider implications of suppressed two-photon decay of a light Higgs boson International Nuclear Information System (INIS) Phalen, Daniel J.; Thomas, Brooks; Wells, James D. 2007-01-01 For a standard model Higgs boson with mass between 115 GeV and 150 GeV, the two-photon decay mode is important for discovery at the Large Hadron Collider (LHC). We describe the interactions of a light Higgs boson in a more model-independent fashion and consider the parameter space where there is no two-photon decay mode. We argue from generalities that analysis of the tth discovery mode outside its normally considered range of applicability is especially needed under these circumstances.
We demonstrate the general conclusion with a specific example of parameters of a type I two-Higgs doublet theory, motivated by ideas in strongly coupled model building. We then specify a complete set of branching fractions and discuss the implications for the LHC 9. A Particle Consistent with the Higgs Boson Observed with the ATLAS Detector at the Large Hadron Collider CERN Document Server Aad, Georges; Abbott, Brad; Abdallah, Jalal; Abdel Khalek, Samah; Abdelalim, Ahmed Ali; Abdinov, Ovsat; Aben, Rosemarie; Abi, Babak; Abolins, Maris; AbouZeid, Ossama; Abramowicz, Halina; Abreu, Henso; Acharya, Bobby Samir; Adamczyk, Leszek; Adams, David; Addy, Tetteh; Adelman, Jahred; Adomeit, Stefanie; Adragna, Paolo; Adye, Tim; Aefsky, Scott; Aguilar-Saavedra, Juan Antonio; Agustoni, Marco; Aharrouche, Mohamed; Ahlen, Steven; Ahles, Florian; Ahmad, Ashfaq; Ahsan, Mahsana; Aielli, Giulio; Akdogan, Taylan; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov, Andrei; Alam, Mohammad; Alam, Muhammad Aftab; Albert, Justin; Albrand, Solveig; Aleksa, Martin; Aleksandrov, Igor; Alessandria, Franco; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexopoulos, Theodoros; Alhroob, Muhammad; Aliev, Malik; Alimonti, Gianluca; Alison, John; Allbrooke, Benedict; Allport, Phillip; Allwood-Spiers, Sarah; Almond, John; Aloisio, Alberto; Alon, Raz; Alonso, Alejandro; Alonso, Francisco; Altheimer, Andrew David; Alvarez Gonzalez, Barbara; Alviggi, Mariagrazia; Amako, Katsuya; Amelung, Christoph; Ammosov, Vladimir; Amor Dos Santos, Susana Patricia; Amorim, Antonio; Amram, Nir; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Andrieux, Marie-Laure; Anduaga, Xabier; Angelidakis, Stylianos; Anger, Philipp; Angerami, Aaron; Anghinolfi, Francis; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonov, Alexey; Antos, Jaroslav; Anulli, 
Fabio; Aoki, Masato; Aoun, Sahar; Aperio Bella, Ludovica; Apolle, Rudi; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Arce, Ayana; Arfaoui, Samir; Arguin, Jean-Francois; Arik, Engin; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnal, Vanessa; Arnault, Christian; Artamonov, Andrei; Artoni, Giacomo; Arutinov, David; Asai, Shoji; Ask, Stefan; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astbury, Alan; Atkinson, Markus; Aubert, Bernard; Auge, Etienne; Augsten, Kamil; Aurousseau, Mathieu; Avolio, Giuseppe; Avramidou, Rachel Maria; Axen, David; Azuelos, Georges; Azuma, Yuya; Baak, Max; Baccaglioni, Giuseppe; Bacci, Cesare; Bach, Andre; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Backus Mayes, John; Badescu, Elisabeta; Bagnaia, Paolo; Bahinipati, Seema; Bai, Yu; Bailey, David; Bain, Travis; Baines, John; Baker, Oliver Keith; Baker, Mark; Baker, Sarah; Balek, Petr; Banas, Elzbieta; Banerjee, Piyali; Banerjee, Swagato; Banfi, Danilo; Bangert, Andrea Michelle; Bansal, Vikas; Bansil, Hardeep Singh; Barak, Liron; Baranov, Sergei; Barbaro Galtieri, Angela; Barber, Tom; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Bardin, Dmitri; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Barrillon, Pierre; Bartoldus, Rainer; Barton, Adam Edward; Bartsch, Valeria; Basye, Austin; Bates, Richard; Batkova, Lucia; Batley, Richard; Battaglia, Andreas; Battistin, Michele; Bauer, Florian; Bawa, Harinder Singh; Beale, Steven; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Bechtle, Philip; Beck, Hans Peter; Becker, Anne Kathrin; Becker, Sebastian; Beckingham, Matthew; Becks, Karl-Heinz; Beddall, Andrew; Beddall, Ayda; Bedikian, Sourpouhi; Bednyakov, Vadim; Bee, Christopher; Beemster, Lars; Begel, Michael; Behar Harpaz, Silvia; Behera, Prafulla; Beimforde, 
Michael; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, Lorenzo; Bellomo, Massimiliano; Belloni, Alberto; Beloborodova, Olga; Belotskiy, Konstantin; Beltramello, Olga; Benary, Odette; Benchekroun, Driss; Bendtz, Katarina; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez Garcia, Jorge-Armando; Benjamin, Douglas; Benoit, Mathieu; Bensinger, James; Benslama, Kamal; Bentvelsen, Stan; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Berglund, Elina; Beringer, Jürg; Bernat, Pauline; Bernhard, Ralf; Bernius, Catrin; Bernlochner, Florian Urs; Berry, Tracey; Bertella, Claudia; Bertin, Antonio; Bertolucci, Federico; Besana, Maria Ilaria; Besjes, Geert-Jan; Besson, Nathalie; Bethke, Siegfried; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianco, Michele; Biebel, Otmar; Bieniek, Stephen Paul; Bierwagen, Katharina; Biesiada, Jed; Biglietti, Michela; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Biscarat, Catherine; Bittner, Bernhard; Black, Kevin; Blair, Robert; Blanchard, Jean-Baptiste; Blanchot, Georges; Blazek, Tomas; Bloch, Ingo; Blocker, Craig; Blocki, Jacek; Blondel, Alain; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Boddy, Christopher Richard; Boehler, Michael; Boek, Jennifer; Boelaert, Nele; Bogaerts, Joannes Andreas; Bogdanchikov, Alexander; Bogouch, Andrei; Bohm, Christian; Bohm, Jan; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Bolnet, Nayanka Myriam; Bomben, Marco; Bona, Marcella; Boonekamp, Maarten; Bordoni, Stefania; Borer, Claudia; Borisov, Anatoly; Borissov, Guennadi; Borjanovic, Iris; Borri, Marcello; Borroni, Sara; Bortolotto, Valerio; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boterenbrood, Hendrik; Bouchami, Jihene; Boudreau, Joseph; Bouhova-Thacker, Evelina Vassileva; Boumediene, Djamel Eddine; Bourdarios, Claire; Bousson, Nicolas; Boveia, Antonio; Boyd, James; 
Boyko, Igor; Bozovic-Jelisavcic, Ivanka; Bracinik, Juraj; Branchini, Paolo; Brandenburg, George; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Brazzale, Simone Federico; Brelier, Bertrand; Bremer, Johan; Brendlinger, Kurt; Brenner, Richard; Bressler, Shikma; Britton, Dave; Brochu, Frederic; Brock, Ian; Brock, Raymond; Broggi, Francesco; Bromberg, Carl; Bronner, Johanna; Brooijmans, Gustaaf; Brooks, Timothy; Brooks, William; Brown, Gareth; Brown, Heather; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Brunet, Sylvie; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Buanes, Trygve; Buat, Quentin; Bucci, Francesca; Buchanan, James; Buchholz, Peter; Buckingham, Ryan; Buckley, Andrew; Buda, Stelian Ioan; Budagov, Ioulian; Budick, Burton; Büscher, Volker; Bugge, Lars; Bulekov, Oleg; Bundock, Aaron Colin; Bunse, Moritz; Buran, Torleiv; Burckhart, Helfried; Burdin, Sergey; Burgess, Thomas; Burke, Stephen; Busato, Emmanuel; Bussey, Peter; Buszello, Claus-Peter; Butler, Bart; Butler, John; Buttar, Craig; Butterworth, Jonathan; Buttinger, William; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calderini, Giovanni; Calfayan, Philippe; Calkins, Robert; Caloba, Luiz; Caloi, Rita; Calvet, David; Calvet, Samuel; Camacho Toro, Reina; Camarri, Paolo; Cameron, David; Caminada, Lea Michaela; Caminal Armadans, Roger; Campana, Simone; Campanelli, Mario; Canale, Vincenzo; Canelli, Florencia; Canepa, Anadi; Cantero, Josu; Cantrill, Robert; Capasso, Luciano; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capriotti, Daniele; Capua, Marcella; Caputo, Regina; Cardarelli, Roberto; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Bryan; Caron, Sascha; Carquin, Edson; Carrillo-Montoya, German D; Carter, Antony; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Cascella, Michele; Caso, Carlo; Castaneda Hernandez, Alfredo Martin; 
Castaneda-Miranda, Elizabeth; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Cataldi, Gabriella; Catastini, Pierluigi; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Cattani, Giordano; Caughron, Seth; Cavaliere, Viviana; Cavalleri, Pietro; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerqueira, Augusto Santiago; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chalupkova, Ina; Chan, Kevin; Chang, Philip; Chapleau, Bertrand; Chapman, John Derek; Chapman, John Wehrley; Chareyre, Eve; Charlton, Dave; Chavda, Vikash; Chavez Barajas, Carlos Alberto; Cheatham, Susan; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Shenjian; Chen, Xin; Chen, Yujiao; Cheng, Yangyang; Cheplakov, Alexander; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Cheung, Sing-Leung; Chevalier, Laurent; Chiefari, Giovanni; Chikovani, Leila; Childers, John Taylor; Chilingarov, Alexandre; Chiodini, Gabriele; Chisholm, Andrew; Chislett, Rebecca Thalatta; Chitan, Adrian; Chizhov, Mihail; Choudalakis, Georgios; Chouridou, Sofia; Christidi, Illectra-Athanasia; Christov, Asen; Chromek-Burckhart, Doris; Chu, Ming-Lee; Chudoba, Jiri; Ciapetti, Guido; Ciftci, Abbas Kenan; Ciftci, Rena; Cinca, Diane; Cindro, Vladimir; Ciocca, Claudia; Ciocio, Alessandra; Cirilli, Manuela; Cirkovic, Predrag; Citron, Zvi Hirsh; Citterio, Mauro; Ciubancan, Mihai; Clark, Allan G; Clark, Philip James; Clarke, Robert; Cleland, Bill; Clemens, Jean-Claude; Clement, Benoit; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coffey, Laurel; Cogan, Joshua Godfrey; Coggeshall, James; Cogneras, Eric; Colas, Jacques; Cole, Stephen; Colijn, Auke-Pieter; Collins, Neil; Collins-Tooth, Christopher; Collot, Johann; Colombo, Tommaso; Colon, German; Compostella, Gabriele; Conde Muiño, Patricia; Coniavitis, Elias; Conidi, Maria 
Chiara; Consonni, Sofia Maria; Consorti, Valerio; Constantinescu, Serban; Conta, Claudio; Conti, Geraldine; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cooper-Smith, Neil; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Côté, David; Courneyea, Lorraine; Cowan, Glen; Cowden, Christopher; Cox, Brian; Cranmer, Kyle; Crescioli, Francesco; Cristinziani, Markus; Crosetti, Giovanni; Crépé-Renaudin, Sabine; Cuciuc, Constantin-Mihai; Cuenca Almenar, Cristóbal; Cuhadar Donszelmann, Tulay; Curatolo, Maria; Curtis, Chris; Cuthbert, Cameron; Cwetanski, Peter; Czirr, Hendrik; Czodrowski, Patrick; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; D'Orazio, Alessia; Da Cunha Sargedas De Sousa, Mario Jose; Da Via, Cinzia; Dabrowski, Wladyslaw; Dafinca, Alexandru; Dai, Tiesheng; Dallapiccola, Carlo; Dam, Mogens; Dameri, Mauro; Damiani, Daniel; Danielsson, Hans Olof; Dao, Valerio; Darbo, Giovanni; Darlea, Georgiana Lavinia; Dassoulas, James; Davey, Will; Davidek, Tomas; Davidson, Nadia; Davidson, Ruth; Davies, Eleanor; Davies, Merlin; Davignon, Olivier; Davison, Adam; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; Daya-Ishmukhametova, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Cecco, Sandro; de Graat, Julien; De Groot, Nicolo; de Jong, Paul; De La Taille, Christophe; De la Torre, Hector; De Lorenzi, Francesco; de Mora, Lee; De Nooij, Lucie; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; De Zorzi, Guido; Dearnaley, William James; Debbe, Ramiro; Debenedetti, Chiara; Dechenaux, Benjamin; Dedovich, Dmitri; Degenhardt, James; Del Papa, Carlo; Del Peso, Jose; Del Prete, Tarcisio; Delemontex, Thomas; Deliyergiyev, Maksym; Dell'Acqua, Andrea; Dell'Asta, Lidia; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delpierre, Pierre; 
Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demirkoz, Bilge; Deng, Jianrong; Denisov, Sergey; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Devetak, Erik; Deviveiros, Pier-Olivier; Dewhurst, Alastair; DeWilde, Burton; Dhaliwal, Saminder; Dhullipudi, Ramasudhakar; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Donato, Camilla; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Luise, Silvestro; Di Mattia, Alessandro; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Diaz, Marco Aurelio; Diehl, Edward; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dindar Yagci, Kamile; Dingfelder, Jochen; Dinut, Florin; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; do Vale, Maria Aline Barros; Do Valle Wemans, André; Doan, Thi Kieu Oanh; Dobbs, Matt; Dobinson, Robert; Dobos, Daniel; Dobson, Ellie; Dodd, Jeremy; Doglioni, Caterina; Doherty, Tom; Doi, Yoshikuni; Dolejsi, Jiri; Dolenc, Irena; Dolezal, Zdenek; Dolgoshein, Boris; Dohmae, Takeshi; Donadelli, Marisilvia; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dos Anjos, Andre; Dotti, Andrea; Dova, Maria-Teresa; Dowell, John; Doxiadis, Alexander; Doyle, Tony; Dressnandt, Nandor; Dris, Manolis; Dubbert, Jörg; Dube, Sourabh; Duchovni, Ehud; Duckeck, Guenter; Duda, Dominik; Dudarev, Alexey; Dudziak, Fanny; Dührssen, Michael; Duerdoth, Ian; Duflot, Laurent; Dufour, Marc-Andre; Duguid, Liam; Dunford, Monica; Duran Yildiz, Hatice; Duxfield, Robert; Dwuznik, Michal; Dydak, Friedrich; Düren, Michael; Ebenstein, William; Ebke, Johannes; Eckweiler, Sebastian; Edmonds, Keith; Edson, William; Edwards, Clive; Edwards, Nicholas Charles; Ehrenfeld, Wolfgang; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Eisenhandler, Eric; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Katherine; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; 
Engelmann, Roderich; Engl, Albert; Epp, Brigitte; Erdmann, Johannes; Ereditato, Antonio; Eriksson, Daniel; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Esch, Hendrik; Escobar, Carlos; Espinal Curull, Xavier; Esposito, Bellisario; Etienne, Francois; Etienvre, Anne-Isabelle; Etzion, Erez; Evangelakou, Despoina; Evans, Hal; Fabbri, Laura; Fabre, Caroline; Fakhrutdinov, Rinat; Falciano, Speranza; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farley, Jason; Farooque, Trisha; Farrell, Steven; Farrington, Sinead; Farthouat, Philippe; Fassi, Farida; Fassnacht, Patrick; Fassouliotis, Dimitrios; Fatholahzadeh, Baharak; Favareto, Andrea; Fayard, Louis; Fazio, Salvatore; Febbraro, Renato; Federic, Pavol; Fedin, Oleg; Fedorko, Wojciech; Fehling-Kaschek, Mirjam; Feligioni, Lorenzo; Fellmann, Denis; Feng, Cunfeng; Feng, Eric; Fenyuk, Alexander; Ferencei, Jozef; Fernando, Waruna; Ferrag, Samir; Ferrando, James; Ferrara, Valentina; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, Andrea; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filthaut, Frank; Fincke-Keeler, Margret; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Gordon; Fisher, Matthew; Flechl, Martin; Fleck, Ivor; Fleckner, Johanna; Fleischmann, Philipp; Fleischmann, Sebastian; Flick, Tobias; Floderus, Anders; Flores Castillo, Luis; Flowerdew, Michael; Fonseca Martin, Teresa; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Fournier, Daniel; Fowler, Andrew; Fox, Harald; Francavilla, Paolo; Franchini, Matteo; Franchino, Silvia; Francis, David; Frank, Tal; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; Fratina, Sasa; French, Sky; Friedrich, Conrad; Friedrich, Felix; Froeschl, Robert; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fulsom, Bryan Gregory; Fuster, Juan; Gabaldon, 
Carolina; Gabizon, Ofir; Gadatsch, Stefan; Gadfort, Thomas; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Galhardo, Bruno; Gallas, Elizabeth; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Gan, KK; Gao, Yongsheng; Gaponenko, Andrei; Garberson, Ford; Garcia-Sciveres, Maurice; García, Carmen; García Navarro, José Enrique; Gardner, Robert; Garelli, Nicoletta; Garitaonandia, Hegoi; Garonne, Vincent; Gatti, Claudio; Gaudio, Gabriella; Gaur, Bakul; Gauthier, Lea; Gauzzi, Paolo; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Ge, Peng; Gecse, Zoltan; Gee, Norman; Geerts, Daniël Alphonsus Adrianus; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Gemmell, Alistair; Genest, Marie-Hélène; Gentile, Simonetta; George, Matthias; George, Simon; Gerlach, Peter; Gershon, Avi; Geweniger, Christoph; Ghazlane, Hamid; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giakoumopoulou, Victoria; Giangiobbe, Vincent; Gianotti, Fabiola; Gibbard, Bruce; Gibson, Adam; Gibson, Stephen; Gilchriese, Murdock; Gildemeister, Otto; Gillberg, Dag; Gillman, Tony; Gingrich, Douglas; Ginzburg, Jonatan; Giokaris, Nikos; Giordani, MarioPaolo; Giordano, Raffaele; Giorgi, Francesco Michelangelo; Giovannini, Paola; Giraud, Pierre-Francois; Giugni, Danilo; Giunta, Michele; Giusti, Paolo; Gjelsten, Børge Kile; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glazov, Alexandre; Glitza, Karl-Walter; Glonti, George; Goddard, Jack Robert; Godfrey, Jennifer; Godlewski, Jan; Goebel, Martin; Göpfert, Thomas; Goeringer, Christian; Gössling, Claus; Goldfarb, Steven; Golling, Tobias; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; González de la Hoz, Santiago; Gonzalez Parra, Garoe; Gonzalez Silva, Laura; Gonzalez-Sevilla, Sergio; Goodson, Jeremiah Jet; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorfine, Grant; Gorini, Benedetto; Gorini, Edoardo; 
Gorišek, Andrej; Gornicki, Edward; Gosdzik, Bjoern; Goshaw, Alfred; Gosselink, Martijn; Gostkin, Mikhail Ivanovitch; Gough Eschrich, Ivo; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Gozpinar, Serdar; Grabowska-Bold, Iwona; Grafström, Per; Grahn, Karl-Johan; Gramstad, Eirik; Grancagnolo, Francesco; Grancagnolo, Sergio; Grassi, Valerio; Gratchev, Vadim; Grau, Nathan; Gray, Heather; Gray, Julia Ann; Graziani, Enrico; Grebenyuk, Oleg; Greenshaw, Timothy; Greenwood, Zeno Dixon; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Griffiths, Justin; Grigalashvili, Nugzar; Grillo, Alexander; Grinstein, Sebastian; Gris, Philippe Luc Yves; Grishkevich, Yaroslav; Grivaz, Jean-Francois; Gross, Eilam; Grosse-Knetter, Joern; Groth-Jensen, Jacob; Grybel, Kai; Guest, Daniel; Guicheney, Christophe; Guillemin, Thibault; Guindon, Stefan; Gul, Umar; Gunther, Jaroslav; Guo, Bin; Guo, Jun; Gutierrez, Phillip; Guttman, Nir; Gutzwiller, Olivier; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haas, Stefan; Haber, Carl; Hadavand, Haleh Khani; Hadley, David; Haefner, Petra; Hahn, Ferdinand; Haider, Stefan; Hajduk, Zbigniew; Hakobyan, Hrachya; Hall, David; Haller, Johannes; Hamacher, Klaus; Hamal, Petr; Hamano, Kenji; Hamer, Matthias; Hamilton, Andrew; Hamilton, Samuel; Han, Liang; Hanagaki, Kazunori; Hanawa, Keita; Hance, Michael; Handel, Carsten; Hanke, Paul; Hansen, John Renner; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Peter Henrik; Hansson, Per; Hara, Kazuhiko; Hard, Andrew; Hare, Gabriel; Harenberg, Torsten; Harkusha, Siarhei; Harper, Devin; Harrington, Robert; Harris, Orin; Hartert, Jochen; Hartjes, Fred; Haruyama, Tomiyoshi; Harvey, Alex; Hasegawa, Satoshi; Hasegawa, Yoji; Hassani, Samira; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Anthony David; Hayakawa, Takashi; Hayashi, Takayasu; Hayden, Daniel; Hays, Chris; Hayward, Helen; 
Haywood, Stephen; Head, Simon; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heinemann, Beate; Heisterkamp, Simon; Helary, Louis; Heller, Claudio; Heller, Matthieu; Hellman, Sten; Hellmich, Dennis; Helsens, Clement; Henderson, Robert; Henke, Michael; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Hensel, Carsten; Henß, Tobias; Hernandez, Carlos Medina; Hernández Jiménez, Yesenia; Herrberg, Ruth; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hesketh, Gavin Grant; Hessey, Nigel; Higón-Rodriguez, Emilio; Hill, John; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirsch, Florian; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hohlfeld, Marc; Holder, Martin; Holmgren, Sven-Olof; Holy, Tomas; Holzbauer, Jenny; Hong, Tae Min; Hooft van Huysduynen, Loek; Horner, Stephan; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howard, Jacob; Howarth, James; Hristova, Ivana; Hrivnac, Julius; Hryn'ova, Tetiana; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Hu, Diedi; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huettmann, Antje; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Huhtinen, Mika; Hurwitz, Martina; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibbotson, Michael; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Idarraga, John; Iengo, Paolo; Igonkina, Olga; Ikegami, Yoichi; Ikeno, Masahiro; Iliadis, Dimitrios; Ilic, Nikolina; Ince, Tayfun; Inigo-Golfin, Joaquin; Ioannou, Pavlos; Iodice, Mauro; Iordanidou, Kalliopi; Ippolito, Valerio; Irles Quiles, Adrian; Isaksson, Charlie; Ishino, Masaya; Ishitsuka, Masaki; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Ivashin, Anton; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jackson, Brett; Jackson, John; Jackson, Paul; Jaekel, Martin; Jain, Vivek; Jakobs, Karl; Jakobsen, 
Sune; Jakoubek, Tomas; Jakubek, Jan; Jamin, David Olivier; Jana, Dilip; Jansen, Eric; Jansen, Hendrik; Jantsch, Andreas; Janus, Michel; Jarlskog, Göran; Jeanty, Laura; Jen-La Plante, Imai; Jennens, David; Jenni, Peter; Loevschall-Jensen, Ask Emil; Jež, Pavel; Jézéquel, Stéphane; Jha, Manoj Kumar; Ji, Haoshuang; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Shan; Jinnouchi, Osamu; Joergensen, Morten Dam; Joffe, David; Johansen, Marianne; Johansson, Erik; Johansson, Per; Johnert, Sebastian; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tim; Joram, Christian; Jorge, Pedro; Joshi, Kiran Daniel; Jovicevic, Jelena; Jovin, Tatjana; Ju, Xiangyang; Jung, Christian; Jungst, Ralph Markus; Juranek, Vojtech; Jussel, Patrick; Juste Rozas, Aurelio; Kabana, Sonja; Kaci, Mohammed; Kaczmarska, Anna; Kadlecik, Peter; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kajomovitz, Enrique; Kalinin, Sergey; Kalinovskaya, Lidia; Kama, Sami; Kanaya, Naoko; Kaneda, Michiru; Kaneti, Steven; Kanno, Takayuki; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kaplon, Jan; Kar, Deepak; Karagounis, Michael; Karakostas, Konstantinos; Karnevskiy, Mikhail; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasieczka, Gregor; Kass, Richard; Kastanas, Alex; Kataoka, Mayuko; Kataoka, Yousuke; Katsoufis, Elias; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kayl, Manuel; Kazama, Shingo; Kazanin, Vassili; Kazarinov, Makhail; Keeler, Richard; Keener, Paul; Kehoe, Robert; Keil, Markus; Kekelidze, George; Keller, John; Kenyon, Mike; Kepka, Oldrich; Kerschen, Nicolas; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Keung, Justin; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharchenko, Dmitri; Khodinov, Alexander; Khomich, Andrei; Khoo, Teng Jian; Khoriauli, Gia; Khoroshilov, Andrey; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kim, Hyeon Jin; Kim, Shinhong; 
Kimura, Naoki; Kind, Oliver; King, Barry; King, Matthew; King, Robert Steven Beaufoy; Kirk, Julie; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kitamura, Takumi; Kittelmann, Thomas; Kiuchi, Kenji; Kladiva, Eduard; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klemetti, Miika; Klier, Amit; Klimek, Pawel; Klimentov, Alexei; Klingenberg, Reiner; Klinger, Joel Alexander; Klinkby, Esben; Klioutchnikova, Tatiana; Klok, Peter; Klous, Sander; Kluge, Eike-Erik; Kluge, Thomas; Kluit, Peter; Kluth, Stefan; Kneringer, Emmerich; Knoops, Edith; Knue, Andrea; Ko, Byeong Rok; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Köneke, Karsten; König, Adriaan; Koenig, Sebastian; Köpke, Lutz; Koetsveld, Folkert; Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kogan, Lucy Anne; Kohlmann, Simon; Kohn, Fabian; Kohout, Zdenek; Kohriki, Takashi; Koi, Tatsumi; Kolachev, Guennady; Kolanoski, Hermann; Kolesnikov, Vladimir; Koletsou, Iro; Koll, James; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Kono, Takanori; Kononov, Anatoly; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kopeliansky, Revital; Koperny, Stefan; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostyukhin, Vadim; Kotov, Sergey; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, Jana; Kreiss, Sven; Krejci, Frantisek; Kretzschmar, Jan; Krieger, Nina; Krieger, Peter; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Kruker, Tobias; Krumnack, Nils; Krumshteyn, Zinovii; Kruse, Amanda; Kubota, Takashi; Kuday, Sinan; Kuehn, Susanne; Kugel, Andreas; Kuhl, Thorsten; Kuhn, Dietmar; Kukhtin, 
Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kummer, Christian; Kuna, Marine; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurata, Masakazu; Kurochkin, Yurii; Kus, Vlastimil; Kuwertz, Emma Sian; Kuze, Masahiro; Kvita, Jiri; Kwee, Regina; La Rosa, Alessandro; La Rotonda, Laura; Labarga, Luis; Labbe, Julien; Lablak, Said; Lacasta, Carlos; Lacava, Francesco; Lacey, James; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Laisne, Emmanuel; Lamanna, Massimo; Lambourne, Luke; Lampen, Caleb; Lampl, Walter; Lancon, Eric; Landgraf, Ulrich; Landon, Murrough; Lang, Valerie Susanne; Lange, Clemens; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Larner, Aimee; Lassnig, Mario; Laurelli, Paolo; Lavorini, Vincenzo; Lavrijsen, Wim; Laycock, Paul; Lazovich, Tomo; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Menedeu, Eve; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Michel; Legendre, Marie; Legger, Federica; Leggett, Charles; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Lemmer, Boris; Lendermann, Victor; Leney, Katharine; Lenz, Tatiana; Lenzen, Georg; Lenzi, Bruno; Leonhardt, Kathrin; Leontsinis, Stefanos; Lepold, Florian; Leroy, Claude; Lessard, Jean-Raphael; Lester, Christopher; Lester, Christopher Michael; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Lewis, Adrian; Lewis, George; Leyko, Agnieszka; Leyton, Michael; Li, Bo; Li, Haifeng; Li, Ho Ling; Li, Shu; Li, Xuefei; Liang, Zhijun; Liao, Hongbo; Liberti, Barbara; Lichard, Peter; Lichtnecker, Markus; Lie, Ki; Liebig, Wolfgang; Limbach, Christian; Limosani, Antonio; Limper, Maaike; Lin, Simon; Linde, Frank; Linnemann, James; Lipeles, Elliot; Lipniacka, Anna; Liss, Tony; Lissauer, David; Lister, 
Alison; Litke, Alan; Liu, Chuanlei; Liu, Dong; Liu, Hao; Liu, Jianbei; Liu, Kun; Liu, Lulu; Liu, Minghui; Liu, Yanwen; Livan, Michele; Livermore, Sarah; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Loddenkoetter, Thomas; Loebinger, Fred; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Lombardo, Vincenzo Paolo; Long, Jonathan; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; Loscutoff, Peter; Lo Sterzo, Francesco; Losty, Michael; Lou, Xinchou; Lounis, Abdenour; Loureiro, Karina; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Ludwig, Andreas; Ludwig, Dörthe; Ludwig, Inga; Ludwig, Jens; Luehring, Frederick; Luijckx, Guy; Lukas, Wolfgang; Luminari, Lamberto; Lund, Esben; Lund-Jensen, Bengt; Lundberg, Björn; Lundberg, Johan; Lundberg, Olof; Lundquist, Johan; Lungwitz, Matthias; Lynn, David; Lytken, Else; Ma, Hong; Ma, Lian Liang; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Machado Miguens, Joana; Mackeprang, Rasmus; Madaras, Ronald; Maddocks, Harvey Jonathan; Mader, Wolfgang; Maenner, Reinhard; Maeno, Tadashi; Mättig, Peter; Mättig, Stefan; Magnoni, Luca; Magradze, Erekle; Mahboubi, Kambiz; Mahlstedt, Joern; Mahmoud, Sara; Mahout, Gilles; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Mal, Prolay; Malaescu, Bogdan; Malecki, Pawel; Malecki, Piotr; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mameghani, Raphael; Mamuzic, Judita; Manabe, Atsushi; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Manfredini, Alessandro; Mangeard, Pierre-Simon; Manhaes de Andrade Filho, Luciano; Manjarres Ramos, Joany Andreina; Mann, Alexander; Manning, Peter; Manousakis-Katsikakis, Arkadios; 
Mansoulie, Bruno; Mapelli, Alessandro; Mapelli, Livio; March, Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marino, Christopher; Marroquim, Fernando; Marshall, Zach; Martens, Kalen; Marti, Lukas Fritz; Marti-Garcia, Salvador; Martin, Brian; Martin, Brian; Martin, Jean-Pierre; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martin-Haugh, Stewart; Martinez, Mario; Martinez Outschoorn, Verena; Martyniuk, Alex; Marx, Marilyn; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massaro, Graziano; Massol, Nicolas; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Matricon, Pierre; Matsunaga, Hiroyuki; Matsushita, Takashi; Mattravers, Carly; Maurer, Julien; Maxfield, Stephen; Mayne, Anna; Mazini, Rachid; Mazur, Michael; Mazzaferro, Luca; Mazzanti, Marcello; Mc Donald, Jeffrey; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McCubbin, Norman; McFarlane, Kenneth; Mcfayden, Josh; Mchedlidze, Gvantsa; Mclaughlan, Tom; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Mechtel, Markus; Medinnis, Mike; Meera-Lebbai, Razzak; Meguro, Tatsuma; Mehdiyev, Rashid; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Meloni, Federico; Mendoza Navas, Luis; Meng, Zhaoxia; Mengarelli, Alberto; Menke, Sven; Meoni, Evelin; Mercurio, Kevin Michael; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Merritt, Hayes; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Carsten; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Meyer, Thomas Christian; Michal, Sebastien; Micu, Liliana; Middleton, Robin; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Miller, Robert; Mills, Bill; Mills, Corrinne; Milov, Alexander; 
Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano Moya, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Mitrevski, Jovan; Mitsou, Vasiliki A; Mitsui, Shingo; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Moeller, Victoria; Mönig, Klaus; Möser, Nicolas; Mohapatra, Soumya; Mohr, Wolfgang; Moles-Valls, Regina; Molfetas, Angelos; Monk, James; Monnier, Emmanuel; Montejo Berlingen, Javier; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Moorhead, Gareth; Mora Herrera, Clemencia; Moraes, Arthur; Morange, Nicolas; Morel, Julien; Morello, Gianfranco; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morgenstern, Marcus; Morii, Masahiro; Morley, Anthony Keith; Mornacchi, Giuseppe; Morris, John; Morvaj, Ljiljana; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Mueller, Felix; Mueller, James; Mueller, Klemens; Müller, Thomas; Mueller, Timo; Muenstermann, Daniel; Munwes, Yonathan; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nackenhorst, Olaf; Nadal, Jordi; Nagai, Koichi; Nagai, Ryo; Nagano, Kunihiro; Nagarkar, Advait; Nagasaka, Yasushi; Nagel, Martin; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Nanava, Gizo; Napier, Austin; Narayan, Rohin; Nash, Michael; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Neal, Homer; Nechaeva, Polina; Neep, Thomas James; Negri, Andrea; Negri, Guido; Negrini, Matteo; Nektarijevic, Snezana; Nelson, Andrew; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neumann, Manuel; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newcomer, Mitchel; Newman, Paul; Nguyen Thi Hong, Van; Nickerson, Richard; Nicolaidou, Rosy; Nicquevert, Bertrand; Niedercorn, Francois; Nielsen, Jason; Nikiforou, Nikiforos; Nikiforov, Andriy; Nikolaenko, 
Vladimir; Nikolic-Audit, Irena; Nikolics, Katalin; Nikolopoulos, Konstantinos; Nilsen, Henrik; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nisius, Richard; Nobe, Takuya; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Norberg, Scarlet; Nordberg, Markus; Norton, Peter; Novakova, Jana; Nozaki, Mitsuaki; Nozka, Libor; Nugent, Ian Michael; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; O'Brien, Brendan Joseph; O'Neil, Dugan; O'Shea, Val; Oakes, Louise Beth; Oakham, Gerald; Oberlack, Horst; Ocariz, Jose; Ochi, Atsuhiko; Oda, Susumu; Odaka, Shigeru; Odier, Jerome; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Okamura, Wataru; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Olchevski, Alexander; Olivares Pino, Sebastian Andres; Oliveira, Miguel Alfonso; Oliveira Damazio, Denis; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Onofre, António; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlando, Nicola; Orlov, Iliya; Oropeza Barrera, Cristina; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Osuna, Carlos; Otero y Garzon, Gustavo; Ottersbach, John; Ouchrif, Mohamed; Ouellette, Eric; Ould-Saada, Farid; Ouraou, Ahmimed; Ouyang, Qun; Ovcharova, Ana; Owen, Mark; Owen, Simon; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Pagan Griso, Simone; Paganis, Efstathios; Pahl, Christoph; Paige, Frank; Pais, Preema; Pajchel, Katarina; Palacino, Gabriel; Paleari, Chiara; Palestini, Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panduro Vazquez, William; Pani, Priscilla; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Papadelis, Aras; Papadopoulou, Theodora; Paramonov, Alexander; Paredes Hernandez, Daniela; Park, Woochun; Parker, Andy; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pashapour, Shabnaz; 
Pasqualucci, Enrico; Passaggio, Stefano; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Patricelli, Sergio; Pauly, Thilo; Pecsy, Martin; Pedraza Lopez, Sebastian; Pedraza Morales, Maria Isabel; Peleganchuk, Sergey; Pelikan, Daniel; Peng, Haiping; Penning, Bjoern; Penson, Alexander; Penwell, John; Perantoni, Marcelo; Perez, Kerstin; Perez Cavalcanti, Tiago; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perez Reale, Valeria; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Perrodo, Pascal; Peshekhonov, Vladimir; Peters, Krisztian; Petersen, Brian; Petersen, Jorgen; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petschull, Dennis; Petteni, Michele; Pezoa, Raquel; Phan, Anna; Phillips, Peter William; Piacquadio, Giacinto; Picazio, Attilio; Piccaro, Elisa; Piccinini, Maurizio; Piec, Sebastian Marcin; Piegaia, Ricardo; Pignotti, David; Pilcher, James; Pilkington, Andrew; Pina, João Antonio; Pinamonti, Michele; Pinder, Alex; Pinfold, James; Pinto, Belmiro; Pizio, Caterina; Plamondon, Mathieu; Pleier, Marc-Andre; Plotnikova, Elena; Poblaguev, Andrei; Poddar, Sahill; Podlyski, Fabrice; Poggioli, Luc; Pohl, David-leon; Pohl, Martin; Polesello, Giacomo; Policicchio, Antonio; Polifka, Richard; Polini, Alessandro; Poll, James; Polychronakos, Venetios; Pomeroy, Daniel; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Portell Bueso, Xavier; Pospelov, Guennady; Pospisil, Stanislav; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Pozdnyakov, Valery; Prabhu, Robindra; Pralavorio, Pascal; Pranko, Aliaksandr; Prasad, Srivas; Pravahan, Rishiraj; Prell, Soeren; Pretzl, Klaus Peter; Price, Darren; Price, Joe; Price, Lawrence; Prieur, Damien; Primavera, Margherita; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; 
Proudfoot, James; Prudent, Xavier; Przybycien, Mariusz; Przysiezniak, Helenka; Psoroulas, Serena; Ptacek, Elizabeth; Pueschel, Elisa; Purdham, John; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qian, Jianming; Quadt, Arnulf; Quarrie, David; Quayle, William; Quinonez, Fernando; Raas, Marcel; Raddum, Silje; Radeka, Veljko; Radescu, Voica; Radloff, Peter; Rador, Tonguc; Ragusa, Francesco; Rahal, Ghita; Rahimi, Amir; Rahm, David; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Randle-Conde, Aidan Sean; Randrianarivony, Koloina; Rauscher, Felix; Rave, Tobias Christian; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Reinherz-Aronis, Erez; Reinsch, Andreas; Reisinger, Ingo; Rembser, Christoph; Ren, Zhongliang; Renaud, Adrien; Rescigno, Marco; Resconi, Silvia; Resende, Bernardo; Reznicek, Pavel; Rezvani, Reyhaneh; Richter, Robert; Richter-Was, Elzbieta; Ridel, Melissa; Rijpstra, Manouk; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Rios, Ryan Randy; Riu, Imma; Rivoltella, Giancesare; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Roe, Adam; Roe, Shaun; Røhne, Ole; Rolli, Simona; Romaniouk, Anatoli; Romano, Marino; Romeo, Gaston; Romero Adam, Elena; Rompotis, Nikolaos; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Anthony; Rose, Matthew; Rosenbaum, Gabriel; Rosenberg, Eli; Rosendahl, Peter Lundgaard; Rosenthal, Oliver; Rosselet, Laurent; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexander; Rozen, Yoram; Ruan, Xifeng; Rubbo, Francesco; Rubinskiy, Igor; Ruckstuhl, Nicole; Rud, Viacheslav; Rudolph, Christian; Rudolph, Gerald; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rumyantsev, Leonid; 
Rurikova, Zuzana; Rusakovich, Nikolai; Rutherfoord, John; Ruzicka, Pavel; Ryabov, Yury; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Saavedra, Aldo; Sadeh, Iftach; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Sakamoto, Hiroshi; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salek, David; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvachua Ferrando, Belén; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Samset, Björn Hallvard; Sanchez, Arturo; Sanchez Martinez, Victoria; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Tanya; Sandoval, Carlos; Sandstroem, Rikard; Sankey, Dave; Sansoni, Andrea; Santamarina Rios, Cibran; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Saraiva, João; Sarangi, Tapas; Sarkisyan-Grinbaum, Edward; Sarri, Francesca; Sartisohn, Georg; Sasaki, Osamu; Sasaki, Yuichi; Sasao, Noboru; Satsounkevitch, Igor; Sauvage, Gilles; Sauvan, Emmanuel; Sauvan, Jean-Baptiste; Savard, Pierre; Savinov, Vladimir; Savu, Dan Octavian; Sawyer, Lee; Saxon, David; Saxon, James; Sbarra, Carla; Sbrizzi, Antonio; Scannicchio, Diana; Scarcella, Mark; Schaarschmidt, Jana; Schacht, Peter; Schaefer, Douglas; Schäfer, Uli; Schaelicke, Andreas; Schaepe, Steffen; Schaetzel, Sebastian; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R~Dean; Schamov, Andrey; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schioppa, Marco; Schlenker, Stefan; Schmid, Peter; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitt, Sebastian; Schmitz, Martin; Schneider, Basil; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schorlemmer, Andre Lukas; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schram, Malachi; Schroeder, Christian; Schroer, Nicolai; Schultens, Martin Johannes; Schultes, Joachim; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, 
Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwegler, Philipp; Schwemling, Philippe; Schwienhorst, Reinhard; Schwierz, Rainer; Schwindling, Jerome; Schwindt, Thomas; Schwoerer, Maud; Sciolla, Gabriella; Scott, Bill; Searcy, Jacob; Sedov, George; Sedykh, Evgeny; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Sekula, Stephen; Selbach, Karoline Elfriede; Seliverstov, Dmitry; Sellden, Bjoern; Sellers, Graham; Seman, Michal; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Seuster, Rolf; Severini, Horst; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Sherman, Daniel; Sherwood, Peter; Shimizu, Shima; Shimojima, Makoto; Shin, Taeksu; Shiyakova, Maria; Shmeleva, Alevtina; Shochet, Mel; Short, Daniel; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Sicho, Petr; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silbert, Ohad; Silva, José; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simioni, Eduard; Simmons, Brinick; Simoniello, Rosa; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sircar, Anirvan; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skinnari, Louise Anastasia; Skottowe, Hugh Philip; Skovpen, Kirill; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Smakhtin, Vladimir; Smart, Ben; Smestad, Lillian; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Ben Campbell; Smith, Douglas; Smith, Kenway; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snow, Steve; Snow, Joel; Snyder, Scott; Sobie, Randall; Sodomka, Jaromir; Soffer, Abner; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Soldatov, Evgeny; Soldevila, Urmila; Solfaroli Camillocci, Elena; Solodkov, Alexander; Solovyanov, Oleg; 
Solovyev, Victor; Soni, Nitesh; Sopko, Vit; Sopko, Bruno; Sosebee, Mark; Soualah, Rachik; Soukharev, Andrey; Spagnolo, Stefania; Spanò, Francesco; Spearman, William Robert; Spighi, Roberto; Spigo, Giancarlo; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St Denis, Richard Dante; Stahlman, Jonathan; Stamen, Rainer; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stanescu-Bellu, Madalina; Stanitzki, Marcel Michael; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Staszewski, Rafal; Staude, Arnold; Stavina, Pavel; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stekl, Ivan; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stern, Sebastian; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, Stefan; Strachota, Pavel; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strang, Michael; Strauss, Emanuel; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Strong, John; Stroynowski, Ryszard; Stugu, Bjarne; Stumer, Iuliu; Stupak, John; Sturm, Philipp; Styles, Nicholas Adam; Soh, Dart-yin; Su, Dong; Subramania, Halasya Siva; Subramaniam, Rajivalochan; Succurro, Antonella; Sugaya, Yorihito; Suhr, Chad; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Susinno, Giancarlo; Sutton, Mark; Suzuki, Yu; Suzuki, Yuta; Svatos, Michal; Swedish, Stephen; Sykora, Ivan; Sykora, Tomas; Sánchez, Javier; Ta, Duc; Tackmann, Kerstin; Taffard, Anyes; Tafirout, Reda; Taiblum, Nimrod; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tamsett, Matthew; Tan, Kong Guan; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tanasijczuk, Andres Jorge; Tani, Kazutoshi; Tannoury, Nancy; Tapprogge, Stefan; Tardif, Dominique; Tarem, Shlomit; 
Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tassi, Enrico; Tatarkhanov, Mous; Tayalati, Yahya; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teinturier, Marthe; Teischinger, Florian Alfred; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Terada, Susumu; Terashi, Koji; Terron, Juan; Testa, Marianna; Teuscher, Richard; Therhaag, Jan; Theveneaux-Pelzer, Timothée; Thoma, Sascha; Thomas, Juergen; Thompson, Emily; Thompson, Paul; Thompson, Peter; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Thomson, Mark; Thong, Wai Meng; Thun, Rudolf; Tian, Feng; Tibbetts, Mark James; Tic, Tomáš; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tiouchichine, Elodie; Tipton, Paul; Tisserant, Sylvain; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tollefson, Kirsten; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tonoyan, Arshak; Topfel, Cyril; Topilin, Nikolai; Torchiani, Ingo; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tremblet, Louis; Tricoli, Alesandro; Trigger, Isabel Marian; Trilling, George; Trincaz-Duvoid, Sophie; Tripiana, Martin; Triplett, Nathan; Trischuk, William; Trocmé, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trzebinski, Maciej; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiakiris, Menelaos; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tua, Alan; Tudorache, Alexandra; Tudorache, Valentina; Tuggle, Joseph; Turala, Michal; Turecek, Daniel; Turk Cakir, Ilkay; Turlay, Emmanuel; Turra, Ruggero; Tuts, Michael; Tykhonov, Andrii; Tylmad, Maja; Tyndel, Mike; Tzanakos, George; Uchida, Kirika; Ueda, Ikuo; Ueno, 
Ryuichi; Ugland, Maren; Uhlenbrock, Mathias; Uhrmacher, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Unno, Yoshinobu; Urbaniec, Dustin; Urquijo, Phillip; Usai, Giulio; Uslenghi, Massimiliano; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vahsen, Sven; Valenta, Jan; Valentinetti, Sara; Valero, Alberto; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; Van Berg, Richard; Van Der Deijl, Pieter; van der Geer, Rogier; van der Graaf, Harry; Van Der Leeuw, Robin; van der Poel, Egge; van der Ster, Daniel; van Eldik, Niels; van Gemmeren, Peter; van Vulpen, Ivo; Vanadia, Marco; Vandelli, Wainer; Vanguri, Rami; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Vari, Riccardo; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vassilakopoulos, Vassilios; Vazeille, Francois; Vazquez Schroeder, Tamara; Vegni, Guido; Veillet, Jean-Jacques; Veloso, Filipe; Veness, Raymond; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinek, Elisabeth; Vinogradov, Vladimir; Virchaux, Marc; Virzi, Joseph; Vitells, Ofer; Viti, Michele; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vladoiu, Dan; Vlasak, Michal; Vogel, Adrian; Vokac, Petr; Volpi, Guido; Volpi, Matteo; Volpini, Giovanni; von der Schmitt, Hans; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorwerk, Volker; Vos, Marcel; Voss, Rudiger; Voss, Thorsten Tobias; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Wolfgang; Wagner, Peter; Wahlen, Helmut; Wahrmund, Sebastian; Wakabayashi, Jun; Walch, 
Shannon; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Waller, Peter; Walsh, Brian; Wang, Chiho; Wang, Fuquan; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Rui; Wang, Song-Ming; Wang, Tan; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Warsinsky, Markus; Washbrook, Andrew; Wasicki, Christoph; Watanabe, Ippei; Watkins, Peter; Watson, Alan; Watson, Ian; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Anthony; Waugh, Ben; Weber, Michele; Weber, Pavel; Webster, Jordan S; Weidberg, Anthony; Weigell, Philipp; Weingarten, Jens; Weiser, Christian; Wells, Phillippa; Wenaus, Torre; Wendland, Dennis; Weng, Zhili; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Werth, Michael; Wessels, Martin; Wetter, Jeffrey; Weydert, Carole; Whalen, Kathleen; Wheeler-Ellis, Sarah Jane; White, Andrew; White, Martin; White, Sebastian; Whitehead, Samuel Robert; Whiteson, Daniel; Whittington, Denver; Wicek, Francois; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wijeratne, Peter Alexander; Wildauer, Andreas; Wildt, Martin Andre; Wilhelm, Ivan; Wilkens, Henric George; Will, Jonas Zacharias; Williams, Eric; Williams, Hugh; Willis, William; Willocq, Stephane; Wilson, John; Wilson, Michael Galante; Wilson, Alan; Wingerter-Seez, Isabelle; Winkelmann, Stefan; Winklmeier, Frank; Wittgen, Matthias; Wollstadt, Simon Jakob; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wong, Wei-Cheng; Wooden, Gemma; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wraight, Kenneth; Wright, Michael; Wrona, Bozydar; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wulf, Evan; Wynne, Benjamin; Xella, Stefania; Xiao, Meng; Xie, Song; Xu, Chao; Xu, Da; Yabsley, Bruce; Yacoob, Sahal; Yamada, Miho; Yamaguchi, Hiroshi; Yamaguchi, Yohei; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamanaka, Takashi; Yamazaki, Takayuki; 
Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Un-Ki; Yang, Yi; Yang, Zhaoyu; Yanush, Serguei; Yao, Liwen; Yao, Yushu; Yasu, Yoshiji; Ybeles Smit, Gabriel Valentijn; Ye, Jingbo; Ye, Shuwei; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Riktura; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Youssef, Saul; Yu, Dantong; Yu, Jaehoon; Yu, Jie; Yuan, Li; Yurkewicz, Adam; Byszewski, Marcin; Zabinski, Bartlomiej; Zaidan, Remi; Zaitsev, Alexander; Zajacova, Zuzana; Zanello, Lucia; Zanzi, Daniele; Zaytsev, Alexander; Zeitnitz, Christian; Zeman, Martin; Zemla, Andrzej; Zendler, Carolin; Zenin, Oleg; Ženiš, Tibor; Zinonos, Zinonas; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhang, Dongliang; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Long; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Ning; Zhou, Yue; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhuravlov, Vadym; Zieminska, Daria; Zimin, Nikolai; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Ziolkowski, Michael; Zitoun, Robert; Živković, Lidija; Zmouchko, Viatcheslav; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zutshi, Vishnu; Zwalinski, Lukasz 2012-01-01 Nearly 50 years ago, theoretical physicists proposed that a field permeates the universe and gives energy to the vacuum. This field was required to explain why some, but not all, fundamental particles have mass. Numerous precision measurements during recent decades have provided indirect support for the existence of this field, but one crucial prediction of this theory has remained unconfirmed despite 30 years of experimental searches: the existence of a massive particle, the standard model Higgs boson. The ATLAS experiment at the Large Hadron Collider at CERN has now observed the production of a new particle with a mass of 126 giga–electron volts and decay signatures consistent with those expected for the Higgs particle. 
This result is strong support for the standard model of particle physics, including the presence of this vacuum field. The existence and properties of the newly discovered particle may also have consequences beyond the standard model itself.
10. Optimising charged Higgs boson searches at the Large Hadron Collider across bb̄W± final states
Directory of Open Access Journals (Sweden)
Stefano Moretti
2016-09-01
Full Text Available In the light of the most recent data from Higgs boson searches and analyses, we re-assess the scope of the Large Hadron Collider in accessing heavy charged Higgs boson signals in bb̄W± final states, wherein the contributing channels can be H+ → tb̄, hW±, HW± and AW±. We consider a 2-Higgs-Doublet Model Type-II and assume as production mode bg → tH− + c.c., the dominant one over the range M_H± ≥ 480 GeV, as dictated by b → sγ constraints. Prospects of detection are found to be significant for various Run 2 energy and luminosity options.
11. Quench protection test results and comparative simulations on the first 10 meter prototype dipoles for the Large Hadron Collider
International Nuclear Information System (INIS)
Rodriguez-Mateos, F.; Gerin, G.; Marquis, A.
1996-01-01
The first 10-meter-long dipole prototypes made by European industry within the framework of the R&D program for the Large Hadron Collider (LHC) have been tested at CERN. As part of the test program, a series of quench protection tests has been carried out to qualify the basic protection scheme foreseen for the LHC dipoles (quench heaters and cold diodes). Results are presented on the quench heater performance and on the maximum temperatures and voltages observed during quenches under the so-called machine conditions. Moreover, an update of the quench simulation package specially developed at CERN (QUABER 2) has recently been made. Details on this new version of QUABER are given.
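Tools such as QUABER solve coupled electro-thermal equations during a quench; the core adiabatic hot-spot estimate behind such simulations can be sketched as below. All material fits and numbers here are invented, copper-like placeholders, not values from the paper or from QUABER itself.

```python
import math

def hotspot_temperature(i0_a, tau_s, area_m2, t0_k=10.0, dt=1e-4):
    """Adiabatic hot-spot estimate for an exponential current decay
    I(t) = I0 * exp(-t / tau): all Joule heat rho(T) * J(t)**2 stays in
    the cable, which warms up with volumetric heat capacity c(T)."""
    def rho(T):
        # toy copper-like resistivity fit, ohm*m (invented numbers)
        return 2e-10 + 6.8e-13 * T

    def cvol(T):
        # toy volumetric heat capacity fit, J/(m^3 K) (invented numbers)
        return 1e3 + 3.0 * T**2 if T < 50.0 else 8.5e3 + 30.0 * T

    T, t = t0_k, 0.0
    while t < 10.0 * tau_s:          # integrate well past the decay
        j = i0_a * math.exp(-t / tau_s) / area_m2   # current density, A/m^2
        T += rho(T) * j * j / cvol(T) * dt          # adiabatic heating step
        t += dt
    return T
```

A slower current decay deposits a larger integral of J² dt and therefore a higher hot-spot temperature, which is why quench heaters that spread the normal zone and speed up the decay are central to the protection scheme.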
Simulation runs have been made specifically to validate the model with the results from the measurements on quench protection mentioned above.
12. Cryogenic testing of by-pass diode stacks for the superconducting magnets of the large hadron collider at CERN
International Nuclear Information System (INIS)
Della Corte, A.; Catitti, A.; Chiarelli, S.; Di Ferdinando, E.; Verdini, L.; Gharib, A.; Hagedorn, D.; Turtu, S.; Basile, G. L.; Taddia, G.; Talli, M.; Viola, R.
2002-01-01
A dedicated facility prepared by ENEA (Italian Agency for Energy and Environment) for the cryogenic testing of by-pass diodes for the protection of the CERN Large Hadron Collider main magnets will be described. This experimental activity is in the frame of a contract awarded to OCEM, an Italian firm active in the field of electronic devices and power supplies, in collaboration with ENEA, for the manufacture and testing of all the diode stacks. In particular, CERN requests the measurement of the reverse and forward voltage diode characteristics at 300 K and 77 K, and endurance test cycles at liquid helium temperature. The experimental set-up at ENEA and the data acquisition system developed for the scope will be described and the test results reported.
13. Field quality in low-β superconducting quadrupoles and impact on the beam dynamics for the Large Hadron Collider upgrade
Directory of Open Access Journals (Sweden)
Boris Bellesia
2007-06-01
Full Text Available A possible scenario for the luminosity upgrade of the Large Hadron Collider is based on large aperture quadrupoles to lower $\beta^{*}$ in the interaction regions. Here we analyze the measurements relative to the field quality of the RHIC and LHC superconducting quadrupoles to find out the dependence of field errors on the size of the magnet aperture. Data are interpreted in the framework of a Monte Carlo analysis giving the reproducibility in the coil positioning reached in each production.
We show that this precision is likely to be independent of the magnet aperture. Using this result, we can carry out an estimate of the impact of the field quality on the beam dynamics for the collision optics.
14. ECFA study week on instrumentation technology for high-luminosity hadron colliders. Proceedings. Vol. 1 and 2
International Nuclear Information System (INIS)
Fernandez, E.; Jarlskog, G.
1989-01-01
The main aim of the present ECFA Study Week on 'Instrumentation Technology for High Luminosity Hadron Colliders' was to review the progress made after the La Thuile Workshop (1987) and to critically evaluate which of the detection methods and data handling structures could be suitable for luminosities in the $10^{34}\,\mathrm{cm^{-2}\,s^{-1}}$ range. The Study Week was sponsored by the Universitat Autonoma de Barcelona, the Comision Interministerial Ciencia y Tecnologia of Spain, CERN, and the Commission of the European Communities. It attracted 220 participants, including 35 from industry and good representation from groups planning experiments at the SSC. The various conveners gathered many excellent and original contributions, which led to intense discussions. Subjects covered include the use of scintillating fibres; silicon, gaseous, and crystal detectors; particle identification; readout and data acquisition systems. A separate session dealt with the contributions of industry to this kind of research. (orig.)
15. Calculation of abort thresholds for the Beam Loss Monitoring System of the Large Hadron Collider at CERN
CERN Document Server
Nemcic, Martin; Dehning, Bernd
The Beam Loss Monitoring (BLM) System is one of the most critical machine protection systems for the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN), Switzerland. Its main purpose is to protect the superconducting magnets from quenches and other equipment from damage by requesting a beam abort when the measured losses exceed any of the predefined threshold levels.
The system consists of circa 4000 ionization chambers which are installed around the 27-kilometre LHC ring. This study aims to choose a technical platform and produce a system that addresses all of the limitations of the current system used for the calculation of the LHC BLM abort threshold values. To achieve this, a comparison and benchmarking of the Java and .NET technical platforms is performed in order to establish the most suitable solution. To establish which technical platform is a successful replacement of the current abort threshold calculator, comparable prototype systems in Java and .NET we...
16. A particle consistent with the Higgs boson observed with the ATLAS detector at the large hadron collider
International Nuclear Information System (INIS)
Aad, G.; Ahles, F.; Barber, T.; Bernhard, R.; Boehler, M.; Bruneliere, R.; Christov, A.; Consorti, V.; Fehling-Kaschek, M.; Flechl, M.; Hartert, J.; Herten, G.; Horner, S.; Jakobs, K.; Janus, M.; Kononov, A.I.; Kuehn, S.; Lai, S.; Landgraf, U.; Lohwasser, K.; Ludwig, I.; Ludwig, J.; Mahboubi, K.; Mohr, W.; Nilsen, H.; Parzefall, U.; Rammensee, M.; Rave, T.C.; Rurikova, Z.; Schmidt, E.; Schumacher, M.; Siegert, F.; Stoerig, K.; Sundermann, J.E.; Temming, K.K.; Thoma, S.; Tsiskaridze, V.; Venturi, M.; Vivarelli, I.; Radziewski, H.
von; Vu Anh, T.; Warsinsky, M.; Weiser, C.; Werner, M.; Wiik-Fuchs, L.A.M.; Winkelmann, S.; Xie, S.; Zimmermann, S.; Abreu, H.; Bachacou, H.; Bauer, F.; Besson, N.; Blanchard, J.B.; Bolnet, N.M.; Boonekamp, M.; Chevalier, L.; Ernwein, J.; Etienvre, A.I.; Formica, A.; Gauthier, L.; Giraud, P.F.; Guyot, C.; Hassani, S.; Kozanecki, W.; Lancon, E.; Laporte, J.F.; Legendre, M.; Maiani, C.; Mal, P.; Manjarres Ramos, J.A.; Mansoulie, B.; Meyer, J.P.; Mijovic, L.; Morange, N.; Nguyen Thi Hong, V.; Nicolaidou, R.; Ouraou, A.; Resende, B.; Royon, C.R.; Schoeffel, L.; Schune, Ph.; Schwindling, J.; Simard, O.; Vranjes, N.; Xiao, M.; Abdel Khalek, S.; Andari, N.; Arnault, C.; Auge, E.; Barrillon, P.; Benoit, M.; Binet, S.; Bourdarios, C.; De La Taille, C.; De Vivie De Regie, J.B.; Duflot, L.; Escalier, M.; Fayard, L.; Fournier, D.; Grivaz, J.F.; Guillemin, T.; Henrot-Versille, S.; Hrivnac, J.; Iconomidou-Fayard, L.; Idarraga, J.; Kado, M.; Lorenzo Martinez, N.; Lounis, A.; Makovec, N.; Matricon, P.; Niedercorn, F.; Poggioli, L.; Puzo, P.; Renaud, A.; Rousseau, D.; Rybkin, G.; Sauvan, J.B.; Schaarschmidt, J.; Schaffer, A.C.; Serin, L.; Simion, S.; Tanaka, R.; Teinturier, M.; Veillet, J.J.; Wicek, F.; Zerwas, D.; Zhang, Z.; Abajyan, T.; Arutinov, D.; Backhaus, M.; Barbero, M.; Bechtle, P.; Brock, I.; Cristinziani, M.; Davey, W.; Desch, K.; Dingfelder, J.; Gaycken, G.; Geich-Gimbel, Ch.; Glatzer, J.; Gonella, L.; Haefner, P.; Havranek, M.; Hellmich, D.; Hillert, S.; Huegging, F.; Karagounis, M.; Khoriauli, G.; Koevesarki, P.; Kostyukhin, V.V.; Kraus, J.K.; Kroseberg, J.; Kruger, H.; Lapoire, C.; Lehmacher, M.; Leyko, A.M.; Limbach, C.; Loddenkoetter, T.; Mazur, M.; Moser, N.; Mueller, K.; Nanava, G.; Nattermann, T.; Nuncio-Quiroz, A.E.; Pohl, D.; Psoroulas, S.; Schaepe, S.; Schmieden, K.; Schmitz, M.; Schultens, M.J.; Schwindt, T.; Stillings, J.A.; Therhaag, J.; Tsung, J.W.; Uchida, K.; Uhlenbrock, M.; Urquijo, P.; Vogel, A.; Toerne, E. 
von; Wang, T.; Wermes, N.; Wienemann, P.; Zendler, C.; Zimmermann, R.; Zimmermann, S.; Abbott, B.; Gutierrez, P.; Jana, D.K.; Marzin, A.; Meera-Lebbai, R.; Norberg, S.; Saleem, M.; Severini, H.; Skubic, P.; Snow, J.; Strauss, M.
2012-01-01
Nearly 50 years ago, theoretical physicists proposed that a field permeates the universe and gives energy to the vacuum. This field was required to explain why some, but not all, fundamental particles have mass. Numerous precision measurements during recent decades have provided indirect support for the existence of this field, but one crucial prediction of this theory has remained unconfirmed despite 30 years of experimental searches: the existence of a massive particle, the standard model Higgs boson. The ATLAS experiment at the Large Hadron Collider at CERN has now observed the production of a new particle with a mass of 126 giga-electron volts and decay signatures consistent with those expected for the Higgs particle. This result is strong support for the standard model of particle physics, including the presence of this vacuum field. The existence and properties of the newly discovered particle may also have consequences beyond the standard model itself. (authors)
17. Determination of AC Characteristics of Superconducting Dipole Magnets in the Large Hadron Collider Based on Experimental Results and Simulations
CERN Document Server
Ambjørndalen, Sara; Verweij, Arjan
The Large Hadron Collider (LHC) utilizes high-field superconducting Main Dipole Magnets that bend the trajectory of the beam. The LHC ring is electrically divided into eight octants, each allocating a 7 km chain of 154 Main Dipole Magnets. Dedicated detection and protection systems prevent irreversible magnet damage caused by quenches. Quench is a local transition from the superconducting to the normal conducting state. Triggering of such systems, along with other failure scenarios, results in fast transient phenomena.
In order to analyze the consequence of such electrical transients and failures in the dipole chain, one needs a circuit model that is validated against measurements. Currently, there exists an equivalent circuit of the Main Dipole Magnet resolved at an aperture level. Each aperture model takes into account the dynamic effects occurring in the magnets, through a lossy-inductance model and parasitic capacitances to ground. At low frequencies the Main Dipole Magnet behaves as a linear inductor. Ca...
18. Commissioning and First Operation of the Low-Beta Triplets and Their Electrical Feed Boxes at the Large Hadron Collider
CERN Document Server
Darve, C; Casas-Cubillos, J; Claudet, S; Feher, S; Ferlin, G; Kerby, J; Metral, L; Perin, A; Peterson, T; Prin, H; Rabehl, R; Vauthier, N; Wagner, U; van Weelderen, R
2010-01-01
The insertion regions located around the four interaction points of the Large Hadron Collider (LHC) are mainly composed of the low-β triplets, the separation dipoles and their respective electrical feed-boxes (DFBX). The low-β triplets are Nb-Ti superconductor quadrupole magnets, which operate at 215 T/m in superfluid helium at a temperature of 1.9 K. The commissioning and the first operation of these components have been performed. The thermo-mechanical behavior of the low-β triplets and DFBX were studied. Cooling and control systems were tuned to optimize the cryogenic operation of the insertion regions. Hardware commissioning also permitted testing of the system response. This paper summarizes the performance results and the lessons learned.
19. Supersymmetry searches in events with at least four leptons using the ATLAS detector at the Large Hadron Collider
CERN Document Server
AUTHOR|(SzGeCERN)732150
This thesis presents a search for supersymmetry using the dataset taken by ATLAS at the Large Hadron Collider with $\sqrt{s}=8$ TeV during 2012.
Events with four or more leptons are selected and required to satisfy additional kinematic criteria that define optimised signal regions. These criteria are chosen to reject the majority of events produced by Standard Model processes, whilst retaining a large fraction of events produced by a variety of proposed supersymmetry scenarios. The expected number of Standard Model events is estimated using a combination of Monte Carlo and data-driven methods, the predictions of which are tested against data in specifically designed validation regions. No significant deviations from the Standard Model estimations are observed within statistical and systematic uncertainties. Exclusion limits are then set at 95% confidence level (CL) on a wide range of R-parity conserving and R-parity violating supersymmetry simplified models, as well as models of general gauge mediated s...
20. Trends in Cable Magnetization and Persistent Currents during the Production of the Main Dipoles of the Large Hadron Collider
CERN Document Server
Bellesia, B; Granata, V; Le Naour, S; Oberli, L; Sanfilippo, S; Santoni, C; Scandale, Walter; Schwerg, N; Todesco, Ezio; Völlinger, C
2005-01-01
The production of more than 60% of superconducting cables for the main dipoles of the Large Hadron Collider has been completed. The results of the measurements of cable magnetization and the dependence on the manufacturers are presented. The strand magnetization produces field errors that have been measured in a large number of dipoles (approximately 100 to date) tested in cold conditions. We examine here the correlation between the available magnetic measurements and the large database of cable magnetization. The analysis is based on models documented elsewhere in the literature. Finally, a forecast of the persistent current effects to be expected in the LHC main dipoles is presented, and the more critical parameters for beam dynamics are singled out.
https://physics.stackexchange.com/tags/dimensional-analysis/new | # Tag Info
0
The notion of dimension is well-defined for a vector space. In Newtonian physics, you define time as a scalar, so it doesn't make sense to give it a dimension. In general, though, time should be seen as the component of a vector (think about a time interval). It is neither a scalar nor a vector. You can talk about the dimension of the vector itself, but not ...
4
It's because the circumference of a circle with radius $r$ is $2\pi r$. It's easier to see with velocities, so I'll stick to that for now. If you want to know the tangential velocity of a particle rotating around some point, you have to divide the distance it travels by the time it takes to do that. At $1\,\mathrm{rpm}$, the distance traveled in $1\,\mathrm{...
0
$t$ contains both a unit AND a number. You can't use a unit to cancel both a number and a unit. You don't know what unit you are going to put into the variable ahead of time (seconds vs hours, meters vs km, etc). And even if you did know, you can't cancel only the unit part of $t$ while leaving the number part of $t$ in the equation. That would leave you ...
1
You need to think in terms of quantities. It looks like you are using some equation for distance without acceleration: $$x-x_0=vt$$ where $x$ and $x_0$ are quantities called "position" (specifically at time $t$ and $0$) and will have units of distance. $v$ is a quantity called "velocity" and has unit of distance/time, and $t$ is a ...
3
You cannot cancel it because the s in 4 m/s is a unit (seconds) and t is a variable (time), which is measured in seconds. So the units cancel, but not the numeric value. Assume $t = 2\,\mathrm{s}$, then you get $4\,\mathrm{m/s} \cdot 2\,\mathrm{s} = 8\,\mathrm{m}$.
0
Essentially you are right in your last statement. Let's define $t_{1s} \equiv 1\ {\rm s}$. Then your question amounts to asking whether this equation is true for any $t$: $$\frac{t}{t_{1s}} \stackrel{?}{=} 1$$ The answer is, no, this equation is not always true. It is only true if $t=1\,{\rm s}$.
0
The regularization issue that you mention is exactly where the solution of the problem lies. Path integrals are always formally divergent. In order to get a physically meaningful result, it is always necessary to take a ratio of two path integral expressions. For the case of a thermal partition function, what you normally want is the ratio of the partition ...
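The unit bookkeeping in the answers above can be checked numerically. This is a small illustrative sketch; the 0.5 m radius is an invented example value, and the second half reproduces the $4\,\mathrm{m/s} \cdot 2\,\mathrm{s} = 8\,\mathrm{m}$ cancellation where only the unit symbols cancel while the numbers multiply.

```python
import math

# v = 2*pi*r * (revolutions per second): one revolution covers 2*pi*r metres.
def tangential_speed(radius_m, rpm):
    revolutions_per_second = rpm / 60.0
    return 2 * math.pi * radius_m * revolutions_per_second

v = tangential_speed(radius_m=0.5, rpm=1)   # a point on a 0.5 m radius at 1 rpm

# Unit cancellation: (4 m/s) * (2 s) = 8 m.  Seconds cancel symbolically;
# the numeric parts multiply as usual.
speed_m_per_s = 4.0
time_s = 2.0
distance_m = speed_m_per_s * time_s
```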
3
This is a stark demonstration of the mess inflicted by failure to nondimensionalize. If your quantities are dimensional, write their units next to them, in your case: $$M = \begin{pmatrix} 1 + xy & ym \\ x/m & 1 \end{pmatrix},$$ which acts on vectors $(am, b)^T$, to produce like-dimensioned vectors, for numerical x, y, a, b. Having ensured ...
1
I suppose you consider states $|k\rangle$ in the continuous spectrum, otherwise you would not use the integration over $k$ but a summation. If so, the normalization of your $|k\rangle$ probably looks like $\langle k_i|k_j\rangle=\delta(k_i-k_j)$, which means $|k\rangle$ is not exactly dimensionless for the reasons you mentioned (the Dirac delta is $[k]^{-1}$). Actually, you ...
2
Note: the paper says dimension 6 operators, not order 6 operators. However, I should warn you that they are using a somewhat lazy definition of dimension here. Consider a term in the Lagrangian (in $D$ spacetime dimensions) of the form $$\mathcal{L} \sim \frac{1}{M^{N-D}} \mathcal{O}_N$$ where $\mathcal{O}_N$ is an expression with ...
7
$\int \delta(x)\, dx=1$ shows that $\delta(x)$ has units of (length)$^{-1}$. As $V\delta(x)$ is a potential energy, $V$ must have units of (energy)$\times$(length). This is consistent with the first-order energy shift, as $|\psi|^2$ is probability (dimensionless) per unit length.
3
"I have learnt that the dimensionless quantities have no unit." Whoever you learnt this from is incorrect. Clearly, dimensionless quantities CAN have units, as you have figured out in the case of angles. Another example of a dimensionless quantity with units is the relative abundance of particles, which has units of ppm (parts per million), ppb (parts per ...
2
With our current definitions of meter and second, $$c = 299{,}792{,}458\ \frac{\mathrm{m}}{\mathrm{s}}.$$ Let's say we were to use some other unit of second, let's call it the "zepond" and denote it with $z$ instead of $s$. Let's say one zepond is $2$ seconds, so $$z = 2s.$$ Then the speed of light, measured in meters per zepond (instead of meters per second) would ...
1
Your constant can be written as: $\beta = \alpha\, J^{4/4}\, m^{5/4}$. If you want to use a different constant without fractional units, you can simply take: $$\beta' = \beta^4 = \alpha^4 J^4 m^5$$ Taking the "1.25th" root would still give a fractional value for joules. Note that the equation is already in SI units, it just looks a bit ugly to use noninteger ...
4
The integral of distance with respect to time is known as absement. It is one of the family of derivatives and integrals of position, and can be integrated further to get absity, abseleration and abserk. Absement appears when considering situations where a quantity depends on both how far something has moved or extended and how long the movement is ...
0
As far as I know, distance $s$ multiplied by time $t$ or (as the differential analogue) the integral $$^{(*)}: \int_{t_0}^{t}s(\tilde{t})\,\text{d}\tilde{t}$$ is not used in the context of physics. Nonetheless, this does not mean that it cannot be interpreted physically. If an object travels uniformly along a straight line, $^{(*)}$ will obviously be larger ...
0
The binding energy can be calculated from the work needed to take shells of matter away from the star to infinity, until it's all gone (technically the negative of that). If $\rho$ is the density and the radius of the star that's left is $r$, then the mass of the star that is left is $\frac{4}{3}\pi r^3\rho$, and the work needed to remove a shell of mass $4\pi r^...
4
Temperature is not a measure of energy. It seems what you are doing is you are interpreting temperature as a kind of "energy per unit mass", but that is not the case - otherwise we'd just use that: energy per unit mass (joules per kilogram, say), and we would not need a separate unit (kelvin) for temperature. It's far from unrelated to energy, and ...
8
Energy is not temperature. The lack of kelvins in the definition of the joule is not weird when you consider that every material requires a different amount of joules to heat up by 1 kelvin, and during a phase change, there is no change in temperature (kelvins) at all even as you add or remove energy (joules). You can add a bunch of joules to a material, and ...
2
The kinetic energy (in Joules), for each atom in a gas, is $\frac{3kT}{2}$, where $k$ is Boltzmann's constant and $T$ is the temperature in Kelvin. The number of atoms in each gram depends on the atomic mass; for example, 12 g of carbon has Avogadro's number ($6.02\times 10^{23}$) atoms of carbon. The total heat energy is then the kinetic energy times the ...
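As a rough numerical sketch of the estimate above: the total translational kinetic energy of one mole of atoms at $\frac{3kT}{2}$ each. Treating the atoms as a monatomic ideal gas at an assumed 300 K is a simplification (solid carbon does not actually behave this way); the numbers are here only to show the arithmetic.

```python
# Total translational kinetic energy: (3/2) k T per atom, times N_A atoms.
k_B = 1.380649e-23      # Boltzmann constant, J/K
N_A = 6.02214076e23     # Avogadro's number, 1/mol
T = 300.0               # assumed temperature, K

energy_per_atom_J = 1.5 * k_B * T              # (3/2) k T per atom
n_atoms = N_A                                  # 12 g of carbon-12 is one mole
total_energy_J = n_atoms * energy_per_atom_J   # equals (3/2) R T for one mole
```

The result is a few kilojoules per mole at room temperature, matching $(3/2)RT$.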
1
The decimal system is convenient for conversion of magnitudes. In a positional system it is easy to multiply and divide by factors of ten, and ten itself is easy enough to conceptualise. This fact is used in the SI system, where there is a system of prefixes that indicate these scale factors. For example, $k$ for kilo, which means multiplying by a factor of ...
Top 50 recent answers are included
http://www.ask.com/question/what-did-nicolaus-copernicus-invent | # What did Nicolaus Copernicus invent?
Nicolaus Copernicus proposed that the Sun was stationary at the center of the universe and that the Earth revolved around it. He came up with a model with the Sun at the center and the other objects, including the Earth, orbiting it. His work is regarded as the starting point of modern astronomy.
https://math.stackexchange.com/questions/2100793/how-to-access-an-element-of-a-set/2100808 | # How to access an element of a set?
Let us say that I have a set $A=\{1,2, 3\}$. Now I need to access, say, the element $3$ of $A$. How do I achieve this?
I know that sets are unordered list of elements but I need to access the elements of a set. Can I achieve this with a tuple? Like $A=(1, 2, 3)$, should I write $A(i)$ to access the i-th element of $A$? Or is there any other notation?
If I have a list of elements, what is the best mathematical object to represent it so that I can freely access its elements and how? In programming, I would use arrays.
• Define access. – Dan Rust Jan 16 '17 at 21:50
• @DanRust by acces I mean index the elements. – Zir Jan 16 '17 at 21:53
• Why do you need to index them? If you want to take a specific element, you just take that element ("we have $3 \in A$"). If you want to take an arbitrary element, you just give it a name ("we have some $\alpha \in A$"). – Morgan Rodgers Jan 17 '17 at 3:07
• @MorganRodgers How do you write this algorithm? 1. Let $A$ be the set of admissible clients. 2. $S\gets\emptyset$. 3. for $i = 1$ to $|A|$ do $S\gets A(i)$ and calculate $f(S)$. 4. If $f(S)=0$, remove $A(i)$ from $A$. I can write it in English language but not in mathematical language because there no $A(i)$ in a set and there is no remove in a set. – Zir Jan 17 '17 at 16:18
• 1. Define $A$. 2. $S = \emptyset$. 3./4. (done as a single loop over the elements of $A$) for $\alpha \in A$: $S = S \cup \{\alpha\}$, if $f(S) = 0$ then $A = A \setminus \{\alpha\}$. – Morgan Rodgers Jan 17 '17 at 20:09
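The loop sketched in these comments translates directly into code that iterates over set elements without positional indices. The predicate `f` below is an invented stand-in, and removal is deferred to after the loop so the set is not mutated while iterating over it.

```python
def f(S):
    # Placeholder predicate, invented only for this illustration.
    return 0 if sum(S) % 2 == 0 else 1

A = {1, 2, 3}
S = set()
to_remove = set()
for alpha in sorted(A):     # iterate over a fixed snapshot of A
    S = S | {alpha}         # S <- S union {alpha}
    if f(S) == 0:
        to_remove.add(alpha)
A = A - to_remove           # A <- A \ {alpha : f(S) was 0}
```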
You are speaking of the index operator one often finds in programming when using arrays. Given that sets are unordered, it does not make sense to do this with sets, as $\{1,2\}=\{2,1\}$.
You probably want to use a tuple, which is just a (finite) sequence of elements, and is denoted as you wrote $A$. If $A=(1,2,3)$ is a tuple, then I think the most common way to index it is to use subscripts - so $A_1=1$ and $A_2=2$ and so on, though one sometimes sees $A(1)$ and $A(2)$ as well, since one can think of $A$ as a function from the set $\{1,2,3\}$ to $\mathbb R$. Occasionally, one even sees $A^1$ and $A^2$. Really, what matters is that you are consistent and clear in how you write indexing.
• Thank you. If I let $A=(1,2,3)$ then how to tell, in notation, the cardinality of $A$? I mean, what I have is this: Let $A$ be a list of elements (to index them I choose as you suggested a tuple). Now I need to define the cardinality of $A$, is it $|A|$? – Zir Jan 16 '17 at 22:29
• @Zir I think if you said "$|A|$ is the length of the list", that would be understood perfectly well and considered a natural notation. I hardly ever see it though - it's pretty common, however, to say, "Let $x=(x_1,\ldots,x_n)$ be a tuple" or something like that, where $n$ is just implicitly defined as the length. Or "If $A$ is an $n$-tuple..." has a similar effect. However, if you have lots of lists to work with, these aren't really good options and just defining $|A|$ as you have would be better. – Milo Brandt Jan 16 '17 at 22:32
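In code the distinction discussed above is concrete: a tuple supports positional access (0-based in Python, so the mathematical $A_1$ is `A[0]`), while a set supports membership and iteration but not indexing. A small sketch:

```python
A_tuple = (1, 2, 3)
first = A_tuple[0]        # corresponds to A_1 in subscript notation
length = len(A_tuple)     # the length |A| discussed in the comments

A_set = {1, 2, 3}
try:
    A_set[0]              # sets have no positional index
    set_indexable = True
except TypeError:
    set_indexable = False
```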
In a general set, there is no indexing, except the trivial one: a set $A$ is indexed by itself, that is, $A=\{x_a: a\in A\}$ where $x_a=a$.
Indeed, in set theory without the axiom of choice, there can be sets which are impossible to "index nicely," although of course this takes work to be made precise.
So the general task you describe is either trivial (if you allow a set to index itself) or impossible.
That said, in restricted contexts we can do better. For instance, suppose we're looking exclusively at finite sets of real numbers. Then any such set is naturally ordered by the usual order on the reals, that is, we may speak of the $k$th smallest element of a set.
The fact that a set is "unordered" by definition doesn't prevent you from defining an ordering on it. Indeed, by the Axiom of Choice, any set can be indexed by an ordinal number.
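For the finite-sets-of-reals case mentioned above, the $k$th smallest element can be obtained by sorting a snapshot of the set (the example values are invented):

```python
A = {3.5, -1.0, 2.0, 7.25}
k = 2
kth_smallest = sorted(A)[k - 1]   # 1-based k maps to index k-1
```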
https://www.groundai.com/project/hitting-times-asymptotics-for-hard-core-interactions-on-grids/ | Hitting times asymptotics for hard-core interactions on grids
# Hitting times asymptotics for hard-core interactions on grids
## Abstract
We consider the hard-core model with Metropolis transition probabilities on finite grid graphs and investigate the asymptotic behavior of the first hitting time between its two maximum-occupancy configurations in the low-temperature regime. In particular, we show how the order-of-magnitude of this first hitting time depends on the grid sizes and on the boundary conditions by means of a novel combinatorial method. Our analysis also proves the asymptotic exponentiality of the scaled hitting time and yields the mixing time of the process in the low-temperature limit as side-result. In order to derive these results, we extended the model-independent framework in [27] for first hitting times to allow for a more general initial state and target subset.
Keywords: hard-core model; hitting times; Metropolis Markov chains; finite grid graphs; mixing times; low temperature.
## 1 Introduction
Hard-core lattice gas model. In this paper we consider a stochastic model where particles in a finite volume dynamically interact subject to hard-core constraints and study the first hitting times between admissible configurations of this model. This model was introduced in the chemistry and physics literature under the name "hard-core lattice gas model" to describe the behavior of a gas whose particles have non-negligible radii and cannot overlap [21, 36]. We describe the spatial structure in terms of a finite undirected graph $\Lambda$ of $N$ vertices, which represents all the possible sites where particles can reside. The hard-core constraints are represented by edges connecting the pairs of sites that cannot be occupied simultaneously. We say that a particle configuration on $\Lambda$ is admissible if it does not violate the hard-core constraints, i.e. if it corresponds to an independent set of the graph $\Lambda$. The appearance and disappearance of particles on $\Lambda$ is modeled by means of a single-site update Markov chain $\{X_t\}_{t\in\mathbb{N}}$ with Metropolis transition probabilities, parametrized by the fugacity $\lambda$. At every step a site $v$ of $\Lambda$ is selected uniformly at random; if it is occupied, the particle is removed with probability $1/\lambda$; if instead the selected site $v$ is vacant, then a particle is created with probability $1$ if and only if all the neighboring sites at edge-distance one from $v$ are also vacant. Denote by $\mathcal{I}(\Lambda)$ the collection of independent sets of $\Lambda$. The Markov chain $\{X_t\}_{t\in\mathbb{N}}$ is ergodic and reversible with respect to the hard-core measure with fugacity $\lambda$ on $\mathcal{I}(\Lambda)$, which is defined as
$$\mu_\lambda(I) := \frac{\lambda^{|I|}}{Z_\lambda(\Lambda)}, \qquad I \in \mathcal{I}(\Lambda), \qquad (1)$$
where $Z_\lambda(\Lambda)$ is the appropriate normalizing constant (also called partition function). The fugacity is related to the inverse temperature $\beta$ of the gas by the logarithmic relationship $\log \lambda = \beta$.
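The single-site dynamics just described is straightforward to simulate. The following Python sketch is ours, not part of the paper; the 4-cycle graph, the random seed and $\beta = 3$ are arbitrary illustrative choices. With $\lambda = e^\beta$ the energy is $H(\sigma) = -\sum_v \sigma(v)$, so removing a particle is the uphill move:

```python
import math
import random

def hardcore_step(sigma, neighbors, beta, rng):
    """One Metropolis single-site update for the hard-core model with
    fugacity lambda = e^beta, i.e. energy H(sigma) = -(number of particles)."""
    v = rng.randrange(len(sigma))
    sigma = list(sigma)
    if sigma[v] == 1:
        # removal raises the energy by 1 -> accepted with probability e^{-beta}
        if rng.random() < math.exp(-beta):
            sigma[v] = 0
    elif all(sigma[w] == 0 for w in neighbors[v]):
        # creation lowers the energy -> always accepted when admissible
        sigma[v] = 1
    return tuple(sigma)

# toy example: hard-core dynamics on a 4-cycle (sites 0-1-2-3-0)
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
rng = random.Random(0)
sigma = (0, 0, 0, 0)
for _ in range(1000):
    sigma = hardcore_step(sigma, neighbors, beta=3.0, rng=rng)
    # every visited configuration is an independent set
    assert all(not (sigma[v] and sigma[w]) for v in neighbors for w in neighbors[v])
```

Every configuration visited is admissible by construction; for large $\beta$ the chain spends most of its time in the two maximum-occupancy configurations $\{0,2\}$ and $\{1,3\}$ of the 4-cycle.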
We focus on the study of the hard-core model in the low-temperature regime where $\lambda \to \infty$ (or equivalently $\beta \to \infty$), so that the hard-core measure favors maximum-occupancy configurations. In particular, we are interested in how long it takes the Markov chain to “switch” between these maximum-occupancy configurations. Given a target subset of admissible configurations $A \subset \mathcal{I}(\Lambda)$ and an initial configuration $x \notin A$, this work mainly focuses on the study of the first hitting time $\tau^x_A$ of the subset $A$ for the Markov chain $\{X_t^\beta\}_{t \in \mathbb{N}}$ with initial state $x$ at time $t=0$.
Two more application areas. The hard-core lattice gas model is thus a canonical model of a gas whose particles have a non-negligible size, and the asymptotic hitting times studied in this paper provide insight into the rigid behavior at low temperatures. Apart from applications in statistical physics, our study of the hitting times is of interest for other areas as well. The hard-core model is also intensively studied in the area of operations research in the context of communication networks [23]. In that case, the graph $\Lambda$ represents a communication network where calls arrive at the vertices according to independent Poisson streams. The durations of the calls are assumed to be independent and exponentially distributed. If upon arrival of a call at a vertex $v$, this vertex and all its neighbors are idle, the call is activated and vertex $v$ will be busy for the duration of the call. If instead upon arrival of the call, vertex $v$ or at least one of its neighbors is busy, the call is lost, hence rendering hard-core interaction. In recent years, extensions of this communication network model received widespread attention, because of the emergence of wireless networks. A pivotal algorithm termed CSMA [37], which is implemented for distributed resource sharing in wireless networks, can be described in terms of a continuous-time version of the Markov chain studied in this paper. Wireless devices form a topology and the hard-core constraints represent the conflicts between simultaneous transmissions due to interference [37]. In this context $\Lambda$ is therefore called interference graph or conflict graph. The transmission of a data packet is attempted independently by every device after a random back-off time with exponential rate $\lambda$, and, if successful, lasts for an exponentially distributed time with mean $1$. Hence, the regime $\lambda \to \infty$ describes the scenario where the competition for access to the medium becomes fiercer.
The asymptotic behavior of the first hitting times between maximum-occupancy configurations provides fundamental insights into the average packet transmission delay and the temporal starvation which may affect some devices of the network, see [39].
A third area in which our results find application is discrete mathematics, and in particular for algorithms designed to find independent sets in graphs. The Markov chain $\{X_t^\beta\}_{t \in \mathbb{N}}$ can be regarded as a Monte Carlo algorithm to approximate the partition function $Z_\lambda(\Lambda)$ or to sample efficiently according to the hard-core measure $\mu_\lambda$ for $\lambda$ large. A crucial quantity to study is then the mixing time of such Markov chains, which quantifies how long it takes the empirical distribution of the process to get close to the stationary distribution $\mu_\lambda$. Several papers have already investigated the mixing time of the hard-core model with Glauber dynamics on various graphs [3, 19, 20, 34]. By understanding the asymptotic behavior of the hitting times between maximum-occupancy configurations on $\Lambda$ as $\beta \to \infty$, we can derive results for the mixing time of the Metropolis hard-core dynamics on $\Lambda$, which in general is smaller than that of the usual Glauber dynamics, as illustrated in [25].
Results for general graphs. The Metropolis dynamics in which we are interested for the hard-core model can be put, after the identification $\lambda = e^\beta$, in the framework of reversible Freidlin-Wentzell Markov chains with Metropolis transition probabilities (see Section 2 for precise definitions). Hitting times for Freidlin-Wentzell Markov chains are central in the mathematical study of metastability. In the literature, several different approaches have been introduced to study the time it takes for a particle system to reach a stable state starting from a metastable configuration. Two approaches have been independently developed based on large deviations techniques: the pathwise approach, first introduced in [6] and then developed in [31, 32, 33], and the approach in [7, 8, 9, 10]. Other approaches to metastability are the potential theoretic approach [4, 5] and, more recently introduced, the martingale approach [1, 2], see [13] for a more detailed review.
In the present paper, we follow the pathwise approach, which has already been used to study many finite-volume models in a low-temperature regime, see [11, 12, 15, 16, 17, 24, 29, 30], where the state space is seen as an energy landscape and the paths which the Markov chain will most likely follow are those with a minimum energy barrier. In [31, 32, 33] the authors derive general results for first hitting times for the transition from metastable to stable states, the critical configurations (or bottlenecks) visited during this transition and the tube of typical paths. In [27] the results on hitting times are obtained with minimal model-dependent knowledge, i.e. it suffices to find all the metastable states and the minimal energy barrier which separates them from the stable states. We extend the existing framework [27] in order to obtain asymptotic results for the hitting time $\tau^x_A$ for any starting state $x$, not necessarily metastable, and any target subset $A$, not necessarily the set of stable configurations. In particular, we identify the two crucial exponents $\Gamma_-(x,A)$ and $\Gamma_+(x,A)$ that appear in the upper and lower bounds in probability for $\tau^x_A$ in the low-temperature regime. These two exponents might be hard to derive for a given model and, in general, they are not equal. However, we derive a sufficient condition that guarantees that they coincide and also yields the order-of-magnitude of the first moment of $\tau^x_A$ on a logarithmic scale. Furthermore, we give another slightly stronger condition under which the hitting time normalized by its mean converges in distribution to an exponential random variable.
Results for rectangular grid graphs. We apply these model-independent results to the hard-core model on rectangular grid graphs to understand the asymptotic behavior of the tunneling time $\tau^{\mathbf{e}}_{\mathbf{o}}$, where $\mathbf{e}$ and $\mathbf{o}$ are the two configurations with maximum occupancy, in which the particles are arranged in a checkerboard fashion on even and odd sites. Using a novel powerful combinatorial method, we identify the minimum energy barrier between $\mathbf{e}$ and $\mathbf{o}$ and prove absence of deep cycles for this model, which allows us to decouple the asymptotics for the hitting time $\tau^{\mathbf{e}}_{\mathbf{o}}$ and the study of the critical configurations. In this way, we then obtain sharp bounds in probability for $\tau^{\mathbf{e}}_{\mathbf{o}}$, since the two exponents coincide, and find the order-of-magnitude of $\tau^{\mathbf{e}}_{\mathbf{o}}$ on a logarithmic scale, which depends both on the grid dimensions and on the chosen boundary conditions. In addition, our analysis of the energy landscape shows that the scaled hitting time $\tau^{\mathbf{e}}_{\mathbf{o}} / \mathbb{E}\tau^{\mathbf{e}}_{\mathbf{o}}$ is exponentially distributed in the low-temperature regime and yields the order-of-magnitude of the mixing time of the Markov chain $\{X_t^\beta\}_{t \in \mathbb{N}}$.
By way of contrast, we also briefly look at the hard-core model on complete $K$-partite graphs, which was already studied in continuous time in [38]. While less relevant from a physical standpoint, the corresponding energy landscape is simpler than that for grid graphs and allows for explicit calculations for the hitting times between any pair of configurations. In particular, we show that, whenever our two conditions are not satisfied, the scaled hitting time is not necessarily exponentially distributed.
## 2 Overview and main results
In this section we introduce the general framework of Metropolis Markov chains and show how the dynamical hard-core model fits in it. We then present our two main results for the hitting time $\tau^{\mathbf{e}}_{\mathbf{o}}$ for the hard-core model on grid graphs and outline our proof method.
### 2.1 Metropolis Markov chains
Let $\mathcal{X}$ be a finite state space and let $H : \mathcal{X} \to \mathbb{R}$ be the Hamiltonian, i.e. a non-constant energy function. We consider the family of Markov chains $\{X_t^\beta\}_{t \in \mathbb{N}}$ on $\mathcal{X}$ with Metropolis transition probabilities indexed by a positive parameter $\beta$, the inverse temperature:
$$P_\beta(x,y) := \begin{cases} q(x,y)\, e^{-\beta [H(y)-H(x)]^+}, & \text{if } x \neq y,\\ 1 - \sum_{z \neq x} P_\beta(x,z), & \text{if } x = y, \end{cases} \qquad (2)$$
where $q : \mathcal{X} \times \mathcal{X} \to [0,1]$ is a matrix that does not depend on $\beta$. The matrix $q$ is the connectivity function and we assume it to be

• Stochastic, i.e. $\sum_{y \in \mathcal{X}} q(x,y) = 1$ for every $x \in \mathcal{X}$;

• Symmetric, i.e. $q(x,y) = q(y,x)$ for every $x, y \in \mathcal{X}$;

• Irreducible, i.e. for any $x, y \in \mathcal{X}$, $x \neq y$, there exists a finite sequence of states $\omega_1, \ldots, \omega_n \in \mathcal{X}$ such that $\omega_1 = x$, $\omega_n = y$ and $q(\omega_i, \omega_{i+1}) > 0$, for $i = 1, \ldots, n-1$. We will refer to such a sequence as a path from $x$ to $y$ and we will denote it by $\omega : x \to y$.
We call the triplet $(\mathcal{X}, H, q)$ an energy landscape. The Markov chain $\{X_t^\beta\}_{t \in \mathbb{N}}$ is reversible with respect to the Gibbs measure
$$\mu_\beta(x) := \frac{e^{-\beta H(x)}}{\sum_{y \in \mathcal{X}} e^{-\beta H(y)}}. \qquad (3)$$
Furthermore, it is well-known (see for example [9, Proposition 1.1]) that the Markov chain $\{X_t^\beta\}_{t \in \mathbb{N}}$ is aperiodic and irreducible on $\mathcal{X}$. Hence $\{X_t^\beta\}_{t \in \mathbb{N}}$ is ergodic on $\mathcal{X}$ with stationary distribution $\mu_\beta$.
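Reversibility with respect to (3) amounts to the detailed balance condition $\mu_\beta(x) P_\beta(x,y) = \mu_\beta(y) P_\beta(y,x)$, which can be verified numerically; the three-state landscape below is our own toy example, not taken from the paper:

```python
import math

def metropolis_matrix(H, q, beta):
    """Metropolis transition matrix P_beta(x,y) = q(x,y) e^{-beta [H(y)-H(x)]^+},
    with the diagonal fixed so that every row sums to one."""
    n = len(H)
    P = [[0.0] * n for _ in range(n)]
    for x in range(n):
        off_diag = 0.0
        for y in range(n):
            if y != x:
                P[x][y] = q[x][y] * math.exp(-beta * max(H[y] - H[x], 0.0))
                off_diag += P[x][y]
        P[x][x] = 1.0 - off_diag
    return P

# toy landscape: three states with energies 0, 1, 0.5 and symmetric q
H = [0.0, 1.0, 0.5]
q = [[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]]
beta = 2.0
P = metropolis_matrix(H, q, beta)
Z = sum(math.exp(-beta * h) for h in H)
mu = [math.exp(-beta * h) / Z for h in H]
for x in range(3):
    for y in range(3):
        # detailed balance: mu(x) P(x,y) == mu(y) P(y,x)
        assert abs(mu[x] * P[x][y] - mu[y] * P[y][x]) < 1e-12
```

The uphill moves carry the Boltzmann penalty $e^{-\beta \Delta H}$ while downhill moves are accepted with probability one, which is exactly what makes the Gibbs measure stationary.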
For a nonempty subset $A \subset \mathcal{X}$ and a state $x \in \mathcal{X}$, we denote by $\tau^x_A$ the first hitting time of the subset $A$ for the Markov chain $\{X_t^\beta\}_{t \in \mathbb{N}}$ with initial state $x$ at time $t=0$, i.e.
$$\tau^x_A := \inf\{\, t > 0 \,:\, X_t^\beta \in A \mid X_0^\beta = x \,\}.$$
Denote by $\mathcal{X}^s$ the set of stable states of the energy landscape $(\mathcal{X}, H, q)$, that is the set of global minima of $H$ on $\mathcal{X}$, and by $\mathcal{X}^m$ the set of metastable states, which are the local minima of $H$ in $\mathcal{X} \setminus \mathcal{X}^s$ with maximum stability level (see Section 3 for the definition). The first hitting time $\tau^x_A$ is often called tunneling time when $x$ is a stable state and the target set is some $A \subseteq \mathcal{X}^s \setminus \{x\}$, or transition time from metastable to stable when $x \in \mathcal{X}^m$ and $A = \mathcal{X}^s$.
### 2.2 The hard-core model
The hard-core model on a finite undirected graph $\Lambda$ of $N$ vertices evolving according to the dynamics described in Section 1 can be put in the framework of Metropolis Markov chains. Indeed, we associate a variable $\sigma(v) \in \{0,1\}$ with each site $v \in \Lambda$, indicating the absence ($0$) or the presence ($1$) of a particle in that site. Then the hard-core dynamics correspond to the Metropolis Markov chain determined by the energy landscape $(\mathcal{X}, H, q)$ where
• The state space $\mathcal{X} \subset \{0,1\}^\Lambda$ is the set of admissible configurations on $\Lambda$, i.e. the configurations $\sigma \in \{0,1\}^\Lambda$ such that $\sigma(v)\sigma(w) = 0$ for every pair of neighboring sites $v, w$ in $\Lambda$;

• The energy of a configuration $\sigma \in \mathcal{X}$ is $H(\sigma) := -\sum_{v \in \Lambda} \sigma(v)$;

• The connectivity function $q$ allows only for single-site updates (possibly void), i.e. for any $\sigma, \sigma' \in \mathcal{X}$,
$$q(\sigma,\sigma') := \begin{cases} \frac{1}{N}, & \text{if } |\{v \in \Lambda \mid \sigma(v) \neq \sigma'(v)\}| = 1,\\ 0, & \text{if } |\{v \in \Lambda \mid \sigma(v) \neq \sigma'(v)\}| > 1,\\ 1 - \sum_{\eta \neq \sigma} q(\sigma,\eta), & \text{if } \sigma = \sigma'. \end{cases}$$
For $\lambda = e^\beta$ the hard-core measure (1) on $\mathcal{I}(\Lambda)$ is precisely the Gibbs measure (3) associated with the energy landscape $(\mathcal{X}, H, q)$.
Our main focus in the present paper concerns the dynamics of the hard-core model on finite two-dimensional rectangular lattices, to which we will simply refer to as grid graphs. More precisely, given two integers $K, L \geq 2$, we will take $\Lambda$ to be a $K \times L$ grid graph with three possible boundary conditions: toroidal (periodic), cylindrical (semiperiodic) and open. We denote them respectively by $T_{K,L}$, $C_{K,L}$ and $G_{K,L}$. Figure 1 shows an example of the three possible types of boundary conditions.
There are in total $N = KL$ sites in $\Lambda$. Every site $v \in \Lambda$ is described by its coordinates $(v_1, v_2)$, and since $\Lambda$ is finite, we assume without loss of generality that the leftmost (respectively bottommost) site of $\Lambda$ has the horizontal (respectively vertical) coordinate equal to zero. A site is called even (odd) if the sum of its two coordinates is even (odd, respectively) and we denote by $V_e$ and $V_o$ the collection of even sites and that of odd sites of $\Lambda$, respectively.
The open grid $G_{K,L}$ is naturally a bipartite graph: all the neighbors in $\Lambda$ of an even site are odd sites and vice versa. In contrast, the cylindrical and toroidal grids may not be bipartite, so that we further assume that $K$ is an even integer for the cylindrical grid $C_{K,L}$ and that both $K$ and $L$ are even integers for the toroidal grid $T_{K,L}$. Since the bipartite structure is crucial for our methodology, we will tacitly work under these assumptions for the cylindrical and toroidal grids in the rest of the paper. As a consequence, $C_{K,L}$ and $T_{K,L}$ are balanced bipartite graphs, i.e. $|V_e| = |V_o|$. The open grid $G_{K,L}$ has $\lceil KL/2 \rceil$ even sites and $\lfloor KL/2 \rfloor$ odd sites, hence it is a balanced bipartite graph if and only if the product $KL$ is even. We denote by $\mathbf{e}$ ($\mathbf{o}$ respectively) the configuration with a particle at each site in $V_e$ ($V_o$ respectively). More precisely,
$$\mathbf{e}(v) := \begin{cases} 1 & \text{if } v \in V_e,\\ 0 & \text{if } v \in V_o, \end{cases} \qquad \mathbf{o}(v) := \begin{cases} 0 & \text{if } v \in V_e,\\ 1 & \text{if } v \in V_o. \end{cases}$$
Note that $\mathbf{e}$ and $\mathbf{o}$ are admissible configurations for any choice of boundary conditions, and that $H(\mathbf{e}) = -|V_e|$ and $H(\mathbf{o}) = -|V_o|$. In the special case where $\Lambda = G_{K,L}$ with $KL$ odd, $H(\mathbf{e}) < H(\mathbf{o})$ and, as we will show in Section 5, $\mathcal{X}^s = \{\mathbf{e}\}$ and $\mathcal{X}^m = \{\mathbf{o}\}$. In all the other cases, we have $H(\mathbf{e}) = H(\mathbf{o})$ and $\mathcal{X}^s = \{\mathbf{e}, \mathbf{o}\}$; see Section 5 for details.
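For concreteness, the three families of grid graphs can be generated as follows. This sketch is ours, not the paper's; since the text does not fix an orientation, we take the direction of length $K$ as the periodic one for the cylinder. The final assertions check that the even and odd sites form the two sides of a balanced bipartition, so that $\mathbf{e}$ and $\mathbf{o}$ are admissible:

```python
def grid_neighbors(K, L, boundary="open"):
    """Neighbor lists of the K x L grid with open, cylindrical (periodic in
    the direction of length K) or toroidal boundary conditions."""
    def wrap(i, j):
        if boundary in ("cylindrical", "toroidal"):
            i %= K
        if boundary == "toroidal":
            j %= L
        return (i, j) if 0 <= i < K and 0 <= j < L else None
    nbrs = {}
    for i in range(K):
        for j in range(L):
            cand = [wrap(i - 1, j), wrap(i + 1, j), wrap(i, j - 1), wrap(i, j + 1)]
            nbrs[(i, j)] = [c for c in cand if c is not None]
    return nbrs

# toroidal 4x4 grid (both sides even, as the standing assumption requires)
nbrs = grid_neighbors(4, 4, "toroidal")
even = {v for v in nbrs if sum(v) % 2 == 0}
odd = set(nbrs) - even
# every neighbor of an even site is odd: e and o are independent sets
assert all(w in odd for v in even for w in nbrs[v])
assert len(even) == len(odd) == 8
```

On the open grid the same check goes through without parity assumptions, but with $\lceil KL/2 \rceil$ even sites against $\lfloor KL/2 \rfloor$ odd ones.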
### 2.3 Main results and proof outline
Our first main result describes the asymptotic behavior of the tunneling time $\tau^{\mathbf{e}}_{\mathbf{o}}$ for any rectangular grid in the low-temperature regime $\beta \to \infty$. In particular, we prove the existence and find the value of an exponent $\Gamma(\Lambda)$ that gives an asymptotic control in probability of $\tau^{\mathbf{e}}_{\mathbf{o}}$ on a logarithmic scale as $\beta \to \infty$ and characterizes the asymptotic order-of-magnitude of the mean tunneling time $\mathbb{E}\tau^{\mathbf{e}}_{\mathbf{o}}$. We further show that the tunneling time $\tau^{\mathbf{e}}_{\mathbf{o}}$ normalized by its mean converges in distribution to an exponential unit mean random variable.
###### Theorem 2.1 (Asymptotic behavior of the tunneling time $\tau^{\mathbf{e}}_{\mathbf{o}}$).
Consider the Metropolis Markov chain $\{X_t^\beta\}_{t \in \mathbb{N}}$ corresponding to hard-core dynamics on a grid $\Lambda$ as described in Subsection 2.2. There exists a constant $\Gamma(\Lambda) > 0$ such that

1. For every $\varepsilon > 0$, $\lim_{\beta \to \infty} \mathbb{P}_\beta\big( e^{\beta(\Gamma(\Lambda)-\varepsilon)} < \tau^{\mathbf{e}}_{\mathbf{o}} < e^{\beta(\Gamma(\Lambda)+\varepsilon)} \big) = 1$;

2. $\lim_{\beta \to \infty} \frac{1}{\beta} \log \mathbb{E}\tau^{\mathbf{e}}_{\mathbf{o}} = \Gamma(\Lambda)$;

3. $\tau^{\mathbf{e}}_{\mathbf{o}} / \mathbb{E}\tau^{\mathbf{e}}_{\mathbf{o}}$ converges in distribution to an exponential unit mean random variable as $\beta \to \infty$.

In the special case where $\Lambda = G_{K,L}$ with $KL$ odd, (i), (ii), and (iii) hold also for the first hitting time $\tau^{\mathbf{o}}_{\mathbf{e}}$, but with $\Gamma(\Lambda)$ replaced by $\Gamma(\Lambda) - 1$.
Theorem 2.1 relies on the analysis of the hard-core energy landscape for grid graphs and novel results for hitting times in the general Metropolis Markov chains context. We first explain these new model-independent results and, afterwards, we give details about the properties we proved for the energy landscape of the hard-core model.
The framework [27] focuses on the most classical metastability problem, which is the characterization of the transition time between a metastable state and the set of stable states $\mathcal{X}^s$. However, the starting configuration for the hitting times we are interested in is not always a metastable state, and the target set is not always $\mathcal{X}^s$. In fact, the classical results can be applied for the hard-core model on grids for the hitting time $\tau^{\mathbf{o}}_{\mathbf{e}}$ only in the case of a grid with open boundary conditions and odd side lengths, i.e. $\Lambda = G_{K,L}$ with $KL$ odd. Many other interesting hitting times are not covered by the literature, including:

• The hitting time $\tau^{\mathbf{e}}_{\mathbf{o}}$ when $\Lambda$ is a grid with open boundary conditions and odd side lengths, i.e. $\Lambda = G_{K,L}$ with $KL$ odd, which is a transition from the unique stable state $\mathbf{e}$ to the metastable state $\mathbf{o}$;

• The hitting times $\tau^{\mathbf{e}}_{\mathbf{o}}$ and $\tau^{\mathbf{o}}_{\mathbf{e}}$ when $\Lambda$ is a $K \times L$ grid with $KL$ even, for any boundary conditions, since the configurations $\mathbf{e}$ and $\mathbf{o}$ are both stable states;

• The hitting time between any pair of local minima when $\Lambda$ is a complete $K$-partite graph.
We therefore generalize the classical pathwise approach [27] to study the first hitting time $\tau^x_A$ for a Metropolis Markov chain for any pair of starting state $x$ and target subset $A$. The interest of extending these results to the tunneling time between two stable states was already mentioned in [27, 33], but our framework is even more general and we can study $\tau^x_A$ for any pair $(x, A)$, e.g. the transition between a stable state and a metastable one.

Our analysis relies on the classical notion of cycle, which is a maximal connected subset of states lying below a given energy level. The exit time from a cycle in the low-temperature regime is well-known in the literature [9, 10, 13, 31, 33] and is characterized by the depth of the cycle, which is the minimum energy barrier that separates the bottom of the cycle from its external boundary. The usual strategy presented in the literature to study the first hitting time from $x$ to $A$ is to look at the decomposition into maximal cycles of the relevant part of the energy landscape, i.e. $\mathcal{X} \setminus A$. The first model-dependent property one has to prove is that the starting state $x$ is metastable, which guarantees that there are no cycles in $\mathcal{X} \setminus A$ deeper than the maximal cycle containing the starting state $x$, denoted by $C_A(x)$. In this scenario, the time spent in maximal cycles different from $C_A(x)$, and hence the time it takes to reach $A$ from the boundary of $C_A(x)$, is comparable to or negligible with respect to the exit time from $C_A(x)$, making the exit time from $C_A(x)$ and the first hitting time $\tau^x_A$ of the same order.

In contrast, for a general starting state $x$ and target subset $A$, all the maximal cycles of $\mathcal{X} \setminus A$ can potentially have a non-negligible impact on the transition from $x$ to $A$ in the low-temperature regime. By analyzing these maximal cycles and the possible cycle-paths, we can establish bounds in probability of the hitting time $\tau^x_A$ on a logarithmic scale, i.e. obtain a pair of exponents $\Gamma_-(x,A) \leq \Gamma_+(x,A)$ such that for every $\varepsilon > 0$
$$\lim_{\beta \to \infty} \mathbb{P}_\beta\Big( e^{\beta(\Gamma_-(x,A)-\varepsilon)} < \tau^x_A < e^{\beta(\Gamma_+(x,A)+\varepsilon)} \Big) = 1.$$
The sharpness of the exponents $\Gamma_-(x,A)$ and $\Gamma_+(x,A)$ crucially depends on how precisely one can determine which maximal cycles are likely to be visited and which ones are not, see Section 3 for further details. Furthermore, we give a sufficient condition (see Assumption A in Section 3), which is the absence of deep typical cycles, which guarantees that $\Gamma_-(x,A) = \Gamma_+(x,A) =: \Gamma(x,A)$, proving that the random variables $\frac{1}{\beta} \log \tau^x_A$ converge in probability to $\Gamma(x,A)$ as $\beta \to \infty$, and that $\lim_{\beta \to \infty} \frac{1}{\beta} \log \mathbb{E}\tau^x_A = \Gamma(x,A)$. In many cases of interest, one can show that Assumption A holds for the pair $(x, A)$ without detailed knowledge of the typical paths from $x$ to $A$. Indeed, by proving that the model exhibits absence of deep cycles (see Proposition 3.18), similarly to [27], also in our framework the study of the hitting time $\tau^x_A$ is decoupled from an exact control of the typical paths from $x$ to $A$. More precisely, one can obtain asymptotic results for the hitting time $\tau^x_A$ in probability, in expectation and in distribution without detailed knowledge of the critical configurations or of the tube of typical paths. Proving the absence of deep cycles when $x \in \mathcal{X}^m$ and $A = \mathcal{X}^s$ corresponds precisely to identifying the set of metastable states $\mathcal{X}^m$, while, when $x \in \mathcal{X}^s$ and $A = \mathcal{X}^s \setminus \{x\}$, it is enough to show that the energy barrier that separates any state from a state with lower energy is not bigger than the energy barrier separating any two stable states.
Moreover, we give another sufficient condition (see Assumption B in Section 3), called “worst initial state” assumption, to show that the hitting time $\tau^x_A$ normalized by its mean converges in distribution to an exponential unit mean random variable. However, checking Assumption B for a specific model can be very involved, and hence we provide a stronger condition (see Proposition 3.20), which includes the case of the tunneling time between stable states and the classical transition time from a metastable to a stable state. The hard-core model on complete $K$-partite graphs is used as an example to illustrate scenarios where Assumption A or B is violated, in which the asymptotic result for the first moment of $\tau^x_A$ on a logarithmic scale and the asymptotic exponentiality of $\tau^x_A / \mathbb{E}\tau^x_A$ do not hold.
In the case of the hard-core model on a rectangular grid $\Lambda$, we develop a powerful combinatorial approach which shows the absence of deep cycles (Assumption A) for this model, concluding the proof of Theorem 2.1. Furthermore, it yields the value of the energy barrier $\Gamma(\Lambda)$ between $\mathbf{e}$ and $\mathbf{o}$, which turns out to depend both on the grid size and on the chosen boundary conditions. This is illustrated by the next theorem, which is our second main result.
###### Theorem 2.2 (The exponent Γ(Λ) for rectangular grids).
Let $\Lambda$ be a $K \times L$ rectangular grid. Then the energy barrier $\Gamma(\Lambda)$ between $\mathbf{e}$ and $\mathbf{o}$ appearing in Theorem 2.1 takes the values
$$\Gamma(\Lambda) = \begin{cases} \min\{K, L\} + 1 & \text{if } \Lambda = T_{K,L},\\ \min\{\lceil K/2 \rceil, \lceil L/2 \rceil\} + 1 & \text{if } \Lambda = G_{K,L},\\ \min\{K/2, L\} + 1 & \text{if } \Lambda = C_{K,L}. \end{cases}$$
The crucial idea behind the proof of Theorem 2.2 is that along the transition from $\mathbf{e}$ to $\mathbf{o}$, there must be a critical configuration where for the first time an entire row or an entire column coincides with the target configuration $\mathbf{o}$. In such a critical configuration particles reside both in even and odd sites and, due to the hard-core constraints, an interface of empty sites must separate particles with different parities. By quantifying the “inefficiency” of this critical configuration we get the minimum energy barrier that has to be overcome for the transition from $\mathbf{e}$ to $\mathbf{o}$ to occur. The proof is then concluded by exhibiting a path that achieves this minimum energy and by exploiting the absence of other deep cycles in the energy landscape.
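The three expressions in Theorem 2.2 are easy to transcribe; the sketch below is a direct reading of the theorem (the helper name is ours), keeping in mind the standing parity assumptions, namely $K$ even for $C_{K,L}$ and both $K$ and $L$ even for $T_{K,L}$:

```python
import math

def energy_barrier(K, L, boundary):
    """Gamma(Lambda) from Theorem 2.2 for a K x L grid; assumes the parity
    constraints on K, L required by the cylindrical and toroidal cases."""
    if boundary == "toroidal":
        return min(K, L) + 1
    if boundary == "open":
        return min(math.ceil(K / 2), math.ceil(L / 2)) + 1
    if boundary == "cylindrical":
        return min(K // 2, L) + 1
    raise ValueError("unknown boundary condition: %s" % boundary)

assert energy_barrier(4, 6, "toroidal") == 5      # min{4, 6} + 1
assert energy_barrier(5, 9, "open") == 4          # min{3, 5} + 1
assert energy_barrier(6, 2, "cylindrical") == 3   # min{3, 2} + 1
```

Note how the periodic directions double the cost: on the torus a full row of the minority phase must be built against both neighboring rows, which is reflected in the missing ceiling divisions.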
Lastly, we show that by understanding the global structure of an energy landscape and the maximum depths of its cycles, we can also derive results for the mixing time of the corresponding Metropolis Markov chains, as illustrated in Subsection 3.8. In particular, our results show that in the special case of an energy landscape with multiple stable states and without other deep cycles, the hitting time between any two stable states and the mixing time of the chain are of the same order-of-magnitude in the low-temperature regime. This is the case also for the Metropolis hard-core dynamics on grids, see Theorem 5.4 in Section 5.
The rest of the paper is structured as follows. Section 3 is devoted to the model-independent results valid for a general Metropolis Markov chain, which extend the classical framework [27]. The proofs of these results are rather technical and therefore deferred to Section 4. In Section 5 we develop our combinatorial approach to analyze the energy landscapes corresponding to the hard-core model on grids. We finally present in Section 6 our conclusions and indicate future research directions.
## 3 Asymptotic behavior of hitting times for Metropolis Markov chains
In this section we present model-independent results valid for any Markov chain with Metropolis transition probabilities (2) defined in Subsection 2.1. In Subsection 3.1 we introduce the classical notion of a cycle. If the considered model allows only for a very rough energy landscape analysis, well-known results for cycles are shown to readily yield upper and lower bounds in probability for the hitting time $\tau^x_A$: indeed, one can use the depth $\Gamma(x,A)$ of the initial cycle as exponent for the lower bound (see Proposition 3.4) and the maximum depth of a cycle in the partition of $\mathcal{X} \setminus A$ as exponent for the upper bound (see Proposition 3.7). If one has a good handle on the model-specific optimal paths from $x$ to $A$, i.e. those paths along which the maximum energy is precisely the min-max energy barrier between $x$ and $A$, sharper exponents can be obtained, as illustrated in Proposition 3.10, by focusing on the relevant cycle, where the process started in $x$ spends most of its time before hitting the subset $A$. We sharpen these bounds in probability for the hitting time $\tau^x_A$ even further with Proposition 3.15 by studying the tube of typical paths from $x$ to $A$, or standard cascade, a task that in general requires a very detailed but local analysis of the energy landscape. To complete the study of the hitting time in the regime $\beta \to \infty$, we prove in Subsection 3.5 the convergence of the first moment of the hitting time on a logarithmic scale under suitable assumptions (see Theorem 3.17) and give in Subsection 3.6 sufficient conditions for the scaled hitting time $\tau^x_A / \mathbb{E}\tau^x_A$ to converge in distribution as $\beta \to \infty$ to an exponential unit mean random variable, see Theorem 3.19. Furthermore, we illustrate in detail two special cases which fall within our framework, namely the classical transition from a metastable state to a stable state and the tunneling between two stable states, which is the relevant one for the model considered in this paper.
In Subsection 3.7 we briefly present the hard-core model on a complete $K$-partite graph, which is an example of a model where the asymptotic exponentiality of the scaled hitting times does not always hold. Lastly, in Subsection 3.8 we present some results for the mixing time and the spectral gap of Metropolis Markov chains and show how they are linked with the critical depths of the energy landscape.
In the rest of this section and in Section 4, $\{X_t^\beta\}_{t \in \mathbb{N}}$ will denote a general Metropolis Markov chain with energy landscape $(\mathcal{X}, H, q)$ and inverse temperature $\beta$, as defined in Subsection 2.1.
### 3.1 Cycles: Definitions and classical results
We recall here the definition of cycle and present some well-known properties.
Recall that a path $\omega : x \to y$ has been defined in Subsection 2.1 as a finite sequence of states $\omega_1, \ldots, \omega_n \in \mathcal{X}$ such that $\omega_1 = x$, $\omega_n = y$ and $q(\omega_i, \omega_{i+1}) > 0$, for $i = 1, \ldots, n-1$. Given a path $\omega = (\omega_1, \ldots, \omega_n)$ in $\mathcal{X}$, we denote by $|\omega| := n$ its length and define its height or elevation by
$$\Phi_\omega := \max_{i=1,\ldots,|\omega|} H(\omega_i). \qquad (4)$$
A subset $A \subset \mathcal{X}$ with at least two elements is connected if for all $x, y \in A$ there exists a path $\omega : x \to y$ such that $\omega_i \in A$ for every $i$. Given a nonempty subset $A \subset \mathcal{X}$ and $x \notin A$, we define $\Omega_{x,A}$ as the collection of all paths $\omega : x \to y$ for some $y \in A$ that do not visit $A$ before hitting $y$, i.e.
$$\Omega_{x,A} := \{\, \omega : x \to y \mid y \in A,\ \omega_i \notin A\ \forall\, i < |\omega| \,\}. \qquad (5)$$
We remark that only the endpoint of each path in $\Omega_{x,A}$ belongs to $A$. The communication energy between a pair $x, y \in \mathcal{X}$ is the minimum value that has to be reached by the energy in every path $\omega : x \to y$, i.e.
$$\Phi(x,y) := \min_{\omega : x \to y} \Phi_\omega. \qquad (6)$$
Given two nonempty disjoint subsets $A, B \subset \mathcal{X}$, we define the communication energy between $A$ and $B$ by
$$\Phi(A,B) := \min_{x \in A,\, y \in B} \Phi(x,y). \qquad (7)$$
Given a nonempty set $A \subset \mathcal{X}$, we define its external boundary by
$$\partial A := \{\, y \notin A \mid \exists\, x \in A \,:\, q(x,y) > 0 \,\}.$$
For a nonempty set $A \subset \mathcal{X}$ we define its bottom $F(A)$ as the set of all minima of the energy function $H$ on $A$, i.e.
$$F(A) := \{\, y \in A \,:\, H(y) = \min_{x \in A} H(x) \,\}.$$
Let $\mathcal{X}^s := F(\mathcal{X})$ be the set of stable states, i.e. the set of states with minimum energy. Since $\mathcal{X}$ is finite, the set $\mathcal{X}^s$ is always nonempty. Define the stability level $V_x$ of a state $x \in \mathcal{X}$ by
$$V_x := \Phi(x, \mathcal{I}_x) - H(x), \qquad (8)$$
where $\mathcal{I}_x := \{\, z \in \mathcal{X} \,:\, H(z) < H(x) \,\}$ is the set of states with energy lower than $H(x)$. We set $V_x := \infty$ if $\mathcal{I}_x$ is empty, i.e. when $x$ is a stable state. The set of metastable states is defined as
$$\mathcal{X}^m := \{\, x \in \mathcal{X} \,:\, V_x = \max_{z \in \mathcal{X} \setminus \mathcal{X}^s} V_z \,\}. \qquad (9)$$
We call a nonempty subset $C \subset \mathcal{X}$ a cycle if it is either a singleton or it is a connected set such that
$$\max_{x \in C} H(x) < H(F(\partial C)). \qquad (10)$$
A cycle $C$ for which condition (10) holds is called a non-trivial cycle. If $C$ is a non-trivial cycle, we define its depth as
$$\Gamma(C) := H(F(\partial C)) - H(F(C)). \qquad (11)$$
Any singleton $\{x\}$ for which condition (10) does not hold is called a trivial cycle. We set the depth of a trivial cycle to be equal to zero, i.e. $\Gamma(\{x\}) := 0$. Given a cycle $C$, we will refer to the set $F(\partial C)$ of minima on its boundary as its principal boundary. Note that
$$\Phi(C, \mathcal{X} \setminus C) = \begin{cases} H(x) & \text{if } C = \{x\} \text{ is a trivial cycle},\\ H(F(\partial C)) & \text{if } C \text{ is a non-trivial cycle}. \end{cases}$$
In this way, we have the following alternative expression for the depth of a cycle $C$, which has the advantage of being valid also for trivial cycles:
$$\Gamma(C) = \Phi(C, \mathcal{X} \setminus C) - H(F(C)). \qquad (12)$$
The next lemma gives an equivalent characterization of a cycle.
###### Lemma 3.1.
A nonempty subset $C \subset \mathcal{X}$ is a cycle if and only if it is either a singleton or it is connected and satisfies
$$\max_{x,y \in C} \Phi(x,y) < \Phi(C, \mathcal{X} \setminus C).$$
The proof easily follows from definitions (6), (7) and (10) and the fact that if $C$ is not a singleton and is connected, then
$$\max_{x,y \in C} \Phi(x,y) = \max_{x \in C} H(x). \qquad (13)$$
We remark that the equivalent characterization of a cycle given in Lemma 3.1 is the “correct definition” of a cycle in the case where the transition probabilities are not necessarily Metropolis but satisfy the more general Freidlin-Wentzell condition
$$\lim_{\beta \to \infty} -\frac{1}{\beta} \log P_\beta(x,y) = \Delta(x,y) \quad \forall\, x,y \in \mathcal{X}, \qquad (14)$$
where $\Delta : \mathcal{X} \times \mathcal{X} \to \mathbb{R}^+ \cup \{\infty\}$ is an appropriate rate function. The Metropolis transition probabilities correspond to the case (see [14] for more details) where
$$\Delta(x,y) = \begin{cases} [H(y)-H(x)]^+ & \text{if } q(x,y) > 0,\\ \infty & \text{otherwise}. \end{cases}$$
The next theorem collects well-known results for the asymptotic behavior of the exit time from a cycle as becomes large, where the depth of the cycle plays a crucial role.
###### Theorem 3.2 (Properties of the exit time from a cycle).
Consider a non-trivial cycle $C$.

1. For any $x \in C$ and for any $\varepsilon > 0$, there exists $k_1 > 0$ such that for all $\beta$ sufficiently large
$$\mathbb{P}_\beta\big( \tau^x_{\partial C} < e^{\beta(\Gamma(C)-\varepsilon)} \big) \leq e^{-e^{k_1 \beta}}.$$

2. For any $x \in C$ and for any $\varepsilon > 0$, there exists $k_2 > 0$ such that for all $\beta$ sufficiently large
$$\mathbb{P}_\beta\big( \tau^x_{\partial C} > e^{\beta(\Gamma(C)+\varepsilon)} \big) \leq e^{-e^{k_2 \beta}}.$$

3. For any $x, y \in C$, there exists $k_3 > 0$ such that for all $\beta$ sufficiently large
$$\mathbb{P}_\beta\big( \tau^x_y > \tau^x_{\partial C} \big) \leq e^{-k_3 \beta}.$$

4. There exists $k_4 > 0$ such that for all $\beta$ sufficiently large
$$\sup_{x \in C} \mathbb{P}_\beta\big( X_{\tau^x_{\partial C}} \notin F(\partial C) \big) \leq e^{-k_4 \beta}.$$

5. For any $x \in C$, $z \in \partial C$ and $\varepsilon > 0$, for all $\beta$ sufficiently large
$$\mathbb{P}_\beta\big( \tau^x_{\partial C} < e^{\beta(\Gamma(C)+\varepsilon)},\ X_{\tau^x_{\partial C}} = z \big) \geq e^{-\beta(H(z)-H(F(\partial C))+\varepsilon)}.$$

6. For any $x \in C$, any $\varepsilon > 0$ and all $\beta$ sufficiently large
$$e^{\beta(\Gamma(C)-\varepsilon)} < \mathbb{E}\tau^x_{\partial C} < e^{\beta(\Gamma(C)+\varepsilon)}.$$
The first three properties can be found in [33, Theorem 6.23], the fourth one is [33, Corollary 6.25] and the fifth one in [27, Theorem 2.17]. The sixth property is given in [31, Proposition 3.9] and implies that
$$\lim_{\beta \to \infty} \frac{1}{\beta} \log \mathbb{E}\tau^x_{\partial C} = \Gamma(C). \qquad (15)$$
The third property states that, given that $C$ is a cycle, for any starting state $x \in C$, the Markov chain visits any state $y \in C$ before exiting from $C$ with a probability exponentially close to one. This is a crucial property of the cycles and in fact can be given as an alternative definition, see for instance [9, 10]. The equivalence of the two definitions has been proved in [14] in greater generality for a Markov chain satisfying the Freidlin-Wentzell condition (14). Leveraging this fact, many properties and results from [9] will be used or cited.
We denote by $\mathcal{C}(\mathcal{X})$ the set of cycles of $\mathcal{X}$. The next lemma, see [33, Proposition 6.8], implies that the set $\mathcal{C}(\mathcal{X})$ has a tree structure with respect to the inclusion relation, where $\mathcal{X}$ is the root and the singletons are the leaves.
###### Lemma 3.3 (Cycle tree structure).
Two cycles $C, C' \in \mathcal{C}(\mathcal{X})$ are either disjoint or comparable for the inclusion relation, i.e. $C \cap C' = \emptyset$, $C \subseteq C'$ or $C' \subseteq C$.
Lemma 3.3 also implies that the set of cycles to which a state $x \in \mathcal{X}$ belongs is totally ordered by inclusion. Furthermore, we remark that if two cycles $C, C'$ are such that $C \subseteq C'$, then $\Gamma(C) \leq \Gamma(C')$; this latter inequality is strict if and only if the inclusion is strict.
### 3.2 Classical bounds in probability for the hitting time $\tau^x_A$
In this subsection we start investigating the first hitting time $\tau^x_A$. Thus, we will tacitly assume that the target set $A$ is a nonempty subset of $\mathcal{X}$ and the initial state $x$ belongs to $\mathcal{X} \setminus A$. Moreover, without loss of generality, we will henceforth assume that
$$A = \{\, y \in \mathcal{X} \mid \forall\, \omega : x \to y,\ \omega \cap A \neq \emptyset \,\}, \qquad (16)$$
which means that we add to the original target subset $A$ all the states in $\mathcal{X}$ that cannot be reached from $x$ without visiting the subset $A$. Note that this assumption does not change the distribution of the first hitting time $\tau^x_A$, since the states which we may have added in this way could not have been visited without hitting the original subset $A$ first.
Given a nonempty subset $A \subset \mathcal{X}$ and $x \in \mathcal{X}$, we define the initial cycle $C_A(x)$ by
$$C_A(x) := \{x\} \cup \{\, z \in \mathcal{X} \,:\, \Phi(x,z) < \Phi(x,A) \,\}. \qquad (17)$$
If $x \in A$, then $C_A(x) = \{x\}$ and thus $C_A(x)$ is a trivial cycle. If $x \notin A$, the subset $C_A(x)$ is either a trivial cycle (when $\Phi(x,A) = H(x)$) or a non-trivial cycle containing $x$, if $\Phi(x,A) > H(x)$. In any case, if $x \notin A$, then $C_A(x) \cap A = \emptyset$. For every $x \in \mathcal{X}$, we denote by $\Gamma(x,A)$ the depth of the initial cycle $C_A(x)$, i.e.
$$\Gamma(x,A) := \Gamma(C_A(x)).$$
Clearly $\Gamma(x,A) = 0$ if $C_A(x)$ is trivial (and in particular when $x \in A$). Note that by definition the quantity $\Gamma(x,A)$ is always non-negative, and in general
$$\Gamma(x,A) = \Phi(x,A) - H(F(C_A(x))) \geq \Phi(x,A) - H(x),$$
with equality if and only if $x \in F(C_A(x))$.
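Definition (17) can be turned into a small computation on a toy landscape (our own sketch, reusing a minimax-Dijkstra routine for $\Phi$; the double-well example is invented): the initial cycle is recovered by testing $\Phi(x,z) < \Phi(x,A)$ state by state, and its depth via $\Gamma(x,A) = \Phi(x,A) - H(F(C_A(x)))$.

```python
import heapq

def phi(H, adj, x, targets):
    """Communication height Phi(x, A): min over paths from x into `targets`
    of the maximum energy along the path (minimax Dijkstra)."""
    best = {x: H[x]}
    heap = [(H[x], x)]
    while heap:
        d, v = heapq.heappop(heap)
        if v in targets:
            return d
        for w in adj[v]:
            nd = max(d, H[w])
            if nd < best.get(w, float("inf")):
                best[w] = nd
                heapq.heappush(heap, (nd, w))
    return float("inf")

def initial_cycle(H, adj, x, A):
    """C_A(x) = {x} + {z : Phi(x, z) < Phi(x, A)}, together with its depth
    Gamma(x, A) = Phi(x, A) - min_{z in C_A(x)} H(z)."""
    barrier = phi(H, adj, x, set(A))
    cycle = {z for z in adj if phi(H, adj, x, {z}) < barrier}
    cycle.add(x)
    depth = barrier - min(H[z] for z in cycle)
    return cycle, depth

# double-well landscape: wells at 0 and 4, saddle of height 3 at state 2
H = [0.0, 2.0, 3.0, 1.0, 0.5]
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
cycle, depth = initial_cycle(H, adj, 0, A={4})
assert cycle == {0, 1} and depth == 3.0
```

Starting from the bottom of the left well, the initial cycle collects every state strictly below the saddle, and its depth is exactly the barrier the chain must overcome to reach the target.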
If $x \notin A$, then the initial cycle $C_A(x)$ is, by construction, the maximal cycle (in the sense of inclusion) that contains the state $x$ and has an empty intersection with $A$. Therefore any path $\omega : x \to A$ has at some point to exit from $C_A(x)$, by overcoming an energy barrier not smaller than its depth $\Gamma(x,A)$. The next proposition gives a probabilistic bound for the hitting time $\tau^x_A$ by looking precisely at this initial ascent up until the boundary of $C_A(x)$.
###### Proposition 3.4 (Initial-ascent bound).
Consider a nonempty subset $A \subset \mathcal{X}$ and $x \notin A$. For any $\varepsilon > 0$ there exists $\kappa > 0$ such that for $\beta$ sufficiently large
$$\mathbb{P}_\beta\big( \tau^x_A < e^{\beta(\Gamma(x,A)-\varepsilon)} \big) \leq e^{-e^{\kappa \beta}}. \qquad (18)$$
The proof is essentially adopted from [33] and follows easily from Theorem 3.2(i), since by definition of $C_A(x)$, we have that $\tau^x_{\partial C_A(x)} \leq \tau^x_A$.
Before stating an upper bound for the tail probability of the hitting time $\tau^x_A$, we need some further definitions. Given a nonempty subset $B \subset \mathcal{X}$, we denote by $\mathcal{M}(B)$ the collection of maximal cycles that partitions $B$, i.e.
$$\mathcal{M}(B) := \{\, C \in \mathcal{C}(\mathcal{X}) \,:\, C \subseteq B,\ C \text{ maximal} \,\}. \qquad (19)$$
Lemma 3.3 implies that every nonempty subset $B \subset \mathcal{X}$ has a partition into maximal cycles and hence guarantees that $\mathcal{M}(B)$ is well defined. Note that if $B$ is itself a cycle, then $\mathcal{M}(B) = \{B\}$. The importance of the notion of initial cycle besides Proposition 3.4 is partially explained by the next lemma.
###### Lemma 3.5.
[27, Lemma 2.26] Given a nonempty subset $A \subset \mathcal{X}$, the collection $\{C_A(x)\}_{x \in \mathcal{X} \setminus A}$ of initial cycles is the partition into maximal cycles of $\mathcal{X} \setminus A$, i.e.
$$\mathcal{M}(\mathcal{X} \setminus A) = \{C_A(x)\}_{x \in \mathcal{X} \setminus A}.$$
We can extend the notion of depth to subsets which are not necessarily cycles by using the partition into maximal cycles. More precisely, we define the maximum depth $\widetilde{\Gamma}(B)$ of a nonempty subset $B \subset \mathcal{X}$ as the maximum depth of a cycle contained in $B$, i.e.
$$\widetilde{\Gamma}(B) := \max_{C \in \mathcal{M}(B)} \Gamma(C). \qquad (20)$$
Trivially $\widetilde{\Gamma}(B) = \Gamma(B)$ if $B \in \mathcal{C}(\mathcal{X})$. The next lemma gives two equivalent characterizations of the maximum depth of a nonempty subset $B \subset \mathcal{X}$.
###### Lemma 3.6 (Equivalent characterizations of the maximum depth).
Given a nonempty subset $B \subseteq X$,
$\widetilde{\Gamma}(B) = \max_{x \in B} \Gamma(x, X \setminus B) = \max_{x \in B} \Big\{ \min_{y \in X \setminus B} \Phi(x,y) - H(x) \Big\}.$ (21)
In view of Lemma 3.6, $\widetilde{\Gamma}(B)$ is the maximum initial energy barrier that the process started inside $B$ possibly has to overcome to exit from $B$. As illustrated by the next proposition, one can get a (super-)exponentially small upper bound for the tail probability of the hitting time $\tau^x_A$, by looking at the maximum depth $\widetilde{\Gamma}(X \setminus A)$ of the complementary set $X \setminus A$, where the process resides before hitting the target subset $A$.
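Continuing the toy landscape from before (our own example, not the paper's), the maximum depth $\widetilde{\Gamma}(B)$ can be computed directly from the characterization in Lemma 3.6, without constructing the cycle partition:

```python
H = {0: 0.0, 1: 3.0, 2: 1.0, 3: 4.0, 4: 2.0}            # same toy landscape
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

def phi(x, y):
    """Smallest level lambda connecting x and y inside {z : H(z) <= lambda}."""
    for lam in sorted(set(H.values())):
        if H[x] > lam or H[y] > lam:
            continue
        seen, stack = {x}, [x]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if H[v] <= lam and v not in seen:
                    seen.add(v)
                    stack.append(v)
        if y in seen:
            return lam
    raise ValueError("states are not connected")

def max_depth(B):
    """Gamma~(B) = max_{x in B} { min_{y outside B} Phi(x,y) - H(x) },
    the second characterization in (21)."""
    outside = [y for y in H if y not in B]
    return max(min(phi(x, y) for y in outside) - H[x] for x in B)

A = {4}
assert max_depth(set(H) - A) == 4.0   # realized at x = 0: Phi(0, 4) - H(0)
assert max_depth({2}) == 2.0          # single state: its exit barrier 3 minus H(2) = 1
```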
###### Proposition 3.7 (Deepest-cycle bound).
[9, Proposition 4.19] Consider a nonempty subset $A \subset X$ and $x \in X$. For any $\varepsilon > 0$ there exists $\kappa > 0$ such that for $\beta$ sufficiently large
$\mathbb{P}_\beta\big(\tau^x_A > e^{\beta(\widetilde{\Gamma}(X \setminus A) + \varepsilon)}\big) \leq e^{-e^{\kappa \beta}}.$ (22)
By definition we have $\Gamma(x,A) \leq \widetilde{\Gamma}(X \setminus A)$, but in general $\Gamma(x,A) \neq \widetilde{\Gamma}(X \setminus A)$, and neither bound presented in this subsection is actually tight, so we will proceed to establish sharper but more involved bounds in the next subsection.
### 3.3 Optimal paths and refined bounds in probability for hitting time $\tau^x_A$
The quantity $\Gamma(x,A)$ appearing in Proposition 3.4 only accounts for the energy barrier that has to be overcome starting from $x$, but there is such an energy barrier for every state $z \in X \setminus A$, and it may well be that to reach $A$ it is inevitable to visit a state $z$ with $\Gamma(z,A) > \Gamma(x,A)$. Similarly, also the exponent $\widetilde{\Gamma}(X \setminus A)$ appearing in Proposition 3.7 may not be sharp in general. For instance, the maximum depth $\widetilde{\Gamma}(X \setminus A)$ could be determined by a deep cycle in $\mathcal{M}(X \setminus A)$ that cannot be visited before hitting $A$ or that is visited with a vanishing probability as $\beta \to \infty$. In this subsection, we refine the bounds given in Propositions 3.4 and 3.7 by using the notion of optimal path and identifying the subset of the state space in which these optimal paths lie.
Given a nonempty subset $A \subset X$ and $x \in X$, define the set of optimal paths $\Omega^{\mathrm{opt}}_{x,A}$ as the collection of all paths $\omega \in \Omega_{x,A}$ along which the maximum energy $\Phi_\omega$ is equal to the communication height between $x$ and $A$, i.e.
$\Omega^{\mathrm{opt}}_{x,A} := \{ \omega \in \Omega_{x,A} \,:\, \Phi_\omega = \Phi(x,A) \}.$ (23)
Define the relevant cycle $C^+_A(x)$ as the minimal cycle in $\mathcal{C}(X)$ such that $C_A(x) \subsetneq C^+_A(x)$, i.e.
$C^+_A(x) := \min \{ C \in \mathcal{C}(X) \,:\, C_A(x) \subsetneq C \}.$ (24)
The cycle $C^+_A(x)$ is well defined, since the cycles in $\mathcal{C}(X)$ that contain $x$ are totally ordered by inclusion, as remarked after Lemma 3.3. By construction, $C_A(x) \subsetneq C^+_A(x)$ and thus $C^+_A(x)$ contains at least two states, so it has to be a non-trivial cycle. The minimality of $C^+_A(x)$ with respect to the inclusion gives that
$\max_{z \in C^+_A(x)} H(z) = \Phi(x,A),$
and then, by using Lemma 3.1, one obtains
$\Phi(x,A) < \Phi\big(C^+_A(x), X \setminus C^+_A(x)\big).$ (25)
The choice of the name relevant cycle for $C^+_A(x)$ comes from the fact that all paths the Markov chain will follow to go from $x$ to $A$ will almost surely not exit from $C^+_A(x)$ in the limit $\beta \to \infty$. Indeed, for the relevant cycle $C^+_A(x)$ Theorem 3.2(iii) reads
$\lim_{\beta \to \infty} \mathbb{P}_\beta\big(\tau^x_A < \tau^x_{\partial C^+_A(x)}\big) = 1.$ (26)
The next lemma, which is proved in Section 4, states that an optimal path from $x$ to $A$ is precisely a path from $x$ to $A$ that does not exit from $C^+_A(x)$.
###### Lemma 3.8 (Optimal path characterization).
Consider a nonempty subset $A \subset X$ and $x \in X$. Then
$\omega \in \Omega^{\mathrm{opt}}_{x,A} \iff \omega \in \Omega_{x,A} \ \text{and}\ \omega \subseteq C^+_A(x).$
Lemma 3.8 implies that the relevant cycle $C^+_A(x)$ can be equivalently defined as
$C^+_A(x) = \{ y \in X \,:\, \Phi(x,y) \leq \Phi(x,A) \} = \{ y \in X \,:\, \Phi(x,y) < \Phi(x,A) + \delta_0/2 \},$ (27)
where $\delta_0$ is the minimum energy gap between an optimal and a non-optimal path from $x$ to $A$, i.e.
$\delta_0 = \delta_0(x,A) := \min_{\omega \in \Omega_{x,A} \setminus \Omega^{\mathrm{opt}}_{x,A}} \Phi_\omega - \Phi(x,A).$
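In the same toy landscape as before (our own construction), the first equality in (27) gives a direct way to compute the relevant cycle, and the strict inclusion $C_A(x) \subsetneq C^+_A(x)$ can be checked explicitly:

```python
H = {0: 0.0, 1: 3.0, 2: 1.0, 3: 4.0, 4: 2.0}            # same toy landscape
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

def phi(x, y):
    """Smallest level lambda connecting x and y inside {z : H(z) <= lambda}."""
    for lam in sorted(set(H.values())):
        if H[x] > lam or H[y] > lam:
            continue
        seen, stack = {x}, [x]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if H[v] <= lam and v not in seen:
                    seen.add(v)
                    stack.append(v)
        if y in seen:
            return lam
    raise ValueError("states are not connected")

def initial_cycle(x, A):
    """C_A(x) = {x} union {z : Phi(x,z) < Phi(x,A)}."""
    level = min(phi(x, a) for a in A)
    return {x} | {z for z in H if phi(x, z) < level}

def relevant_cycle(x, A):
    """C_A^+(x) = {y : Phi(x,y) <= Phi(x,A)}, first equality in (27)."""
    level = min(phi(x, a) for a in A)
    return {y for y in H if phi(x, y) <= level}

A = {4}
C, Cplus = initial_cycle(0, A), relevant_cycle(0, A)
assert C == {0, 1, 2} and Cplus == {0, 1, 2, 3, 4}
assert C < Cplus      # strict inclusion: the saddle 3 (and the target) join at level 4
```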
In view of Lemma 3.8 and (26), the Markov chain started in $x$ follows in the limit $\beta \to \infty$ almost surely an optimal path in $\Omega^{\mathrm{opt}}_{x,A}$ to hit $A$. It is then natural to define the following quantities for a nonempty subset $A \subset X$ and $x \in X$:
$\Psi_{\min}(x,A) := \min_{\omega \in \Omega^{\mathrm{opt}}_{x,A}} \max_{z \in \omega} \Gamma(z,A),$ (28)
and
$\Psi_{\max}(x,A) := \max_{\omega \in \Omega^{\mathrm{opt}}_{x,A}} \max_{z \in \omega} \Gamma(z,A).$ (29)
Definition (28) implies that every optimal path $\omega \in \Omega^{\mathrm{opt}}_{x,A}$ has to enter at some point a cycle in $\mathcal{M}(X \setminus A)$ of depth at least $\Psi_{\min}(x,A)$, while definition (29) means that every cycle visited by any optimal path has depth less than or equal to $\Psi_{\max}(x,A)$.
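A gap between $\Psi_{\min}$ and $\Psi_{\max}$ appears only when distinct optimal paths traverse wells of different depths, so this sketch (our own construction, not from the paper) uses a small branching landscape with two optimal routes to the target, one of which passes through a deep well. Optimal paths are enumerated by brute force over simple paths, which suffices for this landscape:

```python
# Two routes from 0 to the target 2: 0-1-2 and 0-3-4-2; state 4 is a deep well.
H = {0: 0.0, 1: 2.0, 2: 0.0, 3: 2.0, 4: -5.0}
adj = {0: [1, 3], 1: [0, 2], 2: [1, 4], 3: [0, 4], 4: [2, 3]}

def phi(x, y):
    """Smallest level lambda connecting x and y inside {z : H(z) <= lambda}."""
    for lam in sorted(set(H.values())):
        if H[x] > lam or H[y] > lam:
            continue
        seen, stack = {x}, [x]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if H[v] <= lam and v not in seen:
                    seen.add(v)
                    stack.append(v)
        if y in seen:
            return lam
    raise ValueError("states are not connected")

def gamma(z, A):
    """Gamma(z,A) = Phi(z,A) - H(F(C_A(z)))."""
    level = min(phi(z, a) for a in A)
    cycle = {z} | {w for w in H if phi(z, w) < level}
    return level - min(H[w] for w in cycle)

def simple_paths(x, A):
    """Simple paths from x that touch A only at their final state."""
    stack = [(x, (x,))]
    while stack:
        u, path = stack.pop()
        if u in A:
            yield path
            continue
        for v in adj[u]:
            if v not in path:
                stack.append((v, path + (v,)))

A = {2}
level = min(phi(0, a) for a in A)                        # Phi(0, A) = 2
opt = [p for p in simple_paths(0, A) if max(H[z] for z in p) == level]
psi_min = min(max(gamma(z, A) for z in p) for p in opt)  # eq. (28)
psi_max = max(max(gamma(z, A) for z in p) for p in opt)  # eq. (29)
assert sorted(opt) == [(0, 1, 2), (0, 3, 4, 2)]          # both routes are optimal
assert (psi_min, psi_max) == (2.0, 5.0)  # the deep well at 4 inflates Psi_max only
```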
An equivalent characterization for the energy barrier $\Psi_{\max}(x,A)$ can be given, but we first need one further definition. Define $R_A(x)$ as the subset of states which belong to at least one optimal path in $\Omega^{\mathrm{opt}}_{x,A}$, i.e.
$R_A(x) := \{ y \in X \,:\, \exists\, \omega \in \Omega^{\mathrm{opt}}_{x,A} \ \text{such that}\ y \in \omega \}.$ (30)
Note that $R_A(x) \cap A \neq \emptyset$, since the endpoint of each path in $\Omega^{\mathrm{opt}}_{x,A}$ belongs to $A$, by definition (5). In view of Lemma 3.8, $R_A(x) \subseteq C^+_A(x)$. We remark that this latter inclusion could be strict, since in general $C^+_A(x) \not\subseteq R_A(x)$. Indeed, there could exist a state $y \in C^+_A(x)$ such that all paths from $x$ to $y$ that do not exit from $C^+_A(x)$ always visit the target set $A$ before reaching $y$, and thus they do not belong to $\Omega^{\mathrm{opt}}_{x,A}$ (see definitions (5) and (23)); see Figure 2.
The next lemma characterizes the quantity $\Psi_{\max}(x,A)$ as the maximum depth of the subset $R_A(x) \setminus A$ (see definition (20)).
###### Lemma 3.9 (Equivalent characterization of $\Psi_{\max}(x,A)$).
$\Psi_{\max}(x,A) = \widetilde{\Gamma}(R_A(x) \setminus A).$ (31)
Using the two quantities $\Psi_{\min}(x,A)$ and $\Psi_{\max}(x,A)$, we can better control in probability the hitting time $\tau^x_A$, as stated in the next proposition, which is proved in Section 4.
###### Proposition 3.10 (Optimal paths depth bounds).
Consider a nonempty subset $A \subset X$ and $x \notin A$. For any $\varepsilon > 0$ there exists $\kappa > 0$ such that for $\beta$ sufficiently large
$\mathbb{P}_\beta\big(\tau^x_A < e^{\beta(\Psi_{\min}(x,A) - \varepsilon)}\big) \leq e^{-\kappa \beta},$ (32)
and
$\mathbb{P}_\beta\big(\tau^x_A > e^{\beta(\Psi_{\max}(x,A) + \varepsilon)}\big) \leq e^{-\kappa \beta}.$ (33)
https://questions.examside.com/past-years/jee/question/the-equation-of-the-line-passing-through-the-points-of-inter-jee-advanced-1986-marks-2-rqovcwqbd0h6nqb0.htm | NEW
New Website Launch
Experience the best way to solve previous year questions with mock tests (very detailed analysis), bookmark your favourite questions, practice etc...
1
### IIT-JEE 1986
Fill in the Blanks
The equation of the line passing through the points of intersection of the circles $$3{x^2} + 3{y^2} - 2x + 12y - 9 = 0$$ and $${x^2} + {y^2} + 6x + 2y - 15 = 0$$ is..............................
10x - 3y - 18 = 0
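A quick numerical check (ours, not part of the answer key): the line through the intersection points is the radical axis $S_1 - S_2 = 0$. The second circle is taken here as $x^2 + y^2 + 6x + 2y - 15 = 0$, which is the version consistent with the recorded answer (the sign of its $x$-term appears garbled in some transcriptions).

```python
# First circle kept with leading coefficient 3; second circle multiplied by 3
# so the quadratic terms cancel exactly in integer arithmetic.
s1 = (3, 3, -2, 12, -9)            # 3x^2 + 3y^2 - 2x + 12y - 9 = 0
s2 = (3, 3, 18, 6, -45)            # 3 * (x^2 + y^2 + 6x + 2y - 15) = 0
diff = tuple(p - q for p, q in zip(s1, s2))
assert diff[:2] == (0, 0)          # x^2 and y^2 terms cancel: the result is a line
a, b, c = (t // -2 for t in diff[2:])
assert (a, b, c) == (10, -3, -18)  # 10x - 3y - 18 = 0, as stated
```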
2
### IIT-JEE 1986
Fill in the Blanks
From the point A(0, 3) on the circle $${x^2} + 4x + {(y - 3)^2} = 0$$, a chord AB is drawn and extended to a point M such that AM = 2AB. The equation of the locus of M is..........................
$${x^2} + {y^2} + 8x - 6y + 9 = 0$$
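A sketch of the verification (ours): the circle is $(x+2)^2 + (y-3)^2 = 4$, and $AM = 2AB$ along the chord means $M = A + 2(B - A) = 2B - A$ for $B$ on the circle, so every such $M$ should satisfy the stated locus:

```python
import math

A = (0.0, 3.0)                       # A lies on the circle: (0+2)^2 + (3-3)^2 = 4
for k in range(360):
    t = math.radians(k)
    B = (-2 + 2 * math.cos(t), 3 + 2 * math.sin(t))       # point on the circle
    x, y = 2 * B[0] - A[0], 2 * B[1] - A[1]               # M = 2B - A
    assert abs(x * x + y * y + 8 * x - 6 * y + 9) < 1e-9  # stated locus holds
```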
3
### IIT-JEE 1985
Fill in the Blanks
From the origin chords are drawn to the circle $${(x - 1)^2} + {y^2} = 1$$. The equation of the locus of the mid-points of these chords is.............
$${x^2} + {y^2} - x = 0$$
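A quick numerical check (ours): a chord from the origin ends at $B = (1 + \cos t, \sin t)$ on the circle, its midpoint is $B/2$, and every midpoint should satisfy the stated locus:

```python
import math

for k in range(360):
    t = math.radians(k)
    B = (1 + math.cos(t), math.sin(t))       # endpoint on (x-1)^2 + y^2 = 1
    x, y = B[0] / 2, B[1] / 2                # midpoint of the chord from the origin
    assert abs(x * x + y * y - x) < 1e-12    # stated locus x^2 + y^2 - x = 0
```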
4
### IIT-JEE 1985
Fill in the Blanks
Let $${x^2} + {y^2} - 4x - 2y - 11 = 0$$ be a circle. A pair of tangentas from the point (4, 5) with a pair of radi from a quadrilateral of area............................
8 sq unit
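A short computation (ours) behind the stated area: the circle has centre $(2,1)$ and radius $4$, the tangent length from $(4,5)$ is $\sqrt{d^2 - r^2}$, and the quadrilateral formed by the two tangents and the two radii splits into two congruent right triangles:

```python
import math

C, r, P = (2.0, 1.0), 4.0, (4.0, 5.0)          # centre, radius, external point
d2 = (P[0] - C[0]) ** 2 + (P[1] - C[1]) ** 2   # |PC|^2 = 20
L = math.sqrt(d2 - r * r)                      # tangent length = 2
area = 2 * (0.5 * r * L)                       # two right triangles with legs r, L
assert area == 8.0                             # 8 square units, as stated
```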
https://www.scribd.com/document/300485957/1-s2-0-0024379594001502-main | You are on page 1of 15
# NORTH- HOLLAND
On the Transfer Matrices of Fractals
Zhi-Xiong Wen
Department of Physics
Wuhan University
430072 Wuhan, People's Republic of China
Submitted by Richard A. Brualdi
ABSTRACT
An algebra of matrices of order 2~ was introduced by Mandelbrot et al. Some
of these matrices are associated with iterative constructions of fractal objects
and for this reason are called TMFs (transfer matrices of fra£tals). It has been
proved that the eigenvalues of a TMF are nonnegative integers. In this article
TMFs are shown to be diagonalizable, and further properties of these matrices,
as well as some of their generalizations, are investigated.
1.
INTRODUCTION
In the article "Fractals, their transfer matrices and their eigen-dimensional sequence" [1], an algebra of dimension 3 n of square matrices of order
2 n was introduced. Some of its elements are associated with iterative constructions of fractal objects, and are, for this reason, called T M F s (transfer m a t r i x of fractals). It was proved t h a t the eigenvalues of a T M F are
nonnegative integers. In [1] it was also shown t h a t when two T M F s coming from related geometric constructions are diagonalizable, their left and
right eigenvectors have certain properties of biorthogonality. In this article we prove t h a t indeed T M F s are diagonalizable. Moreover, we show
t h a t these properties subsist in other cases: the r a n d o m case (following the
terminology of [1]) and when the matrix does not arise from a geometric
construction. Also in [1] it was shown that, when the geometric situation
has symmetries, it can be described by T M F s of lesser dimensionality. This
leads us to investigate this case also, in Section 3.
We stick to the definitions and notation in [1]. A vector is said to be is
positive if it is nonzero and if all its components are nonnegative. Similarly,
we define a positive matrix.
LINEAR ALGEBRA AND ITS APPLICATIONS 236:189-203 (1996)
(~ Elsevier Science Inc., 1996
655 Avenue of the Americas, New York, NY 10010
0024-3795/96/\$15.00
SSDI 0024-3795(94)00150-C
is associated to a chessboard: M = (m/)~. If N is a subset of E.element of the index (J. NOTATION 1. A matrix M is indexed by 2 E × 2 E and a vector X is indexed by 2 E. We define [M]~ :-. A square matrix M . Let E be a finite set. by if It:Land Z L we or i f I C _ L a n d J = I t 2 K .je2e. otherwise. and T and ¢ are mappings from S to 2 E such that.4. We denote the cardinality of the E by IEI.2. I ) of the M. DEFINITION 1. r(s)u(In¢(s))=J}l. and the family of its subsets by 2 E.3. 0 otherwise. ¢) where S is a finite set. Let L and K be two subsets of E such t h a t L n K = 0. Let E be a finite set. Let L _C E. X L := PLKXL (in particular X ~ ----x L ) . : I{ses. NOTATION 1. We note t h a t every column of A L is a vector of the canonical basis of the R 2N . By A L we denote the 2 E × 2 E matrix whose (i. j ) entries are defined by ll [AL]~= 0 if L N I = J . And M is called a T M F . J=I. Let E be a finite set. Let K and L be two subsets of E.1.5. for every s in S. [X]j := Yth component of the vector X. An E-chessboard is a triplet (S.190 ZHI-XIONG WEN NOTATION 1. T. . indexed by 2 n x 2 n.6. otherwise. By p L we denote the matrix indexed by 2 E x 2 E whose the entries are {10 If L C_ E. denote the vector of R 2E whose components are [xL]¢ = NOTATION 1. NOTATION 1. we write N c = E \ N . (_l)lL\Ol if QCL. Define A L := p L A L (then A~ = AL). T(S)n¢(S) -= 0 [1].
DIAGONALIZATION OF THE TMF LEMMA 2. AJxN = ~ X N (0 if N C J. then E (--1)'N\PI = O. s o Q C _ H U ( N \ J ) . QC_N. (2) . JnQ-~H 0 otherwise. [AJXN]H = QCE QC_N E -~ (-1) IN\PI if H C_ N. It is clear that H C_ Q and H n ( N \ J ) c_ H n [N\(J N Q)] -.1.T R A N S F E R MATRICES OF FRACTALS 191 Let M be a TMF.H N ( N \ H ) -. We obtain easily M = E A¢(s) ~(s). t h e n x ~ J . otherwise. Consider Q which satisfies Q c N and JNQ=H. If H and K are two subsets of N such that H C_K and K \ H # 0. HC_QC_K We have PROPOSITION 2. It is easy to verify that the equation (1) holds if H ~ N. sES 2. IfxeQandx~H.H. HC_QC_K Proof. JAQ=H 2. K\H # 0 implies that IK\HI >_ 1. (1) Proof.0. so that (_l)lN\ l(1 _ = O.(--1)IN\HI ----[XN]H" QC_N. and E (-1)IN\Q1-. Let N be a finite set. then Q = J N Q -. If N C_ J. Now we consider the case H C_ N: 1. then N \ J # O.2. I f N ~ J.
and N ~ J. N be subsets orE. J N ( R U I ) = H 0 otherwise. RC_N. Thus it is sufficient to consider the case H C H U I. HC_RUI.1. Proof. Q : H O J ' and J' C N \ J . K N I = PROPOSITION 2. then A d P ~ X N = 0. L. Take H = H'UH" such that H ' _C N and H " C_ I.1 ) IN\RI if R C N.3. (_I)IN\QI = QCN. E = ( . In addition. Then H = Jn (nuI) -. I f N C K.~glJV = if R _C N (OK). We have Q c N and J N Q = JNH = H. HCQCHu(N\J) Let I. and H C J.lN\RI . From the fact K N I = 0. RCN 0- otherwise. RC_N.192 ZHI-XIONG WEN We obtain H _C Q _c H n ( N \ J ) and (H U ( N \ J ) ) \ H = N \ J # 0. we have = O. K. if J N L =N N (3) I = O. (4) We have [AJP~XN]H = E PP QaX R aH Q. from Lemma 2. J. . aRuI(--1)IN\RI if RCN(C_K). QC_E 0 Z otherwise. O.( J N R ) U ( J N I) if and only if H ' = J N R and H " = J N I. In particular. So in this case. we obtain R N I = 0 for all R C N. J N Q = H ~ (_I)IN\QI = 0.RCE oq R( --~/ .
Q. We obtain N JS ---. wehaveS=[JN(RUI)]UL=RU(JNI)UL. RCN ~a~"'(-~) '~\~L He_J. Jn(RUl)=H ={(O-1)IN\RI if otherwise.[X(JnI) uL] s" {XN}Kc2 Nc is a family of the linearly independent . L"~L[AJpKxN]I PROPOSITION 2. RC_N = E HCE. then A./t_Q ~R ( ~IN\RI PS ~ H P Q t. (6) In particular. Because A J = P~A J.RCE E : .QC_E. Proof. JAR=H' = ~ (-1) iN\RI = 0 . N be subsets of E.4. K. J.TRANSFER MATRICES OF FRACTALS 193 Therefore.2. L. vectors.5. H'CRCmU(N\J) PROPOSITION 2. we get (-i)'~\~I = RCN.~PFX ~ = X(jni)u L. (5) AarX~ --~ X(JNI)U N L.~1 H. such that L N J = I N K = O . we have AJpKxN1 ~L I JS = V" -H Q R Ps aHPQXR H. JA(RUI)=H Z (_i)l~\~l RC_N.S=[JN(RUI)]UL' SinceRCN. in the same way as for Proposition 2. Let I.-. I f N C _ K n J .
InJ=O Now for every L C N c.JCE. there exists a nonnegative sequence { ~ / K } K C N c such that the vector -~N= ~--~. XN = E KCN c # o.KC_Nc7K XN is an eigenvector of M with eigenvalue .JCE. THEOREM 2. INJ=O E N QgX(KnJ)u. NCJC_E. then MxN E = K X gY /~L (K C NO). The sum of the entries of every column of K )L. _ InJ=O ¢%IJ A JI " Then for all N C E.~N does not depend on K. Let M = ~-~I. (7) LCN c On the other hand.3 and 2. ICE. NCJCE. we have MxN = J J) {IAI XKN = E I. (10) admits a nonzero solution From Proposition 2.194 ZHI-XIONG WEN Proof. NC_JCE.~Y = E I C E . NCJ.Ks2N c equals AN . InJ=O t h e n . ICIJ=O~J" Proof. . write t3~ = ~IC_E. put E el= LC_Nc E (8) ICE.4. (KoL)uI=L ~/. From Propositions 2. So the linear equation the matrix B = (/3L Br = ~Nr (9) r = (~K)K~_No.. otherwise.5.6. Jol=0. The statement follows from XN {O 1 [ K ] L U N ~--- if K = L .
if {~/} are positive. ~g > 0 for all I and J). If ~/g are positive (that is. REMARK 2. where (£K)KCN c are positive. . N C ¢(s)].~N "[LXLN . . L e t . Mr}.. ~g are also positive.6 is valid also for the random case.T R A N S F E R MATRICES OF FRACTALS 195 so we get finally MxN= E yKMXN KCN ~ = ~L 7 K X L KC_Nc5C_N~ = E = E ~L ")/K X L LC_Nc \ KCN c / . . Choose an E-chessboard with the matrices M 1 . From Propositions 2. In particular. . .7.6 we have = Z (11) In particular. let ~1. COROLLARY 2. but it has been obtained by a different method in [1]. Let M be a TMF associated with the E-chessboard ( S . .3 and 2. then B is a positive matrix. then the sum of the components of { ~ N } is null. Then {--XN}Ne2E is linearly in-dependent. M r for this distribution. Suppose t h a t ENCE ~NXN : O. Proof. Theorem 2.7t be a probability distribution.6. ¢ . In fact.bAN = . .k/O (1 < i < t) is a nonnegative integer. then from Theorem 2. where Mi E { M 1 . then (i) the family {XN}Nc_2E defined as in Theorem 2. A0 = ~[] . r ) .-.8. .6 are linearly independent. PROPOSITION 2. This corollary is a direct consequence of Theorem 2. (ii) the components of {~-a} are positive.. .~ )~N'-xN" [] LCN ~ REMARK 2. If M = ~TiM~. Then the eigenvalue )~N of M equals ]{s • S.4. so by the Perron-FrSbenius theorem [2].9.X = E K C N c £KXNK. (iii) /f N ~ 0.10.
1.12. and (iii). the TMF of an E-chessboard is diagonalizable. 0 thus 50 = 0.JC_E.9. Since E is a finite set.6 are linearly independent). If I E 2E\{I~} and I ¢ K .J > 0 of Theorem 2.5.6 cannot be removed. then I ~ K . We have thus \ L C K -- c LCK ~ c which yields that ~K = 0.0). . J. from the fact that (eK)KC_Nc is positive (in particular. Prom Remark 2.11. Then M is diagonalizable. for N C_ E. we obtain (i). He gives the following example [3]: Let IEI = 1. REMARK 2. • Let M = ~-]~I. from Remark 2. the Perron-PrSbenius theorem.A~ is not diagonalizable.196 ZHI-XIONG WEN Prom Proposition 2. and Lemma 2. Take a subset K of E with IKI = 1. If we take then M = A~ . Proof. (ii). Peyri~re has noted that the condition ~. In particular.6. ¢nJ=O 7IJ AI2 ('~] >. THEOREM 2. {-RN}Ne2~ defined as in Theorem 2. It is a simple corollary of Proposition 2.7. we can repeat this method and we have finally 5N = 0 for all N C E.10 and of Theorem 2. INI = n.
197 INVARIANT E-CHESSBOARD UNDER THE ACTION OF A GROUP For some constructions of fractals which possess certain symmetries. using the bijective m a p d we have 7r~ : S . LKI = n}. We say (S. Given an E-chessboard (S. whereg E G. To give an answer to this question. I E Zn. In this case. S (a E G) such t h a t ' . T) is G-invariant if it is g-invariant for all gcG. ¢. For 0 < n < 2 IE] .{K ~ E. there exists a perm u t a t i o n ~rg on S such t h a t for every a E S there exists b c S satisfying ~g(a) = b. s}. So the order of their T M F is less t h a n 2 IEI . Consider the Sierpifiski gasket. and we obtain the G-orbits F~n).s} 3 {g. Put gK = {gk. P u t E = {g. then we say t h a t (S. g¢(a) = ¢(b). k c K}. and if I ~ J E Zn. K c_ E. ¢. then they are not in the same orbit. gT(a) = ~'(b). d. we can ask if these T M F s still possess the properties described in Section 2. Instead of F[n) we will write simply FI if no confusion results. Then G acts on F(n).s} r 0 s For G equal to the s y m m e t r y group on E -~ \$3.T R A N S F E R MATRICES OF FRACTALS 3. If for all g E G.d} 2 {g.¢. we define a class of subsets of E: F (n) . Let G be a p e r m u t a t i o n group on E.~-) is g-invariant. EXAMPLE 1. T). where Zn is a class of the representatives of the orbits of F(n). we shall introduce the notion of the invariant chessboard under the action of a group. it is not necessary to distinguish the different orientations of the "tremas" [1]. s ¢ 1 {g.
F((1) g} = {{g}. EXAMPLE 2.~} 8 {e..~} = {In.w} {w. F{(1) . {~}. we have the Serpifiski carpet (nw)(es). ~}. {w. {d}. s}. F+<°)= {0}. ~4 = (18)(25)(47). {g. (~}}. F(2) {.. {~.e} 7 (s} {~. {d.s} (n. {s}. s}}. F~°)= {o}. s}.s} {n.:} = {{n. and s ¢ ~- 1 {n.e} {e. group on E. (ne)(ws)} ~_ Klein 4- n 1 4 6 W 2 3 5 8 7 e s We have 71"1 ~ ls. ~3 = (24)(36)(57). .w} {e. ~}. {s}}.s} 4 {w} {. In.s} 5 6 {~} {w. For G = {1E..~} {n. d}. w}.198 ZHI-XIONG WEN The orbits of G over 2E are F~3)= rE}. F (2) {9. e}}. ~}}.~ = {{~}.w} The orbits of G over 2 E are F(~4> = {E}. 7r2 = (18)(27)(36)(45).4} = {{g.s} 2 3 {n} {n. F (2) {. (ns)(ew). {~.
we define the matrix Pg -= (Pj)j. ~ IEF m~ = ~m~ IEF for all K. ¢. g~-(s) = r(t). As g induces a bijection on S.l ~ ( t ) u [g-l¢(t) n z] = ]. For s E SJ. I Ie2E by P~ = otherwise. J E 2E). . J are in the same G-orbit. (i) for ally E G. m~j = mgj (ii) if F is a G-orbit. gI runs over F if and only if I runs over F. J E F. E m._.. there exists t such that g¢(s) = ¢(t). Then d (I. where 1 P~ = 0 if I = g J . T). Let M = (mI)i. we have Is51 < IsE/I < . T(s) U ( ¢ ( s ) M I ) = J } . where g E G. (i): Put SJ = { s E S. r(t) U [¢(t) A 9I] = gZ. there exists g E G such that J = gK. i. otherwise. so Y.e. i. then m~= ~m IEF 5. Besides. jE2m be a TMF of a G-invariant E-chessboard ( S. Pg is obtained by permuting the lines of the unit matrix under the action of g on 2 E. t h e n m / = I S j l . Notice that Pg-1 = (p/). IEF Proof.- - - (ii): Since K.T R A N S F E R MATRICES OF FRACTALS 199 PROPOSITION 3. IEF gLEF gL6F LEF Similarly.¢*-"(~') ~ g-l(gj) : IsSI. From this g .1. So MPg-1 is obtained by permuting the columns of the unit matrix under .e.E IEF IEF For g E G..
I C N IC_N = (--1) IgN\glO if H = g I U g L . we have g K U gL = g ( K O L). (ii): Notice the following facts: (1) I _ N if only if g I c gN. Then for all g E G. (ii) There exists an eigenvector of M such that its components in the same orbit are always equal. we have (i) PgMPg-I = M. where g E G. (i): It is a direct consequence of Proposition 3. Let M be a T M F of a G-invariant E-chessboard (S.200 ZHI-XIONG W E N the action of g on 2 E. Proof. if g. PgPh = Pgh PROPOSITION 3. IC_E KCE. (ii) P g X g = X~ N. (2) IN\I[ = Igg\gI[. From Propositions 2. 0 otherwise. and let -X g = }-']~gcg~ 7K X N be an eigenvector with the eigenvalueA N of M. Then (i) Pg"X N is also an eigenvector of M with the eigenvalue ~N and )~N = A9N.6. where N E 2E. we have MPg'K N = P g M X N = AN pg-~ N.2.6. r). (i): From Proposition 3.2(ii). ¢. We have also Pg-l = P g 1.3.{7K}Ke2N ~ is chosen as in Theorem2. and X I be defined as in Notation 1. L of E. Let M be a T M F of a G-invariant E-chessboard. (3) for all subsets K. .10 and 3.1(i). • PROPOSITION 3.2. h E G. Thus = pHPKXI K. p g ~ N is an eigenvector with AgN. Proof.
We obtain still t h a t XN = PgXN is an eigenvector with AN.TRANSFER MATRICES OF FRACTALS 201 (ii): For the eigenvector ~ N .10.gN .l N are distinct each other. Consider the vector xN = XN + p. ---~. We get from Proposition 3. and two different columns come from two different G-orbits. then from Proposition 2. K 2. we have gmN = N and gmK = K. XN + . • Now we shall reduce the matrices of the G-invariant E-chessboard. ~ K C N c I. .t K j=0 j=0 KCN c x~JN and 7~: is the sum of some 7K ( K C NO). Suppose t h a t t is the least positive integer such that g t N = N.. we have obtained Pg-Rg = X N. Hence the I t h component and J t h component are distinct. . . .--~g. and the J t h line of C with J E F1 is the I t h vector of the canonical basis of R ". then j=0 KCN ~ where -X'g'N "~ x-~ A . let t be the least positive integer such that: F r o m Proposition (1) gK = K for all K E 2 No. (2) The columns of A M are just the representatives of the G-orbits. . and PgXN = XN.. We define a v x 2 IEI matrix A and a-2 [~l x g matrix C such that the J t h column of A with J E FI is either the I t h vector of the canonical basis of R" if J = I or zero.10. then N. . and (2) there exists L C_ N c such that gt-l L # L ( i f g K = K for a l l K E 2 No.l N are linearly independent. we can assume that g # 1E). + G. --~tg ' . XN ~ O. KE2N c If m is the order of the element 9. If N = gN (since g = 1E is trivial. + KCN ~ From Remark 2. Now the definition of the G-orbit permits the conclusion. g t . . ..7 and Proposition 2. So XN ( # 0) is also an eigenvector of M. . . m--1 Let XN = ~ j = 0 pgj. . so we can suppose that t > 2). N= + +--..~ t N .1(ii) the following facts: (1) The columns of M C which come from the same G-orbit are always equM. if N ¢ gN.2(ii) we have pg N gN = "[KXg K . g N . Denote by v = [U0<j<n J j l the numbers of the G-orbits.
Furthermore. AMC = F (3) F (2) F(1) F (0) 3 0 0 1 2 0 0 2 1 0 0 3 F (2) F 0) F (°) where /~ f(2) ~ /k . M has at most v different eigenvalues. independent.3. In Example 1. hence there exist just u different such vectors y I . For an eigenvalue Ai. From Proposition 3. M and A M C have the same eigenvalues. we choose an eigenvector satisfying the conditions of Proposition 3. the eigenvectors of A M C possess the properties in Proposition 2.X i . so ( A M C ) Y I = A M C A X I = A M X I = Az A X I = A I y / .202 ZHI-XIONG WEN EXAMPLE 1'.10. EXAMPLE 3. we have oooooo 1 A= 0 0 0 0 /I 0 0 0 0 1 0 0 0 0 0 0 0 0 C __ ' 0 0 0 0 1 0 0 0 1 0 0 0 I 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 THEOREM 3. { y I } is also linearly independent.3(ii).3(ii). We have therefore C A X I -. Take a vector y I _= A X I c R ~. Since { X I} is linearly • In Example 1. Notice t h a t y I is obtained by putting only a representative of every G-orbit corresponding to the components of X I in Proposition 3. Proof. and A M C is diagonalizable.4.
Y. i. Aharony. Wiley.~} F(4) {E} ----+ --+ ' I F (2) F{O) n} {n. its T M F is denoted by MN. M N is just the matrix B defined as in the proof of Theorem 2. Peyri~re. Non-negative Matrices. A. Phys. where F = ("}'K)Ke2N~. J. REFERENCES 1 B.¢N. From Proposition 2. we have M N F = xNF. TN : S N ) N c with CN(S) = ¢ ( s ) \ N . and define CN.¢.5. Private communication. TN(S) ---.s} I I.. Mandelbrot.6. Thus we obtain an NO-chessboard (sN. I I [----1 F(O) {¢} _1 ~I t_ I. _ _ ~I L_ _ F(2) {n.6 as follows: Take S n = {s e S. 2 E.TN). Let M be a T M F associated with an E-chessboard (S.4. Peyri~re.T(S).T). M0 = M. N C ¢(s)}. *I l. In particular. _ _J REMARK 3. Seneta. We give a explicit construction of the sequence {TK}Ke2N c of Theorem 2. and J. Gefen. Received 3 January 1994.e} k_. F (4) F(3) 4 2 0 1 0 0~ 4 5 6 4 3 0 0 1 2 2 3 4 0 0 0 1 2 4 F (2) {~. their transfer matrices and their eigen-dimensional sequence. 3 J. final manuscript accepted 27 May 1994 . Fractals.e.e. A 18:335-354 (1985).T R A N S F E R MATRICES OF FRACTALS 203 In Example 2. 1973.e} 0 0 0 0 0 0 0 0 0 0 0 0 F(1) F(O) A M C -- where F(3) {n. 
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9864640831947327, "perplexity": 3100.2478587769024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823348.23/warc/CC-MAIN-20181210144632-20181210170132-00110.warc.gz"} |
https://www.physicsforums.com/threads/physics-esque-question.255383/ | # Physics-esque question
1. Sep 11, 2008
### lnl
Question:
Use Newton's version of Kepler's third law to answer the following questions. (Hint: The calculations for this problem are so simple that you will not need a calculator.) Imagine another solar system, with a star of the same mass as the Sun. Suppose there is a planet in that solar system with a mass twice that of Earth orbiting at a distance of 1 AU from the star. What is the orbital period of this planet? Explain.
Any thoughts?
2. Sep 11, 2008
### tiny-tim
Welcome to PF!
Hi lnl! Welcome to PF!
Show us what you've tried, and where you're stuck, and then we'll know how to help.
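For reference once the formula is set up, a minimal numerical sketch (ours, not an official solution) of Newton's version of Kepler's third law, $T^2 = 4\pi^2 a^3 / \big(G(M + m)\big)$, showing why doubling the planet's mass barely changes the period:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30        # kg
M_earth = 5.972e24      # kg
AU = 1.496e11           # m

def period_years(m_planet):
    # Newton's form of Kepler's third law: T = 2*pi*sqrt(a^3 / (G*(M + m)))
    T = 2 * math.pi * math.sqrt(AU**3 / (G * (M_sun + m_planet)))
    return T / (365.25 * 24 * 3600)

T1, T2 = period_years(M_earth), period_years(2 * M_earth)
assert abs(T1 - 1.0) < 0.01          # Earth-like case: about 1 year
assert abs(T2 - T1) / T1 < 1e-5      # planet mass is negligible next to the star's
```

Since $m \ll M$, doubling $m$ changes $T$ by only about $\tfrac{1}{2}\,m/M \approx 1.5\times10^{-6}$, so the period is still essentially one year.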
https://en.wikipedia.org/wiki/Langmuir_adsorption_model | Fig 1. A schematic showing equivalent sites, occupied (blue) and unoccupied (red) clarifying the basic assumptions used in the model. The adsorption sites (heavy dots) are equivalent and can have unit occupancy. Also, the adsorbates are immobile on the surface.
The Langmuir adsorption model explains adsorption by assuming an adsorbate behaves as an ideal gas at isothermal conditions. Under these conditions the adsorbate's partial pressure, $p_A$, is related to the volume of adsorbate, $V$, adsorbed onto a solid adsorbent. The adsorbent, as indicated in Figure 1, is assumed to be an ideal solid surface composed of a series of distinct sites capable of binding the adsorbate. The adsorbate binding is treated as a chemical reaction between the adsorbate molecule $A_{g}$ and an empty site, $S$. This reaction yields an adsorbed complex $A_{ad}$ with an associated equilibrium constant $K_{eq}$
$A_{g}\,\,\, + \,\,\, S \,\,\, \rightleftharpoons \,\,\, A_{ad}$
From these assumptions the Langmuir isotherm can be derived (see below), which states that:
$\theta_A = \frac{V}{V_m} = \frac{K_{eq}^A\,p_A }{1+K_{eq}^A\,p_A}$
where $\theta_A$ is the fractional occupancy of the adsorption sites and $V_m$ is the volume of the monolayer. A continuous monolayer of adsorbate molecules surrounding a homogeneous solid surface is the conceptual basis for this adsorption model.[1]
The Langmuir isotherm is formally equivalent to the Hill equation in biochemistry.
## Background and experiments
In 1916, Irving Langmuir presented his model for the adsorption of species onto simple surfaces. Langmuir was awarded the Nobel Prize in 1932 for his work concerning surface chemistry. He hypothesized that a given surface has a certain number of equivalent sites to which a species can “stick”, either by physisorption or chemisorption. His theory began when he postulated that gaseous molecules do not rebound elastically from a surface, but are held by it in a way similar to groups of molecules in solid bodies.[2]
Langmuir published two papers that proved the assumption that adsorbed films do not exceed one molecule in thickness. The first experiment involved observing electron emission from heated filaments in gases.[3] The second, a more direct proof, examined and measured the films of liquid on an adsorbent surface layer. He also noted that generally the attractive strength between the surface and the first layer of adsorbed substance is much greater than the strength between the first and second layer. However, there are instances where the subsequent layers may condense given the right combination of temperature and pressure.[4]
The most important empirical data came from a set of experiments that Langmuir ran to test the adsorption of several gases on mica, glass and platinum. The experiments began at very low pressures (~100 μbar) in order to more easily measure the change in quantities of free gas and also to avoid condensation. He then ran the experiments at different temperatures and pressures, which demonstrated the pressure dependence shown below.
## Basic assumptions of the model
Inherent within this model, the following assumptions[5] are valid specifically for the simplest case: the adsorption of a single adsorbate onto a series of equivalent sites on the surface of the solid.
1. The surface containing the adsorbing sites is a perfectly flat plane with no corrugations (assume the surface is homogeneous).
2. The adsorbing gas adsorbs into an immobile state.
3. All sites are equivalent.
4. Each site can hold at most one molecule of A (mono-layer coverage only).
## Derivations of the Langmuir Adsorption Isotherm
### Kinetic derivation
This section[5] provides a kinetic derivation for a single adsorbate case. The multiple adsorbate case is covered in the Competitive adsorption sub-section. The model assumes adsorption and desorption as being elementary processes, where the rate of adsorption rad and the rate of desorption rd are given by:
$r_{ad} = k_{ad} \, p_A \, [S]$
$r_{d} = k_d \, [A_{ad}]$
where pA is the partial pressure of A over the surface, [S] is the concentration of bare sites in number/m², [Aad] is the surface concentration of A in molecules/m², and kad and kd are the rate constants of the forward adsorption and backward desorption reactions, respectively.
At equilibrium, the rate of adsorption equals the rate of desorption. Setting rad=rd and rearranging, we obtain:
$\frac {[A_{ad}]}{p_A[S]} = \frac{k_{ad}}{k_d} = K_{eq}^A$
The concentration of all sites [S0] is the sum of the concentration of free sites [S] and of occupied sites:
$[S_0] = [S] + [A_{ad}]\,$
Combining this with the equilibrium equation, we get:
$[S_0] = \frac {[A_{ad}]}{K_{eq}^A\,p_A} + [A_{ad}] = \frac{1+K_{eq}^A\,p_A}{K_{eq}^A\,p_A}\,[A_{ad}]$
We define now the fraction of the surface sites covered with A, θA, as:
$\theta_A = \frac{[A_{ad}]}{[S_0]}$
This, applied to the previous equation that combined site balance and equilibrium, yields the Langmuir adsorption isotherm:
$\theta_A = \frac{K_{eq}^A\,p_A }{1+K_{eq}^A\,p_A}$
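The resulting isotherm is easy to explore numerically. The following short Python sketch (the value of K below is an arbitrary, assumed equilibrium constant, not a measured one) shows the two limiting regimes of the equation above:

```python
# Langmuir isotherm theta_A = K*p / (1 + K*p) for a single adsorbate.

def langmuir_coverage(p, k_eq):
    """Fractional surface coverage theta_A at partial pressure p."""
    return k_eq * p / (1.0 + k_eq * p)

if __name__ == "__main__":
    K = 2.0  # assumed equilibrium constant (reciprocal pressure units)
    for p in (0.0, 0.1, 0.5, 2.0, 50.0):
        print(f"p = {p:6.2f}  theta_A = {langmuir_coverage(p, K):.4f}")
    # For K*p << 1, theta_A ~ K*p (linear); for K*p >> 1, theta_A -> 1.
```

At low pressure the coverage grows linearly with pressure, while at high pressure the surface saturates at a full monolayer, which is the qualitative content of the isotherm.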
### Statistical mechanical derivation
This derivation[6][7] was originally provided by Volmer and Mahnert[8] in 1925.
The partition function for a finite number of adsorbates adsorbed on a surface, in a canonical ensemble, is given by
$Z(N) \, = \, \frac {\zeta^{N}_{L}}{N!} \, \frac { N_{S}!} { (N_{S}-N)!}$
where $\zeta_{L}$ is the partition function of a single adsorbed molecule and $N_{S}$ is the number of sites available for adsorption. Hence N, the number of molecules adsorbed, can be less than or equal to NS. The first factor of Z(N) accounts for the total partition function of the different molecules by taking a product of the individual partition functions (refer to Partition function of subsystems). The latter factor accounts for the overcounting arising due to the indistinguishable nature of the adsorption sites. The grand canonical partition function is given by
$\mathcal{Z}(\mu) \, = \, \sum_{N=0}^{N_{S}} \, \exp \left(\, \frac {N\mu}{k_{B}T} \right)\frac {\zeta^{N}_{L}}{N!} \, \frac { N_{S}!} { (N_{S}-N)!} \,$
As it has the form of a binomial series, the summation reduces to
$\mathcal{Z} \, = \, (1+x)^{N_{S}}$
where $x \, = \, \zeta_{L}\exp \left( \frac {\mu}{k_{B}T} \right)$
The Landau free energy, which is a generalized Helmholtz free energy, is given by
$L \, = \, -k_{B}T\ln(\mathcal{Z})\, = \, -k_{B}TN_{S}\ln(1+x)$
According to the Maxwell relations regarding the change of the Helmholtz free energy with respect to the chemical potential,
$\left (\frac {\partial L}{\partial \mu}\right)_{T, V, Area} = \, -N$
which gives
$\theta_{A} \,= \, \frac{N}{N_{S}} \, = \, \frac{x}{1+x}$
Now, invoking the condition that the system is in equilibrium, the chemical potential of the adsorbates is equal to that of the gas surrounding the adsorbent.
$\mu_{A} \, = \, \mu_{g}$
[Figure: An example plot of the surface coverage θA = P/(P + P0) with respect to the partial pressure of the adsorbate, for P0 = 100 mTorr. The graph shows the surface coverage levelling off at pressures higher than P0.]
$\mu_{g} = \left( \frac{\partial A_{g}}{\partial N_i} \right)_{T,V, N_{j \ne i}} \,\, = k_{B}T\ln \frac{N^{3D}}{Z^{3D}}$
where N3D is the number of gas molecules, Z3D is the partition function of the gas molecules and Ag = -kBT ln Z3D is the Helmholtz free energy of the gas. Further, we get
$x \, = \, \frac {\theta_A}{1- \theta_A} \, = \, \zeta_{L} \frac{N^{3D}}{\zeta^{3D}} \, = \, \zeta_L \left ( \frac {h^2}{2 \pi mk_BT} \right)^{3/2} \frac{P}{k_BT} \, = \, \frac{P}{P_0}$
where
$P_0 = \frac {k_BT}{\zeta_L} \left ( \frac {2 \pi mk_BT}{h^2} \right)^{3/2}$
Finally, we have
$\theta_A = \frac {P}{P+P_0}$
This is plotted in the figure alongside, demonstrating that the surface coverage increases quite rapidly with the partial pressure of the adsorbate but levels off after P reaches P0.
### Competitive adsorption

The previous derivations assume that there is only one species, A, adsorbing onto the surface. This section[9] considers the case when there are two distinct adsorbates present in the system. Consider two species A and B that compete for the same adsorption sites. The following assumptions are applied here:
1. All the sites are equivalent.
2. Each site can hold at most one molecule of A or one molecule of B, but not both.
As derived using kinetical considerations, the equilibrium constants for both A and B are given by
$\frac {[A_{ad}]}{p_A\,[S]} = K^A_{eq}$
and
$\frac {[B_{ad}]}{p_B\,[S]} = K^B_{eq}$
The site balance states that the concentration of total sites [S0] is equal to the sum of free sites, sites occupied by A and sites occupied by B:
$[S_0] = [S] + [A_{ad}] + [B_{ad}]\,$
Inserting the equilibrium equations and rearranging in the same way we did for the single-species adsorption, we get similar expressions for both θA and θB:
$\theta_A = \frac {K^A_{eq}\,p_A}{1+K^A_{eq}\,p_A+K^B_{eq}\,p_B}$
$\theta_B = \frac {K^B_{eq}\,p_B}{1+K^A_{eq}\,p_A+K^B_{eq}\,p_B}$
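These two expressions can be checked with a few lines of Python (the pressures and equilibrium constants below are arbitrary, assumed values). Note that the two covered fractions and the free fraction always sum to one, which is just the site balance:

```python
# Competitive Langmuir adsorption: species A and B compete for the same sites.

def competitive_coverage(p_a, p_b, k_a, k_b):
    """Return (theta_A, theta_B, theta_free) from the shared denominator."""
    denom = 1.0 + k_a * p_a + k_b * p_b
    return k_a * p_a / denom, k_b * p_b / denom, 1.0 / denom

if __name__ == "__main__":
    ta, tb, tf = competitive_coverage(p_a=0.5, p_b=1.5, k_a=3.0, k_b=0.8)
    print(f"theta_A = {ta:.3f}, theta_B = {tb:.3f}, free = {tf:.3f}")
```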
### Dissociative adsorption

The other case of special importance is when a molecule D2 dissociates into two atoms upon adsorption.[9] Here, the following assumptions would be held to be valid:
1. D2 completely dissociates into two atoms of D upon adsorption.
2. The D atoms adsorb onto distinct sites on the surface of the solid and then move around and equilibrate.
3. All sites are equivalent.
4. Each site can hold at most one atom of D.
Using similar kinetic considerations, we get:
$\frac {[D_{ad}]}{p^{1/2}_{D_2}[S]} = K^D_{eq}$
The 1/2 exponent on pD2 arises because one gas phase molecule produces two adsorbed species. Applying the site balance as done above:
$\theta_D = \frac {K^D_{eq}\,p^{1/2}_{D_2}}{1 + K^D_{eq}\,p^{1/2}_{D_2}}$
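As a numerical sketch (with an assumed equilibrium constant), the square-root pressure dependence can be coded directly:

```python
import math

# Dissociative adsorption: theta_D = K*sqrt(p) / (1 + K*sqrt(p)).

def dissociative_coverage(p_d2, k_eq):
    x = k_eq * math.sqrt(p_d2)
    return x / (1.0 + x)

if __name__ == "__main__":
    K = 1.0  # assumed equilibrium constant
    for p in (0.25, 1.0, 4.0):
        print(f"p = {p:5.2f}  theta_D = {dissociative_coverage(p, K):.4f}")
    # Quadrupling the pressure only doubles K*sqrt(p): coverage grows more
    # slowly with pressure than in the non-dissociative case.
```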
## Entropic considerations
The formation of Langmuir monolayers by adsorption onto a surface dramatically reduces the entropy of the molecular system. This conflicts with the second law of thermodynamics, which states that entropy will increase in an isolated system. This implies that either another locally active force is stronger than the thermodynamic potential, or that our expression of the entropy of the system is incomplete.
To find the entropy decrease, we find the entropy of the molecule when in the adsorbed condition.[10]
$S \, = \, S_{configurational} \, + \, S_{vibrational}$
$S_{conf} \, = \, k_B \ln \Omega_{conf}$
$\Omega_{conf} \, = \, \frac {N_S!}{N! (N_S -N)!}$
Using Stirling's approximation, we have,
$\ln N! \, = \, N\ln N - N$
$S_{conf}/k_B = -\theta_A \, \ln(\theta_A) - (1-\theta_A) \, \ln(1- \theta_A)$
On the other hand, the entropy of a molecule of an ideal gas is
$\frac {S_{gas}}{Nk_B} \, = \, \ln \left (\frac {k_BT}{P \lambda^3} \right) + 5/2$
where $\lambda$ is the Thermal de Broglie wavelength of the gas molecule.
## Limitations of the model

The Langmuir adsorption model deviates significantly in many cases, primarily because it fails to account for the surface roughness of the adsorbent. Rough inhomogeneous surfaces have multiple site-types available for adsorption, and some parameters vary from site to site, such as the heat of adsorption.
## Modifications of the Langmuir Adsorption Model
The modifications try to account for the points mentioned in above section like surface roughness, inhomogeneity, and adsorbate-adsorbate interactions.
### Freundlich adsorption isotherm

Main article: Freundlich equation

The Freundlich isotherm is the most important multisite adsorption isotherm for rough surfaces.
$\theta_A = \alpha_F\,p^{C_F}$
where αF and CF are fitting parameters.[11] This equation implies that if one makes a log-log plot of adsorption data, the data will fit a straight line. The Freundlich isotherm has two parameters while the Langmuir equation has only one: as a result, it often fits the data on rough surfaces better than the Langmuir equation.
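The claim about the log-log plot is easy to verify: for any two pressures, the slope of log θ against log p equals C_F exactly (the parameter values below are assumed for illustration):

```python
import math

# Freundlich isotherm theta = a * p**c: a straight line of slope c on
# log-log axes, which is how the fit parameters are usually read off.

def freundlich(p, a, c):
    return a * p ** c

if __name__ == "__main__":
    a, c = 0.7, 0.35  # assumed fitting parameters alpha_F and C_F
    p1, p2 = 0.01, 100.0
    slope = (math.log(freundlich(p2, a, c)) - math.log(freundlich(p1, a, c))) / (
        math.log(p2) - math.log(p1)
    )
    print(f"log-log slope = {slope:.4f}, C_F = {c}")
```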
A related equation is the Toth equation. Rearranging the Langmuir equation, one can obtain:
$\theta_A = \frac{p_A}{\frac{1}{K_{eq}^A} + p_A}$
Toth[12] modified this equation by adding two parameters, αT0 and CT0 to formulate the Toth equation:
$\theta^{C_{T_0}} = \frac {\alpha_{T_0}\,p_A^{C_{T_0}}}{\frac{1}{K_{eq}^A} + p_A^{C_{T_0}}}$
### Temkin adsorption isotherm

This isotherm takes into account indirect adsorbate-adsorbate interactions. Temkin[13] noted experimentally that heats of adsorption would more often decrease than increase with increasing coverage.
$\frac{[A_{ad}]}{p_A\,[S]} = K^A_{eq} \propto \mathrm{e}^{-\Delta G_{ad}/RT} = \mathrm{e}^{\Delta S_{ad}/R}\,\mathrm{e}^{-\Delta H_{ad}/RT}$
He derived a model assuming that as the surface is loaded up with adsorbate, the heat of adsorption of all the molecules in the layer would decrease linearly with coverage due to adsorbate/adsorbate interactions:
$\Delta H_{ad} = \Delta H^0_{ad}\,(1-\alpha_T\,\theta)$
where αT is a fitting parameter. Assuming the Langmuir adsorption isotherm still applies to the adsorbed layer, $K_{eq}^A$ is expected to vary with coverage, as follows:
$K^A_{eq} = K^{A,0}_{eq} \mathrm{e}^{(\Delta H^0_{ad}\,\alpha_T \,\theta / k\,T)}$
Langmuir's isotherm can be rearranged to this form:
$K^A_{eq}\,p_A = \frac{\theta }{1-\theta}$
Substituting the expression of the equilibrium constant and taking the natural logarithm:
$\ln (K^{A,0}_{eq}\,p_A) = \frac{-\Delta H^0_{ad} \, \alpha_T \, \theta}{k\,T} + \ln \left( \frac{\theta}{1-\theta}\right)$
### BET equation
Main article: BET theory
Brunauer's model of multilayer adsorption, that is, a random distribution of sites covered by one, two, three, etc., adsorbate molecules.
Brunauer, Emmett and Teller[14] derived the first isotherm for multilayer adsorption. It assumes a random distribution of sites that are empty or that are covered by one monolayer, two layers and so on, as illustrated alongside. The main equation of this model is:
$\frac{[A]}{S_0} = \frac{c_B \, x_B}{(1-x_B)\,[1 + (c_B - 1)\,x_B]}$
where
$x_B = p_A\,K_m, \qquad c_B = \frac{K_1}{K_m}$
and [A] is the total concentration of molecules on the surface, given by:
$[A] = \sum^{\infty}_{i=1} i\,[A]_i = \sum^{\infty}_{i=1}i \, K_1 \, K^{i-1}_m \, p^i_A \, [A]_0$
where
$K_i = \frac{[A]_i}{p_A\,[A]_{i-1}}$
in which [A]0 is the number of bare sites, and [A]i is the number of surface sites covered by i molecules.
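The closed form above can be cross-checked against the defining series numerically. In the sketch below the constants K1 and Km are assumed values, chosen so that x_B = Km·p_A < 1, which is required for the series to converge:

```python
# BET isotherm: closed-form [A]/S0 versus direct summation of the series
# [A] = sum_i i*K1*Km**(i-1)*p**i*[A]0 with S0 = [A]0 + sum_i [A]_i.

def bet_closed_form(p, k1, km):
    x = km * p            # x_B
    c = k1 / km           # c_B
    return c * x / ((1.0 - x) * (1.0 + (c - 1.0) * x))

def bet_series(p, k1, km, n_terms=200):
    x = km * p
    c = k1 / km
    molecules = sum(i * c * x ** i for i in range(1, n_terms + 1))  # [A]/[A]0
    sites = 1.0 + sum(c * x ** i for i in range(1, n_terms + 1))    # S0/[A]0
    return molecules / sites

if __name__ == "__main__":
    p, k1, km = 0.3, 5.0, 1.5   # assumed values, so that Km*p = 0.45 < 1
    print(bet_closed_form(p, k1, km), bet_series(p, k1, km))  # the two agree
```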
### Adsorption of binary liquids on solids

This section describes the surface coverage when the adsorbate is in the liquid phase and is a binary mixture.[15]
For both phases ideal (no lateral interactions, homogeneous surface), the composition of the surface phase for a binary liquid system in contact with a solid surface is given by the classic Everett isotherm equation (a simple analogue of the Langmuir equation), in which the components are interchangeable (i.e. "1" may be exchanged for "2") without changing the form of the equation:
$x_1^s \, = \, \frac{Kx_1^l}{1+(K-1)x_1^l}$
where the usual normalization condition for a multicomponent system holds:
$\sum_{i=1}^{k} x^s_i = 1, \qquad \sum_{i=1}^{k} x^l_i = 1$
By simple rearrangement, we get
$x_1^s \, = \, \frac{K[x_1^l/(1-x_1^l)]}{1+K[x_1^l/(1-x_1^l)]}$
This equation describes competition of components "1" and "2".
## References
1. ^ Hanaor, D.A.H.; Ghadiri, M.; Chrzanowski, W.; Gan, Y. (2014). "Scalable Surface Area Characterization by Electrokinetic Analysis of Complex Anion Adsorption" (PDF). Langmuir 30 (50): 15143–15152. doi:10.1021/la503581e.
2. ^ Langmuir, Irving (June 1918). "The Adsorption of Gases on Plane Surface of Glass, Mica and Platinum". The Research Laboratory of The General Electric Company: 1361–1402. doi:10.1021/ja02242a004. Retrieved 11 June 2013.
3. ^ Langmuir, Irving (1916). "Part I". The Research Laboratory of The General Electric Company: 2221.
4. ^ Langmuir, Irving (1918). "Part II". The Research Laboratory of The General Electric Company: 1848.
5. ^ a b Masel, Richard (1996). Principles of Adsorption and Reaction on Solid Surfaces. Wiley Interscience. p. 240. ISBN 0-471-30392-5.
6. ^ Masel, Richard (1996). Principles of Adsorption and Reaction on Solid Surfaces. Wiley Interscience. p. 242. ISBN 0-471-30392-5.
7. ^ Cahill, David (2008). "Lecture Notes 5 Page 2" (pdf). University of Illinois, Urbana Champaign. Retrieved 2008-11-09.
8. ^ Volmer, M. A.; Mahnert, P. (1925). Z. Physik. Chem. 115: 253.
9. ^ a b Masel, Richard (1996). Principles of Adsorption and Reaction on Solid Surfaces. Wiley Interscience. p. 244. ISBN 0-471-30392-5.
10. ^ Cahill, David (2008). "Lecture Notes 5 Page 13" (pdf). University of Illinois, Urbana Champaign. Retrieved 2008-11-09.
11. ^ Freundlich, H. (1909). "eine darstellung der chemie der kolloide und verwanter gebiete.". Kapillarchemie (Leipzig: Academishe Bibliotek).
12. ^ Toth, J., Acta. Chim. Acad. Sci. Hung 69, 311(1971)
13. ^ Temkin, M. I.; Pyzhev, V. (1940). Acta Physicochimica URSS 12: 327.
14. ^ Brunauer, Stephen; Emmett, P. H.; Teller, Edward (1938). "Adsorption of Gases in Multimolecular Layers". Journal of the American Chemical Society 60 (2): 309–319. doi:10.1021/ja01269a023. ISSN 0002-7863.
15. ^ Marczewski, A. W (2002). "Basics of Liquid Adsorption". Retrieved 2008-11-24.
• The constitution and fundamental properties of solids and liquids. part i. solids. Irving Langmuir; J. Am. Chem. Soc. 38, 2221-95 1916
http://mathhelpforum.com/trigonometry/194618-sine-rule-cos-theta-p-2q.html | # Thread: From Sine Rule to cos (theta) = p / 2q
1. ## From Sine Rule to cos (theta) = p / 2q
Hi,
Doing some self-teach on Maths, and got stumped by the following:-
In triangle PQR, angle PQR = theta and angle QPR = 2 theta. Prove cos (theta) = p / 2q
This question occurs before the presentation of trigonometric identities, so it implies that knowledge of these is not required to solve the above.
The question is given as part of a discussion of the Sine Rule, given in the form
a / sin A = b / sin B = c / sin C
I've perhaps gone down the wrong route by:-
• Making P the centre of a circle
• R lie on the circumference of a circle
• Q a point in the circle (and perhaps on)
• Drawing a line of length p from R to intersect the circle, drawing a diameter of length 2q from R through P and on to another point, giving a right angle triangle with a hypotenuse 2q and base p, with an angle whose cosine is p / 2q
What I cannot work out is a way to relate the angle of the triangle I constructed to the angles given in the original triangle.
(The text derives the Sine rule from i) angle in a semicircle, ii) angles in same segment, iii) opposite angles of a cyclic quadrilateral).
Knowing where I'm going wrong with this would be most helpful. Thanks!
2. ## Re: From Sine Rule to cos (theta) = p / 2q
You need not draw any circles! The problem is quite simple.
According to sine rule:
$\frac{q}{\sin{\theta}}=\frac{p}{\sin{(2\theta)}}= \frac{r}{\sin(\hat{R})}$
We have: $\frac{q}{\sin{\theta}}=\frac{p}{\sin{(2\theta)}}$
Using the double angle formula we get: $\frac{q}{\sin{\theta}}=\frac{p}{2\sin(\theta)\cos(\theta)}$
$\implies q=\frac{p}{2\cos(\theta)}$
$\implies \cos(\theta)=\frac{p}{2q}$
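A quick numeric check of the result (a sketch, not part of the original thread): pick any θ with 0 < 3θ < π, generate the side lengths from the sine rule (the common factor cancels), and compare:

```python
import math

# Numeric check of cos(theta) = p / (2q) for a triangle with angles
# theta, 2*theta and pi - 3*theta.

theta = 0.4
p = math.sin(2 * theta)   # side opposite angle P (= 2*theta), up to a common scale
q = math.sin(theta)       # side opposite angle Q (= theta)
print(math.cos(theta), p / (2 * q))   # the two values agree
```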
http://mathhelpforum.com/advanced-statistics/125407-expected-value-squared-normal-print.html | # Expected value of squared normal
• January 25th 2010, 11:55 AM
mrfloyd
Expected value of squared normal
I need to find the expected value of a N~(0,1) squared distribution. I realize this is a Chi Square distribution with k = 1. The expected value is therefore k = 1. However, I am having trouble taking the following integral to find the expected value:
Integrate y*exp(-y^4) from -inf to inf.
I'm sure there is a trick somewhere to figure it out but i can't find it!
Thanks
• January 25th 2010, 12:11 PM
CaptainBlack
Quote:
Originally Posted by mrfloyd
I need to find the expected value of a N~(0,1) squared distribution. I realize this is a Chi Square distribution with k = 1. The expected value is therefore k = 1. However, I am having trouble taking the following integral to find the expected value:
Integrate y*exp(-y^4) from -inf to inf.
I'm sure there is a trick somewhere to figure it out but i can't find it!
Thanks
The expected value of $x^2$ is:
$E(x^2)=\int_{-\infty}^{\infty} x^2 p(x) \; dx$
which in this case comes to the variance of $x$
CB
• January 25th 2010, 03:03 PM
matheagle
The integral of an odd function from minus infinity to infinity is zero.
But I'm not sure if that is what you're asking.
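As a sanity check on the answers above (a sketch, not part of the original thread), one can integrate x²·φ(x) numerically for the standard normal density φ and confirm that E(X²) = Var(X) + E(X)² = 1:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def expected_x2(a=-8.0, b=8.0, n=4000):
    """Composite Simpson's rule for the integral of x^2 * phi(x) over [a, b]."""
    h = (b - a) / n
    s = a * a * phi(a) + b * b * phi(b)
    for i in range(1, n):
        x = a + i * h
        s += (4 if i % 2 else 2) * x * x * phi(x)
    return s * h / 3.0

if __name__ == "__main__":
    print(expected_x2())   # very close to 1, the variance of N(0, 1)
```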
https://openreview.net/forum?id=H1qfnDmta6 | ## The DEformer: An Order-Agnostic Distribution Estimating Transformer
Jun 02, 2021 (edited Jul 11, 2021) · INNF+ 2021 poster
• Abstract: Order-agnostic autoregressive distribution (density) estimation (OADE), i.e., autoregressive distribution estimation where the features can occur in an arbitrary order, is a challenging problem in generative machine learning. Prior work on OADE has encoded feature identity by assigning each feature to a distinct fixed position in an input vector. As a result, architectures built for these inputs must strategically mask either the input or model weights to learn the various conditional distributions necessary for inferring the full joint distribution of the dataset in an order-agnostic way. In this paper, we propose an alternative approach for encoding feature identities, where each feature's identity is included alongside its value in the input. This feature identity encoding strategy allows neural architectures designed for sequential data to be applied to the OADE task without modification. As a proof of concept, we show that a Transformer trained on this input (which we refer to as "the DEformer", i.e., the distribution estimating Transformer) can effectively model binarized-MNIST, approaching the performance of fixed-order autoregressive distribution estimating algorithms while still being entirely order-agnostic. Additionally, we find that the DEformer surpasses the performance of recent flow-based architectures when modeling a tabular dataset.
https://en.wikibooks.org/wiki/Statistics/Summary/Variance | # Statistics/Summary/Variance
### Variance and Standard Deviation
Probability density function for the normal distribution. The red line is the standard normal distribution.
#### Measure of Scale
When describing data it is helpful (and in some cases necessary) to determine the spread of a distribution. One way of measuring this spread is by calculating the variance or the standard deviation of the data.
In describing a complete population, the data represents all the elements of the population. As a measure of the "spread" in the population one wants to know a measure of the possible distances between the data and the population mean. There are several options to do so. One is to measure the average absolute value of the deviations. Another, called the variance, measures the average square of these deviations.
A clear distinction should be made between dealing with the population or with a sample from it. When dealing with the complete population the (population) variance is a constant, a parameter which helps to describe the population. When dealing with a sample from the population the (sample) variance is actually a random variable, whose value differs from sample to sample. Its value is only of interest as an estimate for the population variance.
##### Population variance and standard deviation
Let the population consist of the N elements x1,...,xN. The (population) mean is:
${\displaystyle \mu ={\frac {1}{N}}\sum _{i=1}^{N}x_{i}}$.
The (population) variance σ2 is the average of the squared deviations from the mean or (xi - μ)2 - the square of the value's distance from the distribution's mean.
${\displaystyle \sigma ^{2}={\frac {1}{N}}\sum _{i=1}^{N}(x_{i}-\mu )^{2}}$.
Because of the squaring, the variance is not directly comparable with the mean and the data themselves. The square root of the variance is called the Standard Deviation σ. Note that σ is the root mean square of the differences between the data points and the average.
##### Sample variance and standard deviation
Let the sample consist of the n elements x1,...,xn, taken from the population. The (sample) mean is:
${\displaystyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}}$.
The sample mean serves as an estimate for the population mean μ.
The (sample) variance s2 is a kind of average of the squared deviations from the (sample) mean:
${\displaystyle s^{2}={\frac {1}{n-1}}\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}$.
Also for the sample we take the square root to obtain the (sample) standard deviation s.
A common question at this point is "why do we square the numerator?" One answer is: to get rid of the negative signs. Numbers are going to fall above and below the mean and, since the variance is looking for distance, it would be counterproductive if those distances factored each other out.
#### Example
When rolling a fair die, the population consists of the 6 possible outcomes 1 to 6. A sample may consist instead of the outcomes of 1000 rolls of the die.
The population mean is:
${\displaystyle \mu ={\frac {1}{6}}(1+2+3+4+5+6)=3.5}$,
and the population variance:
${\displaystyle \sigma ^{2}={\frac {1}{6}}\sum _{i=1}^{6}(i-3.5)^{2}={\frac {1}{6}}(6.25+2.25+0.25+0.25+2.25+6.25)={\frac {35}{12}}\approx 2.917}$
The population standard deviation is:
${\displaystyle \sigma ={\sqrt {\frac {35}{12}}}\approx 1.708}$.
Notice how this standard deviation is somewhere in between the possible deviations.
So if we were working with one six-sided die: X = {1, 2, 3, 4, 5, 6}, then σ2 = 2.917. We will talk more about why this is different later on, but for the moment assume that you should use the equation for the sample variance unless you see something that would indicate otherwise.
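The die example can be reproduced with a few lines of Python:

```python
# Population mean, variance and standard deviation for a fair six-sided die.

outcomes = [1, 2, 3, 4, 5, 6]
N = len(outcomes)
mu = sum(outcomes) / N
var = sum((x - mu) ** 2 for x in outcomes) / N   # divide by N: population
sd = var ** 0.5
print(mu, var, sd)   # 3.5, 35/12 (about 2.917), about 1.708
```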
Note that none of the above formulae are ideal when calculating the estimate and they all introduce rounding errors. Specialized statistical software packages use more complicated algorithms that take a second pass of the data in order to correct for these errors. Therefore, if it matters that your estimate of standard deviation is accurate, specialized software should be used. If you are using non-specialized software, such as some popular spreadsheet packages, you should find out how the software does the calculations and not just assume that a sophisticated algorithm has been implemented.
##### For Normal Distributions
The empirical rule states that approximately 68 percent of the data in a normally distributed dataset is contained within one standard deviation of the mean, approximately 95 percent of the data is contained within 2 standard deviations, and approximately 99.7 percent of the data falls within 3 standard deviations.
As an example, the verbal or math portion of the SAT has a mean of 500 and a standard deviation of 100. This means that 68% of test-takers scored between 400 and 600, 95% of test takers scored between 300 and 700, and 99.7% of test-takers scored between 200 and 800 assuming a completely normal distribution (which isn't quite the case, but it makes a good approximation).
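The empirical rule can be checked against the exact normal distribution: the probability that a normal variable falls within k standard deviations of its mean is erf(k/√2), which Python's standard library can evaluate:

```python
import math

def within_k_sd(k):
    """P(|X - mu| < k*sigma) for a normal random variable X."""
    return math.erf(k / math.sqrt(2.0))

if __name__ == "__main__":
    for k in (1, 2, 3):
        print(f"within {k} standard deviation(s): {within_k_sd(k):.4%}")
    # Approximately 68.27%, 95.45% and 99.73%, matching the empirical rule.
```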
#### Robust Estimators
For a normal distribution the relationship between the standard deviation and the interquartile range is roughly: SD = IQR/1.35.
For data that are non-normal, the standard deviation can be a terrible estimator of scale. For example, in the presence of a single outlier, the standard deviation can grossly overestimate the variability of the data. The result is that confidence intervals are too wide and hypothesis tests lack power. In some (or most) fields, it is uncommon for data to be normally distributed and outliers are common.
One robust estimator of scale is the "average absolute deviation", or aad. As the name implies, the mean of the absolute deviations about some estimate of location is used. This method of estimation of scale has the advantage that the contribution of outliers is not squared, as it is in the standard deviation, and therefore outliers contribute less to the estimate. This method has the disadvantage that a single large outlier can completely overwhelm the estimate of scale and give a misleading description of the spread of the data.
Another robust estimator of scale is the "median absolute deviation", or mad. As the name implies, the estimate is calculated as the median of the absolute deviation from an estimate of location. Often, the median of the data is used as the estimate of location, but it is not necessary that this be so. Note that if the data are non-normal, the mean is unlikely to be a good estimate of location.
It is necessary to scale both of these estimators in order for them to be comparable with the standard deviation when the data are normally distributed. It is typical for the terms aad and mad to be used to refer to the scaled version. The unscaled versions are rarely used.
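As an illustrative sketch (the data below are made up, and 1.4826 is the commonly used scale factor that makes the mad comparable to the standard deviation for normal data), a single outlier inflates the standard deviation while the scaled mad barely moves:

```python
import math

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0

def mad(xs, scale=1.4826):
    """Scaled median absolute deviation about the median."""
    m = median(xs)
    return scale * median([abs(x - m) for x in xs])

def sample_sd(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

if __name__ == "__main__":
    data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
    spoiled = data + [100.0]                      # add one gross outlier
    print(sample_sd(data), sample_sd(spoiled))    # the sd explodes
    print(mad(data), mad(spoiled))                # the mad barely changes
```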
https://www.e-pandu.com/2019/04/definition-of-vector-and-scalar-linear.html | # Definition of Vector and Scalar Linear Algebra
In this article we will discuss the meaning of vectors, especially vectors in R2 and R3. Many quantities in physics, such as force, velocity, acceleration, and displacement, are vectors that can be expressed as directed line segments. The algebraic view examines the algebraic properties of a vector space, that is, the properties of vector addition and of multiplying a vector by a scalar. In this article those properties will be summarized in the discussion of R2 and R3. Besides physics, vectors are also widely used in fields outside mathematics such as technology, economics, and biology.
## Vectors and Vector Operations
A solution of a system of m linear equations in n unknowns is an n-tuple of real numbers. We will call an n-tuple of real numbers a vector. If an n-tuple is expressed as a 1 x n matrix, we will call it a row vector. Conversely, if it is expressed as an n x 1 matrix, we call it a column vector. For example the solution of the linear system below,
x1 + x2 = 5
x1 – x2 = 3
can be expressed by the row vector (4, 1) or by the corresponding column vector.

In this section we will explain the representation and meaning of vectors in geometry and algebra.
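The small system above can also be solved numerically; a minimal sketch with NumPy, using the coefficients from the text:

```python
import numpy as np

# Coefficient matrix and right-hand side of the system in the text:
#   x1 + x2 = 5
#   x1 - x2 = 3
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
b = np.array([5.0, 3.0])

x = np.linalg.solve(A, b)
print(x)                  # [4. 1.] -- the solution as a row vector
print(x.reshape(-1, 1))   # the same solution written as a column vector
```

The solution tuple is the vector described above; reshaping it turns the row-vector form into the column-vector form.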
## Vector Definition of Geometry
Many quantities that we encounter in science, such as length, mass, volume, and electric charge, can be expressed by a single number.

Such a magnitude (together with the number that gives its size) is called a scalar. Other quantities, such as speed, force, torque, and displacement, require not only a number but also a direction to describe them. Such a magnitude is called a vector. In this article, vectors in space-2 and space-3 are introduced geometrically, and we discuss some of their basic properties.
Vectors can be expressed geometrically as directed line segments or arrows in space 2 or space 3. The length of the arrow is the magnitude of the vector while the direction of the arrow is the direction of the vector. The arrow has a base and a tip (Figure 1). The base of the arrow is called the initial point and the tip of the arrow is called the terminal point.
We will denote a vector with bold letters, for example u and v. Because bold type is difficult to reproduce in handwriting, you can also write a vector with an arrow over its letter.
When discussing vectors, we will refer to numbers as scalars. All scalars here are real numbers and will be expressed by ordinary lowercase letters, for example a, b, c, and k.
If, as in Figure 1, the starting point of the vector v is A and the terminal point is B, we write it as the vector AB (with an arrow over AB).
Vectors that have the same length and direction, such as the vectors in Figure 2, are called equivalent. Because a vector is determined only by its length and direction, equivalent vectors are considered equal even though they may be placed in different positions.
### Vector operations
For two or more vectors, the following operations can be carried out:

1. Vector addition (and subtraction).
2. Multiplication of a vector by a scalar.
### Rules for Vector Addition and Subtraction
To obtain the resultant of two vectors u and v, move v, without changing its size or direction, until its base coincides with the tip of u. Then u + v is the vector that connects the base of u to the tip of v. This method is called the triangle law, illustrated in Figure 3. Another way of constructing u + v is to move v so that its base coincides with the base of u. Then u + v is the vector that starts at the common base and coincides with the diagonal of the parallelogram whose sides are u and v. This method is called the parallelogram law, illustrated in Figure 4.
You can verify for yourself that vector addition satisfies the following properties:

Commutative property: u + v = v + u

Associative property: (u + v) + w = u + (v + w)
The sum of several vectors does not depend on the order in which they are added. Addition can be extended as shown in Figure 6, i.e.,

u = u1 + u2 + u3 + u4 + u5

This method is called the polygon method.
### Rules for Multiplying a Vector by a Scalar
If u is a vector, then 3u is a vector in the direction of u whose length is three times the length of u, while -2u is twice the length of u but points in the opposite direction (Figure 7). In general, cu is a scalar multiple of the vector u whose length is |c| times the length of u, in the direction of u if c is positive and in the opposite direction if c is negative. In particular, (-1)u (also written -u) has the same length as u but the opposite direction. This vector is called the negative of u because if -u is added to u, the result is the zero vector (i.e., a point). The zero vector, denoted 0, is the only vector without a definite direction, and it is the identity element for addition: u + 0 = 0 + u = u. Finally, subtraction is defined as
u - v = u + (-v)
Example 1.
In Figure 8, express w in terms of u and v.
Solution:
Because u + w = v, then
w = v- u
Example 2.
In Figure 9, express m in terms of u and v.

In general, if m reaches a fraction t of the way along the segment from the tip of u to the tip of v, with 0 < t < 1, then

m = (1 - t)u + tv
The expression we obtained for m can also be written as

u + t(v - u)

If t ranges over all real values, from -∞ to +∞, we get all the vectors pointing to points on the line shown in the following figure.
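This parametric form is easy to verify numerically; a minimal Python sketch (the sample vectors u and v are hypothetical values, not taken from the figure):

```python
def affine_combination(u, v, t):
    """Return m = (1 - t) * u + t * v, computed componentwise."""
    return tuple((1 - t) * ui + t * vi for ui, vi in zip(u, v))

u, v = (1.0, 2.0), (5.0, 6.0)

# t = 0 gives u, t = 1 gives v, t = 0.5 gives the midpoint of the segment.
assert affine_combination(u, v, 0.0) == u
assert affine_combination(u, v, 1.0) == v
print(affine_combination(u, v, 0.5))  # (3.0, 4.0)

# The equivalent form u + t*(v - u) gives the same point.
t = 0.25
alt = tuple(ui + t * (vi - ui) for ui, vi in zip(u, v))
assert alt == affine_combination(u, v, t)
```

Varying t over all real values traces out every point on the line through the tips of u and v, as the text describes.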
## Vector Definition of Algebra
Problems involving vectors can often be simplified by introducing a Cartesian coordinate system. In what follows, we limit the discussion to vectors in space-2 and space-3.
### Cartesian coordinates in space-2
We begin by taking a Cartesian coordinate system in the plane. As a representative of the vector u, we select the arrow that starts at the origin (Figure 10). This arrow is determined uniquely by the coordinates u1 and u2 of its end point, so the vector u is determined by the ordered pair (u1, u2) once the Cartesian coordinate system has been introduced, as illustrated in Figure 11. We therefore identify the ordered pair (u1, u2) with the vector u; this ordered pair is the algebraic representation of u.
To form such a coordinate system, we select a point O as the origin and two perpendicular lines through the origin as the coordinate axes, as illustrated in Figure 10 below.
Mark these axes with x and y, then choose a positive direction for each coordinate axis and also a unit of length to measure distance.
Then the arrow is determined by the coordinates u1 and u2 of its end point.
This means that the vector u is determined by an ordered pair (u1, u2).
#### Vector Operations
Two vectors u = (u1, u2) and v = (v1, v2) are equal (equivalent) if and only if u1 = v1 and u2 = v2, and the following operations apply:
u + v = (u1 + v1, u2 + v2)
u + v + w = (u1 + v1 + w1, u2 + v2 + w2)
u – v = (u1 – v1, u2 – v2)
#### Vector multiplication with scalar
To multiply u by a scalar k, we multiply each component by k, namely:

uk = ku = (ku1, ku2)

In particular, -u = (-u1, -u2) and 0 = 0u = (0, 0).
Figure 12 shows that the above definitions are equivalent to the definitions of Geometry that we discussed earlier.
Example 3.
1. If u = (1, -2) and v = (7, 6), then u + v = (1 + 7, -2 + 6) = (8, 4)
2. 4v = 4(7, 6) = (4(7), 4(6)) = (28, 24)
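The componentwise rules above are easy to express in code; a minimal Python sketch that verifies Example 3 (the helper function names are my own):

```python
def vadd(u, v):
    """Componentwise vector addition."""
    return tuple(ui + vi for ui, vi in zip(u, v))

def vsub(u, v):
    """Componentwise vector subtraction."""
    return tuple(ui - vi for ui, vi in zip(u, v))

def smul(k, u):
    """Multiply every component of u by the scalar k."""
    return tuple(k * ui for ui in u)

u, v = (1, -2), (7, 6)
assert vadd(u, v) == (8, 4)     # matches Example 3, part 1
assert smul(4, v) == (28, 24)   # matches Example 3, part 2
print(vsub(u, v))               # (-6, -8)
```

The same three functions work unchanged for triples in space-3, since `zip` pairs up however many components the tuples have.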
### Cartesian Coordinate in Space-3
Vectors in space-3 can be expressed by triples of real numbers by introducing the Cartesian coordinate system depicted in Figure 13 below.

If, as in Figure 13 above, the vector v in space-3 is positioned so that its starting point is at the origin of the Cartesian coordinate system, then the coordinates of its terminal point are called the components of v, and we write them as
v = (v1, v2, v3)
#### Vector Operations
Two vectors v = (v1, v2, v3) and w = (w1, w2, w3) are equivalent if and only if v1 = w1, v2 = w2, and v3 = w3.
To add v = (v1, v2, v3) and w = (w1, w2, w3), we add the appropriate components, namely
v + w = (v1 + w1, v2 + w2, v3 + w3)
To multiply the vector v by a scalar k, we multiply each component by k, i.e.,

vk = kv = (kv1, kv2, kv3)
Example 4.
If v = (1, -3, 2) and w = (4, 2, 1) then
v + w = (1 + 4, -3 + 2, 2 + 1) = (5, -1, 3)

2v = (2(1), 2(-3), 2(2)) = (2, -6, 4)

v - w = v + (-w) = (1 + (-4), -3 + (-2), 2 + (-1)) = (-3, -5, 1)
If a vector has starting point P1(x1, y1, z1) and terminal point P2(x2, y2, z2), then its components are

(x2 - x1, y2 - y1, z2 - z1)

That is, we obtain the components by subtracting the coordinates of the starting point from the coordinates of the terminal point. This can be seen from Figure 14 below: the vector from P1 to P2 is the difference between the vector from the origin to P2 and the vector from the origin to P1, and so its components are given by the subtraction above.
Example 5.
Calculate the components of the vector with starting point P1(2, -1, 4) and terminal point P2(7, 5, -5).

Solution: subtracting the coordinates of the starting point from those of the terminal point gives (7 - 2, 5 - (-1), -5 - 4) = (5, 6, -9).
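The terminal-minus-initial rule can be checked in code; a minimal Python sketch that reproduces the values of Examples 4 and 5 (the helper function name is my own):

```python
def components(p_start, p_end):
    """Components of the vector from p_start to p_end: terminal minus initial."""
    return tuple(e - s for s, e in zip(p_start, p_end))

# Example 5: starting point P1(2, -1, 4), terminal point P2(7, 5, -5).
p1, p2 = (2, -1, 4), (7, 5, -5)
print(components(p1, p2))  # (5, 6, -9)

# The componentwise rules also reproduce Example 4.
v, w = (1, -3, 2), (4, 2, 1)
assert tuple(a + b for a, b in zip(v, w)) == (5, -1, 3)
assert tuple(2 * a for a in v) == (2, -6, 4)
```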
## Print version ISSN 1413-7054; On-line version ISSN 1981-1829
### Ciênc. agrotec. vol.42 no.2 Lavras Mar./Apr. 2018
#### http://dx.doi.org/10.1590/1413-70542018422016817
Agricultural Sciences
Genetic progress in popcorn recurrent selection by a multivariate mixed-model approach
Progresso genético na seleção recorrente em milho pipoca via abordagem de modelos mistos multivariado
1Universidade Federal de Lavras/UFLA, Departamento de Biologia/DBI, Lavras, MG, Brasil
2University of Florida, Horticultural Science Department, Gainesville, Florida, USA
ABSTRACT
Recurrent selection is a viable alternative for popcorn breeding. However, frequent verification of progress attained is required. The aim of this study was to estimate the genetic progress attained for popping expansion (PE) and grain yield (GY) after four cycles of recurrent selection and to compare this progress with the expected progress estimated at the end of each cycle while considering the genetic relationships between the progenies via univariate and multivariate mixed-model approaches. To estimate the genetic parameters and gains from indirect selection, cycles 1, 2, 3, and 4 of a UFLA population were used. To estimate the genetic gains achieved, the following cycles were used: UFLA (original) and cycles 0, 1, 2, 3, and 4, evaluated in three environments. The multivariate approach provided more accurate estimates than did the univariate approach. There was genetic gain for PE in the recurrent selection program. In contrast, gain was not observed for GY using the different estimation strategies.
Index terms: Plant breeding; grain yield; popping expansion
RESUMO
Termos para indexação: Melhoramento de plantas; rendimento de grão; capacidade de expansão.
INTRODUCTION
In popcorn breeding, two traits are extremely important: grain yield (GY) and popping expansion (PE). Due to the complex genetic architecture of these traits and combined with the considerable influence of environmental factors, the use of recurrent selection has proven to be an effective strategy for improving populations and leads to a relatively high chance of selecting genotypically superior individuals (Freitas et al., 2013; Rodovalho et al., 2014; Freitas et al., 2014; Pena et al., 2016).
The popcorn breeding program has been conducted by Universidade Federal de Lavras (UFLA) since 2006 based on a local population (UFLA population). This population is characterized as segregating for grain type and is broadly adapted to local growing conditions; however, this population presents low PE values. As a strategy, the breeding program has used intra-population recurrent selection (IRS), prioritizing the PE trait during the first selections and considering GY during the second cycle via tandem selection. In an IRS program, the goal is to increase the mean value per se over the selection cycles via the generation of a promising recombination of genes related to the target traits of the breeding activity. Since this strategy involves a long-term breeding program, it is necessary to periodically measure the genetic progress obtained to evaluate the efficiency of the techniques implemented and assist in the decision-making process in the future (Breseghello et al., 2011).
In relation to genetic progress, we must distinguish between expected progress based on the coefficient of heritability and on the selection differential in contrast with the progress obtained in relation to the genetic gain achieved, after having passed through the recurrent selection cycles (Falconer; Mackay, 1996). Traditionally, estimates of these genetic parameters have been obtained using a fixed-model approach via the least squares method (LSM). This approach continues to offer great assistance to breeding programs, especially in annual crops, owing to the less imbalance of phenotypic data (Piepho et al., 2008). However, when there is an extensive imbalance in the data and/or complex pedigree information among genotypes, the LSM presents some limitations. In these cases, the use of a more robust procedure is needed, such as a mixed-model approach (Resende, 2007).
The REML/BLUP (restricted maximum likelihood/best linear unbiased predictor) procedure can adequately address unbalanced data and includes information on genetic relationships within a model, leading to more accurate estimates and predictions (Henderson, 1974). Another important question is whether to carry out selection considering two or more traits simultaneously. In this case, the univariate REML/BLUP procedure does not allow exploitation of genetic and phenotypic correlations that may exist and may generate bias in the estimates. To solve this problem, Henderson and Quaas (1976) introduced a multivariate mixed-model analysis, which has been used for some time in animal breeding (Meyer; Thompson, 1986; Waldman; Ericsson, 2006). However, in annual crops such as maize, studies using this approach are still rare (Kurosawa et al., 2017; Balestre et al., 2012; Viana et al., 2010; Piepho et al., 2008).
The aim of this study was to estimate the genetic progress undertaken for PE and GY after four recurrent selection cycles and to compare the progress with the gain expected from selection at the end of each cycle, taking the genetic relationship between the progenies into consideration via univariate and multivariate mixed-model approaches.
MATERIAL AND METHODS
Description of the recurrent selection program
The popcorn breeding program began in the 2005/2006 cropping season with the multiplication of the UFLA population and subsequent selection of 400 plants to obtain the UFLA population (base population). With this population, recurrent breeding procedures began with selection and recombination, producing cycles 0_1, 1, 2, 3, and 4 as described below.
The UFLA 0_1 cycle consisted of 40 half-sib progenies obtained by evaluation for the PE trait among the 400 half-sib progenies of the UFLA 0 population and by selection of the 40 best ones. The UFLA 1 (cycle 1) cycle consisted of 536 half-sib progenies obtained from the recombination of the 40 best half-sib progenies for the PE trait of the UFLA 0_1 population, and the UFLA 2 (cycle 2) cycle consisted of 394 half-sib progenies obtained from the recombination of the 42 best half-sib progenies for the PE trait of the UFLA 1 population. The UFLA 3 (cycle 3) cycle consisted of 560 half-sib progenies obtained from the recombination of the 42 best half-sib progenies for the PE and GY traits of the UFLA 2 population, and the UFLA 4 (cycle 4) consisted of 650 half-sib progenies obtained from the recombination of the 24 best half-sib progenies for the PE and GY traits of the UFLA 3 population. At the end of each recombination cycle, equal seed samples of all the plants were taken and stored to represent their respective cycles.
The progenies selected at the end of the evaluations within a cycle were recombined according to the modified Irish method (establishing the recombination lot in a completely randomized block design with three replications) to obtain the next generation. All the evaluations for PE and GY were carried out for all the plants of the recombination lot, respecting their respective progenies. The recombination lots were established on the UFLA experimental farm in the municipality of Ijaci, MG, during the 2007/2008, 2008/2009 and 2009/2010 cropping seasons.
Estimation of the indirect progress of recurrent selection by the uni- and multivariate mixed-models approaches
In the present study, UFLA 1, UFLA 2, UFLA 3, and UFLA 4 populations were used. The traits evaluated were PE and GY. The PE values were obtained by the ratio between the volume of expanded popcorn and the weight of the grains (ml g-1). For each progeny, a ten-gram grain sample was evaluated in an 800 W microwave oven for 150 seconds, according to the modified method described by Matta and Viana (2003). The expanded popcorn was measured in a 1000 ml graduated cylinder. The GY trait was obtained individually by weighing a certain volume of grain per plant on a precision scale. Recombination among selected progenies was performed in the field (isolated in time) according to the modified Irish method at the UFLA experimental farm in the municipality of Ijaci, MG. The recombination involved a randomized block design (RBD) with three replications in the 2007/2008, 2008/2009, and 2009/2010 cropping seasons.
REML/BLUP analyses:
The individual model adopted for the univariate analysis was similar to that presented by Mrode and Thompson (2005) and is given in Equation 1 as:
$y = X\beta + Za + e,$ (1)

where y is the vector of the individual phenotypic data; β is the vector of the fixed effects of the blocks added to the overall mean; a is the vector of the individual additive genetic effects (a ~ NMV(0, G), with $G = A\sigma_a^2$); X and Z are the incidence matrices of the fixed and random effects, respectively; and e is the residual vector (e ~ NMV(0, R), with $R = I\sigma_e^2$).

For the mixed model described above (Equation 1), A is the matrix of additive genetic relationships (the kinship coefficients were computed as twice Malecot's coefficient), 0 is the null vector, I is the identity matrix, $\sigma_a^2$ is the additive variance, and $\sigma_e^2$ is the residual variance.
The matrices of the system of Henderson's mixed model equations can be given in Equation 2 as (Mrode; Thompson, 2005):

$$\begin{bmatrix} \hat{\beta} \\ \hat{a} \end{bmatrix} = \begin{bmatrix} X'R^{-1}X & X'R^{-1}Z \\ Z'R^{-1}X & Z'R^{-1}Z + G^{-1} \end{bmatrix}^{-1} \begin{bmatrix} X'R^{-1}y \\ Z'R^{-1}y \end{bmatrix}. \quad (2)$$
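For small data sets, the system in Equation 2 can be assembled and solved directly with dense linear algebra. A minimal NumPy sketch on toy data (the records, design matrices, relationship matrix A, and variance components below are hypothetical illustrations, not values from this study):

```python
import numpy as np

# Toy example: 4 records, one fixed effect (the overall mean), 3 individuals.
y = np.array([10.0, 12.0, 11.0, 14.0])
X = np.ones((4, 1))                        # fixed-effects design matrix
Z = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [0, 1, 0]], dtype=float)     # maps records to individuals
A = np.array([[1.0, 0.5, 0.5],
              [0.5, 1.0, 0.25],
              [0.5, 0.25, 1.0]])           # additive relationship matrix
sigma_a2, sigma_e2 = 2.0, 4.0              # hypothetical variance components

G_inv = np.linalg.inv(A * sigma_a2)
R_inv = np.eye(4) / sigma_e2

# Left- and right-hand sides of Henderson's mixed model equations (Equation 2).
lhs = np.block([[X.T @ R_inv @ X, X.T @ R_inv @ Z],
                [Z.T @ R_inv @ X, Z.T @ R_inv @ Z + G_inv]])
rhs = np.concatenate([X.T @ R_inv @ y, Z.T @ R_inv @ y])

sol = np.linalg.solve(lhs, rhs)
beta_hat, a_hat = sol[:1], sol[1:]         # BLUE of beta, BLUPs of breeding values
print(beta_hat, a_hat)
```

In practice the variance components are not known and are estimated by REML, as the paper does with ASReml; this sketch only shows the structure of the equations for given components.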
The individual multivariate model adopted was similar to that presented by Mrode and Thompson (2005, p. 85), which can be given for traits 1 (grain yield) and 2 (popping expansion) in Equations 3 and 4, respectively:

$y_1 = X_1\beta_1 + Z_1 a_1 + e_1,$ (3)

$y_2 = X_2\beta_2 + Z_2 a_2 + e_2.$ (4)

In matrix terms, the models shown in Equations 3 and 4 can be combined into Equation 5:

$$\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} X_1 & 0 \\ 0 & X_2 \end{bmatrix} \begin{bmatrix} \beta_1 \\ \beta_2 \end{bmatrix} + \begin{bmatrix} Z_1 & 0 \\ 0 & Z_2 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} + \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} \quad (5)$$
The matrices of the system of multivariate mixed model equations are analogous to those of the univariate approach (Equation 2), with

$$\hat{\beta} = \begin{bmatrix} \hat{\beta}_1 \\ \hat{\beta}_2 \end{bmatrix}, \quad \hat{a} = \begin{bmatrix} \hat{a}_1 \\ \hat{a}_2 \end{bmatrix}, \quad R = I \otimes \begin{bmatrix} \sigma_{e_1}^2 & \sigma_{e_{12}} \\ \sigma_{e_{12}} & \sigma_{e_2}^2 \end{bmatrix}, \quad G = A \otimes \begin{bmatrix} \sigma_{a_1}^2 & \sigma_{a_{12}} \\ \sigma_{a_{12}} & \sigma_{a_2}^2 \end{bmatrix},$$

where $\sigma_{a_{12}}$ is the additive covariance between traits 1 and 2, $\sigma_{e_{12}}$ is the residual covariance between traits 1 and 2, and $\otimes$ is the Kronecker product.
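The block structure of R and G follows directly from the Kronecker products above; a short NumPy sketch (the trait-level (co)variance matrices and the 2 x 2 relationship matrix are hypothetical values chosen only to illustrate the construction):

```python
import numpy as np

# Hypothetical trait-level (co)variance matrices for traits 1 and 2.
G0 = np.array([[4.0, -1.0],
               [-1.0, 2.0]])   # additive variances on the diagonal, covariance off it
R0 = np.array([[6.0,  1.5],
               [1.5,  3.0]])   # residual variances and covariance

A = np.array([[1.0, 0.5],
              [0.5, 1.0]])     # toy additive relationship matrix (2 individuals)
I = np.eye(2)                  # identity for 2 records

# G = A (x) G0 and R = I (x) R0, where (x) denotes the Kronecker product.
G = np.kron(A, G0)
R = np.kron(I, R0)

print(G.shape, R.shape)        # (4, 4) (4, 4)
```

Each 2 x 2 block of G is the trait covariance matrix scaled by the corresponding kinship coefficient, which is exactly what lets the multivariate model borrow information across related individuals and correlated traits.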
The individual univariate and multivariate models involving all the generations (combined analysis) were given by the models described in Equations 1 and 5, respectively. However, in this case, the β vector refers to the fixed effects of the blocks added to the overall mean and of the environment/cycle effects.
The REML method was applied to estimate the covariance components, and their significance was verified by the REML likelihood ratio test (REML-LRT) at the 5% probability level. The heritabilities at the individual level were obtained by Equation 6, and their standard errors were obtained according to Gilmour et al. (2009).

$$h^2 = \frac{\hat{\sigma}_a^2}{\hat{\sigma}_a^2 + \hat{\sigma}_e^2} \quad (6)$$
The predictive accuracy was obtained by Equation 7, as follows:
$$r_{\hat{a}a} = \sqrt{1 - \frac{PEV}{\hat{\sigma}_a^2}} \quad (7)$$
where PEV is the prediction error variance (Mrode; Thompson, 2005).
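Equations 6 and 7 are simple ratios; a quick check in Python using the all-cycles PE variance components from Table 1 (the PEV value is hypothetical, chosen only to illustrate Equation 7):

```python
import math

def heritability(sigma_a2, sigma_e2):
    """Individual-level narrow-sense heritability (Equation 6)."""
    return sigma_a2 / (sigma_a2 + sigma_e2)

def accuracy(pev, sigma_a2):
    """Predictive accuracy from the prediction error variance (Equation 7)."""
    return math.sqrt(1.0 - pev / sigma_a2)

# All-cycles PE variance components from Table 1 (univariate analysis).
sigma_a2, sigma_e2 = 20.27, 23.42
print(round(heritability(sigma_a2, sigma_e2), 2))  # 0.46, as reported

# The PEV below is a hypothetical value used only to illustrate Equation 7.
print(round(accuracy(9.7, sigma_a2), 2))
```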
The expected genetic gains were obtained from BLUP values associated with genetic effects by generation and all generations for PE and GY using the two approaches (the uni- and multivariate ones).
All analyses were performed using the software ASReml 3.0 (Gilmour; Gogel; Cullis, 2009).
Estimation of the genetic progress of recurrent selection by the least squares method
The populations UFLA 0, UFLA 0_1, UFLA 1, UFLA 2, UFLA 3, and UFLA 4 as well as two commercial controls (IAC 112 and IAC 125) were used to conduct this experiment. These cycles, represented by an equal mixture of seeds from all the plants, were evaluated in three environments: Environment 1, the UFLA experimental farm in the municipality of Ijaci, MG, in the 2010/2011 cropping season; Environment 2, the experimental field of the Department of Biology of the UFLA in the municipality of Lavras, MG, in the 2010/2011 cropping season; and Environment 3, the experimental field of the Department of Biology of the UFLA in the municipality of Lavras, MG, in the 2008/2009 cropping season.
In all the environments, a randomized block experimental design was used consisting of 7, 4, and 11 replications in environments 1, 2, and 3, respectively. The plots consisted of two 3-m rows at a spacing of 0.6 m, with five plants m-1. The traits evaluated per plot in the three environments were as follows: GY (in tons per hectare), which was obtained from grain weight per plot followed by subsequent transformation to tons per hectare and corrected both for ideal stand per plot by the covariance method (Vencovsky; Cruz, 1991) and for a standard moisture of 13%; PE (in ml g-1), which was obtained by the ratio between the volume of expanded popcorn and grain weight. In each plot, three samples of 30 g of grain were evaluated in an 800 W microwave oven for three minutes (180 seconds) according to the modified model described by Matta and Viana (2001). The PE was measured in a 1000 ml graduated cylinder.
For the traits evaluated, the basic assumptions for carrying out the analysis of variance (ANOVA) were first verified. Upon meeting these requirements, individual ANOVAs were carried out with additional controls, considering the mean value per plot (Cruz, 2006; Ramalho; Ferreira; Oliveira, 2005). To carry out combined ANOVAs with additional controls, each trait was examined by the Hartley test to verify if the residual mean squares over the environments were homogeneous (Cruz, 2006; Ramalho; Ferreira; Oliveira, 2005). For the estimation of genetic gain attained in the selection cycles, we used the estimates of the population mean in each cycle for the PE and GY traits and applied the least squares method. For carrying out the analyses, SAS® (SAS Institute, 2002) statistical software was used.
RESULTS AND DISCUSSION
Estimate of genetic parameters and progress per selection cycle
In the cycles evaluated (Tables 1 and 2), genetic variability (P<0.05) is observed from the results of the uni- and multivariate approaches. This variability is indispensable for the success of a recurrent selection program as generations advance.
Table 1: Estimates of genetic parameters for the traits of popping expansion (PE) and grain yield (GY) in the UFLA population during cycles 1, 2, 3, and 4 by a univariate approach (including kinship information).
| Parameter | PE (C1) | GY (C1) | PE (C2) | GY (C2) | PE (C3) | GY (C3) | PE (C4) | GY (C4) | PE (all) | GY (all) |
|---|---|---|---|---|---|---|---|---|---|---|
| Additive genetic variance | 12.18* | - | 6.35* | 317.2* | 64.57* | 74.76* | 9.90* | 170.95* | 20.27* | 181.05* |
| Residual variance | 14.53 | - | 18.61 | 432.97 | 13.32 | 314.20 | 29.54 | 245.11 | 23.42 | 308.51 |
| Heritability | 0.45±0.16 | - | 0.25±0.15 | 0.42±0.17 | 0.82±0.21 | 0.19±0.14 | 0.25±0.11 | 0.41±0.15 | 0.46±0.07 | 0.37±0.07 |
| Accuracy | 0.70 | - | 0.55 | 0.67 | 0.89 | 0.50 | 0.57 | 0.68 | 0.72 | 0.60 |
| Mean | 17.94 (0.52) | - | 22.50 (0.48) | 73.66 (3.00) | 25.19 (1.19) | 60.45 (2.16) | 24.45 (0.54) | 57.36 (2.01) | - | - |
| Mean (adj) | 17.94 (0.55) | - | 20.67 (0.71) | 73.52 (2.31) | 22.61 (0.75) | 56.87 (2.23) | 22.98 (0.76) | 51.81 (2.55) | - | - |
* Significant and ns not significant by the REML-LRT test, with $\chi^2_{(0.05;1)} = 3.84$.
Table 2: Estimates of genetic parameters for the traits of popping expansion (PE) and grain yield (GY) in the UFLA population during cycles 2, 3, and 4 by a multivariate approach (including the kinship information).
| Parameter | PE (C2) | GY (C2) | PE (C3) | GY (C3) | PE (C4) | GY (C4) | PE (all) | GY (all) |
|---|---|---|---|---|---|---|---|---|
| Additive genetic variance | 6.46* | 320.53* | 46.42* | 76.35* | 10.02* | 172.04* | 21.60* | 182.18* |
| Residual variance | 18.51 | 430.12 | 28.52 | 312.98 | 29.44 | 244.23 | 27.35 | 303.54 |
| Heritability | 0.26±0.15 | 0.43±0.17 | 0.62±0.17 | 0.19±0.14 | 0.25±0.12 | 0.41±0.15 | 0.44±0.08 | 0.37±0.07 |
| Accuracy | 0.87 | 0.68 | 0.79 | 0.633 | 0.61 | 0.711 | 0.63 | 0.71 |
| Genetic correlation (PE × GY) | 0.05 | | 0.29 | | -0.336 | | 0.11 | |
| Residual correlation (PE × GY) | 0.37 | | 0.20 | | 0.295 | | 0.15 | |
| Mean | 22.49 (0.48) | 73.60 (3.01) | 25.20 (1.07) | 59.18 (2.53) | 24.47 (0.54) | 57.37 (2.01) | - | - |
| Mean (adj) | 21.66 (1.11) | 71.14 (3.36) | 23.25 (0.88) | 56.23 (2.73) | 21.22 (0.86) | 53.75 (2.65) | - | - |
* Significant and ns not significant by the REML-LRT test, with $\chi^2_{(0.05;1)} = 3.84$. Correlation estimates refer to the PE × GY trait pair within each cycle.
The estimates of the additive genetic ($\sigma_a^2$) and residual ($\sigma_e^2$) variances from the univariate and multivariate analyses were similar (Tables 1 and 2); the multivariate approach was slightly superior to the univariate approach (greater estimates of $\sigma_a^2$ and lower $\sigma_e^2$), except for PE in cycle 3.
Experimental precision was verified by the estimates of the predictive accuracy, which allows us to compare the approaches to identify which approach provides more accurate estimates. The multivariate approach in general exhibited greater precision (greater predictive accuracy) in all the cycles, except for PE during cycle 3 and in the combined analysis. However, this lower precision in the combined analysis is due to the four cycles considered in the univariate analysis (C1, C2, C3, and C4), whereas multivariate analysis considers only three (C2, C3, and C4), which are the cycles in which PE and GY are evaluated (Tables 1 and 2). When we analyze only the three cycles (C2, C3, and C4) by the univariate approach, which is the more correct comparison, the results of the multivariate analysis were 1% better (data not shown). The multivariate analysis revealed estimates of predictive accuracy that were 58% and 1.5% better than those from the univariate analysis during cycle 2 for PE and GY, respectively; 26.6% better estimates during cycle 3 for GY, 7% and 4.5% better estimates during cycle 4 for PE and GY, respectively; and 18.3% better means of the cycles for GY (combined analysis), indicating that, in these cases, the multivariate analysis surpassed the univariate analysis, despite being penalized by one less cycle. The estimates of heritabilities between the two analyses were similar, with distortion only in cycle 3 for PE, in which the univariate analysis proved to be more advantageous. The estimates of heritability for PE were of medium to high magnitude, oscillating from 0.25 to 0.82 from the univariate approach and from 0.26 to 0.62 from the multivariate approach. For GY, the estimates were of low to medium magnitude, ranging from 0.19 to 0.42 and 0.19 to 0.43 from the univariate and multivariate approaches, respectively.
As the selections made at the beginning of the program prioritized the PE trait apart from the GY trait, gains for the two traits simultaneously will depend on the genetic relationship between them. Consequently, the estimates of genetic correlation (r g ) will show how related the traits are. The estimates of r g were obtained by the multivariate approach (Table 2), which indicated a negative association only during cycle 4. The overall mean of the cycles (combined analysis), which corrects the effects of the cycle and enables a mean estimate of the genetic correlation to be obtained, was r g = 0.11; as such, on average, the traits were independent.
The gains expected from selection were obtained using regression analysis, with the phenotypic mean values adjusted per cycle in the combined analysis for both approaches. From the univariate approach, the gains per selection cycle were 2.37% and -3.7% for PE and GY, respectively, and from the multivariate approach, the gains were -0.33% and -3.74% for PE and GY, respectively.
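The per-cycle gain can be estimated by regressing the adjusted cycle means on cycle number and expressing the slope as a percentage. A NumPy sketch using the univariate adjusted PE means from Table 1 (this reproduces the method, not the exact percentages reported above, which were computed from the BLUP-based analysis):

```python
import numpy as np

# Adjusted PE means per cycle from Table 1 (univariate analysis).
cycles = np.array([1.0, 2.0, 3.0, 4.0])
pe_adj = np.array([17.94, 20.67, 22.61, 22.98])

# Least-squares fit: pe = b0 + b1 * cycle.
b1, b0 = np.polyfit(cycles, pe_adj, 1)

# Gain per cycle, here expressed as a percentage of the intercept.
gain_pct = 100.0 * b1 / b0
print(round(b1, 2))       # slope in ml g-1 per cycle
print(round(gain_pct, 1))
```

A positive slope indicates genetic progress across cycles; the same fit on the GY means would show the absence of gain discussed in the text.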
Direct gain from selection (least squares method)
From the ANOVA (data not shown), significant variation was observed between the selection cycles (P<0.01) for the PE trait; no variation in the cycle × environment interaction was observed. For GY, significant differences were not observed. Figure 1 represents the evaluation of all the selection cycles undertaken across the three environments using the mean values adjusted by the combined analysis for PE and GY. An increase of 1.4% was observed for PE, and a stable response was observed for GY (Figure 1).
Discussion
In the approaches evaluated, the accuracies were in the range of 55% to 89% from the univariate analysis and from 61% to 87% from the multivariate analysis, which indicates moderate to high precision (Resende; Duarte, 2007). Predictive accuracy increases to the extent that the absolute deviations between the parametric genetic values and the predicted genetic values are lower, that is, the lower the prediction error variance (PEV) is, the more accurate the estimator (Resende; Duarte, 2007). In this study, with few exceptions, the multivariate approach provided more accurate estimates.
According to Piepho et al. (2008), the application of the multivariate BLUP method has advantages over the univariate method when the traits involved in the analysis exhibit high genetic correlation. Nevertheless, some authors have reported that the increase in precision obtained using the multivariate BLUP method is proportional to the absolute difference between the genetic and environmental correlations of the traits (Schaeffer, 1984; Thompson; Meyer, 1986; Resende, 2007). Additionally, Bauer and Léon (2008) similarly confirmed via simulations for two traits with different heritabilities (0.3 and 0.7) and via scenarios of genetic and residual correlations that multivariate analysis exhibited lower prediction error and that the superiority of multivariate analysis in relation to univariate analysis is more expressive when the traits are negatively correlated. In this context, the multivariate method for the estimation of genetic parameters in popcorn would be preferred because of the existence of negative correlations between PE and GY (frequently reported in the literature), to a greater or lesser degree of association, and by the difference in heritability between the two traits (Vieira et al., 2016; Freitas et al., 2014). This phenomenon occurs because the multivariate model specifically considers the environmental and genetic covariances that exist between the traits, minimizing biases that can occur in individual analyses, especially from sequential selection (Resende, 2007).
The estimates of heritability between the two approaches were very similar in this study, although the multivariate approach provided errors that were less than or at least equal to those provided by the univariate approach. Regarding the analysis of all the cycles (combined analysis), the multivariate approach provided better estimates and smaller errors. Viana et al. (2010), working with selection cycles in popcorn for PE and GY, also recommended the multivariate approach, although they did not report superiority of the multivariate approach in relation to the univariate (individual model) approach. The authors further discussed that the heritabilities of the two traits were similar and the traits showed favorable correlations, and for this reason the multivariate approach lacked superiority.
In relation to genetic gains per cycle, there was a gain for PE from the univariate approach (Figure 2), whereas for GY, gain was observed in a negative sense from both approaches. This finding was expected because selection of the cycles gave priority to PE in the recombination unit, considering GY as only part of the second cycle. Selection was made for grain weight per plant within the highest-yielding families in tandem, which, in a certain way, is subject to great environmental influence.
The estimated genetic gains slightly distorted the gains attained when we analyzed the cycles in the same experiment (Figure 2). For PE, the univariate approach overestimated the gains while the multivariate approach underestimated them when we compare those gains with the gains achieved; nevertheless, the multivariate analysis was penalized in cycle 1, since this analysis did not have involve that cycle. For GY, both approaches indicated a reduction, whereas evaluation of the gains achieved in the field indicated stability as generations advanced. A question that remains is how much the genotype × environment interaction interferes with the estimates of genetic progress because, when we estimate the progress in an indirect manner, the environmental effect is confounded in the cycle, and when we analyze the progress achieved after all the cycles in some environments, the interaction can mask the results. Following this line of reasoning, Faria et al. (2013) estimated the genetic progress after 22 years of common bean breeding via the EMBRAPA ARROZ E FEIJÃO program for traits in 20 environments. Those authors reported that the genotype × environment interaction was high, which interferes with the estimation of genetic progress.
Researchers at the Universidade Estadual do Norte Fluminense Darcy Ribeiro have been developing a recurrent selection program for some time, prioritizing the PE and GY traits; this program is now in its seventh cycle. As the generations advanced, different strategies were adopted in the program: mass selection in cycle 0; S1 families in cycle 2; half-sib families in cycle 3; and full-sib families in cycles 1, 4, 5, and 6. The program has been using the index of Mulamba and Mock (1978) as a selection strategy. During selection in the sixth cycle, however, Freitas et al. (2013) compared several selection strategies and concluded that the Mulamba and Mock (1978) index is the most adequate, although the greatest gains were estimated by the univariate REML/BLUP method. Freitas et al. (2014) then evaluated all the cycles from 0 to 6 and estimated both the genetic parameters of the sixth cycle and the progress from selection for the seventh cycle, obtaining expressive gains for PE and GY.
In summary, greater attention should be given to the GY trait in future cycles together with PE; that is, other breeding strategies that consider the GY and PE traits simultaneously should be adopted in the popcorn breeding program of the UFLA. This recommendation is made because phenotypic selection for PE in the recombination unit is effective at increasing PE but not at increasing GY, considering both the gains achieved and those estimated via univariate analyses. An alternative would be to use indices obtained by multivariate approaches, which allow possible existing genetic correlations to be exploited with more accurate predictions; another alternative could be the use of the Mulamba and Mock (1978) index. However, additional attention should be given to economic factors.
CONCLUSIONS
The multivariate mixed-model approach is preferred to the univariate one because the former is more informative and accurate for the estimation of both genetic parameters and selection gains in popcorn crops with respect to PE and GY traits. Genetic gain for PE occurred as a result of our popcorn recurrent selection program. This gain, by contrast, was not observed for GY using different estimation strategies. Both evaluation and selection for the PE and GY traits in the recombination unit are effective at increasing PE but are not effective at increasing GY.
REFERENCES
BALESTRE, M. et al. Applications of multi-trait selection in common bean using real and simulated experiments. Euphytica, 189(2):225-238, 2012.
BAUER, A. M.; LÉON, J. Multiple-trait breeding values for parental selection in self-pollinating crops. Theoretical and Applied Genetics, 116(2):235-242, 2008.
BRESEGHELLO, F. et al. Results of 25 years of upland rice breeding in Brazil. Crop Science, 51(3):914-923, 2011.
CRUZ, C. D. Programa Genes: Análise multivariada e simulação. Viçosa, MG: UFV, 2006. 175p.
FARIA, L. C. et al. Genetic progress during 22 years of improvement of carioca-type common bean in Brazil. Field Crops Research, 142:68-743, 2013.
FREITAS, I. L. J. et al. Ganho genético avaliado com índices de seleção e com REML/Blup em milho-pipoca. Pesquisa Agropecuária Brasileira, 48(11):1464-1471, 2013.
FREITAS, I. L. J. et al. Genetic gains in the UENF-14 popcorn population with recurrent selection. Genetics and Molecular Research, 13(1):518-527, 2014.
GILMOUR, A. et al. ASReml user guide, release 3.0. Hemel Hempstead: VSN International, 2009. 320p.
HENDERSON, C. R. General flexibility of linear model techniques for sire evaluation. Journal of Dairy Science, 57:963-972, 1974.
HENDERSON, C. R.; QUAAS, R. L. Multiple trait evaluation using relatives’ records. Journal of Animal Science, 43:1188-1197, 1976.
KUROSAWA, R. N. F. et al. Multivariate approach in popcorn genotypes using the Ward MLM strategy: Morpho-agronomic analysis and incidence of Fusarium spp. Genetics and Molecular Research, 16(1):1-12, 2017.
MATTA, F. P.; VIANA, J. M. S. Eficiências relativas de seleção entre e dentro de famílias de meios-irmãos em população de milho-pipoca. Ciência e Agrotecnologia, 27(3):548-556, 2003.
MEYER, K.; THOMPSON, R. Sequential estimation of genetic and phenotypic parameters in multitrait mixed model analysis. Journal of Dairy Science, 69(10):2696-2703, 1986.
MRODE, R. A.; THOMPSON, R. Linear models for the prediction of animal breeding values. Wallingford: CABI, 2005. 344p.
MULAMBA, N. N.; MOCK, J. J. Improvement of yield potential of the Eto Blanco maize (Zea mays L.) population by breeding for plant traits. Egyptian Journal of Genetics and Cytology, 7:40-51, 1978.
PENA, G. F. et al. Comparação de testadores na seleção de famílias S3 obtidas da variedade UENF-14 de milho-pipoca. Bragantia, 75(2):135-144, 2016.
PIEPHO, H. P. et al. BLUP for phenotypic selection in plant breeding and variety testing. Euphytica, 161(1/2):209-228, 2008.
RAMALHO, M. A. P.; FERREIRA, D. F.; OLIVEIRA, A. C. Experimentação em genética e melhoramento de plantas. Lavras: UFLA, 2005. 322p.
RESENDE, M. D. V. Matemática e estatística na análise de experimentos e no melhoramento genético. Colombo: EMBRAPA Florestas, 2007. 362p.
RESENDE, M. D. V. de; DUARTE, J. B. Precisão e controle de qualidade em experimentos de avaliação de cultivares. Pesquisa Agropecuária Tropical, 37(3):182-194, 2007.
RODOVALHO, M. et al. Genetic evaluation of popcorn families using a Bayesian approach via the independence chain algorithm. Crop Breeding and Applied Biotechnology, 14:261-265, 2014.
SCHAEFFER, L. R. Sire and cow evaluation under multiple traits model. Journal of Dairy Science, 67(7):1567-1580, 1984.
SOARES, A. A.; RAMALHO, M. A. P.; SOUSA, A. F. de. Estimativa do progresso genético obtido pelo programa de melhoramento de arroz irrigado da EPAMIG, na época de oitenta. Pesquisa Agropecuária Brasileira, 29:97-104, 1994.
STATISTICAL ANALYSIS SYSTEM INSTITUTE. SAS/STAT user’s guide. Version 9. Cary: SAS Institute, 2002.
THOMPSON, R.; MEYER, K. A review of theoretical aspects in the estimation of breeding values for multi-trait selection. Livestock Production Science, 15(4):299-313, 1986.
VENCOVSKY, R.; CRUZ, C. D. Comparação de métodos de correção do rendimento de parcelas com estandes variados: I. Dados simulados. Pesquisa Agropecuária Brasileira, 26(8):647-657, 1991.
VIANA, J. M. S. et al. Multi-trait BLUP in half-sib selection of annual crops. Plant Breeding, 129(6):599-604, 2010.
VIEIRA, R. A. et al. Selection index based on the relative importance of traits and possibilities in breeding popcorn. Genetics and Molecular Research, 15(2), 2016.
WALDMANN, P.; ERICSSON, T. Comparison of REML and Gibbs sampling estimates of multi-trait genetic parameters in Scots pine. Theoretical and Applied Genetics, 112(8):1441-1451, 2006.
Received: June 18, 2017; Accepted: January 31, 2018
*Corresponding author: [email protected]
This is an open-access article distributed under the terms of the Creative Commons Attribution License | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.82212233543396, "perplexity": 4205.4091550171115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375529.62/warc/CC-MAIN-20210306223236-20210307013236-00307.warc.gz"} |
http://mathhelpforum.com/algebra/205882-how-multiply-algebraic-fractions.html | # Math Help - How to multiply algebraic fractions?
1. ## How to multiply algebraic fractions?
x+1/3 * 3x+3/2
As I would do with a normal fraction (I was told to treat algebraic fractions as normal ones) I would multiply the numerator and denominator to get;
(x+1)(3x+3)/6
My answer was wrong. Why? And how do you do it? Step by step would be appreciated.
2. ## Re: How to multiply algebraic fractions?
To be clear, your original equation is:
$\frac {x+1} 3 \times \frac {3x+3} 2$
$\frac {x+1} 3 \times \frac {3x+3} 2 = \frac {(x+1)(3x+3)} 6$
but note that it can be simplified: $\frac {(x+1)(3x+3)} 6 = \frac {(x+1)3(x+1)} 6= \frac {(x+1)^2} 2$
What answer did they give you?
3. ## Re: How to multiply algebraic fractions?
The one you arrived on, the simplification.
Hmm, where did the 3 go and why did the denominator turn into 2?
Thanks a lot!
4. ## Re: How to multiply algebraic fractions?
Originally Posted by Ashir
Hmm, where did the 3 go and why did the denominator turn into 2?
Thanks a lot!
Simplifying fractions - the 3's in the numerator and denominator cancel out:
$\frac {(x+1)(3x+3)} 6 = \frac {(x+1)3(x+1)} 6= \frac {3(x+1)^2} { 3 \times 2} = \frac {(x+1)^2} 2$
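A quick exact check of the simplification above (an addition to the thread, using plain Python fractions rather than a computer algebra system): both sides are quadratics in x, so agreement at three or more points proves they are identical.

```python
from fractions import Fraction

def lhs(x):
    # the product from the question: (x+1)/3 * (3x+3)/2, computed exactly
    return Fraction(x + 1, 3) * Fraction(3 * x + 3, 2)

def rhs(x):
    # the simplified form: (x+1)^2 / 2
    return Fraction((x + 1) ** 2, 2)

# Both sides are quadratics, so agreeing at 3+ points proves the identity
for x in (0, 1, -2, 5, 10):
    assert lhs(x) == rhs(x)
print("simplification verified:", lhs(5), "=", rhs(5))   # 18 = 18
```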
5. ## Re: How to multiply algebraic fractions?
Ah, right. Thanks!
Is there a way for me to tell whether I should have factorized the bracket? I didn't know I should have factorized 3x+3.
6. ## Re: How to multiply algebraic fractions?
In general you want to get rid of common factors that are in both numerator and denominator. As for factoring - it's best to group like terms when you can. Hence 3(x+1) is a better representation than 3x+3.
7. ## Re: How to multiply algebraic fractions?
What do you mean by 'group like terms'? How did you do this here?
8. ## Re: How to multiply algebraic fractions?
The expression 3x+3 has the number '3' in it twice. So we say that 3 and 3x have "like terms," and we can pull the 3 out so it appears only once: 3x+3 = 3(x+1).
9. ## Re: How to multiply algebraic fractions?
I know what like terms are but how is that grouping if you extracted the 3? Isn't that the opposite?
10. ## Re: How to multiply algebraic fractions?
Actually I got it now, you're grouping the x+1's. Thanks for the help! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9833807945251465, "perplexity": 1663.1228528356437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268660.14/warc/CC-MAIN-20140728011748-00442-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://mathoverflow.net/questions/331691/norm-of-two-operators-on-l2-mathbbz-2-mathbbz-2-different | # Norm of two operators on $l^2(\mathbb{Z}_{2}*\mathbb{Z}_{2})$ different
In my research I encountered the following (very concrete) question: Consider the (discrete) group $$G:=\mathbb{Z}_{2}*\mathbb{Z}_{2}$$. Let $$s\text{, }t\in G$$ be the generating elements and define for $$\theta\in\left(-\frac{\pi}{2}\text{, }\frac{\pi}{2}\right)$$ the bounded operator $$\begin{eqnarray} X_{\theta}:=-8\tan\left(\theta\right)\cdot\text{id}+T_{s}+T_{t}\in{\cal B}\left(l^{2}\left(G\right)\right) \end{eqnarray}$$ on the Hilbert space $$l^{2}\left(G\right)$$ where $$T_{s}\delta_{g}:=\delta_{sg}$$ for every $$g\in G$$ (and $$T_{t}$$ is defined analogously). Let $$P\in{\cal B}\left(l^{2}\left(G\right)\right)$$ be the projection onto $$\mathbb{C}\delta_{e}$$ where $$e\in G$$ is the neutral element. I claim that $$\begin{eqnarray} \left\Vert X_{\theta}\right\Vert \neq\left\Vert X_{\theta}-2\tan\left(\theta\right)P\right\Vert \text{,} \end{eqnarray}$$ unless $$\theta=0$$. At first glance this looks obvious but I could not show it so far.
• I believe you can at least see that $\|X_\theta\|\leq \|X_\theta- tP\|$ for any $t\in \mathbb{R}$ by looking at the spectral measure for $X_\theta^*X_\theta$ and using the fact that there are no minimal projections in the group von Neumann algebra to get a vector perpendicular to $\delta_e$ almost attaining the norm. – J. E. Pascoe May 16 at 15:31
• Thanks for your response! Assuming equality of both norms and using your suggestion, I can show that $t \mapsto \left\Vert X_\theta -tP \right\Vert$ is constant on the interval $[16\tan\left(\theta\right), -2\tan\left(\theta\right)]$ (assuming $\theta \leq 0$). Do you think that could help in deducing a contradiction? – worldreporter14 yesterday
• That's unclear, but if the claim is true, that's probably the way to go. I don't think that you will get a contradiction without some amount of understanding of the group von Neumann algebra. (That is, this is not likely to be some property of all von Neumann algebras.) – J. E. Pascoe yesterday
• The group in question is isomorphic to ${\bf Z}\rtimes {\bf Z}_2$ (the latter group can be viewed as acting on ${\bf Z}$ by translations and by the flip $n\leftrightarrow -n)$. There is a noncommutative version of the Fourier transform that yields an isomorphism $\ell^2(G) \leftrightarrow L^2([0,1]; {\bf C}^2)$ with corresponding isomorphism of von Neumann algebras ${\rm VN}(G) \cong L^\infty\otimes {\bf M}_2$. Perhaps the explicit matricial picture can be used to carry out some of these calculations? – Yemon Choi yesterday
## 1 Answer
The claim is true.
Any difference in norm must be picked up on the span of $$(T_s+T_t)^ne_0.$$ So we will apply perturbation theory on that subspace. The value of $$\langle(T_s+T_t)^ne_0,e_0\rangle$$ is $$\binom{n}{n/2}$$ when $$n$$ is even and zero otherwise. Note that $$A=T_s + T_t$$ is self-adjoint. Moreover, the spectrum of $$A$$ contains $$2$$ and $$-2$$, as the limits of $$\|(2+A)^ne_0\|^{1/n}$$ and $$\|(2-A)^ne_0\|^{1/n}$$ are both $$4$$ by Stirling-type estimates. (In fact, for each $$n$$ the two quantities are equal. This says that the spectral radii of the operators $$2+A$$ and $$2-A$$ are equal to $$4.$$)
Consider the function $$F_A(z) = \langle (T_s+T_t-z)^{-1}e_0,e_0 \rangle.$$ The set of points where $$F_A$$ continues analytically through $$\mathbb{R}$$ is exactly the complement of the spectrum. Expanding $$F_A$$ at infinity gives: $$F_A(z) = -\frac{1}{z}\sum \binom{2n}{n} \frac{1}{z^{2n}}.$$ Now consider $$\lim_{z\rightarrow 2^+} F_A(z)$$ and $$\lim_{z\rightarrow -2^-} F_A(z).$$ Using Stirling-type estimates, $$\lim_{z\rightarrow 2^+} F_A(z)= -\infty,$$ and, as the function is odd, $$\lim_{z\rightarrow -2^-} F_A(z) =\infty.$$ By the Aronszajn-Krein formula, the spectrum of $$A + \alpha P$$ is governed by $$F_{A+\alpha P}=\frac{F_A}{1+\alpha F_A}.$$ Note that the spectrum will only change if $$F_A(z) = -\frac{1}{\alpha}$$ has a real solution in the complement of the spectrum of $$A.$$ (Moreover, it will only change by one eigenvalue.)
So, now we consider the spectrum of $$4\alpha +A$$ and compare it to $$4\alpha+A + \alpha P.$$ If $$\alpha >0,$$ the extra eigenvalue of $$A+\alpha P$$ appears when $$F_A(z) = -1/\alpha$$ which happens to the right of the spectrum, and therefore the norm increases. Similarly, the norm increases in the other case.
Note that it is not true for a general $$\alpha + A + \beta P,$$ and has a somewhat subtle dependence on your choice of problem. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 46, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9483262300491333, "perplexity": 144.5716067741422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255837.21/warc/CC-MAIN-20190520081942-20190520103942-00118.warc.gz"} |
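A numerical sanity check of the claim (an addition, not part of the answer): as Yemon Choi's comment suggests, the Cayley graph of $$\mathbb{Z}_2 * \mathbb{Z}_2$$ with generators $$s, t$$ is a bi-infinite path, so $$T_s+T_t$$ acts as the adjacency operator of $$\mathbb{Z}$$. Truncating to a finite path approximates both norms; the truncation size and the value of $$\theta$$ below are arbitrary choices for illustration.

```python
import numpy as np

N = 801                # path truncation size (odd, so there is a center site)
theta = -0.3           # any nonzero theta in (-pi/2, pi/2); value is arbitrary
tanth = np.tan(theta)

# On l^2(Z_2 * Z_2), T_s + T_t is the adjacency operator of a bi-infinite
# path (the Cayley graph); truncate it to N sites.
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
P = np.zeros((N, N))
P[N // 2, N // 2] = 1.0          # projection onto delta_e (center site)

X  = -8 * tanth * np.eye(N) + A  # X_theta
Xp = X - 2 * tanth * P           # X_theta - 2 tan(theta) P

op_norm = lambda M: np.abs(np.linalg.eigvalsh(M)).max()  # self-adjoint norm
nX, nXp = op_norm(X), op_norm(Xp)
print(nX, nXp)   # the norms differ: the rank-one term creates an eigenvalue
                 # outside [-8 tan(theta) - 2, -8 tan(theta) + 2]
```

The eigenvector that pops out of the band is exponentially localized at the center site, so the truncation error is negligible here; the new top eigenvalue matches the rank-one bound-state formula $$-8\tan\theta + \sqrt{4 + 4\tan^2\theta}$$ for this $$\theta$$.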
http://link.springer.com/article/10.1140%2Fepja%2Fi2012-12029-2 | , 48:29,
Open Access
Date: 12 Mar 2012
# Simultaneous measurement of neutron-induced capture and fission reactions at CERN
## Abstract
The measurement of the capture cross-section of fissile elements, of utmost importance for the design of innovative nuclear reactors and the management of nuclear waste, faces particular difficulties related to the $$\gamma$$-ray background generated in the competing fission reactions. At the CERN neutron time-of-flight facility n_TOF we have combined the Total Absorption Calorimeter (TAC) capture detector with a set of three $$^{235}$$U loaded MicroMegas (MGAS) fission detectors for measuring simultaneously two reactions: capture and fission. The results presented here include the determination of the three detection efficiencies involved in the process: $$\varepsilon_{TAC}(n,f)$$, $$\varepsilon_{TAC}(n,\gamma)$$ and $$\varepsilon_{MGAS}(n,f)$$. In the test measurement we have succeeded in measuring simultaneously, with a high total efficiency, the $$^{235}$$U capture and fission cross-sections, disentangling accurately the two types of reactions. The work presented here proves that accurate capture cross-section measurements of fissile isotopes are feasible at n_TOF.
Communicated by R. Krücken | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8697384595870972, "perplexity": 3346.760784077654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246639674.12/warc/CC-MAIN-20150417045719-00203-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/investigation-into-infinite-limits.301984/ | # Investigation into infinite limits
#### Drake13
I was looking into infinite limits and sequences and looked at the Limit
Lim {f(n+1)/f(n)}
n->∞
I was looking to see if there were any significant patterns and I couldn't even solve it. It was proposed to me by a friend. Am I just having a bad case of limit block, or is it as difficult as it seems??
p.s. f(n+1) here means f with subscript (n+1), and f(n) means f with subscript n
#### csprof2000
If I'm reading your notation correctly, how difficult it is depends on the form of f(n).
#### Drake13
it can be what you like and by difficult i mean mentally challenging not solvability haha
#### HallsofIvy
Homework Helper
Then what in the world is your question?
#### 2^Oscar
surely the nature of the limit is defined by the nature of f(n)?
#### csprof2000
Let f(n) = 1 for all n. Then the limit is 1.
Let f(n) = c^n. Then f(n+1)/f(n) = c, so the limit is c.
Let f(n) = p(n) for a polynomial p. Then the limit is 1.
Let f(n) = sin(n). Then the limit is undefined.
Let f(n) = (-1)^n. Then the limit is -1.
etc.
These were all pretty easy. I guess the answer to your question, then, may be that they're, in general, not very hard at all. Of course, I could say
f(n) = (n^n)sin(ln(n))/(n!)(ln n!)
Give that one a shot and let me know how it turns out.
#### Drake13
since in the infinite series n must be greater than or equal to 1, we can take the example n=10 and say that
lim f_n / f_(n+1)
n->10
is equal to the series
(f_1, f_2, f_3, f_4, f_5, f_6, f_7, f_8, f_9, f_10) / (f_1, f_2, f_3, f_4, f_5, f_6, f_7, f_8, f_9, f_10, f_11)
with the common parts of each series eliminated, what is left is
1/f_11
thus if we say that n=x, where both n and x are integers greater than or equal to 1, so that
lim f_n / f_(n+1) = 1/f_(x+1)
n->x
that way we can set x=∞
lim f_n / f_(n+1) = 1/f_(∞+1) = 1/∞ = 0
n->∞
therefore
Lim {f(n+1)/f(n)} = 0
n->∞
Q.E.D??
#### csprof2000
Ummm, I think you have something of a misunderstanding of what sequence, series, etc. mean and what lim(f_n / f_n+1) means.
A sequence is a function from the (positive...let's say) integers to the real numbers.
f: N -> R
A sequence can have a limit as n goes to infinity if f(n) gets arbitrarily close to some finite value as you make n arbitrarily large.
A series is a limit of partial sums of a sequence f(n) as the number of terms in the partial sums goes to infinity.
As a counterexample to:
Lim {f(n+1)/f(n)} = 0
n->∞
Try f(n) = n. Then f(n+1)/f(n) = (n+1)/n = 1 + 1/n, and the limit of this is 1, not zero.
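For anyone who wants to experiment, the ratio is easy to tabulate numerically for different choices of f (the sequences below are just examples):

```python
def ratio(f, n):
    """The ratio f(n+1)/f(n) at a given n."""
    return f(n + 1) / f(n)

n = 10**6
print(ratio(lambda k: 2.0**k, 50))       # geometric, c = 2: ratio is exactly 2
print(ratio(lambda k: float(k)**3, n))   # polynomial: ratio -> 1
print(ratio(lambda k: float(k), n))      # f(n) = n: ratio = 1 + 1/n -> 1
```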
#### Drake13
Hmm, ok, I think I might have misunderstood the initial equation, but your explanation makes sense. Thank you kindly!
https://pure.mpg.de/pubman/faces/ViewItemOverviewPage.jsp?itemId=item_2391940 | English
# Item
ITEM ACTIONSEXPORT
Released
Journal Article
#### Global dynamics of a Yang-Mills field on an asymptotically hyperbolic space
##### MPS-Authors
Bizon, Piotr
AEI-Golm, MPI for Gravitational Physics, Max Planck Society;
1410.4317.pdf
(Preprint), 2MB
##### Citation
Bizon, P., & Mach, P. (2017). Global dynamics of a Yang-Mills field on an asymptotically hyperbolic space. Transactions of the American Mathematical Society, 369(3), 2029-2048. doi:10.1090/tran/6807.
Cite as: http://hdl.handle.net/11858/00-001M-0000-002C-5CCE-6
##### Abstract
We consider a spherically symmetric (purely magnetic) SU(2) Yang-Mills field propagating on an ultrastatic spacetime with two asymptotically hyperbolic regions connected by a throat of radius $\alpha$. Static solutions in this model are shown to exhibit an interesting bifurcation pattern in the parameter $\alpha$. We relate this pattern to the Morse index of the static solution with maximal energy. Using a hyperboloidal approach to the initial value problem, we describe the relaxation to the ground state solution for generic initial data and unstable static solutions for initial data of codimension one, two, and three. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8542104363441467, "perplexity": 1580.5829132609879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516071.83/warc/CC-MAIN-20181023044407-20181023065907-00214.warc.gz"} |
https://www.physicsforums.com/threads/relativistic-tips-of-a-propeller.861028/ | # B Relativistic tips of a propeller
1. Mar 7, 2016
### xpell
Hi! Yes, I know that faster-than-light travel is impossible. But please stay with me for a while to help me understand this. Let's imagine we take some unobtainium and build a 12-km-radius propeller, attached to an engine able to accelerate it up to 250,000 rpm (like a turbocharger, or quite a few large industrial motors and turbines). Then we plug this engine into the nearest sun or whatever and smoothly accelerate the thing.
According to my (classical) calculations, the tips of the propeller would reach a linear velocity of c slightly under 239,000 rpm, and we'd still have over 11,000 rpm in reserve to (classically) accelerate them above c. To keep the thing stable, we could maybe add another counter-rotating propeller, as in Kamov-style helicopters.
I know Relativity totally forbids this. But... what would happen as we approach the 239,000 rpm mark to make it impossible, please?
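The classical number in the post checks out: with v = ωr, the tip of a 12 km blade reaches c at roughly 238,600 rpm (a quick sketch, using the exact value of c):

```python
import math

c = 299_792_458.0      # speed of light, m/s
r = 12_000.0           # blade radius: 12 km, in metres

omega = c / r                       # angular speed at which the tip speed is c
rpm = omega * 60 / (2 * math.pi)    # convert rad/s to revolutions per minute
print(f"{rpm:,.0f} rpm")            # 238,567 rpm -- just under 239,000
```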
2. Mar 7, 2016
### Staff: Mentor
It happened before you even started spinning up the thing, when you said "unobtanium." No "obtanium" is perfectly rigid. In fact, "perfect rigidity" is impossible in relativistic physics. Your propeller will inevitably start to deform/bend/break before it reaches a relativistically significant speed.
3. Mar 7, 2016
### xpell
Yes, I know. But it was intended as a "thought experiment", precisely to understand what would happen (relativistically) and learn not only that it is not possible (which I already know), but how it is not possible. That's why I chose the unobtainium. :) And the sun plug too, not to mention the engine.
4. Mar 7, 2016
### HallsofIvy
I presume you know that as something's speed gets greater and greater, its total energy gets greater and greater (I'm trying to avoid talking about "mass" as getting greater). That means that more and more force would be required to increase the speed at all. That is essentially the same reason an object, with an unlimited fuel supply still cannot move faster than the speed of light.
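The "more and more energy" point can be made quantitative: the total energy of a mass m moving at speed v is γmc² with γ = 1/√(1 − v²/c²), and γ diverges as v → c. A short illustration (the speeds are chosen arbitrarily):

```python
import math

def gamma(beta):
    """Lorentz factor for a speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# The energy per unit rest mass, gamma * c^2, grows without bound as v -> c
for beta in (0.5, 0.9, 0.99, 0.999, 0.9999):
    print(f"v = {beta}c  ->  gamma = {gamma(beta):9.3f}")
```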
5. Mar 7, 2016
### Orodruin
Staff Emeritus
To put it differently: Any material is being held together by interactions which propagate (at most) at the speed of light. As you approach the speed of light, the forces would not propagate fast enough to withstand the (humongous) stresses involved. Of course, any real material is going to break long before then.
6. Mar 7, 2016
### HallsofIvy
But even in "thought experiments" about relativity, you cannot assume things that violate relativity!
7. Mar 7, 2016
### Staff: Mentor
These kinds of what-if questions, which are not physically possible because of the energies involved and all kinds of nasty mass-energy issues, are discussed in a comic strip writer's column - here is a link to the 'relativistic baseball pitch'. Bear in mind that the mass of a baseball is far less than that of a propeller tip, and much, much less than the entire propeller.
https://what-if.xkcd.com/1/
I think this is an appropriate answer in a lot of ways.
8. Mar 7, 2016
### xpell
Agreed, but I wanted to know why/how they violate relativity!
9. Mar 7, 2016
### Staff: Mentor
1. It would require an infinite amount of energy
2. It would generate an infinite amount of stress
3. Angular acceleration cannot be rigid even with finite energies and stresses
10. Mar 7, 2016
### pixel
What does it mean for angular acceleration to be "rigid?"
11. Mar 7, 2016
### Staff: Mentor
Even if the material were infinitely rigid, you'd still run into the problem of needing an infinite amount of torque to accelerate it past the speed of light. Essentially, from your point of view, you'd apply an infinite amount of torque and it wouldn't go any faster.
12. Mar 7, 2016
### JVNY
Here is another try at asking expell's question. Consider a batting machine and a pitching machine in a vacuum. There are no atmospheric particles to strike anything as described in the comic.
Someone expends nearly but not quite infinite energy to accelerate the batting machine to 0.99c relative to the pitching machine. The two machines are now inertial.
According to the principle of relativity there is no way for an observer to tell which machine is moving. Any limit based on on energy, mass or the like must account for the fact that the pitching machine can no more cause the ball to travel at c relative to the batting machine than can the batting machine cause the ball to move at c relative to the pitching machine. This must be true even though no one applied any force to the pitching machine to cause it to be in relativistic motion with respect to the batting machine. So what stops the pitching machine from expending all of its stored energy for the first time and throwing the ball to reach c relative to the batting machine?
13. Mar 7, 2016
### Staff: Mentor
The fact that the pitching machine only contains a finite amount of energy, and it would take an infinite amount of energy to throw the ball at c relative to the batting machine (or anything else). Do the math and see.
14. Mar 7, 2016
### PAllen
The algebra of velocity addition. If the pitching machine is moving at .9c relative to the batting machine, and pitches the ball .9c towards the batting machine relative to itself, the ball is then moving at .9945 c relative to the batting machine not 1.8c.
To apply this fundamental line of reasoning to the propeller case, note that if there is any observer relative to whom the propeller tip is moving at less than c, then it is true for all observers by the algebra of velocity composition. Thus, for the propeller tip to exceed c, it must magically exceed c for all observers (because if it is < c for any possible observer, it is < c for all).
I think the clearest conceptual solution to such problems is not to focus on the (true) fact energy and stress becoming infinite, but on the fact that Newtonian vector addition is wrong. It does not apply to our universe except approximately for low speeds.
However, I want to add to Dale's point about infinite stress, which I think hasn't been addressed so much in earlier posts. To force the propeller tip to move in a circle you must apply a force on it proportional to (inertia) v²/r. This force approaches infinity as v approaches c. Thus it is not possible, in principle, to bend the tip into a circle when v = c. This would require 'more than infinite' force.
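The composition algebra is easy to check numerically. A sketch in units where c = 1, using the collinear velocity-addition formula w = (u + v)/(1 + uv):

```python
def add_velocities(u, v):
    """Relativistic composition of collinear velocities (units with c = 1)."""
    return (u + v) / (1.0 + u * v)

# Pitching machine at 0.9c throws a ball at 0.9c relative to itself:
print(add_velocities(0.9, 0.9))    # about 0.9945, not 1.8

# Even boosting a near-c speed by another near-c speed stays below 1,
# which is why "< c for one observer" implies "< c for all".
print(add_velocities(0.9, 0.99))
```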
15. Mar 7, 2016
### Staff: Mentor
Rigid means that the proper distances between different parts do not change. If you accelerate an object linearly it can remain rigid, but it cannot under rotational acceleration.
16. Mar 8, 2016
### PeroK
You can simplify this to see what happens. Imagine your propeller is a light connecting rod joining the engine to a small mass at its tip. A very classical set-up!
Now, as the system rotates, the small mass gains speed. Classically, of course, for a given torque, the mass accelerates indefinitely - ignoring any resisting forces.
But, as the mass reaches relativistic speeds, the acceleration reduces - as it would for a linear acceleration.
It doesn't matter that you assume the materials can withstand the forces or that the system remains rigid. The speed of the mass can only asymptotically approach $c$.
In short, once you clear away all the extraneous details such as rigidity, you simply hit the same constraint as a linear particle accelerator.
Last edited: Mar 8, 2016
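The asymptotic behaviour described above can be sketched numerically. Under a constant applied force F — the linear analogue of a fixed torque on the rod — the momentum grows without limit as p = Ft, but the speed v = p/√(m² + (p/c)²) only approaches c. Units with c = m = 1 below; the numbers are purely illustrative:

```python
import math

def speed_under_constant_force(F, t, m=1.0, c=1.0):
    """Speed after applying constant force F for time t.
    Momentum p = F*t grows linearly; v = p / sqrt(m^2 + (p/c)^2)."""
    p = F * t
    return p / math.sqrt(m ** 2 + (p / c) ** 2)

for t in (1, 10, 100, 1000):
    # Speed keeps rising, but only asymptotically approaches c = 1.
    print(t, speed_under_constant_force(1.0, t))
```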
17. Mar 8, 2016
### A.T.
If the material was infinitely rigid, relativity would be wrong. So under these assumptions, what are you basing that infinite amount of torque on?
18. Mar 8, 2016
### Staff: Mentor
No. It is possible — and essential to good problem solving skills — to analyze one piece of a problem at a time to look for separate errors. We make physically wrong simplifying assumptions about problems all the time - there is no reason why it can't be done here.
The "If relativity were wrong, what would XXX say about relativity" retort used here is practically a meme around here and it is a wrong approach to problem solving. But since it has become a punch-line people are no longer putting any thought into it. Scientists and engineers make physically wrong assumptions in problem solving all the time (and the "infinitely strong" or rigid one is a very, very common one). That's a critical skill in the art of problem solving.
Edit:
Indeed, I would say this is the most common simplifying assumption used. It is used dozens of times a day on PF.
Last edited: Mar 8, 2016
19. Mar 8, 2016
### A.T.
Your argument rather boils down to: "Even if relativity was wrong, you still couldn't do X, because relativity says X needs infinite torque"
20. Mar 8, 2016
### Staff: Mentor
Repeating the punch-line will not help you analyze and understand the "joke". Please read the rest of the post and put some thought into what I said.
21. Mar 8, 2016
### JVNY
I agree entirely. The cartoon accepts that the pitcher can throw the ball at 0.9c without using infinite energy. But even so, 0.9c plus 0.99c does not result in the ball traveling at c or greater relative to the batting machine as the ball passes it -- even without worrying about what would happen if the batting machine hit the ball. xpell might be looking for something else, but the answer is just that Newtonian vector addition is wrong.
22. Mar 8, 2016
### Staff: Mentor
While I agree that that's true, I think the energy implication of this is a useful way to view it as well. Using common simplifying assumptions, (a point mass at the end of an infinitely rigid and massless rod) yields a device that is basically the same as (can be analyzed the same as) a particle accelerator. They are commonly described in terms of energy.
23. Mar 8, 2016
### PeroK
It's not entirely clear that the loss of rigidity scuppers the whole experiment.
What would actually be the maximum possible speed of the tip of a propeller? Is it 0.1c? Or 0.2c? Or, perhaps, 0.99c?
Or perhaps it's difficult to set a definite limit on what is possible. The only true relativistic limit is c, in the sense that you can get, in theory, arbitrarily close.
Anything lower depends on specific engineering limitations, as it does for particle accelerators.
24. Mar 8, 2016
### A.T.
OK
There is a difference between:
a) Making predictions, while ignoring aspects that have negligible quantitative effect on the result.
b) Trying to prove something based on a set of mutually contradictory assumptions.
25. Mar 8, 2016
### A.T.
Which also rules out perfect rigidity. If you assume perfect rigidity then you cannot also assume that limit, to prove anything. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8079622983932495, "perplexity": 1135.2745400583867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794870497.66/warc/CC-MAIN-20180527225404-20180528005404-00238.warc.gz"} |
http://physics.stackexchange.com/questions/24595/ideal-gas-in-a-vessel-kinetic-energy-of-particles-hitting-the-vessels-wall?answertab=votes | # Ideal gas in a vessel: kinetic energy of particles hitting the vessel's wall
Reading Landau's Statistical Physics Part (3rd Edition), I am trying to calculate the answer to Chapter 39, Problem 3.
You are supposed to calculate the total kinetic energy of the particles in an ideal gas hitting the wall of a vessel containing said gas.
The number of collisions per unit area (of the vessel) per unit time is easily calculated from the Maxwellian distribution of the number of particles with a given velocity $\vec{v}$ (we define a coordinate system with the z-axis perpendicular to a surface element of the vessel's wall; more on that in the above mentioned book): $$\mathrm{d}\nu_v = \mathrm{d}N_v \cdot v_z = \frac{N}{V}\left(\frac{m}{2\pi T}\right)^{3/2} \exp\left[-m(v_x^2 + v_y^2 + v_z^2)/2T \right] \cdot v_z \mathrm{d}v_x \mathrm{d}v_y \mathrm{d}v_z$$
Integration of the velocity components in $x$ and $y$ direction from $-\infty$ to $\infty$, and of the $z$ component from $0$ to $\infty$ (because for $v_z<0$ a particle would move away from the vessel wall) gives for the total number of collisions with the wall per unit area per unit time: $$\nu = \frac{N}{V} \sqrt{\frac{T}{2\pi m}}$$
Now it gets interesting: I want to calculate the total kinetic energy of all particles hitting the wall, per unit area per unit time. I thought, this would just be: $$E_{\text{tot}} = \overline{E} \cdot \nu = \frac{1}{2} m \overline{v^2} \cdot \nu$$ The solution in Landau is given as: $$E = \nu \cdot 2T$$
That would mean that for the mean-square velocity of my particles I would need a result like: $$\overline{v^2} = 4\frac{T}{m}$$ Now, I consider that for the distribution of $v_x$ and $v_y$ nothing has changed and I can still use a Maxwellian distribution. That would just give me a contribution of $\frac{T}{m}$ each. That leaves me with $2\frac{T}{m}$, which I have to obtain from the $v_z$ distribution, but this is where my trouble starts:
How do I calculate the correct velocity distribution of $v_z^2$?
The following calculation gives the correct answer: $$Z\int_0^{\pi/2}\int_0^\infty 2\pi v \sin\theta\; v\; \mathrm{d}\theta\mathrm{d}v\; e^{-mv^2/2kT}\; v \cos\theta\; \frac{1}{2}mv^2,$$ where $Z$ is such that $$Z\int_0^{\pi}\int_0^\infty 2\pi v \sin\theta\; v\; \mathrm{d}\theta\mathrm{d}v\; e^{-mv^2/2kT} = n,$$ where $n$ is the particle number density.
The correct answer is $$\left(\frac{2kT}{\pi m}\right)^{1/2}\; nkT = \left(\frac{2kT}{\pi m}\right)^{1/2}\; p.$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9756593704223633, "perplexity": 83.5574439359272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770815.80/warc/CC-MAIN-20141217075250-00149-ip-10-231-17-201.ec2.internal.warc.gz"} |
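This flux-weighted average of 2kT per colliding particle (rather than the bulk average of (3/2)kT) can be checked with a quick Monte Carlo sketch. Working in units with m = kT = 1, each Maxwellian sample is weighted by its wall-ward velocity component v_z, since faster particles hit the wall more often:

```python
import random

def mean_wall_energy(n_samples=200_000):
    """Flux-weighted mean kinetic energy of Maxwellian particles
    crossing a wall at z = const, in units with m = kT = 1.
    Each sample is weighted by v_z (the flux factor)."""
    random.seed(0)  # deterministic for reproducibility
    w_sum = 0.0
    we_sum = 0.0
    for _ in range(n_samples):
        vx, vy, vz = (random.gauss(0.0, 1.0) for _ in range(3))
        if vz <= 0.0:
            continue  # moving away from the wall, never hits it
        energy = 0.5 * (vx * vx + vy * vy + vz * vz)
        w_sum += vz
        we_sum += vz * energy
    return we_sum / w_sum

print(mean_wall_energy())  # close to 2 (i.e. 2kT), not 1.5
```

The extra ½kT relative to the bulk average comes entirely from the v_z component: its flux-weighted distribution is proportional to v_z exp(−m v_z²/2T), which contributes T rather than T/2.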
https://physics.stackexchange.com/questions/351064/can-it-happen-that-angular-momentum-is-conserved-about-some-points-but-not-other | # Can it happen that angular momentum is conserved about some points but not others?
So angular momentum is conserved about a point if no external net torques act about that point. But is there any occasion when this is only true about certain points? In other words: can it happen that angular momentum is conserved about some points in a system but not others?
• If the contribution of those points where the torque is not zero cancels when summed up, isn't it possible? – FF10 Aug 9 '17 at 11:22
• -1. Unclear. Please add further information to explain the source of your difficulty. What is the context of your question? – sammy gerbil Aug 9 '17 at 11:44 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8788483738899231, "perplexity": 347.19844812567527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524503.7/warc/CC-MAIN-20190716055158-20190716081158-00080.warc.gz"} |
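One concrete illustration, as a hedged sketch with gravity on a unit mass as the example force: for a particle falling straight down, the torque r × F vanishes about any point on its vertical line of fall, so angular momentum is conserved about those points — but not about points off that line.

```python
def torque_z(pivot, pos, force):
    """z-component of torque (r x F) about `pivot`, for a planar problem."""
    rx, ry = pos[0] - pivot[0], pos[1] - pivot[1]
    return rx * force[1] - ry * force[0]

particle = (0.0, 5.0)   # falling along the line x = 0
gravity = (0.0, -9.8)   # force on a unit mass

print(torque_z((0.0, 0.0), particle, gravity))  # 0: L conserved about this point
print(torque_z((1.0, 0.0), particle, gravity))  # nonzero: not conserved here
```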