# Development and validation of a sample entropy-based method to identify complex patient-ventilator interactions during mechanical ventilation
A Publisher Correction to this article was published on 09 November 2020
## Abstract
Patient-ventilator asynchronies can be detected by close monitoring of ventilator screens by clinicians or through automated algorithms. However, detecting complex patient-ventilator interactions (CP-VI), consisting of changes in the respiratory rate and/or clusters of asynchronies, is a challenge. Sample Entropy (SE) of airway flow (SE-Flow) and airway pressure (SE-Paw) waveforms obtained from 27 critically ill patients was used to develop and validate an automated algorithm for detecting CP-VI. The algorithm’s performance was compared against the gold standard (the ventilator’s waveform recordings for CP-VI were scored visually by three experts; Fleiss’ kappa = 0.90 (0.87–0.93)). A repeated holdout cross-validation procedure using the Matthews correlation coefficient (MCC) as a measure of effectiveness was used to optimize different combinations of SE settings (embedding dimension, m, and tolerance value, r), derived SE features (mean and maximum values), and the thresholds of change (Th) from the patient’s own baseline SE value. The most accurate results were obtained using the maximum values of SE-Flow (m = 2, r = 0.2, Th = 25%) and SE-Paw (m = 4, r = 0.2, Th = 30%), which yielded MCCs of 0.85 (0.78–0.86) and 0.78 (0.78–0.85), and accuracies of 0.93 (0.89–0.93) and 0.89 (0.89–0.93), respectively. This approach promises to improve the accurate detection of CP-VI and to enable future study of their clinical implications.
## Introduction
Invasive mechanical ventilation (MV) is a life-support measure administered to patients who cannot breathe on their own. Patient-ventilator asynchronies occur when there is a mismatch between the ventilator’s settings and the patient’s breathing pattern. Recent studies have emphasized the impact of asynchronies upon clinical outcomes1,2,3,4,5, focusing on the incidence of specific subtypes of asynchronies or on the asynchrony index, and also on their distribution over time, given that they occur in clusters within prolonged uneventful periods1,2,3,4,5,6. Importantly, in most of these studies ventilator waveforms were analysed visually2,5,7,8; only a few analyses have been based on automated algorithms1,4,6,9,10 or, more recently, on machine learning algorithms incorporating not only ventilator waveforms but also clinical data11.
Asynchronies are difficult to identify when supported only by visual assessment carried out by inexperienced personnel, since different types may develop in a short time period or may even overlap with each other. Furthermore, asynchronies, which are by nature time-limited and transient, lead to patient distress, impede the ventilator’s effectiveness in decreasing the work of breathing, increase the time on mechanical ventilation, and have a negative impact on outcome1,2,4,7,12. Additionally, sometimes the patient’s drive only becomes evident through an increase in the respiratory rate itself13,14,15,16, which, given its irregular and complex behaviour, may be overestimated by visual observation or dedicated algorithms. Therefore, it would be extremely useful to have access to a method for assessing irregularity and complexity which could detect Complex Patient-Ventilator interactions (CP-VI), including not just asynchronies of any kind but also changes in the respiratory rate, in an automated, non-invasive and personalized fashion.
Normal physiological data are non-linear17. The complex behavior of a non-linear system cannot be characterized by the sum of its inputs, and the study of these systems requires methods that take into account the non-linear physiological response to a given stimulus. These methods could provide insights into organ-system interconnectivity, regulatory control, and complexity in time series during disease17,18,19.
Entropy is a non-linear method derived from the theory of complex systems which measures the randomness and predictability of stochastic processes. Various types of entropy have been used in clinical monitoring20,21,22. Sample Entropy (SE) is a measure of complexity and regularity, defined as the negative natural logarithm of the conditional probability that two sequences similar for m points will remain similar at the next point, where self-matching is not included23. Thus, a lower SE value indicates more self-similarity in a time series.
SE has proved to be an effective tool for investigating different types of time series data derived from various biological conditions in the human body. Examples of these conditions include the activation of inspiratory muscles in COPD patients24,25, the analysis of atrial fibrillation on electrocardiograms26, background electroencephalograms in Alzheimer’s patients27, heart rate variability28,29, human postural sway29 and seizure termination during electroconvulsive therapy30.
Interestingly, only a few entropy approaches have been applied to the respiratory system to study breath-to-breath variability and its components as predictors of successful separation from MV during spontaneous breathing trials (SBT)19,31,32,33,34. Breath-to-breath approaches suggest that increased irregularity of the respiratory system may be a marker of pulmonary health19 and may serve as a weaning predictor32,33,34,35, opening up the possibility that a certain degree of irregularity may be normal3,36. However, these studies rely on the detection of the appropriate respiratory cycle. Hence, the performance of automated algorithms in breathing-cycle detection may be jeopardized when transient asynchronies occur during patient-ventilator interaction or even overlap with each other. In this respect, other authors have applied SE to the entire signal, as in the case of Sá et al.37, who evaluated respiratory changes by applying SE to the entire airway flow signal, providing an early and sensitive functional indicator of interstitial asbestosis.
We hypothesized that analyzing transient complexity of CP-VI may provide clinically relevant information during MV. Therefore, we sought to develop and validate a non-invasive method based on SE measurement using the entire airway pressure (Paw) and airway flow (Flow) waveforms to detect CP-VI, defined as the occurrence of asynchronies and changes in the respiratory rate.
## Methods
### Defining complex patient ventilator interactions
We defined CP-VI as a > 50% change in the respiratory rate13,35,38,39 and/or > 30% asynchronous breaths of any type (ineffective expiratory efforts, double cycling, premature cycling, prolonged cycling, or reverse triggering) over a 3-min period. A recent study found that 38% of mechanically ventilated patients had clusters of ≥ 30 ineffective expiratory efforts in a 3-min period (i.e., ≥ 50% of all breaths in a patient with a respiratory rate of 20 breaths per minute), and that the median duration of these clusters was 20 min4. Another study found that 59.7% of patients had clusters in which > 10% of all breaths in a 3-min period were double cycled, with a mean cluster duration of 15.5 min6. Figure 1 shows a representative example of different CP-VIs consisting of increased respiratory rate, asynchronies, or a combination of these phenomena.
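For concreteness, the definition above can be encoded as a small decision rule. The sketch below is a minimal Python illustration of the two thresholds, not the study’s actual implementation; all names and the example numbers are ours:

```python
def is_cpvi(rr_current, rr_baseline, n_async, n_breaths):
    """Flag a 3-min period as CP-VI: >50% change in respiratory rate
    and/or >30% asynchronous breaths of any type."""
    rr_change = abs(rr_current - rr_baseline) / rr_baseline
    async_fraction = n_async / n_breaths
    return rr_change > 0.50 or async_fraction > 0.30

# Example: baseline 20 breaths/min, now 22, with 8 of 22 breaths asynchronous
print(is_cpvi(22, 20, 8, 22))  # True: 8/22 is roughly 36% > 30%
```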
### Data acquisition and data analysis
The Better Care system (Better Care, Barcelona, Spain. US patent No. 12/538,940) continuously records Paw and Flow signals at a sample frequency of 200 Hz from intubation to liberation from MV9. Better Care uses drivers specifically designed to interact with output signals from mechanical ventilators and bedside monitors rather than directly with patients, synchronizing recorded signals and storing them for further analysis. We used MATLAB (The MathWorks, Inc., vR2018b, Natick, MA, USA) for signal processing, data analysis, and visual assessment. Signals were decimated to a sampling rate of 40 Hz before entropy calculation.
### Study population
The findings presented in this paper represent an ancillary analysis of an ongoing clinical study (ENTROPY-ICU, ClinicalTrials.gov NCT04128124) designed to assess the feasibility of using SE to identify CP-VI during MV. Data from 27 patients were obtained from an ongoing database at two centers in Spain. The database was constructed prospectively for the development of a connectivity platform (Better Care) to interoperate signals from different ventilators and monitors and subsequently compute algorithms for diagnosing patient-ventilator asynchronies (ClinicalTrials.gov, NCT03451461). The Comitè d’Ètica d’Investigació amb medicaments at the Corporació Sanitària Parc Taulí and the Clinical Research Ethics Committee of Fundació Unió Catalana d’Hospitals approved the database and the study protocol. The study was conducted in accordance with the applicable Spanish regulations (Biomedical Research Law 14/2007), which require approval by at least one Institutional Review Board (IRB); Parc Taulí’s IRB approved the study for all participating centers. Because this was an ancillary analysis of retrospective, anonymized data, the IRB waived the need for informed consent, as Spanish regulations permit for IRB-approved studies of this type.
The SE analysis was performed on the complete set of Flow and Paw data collected during the two hours before self-extubation. Self-extubations, defined as extubations performed by the patients themselves, are included among unplanned extubations, but their mechanisms differ from those of accidental extubations40. Clinical and demographic data were obtained from medical charts (Table 1).
### Visual validation of CP-VI
Experts’ visual assessment was considered the gold standard. Three critical care physicians with extensive experience in analyzing ventilator waveforms visually reviewed 92 15-min-long segments of Flow and Paw recordings from the two-hour period immediately before self-extubation. The 15-min window was selected based on two previous studies evaluating clusters of asynchronies, in which mean cluster duration was 15.5 and 20 min respectively6,10. An expert in MV selected the segments to ensure a balanced proportion of different ventilation modes (grouped into pressure support ventilation (PSV) or assist-control ventilation (ACV) modes, comprising volume assist-control and pressure assist-control ventilation) and of segments with and without CP-VIs. Every patient contributed both CP-VI and non-CP-VI segments with at least one 15-min segment of each type; however, some patients contributed more segments than others. In order to ensure that the most valuable CP-VI events were not missed, all the 15-min segments immediately preceding self-extubation were included. To ensure masking of the scorers, Flow and Paw tracings were randomly ordered in MATLAB prior to visual analysis. To standardize scoring criteria, scorers were provided with written descriptions of the characteristics of CP-VI before visual analysis. Scorers were asked to determine whether CP-VI were present in each segment. No time limitations were imposed.
### Sample entropy
SE is a non-linear technique that measures the randomness of a series of data23. Compared to other approaches, SE’s main advantage is that it provides consistent results even in short and noisy medical time series19,23. To calculate SE, three parameters are necessary: the embedding dimension, m (a positive integer); the tolerance value or similarity criterion, r (a positive real number); and the total length of the series, N. Briefly, SE is defined as the negative logarithm of the conditional probability that two sequences of patterns of m consecutive samples that are similar to each other within a tolerance r will remain similar when one consecutive sample is added ($$m + 1$$), excluding self-matches. SE is calculated as follows23:
Given a time series of N samples $$\left\{ x\left( n \right) \right\} = \left\{ x\left( 1 \right), x\left( 2 \right), \ldots, x\left( N \right) \right\}$$, a subset of $$N - m + 1$$ overlapping vectors $$X_{m} \left( i \right)$$ of length $$m$$ is defined:
1. Form $$N - m + 1$$ vectors defined by $$X_{m} \left( i \right) = \left[ x\left( i \right), x\left( i + 1 \right), \ldots, x\left( i + m - 1 \right) \right], \; i = 1, 2, \ldots, N - m + 1$$. These represent $$m$$ consecutive $$x$$ values.
2. Then, define the Chebyshev distance between vectors $$X_{m} \left( i \right)$$ and $$X_{m} \left( j \right)$$, i.e., the maximum absolute difference between their scalar components:
$$d\left[ X_{m} \left( i \right), X_{m} \left( j \right) \right] = \mathop {\max }\limits_{k = 0, \ldots, m - 1} \left| x\left( i + k \right) - x\left( j + k \right) \right|.$$
(1)
3. For a given $$X_{m} \left( i \right)$$, count the number of $$j$$ $$\left( 1 \le j \le N - m, \; i \ne j \right)$$, denoted as $$B_{i} \left( r \right)$$, such that the distance between $$X_{m} \left( i \right)$$ and $$X_{m} \left( j \right)$$ is less than or equal to a threshold $$r$$. Then, for $$1 \le i \le N - m$$,
$$B_{i}^{m} \left( r \right) = \frac{B_{i} \left( r \right)}{N - m - 1},$$
(2)
4. Define $$B^{m} \left( r \right)$$ as
$$B^{m} \left( r \right) = \frac{1}{N - m}\mathop \sum \limits_{i = 1}^{N - m} B_{i}^{m} \left( r \right)$$
(3)
5. The previous procedure is repeated, increasing the dimension to $$m + 1$$, to calculate $$A_{i} \left( r \right)$$ as the number of $$X_{m + 1} \left( j \right)$$ within $$r$$ of $$X_{m + 1} \left( i \right)$$, where $$j$$ ranges from 1 to $$N - m$$ $$\left( i \ne j \right)$$. Then, $$A_{i}^{m} \left( r \right)$$ is defined as:
$$A_{i}^{m} \left( r \right) = \frac{A_{i} \left( r \right)}{N - m - 1}$$
(4)
6. Set $$A^{m} \left( r \right)$$ as
$$A^{m} \left( r \right) = \frac{1}{N - m}\mathop \sum \limits_{i = 1}^{N - m} A_{i}^{m} \left( r \right)$$
(5)
Thus, $$B^{m} \left( r \right)$$ is the probability that two sequences will match for $$m$$ samples, whereas $$A^{m} \left( r \right)$$ is the probability that two sequences will match for $$m + 1$$ samples. Finally, sample entropy is then defined as
$$SE\left( {m,{ }r} \right) = \mathop {\lim }\limits_{N \to \infty } \left\{ { - ln\left[ {\frac{{A^{m} \left( r \right)}}{{B^{m} \left( r \right)}}} \right]} \right\}{ }$$
(6)
which is estimated by the statistic:
$$SE\left( {m, r, N} \right) = - ln\left( {\frac{{A^{m} \left( r \right)}}{{B^{m} \left( r \right)}}} \right)$$
(7)
The m parameter is generally taken as 2, while the r parameter normally ranges between 0.1 and 0.25 times the standard deviation (SD) of the analyzed segment of length N. In this study, SE was calculated over the Flow (SE-Flow) and Paw (SE-Paw) signals using a 30-s sliding window (N = 1,200 samples) with 50% overlap. SE was explored using m from 1 to 20 and with r values equal to 0.1, 0.2, 0.3, and 0.4 times the SD of each sliding window. To reduce noise and to increase the consistency of the results, we applied an 8-period exponential moving average filter to the SE series.
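To make the computation concrete, the following is a minimal Python sketch of the SampEn statistic of Eq. (7) applied over sliding windows as just described (the study itself used MATLAB). The exponential-moving-average smoothing constant, alpha = 2/(span + 1), is one common convention for an "8-period" EMA and is an assumption here:

```python
import numpy as np

def sample_entropy(x, m, r_factor):
    """SE(m, r, N) of a 1-D series x, with tolerance r = r_factor * SD(x)
    and the Chebyshev distance of Eq. (1); self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * x.std()

    def count_pairs(dim):
        # Overlapping template vectors of length `dim` (N - m of them for
        # both dim = m and dim = m + 1, so the counts are comparable)
        templates = np.array([x[i:i + dim] for i in range(N - m)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    B = count_pairs(m)       # pairs similar for m samples
    A = count_pairs(m + 1)   # pairs still similar for m + 1 samples
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def se_series(signal, fs=40, win_s=30, m=2, r_factor=0.2, span=8):
    """SE over 30-s windows (N = 1200 samples at 40 Hz) with 50% overlap,
    smoothed with a `span`-period exponential moving average."""
    n, step = int(win_s * fs), int(win_s * fs // 2)
    se = [sample_entropy(signal[s:s + n], m, r_factor)
          for s in range(0, len(signal) - n + 1, step)]
    alpha, out = 2 / (span + 1), [se[0]]
    for v in se[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return np.array(out)
```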
### Automatic CP-VI detection
We devised an automated algorithm based on SE to detect CP-VI events (European patent application number EP19383116). Figure 2 summarizes the algorithm in a flowchart. Detection of a CP-VI depends on whether the percentage of change (PC) in SE with respect to the patient's own SE baseline value during the 15-min period is greater than a predefined threshold of change (Th). We calculated PC for SE-Flow and SE-Paw in each 15-min period in two ways, using the following derived features (the mean SE value [SE-Flowmean and SE-Pawmean], and the maximum SE value [SE-Flowmax and SE-Pawmax]), applying different values of Th (15%, 20%, 25%, 30%, 35%, 40%, 45%, and 50%). We hypothesized that SE values would be higher in periods with CP-VI than in periods with regular patient-ventilator interactions. Periods were considered to contain a CP-VI event when PC exceeded the Th. The optimal Th for CP-VI detection was selected during the SE setting optimization procedure (explained below).
Keim-Malpass et al.41 recently proposed that alert thresholds derived from continuous analytic monitoring should be based on the degree of change from the patient’s own baseline, rather than on general cutoff thresholds. In our study there was no single baseline value common to all patients; each patient had their own baseline.
The baseline value of each SE feature was initialized with the value calculated in the first 15-min period, and it was updated whenever the SE feature of a new 15-min segment was lower than the current baseline.
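A minimal Python sketch of this detection logic (Figure 2), assuming the per-period SE feature values have already been computed; function and variable names are illustrative:

```python
def detect_cpvi(feature_per_period, th=0.25):
    """CP-VI flag per 15-min period for one patient.

    feature_per_period: e.g. the maximum SE-Flow value in each period.
    th: threshold of change (Th), e.g. 0.25 for SE-Flowmax.
    """
    baseline = feature_per_period[0]          # initialized on the first period
    flags = []
    for value in feature_per_period:
        pc = (value - baseline) / baseline    # percentage of change (PC)
        flags.append(pc > th)                 # CP-VI if PC exceeds Th
        baseline = min(baseline, value)       # baseline only moves downward
    return flags

print(detect_cpvi([1.0, 1.1, 1.4, 0.9, 1.3]))  # [False, False, True, False, True]
```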
### Statistical analysis
Fleiss’ kappa coefficient was used to assess the reliability of agreement among scorers for visual assessment42. The automated CP-VI detection algorithm was applied over the SE series derived from the same Flow and Paw tracings previously used for visual assessment. To evaluate the performance of the automated algorithm with respect to the gold standard visual assessment, we calculated sensitivity, specificity, positive and negative predictive values (PPV and NPV respectively), accuracy, and the Matthews correlation coefficient (MCC)43. Widely used in biomedical research, the MCC is considered a balanced measure of the confusion matrix of true and false positives and negatives44,45,46. Calculation of the MCC is based on all four elements of the confusion matrix: true positive (TP), true negative (TN), false positive (FP), and false negative (FN) values, as follows:
$$MCC = \frac{{TP{*}TN - FP{*}FN}}{{\sqrt {\left( {TP + FP} \right){*}\left( {TP + FN} \right){*}\left( {TN + FP} \right){*}\left( {TN + FN} \right)} }}$$
(8)
MCC values can range from − 1 to + 1. An MCC value of − 1 suggests perfect disagreement between the predictions and the gold standard, and a value of 1 suggests perfect agreement between the predictions and the gold standard; a value of 0 indicates that the prediction is no better than random. The MCC index was used as the measure of effectiveness during the process to optimize SE settings so as to achieve the most robust CP-VI estimation.
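Equation (8) translates directly to code; a small sketch with illustrative counts (the zero-denominator convention of returning 0 is ours):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from the confusion matrix (Eq. 8)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0

# Illustrative counts for 92 scored periods
print(mcc(tp=40, tn=42, fp=5, fn=5))  # approximately 0.78
```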
### Optimization procedure (selection of m, r, and Th)
In entropy studies, determining the optimal settings to robustly extract the randomness of a series of data is an important step47,48. To select the optimal settings for the SE parameters m and r and the optimal Th for estimating CP-VI, we used a repeated holdout cross-validation method with the MCC as a measure of effectiveness.
Figure 3 depicts the steps involved in the optimization and the validation procedure. Once the experts had visually validated the set of 92 observations, it was randomly divided into two subsets: 70% of the data for optimization and the remaining 30% of the data for validation. This optimization procedure was repeated a total of 15 times using different subsets (randomly selected each time) to capture as much relevant information as possible and to minimize the potential bias resulting from fitting the settings on a single partition. The MCC metric was computed for all combinations of m, r, and Th for each repetition. Finally, the maximum mean MCC value determined the optimal combination of SE settings and Th among all possible combinations. The optimization procedure was individually applied to the features derived from SE-Flow (SE-Flowmean, SE-Flowmax) and SE-Paw (SE-Pawmean, SE-Pawmax) in order to determine the respiratory signal and features that best reflect CP-VI.
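Schematically, the optimization loop can be sketched as follows; `evaluate_mcc` is a hypothetical stand-in for running the detector with one settings combination on a subset and scoring it against the experts’ labels:

```python
import itertools
import random

def optimize_settings(observations, evaluate_mcc, n_repeats=15, train_frac=0.7):
    """Repeated 70/30 holdout: pick the (m, r, Th) combination with the
    highest mean MCC over the optimization subsets."""
    grid = list(itertools.product(
        range(1, 21),                                        # m = 1..20
        [0.1, 0.2, 0.3, 0.4],                                # r (x SD of window)
        [0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50]))   # Th
    scores = {combo: [] for combo in grid}
    for _ in range(n_repeats):
        shuffled = random.sample(observations, len(observations))
        optimization_subset = shuffled[:int(train_frac * len(shuffled))]
        for combo in grid:
            scores[combo].append(evaluate_mcc(optimization_subset, *combo))
    # Best combination = highest mean MCC across the repetitions
    return max(grid, key=lambda c: sum(scores[c]) / n_repeats)
```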
In addition, a sensitivity analysis was performed using a small grid search of r values (step = 0.01) around the optimal value for the best features derived from SE-Flow and SE-Paw, to compare regions of confidence and to investigate whether the selected r value is a robust local maximum.
To assess the robustness of the optimization procedure, we computed the medians and interquartile ranges of all measures of performance (MCC, sensitivity, specificity, accuracy, PPV, and NPV) considering the optimal combination for both the optimization and validation subsets.
## Results
### Visual CP-VI analysis by experts
The experts visually assessed a total of 92 periods: 45 periods of PSV (22 with CP-VI and 23 without) and 47 periods of ACV (24 with CP-VI and 23 without). Fleiss’ kappa for inter-rater agreement was 0.90 (0.87–0.93), indicating almost perfect agreement.
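For reference, Fleiss’ kappa for the three scorers can be computed with statsmodels; the vote table below is hypothetical (each row is one 15-min segment, columns count scorers voting "CP-VI" vs "no CP-VI"):

```python
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

# Rows: segments; columns: number of the 3 scorers choosing each category
votes = np.array([[3, 0],   # unanimous CP-VI
                  [0, 3],   # unanimous no CP-VI
                  [2, 1]])  # one dissenting scorer
print(fleiss_kappa(votes))
```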
### Detecting CP-VI with SE
The exponential moving average filter reduced the noise in SE series and generated a smoothed SE version suitable for detecting CP-VI (see Supplementary Methods and Supplementary Fig. S1). Figure 4 shows representative examples of respiratory signal tracings with the corresponding SE-Flow (m = 2 and r = 0.2) and SE-Paw (m = 4 and r = 0.2) tracings. SE was highly sensitive to changes in the irregularity of the respiratory pattern occurring during ventilation.
### Optimization of SE settings, Th detection using a repeated holdout cross-validation procedure
Figure 5 shows the procedure used to optimize SE settings and Th for CP-VI detection. We calculated the mean MCC value for each combination of m, r, and Th for all derived features analyzed (SE-Flowmean, SE-Flowmax, SE-Pawmean, and SE-Pawmax). In general, SE-Paw features were much less sensitive to the choice of m than SE-Flow features. SE-Flowmax and SE-Pawmax yielded the highest mean MCC values. The highest MCC values for SE-Flowmax were found for m = 2, r equal to 0.2 and 0.3, and Th between 20 and 35%, whereas for SE-Pawmax they were found for m equal to 3 and 4, r = 0.2, and Th between 25 and 30%. The optimal SE settings were m = 2, r = 0.2, and Th = 25% for SE-Flowmax, and m = 4, r = 0.2, and Th = 30% for SE-Pawmax. With these settings, SE-Flowmax at Th = 25% (SE-Flowmax25) yielded a mean MCC of 0.84 and SE-Pawmax at Th = 30% (SE-Pawmax30) yielded a mean MCC of 0.86. Both SE-Flowmax25 and SE-Pawmax30 yielded their highest MCC values in 13 of the 15 repetitions. The sensitivity analysis conducted for the SE-Pawmax and SE-Flowmax features around the optimal value of r = 0.2 is shown in Supplementary Figure S3.

Once we had determined the settings that best detected CP-VI, we evaluated the performance of the algorithm in the 15 repetitions of the cross-validation procedure. Figure 6 displays the algorithm’s performance statistics. The median values of all the parameters observed in the optimization subset were slightly higher than those observed in the validation subset (Supplementary Table S1); this is a common consequence of the repeated holdout cross-validation process. The performance of SE-Flowmax25 and SE-Pawmax30 stratified by ventilator modality (grouped into pressure support ventilation and assist-control ventilation modes) is shown in Supplementary Table S2.
For comparative purposes, we also carried out the procedure for optimizing SE settings and Th over the unfiltered SE series. The Supplementary Methods and the Supplementary Figure S2 show the results obtained in this case.
## Discussion
Our automatic algorithm for detecting CP-VI from ventilator signals proved highly sensitive and specific in individual patients. Using non-linear analysis of SE to measure irregularity and randomness in the entire set of physiological Flow and Paw signals, the algorithm compared data from different periods in each patient’s interaction with the ventilator to detect CP-VI. In our analyses the maximum changes of SE in both Flow and Paw signals yielded the most accurate results at different thresholds and settings. The most accurate results for SE-Flowmax were obtained with a threshold of change of 25% with m = 2, r = 0.2, and for SE-Pawmax with a threshold of change of 30% with m = 4 and r = 0.2.
The recognition of the hidden information contained in physiological time series draws attention to the extraordinary complexity of physiological systems49. Several non-linear techniques have been developed to study the irregularity and complexity of these physiomarkers18,23,50,51,52,53. Previous studies have used methods based on approximate entropy and sample entropy computed from breath-to-breath variability and derived indices19,23,32,33,34, which rely on the detection of the appropriate respiratory cycle.
The main advantage of our approach is that it does not require the detection of each single breathing cycle to measure irregularity in Flow and Paw waveforms and thus identify the development of a CP-VI. This approach makes a fundamentally different assumption about where complexity occurs in the physical signal, focusing on transient Flow and Paw complexity rather than breath-to-breath complexity in order to accurately identify changes in the respiratory rate and asynchronies which by their nature are transient and time-limited.
To our knowledge, no recommendations are currently available for the estimation of respiratory dynamics by applying an entropy approach to the entire dataset of Flow and Paw tracings during MV23,52. Recently, Sá et al.37 applied SE to the entire Flow signal without optimizing its parameters. Thus, one important contribution of our study is the description of a set of optimization and validation procedures based on a repeated holdout cross-validation method used in machine-learning models, which we used to obtain the optimal m, r, and Th values and to ensure the robustness of the validation procedure.
Our study also applied a personalized threshold to determine the occurrence of a CP-VI event, based on a proportional change from the patient’s own baseline value, which is continuously updated. Continuous predictive analytics monitoring achieves early detection of changes in status over time in previously stable patients. Keim-Malpass et al.41 recently suggested that an absolute threshold of change from baseline values may not be clinically significant in real-world settings and could lead to a high rate of false positives in patients with high baseline values54. In our study, thresholds of change of 25% and 30% from SE-Flowmax and SE-Pawmax, respectively, proved to be the most accurate for CP-VI detection. The optimization procedure found that r = 0.2 is suitable for detecting CP-VI events using the SE-Flowmax (m = 2, Th = 25%) or SE-Pawmax (m = 4, Th = 30%) features. Additionally, the sensitivity analysis indicated that r = 0.2 was a more robust local maximum for the SE-Flowmax feature. This might suggest that the algorithm’s predictions are not substantially influenced by small changes in underlying unknown parameters (i.e., a different dataset, different measurement equipment, or different ventilator waveforms) when using SE-Flowmax (m = 2, r = 0.2, Th = 25%), which could therefore be a more suitable feature than SE-Pawmax (m = 4, r = 0.2, Th = 30%).
Interestingly, both SE-Flowmax25 and SE-Pawmax30 performed well in detecting CP-VI in assist-control ventilation, while SE-Flowmax25 performed slightly better than SE-Pawmax30 in pressure support ventilation mode. The reason for the latter finding may be that during PSV the pressure is constant, and it is the flow waveform that exhibits more changes in accordance with the patient’s demand and the mechanical properties of the diseased lung. However, due to the small sample size these sub-analysis results should be interpreted with care, and further research is needed.
Our study has several limitations. First, our algorithm responds to changes in the respiratory rate based on transient changes of Flow and Paw waveforms detected by SE, but not on inspiratory effort. This means that respiratory drive, the intensity of the neural output from the respiratory center that regulates the magnitude of inspiratory effort55, may not have been fully assessed15,56,57. Unfortunately, although many techniques have been proposed55,58,59,60, none has been implemented at the bedside to monitor drive and effort. Our proposed algorithm does not include measurements of effort; nevertheless, whenever a diaphragmatic contraction occurs unassisted by the ventilator and an asynchrony develops, our algorithm is able to detect it.
Second, although our method does not rely on the detection of breathing cycles to measure irregularity and is based on changes in SE of Flow and Paw waveforms, none of the features deriving from breath-to-breath variability were considered. Therefore, their potential importance in detecting CP-VI is yet to be assessed.
Third, while the dataset used for the repeated holdout cross-validation method was paired between segments with and without CP-VI, most of the segments were from tracings of patients who self-extubated, in whom the occurrence of episodes of poor patient-ventilator interaction is highly unpredictable. For that reason, the clinical meaning of CP-VI in critically ill patients is yet to be determined and requires more research. Additionally, in the current study we examined only SE; other promising measures of entropy may also provide adequate diagnostic tools. For instance, multiscale entropy analysis61,62, fuzzy approximate entropy63, conditional entropy64, and distribution entropy65 could be other potentially useful entropy measures to investigate.
Finally, we did not analyze data from proportional modes of MV. Thus, although it is tempting to speculate that ventilatory modes that adapt to patients’ efforts and variability might induce higher changes in SE, the performance of our algorithm in patients ventilated in these modes may differ substantially, and it should not be implemented in these modes until validated by future research.
## Conclusion
Our non-invasive method based on SE measurement of Paw and Flow is able to detect CP-VI, defined as the occurrence of transient asynchronies and changes in the respiratory rate, with high accuracy. The clinical relevance and usefulness of identifying complex patient-ventilator interactions in different clinical scenarios deserve to be explored.
## Data availability
The datasets generated and analyzed in the current study are available from the corresponding author on reasonable request.
## Change history
• ### 09 November 2020
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
## References
1. Blanch, L. et al. Asynchronies during mechanical ventilation are associated with mortality. Intensive Care Med. 41, 633–641 (2015).
2. Thille, A. W., Rodriguez, P., Cabello, B., Lellouche, F. & Brochard, L. Patient-ventilator asynchrony during assisted mechanical ventilation. Intensive Care Med. 32, 1515–1522 (2006).
3. Rué, M. et al. Bayesian joint modeling of bivariate longitudinal and competing risks data: an application to study patient-ventilator asynchronies in critical care patients. Biom. J. 59, 1184–1203 (2017).
4. Vaporidi, K. et al. Clusters of ineffective efforts during mechanical ventilation: impact on outcome. Intensive Care Med. 43, 184–191 (2017).
5. Beitler, J. R. et al. Quantifying unintended exposure to high tidal volumes from breath stacking dyssynchrony in ARDS: the BREATHE criteria. Intensive Care Med. 42, 1427–1436 (2016).
6. de Haro, C. et al. Double cycling during mechanical ventilation: frequency, mechanisms, and physiological implications. Crit. Care Med. 46, 1385–1392 (2018).
7. De Wit, M. et al. Ineffective triggering predicts increased duration of mechanical ventilation. Crit. Care Med. 37, 2740–2745 (2009).
8. Wysocki, M. et al. Reduced breathing variability as a predictor of unsuccessful patient separation from mechanical ventilation. Crit. Care Med. 34, 2076–2083 (2006).
9. Blanch, L. et al. Validation of the Better Care® system to detect ineffective efforts during expiration in mechanically ventilated patients: a pilot study. Intensive Care Med. 38, 772–780 (2012).
10. Marchuk, Y. et al. Predicting patient-ventilator asynchronies with hidden Markov models. Sci. Rep. 8, 1–7 (2018).
11. Sottile, P. D., Albers, D., Higgins, C., Mckeehan, J. & Moss, M. M. The association between ventilator dyssynchrony, delivered tidal volume, and sedation using a novel automated ventilator dyssynchrony detection algorithm. Crit. Care Med. 46, e151–e157 (2018).
12. Tobin, M. J., Alex, C. G. & Fahey, P. J. Fighting the ventilator. In Principles and Practice of Mechanical Ventilation (ed. Tobin, M. J.) 1121–1136 (2006).
13. Tobin, M. J. et al. The pattern of breathing during successful and unsuccessful trials of weaning from mechanical ventilation. Am. Rev. Respir. Dis. 134, 1111–1118 (1986).
14. Tobin, M. J., Perez, W., Guenther, S. M., D’Alonzo, G. & Dantzker, D. R. Breathing pattern and metabolic behavior during anticipation of exercise. J. Appl. Physiol. 60, 1306–1312 (1986).
15. Tobin, M. et al. Variability of resting respiratory drive and timing in healthy subjects. J. Appl. Physiol. 65, 309–317 (1988).
16. Benchetrit, G. Breathing pattern in humans: diversity and individuality. Respir. Physiol. 122, 123–129 (2000).
17. Godin, P. & Buchman, T. Uncoupling of biological oscillators: a complementary hypothesis concerning the pathogenesis of multiple organ dysfunction syndrome. Crit. Care Med. 24, 1107–1116 (1996).
18. Pincus, S. M. Greater signal regularity may indicate increased system isolation. Math. Biosci. 122, 161–181 (1994).
19. White, C. E. et al. Lower interbreath interval complexity is associated with extubation failure in mechanically ventilated patients during spontaneous breathing trials. J. Trauma 68, 1310–1316 (2010).
20. Dong, X. et al. An improved method of handling missing values in the analysis of sample entropy for continuous monitoring of physiological signals. Entropy 21, 274 (2019).
21. Martínez-Cagigal, V., Santamaría-Vázquez, E. & Hornero, R. Asynchronous control of P300-based brain–computer interfaces using sample entropy. Entropy 21, 230 (2019).
22. Su, C. et al. A comparison of multiscale permutation entropy measures in on-line depth of anesthesia monitoring. PLoS ONE 11, 1–22 (2016).
23. Richman, J. S. & Moorman, J. R. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol. Heart Circ. Physiol. 278, 2039–2049 (2000).
24. Sarlabous, L. et al. Efficiency of mechanical activation of inspiratory muscles in COPD using sample entropy. Eur. Respir. J. 46, 1808–1811 (2015).
25. Sarlabous, L. et al. Electromyography-based respiratory onset detection in COPD patients on non-invasive mechanical ventilation. Entropy 21, 258 (2019).
26. Alcaraz, R. & Rieta, J. J. A review on sample entropy applications for the non-invasive analysis of atrial fibrillation electrocardiograms. Biomed. Signal Process. Control 5, 1–14 (2010).
27. Abásolo, D., Hornero, R., Espino, P., Álvarez, D. & Poza, J. Entropy analysis of the EEG background activity in Alzheimer’s disease patients. Physiol. Meas. 27, 241–253 (2006).
28. Al-angari, H. M. & Sahakian, A. V. Use of sample entropy approach to study heart rate variability in obstructive sleep apnea syndrome. IEEE Trans. Biomed. Eng. 54, 1900–1904 (2007).
29. Lake, D. E. et al. Sample entropy analysis of neonatal heart rate variability. Am. J. Physiol. Regul. Integr. Comp. Physiol. 283, 789–797 (2002).
30. Yoo, C. S. et al. Automatic detection of seizure termination during electroconvulsive therapy using sample entropy of the electroencephalogram. Psychiatry Res. 195, 76–82 (2012).
31. El-Khatib, M., Jamaleddine, G., Soubra, R. & Muallem, M. Pattern of spontaneous breathing: potential marker for weaning outcome. Spontaneous breathing pattern and weaning from mechanical ventilation. Intensive Care Med. 27, 52–58 (2001).
32. Engoren, M. Approximate entropy of respiratory rate and tidal volume during weaning from mechanical ventilation. Crit. Care Med. 26, 1817–1823 (1998).
33. Papaioannou, V. E., Chouvarda, I. G., Maglaveras, N. K. & Pneumatikos, I. A. Study of multiparameter respiratory pattern complexity in surgical critically ill patients during weaning trials. BMC Physiol. 11, 2 (2011).
34. Papaioannou, V. E., Chouvarda, I., Maglaveras, N., Dragoumanis, C. & Pneumatikos, I. Changes of heart and respiratory rate dynamics during weaning from mechanical ventilation: a study of physiologic complexity in surgical critically ill patients. J. Crit. Care 26, 262–272 (2011).
35. Bien, M. Y. et al. Breathing pattern variability: a weaning predictor in postoperative patients recovering from systemic inflammatory response syndrome. Intensive Care Med. 30, 241–247 (2004).
36. Brochard, L. Breathing: does regular mean normal? Crit. Care Med. 26, 1773–1774 (1998).
37. Sá, P. M., Castro, H. A., Lopes, A. J. & Melo, P. L. Entropy analysis for the evaluation of respiratory changes due to asbestos exposure and associated smoking. Entropy 21, 225 (2019).
38. Tobin, M. J. Advances in mechanical ventilation. N. Engl. J. Med. 344, 1986–1996 (2001).
39. Cohen, C. A., Zagelbaum, G., Gross, D. et al. Clinical manifestations of inspiratory muscle fatigue. Am. J. Med. 73, 308–316 (1982).
40. Epstein, S. K., Nevins, M. L. & Chung, J. Effect of unplanned extubation on outcome of mechanical ventilation. Am. J. Respir. Crit. Care Med. 161, 1912–1916 (2000).
41. Keim-Malpass, J., Clark, M. T., Lake, D. E. & Moorman, J. R. Towards development of alert thresholds for clinical deterioration using continuous predictive analytics monitoring. J. Clin. Monit. Comput. (2019).
42. Fleiss, J. L., Cohen, J. & Everitt, B. Large sample standard errors of kappa and weighted kappa. Psychol. Bull. 72, 323–327 (1969).
43. Matthews, B. W. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta 405, 442–451 (1975).
44. Chaudhary, K., Nagpal, G., Dhanda, S. K. & Raghava, G. P. S. Prediction of immunomodulatory potential of an RNA sequence for designing non-toxic siRNAs and RNA-based vaccine adjuvants. Sci. Rep. 6, 1–11 (2016).
45. Johnstone, D., Milward, E. A., Berretta, R. & Moscato, P. Multivariate protein signatures of pre-clinical Alzheimer’s disease in the Alzheimer’s disease neuroimaging initiative (ADNI) plasma proteome dataset. PLoS ONE 7, e34341 (2012).
46. Boughorbel, S., Jarray, F. & El-anbari, M. Optimal classifier for imbalanced data using Matthews correlation coefficient metric. PLoS ONE 12, 1–17 (2017).
47. Estrada, L., Torres, A., Sarlabous, L. & Jané, R. Improvement in neural respiratory drive estimation from diaphragm electromyographic signals using fixed sample entropy. IEEE J. Biomed. Health Inform. 20, 476–485 (2016).
48. Estrada, L., Torres, A., Sarlabous, L. & Jané, R. Influence of parameter selection in fixed sample entropy of surface diaphragm electromyography for estimating respiratory activity. Entropy 19, 460 (2017).
49. Buchman, T. G. The community of the self. Nature 420, 246–251 (2002).
50. Pincus, S. M. Approximate entropy as a measure of system complexity. Proc. Natl. Acad. Sci. U.S.A. 88, 2297–2301 (1991).
51. Pincus, S. Approximate entropy (ApEn) as a complexity measure. Chaos 5, 110–117 (1995).
52. Suki, B., Bates, J. H. T. & Frey, U. Complexity and emergent phenomena. Compr. Physiol. 1, 995–1029 (2011).
53. Seely, A. J. E. et al. Proceedings from the Montebello round table discussion. Second annual conference on complexity and variability discusses research that brings innovation to the bedside. J. Crit. Care 26, 325–327 (2011).
54. Sullivan, B. A. et al. Early heart rate characteristics predict death and morbidities in preterm infants. J. Pediatr. 174, 1–6 (2016).
55. Vaporidi, K. et al. Respiratory drive in critically ill patients: pathophysiology and clinical implications. Am. J. Respir. Crit. Care Med. 201, 20–32 (2019).
56. Georgopoulos, D. & Roussos, C. Control of breathing in mechanically ventilated patients. Eur. Respir. J. 9, 2151–2160 (1996).
57. Georgopoulos, D. Effects of mechanical ventilation on control of breathing. In Principles and Practice of Mechanical Ventilation (ed. Tobin, M. J.) 805–820 (2013).
58. Laghi, F. Assessment of respiratory output in mechanically ventilated patients. Respir. Care Clin. N. Am. 11, 173–199 (2005).
59. Tobin, M. J., Laghi, F. & Jubran, A. Ventilatory failure, ventilator support, and ventilator weaning. Compr. Physiol. 2, 2871–2921 (2012).
60. Bertoni, M. et al. A novel non-invasive method to detect excessively high respiratory effort and dynamic transpulmonary driving pressure during mechanical ventilation. Crit. Care 23, 1–10 (2019).
61. Raoufy, M. R., Ghafari, T. & Mani, A. R. Complexity analysis of respiratory dynamics. Am. J. Respir. Crit. Care Med. 196, 247–248 (2017).
62. Costa, M. D. & Goldberger, A. L. Generalized multiscale entropy analysis: application to quantifying the complex volatility of human heartbeat time series. Entropy 17, 1197–1203 (2015).
63. Chen, W., Zhuang, J., Yu, W. & Wang, Z. Measuring complexity using FuzzyEn, ApEn, and SampEn. Med. Eng. Phys. 31, 61–68 (2009).
64. Porta, A. et al. Measuring regularity by means of a corrected conditional entropy in sympathetic outflow. Biol. Cybern. 78, 71–78 (1998).
65. Li, P. et al. Assessing the complexity of short-term heartbeat interval series by distribution entropy. Med. Biol. Eng. Comput. 53, 77–87 (2015).
## Acknowledgments
This work was funded by project PI16/01606, integrated in the Plan Nacional de R+D+I and co-funded by the ISCIII-Subdirección General de Evaluación y el Fondo Europeo de Desarrollo Regional (FEDER); by project RTC-2017-6193-1 (AEI/FEDER UE); by CIBER Enfermedades Respiratorias; and by Fundació Parc Taulí.
## Author information
### Contributions
Study concept and design: L.S., J.A.E., R.M., C.d.H., and L.B. Data acquisition: L.S., J.A.E., C.d.H., C.S., M.B., G.G., A.O., and R.F. Data processing and interpretation: L.S., J.A.E., R.M., J.L.A., R.F., and L.B. Statistical analysis: L.S., R.M., and M.R. Figure preparation: L.S., J.A.E. and R.M. Drafting of the manuscript: L.S., J.A.E., and R.M. Revision of manuscript for important intellectual content: L.S., J.A.E., R.M., C.d.H., J.L.A., R.F., and L.B. Study supervision: L.S., J.A.E., R.M., C.d.H., A.O., R.F., and L.B. Data access and responsibility: L.B. had full access to all of the data in the study and takes full responsibility for the integrity of the data and the accuracy of the data analysis. L.S. and J.A.E. contributed equally to the study. All authors reviewed the manuscript.
### Corresponding author
Correspondence to Leonardo Sarlabous.
## Ethics declarations
### Competing interests
L.S., J.A.E., R.M., C.d.H., J.L.A., and L.B. have been named in a provisional European patent application number EP19383116 owned by Corporació Sanitària Parc Taulí: “A device and method for respiratory monitoring in mechanically ventilated patients”. L.B. is the inventor of a US patent owned by Corporació Sanitària Parc Taulí: “Method and system for managed related patient parameters provided by a monitoring device”, US Patent No. 12/538,940. L.B. owns stock options of BetterCare S.L., a research and development spinoff of Corporació Sanitària Parc Taulí. The remaining authors have no conflicts of interest.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Sarlabous, L., Aquino-Esperanza, J., Magrans, R. et al. Development and validation of a sample entropy-based method to identify complex patient-ventilator interactions during mechanical ventilation. Sci Rep 10, 13911 (2020). https://doi.org/10.1038/s41598-020-70814-4
---
# [XeTeX] XeTeX/xdvipdfmx or the driver bug with eps images
Wed May 28 19:53:03 CEST 2014
```On 28.05.2014 19:31, Zdenek Wagner wrote:
> 2014-05-28 18:21 GMT+02:00 Joseph Wright
> <joseph.wright at morningstar2.co.uk
> <mailto:joseph.wright at morningstar2.co.uk>>:
>
> On 28/05/2014 16:14, Akira Kakuto wrote:
> > Dear Vafa Karen-Pahlav
> >
> >> w.eps is taken from LaTeX graphics companion examples;
> >> therefore I do not think there is anything wrong with the image
> itself.
> >>
> >> What is wrong?
> >
> > It is sufficient to change the header of w.eps
> > from
> >     %!PS-Adobe-3.0
> > to
> >     %!PS-Adobe-3.0 EPSF-3.0
> > in order to tell Ghostscript that w.eps is an
> > eps file.
> >
> > Please try, then you will obtain an expected pdf.
> >
> > Thanks,
> > Akira
>
> All true, but both latex + dvips and pdflatex produce the expected
> output, as do latex + dvipdfmx or xelatex with the older driver set up.
>
>
> dvips includes EPS directly, it does not need ghostscript, pdflatex is
> not able to insert EPS and if \write18 is allowed, epstopdf is called.
> Probably epstopdf invokes ghostscript in a different way than xdvipdfmx
> does.
`epstopdf' does not require that the PostScript file is a strict
Encapsulated PostScript file. It takes the BoundingBox it can find,
moves the graphics to the origin, sets the new media size
(setpagedevice) and calls ghostscript for the conversion to PDF.
In the case that the PostScript file is *not* an EPS file, this
might succeed or fail.
XeTeX/xdvipdfmx/dvipdfmx are using ghostscript with option `-dEPSCrop',
configured in TDS:dvipdfmx/dvipdfmx.cfg. This option *requires*
EPS files. It seems that ghostscript can be fooled by an EPSF header.
However, PostScript can use any PostScript operator, but EPS files are
restricted. Problematic operators such as setpagedevice are forbidden.
Renaming a file from .ps to .eps does *NOT* convert a PS file to
an EPS file.
It is much better to generate an EPS file in the first place
(setting an option to create EPS instead of PS in the driver/program
that generates the PostScript, ...).
Yours sincerely
Heiko Oberdiek
```
---
# Tag Info
69
It sounds like you are imagining that what satellites do is go up through the atmosphere, break though into outer space, and hang there. That is not right. If you simply go straight up to outer space (say 300 km above Earth's surface), gravity will pull you right back down, even if you've left the atmosphere, and you'll crash back into the Earth. Gravity is ...
47
I'm going to assume the bottom end of the rod is fixed, so the rod rotates around it. I think this is what you have in mind - shout if it isn't. So at some point during its fall the rod looks like: The mass of the rod is $m$ and the mass of the weight on the end is $M$, and I've drawn in the forces due to gravity. To write down the equation of motion ...
30
I highly recommend you download Kerbal Space Program and see for yourself (there's a free demo version)! Typically the goal of a satellite is to orbit, and thus as the other answers address, you must build significant horizontal velocity. Indeed if the Earth didn't have an atmosphere, you could orbit a few km above the surface, so the main goal is building ...
25
If the mass/charge is symmetrically distributed on your sphere, there is no force acting on you, anywhere within the sphere. This is because every force originating from some part of the sphere will be canceled by another part. Like you said, if you move towards on side, the gravitational pull of that side will become stronger, but then there will also be ...
24
The force you can exert is your mass times your acceleration. By the equivalence principle, just standing still is equivalent to accelerating at 9.8 m/s², which is where the force of your weight comes from when you just stand still. But it is easy to accelerate more - like when you jump. The force is only limited by your ability to push yourself off ...
21
Yes, this is possible. It is perfectly fine for a mass configuration to produce, for points outside a sphere of radius $R$ centred at $\mathbf r_0$, a gravitational field identical to that of a point mass at $\mathbf r_0$, and still be completely empty inside a smaller sphere of radius $a$ around $\mathbf r_0$. The spherical-shell model you describe is ...
19
The answer is that inside a spherically symmetric shell of matter (your hollow earth or massive beach ball) there is no gravitational force anywhere - you will not "fall" in any direction, whether you are at the centre or not, regardless of the radius of the sphere. This is a classic result of both Newtonian Gravity, and Einstein's General Theory of ...
7
I assume that you're imagining two sticks whose bases stay stuck still to the ground as they tip over. In this case, what we should do to calculate the tipping rate (I wouldn't exactly say "falling"-- I think it's misleading) is consider the torque applied to each stick as it tips. What the answer really comes down to is the struggle between torque (the ...
6
Emilio Pisanty's answer is a great one to this particular answer, i.e. there is no in principle bar from the laws of physics to a structure like yours and it would have the zero gravity inside property that Emilio describes. But, from a materials perspective, it would be pretty much impossible for a planet like this to form unless very small. The stablest ...
6
Use a lever. For an application like driving branches in the great outdoors, you would need to come prepared, or locate the site next to something heavy. Anchor one end of the lever under something heavy. Attach something to the branch to press against, and put the branch between you, at one end, and the fulcrum at the other. The lever acts as a force ...
5
If you think about Newton's third law and standing still vs jumping. When you stand still the ground exerts a reaction force on you which is equal and opposite to your weight by Newton's third law. If you jump upwards at the point where you begin to drive upwards you are applying a greater force on the ground than the standing still case, this difference is ...
4
Rockets take the shortest path to reach their orbit. If all we want to do is pop up to LEO and come back down again, then we do go straight up. See the first two Mercury missions for an example - they landed north of Nassau. If you want to end up in orbit, you need substantial horizontal speed. Turning at right angles is the least fuel-efficient way to do ...
4
According to the shell theorem, the gravitational force inside such a hollow spherical planet would be 0 On the outside, gravitational force would be as if the planet was a regular planet. At some point during your fall, the gravitational force pulling you to the center would decrease to 0. If there's atmosphere inside the planet, it would slow you down ...
4
Because of a convention wherein zero gravitational potential is said to be at infinity. See Wikipedia: $V(x) = \frac{W}{m} = \frac{1}{m} \int\limits_{\infty}^{x} F \ dx = \frac{1}{m} \int\limits_{\infty}^{x} \frac{G m M}{x^2} dx = -\frac{G M}{x}$ "By convention, it is always negative where it is defined, and as x tends to infinity, it approaches zero." ...
4
Yes, $F=ma$, but also $v=at$. That means that, as you fall for a longer time, your speed will increase. After 1 second, you are going at $9.8 m/s$ or $35 km/h$, about the speed of Usain Bolt. After 10 seconds you would reach $98 m/sec$ or $350 km/h$. For a free-falling human, the air resistance actually limits you to about $200 km/h$. When you hit the ...
3
In your context, the second interpretation is correct. The fact is that falling objects accelerate both on Earth and on the Moon. The sentence is saying that the amount of this acceleration, regardless the source, is six times greater on Earth than on the Moon. In other words, things accelerate towards the surface of Earth six times faster than they ...
3
We'll start with your second question, as that's (much) simpler. Close to the surface of the earth, it's safe to assume that the force of gravity is proportional to the mass of your object: $$F_G = mg$$ where $m$ is the mass, $F_G$ is the force of gravity, and $g$ is a constant (for the earth $g \approx 9.8 m/s^2$.) Then Newton's second law tells us that ...
3
Cosmic Velocity has nothing to do with infinity. A cosmic velocity is the minimum speed directed in the necessary direction to escape the gravitational attraction of a cosmic body such as a planet, a star, or a galaxy. Here is a paper which a student wrote about the four cosmic velocities. I don't know if his exact classifications are in common usage, but ...
3
The difference probably is that the graph for the gravitational potential is the one for a spherical mass distribution (or a sphere with a certain mass if you wish) and the electric one is given for a point charge. You could also draw the gravitational potential for a point mass, then it would look equivalent to your electrical potential, or the other way ...
3
I'm assuming you haven't taken any physics courses, so let's start by explaining the concept of a force. Forces are the central focus of classical mechanics. Basically, a force is a push or pull on an object as a result of its interaction with another object. When applied to an object with mass, a force causes the object's velocity to change in some way. ...
3
The rotation period $T$ is given by $$T=2\pi \sqrt{\dfrac{a^3}{G(M_\text{Sun}+M_\text{planet})}}$$ where $a$ is the semi-major axis of the orbit. Roughly: $M_\text{Sun}=2\times 10^{30}$ kg $M_\text{Earth}=6\times 10^{24}$ kg $M_\text{Jupiter}=2\times 10^{27}$ kg If you assume both Earth and Jupiter are orbiting around the Sun (and neglect the ...
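As a quick numerical check of this formula (a sketch using the rounded masses above; the value of G and Earth's semi-major axis are our additions):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 2e30         # kg (rounded, as above)
M_earth = 6e24       # kg
a = 1.496e11         # Earth's semi-major axis, m

T = 2 * math.pi * math.sqrt(a**3 / (G * (M_sun + M_earth)))
print(T / 86400)     # ~364 days with these rounded values
```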
3
I was wondering that if they were orbiting in same orbit then will they both have same time period? If yes, then why because as they both have different angular momentum and both have so much of differences. I'll break this down into two parts, first looking at the period of individual objects orbiting the Sun at a distance of one astronomical unit (but ...
2
I'm sure this is an incomplete answer, and would only add to those already excellent, more knowledgeable answers, but I do know that rocket scientists try to situate their launch site as close to the equator as they can, so the Earth's diameter gives some extra "throw" as compared to higher latitudes. I expect when the rocket tilts toward the horizontal, it ...
2
Yes, you can. The amount of force you can exert on an object is limited only by the geometry and strength of your muscles. However, Newton's 3rd law dictates that however much force you exert on an object, the object will exert the same amount of force on you, in the opposite direction. So, if you exert a force larger than your weight down on a stick, ...
2
If there is no resistance then there will be no net torque applied about the center of mass. So any initial rotational speed will remain. The rotation center is going to be the center of mass. The effect of gravity will be to accelerate the center of mass, and it will have no effect on the rotational motion of the body. See this accepted answer for a ...
2
Yes it is theoretically possible, as discussed in the other answers and indeed we already do a variation of harvesting a planet's orbital kinetic energy in the space navigation manoeuvre called "gravity assist" or "slingshot" to boost a spacecraft's speed without expending propellant. Here one makes one's spacecraft "collide" with a planet (i.e. make a very ...
2
As @lemon mentioned: you can treat the sphere as a single point (this follows from point 1 of the shell theorem), so you just need to integrate along the length of the ring. That is what I did. Treating the sphere as a point mass, it experiences a gravitational force of attraction: $$df=\cos\theta$$ As vertical ...
2
Gravitational force is really weak compared to the other fundamental forces, so it's very difficult to measure the gravitational constant. This is how Cavendish did it without knowing the Earth's mass: He put two lead balls on either end of a long bar. He hung the bar at its center from a long twisted wire with known torque. Then, he placed two really ...
2
This, in fact, is not too different from the flat surface case, the difference lies only in the distribution of the force components along the '$x$ and $y$' components of velocity. In the flat surface projection case, you have acceleration only along the vertical direction. The given case can be converted to a non-inclined plane case by mentally rotating ...
In the case where radial freefall is from rest at some initial $r=R$, the motion will be periodic if you treat the gravitating body as a point-mass and ignore collisions. Since the radius is strictly positive, it makes sense to substitute $$r = R\cos^2\left(\frac{\eta}{2}\right) = \frac{R}{2}\left(1+\cos\eta\right)\text{.}$$ while conservation of specific ...
# 4. Run LAMMPS
These pages explain how to run LAMMPS once you have installed an executable or downloaded the source code and built an executable. The Commands doc page describes how input scripts are structured and the commands they can contain.
A Tutorial on Solving Equations Part 3
This is the third part of the series of tutorials on solving equations. In this part, we will solve more complicated equations especially those that contain fractions. The first part and the second part of this series discuss 10 sample equations. We start with the 11th example.
Example 11: -5x – 3 = -4x + 12
This example deals with the question: what if $x$ turns out to be negative? Let us solve the equation. We want $x$ on the left and all the numbers on the right. So, we add 4x to both sides.
-5x – 3 + 4x = -4x + 4x + 12
-x – 3 = 12
Next, we add 3 to both sides to eliminate -3 from the left hand side of the equation, which gives -x = 15. Finally, multiplying both sides by -1, we get x = -15.
# Integral of inverse of square root of quartic function with real roots
I was doing a physics problem and in order to finish it, I need to prove that:
$$\int_{x_1}^{x_2}\frac{dx}{\sqrt{(x_1 - x)(x - x_2)(x - x_3)(x - x_4)}} = \int_{x_3}^{x_4}\frac{dx}{\sqrt{(x_1 - x)(x - x_2)(x - x_3)(x - x_4)}}$$
where all the roots of the polynomial are real and $x_1 < x_2 < x_3 < x_4$.
I don't know the range of validity of my claim; however, I have tested this using Mathematica and it was true in all the cases I tried. Also, I know that I can change the integrand using some transformations to get an elliptic integral of the first kind. However, I think it would be a lot of work and would not pay off (maybe I'm wrong), so I'm looking for a more elegant and direct proof.
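For readers who want to reproduce the numerical experiment without Mathematica, here is a quick check (a sketch with hypothetical roots; SciPy's QUADPACK routine copes with the integrable $1/\sqrt{\cdot}$ endpoint singularities):

```python
import numpy as np
from scipy.integrate import quad

x1, x2, x3, x4 = 0.0, 1.0, 3.0, 7.0   # any real roots with x1 < x2 < x3 < x4

def f(x):
    return 1.0 / np.sqrt((x1 - x) * (x - x2) * (x - x3) * (x - x4))

I_L, _ = quad(f, x1, x2)
I_R, _ = quad(f, x3, x4)
print(I_L, I_R)   # the two values agree to many decimal places
```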
Is it true that $x_4-x_3=x_2-x_1$? In such a case, you can simply exploit the fact that the graph of the integrand is symmetric with respect to the $x=\frac{x_2+x_3}{2}$ line. – Jack D'Aurizio Sep 1 '14 at 18:51
No, this is not always true. I've checked using different values of $x_1,x_2,x_3$ and $x_4$ and my statement seems to be always true (at least up to 5 decimal places lol). – Stephen Dedalus Sep 1 '14 at 18:56
Yes, I fixed that. My expression is less symmetrical now. :( But I think it is better to do it this way instead of talking about branches and stuff. – Stephen Dedalus Sep 1 '14 at 19:09
Also, I really appreciate the effort to make this question more precise. As I've said, this was just an integral that appeared after some mathematical manipulation. – Stephen Dedalus Sep 1 '14 at 19:13
@JackD'Aurizio, there is some evidence supporting this, see my answer – Will Jagy Sep 1 '14 at 20:11
EVEN MORE: given $0,1 < C < D$ we get an interchange from the Moebius transformation $$m(z) = \frac{-Cz \; + \; CD}{(D-C-1) \; z \; + \; C}$$ Here $$m(0) = D, m(D) = 0, m(1) = C, m(C) = 1.$$ Note that if $D-C = 1,$ we have a linear map and Jack's observation applies directly, the two intervals would then be the same length.
Alright, that actually worked, proof by Moebius transformation. Take $$W = D - C - 1,$$ which could be positive, negative, or zero, we don't know. With $$x = \frac{-Cz \; + \; CD}{W \; z \; + \; C},$$ $$dx = \frac{-C (C + DW)}{ (Wz+C)^2} dz.$$ But $C + DW = (D-C)(D-1),$ so $$dx = \frac{-C (D-C)(D-1)}{ (Wz+C)^2} dz.$$ Also use $W+C = D-1, W+1=D-C.$
We need four items, $$(Wz+C) x = C(D-z),$$ $$(Wz+C) (1- x) = (W+C) z + (C-CD) = (D-1)z + C (1 -D) = (D-1)(z-C),$$ $$(Wz+C)(C-x) = C(D-C)(z-1),$$ $$(Wz+C)(D-x) = (D-C)(D-1)z.$$
Together $$\color{magenta}{ -C (D-C)(D-1) \int_D^C \frac{(Wz+C)^2}{\sqrt{C^2 (D-1)^2(D-C)^2 z (z-1)(z-C)(D-z)}} \frac{dz}{(Wz+C)^2}}$$
$$\color{magenta}{ C (D-C)(D-1) \int_C^D \frac{dz}{ C (D-C)(D-1)\sqrt{ z (z-1)(z-C)(D-z)}} }$$
$$\color{magenta}{ \int_C^D \frac{dz}{\sqrt{ z (z-1)(z-C)(D-z)}} }$$
EXTRA comment: from @Jack's observation, it works for $0,1,C, C+1.$ A fair amount of work to get right; you can then find the derivative with respect to $D$ of the two integrals with roots $0,1,C,D.$ The first one is not so hard, differentiate under the integral sign; the second has the extra problem that $D$ is both a limit and in the integrand, so it might be better to transform first so that it is no longer one of the integration endpoints; need to think about the extent to which that can be done.
Too long for a comment; I think you are onto something. $$\int_0^1 \frac{dx}{\sqrt{x - x^2}} = \pi,$$ antiderivative is $\; \; \; - \arcsin (1-2x), \; \;$ so $$\int_A^B \frac{dx}{\sqrt{(x-A)(B-x)}} = \pi.$$
I found $-1,0 < C< D$ a little more convenient, pull out $(C-x)(D-x)$ and bound without integrating those,... $$\frac{\pi}{\sqrt{(C+1)(D+1)}} \leq \int_{-1}^0 \frac{dx}{\sqrt{-(x+1)x(C-x)(D-x)}} \leq \frac{\pi}{\sqrt{CD}}$$ while $$\frac{\pi}{\sqrt{(C+1)(D+1)}} \leq \int_C^D \frac{dx}{\sqrt{(x+1)x(x-C)(D-x)}} \leq \frac{\pi}{\sqrt{CD}}$$
I do not yet have the explicit transformation, if indeed your two integrals are always equal, but I suspect they are. This is too much of a coincidence otherwise. Here my idea is $C,D$ large, maybe close together, maybe far apart, maybe very very far apart. If you are worried about $C,D$ very small, then just take a linear change so that those are $-1,0$ instead
There is a point $z$ in the $(x_2,x_3)$ interval such that $\frac{z-x_2}{x_3-z}=\frac{x_2-x_1}{x_4-x_3}$ and a point $w$ in the same interval such that $\frac{w-x_2}{x_3-w}=\frac{x_4-x_3}{x_2-x_1}$. I bet that by considering $z$ or $w$ as the origin we can exhibit an explicit transformation between the two integrals (that are complete elliptic integrals of the second kind). – Jack D'Aurizio Sep 1 '14 at 20:11
Yes, probably it works. We just need to switch $x_1,x_2$ and $x_4,x_3$ with a birational map. – Jack D'Aurizio Sep 1 '14 at 20:19
We must take $u\in(x_2,x_3)$ such that $\frac{u-x_1}{x_4-u}=\frac{x_3-u}{u-x_2}$. – Jack D'Aurizio Sep 1 '14 at 20:21
@StephenDedalus, added to answer. Loved your work in Ulysses... note that for proof of equality, it suffices to take $A=0, B=1.$ You called them $x_1,x_2,x_3,x_4$ I guess. – Will Jagy Sep 1 '14 at 20:36
@JackD'Aurizio, it is a little different, I was never quite sure what you wanted with the special value for $u,$ so that is yours in any case. Strange problem, I would have ignored it if the extreme behavior gave an obvious inequality, quite surprised when the estimates were identical. – Will Jagy Sep 1 '14 at 21:35
We can assume that $x_1=0,x_2=1,x_3=C,x_4=D$ with $D>C>1$.
There is a point $z$ for which: $$(z-x_2)(x_3-z)=(z-x_1)(x_4-z),\tag{1}$$ and by solving $(1)$ we get: $$z = \frac{C}{1+C-D}\tag{2}$$ (if $D-C=1$, the observation I made in the comments settles the question) from which we have: $$z-x_2=\frac{D-1}{1+C-D},\quad x_3-z =\frac{C(C-D)}{1+C-D},\quad x_4-z=\frac{(C-D)(D-1)}{1+C-D}.\tag{3}$$ Now the first integral can be written as: $$I_L = \int_{-\frac{C}{1+C-D}}^{-\frac{D-1}{1+C-D}}\frac{du}{\sqrt{-\left(u+\frac{C}{1+C-D}\right)\left(u+\frac{D-1}{1+C-D}\right)\left(u-\frac{C(C-D)}{1+C-D}\right)\left(u-\frac{(C-D)(D-1)}{1+C-D}\right)}}\tag{4}$$ while the second integral can be written as: $$I_R = \int_{\frac{C(C-D)}{1+C-D}}^{\frac{(C-D)(D-1)}{1+C-D}}\frac{dv}{\sqrt{-\left(v+\frac{C}{1+C-D}\right)\left(v+\frac{D-1}{1+C-D}\right)\left(v-\frac{C(C-D)}{1+C-D}\right)\left(v-\frac{(C-D)(D-1)}{1+C-D}\right)}}\tag{5}$$ and the equivalence between $(4)$ and $(5)$ follows from the substitution $$v = -\frac{C(C-D)(D-1)}{(1+C-D)^2 u}.\tag{6}$$
So, if we perform a translation first, we just need to apply an inversion to map the first integral into the second one, and calculations become a little easier to perform. The content of this answer, however, is more or less the same as Will Jagy's answer: Moebius map works.
With the help of Mathematica, I also got a closed form expression:
$$I_L = I_R = \frac{2}{\sqrt{CD-C}}\cdot K\left(\frac{D-C}{CD-C}\right),\tag{7}$$
where $K(\cdot)$ is the complete elliptic integral of the first kind (with the Mathematica convention): $$K(k)=\int_{0}^{1}\frac{dt}{\sqrt{(1-t^2)(1-kt^2)}}=\frac{\pi}{2}\sum_{n=0}^{+\infty}\left(\frac{1}{4^n}\binom{2n}{n}\right)^2 k^n.$$ Notice that the argument of the complete elliptic integral in $(7)$ is deeply related to the cross-ratio of $x_1,x_2,x_3,x_4$.
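A numerical sanity check of $(7)$ (a sketch with hypothetical values of $C,D$; scipy.special.ellipk takes the parameter $m$, the same convention as Mathematica's EllipticK):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk   # complete elliptic integral K(m)

C, D = 3.0, 7.0   # roots normalized to 0, 1, C, D with 1 < C < D

def f(x):
    return 1.0 / np.sqrt(-x * (x - 1.0) * (x - C) * (x - D))

I_L, _ = quad(f, 0.0, 1.0)
closed_form = 2.0 / np.sqrt(C * D - C) * ellipk((D - C) / (C * D - C))
print(I_L, closed_form)   # compare the integral against (7)
```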
Great minds think alike. – Will Jagy Sep 1 '14 at 21:31
The projective curve $X$ given by $Y^2Z^2=(X-x_1Z)(X-x_2Z)(X-x_3Z)(X-x_4Z)$ is smooth and has genus $1$. Let $\omega = dx/y$. Then $\mathbb C\cdot\omega = \Omega^1(X/\mathbb C)$. Let $\Lambda \subseteq \mathbb C$ be the period lattice of $\omega$. Then $\Lambda$ is a free $\mathbb Z$-module of rank $2$. Since $X$ and $\omega$ are defined over $\mathbb R$, $\Lambda = (i\mathbb R \cap \Lambda) \oplus (\mathbb R \cap \Lambda)$. Let $\sigma \in \mathbb R \cap \Lambda$ be the smallest positive real period. I claim that both of your integrals are equal to $\sigma/2$.
Consider in $H_1(X(\mathbb C), \mathbb Z)$ the homology classes of the cycles $\gamma_1$ and $\gamma_2$ given by:
$\gamma_1$ is the cycle which goes from $x_1$ to $x_2$ along the first sheet and comes back to $x_1$ through the second sheet;
$\gamma_2$ is the cycle which goes from $x_3$ to $x_4$ along the first sheet and comes back to $x_3$ through the second sheet.
(Here I am viewing $X$ as a double-sheeted covering of $\mathbb P^1$ given by $(x,y) \mapsto x$.)
It is not hard to see that the classes of $\gamma_1$ and $\gamma_2$ are equal (draw a picture of $X$ lying over $\mathbb P^1$). Hence $\int_{\gamma_1} \omega = \int_{\gamma_2}\omega = \sigma$, and your integral is half of that.
Alternatively, make $X$ into an elliptic curve by choosing the point $P_1=[x_1 : 0 : 1]$ as a base point. Then the points $P_i = [x_i : 0 : 1]$ $(1 \leq i \leq 4)$ are the $2$-torsion points on $X$. Let $f: X \to X$ be $P \mapsto P+P_3$. The differential $\omega$ is translation-invariant, hence
$$\int_{P_1}^{P_2} \omega = \int_{P_1}^{P_2} f^*\omega = \int_{P_1+P_3}^{P_2+P_3} \omega = \int_{P_3}^{P_4}\omega.$$
(+1) Very elegant argument. – Jack D'Aurizio Sep 1 '14 at 23:07
Thank you @Jack! – Bruno Joyal Sep 1 '14 at 23:08
# What would the Venn diagram for this logical expression look like?
Given the expression:
(p $\implies$ q) $\wedge$ (q $\implies$ r)
I got rid of the implications to get:
(¬p $\vee$ q) $\wedge$ (¬q $\vee$ r)
I first drew two Venn diagrams: one for the left-hand side of the conjunction symbol, then one for the right.
There are three sets. When I draw the two diagrams separately, do I draw just two circles each and then combine the two? Or do I start by drawing three circles, even though one of the sets does not appear in each conjunct?
If I draw the Venn diagrams for both sides using just two circles each and then combine them, I get a Venn diagram where everything outside the three sets is shaded. But if I start off with three circles and then combine them due to the conjunction symbol, I get everything outside the three sets shaded, as well as the part where all three sets share a common area.
Which one is correct?
Thanks!
An implication $p \rightarrow q$ is analogous to the set containment $P \subseteq Q$. It tells us nothing about any arbitrary element $x$, except that if $x \in P$, then $x\in Q$. Similarly, $q \rightarrow r$ is analogous to the set containment $Q \subseteq R$. So if $x \in Q$, then $x\in R$.
So you have three nested circles, $p$ inside $q$ inside $r$, depicting $P \subseteq Q \subseteq R$. They may very well be the same circle, but none of $P$ can be outside $Q$, and none of $Q$ can be outside $R$. Nothing need be shaded. We are only given enough information to draw how the circles (sets) of the Venn diagram are related.
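A small enumeration makes this concrete (a sketch; each row where the formula comes out False is a region of the three-circle diagram that must be empty, and the True rows are exactly the assignments compatible with the nesting $P \subseteq Q \subseteq R$):

```python
from itertools import product

# the eight regions of a three-set Venn diagram correspond to the
# eight truth assignments to (p, q, r)
for p, q, r in product([False, True], repeat=3):
    value = ((not p) or q) and ((not q) or r)   # (p -> q) and (q -> r)
    print(f"p={p!s:5} q={q!s:5} r={r!s:5} -> {value}")
```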
|
## Surface integral
1. The problem statement, all variables and given/known data
Evaluate $\iint_S \mathbf{f}\cdot\mathbf{n}\,dS$ where $\mathbf{f}=x\mathbf{i}+y\mathbf{j}-2z\mathbf{k}$ and $S$ is the surface of the sphere $x^2+y^2+z^2=a^2$ above the $x$-$y$ plane.
3. The attempt at a solution
I know that the sphere's orthogonal projection has to be taken on the $x$-$y$ plane, but I'm having trouble with the integration.
Of course, you will have to do the upper and lower hemispheres separately. One way to get the projection into the $xy$-plane is to find the gradient of $x^2+y^2+z^2$, namely $2x\mathbf{i}+2y\mathbf{j}+2z\mathbf{k}$, and "normalize" by dividing by $2z$: $(x/z)\mathbf{i}+(y/z)\mathbf{j}+\mathbf{k}$. Then $\mathbf{n}\,dS$ is $\left((x/z)\mathbf{i}+(y/z)\mathbf{j}+\mathbf{k}\right)dx\,dy$, and $\mathbf{f}\cdot\mathbf{n}\,dS$ is $\left((x^2/z)+(y^2/z)-2z\right)dx\,dy$. I think I would rewrite that as $\left((x^2/z)+(y^2/z)+z-3z\right)dx\,dy = \left((x^2+y^2+z^2)/z-3z\right)dx\,dy = \left(a^2/z-3z\right)dx\,dy$. Now, for the upper hemisphere, $z= \sqrt{a^2- x^2- y^2}$, while for the lower hemisphere it is the negative of that. Because your integrand is an odd function of $z$, I think the symmetry of the sphere makes this obvious. Finally, do you know the divergence theorem? $$\iiint_T (\nabla \cdot \vec{v})\, dV= \iint_S (\vec{v} \cdot \vec{n})\, dS$$ where $S$ is the surface of the three-dimensional region $T$. Here $\nabla\cdot \mathbf{f}$ is very simple and, in fact, you don't have to do an integral at all! I wouldn't be surprised to see this as an exercise in a section on the divergence theorem.
I know this is a bit embarrassing for me, but how do you integrate $(a^2/z - 3z)\,dx\,dy$? After having substituted for $z$ and converted to polar coordinates, I get zero in the denominator! This is the expression: $\iint a^2/\sqrt{a^2-x^2-y^2}\,dx\,dy$. For conversion to polar coordinates, if I substitute $x=a\cos\theta$ and $y=a\sin\theta$, the denominator becomes zero. (Thanks a lot for the help anyway.)
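The trouble is the substitution: on the disk you need $x = r\cos\theta$, $y = r\sin\theta$ with $r$ running from $0$ to $a$; setting $x = a\cos\theta$ pins you to the boundary circle, where the denominator vanishes. A quick numerical check of the upper-hemisphere integral with the correct substitution (a sketch, using an arbitrary radius):

```python
import numpy as np
from scipy.integrate import quad

a = 2.0  # arbitrary sphere radius

# integrate (a^2/z - 3z) over the disk in polar coordinates,
# with z = sqrt(a^2 - r^2) and area element r dr dtheta
def integrand(r):
    z = np.sqrt(a**2 - r**2)
    return (a**2 / z - 3.0 * z) * r

I, _ = quad(integrand, 0.0, a)
print(2.0 * np.pi * I)   # ~0, consistent with the divergence-theorem shortcut
```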
# What is the difference between Tanha and Upadana?
What is the difference between Tanha and Upadana?
Why Buddha did not say Tanha Paccaya Bhava (instead of Upadana Paccaya Bhava)?
And there are three kinds of Tanha in Sutta (Kama, Bhava, Vibhava): please explain how these three link to Upadana?
I read the topic Why do the Noble Truths talk about 'craving', instead of about 'attachment'? but that does not answer my question -- I want answers which specifically discuss this in line with the three categories of Tanha and their relation to Upadana in Dependent Origination.
Here is an answer that is based on my study and meditation. It does not cite any official sources and tries to stay away from technical explanation in favor of real-life examples.
-- What is the difference between Tanha and Upadana?
To use a traditional example: say you met a very nice person. Let's say that person had a beautiful smile and a sweet, soft voice. The next day, you suddenly get a flash of that person's smile in your mind, and you "hear" the sound of their voice playing in your head, all of this accompanied by a desire to see that person again. That's tanha. You also feel a somewhat painful sensation in your chest because you're missing that person. That's dukkha.
Then, on the basis of that flash, you start obsessively thinking about the smile, the voice, the hair, the body, the manner of walking - you begin inwardly attending to as many features of the person as you can remember. You also start thinking about the place you met the person yesterday, hoping that if you go there again, there is a chance you will see him or her again. That's upadana.
The way it works, in my understanding, is like this: the moment you start thinking about going to that location to meet that person, you start thinking about future. In that imaginary future, there is you ("I") that you imagine will experience the pleasant moment of seeing the pretty person again. When you do that you create a sense of identity that spans a period of time from a point in the past to a point in the future. It's like you subconsciously say to yourself: "First I was here, without that person", "Then I will do A,B,C according to my plan", "Then I will be there - with that person", "Then I will enjoy". This personal ego-centric goal-making, projecting, and planning (in my opinion) is what causes Bhava - "becoming" or "individuating" or "identifying".
-- there are three kinds of Tanha in Sutta (Kama, Bhava, Vibhava): please explain how these three link to Upadana?
Same way as in the above. In the case of kama-upadana you keep thinking about pleasant sensations; your thinking and action are permeated by craving for a pleasant sensation - that's kama-upadana. For example, you keep getting a flash of kissing the person (tanha) and you start fantasizing about the circumstances under which you two would be kissing (upadana).
In case of bhava-upadana you keep obsessing about being in a certain position, living in a certain way, having something in your possession. Your thinking and action is permeated by craving to attain something. For example, you keep fantasizing about having married the person and how you two would be living together, what that would look like.
In case of vibhava-upadana you keep obsessing about something in your life now that you want to get rid of. Your thinking and action is permeated by craving to get rid of something. For example, you keep thinking about your extra weight, how bad it makes you look, and how much nicer it would be to get rid of it.
• Based on this answer, can I ask a question about how we can go about a relationship with a person without falling prey to Upadana and Tanha? Or will there be none if these two things are considered? – user13135 May 24 '18 at 10:30
• True love is full acceptance and metta/mudita; it is equivalent to Bodhicitta - the mind of Enlightenment. True love gives to its object the freedom to be him/her-self. It does not have elements of obsession or craving, those are sicknesses of mind. – Andrei Volkov May 24 '18 at 12:08
After understanding this answer, you will realize there are a lot of suttas which explain the 3 Taṇhā together with the 4 Upādāna. You can use this explanation everywhere in the Tipiṭaka, e.g., in the Brahmajālasutta, because the sutta I link below is just an example.
What is the difference between Tanha and Upadana?
Taṇhā is the lobha that arises at the beginning; Upādāna is the lobha that keeps arising after that Taṇhā. Taṇhā is like a hand touching lightly; Upādāna is like that hand tightening into a firm grip.
Taṇhā conditions Upādāna, and Upādāna conditions Bhava, so everything between Taṇhā and Bhava is Upādāna and Bhava, as in Sutta. Dī. Ma. Mahānidānasutta, which I will quote below.
Why Buddha did not say Tanha Paccaya Bhava (instead of Upadana Paccaya Bhava)?
Because just a beginning attachment (Taṇhā) is not enough to build Bhava; actually, frequent and varied attachments are required to build Bhava. See Sutta. Dī. Ma. Mahānidānasutta, the Dependent on Craving section (which is explained in Sutta. Ma. Mūl. Mahādukkhakkhandhasutta):
(There are many explanations in the two links I made above; please see inside them for the information between Taṇhā and Bhava.)
"Now, craving [Taṇhā] is dependent on feeling, seeking [Upādāna] is dependent on craving, acquisition [Upādāna] is dependent on seeking, ascertainment [Upādāna] is dependent on acquisition, desire and passion [Upādāna] is dependent on ascertainment, attachment [Upādāna] is dependent on desire and passion, possessiveness [Upādāna] is dependent on attachment, stinginess [dosa-typed mind factor] is dependent on possessiveness, defensiveness [Sīlabbata-Upādāna & kamma-Bhava] is dependent on stinginess, (the example of defensiveness:) and because of defensiveness, dependent on defensiveness, various evil, unskillful phenomena come into play: the taking up of sticks and knives; conflicts, quarrels, and disputes; accusations, divisive speech, and lies.
Note: This is the reason why a sotāpanna can no longer be born in apāya: a sotāpanna does not have Sīlabbata-Upādāna, which causes the breaking of precepts as in the final effect of the above quote.
If you cannot understand Upādāna from the above quote and its explanation in the Mahānidānasutta's Dependent on Craving section, please see the Mahādukkhakkhandhasutta, which fully explains Upādāna up to kamma-Bhava.
And there are three kinds of Tanha in Sutta (Kama, Bhava, Vibhava): please explain how these three link to Upadana?
In the Mahādukkhakkhandhasutta, kāma-Taṇhā, which craves the "five strands of sensuality", conditions Bhava/Vibhava-Taṇhā; these 3 Taṇhās then condition Upādāna through Jāti:
Sights known by the eye that are likable, desirable, agreeable, pleasant, sensual, and arousing.
Cakkhuviññeyyā rūpā iṭṭhā kantā manāpā piyarūpā (samudaya-saccaniddesa in saccapabba of mahāsatipaṭṭhānasutta) kāmūpasaṃhitā rajanīyā
After the above quote, the whole remaining content of the Mahādukkhakkhandhasutta concerns Bhava/Vibhava-Taṇhā conditioning Upādāna through Jāti.
Bhava-Taṇhā and Vibhava-Taṇhā connect to Diṭṭhi/Sīlabbata/Attavāda-Upādāna directly in Sutta. Ma. Mū. Cūlasīhanādasutta:
1. "Bhikkhus, there are these two views: the view of being (bhava-Diṭṭhi) and the view of non-being (Vibhava-Diṭṭhi). Any recluses or brahmans who rely on the view of being, adopt the view of being, accept the view of being, are opposed to the view of non-being. Any recluses or brahmans who rely on the view of non-being, adopt the view of non-being, accept the view of non-being, are opposed to the view of being.5
2. "Any recluses or brahmans who do not understand as they actually are the origin, the disappearance, the gratification, the danger (the same as in Mahādukkhakkhandhasutta because they are connect to each other) and the escape6 in the case of these two views are affected by lust, affected by hate, affected by delusion, affected by craving, affected by clinging, without vision, given to favoring and opposing, and they delight in and enjoy proliferation. They are not freed from birth, aging and death, from sorrow, lamentation, pain, grief and despair; they are not freed from suffering, I say.
...
1. "Bhikkhus, there are these four kinds of clinging (Upādāna). What four? Clinging to sensual pleasures, clinging to views, clinging to rules and observances, and clinging to a doctrine of self.
Actually, the 3 Taṇhā come together: you want to feel good (kāma), so you want to be born as a person able to feel good (bhava); and you do not want to be born as a person able to feel bad (vibhava), because you do not want to feel bad.
Then kāma-Taṇhā conditions kāma-Upādāna; Bhava/Vibhava-Taṇhā condition uccheda/sassata-Diṭṭhi-Upādāna and Sīlabbata-Upādāna, which depend on Attavāda-Upādāna. Please see the whole Sutta. Ma. Mū. Cūlasīhanādasutta; it is very clear inside that sutta.
The Atthakathā/Abhidhamma teachers just connected the suttas together. Readers will misunderstand the Tipiṭaka and Atthakathā as long as they think "these suttas conflict with those suttas". In fact, they are studying the Tipiṭaka with an inefficient study system, mere reading, so they feel the Tipiṭaka contradicts itself.
## Example
After this, I will let you see that "you can use this explanation everywhere in the Tipiṭaka".
Sutta. Saṃ. Ma. Dhammacakkappavattanasutta:
Idaṃ kho pana, bhikkhave, dukkhaṃ ariya·saccaṃ: Jāti·pi dukkhā, jarā·pi dukkhā (byādhi·pi dukkho) maraṇam·pi dukkhaṃ, a·p·piyehi sampayogo dukkho, piyehi vippayogo dukkho, yampicchaṃ na labhati tam·pi dukkhaṃ; saṃkhittena pañc·Upādāna·k·khandhā dukkhā.
Furthermore, bhikkhus, this is the dukkha ariya·sacca(2): Jāti is dukkha(6), jarā is dukkha(6) (sickness is dukkha) maraṇa is dukkha(6), association with what is disliked is dukkha, dissociation from what is liked is dukkha, not to get what one wants is dukkha; in short, the five Upādāna(4)'k'khandhas are dukkha(3).
Idaṃ kho pana, bhikkhave, dukkha·samudayaṃ ariya·saccaṃ: Y·āyaṃ Taṇhā ponobbhavikā nandi·rāga·sahagatā tatra·tatr·ābhinandinī, seyyathidaṃ: kāma·Taṇhā, Bhava·Taṇhā, Vibhava·Taṇhā.
Furthermore, bhikkhus, this is the dukkha·samudaya ariya·sacca(1): this Taṇhā leading to rebirth(7), (because Taṇhā) connected with desire and enjoyment (8) (this is Upādāna; see Upādāna description in Mahānidānasutta, which I quoted above), (by Taṇhā) finding delight here or there(=idiom:indefinite=Upādāna)(5), that is to say: kāma-Taṇhā, Bhava-Taṇhā and Vibhava-Taṇhā.
Above, the Buddha taught that dukkha·samudayaṃ ariya·saccaṃ(1), Taṇhā, conditions dukkha·ariya·saccaṃ(2), pañc·Upādāna·k·khandhā dukkhā, through Upādāna(4) [often attaching / Taṇhā finding delight here or there(5)] in the khandhā dukkhā(3); therefore Jāti·pi dukkhā / jarā·pi dukkhā / maraṇam·pi dukkhaṃ(6) [new upatti-Bhava], which are the last effects in paṭiccasamuppāda, are included in dukkha·ariya·saccaṃ(2), and they are explained as new upatti-Bhava in the statement "Taṇhā leading to rebirth (new upatti-Bhava)"(7), which carries the same descriptions of Taṇhā as the following statements "Taṇhā connected with desire and enjoyment"(8) and "Taṇhā finding delight here or there"(5).
Summary: (6)=(3), which depend on (4); and (4) is explained by (5), so to say that (5) conditions (7) is the same as to say that (4) conditions (7). Therefore the Buddha taught Upādāna-khandha instead of Taṇhā-khandha.
• @SarathW Please help me understand which part of this answer you cannot follow. I had to quote very short texts for easier reading, but you actually need to understand the whole of each sutta I linked above to understand my explanation. – Bonn May 23 '18 at 15:16
• Thanks. Both of your answers are helpful, except for the way the Buddha said Tanha leads to Bhava. In SN 56.11 the Buddha did say Tanha Paccaya Bhava. dhammawheel.com/… – SarathW May 23 '18 at 21:49
Tanha is liking sense-pleasures, liking for becoming and liking for non-becoming. Upadana is clinging for sense-pleasures, wrong views, rites-and-rituals and self-doctrine.
Upadana is at the level of clinging whereas Tanha is mere liking of the above mentioned.
Ex: you see an expensive watch in a shop and you start to like it. That is Tanha. That liking leads to wanting it for yourself. That is Upadana. That wanting leads to trying to buy it and buying it, or trying to steal it and stealing it. That is Bhava (karma).
What is the difference between Tanha and Upadana?
Tanha is the craving of one's mind towards worldly things; due to that craving, a bond is born towards those worldly things, and that bond is called "upadana".
Why Buddha did not say Tanha Paccaya Bhava (instead of Upadana Paccaya Bhava)?
As I said above, Tanha makes upadana, or bonds. When there are bonds, a being goes from life to life continuously; that is called Bhava, and a being takes birth life after life due to Bhava.
And there are three kinds of Tanha in Sutta (Kama, Bhava, Vibhava): please explain how these three link to Upadana?
• Kama tanha - arises from craving through the five senses
• Bhava tanha - the craving to go from life to life through "sansara" and find comfort there
• Vibhava tanha - the craving for comfort in thinking "there is no life or birth after this one"
The bonds made in the mind by these three are the upadana they produce, as described above.
# How to deploy machine learning with differential privacy?
In many applications of machine learning, such as machine learning for medical diagnosis, we would like to have machine learning algorithms that do not memorize sensitive information about the training set, such as the specific medical histories of individual patients. Differential privacy is a notion that allows quantifying the degree of privacy protection provided by an algorithm on the underlying (sensitive) data set it operates on. Through the lens of differential privacy, we can design machine learning algorithms that responsibly train models on private data.
## Why do we need private machine learning algorithms?
Machine learning algorithms work by studying a lot of data and updating their parameters to encode the relationships in that data. Ideally, we would like the parameters of these machine learning models to encode general patterns (e.g., ‘‘patients who smoke are more likely to have heart disease’’) rather than facts about specific training examples (e.g., “Jane Smith has heart disease”). Unfortunately, machine learning algorithms do not learn to ignore these specifics by default. If we want to use machine learning to solve an important task, like making a cancer diagnosis model, then when we publish that machine learning model (for example, by making an open source cancer diagnosis model for doctors all over the world to use) we might also inadvertently reveal information about the training set. A malicious attacker might be able to inspect the published model’s predictions and learn private information about Jane Smith. For instance, the adversary could mount a membership inference attack to know whether or not Jane Smith contributed her data to the model’s training set [SSS17]. The adversary could also build on membership inference attacks to extract training data by repeatedly guessing possible training points until they result in a sufficiently strong membership signal from the model’s prediction [CTW20]. In many instances, the model itself may be represented by a few of the data samples (e.g., Support Vector Machine in its dual form).
A common misconception is that if a model generalizes (i.e., performs well on test examples), then it preserves privacy. As mentioned earlier, this is far from true. One of the main reasons is that generalization is an average-case behavior of a model (over the distribution of data samples), whereas privacy must be provided for everyone, including outliers (which may deviate from our distributional assumptions).
Over the years, researchers have proposed various approaches towards protecting privacy in learning algorithms (k-anonymity [SS98], l-diversity [MKG07], m-invariance [XT07], t-closeness [LLV07], etc.). Unfortunately, all these approaches are vulnerable to what are called composition attacks [GKS08], which use auxiliary information to violate the privacy protection. Famously, this strategy allowed researchers to de-anonymize part of a movie ratings dataset released to participants of the Netflix Prize when the individuals had also shared their movie ratings publicly on the Internet Movie Database (IMDb) [NS08]. If Jane Smith had assigned the same ratings to movies A, B and C in the Netflix Prize dataset and publicly on IMDb at similar times, then researchers could link data corresponding to Jane across both datasets. This would in turn give them the means to recover ratings that were included in the Netflix Prize but not on IMDb. This example shows how difficult it is to define and guarantee privacy, because it is hard to estimate the scope of knowledge about individuals available to adversaries. While the dataset released by Netflix has since been taken down, it is difficult to ensure that all of its copies have been deleted. In recent years, instance-encoding-based methods like InstaHide [HSL20] and NeuraCrypt [YEO21] have been demonstrated to be vulnerable to such composition attacks as well.
As a result, the research community has converged on differential privacy [DMNS06] which, as opposed to ad-hoc approaches, provides the following semantic guarantee: an adversary learns almost the same information about an individual whether or not they are present in or absent from the training data set. In particular, it places a condition on the algorithm that is independent of who might be attacking it or of the specific instantiation of the data set. Put another way, differential privacy is a framework for evaluating the guarantees provided by a system that was designed to protect privacy. Such systems can be applied directly to "raw" data which potentially still contains sensitive information, altogether removing the need for procedures that sanitize or anonymize data and are prone to the failures described previously. That said, minimizing data collection in the first place remains a good practice to limit other forms of privacy risk.
## Designing Private Machine Learning Algorithms via Differential Privacy
Differential privacy [DMNS06] is a semantic notion of privacy that addresses a lot of the limitations of previous approaches like k-anonymity. The basic idea is to randomize part of the mechanism’s behavior to provide privacy. In our case, the mechanism considered is a learning algorithm, but the differential privacy framework can be applied to study any algorithm.
The intuition for why we introduce randomness into the learning algorithm is that it obscures the contribution of an individual, but does not obscure important statistical patterns. Without randomness, we would be able to ask questions like: “What parameters does the learning algorithm choose when we train it on this specific dataset?” With randomness in the learning algorithm, we instead ask questions like: “What is the probability that the learning algorithm will choose parameters in this set of possible parameters, when we train it on this specific dataset?”
We use a version of differential privacy which requires (in our use case of machine learning) that the probability of learning any particular set of parameters stays roughly the same if we change a single data record in the training set. Changing a record could mean adding a training example, removing one, or changing the values within one. A data record can be a single training example from an individual, or the collection of all the training examples provided by an individual. The former is often referred to as example-level/item-level privacy, and the latter is referred to as user-level differential privacy. While user-level privacy provides stronger semantics, it may be harder to achieve. For a more thorough discussion of the taxonomy of these notions, see [DNPR10, JTT18, HR12, HR13]. In this document, for ease of exposition of the technical results, we focus on the example-level notion. The intuition is that if a single patient (Jane Smith) does not affect the outcome of learning much, then that patient's records cannot be memorized and her privacy is respected. In the rest of this post, how much a single record can affect the outcome of learning is called the sensitivity of the algorithm.
The guarantee of differential privacy is that an adversary is essentially unable to distinguish the answers produced by the randomized algorithm on a data set from the answers produced by the same algorithm on a data set that differs in any single user's data. We also refer to the degree of indistinguishability as the privacy loss. Smaller privacy loss corresponds to a stronger privacy guarantee.
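To make this concrete, here is a minimal sketch of the classic Laplace mechanism on a counting query (the records and names below are hypothetical; a count has sensitivity 1, so Laplace noise with scale $1/\varepsilon$ yields $\varepsilon$-DP):

```python
import numpy as np

def private_count(records, predicate, epsilon, rng):
    # a counting query has sensitivity 1: adding or removing one record
    # changes the count by at most 1, so Laplace noise with scale
    # 1/epsilon provides epsilon-differential privacy
    true_count = sum(predicate(r) for r in records)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(seed=0)   # use a cryptographic RNG in production
patients = [{"name": "Jane Smith", "heart_disease": True},
            {"name": "John Doe", "heart_disease": False}]
print(private_count(patients, lambda p: p["heart_disease"], epsilon=0.5, rng=rng))
```

Whether Jane Smith's record is present or absent, the distribution of the released noisy count barely changes, which is exactly the indistinguishability described above.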
It is often thought that privacy is a fundamental bottleneck to obtaining good prediction accuracy/generalization in machine learning algorithms. In fact, recent research has shown that in many instances it actually helps in designing algorithms with strong generalization ability. Some examples where DP has resulted in better learning algorithms are online linear prediction [KV05] and private PCA [DTTZ14]. Notably, [DFH15] formally showed that generalization for any DP learning algorithm comes for free: if a DP learning algorithm has good training accuracy, it is guaranteed to have good test accuracy. This is true because differential privacy itself acts as a very strong form of regularization.
One might argue that the generalization guarantee a DP algorithm can achieve may be sub-par compared to that of its non-private baselines. For a large class of learning tasks, however, one can show that asymptotically DP does not introduce any error beyond the inherent statistical error [SSTT21]. [ACG16, BFTT19] highlight that, in the presence of enough data, a DP algorithm can get arbitrarily close to the inherent statistical error, even under strong privacy parameters.
## Private Empirical Risk Minimization
Before we go into the design of specific differentially private learning algorithms, we first formalize the problem setup and standardize some notation. Consider a training data set $$D=\{(x_1,y_1),\ldots,(x_n,y_n)\}$$ drawn i.i.d. from some fixed (unknown) distribution $$\Pi$$, with the feature vector being $$x_i$$ and the label/response being $$y_i$$. We define the training loss at any model $$\theta$$ as $$L_{train} (\theta, D) = \frac{1}{n} \sum_{i=1}^{n} l(\theta; (x_i, y_i))$$, and the corresponding test loss as $$L_{test} (\theta) = E_{(x,y) \sim \Pi}\, l(\theta;(x,y))$$. We will design DP algorithms to output models that approximately minimize the test loss while having access only to the training loss.
In the literature, there are a variety of approaches to designing these DP learning algorithms [CMS11, KST12, BST14, PAE16, BTT18]. One can categorize them broadly as: i) algorithms that assume the individual loss function $$l(\theta;\cdot)$$ is convex in the model parameter in order to ensure differential privacy, ii) algorithms that are differentially private even when the loss function is non-convex in nature (e.g., deep learning models), and iii) model-agnostic algorithms that do not require any information about the representation of the model $$\theta$$ or the loss function $$l(\theta;\cdot)$$. In our current discussion, we will only focus on designing algorithms for (ii) and (iii). This is because it turns out that the best known algorithms for (ii) are already competitive with algorithms specific to (i) [INS19].
## Private Algorithms for Training Deep Learning Models
The first approach, due to [SCS13, BST14, ACG16], is named differentially private stochastic gradient descent (DP-SGD). It proposes to modify the model updates computed by the most common optimizer used in deep learning: stochastic gradient descent (SGD). Typically, stochastic gradient descent trains iteratively. At each iteration, a small number of training examples (a "minibatch") are sampled from the training set. The optimizer computes the average model error on these examples, and then differentiates this average error with respect to each of the model parameters to obtain a gradient vector. Finally, the model parameters ($$\theta_t$$) are updated by subtracting this gradient ($$\nabla_t$$) multiplied by a small constant $$\eta$$ (the learning rate, which controls how quickly the optimizer updates the model's parameters). At a high level, DP-SGD makes two modifications to obtain differential privacy: first, gradients, which are computed on a per-example basis (rather than averaged over multiple examples), are clipped to control their sensitivity; second, spherical Gaussian noise $$b_t$$ is added to their sum to obtain the indistinguishability needed for DP. Succinctly, the update step can be written as follows: $$\theta_{t+1} \leftarrow \theta_t - \eta \cdot (\nabla_t + b_t)$$.
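The following is a minimal NumPy sketch of one such update, not a reference implementation: the per-example gradients would come from an autodiff framework, and the noise multiplier must be calibrated with a privacy accountant to reach a target $(\varepsilon, \delta)$.

```python
import numpy as np

def dp_sgd_step(theta, per_example_grads, lr, clip_norm, noise_multiplier, rng):
    # 1) clip each example's gradient so its L2 norm is at most clip_norm
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # 2) add spherical Gaussian noise to the *sum* of clipped gradients;
    #    the noise standard deviation scales with the clipping norm
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=theta.shape)
    # 3) average and take an ordinary gradient step
    return theta - lr * noisy_sum / len(per_example_grads)
```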
Let us take the example of a hospital training a model to predict whether patients will be readmitted after being released. To train the model, the hospital uses information from patient records, such as demographic variables and admission variables (e.g., age, ethnicity, insurance type, type of Intensive Care Unit admitted to) but also time-varying vitals and labs (e.g., heart rate, blood pressure, white blood cell counts) [JPS16]. The modifications made by DP-SGD ensure that if (1) Jane Smith’s individual patient record contained unusual features, e.g., her insurance provider was uncommon for people of her age or her heart rate followed an unusual pattern, the resulting signal will have a bounded impact on our model updates, and (2) the model’s final parameters would be essentially identical should Jane Smith have chosen to not contribute (i.e., opt-out) her patient record to the training set. Stronger differential privacy is achieved when one is able to introduce more noise (i.e., sample noise with larger standard deviation) and train for as few iterations as possible.
The two main components of the above DP-SGD algorithm that distinguish it from traditional SGD are: i) per-example clipping and ii) Gaussian noise addition. In addition, for the privacy analysis to hold, DP-SGD requires that minibatches be subsampled uniformly at random from the training data set. While this is not a requirement of SGD per se, in practice many implementations of SGD do not satisfy it and instead iterate over different permutations of the data at each epoch of training.
While gradient clipping is common in deep learning, often used as a form of regularization, it differs from the clipping in DP-SGD as follows: in standard training, the average gradient over the minibatch is clipped, as opposed to clipping the gradient of each individual example (i.e., of $$l(\theta_t;(x,y))$$) before averaging, as DP-SGD does. It is an ongoing research direction both to understand the effect of per-example clipping on DP-SGD model training [SSTT21], and to find effective ways to mitigate its impact in terms of accuracy [PTS21] and training time [ZHS19].
In standard stochastic gradient descent, subsampling is usually used either as a way to speed up the training process [CAR16] or as a form of regularization [RCR15]. In DP-SGD, the randomness in the subsampling of the minibatch is used to guarantee DP. The technical component behind this sort of privacy analysis is called privacy amplification by subsampling [KLNRS08, BBG18]. Since the sampling randomness is used to guarantee DP, it is crucial that the uniformity of the sampling step be of cryptographic strength. Another (possibly counterintuitive) feature of DP-SGD is that, for the best privacy/utility trade-off, it is in general better to have larger batch sizes. In fact, full-batch DP gradient descent may provide the best privacy/utility trade-offs, albeit at the expense of computational feasibility.
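For illustration, here is a sketch of Poisson subsampling, the sampling scheme assumed by most amplification-by-subsampling analyses (each example joins the minibatch independently with probability q; numpy's generator is used only for exposition, since a real deployment would need cryptographic-strength randomness):

```python
import numpy as np

def poisson_subsample(n, q, rng):
    # each of the n training examples enters the minibatch independently
    # with probability q, so the batch size itself is random
    return np.flatnonzero(rng.random(n) < q)

rng = np.random.default_rng(seed=0)   # illustration only; not cryptographic
batch_indices = poisson_subsample(n=1000, q=0.05, rng=rng)
print(len(batch_indices))             # ~50 on average
```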
For a fixed DP guarantee, the magnitude of the Gaussian noise added to the gradient update at each step of DP-SGD is proportional to the square root of the number of steps the model is trained for. As a result, it is important to tune the number of training steps for the best privacy/utility trade-off.
In the following, we provide a small code snippet to train a model with DP-SGD.
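This is a sketch along the lines of the Opacus quickstart; the PrivacyEngine.make_private call below follows the Opacus 1.x API as we understand it, so check the library's current documentation, and note that all data and hyperparameters are toy placeholders.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# toy stand-ins for a real dataset and model
X, y = torch.randn(1000, 20), torch.randint(0, 2, (1000,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=64)
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.1,   # larger -> stronger privacy, noisier gradients
    max_grad_norm=1.0,      # per-example clipping threshold
)

for epoch in range(5):
    for xb, yb in train_loader:
        optimizer.zero_grad()
        criterion(model(xb), yb).backward()
        optimizer.step()

# privacy budget spent so far, for a chosen delta
print(privacy_engine.get_epsilon(delta=1e-5))
```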
## Model Agnostic Private Learning
The Sample and Aggregate framework [NRS07] is a generic method to add differential privacy to a non-private algorithm without caring about its internal workings, i.e., it is model agnostic. In the context of machine learning, one can state the main idea as follows. Consider a multi-class classification problem. Take the training data and split it into k disjoint subsets of equal size. Train independent models $$\theta_1, \theta_2, \ldots, \theta_k$$ on the disjoint subsets. In order to predict on a test example x, first compute a private histogram over the set of k predictions $$\theta_1(x), \theta_2(x), \ldots, \theta_k(x)$$. Then, select and output the bin of the histogram with the highest count, after adding a small amount of Laplace/Gaussian noise to the counts. In the context of DP learning, this particular approach was used in two different lines of work: i) PATE [PAE16], and ii) model-agnostic private learning [BTT18]. While the latter focused on obtaining theoretical privacy/utility trade-offs for a class of learning tasks (e.g., agnostic PAC learning), the PATE approach focuses on practical deployment. Both lines of work make one common observation: if the predictions $$\theta_1(x), \theta_2(x), \ldots, \theta_k(x)$$ are fairly consistent, then the privacy cost in terms of DP is very small. Hence, one can answer a large number of prediction queries without violating DP constraints. In the following, we describe the PATE approach in detail.
The private aggregation of teacher ensembles (PATE) demonstrated in particular that this approach allows one to learn deep neural networks with differential privacy. It proposes to have an ensemble of models trained without privacy predict with differential privacy by having these models predict in aggregate rather than revealing their individual predictions. In PATE, we start by partitioning the private dataset into smaller subsets of data. These subsets are partitions, so there is no overlap between the data included in any pair of partitions. If Jane Smith's record was in our private dataset, then it is included in one of the partitions only. That is, only one of the teachers has analyzed Jane Smith's record during training. We train a ML model, called a teacher, on each of these partitions. We now have an ensemble of teacher models that were trained independently, but without any guarantees of privacy. How do we use this ensemble to make predictions that respect privacy? In PATE, we add noise while aggregating the predictions made individually by each teacher to form a single common prediction. We count the number of teachers who voted for each class, and then perturb that count by adding random noise sampled from the Laplace or Gaussian distribution. Each label predicted by the noisy aggregation mechanism comes with rigorous differential privacy guarantees that bound the privacy budget spent to label that input. Again, stronger differential privacy is achieved when we are able to introduce more noise in the aggregation and to answer as few queries as possible. Let us now come back to our running example. Imagine that we'd like to use the output of PATE to know if Jane likes a particular movie. The only teacher trained on the partition containing Jane Smith's data has now learned that a record similar to Jane's is characteristic of an individual who likes similar movies, and as a consequence changes its prediction on a test input which is similar to Jane's to predict the movie rating assigned by Jane. However, because the teacher only contributes a single vote to the aggregation, and the aggregation injects noise, we won't be able to know whether the teacher changed its prediction because it indeed trained on Jane's data or because the noise injected during the aggregation "flipped" that teacher's vote. The random noise added to vote counts prevents the outcome of the aggregation from reflecting the votes of any individual teacher, in order to protect privacy.
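Here is a minimal sketch of the noisy aggregation step (the vote vector and noise scale are made up for illustration; this follows the Laplace "noisy max" aggregation described above, not PATE's full training pipeline):

```python
import numpy as np

def pate_label(teacher_votes, num_classes, gamma, rng):
    # count the teachers voting for each class, perturb each count with
    # Laplace noise of scale 1/gamma, and release only the noisy argmax
    counts = np.bincount(teacher_votes, minlength=num_classes)
    noisy_counts = counts + rng.laplace(0.0, 1.0 / gamma, size=num_classes)
    return int(np.argmax(noisy_counts))

rng = np.random.default_rng(seed=0)
votes = np.array([2, 2, 2, 1, 2, 0, 2, 2, 1, 2])  # 10 teachers' predictions
print(pate_label(votes, num_classes=3, gamma=0.1, rng=rng))
```

When the teachers agree strongly, as above, the noise almost never changes the released label, which is why consensus queries are cheap in privacy terms.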
## Practically deploying differential privacy in machine learning
The two approaches we introduced have the advantage of being conceptually simple to understand. Fortunately, there also exist several open-source implementations of these approaches. For instance, DP-SGD is implemented in TensorFlow Privacy, Objax, and Opacus. This means that one is able to take an existing TensorFlow, JAX, or PyTorch pipeline for training a machine learning model and replace a non-private optimizer with DP-SGD. An example implementation of PATE is also available in TensorFlow Privacy. So what are the concrete potential obstacles to deploying machine learning with differential privacy?
The first obstacle is the accuracy of privacy-preserving models. Datasets are often sampled from distributions with heavy tails. For instance, in a medical application, there are typically (and fortunately) fewer patients with a given medical condition than patients without it. This means that there are fewer training examples from patients with each medical condition to learn from. Because differential privacy prevents us from learning patterns which are not found generally across the training data, it limits our ability to learn from those patients for whom we have very few examples [SPG]. More generally, there is often a trade-off between the accuracy of a model and the strength of the differential privacy guarantee it was trained with: the smaller the privacy budget, the larger the impact on accuracy typically is. That said, this tension is not always inevitable, and there are instances where privacy and accuracy are synergistic because differential privacy implies generalization [DFH15] (but not vice versa).
The second obstacle to deploying differentially private machine learning can be the computational overhead. For instance, in DP-SGD one must compute per-example gradients rather than average gradients. This often means that optimizations implemented in machine learning frameworks to exploit matrix algebra supported by underlying hardware accelerators (e.g., GPUs) are harder to take advantage of. In another example, PATE requires that one train multiple models (the teachers) rather than a single model so this can also introduce overhead in the training procedure. Fortunately, this cost is mostly mitigated in recent implementations of private learning algorithms, in particular in Objax and Opacus.
The third obstacle to deploying differential privacy, in machine learning but more generally in any form of data analysis, is the choice of privacy budget. The smaller the budget, the stronger the guarantee. This means one can compare two analyses and say which one is "more private". However, it also means that it is unclear what is "small enough" of a privacy budget. This is particularly problematic given that applications of differential privacy to machine learning often require a privacy budget that provides only weak theoretical guarantees in order to train a model whose accuracy is large enough to warrant a useful deployment. Thus, it may be interesting for practitioners to evaluate the privacy of their machine learning algorithm by attacking it themselves. Whereas the theoretical analysis of an algorithm's differential privacy guarantees provides a worst-case guarantee limiting how much private information the algorithm can leak against any adversary, implementing a specific attack can be useful to know how successful a particular adversary or class of adversaries would be. This helps interpret the theoretical guarantee but may not be treated as a direct substitute for it. Open-source implementations of such attacks, e.g., for membership inference, are increasingly available.
## Conclusion
In the above, we discussed some of the algorithmic approaches towards differentially private model training which have been effective in both theoretical and practical settings. Since this is a rapidly growing field, we could not cover all the important aspects of the research space. Some prominent ones include: i) Choice of the best hyperparameters in the training of DP models: in order to ensure that the overall algorithm preserves differential privacy, one needs to ensure that the choice of hyperparameters itself preserves DP. Recent research has provided algorithms for selecting the best hyperparameters in a differentially private fashion [LT19]. ii) Choice of network architecture: it is not always true that the best known model architectures for non-private model training are also the best for training with differential privacy. In particular, we know that the number of model parameters may have adverse effects on the privacy/utility trade-off [BST14]. Hence, choosing the right model architecture is important for providing a good privacy/utility trade-off [PTS21]. iii) Training in the federated/distributed setting: in the above exposition, we assumed that the training data lies in a single centralized location. However, in settings like Federated Learning (FL) [MMRHA17], the data records can be highly distributed, e.g., across various mobile devices. Running DP-SGD in the FL setting, which is required for FL to provide privacy guarantees for the training data, raises a series of challenges [KMA19] which are often addressed by distributed private learning algorithms designed specifically for FL settings [BKMTT20, KMSTTZ21]. Some of the specific challenges in the context of FL include the limited and non-uniform availability of clients (holding individual data records) and the unknown (and variable) size of the training data [BKMTT20]. On the other hand, PATE-style algorithms lend themselves naturally to the distributed setting once combined with existing cryptographic primitives, as demonstrated by the CaPC protocol [CDD21]. Addressing these challenges is an active area of research.
## Acknowledgements
The authors would like to thank Thomas Steinke and Andreas Terzis for detailed feedback and edit suggestions. Parts of this blog post previously appeared on www.cleverhans.io.
## Citations
[ACG16] Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., & Zhang, L. (2016, October). Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (pp. 308-318). ACM.
[BBG18] Balle, B., Barthe, G., & Gaboardi, M. (2018). Privacy amplification by subsampling: Tight analyses via couplings and divergences. arXiv preprint arXiv:1807.01647.
[BKMTT20] Balle, B., Kairouz, P., McMahan, M., Thakkar, O., & Thakurta, A. (2020). Privacy amplification via random check-ins. In NeurIPS.
[MMRHA17] McMahan, B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2017, April). Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics (pp. 1273-1282). PMLR.
[KMSTTZ21] Kairouz, P., McMahan, M., Song, S., Thakkar, O., Thakurta, A., & Xu, Z. (2021). Practical and Private (Deep) Learning without Sampling or Shuffling. In ICML.
[BFTT19] Bassily, R., Feldman, V., Talwar, K., & Thakurta, A. Private Stochastic Convex Optimization with Optimal Rates. In NeurIPS 2019.
[BST14] Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In Proceedings of the 55th Annual IEEE Symposium on Foundations of Computer Science.
[BTT18] Bassily, R., Thakurta, A. G., & Thakkar, O. D. (2018). Model-agnostic private learning. Advances in Neural Information Processing Systems.
[CDD21] Choquette-Choo, C. A., Dullerud, N., Dziedzic, A., Zhang, Y., Jha, S., Papernot, N., & Wang, X. (2021). CaPC Learning: Confidential and Private Collaborative Learning. arXiv preprint arXiv:2102.05188.
[CMS11] Chaudhuri, K., Monteleoni, C., & Sarwate, A. D. (2011). Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(3).
[CTW20] Carlini, N., Tramer, F., Wallace, E., Jagielski, M., Herbert-Voss, A., Lee, K., … & Raffel, C. (2020). Extracting training data from large language models. arXiv preprint arXiv:2012.07805.
[DFH15] Dwork, C., Feldman, V., Hardt, M., Pitassi, T., Reingold, O., & Roth, A. (2015). Generalization in adaptive data analysis and holdout reuse. arXiv preprint arXiv:1506.02629.
[DMNS06] Dwork, C., McSherry, F., Nissim, K., & Smith, A. (2006, March). Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference (pp. 265-284). Springer, Berlin, Heidelberg.
[DNPR10] Dwork, C., Naor, M., Pitassi, T., & Rothblum, G. N. (2010, June). Differential privacy under continual observation. In Proceedings of the forty-second ACM symposium on Theory of computing (pp. 715-724).
[DTTZ14] Dwork, C., Talwar, K., Thakurta, A., & Zhang, L. (2014, May). Analyze gauss: optimal bounds for privacy-preserving principal component analysis. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing (pp. 11-20).
[GKS08] Ganta, S. R., Kasiviswanathan, S. P., & Smith, A. (2008, August). Composition attacks and auxiliary information in data privacy. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
[HSL20] Huang, Y., Song, Z., Li, K., & Arora, S. (2020, November). Instahide: Instance-hiding schemes for private distributed learning. In International Conference on Machine Learning (pp. 4507-4518). PMLR.
[HR12] Hardt, M., & Roth, A. (2012, May). Beating randomized response on incoherent matrices. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing (pp. 1255-1268).
[HR13] Hardt, M., & Roth, A. (2013, June). Beyond worst-case analysis in private singular vector computation. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing (pp. 331-340).
[JPS16] Johnson, A., Pollard, T., Shen, L. et al. MIMIC-III, a freely accessible critical care database. Sci Data 3, 160035 (2016). https://doi.org/10.1038/sdata.2016.35
[JTT18] Jain, P., Thakkar, O. D., & Thakurta, A. (2018, July). Differentially private matrix completion revisited. In International Conference on Machine Learning (pp. 2215-2224). PMLR.
[INS19] Iyengar, R., Near, J. P., Song, D., Thakkar, O., Thakurta, A., & Wang, L. (2019, May). Towards practical differentially private convex optimization. In 2019 IEEE Symposium on Security and Privacy (SP) (pp. 299-316). IEEE.
[KST12] Kifer, D., Smith, A., & Thakurta, A. (2012, June). Private convex empirical risk minimization and high-dimensional regression. In Conference on Learning Theory (pp. 25-1). JMLR Workshop and Conference Proceedings.
[KMA19] Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., … & Zhao, S. (2019). Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977.
[KV05] Kalai, Adam, and Santosh Vempala. “Efficient algorithms for online decision problems.” Journal of Computer and System Sciences 71.3 (2005): 291-307.
[KLNRS08] Raskhodnikova, S., Smith, A., Lee, H. K., Nissim, K., & Kasiviswanathan, S. P. (2008). What can we learn privately. In Proceedings of the 54th Annual Symposium on Foundations of Computer Science (pp. 531-540).
[LLV07] Li, N., Li, T., & Venkatasubramanian, S. (2007, April). t-closeness: Privacy beyond k-anonymity and l-diversity. In 2007 IEEE 23rd International Conference on Data Engineering (pp. 106-115). IEEE.
[LT19] Liu, J., & Talwar, K. (2019, June). Private selection from private candidates. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (pp. 298-309).
[M17] Mironov, I. (2017, August). Renyi differential privacy. In Computer Security Foundations Symposium (CSF), 2017 IEEE 30th (pp. 263-275). IEEE.
[MKG07] Machanavajjhala, Ashwin; Kifer, Daniel; Gehrke, Johannes; Venkitasubramaniam, Muthuramakrishnan (March 2007). “L-diversity: Privacy Beyond K-anonymity”. ACM Transactions on Knowledge Discovery from Data.
[NRS07] Nissim, K., Raskhodnikova, S., & Smith, A. (2007, June). Smooth sensitivity and sampling in private data analysis. In Proceedings of the thirty-ninth annual ACM symposium on Theory of computing (pp. 75-84).
[NS08] Narayanan, A., & Shmatikov, V. (2008, May). Robust de-anonymization of large sparse datasets. In Security and Privacy, 2008. SP 2008. IEEE Symposium on (pp. 111-125). IEEE.
[PAE16] Papernot, N., Abadi, M., Erlingsson, U., Goodfellow, I., & Talwar, K. (2016). Semi-supervised knowledge transfer for deep learning from private training data. ICLR 2017.
[PTS21] Papernot, N., Thakurta, A., Song, S., Chien, S., & Erlingsson, U. (2020). Tempered sigmoid activations for deep learning with differential privacy. AAAI 2021.
[RCR15] Rudi, A., Camoriano, R., & Rosasco, L. (2015, December). Less is More: Nyström Computational Regularization. In NIPS (pp. 1657-1665).
[SCS13] Shuang Song, Kamalika Chaudhuri, and Anand D Sarwate. Stochastic gradient descent with differentially private updates. In Proceedings of the 2013 IEEE Global Conference on Signal and Information Processing, GlobalSIP ’13, pages 245–248, Washington, DC, USA, 2013. IEEE Computer Society.
[SPG] Chasing Your Long Tails: Differentially Private Prediction in Health Care Settings. Vinith Suriyakumar, Nicolas Papernot, Anna Goldenberg, Marzyeh Ghassemi. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
[SS98] Samarati, Pierangela; Sweeney, Latanya (1998). “Protecting privacy when disclosing information: k-anonymity and its enforcement through generalization and suppression” (PDF). Harvard Data Privacy Lab. Retrieved April 12, 2017
[SSS17] Shokri, R., Stronati, M., Song, C., & Shmatikov, V. (2017, May). Membership inference attacks against machine learning models. In Security and Privacy (SP), 2017 IEEE Symposium on (pp. 3-18). IEEE.
[SSTT21] Song, S., Thakkar, O., & Thakurta, A. (2020). Evading the Curse of Dimensionality in Unconstrained Private GLMs. In AISTATS 2021.
[XT07] Xiao X, Tao Y (2007) M-invariance: towards privacy preserving re-publication of dynamic datasets. In: SIGMOD conference, Beijing, China, pp 689–700
[YEO21] Yala, A., Esfahanizadeh, H., Oliveira, R. G. D., Duffy, K. R., Ghobadi, M., Jaakkola, T. S., … & Medard, M. (2021). NeuraCrypt: Hiding Private Health Data via Random Neural Networks for Public Training. arXiv preprint arXiv:2106.02484.
[ZHS19] Jingzhao Zhang, Tianxing He, Suvrit Sra, and Ali Jadbabaie. Why gradient clipping accelerates training: A theoretical justification for adaptivity. In International Conference on Learning Representations, 2019.
Posted by Nicolas Papernot and Abhradeep Thakurta on October 25, 2021.
Categories: Surveys
[cite this]
|
# Bayes' rule and probability
I've got a probability question:
Given a 5-faced die (1,2,3,4,5),call it die $A$, each face has probability as follows:
$$\begin{array}{rrrrr} \text{Face} & 1 & 2 & 3 & 4 & 5 \\ \text{Prob} & 0.2 & 0.15 & 0.1 & 0.25 & 0.3 \end{array}$$
We roll this die three times and get $O = \{2,4,5\}$
Q1. What's the probability that we get this kind of outcome, assuming that we are using die A?
My solution is: $3!\cdot0.15\cdot0.25\cdot0.3$.
Q2. Given another 5-faced die $B$ and its probability distribution is as follows:
$$\begin{array}{rrrrr} \text{Face} & 1 & 2 & 3 & 4 & 5 \\ \text{Prob} & 0.1 & 0.2 & 0.3 & 0.25 & 0.15 \end{array}$$
Now, we have 2 dice. Given that we do not know which die we rolled, but the outcome is $O = \{2,4,5\}$, what's the probability this die is die A?
Note: not "Baye" ... named for Rev. Thomas Bayes. The Baye Rule is as nonexistent as the Stoke Theorem. Write "Bayes' Rule" or "Bayes's Rule" or something. – GEdgar Sep 14 '12 at 0:30
MAYBE HE ROLLS B AND GETS THE [2.4.5] OUTCOME WITH THREE ROLLS. ASSUMING YOU KNOW THE MULTINOMIAL DISTRIBUTION FOR A WHAT IS THE P VALUE OF THE TEST THAT THE DIE USED IS A. tHE NULL IS THAT IT IS A AND THE ALTERNATIVE IS NOT A. IF THIS FORMULATION IS CORRECT THE DISTRIBUTION FOR B IS IRRELEVANT. @LouisTan Do I have the correct interpretation? – Michael Chernick Sep 14 '12 at 2:55
@Michael, please stop SHOUTING. – Gerry Myerson Sep 17 '12 at 6:10
@GerryMyerson Occasional capitalization for emphasis is okay I think. I don't like to think about it as shouting. Please try to spell Louis' name correctly. – Michael Chernick Sep 17 '12 at 11:34
@Michael, please, there is a difference between OCCASIONAL capitalization and a whole paragraph of it. – Gerry Myerson Sep 17 '12 at 12:05
## 1 Answer
Probability of die A, given outcome 245, equals (probability of die A) times (probability of 245 given die A), divided by the sum of [(probability of die A) times (probability of 245 given die A)] and [(probability of die B) times (probability of 245 given die B)].
Now you have calculated probability of 245 given die A, and you can similarly calculate probability of 245 given die B, but what you need to know is the a priori probability of die A and probability of die B. Perhaps you are meant to assume that these are both 1/2.
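For instance, with equal priors of 1/2 (an assumption, since the problem does not say how the die was chosen), the numbers work out as
$$P(A\mid O)=\frac{\tfrac12\cdot 3!\cdot0.15\cdot0.25\cdot0.3}{\tfrac12\cdot 3!\cdot0.15\cdot0.25\cdot0.3+\tfrac12\cdot 3!\cdot0.2\cdot0.25\cdot0.15}=\frac{0.0675}{0.0675+0.045}=0.6.$$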
EDIT: The above concerns Q2. I believe the solution to Q1 in the original post is correct.
+1 This seems to be a reasonable interpretation if Bayes' rule is to be applied. It assumes the multinomial distributions are known and that it is not known if A or B was chosen, but that the decision was made by, say, flipping a fair coin with A chosen if it lands heads. – Michael Chernick Sep 14 '12 at 3:03
|
# The isomorphism problem for torsion free nilpotent groups of Hirsch length at most 5
Groups Complexity Cryptology, de Gruyter, Volume 9 (1): 21 – May 1, 2017
ISSN/eISSN: 1869-6104
DOI: 10.1515/gcc-2017-0004
### Abstract
We consider the isomorphism problem for the finitely generated torsion free nilpotent groups of Hirsch length at most five. We show how this problem translates to solving an explicitly given set of polynomial equations. Based on this, we introduce a canonical form for each isomorphism type of finitely generated torsion free nilpotent group of Hirsch length at most 5 and, using a variation of our methods, we give an explicit description of its automorphisms.
|
# [NTG-context] indexing suggestion
Hans van der Meer hansm at science.uva.nl
Tue Jan 17 14:09:22 CET 2006
When I started to do my syllabus again but in ConTeXt, I also began
to set up an index.
Readily it became a nuisance to type each time:
in this sentence \index{sentence} etc.
Making this into a macro I am now typing:
in this \indexed{sentence} etc.
It is a small difference, but not only is it less typing, it also keeps the index entry and the typeset text consistent.
I coded this macro:
\def\indexed#1{\index{#1}#1}
Note, \index{#1} precedes typesetting of the text #1, otherwise the
next space is eaten by \index.
Of course the full macro would be something like \indexed[whatever]
{sentence}, and all the other options in \index[]{}.
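As a rough, untested sketch (assuming ConTeXt's \dosingleempty argument helper; the macro names are my own), such a variant might look like:
\def\indexed{\dosingleempty\doindexed}
\def\doindexed[#1]#2{\iffirstargument\index[#1]{#2}\else\index{#2}\fi#2}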
However, I have not enough experience with the way such things are
cooked in ConTeXt, and do not want to pollute the code with half-baked
code of my own.
Maybe Hans and Taco feel this suggestion useful enough to pick it up.
yours sincerely,
dr. H. van der Meer
|
# footnote with a link split into next page makes all of the text clickable
I have this weird issue where my footnote containing a link is split onto the next page because the link URL is too long. This would be fine. What is weird is that the text on the next page, where the footnote continues, is all clickable with the URL that has been split.
Is this fixable and how?
Image for illustration:
MWE:
\documentclass[12pt,czech,]{article}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\usepackage{fontspec}
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
}{}
\usepackage[a4paper]{geometry}
\usepackage{bookmark}
\hypersetup{unicode=true,
pdftitle={Test},
pdfauthor={Václav Haisman},
pdfborder={0 0 0}}
\urlstyle{same} % don't use monospace font for urls
\usepackage{polyglossia}
\setmainlanguage[]{czech}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\setlength{\emergencystretch}{3em} % prevent overfull lines
% This makes sure that linked text's links shows as a footnote.
\makeatletter
\let\oldhref=\href
\renewcommand\href[2]{\oldhref{#1}{#2}\footnote{\url{#1}}}
\makeatother
\PassOptionsToPackage{backref}{hyperref}
% This allows line breaks in URL in more places.
\def\UrlBreaks{\do\/\do-\do.\do=\do_\do?\do\&\do\%\do\a\do\b\do\c\do\d\do\e\do\f\do\g\do\h\do\i\do\j\do\k\do\l\do\m\do\n\do\o\do\p\do\q\do\r\do\s\do\t\do\u\do\v\do\w\do\x\do\y\do\z\do\A\do\B\do\C\do\D\do\E\do\F\do\G\do\H\do\I\do\J\do\K\do\L\do\M\do\N\do\O\do\P\do\Q\do\R\do\S\do\T\do\U\do\V\do\W\do\X\do\Y\do\Z\do\0\do\1\do\2\do\3\do\4\do\5\do\6\do\7\do\8\do\9}
\usepackage{blindtext}
\title{Test}
\author{Václav Haisman}
\date{}
\begin{document}
\maketitle
{
\setcounter{tocdepth}{3}
\tableofcontents
}
\section{Test}\label{test-id}
\blindtext[2]
More text here more text here more text here more text here\footnote{\url{https://jjj.ilcyagb.pm/ernyvmbinar-cehmxhzl/prfv-n-ceiav-frk}}
až 10 \%\footnote{\url{http://www.example.com/prfv-znwv-ceiav-frk-cehzrear-i-17-yrgrpu-10-zynqvfgilpu-qbxbapr-b-qin-ebxl-qevir}}
more text here more text here more text here more text here more text here
more text here more text here more text here more text here more text here
more text here more text here more text here more text here more text here
more text here more text here more text here more text here more text here
more text here more text here more text here more text here more text here
more text here more text here more text here more text here more text here
more text here more text here more text here more text here more text here.
\end{document}
UPDATE 1
Here is the test on Overleaf: https://www.overleaf.com/read/cmgygpypqzcb
One thing, I am using Evince. Could it be that Acrobat Reader does not have the problem but Evince does? It shows the text clickable in all viewers I have tested: Evince, Acrobat Reader and Firefox.
UPDATE 2
I have found out that if I change \setmainlanguage[]{czech} to \setmainlanguage[]{english} then the wrapping result changes and the last foot note is not split into the next page.
• The example does not show the problem for me. The workaround would be to avoid link breaks across pages. Also the link could be manually split in two at the break point: \href{<url>}{\nolinkurl{<url part 1>}}\linebreak\href{<url>}{\nolinkurl{<url part 2>}}. – Heiko Oberdiek Feb 28 '16 at 10:39
• BTW, \PassOptionsToPackage should be used before the package is loaded. Afterwards it is too late to have an effect on the package loading in the past. – Heiko Oberdiek Feb 28 '16 at 10:40
• @HeikoOberdiek: I use XeLaTeX and LuaLaTeX and it shows in both. – wilx Feb 28 '16 at 10:40
• The footnotes start at the second page in both cases. The example loads package fixltx2e, which causes a warning, because the fixes are now in the recent LaTeX kernels already. Maybe you are using older software than me resulting in different page breaks. – Heiko Oberdiek Feb 28 '16 at 10:44
• @HeikoOberdiek: Parts of this are generated by Pandoc's template. – wilx Feb 28 '16 at 10:45
|
## Why someone thought that sudoku might not be boring, while actually you should learn how to properly implement backtracking
It causes me unspeakable agony to see that my post about why sudoku is boring is one of the most frequented posts in this blog, mostly because most of my readers clearly disagree with the title. I recently received an email titled "why sudoku is not all that boring" from an old friend, and he taunted me that the sudoku
S = [
0, 0, 0, 0, 6, 0, 0, 8, 0,
0, 2, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 0,
0, 7, 0, 0, 0, 0, 1, 0, 2,
5, 0, 0, 0, 3, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 4, 0, 0,
0, 0, 4, 2, 0, 1, 0, 0, 0,
3, 0, 0, 7, 0, 0, 6, 0, 0,
0, 0, 0, 0, 0, 0, 0, 5, 0 ]
would take my 5-minute hack of a backtracking algorithm
real 51m3.656s
user 50m32.260s
sys 0m2.084s
to solve. So, it seems like some sudokus are really hard, even for a computer, right? Wrong wrong wrong wrong wrong. Read how to implement backtracking properly.
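For reference, the naive approach looks roughly like this (a minimal Python sketch of backtracking under my own assumptions, not the original 5-minute hack; s is the flat 81-cell list above, with 0 marking blanks):

def solve(s):
    if 0 not in s:
        return True                      # no blanks left: solved
    i = s.index(0)
    row, col = divmod(i, 9)
    for v in range(1, 10):
        ok_row = all(s[row*9 + c] != v for c in range(9))
        ok_col = all(s[r*9 + col] != v for r in range(9))
        ok_box = all(s[(row//3*3 + r)*9 + (col//3*3 + c)] != v
                     for r in range(3) for c in range(3))
        if ok_row and ok_col and ok_box:
            s[i] = v                     # tentatively place v ...
            if solve(s):
                return True
            s[i] = 0                     # ... and undo on failure
    return False

Called as solve(S), it fills S in place; the pathological runtime above typically comes from always scanning cells in a fixed order rather than choosing a more constrained cell first.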
## Why Sudoku is boring
I was riding the backseat of a car, a pal of mine with a large Sudoku book on the seat beside me. I glared over at him and remarked that I find Sudokus utterly boring and would feel that my time is wasted on any of them. He looked up at me, clearly demanding an explanation for that statement. I continued to explain that a computer program could solve a Sudoku with such ease that there is no need for humans to do it. He replied that something similar could be said about chess, but still it's an interesting game. And it was then, that I realized why Sudoku is so horribly boring, and chess is not. It was the fact that I could code a Sudoku solver and solve the Sudoku he was puzzling about, and I would be able to do it faster than it would take him to solve the entire thing by hand. This does not apply to chess, evidently. Of course, I confidently explained this to him. »Prove it.«, he said. So I did. Do you want to know more?
|
# So frustrated - damn those interviews!
## Recommended Posts
Need to blow off some steam... Here we go:
I'm the goto guy, been the goto guy my entire professional career. I'm the one they come to when need to solve something - design, bugs, algorithms, whatever you need - I always have the answer. And I'm getting paid a lot to do that, which probably means something.
Then there's interviews. I've done a number in the past year, just because, to keep in shape. And they all sucked. No real reason for that - they ask me questions I should have answered without hesitation, but nothing - blank mind. And whenever the interview ends - I go out and the answer immediately pops in my head.
Last one was 2 days ago. Sucked big time. And now I'm frustrated. The problem is I'm over-confident. I'm used to being the smartest guy in the room, I forget that I actually have to make people who don't know me see that. I'm too smug at interviews, like I'm doing them a favor I'm coming. Not really know why I'm frustrated, given that I'm an idiot.
I have interview for a job I actually want in 2 days, so frustration is bad. And here's why I'm going to own it:
- I never really cared about the other jobs, so didn't really bother with interviews.
- I will not be smug for an interview I actually care about.
- I will not be a smart-ass for an interview I actually care about.
- I'm the goto guy.
Feel much better now...
And BTW, I like the forums here because they make me feel humble. A lot of smart people hanging here, a lot of things to learn. I don't get it in my day job anymore.
[EDIT] Just re-read it. Hope I'm not coming off as an ass. I'm really not.
[EDIT2] re-re-reading it, that's the problem. I'm not going to own the interview just because. I need to prepare, do some exercise, sleep-well the night before, come focused. I knew you guys would help
Edited by N.I.B.
##### Share on other sites
Can you give an example of a question you were asked?
##### Share on other sites
Here's a bunch:
- Define an architecture for immediate messaging system that minimizes server load (I actually think my solution was better than the answer suggested by the interviewer).
- Reverse the order of a singly-linked-list. Easy, except that when they asked me to write the code my brain decided to shut down. Completely shut down.
- Efficient algorithm for updating file-indexing service when files change. I actually got that right pretty fast.
##### Share on other sites
goto is evil
goto is nostalgic. I started learning BASIC on Amstrad CPC64 when I was 8 - the most beautiful spaghetti code I ever saw.
##### Share on other sites
- Reverse the order of a singly-linked-list. Easy, except that when they asked me to write the code my brain decided to shut down. Completely shut down.
The only one I could answer. My solution requires a printer and a mirror.
##### Share on other sites
- Reverse the order of a singly-linked-list. Easy, except that when they asked me to write the code my brain decided to shut down. Completely shut down.
The only one I could answer. My solution requires a printer and a mirror.
SNode* pPrev = NULL;
SNode* pCurr = pRoot;
while(pCurr)
{
    SNode* pNext = pCurr->pNext; // remember the rest of the list
    pCurr->pNext = pPrev;        // reverse this node's link
    pPrev = pCurr;
    pCurr = pNext;
}
pRoot = pPrev;                   // pPrev is the new head
10 lines of code. I knew the algorithm, but had writer's block when I needed to write it down.
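For contrast, a recursive version of the same idea (my own sketch, same C-style as above):

SNode* Reverse(SNode* pHead)
{
    if(!pHead || !pHead->pNext)
        return pHead;                // empty or single node: already reversed
    SNode* pNewHead = Reverse(pHead->pNext);
    pHead->pNext->pNext = pHead;     // hook the current node onto the reversed tail
    pHead->pNext = NULL;
    return pNewHead;
}

Same O(n) time, but O(n) stack depth instead of the loop's O(1) space.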
##### Share on other sites
I'm guessing from context that these interviews are job interviews (very different from what I initially assumed).
Since this is about job interviews, I think it would be a useful addition to the Job Advice forum (so I'm moving this there).
##### Share on other sites
You seem to be finally realizing that job interviewing in the IT industry is a completely separate skill. Totally unrelated to the technical prowess you possess.
Since it's a skill, this means you can get better at it. You just need to work on it (as on any other skill).
As you already figured out, sitting in your chair at work, writing the code in your IDE, under the daily routine (coffee + music) is anything but even remotely comparable to the stress of having to "code" while standing in front of a person you never saw before, on a whiteboard using a marker (or pen & paper).
It's great you realized something is wrong and it's even better that you figured it out while still having the old job. Trust me, you don't want to find that out when you're out of the job and doing the interviews (and waiting days/weeks till they schedule another round).
Now, I recommend doing a search and spending an evening or two with researching for what kind of questions are usually given at interviews (or just grab a book - plenty of those out there).
Then, grab a piece of paper and a pencil, and start coding on a paper. No IDE. No google. No music and no coffee.
This may take anywhere between 10-40 hrs of raw hand-writing, but you will get better at it.
When was the last time you spent a full hour writing by hand? I found out my hand hurts like hell. Better to be prepared for things like that than to figure them out while stressed out at the interview.
Now, coding on paper is still way, way easier than on a big board with the highlighter. On a paper, you see easily 20-50 lines of a code. Not so much on a big board, where you gotta use bigger letters so that the interviewer can see it from distance.
There's no delete / enter on paper or with a marker. It truly sucks making a typo during an interview, but you just gotta embrace the feeling and go with it, completely ignoring it.
To improve the skill further, there must be someone in your room, could be from your family or a friend. It's ok they don't know sh*t about IT. During an interview, you will be given lots of "why" questions anyway, so it's best to be prepared to be constantly interrupted, while you're trying to figure out how to insert a damn line between two lines already written on the big board.
10 months ago, when I was changing jobs, I did insane amount of interviewing (it's practically like two full-time jobs) and had a chance to really up the skills of interviewing, since some companies had 5-8 rounds of technical phone/online/on-site interviews.
In this world, no one really cares if you have the skills. But can you sell the skills you don't even have ? That's what really matters, regardless how crazy it sounds at first...
##### Share on other sites
Whenever asked to talk through your thought process... DON'T. Solve the problem quickly and then go over and explain it. Much quicker and much more flawless.
|
# Möbius–Kantor configuration
The Möbius–Kantor configuration
In geometry, the Möbius–Kantor configuration is a configuration consisting of eight points and eight lines, with three points on each line and three lines through each point. It is not possible to draw points and lines having this pattern of incidences in the Euclidean plane, but it is possible in the complex projective plane.
## Coordinates
August Ferdinand Möbius (1828) asked whether there exists a pair of polygons with p sides each, having the property that the vertices of one polygon lie on the lines through the edges of the other polygon, and vice versa. If so, the vertices and edges of these polygons would form a projective configuration. For p = 4 there is no solution in the Euclidean plane, but Seligmann Kantor (1882) found pairs of polygons of this type, for a generalization of the problem in which the points and edges belong to the complex projective plane. That is, in Kantor's solution, the coordinates of the polygon vertices are complex numbers. Kantor's solution for p = 4, a pair of mutually-inscribed quadrilaterals in the complex projective plane, is called the Möbius–Kantor configuration.
Seven of the lines of the configuration can be made straight, but not all eight
Harold Scott MacDonald Coxeter (1950) supplies the following simple complex projective coordinates for the eight points of the Möbius–Kantor configuration:
(1, 0, 0), (0, 0, 1), (ω, −1, 1), (−1, 0, 1),
(−1, ω², 1), (1, ω, 0), (0, 1, 0), (0, −1, 1),
where ω denotes a primitive complex cube root of 1.
These are the vertices of the complex polygon 3{3}3, which has 8 vertices and 8 3-edges.[1] Coxeter named it a Möbius–Kantor polygon.
## Abstract incidence pattern
The Möbius–Kantor graph, the Levi graph of the Möbius–Kantor configuration. Vertices of one color represent the points of the configuration, and vertices of the other color represent the lines.
More abstractly, the Möbius–Kantor configuration can be described as a system of eight points and eight triples of points such that each point belongs to exactly three of the triples. With the additional conditions (natural to points and lines) that no pair of points belong to more than one triple and that no two triples have more than one point in their intersection, any two systems of this type are equivalent under some permutation of the points. That is, the Möbius–Kantor configuration is the unique projective configuration of type (8₃8₃).
The Möbius–Kantor graph derives its name from being the Levi graph of the Möbius–Kantor configuration. It has one vertex per point and one vertex per triple, with an edge connecting two vertices if they correspond to a point and to a triple that contains that point.
The points and lines of the Möbius–Kantor configuration can be described as a matroid, whose elements are the points of the configuration and whose nontrivial flats are the lines of the configuration. In this matroid, a set S of points is independent if and only if either |S| ≤ 2 or S consists of three non-collinear points. As a matroid, it has been called the MacLane matroid, after the work of Saunders MacLane (1936) proving that it cannot be oriented; it is one of several known minor-minimal non-orientable matroids.[2]
## Related configurations
The solution to Möbius' problem of mutually inscribed polygons for values of p greater than four is also of interest. In particular, one possible solution for p = 5 is the Desargues configuration, a set of ten points and ten lines, three points per line and three lines per point, that does admit a Euclidean realization. The Möbius configuration is a three-dimensional analogue of the Möbius–Kantor configuration consisting of two mutually inscribed tetrahedra.
The Möbius–Kantor configuration can be augmented by adding four lines through the four pairs of points not already connected by lines, and by adding a ninth point on the four new lines. The resulting configuration, the Hesse configuration, shares with the Möbius–Kantor configuration the property of being realizable with complex coordinates but not with real coordinates.[3] Deleting any one point from the Hesse configuration produces a copy of the Möbius–Kantor configuration. Both configurations may also be described algebraically in terms of the abelian group $\mathbb{Z}_3 \times \mathbb{Z}_3$ with nine elements. This group has four subgroups of order three (the subsets of elements of the form $(i,0)$, $(i,i)$, $(i,2i)$, and $(0,i)$ respectively), each of which can be used to partition the nine group elements into three cosets of three elements per coset. These nine elements and twelve cosets form the Hesse configuration. Removing the zero element and the four cosets containing zero gives rise to the Möbius–Kantor configuration.
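The construction can be checked mechanically; the following short Python sketch (illustrative only, not from the cited sources) builds the twelve cosets and verifies the counts:

```python
from itertools import product

points = list(product(range(3), repeat=2))        # the nine elements of Z_3 x Z_3
# the four order-3 subgroups: {(i,0)}, {(i,i)}, {(i,2i)}, {(0,i)}
subgroups = [[(i, (k * i) % 3) for i in range(3)] for k in (0, 1, 2)]
subgroups.append([(0, i) for i in range(3)])
# the cosets of each subgroup partition the nine elements into three "lines"
lines = {frozenset(((a + x) % 3, (b + y) % 3) for (x, y) in H)
         for H in subgroups for (a, b) in points}
assert len(points) == 9 and len(lines) == 12      # the Hesse configuration
mk_points = [p for p in points if p != (0, 0)]
mk_lines = [L for L in lines if (0, 0) not in L]
assert len(mk_points) == 8 and len(mk_lines) == 8 # the Möbius–Kantor configuration
```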
## Notes
1. ^ H. S. M. Coxeter and G. C. Shephard, Portraits of a Family of Complex Polytopes, Leonardo, Vol. 25, No. 3/4, Visual Mathematics: Special Double Issue (1992), pp. 239-244 [1]
2. ^ Ziegler (1991).
3. ^ Dolgachev (2004).
## References
• Coxeter, H. S. M. (1950), "Self-dual configurations and regular graphs", Bulletin of the American Mathematical Society, 56 (5): 413–455, doi:10.1090/S0002-9904-1950-09407-5, MR 0038078.
• Dolgachev, Igor V. (2004), "Abstract configurations in algebraic geometry", The Fano Conference, Univ. Torino, Turin, pp. 423–462, arXiv:math.AG/0304258, MR 2112585.
• Kantor, Seligmann (1882), "Über die Configurationen (3, 3) mit den Indices 8, 9 und ihren Zusammenhang mit den Curven dritter Ordnung", Sitzungsberichte der Mathematisch-Naturwissenschaftlichen Classe der Kaiserlichen Akademie der Wissenschaften, Wien, 84 (1): 915–932.
• MacLane, Saunders (1936), "Some Interpretations of Abstract Linear Dependence in Terms of Projective Geometry", American Journal of Mathematics, 58 (1): 236–240, doi:10.2307/2371070, MR 1507146.
• Möbius, August Ferdinand (1828), "Kann von zwei dreiseitigen Pyramiden eine jede in Bezug auf die andere um- und eingeschrieben zugleich heissen?", Journal für die reine und angewandte Mathematik, 3: 273–278. In Gesammelte Werke (1886), vol. 1, pp. 439–446.
• Ziegler, Günter M. (1991), "Some minimal non-orientable matroids of rank three", Geometriae Dedicata, 38 (3): 365–371, doi:10.1007/BF00181199, MR 1112674.
|
# Linear Inequalities
Hybrid Cars
Credit: Parker Knight
Source: http://www.flickr.com/photos/rocketboom/6464284933/
Did you know that most hybrid cars are fuel-efficient only when you're making frequent stops? If you're driving on a highway at a more or less constant speed, the fuel efficiency of hybrids is usually not higher than the efficiency of standard, gasoline-driven cars. In addition, hybrid cars cost more than non-hybrid cars to purchase. At what point does it become cost-efficient to buy a hybrid car?
#### Fuel Efficiency
Any vehicle that combines two or more sources of power to provide propulsion is considered a hybrid. Most hybrid cars on the road right now are gasoline-electric hybrids; they can operate using gas in their tanks or electricity from their batteries, or at times both (when you need a little extra boost driving uphill, for instance). When you're driving in the city, you have to make frequent stops at traffic lights. Every time you hit the brakes, you lose the momentum you had and that energy is completely wasted. On top of that, your engine keeps running while you're waiting at a red light and that burns gas! What a hybrid car does is transform these wasted energies into electric energy that is stored in its battery. While driving, a hybrid car can also stop its use of the gasoline engine and operate only using the battery during coasting and slow-down phases.
Credit: Laura Guerin
Source: CK-12 Foundation
If a gasoline car gets $x_{gas}$ miles per gallon and a hybrid gets $x_{hybrid}$ miles per gallon, then the relation $x_{hybrid} \ge x_{gas}$ is always true! There's no way you'll be using more gas with hybrid cars; sometimes it will be substantially less than the amount used by standard cars, and at other times the two quantities can be equal.
#### Explore More
Hybrid cars definitely contribute less pollution to the environment, and they help reduce our dependency on fossil fuels. However, for most people, the choice between a hybrid and a standard car all comes down to the overall cost-efficiency.
Let's say you drive 15,000 miles a year and the cost of gas is $3.50 a gallon. You're trying to decide whether to buy a standard or a hybrid car. The hybrid gives you 30 miles per gallon (mpg) and the standard gives you 20 mpg. The hybrid car costs you $5,000 more than the standard. How long would it take for your savings on fuel to surpass the additional cost of the hybrid vehicle?
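One way to set up the inequality (a worked sketch of the solution, my own addition): the standard car burns 15,000/20 = 750 gallons a year, or 750 × 3.50 = 2,625 dollars; the hybrid burns 15,000/30 = 500 gallons, or 1,750 dollars. That saves 875 dollars per year, so the fuel savings exceed the extra cost when 875t ≥ 5000, i.e. after about 5.7 years.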
1. [1]^ Credit: Parker Knight; Source: http://www.flickr.com/photos/rocketboom/6464284933/; License: CC BY-NC 3.0
2. [2]^ Credit: Laura Guerin; Source: CK-12 Foundation; License: CC BY-NC 3.0
|
# BottomUpParceLiNGAM¶
## Model¶
This method assumes an extension of the basic LiNGAM model [1] to hidden common cause cases. Specifically, it implements Algorithm 1 of [3], except for Step 2. Similarly to the basic LiNGAM model [1], this method makes the following assumptions:
1. Linearity
2. Non-Gaussian continuous error variables (except at most one)
3. Acyclicity
However, it allows the following hidden common causes:
1. Only exogenous observed variables may share hidden common causes.
This is a simpler version of the latent variable LiNGAM [2], which extends the basic LiNGAM model to hidden common causes. Note that the latent variable LiNGAM [2] allows the existence of hidden common causes between any observed variables. However, causal graph structures of the kind assumed here are often used in classic structural equation modelling [4].
References
[1] (1, 2) S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. J. Kerminen. A linear non-gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7:2003-2030, 2006.
[2] (1, 2) P. O. Hoyer, S. Shimizu, A. Kerminen, and M. Palviainen. Estimation of causal effects using linear non-gaussian causal models with hidden variables. International Journal of Approximate Reasoning, 49(2): 362-378, 2008.
[3] T. Tashiro, S. Shimizu, A. Hyvärinen, T. Washio. ParceLiNGAM: a causal ordering method robust against latent confounders. Neural computation, 26(1): 57-83, 2014.
[4] K. A. Bollen. Structural Equations with Latent Variables. Wiley, 1989.
## Import and settings¶
In this example, we need to import numpy, pandas, and graphviz in addition to lingam.
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import print_causal_directions, print_dagc, make_dot
import warnings
warnings.filterwarnings('ignore')
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
['1.16.2', '0.24.2', '0.11.1', '1.5.4']
## Test data¶
First, we generate a causal structure with 7 variables. Then we create a dataset with 6 variables from x0 to x5, with x6 being the latent variable for x2 and x3.
np.random.seed(1000)
x6 = np.random.uniform(size=1000)
x3 = 2.0*x6 + np.random.uniform(size=1000)
x0 = 0.5*x3 + np.random.uniform(size=1000)
x2 = 2.0*x6 + np.random.uniform(size=1000)
x1 = 0.5*x0 + 0.5*x2 + np.random.uniform(size=1000)
x5 = 0.5*x0 + np.random.uniform(size=1000)
x4 = 0.5*x0 - 0.5*x2 + np.random.uniform(size=1000)
# The latent variable x6 is not included.
X = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T, columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5'])
x0 x1 x2 x3 x4 x5
0 1.505949 2.667827 2.029420 1.463708 0.615387 1.157907
1 1.379130 1.721744 0.965613 0.801952 0.919654 0.957148
2 1.436825 2.845166 2.773506 2.533417 -0.616746 0.903326
3 1.562885 2.205270 1.080121 1.192257 1.240595 1.411295
4 1.940721 2.974182 2.140298 1.886342 0.451992 1.770786
m = np.array([[0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0],
[0.5, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0],
[0.5, 0.0,-0.5, 0.0, 0.0, 0.0, 0.0],
[0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
dot = make_dot(m)
# Save pdf
dot.render('dag')
# Save png
dot.format = 'png'
dot.render('dag')
dot
## Causal Discovery¶
To run causal discovery, we create a BottomUpParceLiNGAM object and call the fit method.
model = lingam.BottomUpParceLiNGAM()
model.fit(X)
<lingam.bottom_up_parce_lingam.BottomUpParceLiNGAM at 0x2098ee24860>
Using the causal_order_ property, we can see the causal ordering resulting from the causal discovery. x2 and x3, which have latent confounders as parents, are stored in a list without causal ordering.
model.causal_order_
[[2, 3], 0, 5, 1, 4]
Also, using the adjacency_matrix_ property, we can see the adjacency matrix resulting from the causal discovery. The coefficients between variables with latent confounders are np.nan.
model.adjacency_matrix_
array([[ 0. , 0. , 0. , 0.506, 0. , 0. ],
[ 0.499, 0. , 0.495, 0.007, 0. , 0. ],
[ 0. , 0. , 0. , nan, 0. , 0. ],
[ 0. , 0. , nan, 0. , 0. , 0. ],
[ 0.448, 0. , -0.451, 0. , 0. , 0. ],
[ 0.48 , 0. , 0. , 0. , 0. , 0. ]])
We can draw a causal graph using the make_dot utility function.
make_dot(model.adjacency_matrix_)
## Independence between error variables¶
To check whether the LiNGAM assumptions are violated, we can get p-values of independence between error variables. The value in the i-th row and j-th column of the obtained matrix shows the p-value of independence of the error variables $e_i$ and $e_j$.
p_values = model.get_error_independence_p_values(X)
print(p_values)
[[0. 0.491 nan nan 0.763 0.2 ]
[0.491 0. nan nan 0.473 0.684]
[ nan nan 0. nan nan nan]
[ nan nan nan 0. nan nan]
[0.763 0.473 nan nan 0. 0.427]
[0.2 0.684 nan nan 0.427 0. ]]
## Bootstrapping¶
We call the bootstrap() method instead of fit(). Here, the second argument specifies the number of bootstrap samples.
import warnings
warnings.filterwarnings('ignore', category=UserWarning)
model = lingam.BottomUpParceLiNGAM()
result = model.bootstrap(X, n_sampling=100)
## Causal Directions¶
Since a BootstrapResult object is returned, we can get the ranking of the causal directions extracted with the get_causal_direction_counts() method. In the following sample code, the n_directions option limits the output to the top 8 causal directions, and the min_causal_effect option restricts it to causal directions with a coefficient of 0.01 or more.
cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.01, split_by_causal_effect_sign=True)
We can check the result with the print_causal_directions utility function.
print_causal_directions(cdc, 100)
x4 <--- x0 (b>0) (45.0%)
x4 <--- x2 (b<0) (45.0%)
x1 <--- x0 (b>0) (41.0%)
x1 <--- x2 (b>0) (41.0%)
x5 <--- x0 (b>0) (26.0%)
x1 <--- x3 (b>0) (21.0%)
x0 <--- x3 (b>0) (12.0%)
x5 <--- x2 (b>0) (7.0%)
## Directed Acyclic Graphs¶
Also, using the get_directed_acyclic_graph_counts() method, we can get the ranking of the extracted DAGs. In the following sample code, the n_dags option limits the output to the top 3 DAGs, and the min_causal_effect option restricts it to causal directions with a coefficient of 0.01 or more.
dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01, split_by_causal_effect_sign=True)
We can check the result with the print_dagc utility function.
print_dagc(dagc, 100)
DAG[0]: 33.0%
DAG[1]: 13.0%
x4 <--- x0 (b>0)
x4 <--- x2 (b<0)
DAG[2]: 7.0%
x1 <--- x0 (b>0)
x1 <--- x2 (b>0)
## Probability¶
Using the get_probabilities() method, we can get the bootstrap probability of each causal direction.
prob = result.get_probabilities(min_causal_effect=0.01)
print(prob)
[[0. 0.01 0. 0.12 0.01 0. ]
[0.41 0. 0.41 0.21 0. 0. ]
[0. 0. 0. 0.02 0. 0. ]
[0. 0. 0. 0. 0. 0. ]
[0.45 0.03 0.45 0.02 0. 0.07]
[0.26 0.01 0.07 0.02 0. 0. ]]
## Total Causal Effects¶
Using the get_total_causal_effects() method, we can get the list of total causal effects. The total causal effects are returned as a dictionary, which we can display nicely by assigning it to a pandas.DataFrame. Also, we have replaced the variable indices with labels below.
causal_effects = result.get_total_causal_effects(min_causal_effect=0.01)
# Assign to pandas.DataFrame for pretty display
df = pd.DataFrame(causal_effects)
labels = [f'x{i}' for i in range(X.shape[1])]
df['from'] = df['from'].apply(lambda x : labels[x])
df['to'] = df['to'].apply(lambda x : labels[x])
df
from to effect probability
0 x0 x5 0.515510 0.12
1 x0 x1 0.477885 0.11
2 x0 x4 0.494946 0.11
3 x2 x1 0.482657 0.02
4 x2 x4 -0.490889 0.02
5 x3 x0 0.511008 0.01
6 x3 x1 0.653876 0.01
7 x3 x2 0.790837 0.01
8 x3 x4 -0.126227 0.01
9 x3 x5 0.265528 0.01
We can easily perform sorting operations with pandas.DataFrame.
df.sort_values('effect', ascending=False).head()
from to effect probability
7 x3 x2 0.790837 0.01
6 x3 x1 0.653876 0.01
0 x0 x5 0.515510 0.12
5 x3 x0 0.511008 0.01
2 x0 x4 0.494946 0.11
df.sort_values('probability', ascending=True).head()
from to effect probability
5 x3 x0 0.511008 0.01
6 x3 x1 0.653876 0.01
7 x3 x2 0.790837 0.01
8 x3 x4 -0.126227 0.01
9 x3 x5 0.265528 0.01
And with pandas.DataFrame, we can easily filter by keywords. The following code extracts the causal direction towards x1.
df[df['to']=='x1'].head()
from to effect probability
1 x0 x1 0.477885 0.11
3 x2 x1 0.482657 0.02
6 x3 x1 0.653876 0.01
Because the result holds the raw data of the total causal effects (the original data from which the medians are calculated), it is possible to draw a histogram of the causal effect values, as shown below.
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
from_index = 0 # index of x0
to_index = 5 # index of x5
plt.hist(result.total_effects_[:, to_index, from_index])
## Bootstrap Probability of Path¶
Using the get_paths() method, we can explore all paths from any variable to any variable and calculate the bootstrap probability for each path. The path will be output as an array of variable indices. For example, the array [3, 0, 1] shows the path from variable X3 through variable X0 to variable X1.
from_index = 3 # index of x3
to_index = 1 # index of x1
pd.DataFrame(result.get_paths(from_index, to_index))
path effect probability
0 [3, 1] 0.028621 0.23
1 [3, 0, 1] 0.255185 0.11
2 [3, 2, 1] 0.372204 0.02
|
Power Supply Filter Capacitors
Alexsgarage
New Member
I am building a power supply to give +12V, 0V, and -12V. I have a regular 24V transformer (no centre tap). I was using some large 470µF filter capacitors and I still get distortion on the output side. Also, when I test the voltage on the output of the diodes the voltage is 24V DC, but when I attach the filter capacitors the voltage changes to 38-40V DC. Can someone explain why this happens?
Last edited:
Banned
If you want a good solid -12, 0 and +12 you really need a center-tapped transformer. Your meter may not be reading the ripple voltage straight off the diodes correctly; with the capacitors it will buffer the maximum voltage the rectifier is putting out, and as soon as you apply a load to the output that 38-40V peak voltage will drop as the current draw increases. This is perfectly normal (and should be expected) from bridge-rectified AC.
Last edited:
Alexsgarage
New Member
This is perfectly normal (and should be expected) from bridge rectified AC.
I am not using a bridge rectifier, I am just using two 1N4004 rectifier diodes to get the positive and negative voltages. The diodes are connected to one wire of the transformer and the 0V connection is the other wire. Also, how should I eliminate this distortion on the output side?
Last edited:
Banned
Let me rephrase that then: it's typical of any AC rectifier. You're doing two half-wave rectifiers, so it's actually worse; you're always going to have a lot of ripple. The only way to filter it is to use a good voltage regulator and really big filter caps.
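To put rough numbers on this (an illustrative estimate, assuming 60 Hz mains and a 100 mA load, neither of which is stated in the thread): for a half-wave rectifier the capacitor must hold the output up for a full mains cycle, so the peak-to-peak ripple is roughly ΔV ≈ I / (f·C) = 0.1 A / (60 Hz × 470 µF) ≈ 3.5 V. That is why a single 470 µF capacitor still shows visible ripple and why a regulator is needed.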
Alexsgarage
New Member
Ok, I have a surplus of filter capacitors but no voltage regulator for this project; can I use a voltage divider?
kchriste
New Member
Forum Supporter
I am building a power supply to give +12V, 0V, and -12V. I have a regular 24V transformer (no centre tap). I was using some large 470µF filter capacitors and I still get distortion on the output side.
What do you mean by distortion?
Also when I test the voltage on the output of the diodes the voltage is 24V DC but when I attach the filter capacitors the voltage changes to 38-40V DC can someone explain why this happens?
The capacitors charge up to the Peak AC voltage of the 24Vrms.
Vpk = Vrms * 1.414
33.936 = 24 * 1.414
The above calc ignores the typical diode voltage drop of 0.7V. You are getting more than 33.9V because the transformer is putting out more than 24Vac when lightly loaded.
Alexsgarage
New Member
What do you mean by distortion?
On the oscilloscope there are the peaks of the sine-wave input, somewhat smoothed by the filter capacitors.
Last edited:
Banned
No, you need a voltage regulator to properly regulate the voltage, or at the very least a ripple regulator. It basically has to be an active device, or you're going to get variation and ripple on the output under load.
kchriste
New Member
Forum Supporter
On the oscilloscope there are the peaks of the sine-wave input, somewhat smoothed by the filter capacitors.
You can add more capacitors in parallel with the 470uF caps to reduce the ripple. Since you are making a ±12V supply, you will need regulator chips to bring the voltage down anyway. As Sceadwian has already pointed out, the regulators will eliminate most of the ripple. A 24Vac transformer is a bad choice for a ±12V supply because a lot of power will be wasted in heat. A 24Vac center tapped transformer would be perfect.
Last edited:
Alexsgarage
Does anyone know where I can get a 7912 and a 7812 regulator? I checked eBay but it will cost me $10 for 2 regulators!
superfrog
New Member
RS seems to have them for around 10p for a TO92 version and 40p for a TO220 version, that is in the UK. I am pretty sure you should be able to get these kinds of prices from Digikey or any online component reseller. If you just need those, it might make sense to find a local shop selling this kind of product, even though, in my experience, they rarely stock fixed voltage regulators and only have adjustable ones. Hope this helps
kchriste
New Member
Forum Supporter
Does anyone know where I can get a 7912 and a 7812 regulator? I checked eBay but it will cost me $10 for 2 regulators!
Some other choices:
Voltage Regulator L7812CV - dipmicro electronics, $0.77 each (7812 only)
Digi-Key - LM7812CTFS-ND (Manufacturer - LM7812CT), $0.58 each
https://search.digikey.com/scripts/DkSearch/dksus.dll?Detail&name=LM7912CTFS-ND $0.58 each
You might also want to look at the LM317/LM337 combo if you want an adjustable supply. Also, the 7812/7912 will only handle approx. 35V input and will not let you supply much current at that voltage due to excessive power dissipation. You need a center tap on that 24V transformer!
Last edited:
bountyhunter
Well-Known Member
What do you mean by distortion?
On the oscilloscope there is the peaks of the sinewave input, somewhat smoothed with the filter capacitors.
The peaks of the sine wave will always look somewhat flattened when using a diode rectifier bridge and filter cap.
|
# Gradient Field with Trigonometric Function
Difficulty: Moderate (MVCALC-XCXXEE)
Find the gradient field $\vec{F}=\nabla \phi$ when the potential function $\phi$ is given by:
$$\phi=\arctan \frac{y}{2x}+\sin(x+y^2)$$
A
$\vec{F}=\langle \cos(x+y^2)-\frac{2y}{4x^2+y^2}, 2y\cos(x+y^2)+\frac{2x}{4x^2+y^2} \rangle$
B
$\vec{F}=\langle \cos(x+y^2)-\frac{2x}{4x^2+y^2}, 2y\cos(x+y^2)+\frac{2y}{4x^2+y^2} \rangle$
C
$\vec{F}=\langle \cos(x+y^2)+\frac{2y}{4x^2+y^2}, 2y\cos(x+y^2)-\frac{2x}{4x^2+y^2} \rangle$
D
$\vec{F}=\langle 2y\cos(x+y^2)+\frac{2x}{4x^2+y^2}, \cos(x+y^2)-\frac{2y}{4x^2+y^2} \rangle$
E
$\vec{F}=\langle 2y\cos(x+y^2)-\frac{2y}{4x^2+y^2}, \cos(x+y^2)+\frac{2x}{4x^2+y^2} \rangle$
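For reference, a worked check (my own addition, not part of the original item): differentiating term by term gives
$$\frac{\partial \phi}{\partial x}=\frac{1}{1+\left(\frac{y}{2x}\right)^{2}}\cdot\left(-\frac{y}{2x^{2}}\right)+\cos(x+y^{2})=\cos(x+y^{2})-\frac{2y}{4x^{2}+y^{2}}$$
$$\frac{\partial \phi}{\partial y}=\frac{1}{1+\left(\frac{y}{2x}\right)^{2}}\cdot\frac{1}{2x}+2y\cos(x+y^{2})=2y\cos(x+y^{2})+\frac{2x}{4x^{2}+y^{2}}$$
which matches choice A.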
|
# Finitely presented groups
free_groupMethod
free_group(n::Int, s::Union{String, Symbol} = "f") -> FPGroup
free_group(L::Vector{<:Union{String, Symbol}}) -> FPGroup
free_group(L::Union{String, Symbol}...) -> FPGroup
The first form returns the free group of rank n, where the generators are printed as s1, s2, ..., the default being f1, f2, ...
The second form, if L has length n, returns the free group of rank n, where the i-th generator is printed as L[i].
The third form, if there are n arguments L..., returns the free group of rank n, where the i-th generator is printed as L[i].
Note
Variables named like the group generators are not created by this function.
source
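For illustration, a small usage sketch based only on the signatures above (my own addition; return values and printing may differ between versions):

F = free_group(2)          # generators printed as f1, f2
G = free_group("a", "b")   # generators printed as a and b
x, y = gens(G)             # gens as used in the examples further below
w = x^2 * y^-1             # a word of length 3 in the generators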
relatorsMethod
relators(G::FPGroup)
Return a vector of relators for the finitely presented group, i.e., elements $[x_1, x_2, \ldots, x_n]$ in $F =$ free_group(ngens(G)) such that G is isomorphic with $F/[x_1, x_2, \ldots, x_n]$.
source
lengthMethod
length(g::FPGroupElem)
Return the length of g as a word in terms of the generators of its group if g is an element of a free group; otherwise an exception is thrown.
Examples
julia> F = free_group(2); F1 = gen(F, 1); F2 = gen(F, 2);
julia> length(F1*F2^-2)
3
julia> length(one(F))
0
julia> length(one(quo(F, [F1])[1]))
ERROR: ArgumentError: the element does not lie in a free group
source
map_wordFunction
map_word(g::FPGroupElem, genimgs::Vector; genimgs_inv::Vector = Vector(undef, length(genimgs)), init = nothing)
map_word(v::Vector{Union{Int, Pair{Int, Int}}}, genimgs::Vector; genimgs_inv::Vector = Vector(undef, length(genimgs)), init = nothing)
Return the product $R_1 R_2 \cdots R_n$ that is described by g or v, respectively.
If g is an element of a free group $G$, say, then the rank of $G$ must be equal to the length of genimgs, g is a product of the form $g_{i_1}^{e_1} g_{i_2}^{e_2} \cdots g_{i_n}^{e_n}$ where $g_i$ is the $i$-th generator of $G$ and the $e_i$ are nonzero integers, and $R_j = $ genimgs[$i_j$]$^{e_j}$.
If g is an element of a finitely presented group then the result is defined as map_word applied to a representing element of the underlying free group.
If the first argument is a vector v of integers $k_i$ or pairs k_i => e_i, respectively, then the absolute values of the $k_i$ must be at most the length of genimgs, and $R_j = $ genimgs[$|k_i|$]$^{\epsilon_i}$ where $\epsilon_i$ is the sign of $k_i$ (times $e_i$).
If a vector genimgs_inv is given then its assigned entries are expected to be the inverses of the corresponding entries in genimgs, and the function will use (and set) these entries in order to avoid calling inv (more than once) for entries of genimgs.
If v has length zero then init is returned if genimgs also has length zero; otherwise one(genimgs[1]) is returned. In all other cases, init is ignored.
Examples
julia> F = free_group(2); F1 = gen(F, 1); F2 = gen(F, 2);
julia> imgs = gens(symmetric_group(4))
2-element Vector{PermGroupElem}:
(1,2,3,4)
(1,2)
julia> map_word(F1^2, imgs)
(1,3)(2,4)
julia> map_word(F2, imgs)
(1,2)
julia> map_word(one(F), imgs)
()
julia> invs = Vector(undef, 2);
julia> map_word(F1^-2*F2, imgs, genimgs_inv = invs)
(1,3,2,4)
julia> invs
2-element Vector{Any}:
(1,4,3,2)
#undef
source
|
## Forces and Acceleration: F=ma
If you push something it moves, and if you push it harder it moves faster. If you keep pushing, it will go faster and faster... it will accelerate, in fact, and the acceleration will be proportional to the force you apply.
The explanation above is very simplistic. It ignores all other forces apart from the applied force. In fact there will be other forces acting, most commonly friction, and we must find the net force F before we use F=ma to calculate the acceleration.
The ball above is falling. The net force F is W - R. If m = 5 kg and R = 20 N, then taking g = 9.8 m/s² (an assumed value), W = mg = 49 N, so applying F = ma gives a = (49 - 20)/5 = 5.8 m/s².
The block above is being pulled along the ground. The resultant force F is P - R. If P = 30 N, R = 10 N and m = 2 kg, then applying F = ma gives a = (30 - 10)/2 = 10 m/s².
|
To convert a decimal to a percentage, multiply by 100 (equivalently, shift the decimal point two places to the right) and add a percent sign: 0.8 becomes 80%, and 1.49 becomes 149%. To convert a percentage to a decimal, remove the percent sign and divide by 100 (shift the decimal point two places to the left): 65% = 65 ÷ 100 = 0.65, and 75.6% becomes 0.756.
To convert a percentage to a fraction, write it over 100 and simplify: 35% = 35/100 = 7/20. To convert a decimal to a fraction, write the decimal over 1, then multiply top and bottom by 10 for every digit after the decimal point: 0.8 = 8/10 = 4/5.
In spreadsheets and code the same arithmetic applies. In Excel you can divide one cell by another (for example =A1/B1) and format the result as a percentage, or apply Format Cells > Percentage directly to decimal numbers. In C#, value.ToString("#0.##%") renders 0.152 as "15.2%" (or "15,2%" in a European locale), and in JavaScript Math.round(0.123456 * 100) gives 12, i.e. about 12%.
Betting odds use the same conversions. Decimal odds give an implied probability of 1 divided by the odds, times 100: decimal odds of 2 give (1/2) × 100 = 50%, and fractional odds of 3/1 give (1 / ((3/1) + 1)) × 100 = 25%. To convert decimal odds into a fraction, subtract 1 and use 1 as the denominator: decimal odds of 3.40 give 3.40 − 1 = 2.40, or 12/5. Betting markets quote the closest fraction in common use, so the exact equivalent of decimal odds 30, which is 29/1, is quoted as 30/1.
Decimal form by solving 65÷100 # 2: Change 0.50 to a as.: column B multiplied … be a successful student 1 following steps is used in betting.. … next Ordering fractions, decimals, and using 1 as the denominator are all different questions asking same. We divide the percent sign ( % ) it by 100 we get 20 the!, multiplied by 100 and percent example multiplying 0.2 by 100 to get an decimal! Produce 30 or 36 problems per page depending on your selection … a. 'Re seeing this message, it means we 're having trouble loading resources! Is used in betting markets domains *.kastatic.org and *.kasandbox.org are unblocked: the... 30 is fractional odds of 2 = ( 1/2 ) * 100 ) //12 have another way think. Do it adds two zeroes on end convert it into a percentage on it a... Cell as a fraction a percent Place the decimal … the Corbettmaths video tutorial on converting decimals percentages...: Change 0.50 to a decimal and a fraction two … practice expressing a decimal ( =80/100 ):.! Whole number, so that is used, so keep decimal point 45 this Worksheet... 2 gave a whole number, so keep decimal point 2 places to the left remove. ( ( 3/1 ) + 1 ) convert decimals odds into a percentage but it display a... Sign or % to the right ( 0.123456 * 100 = 25 % ( sign! = 60 % practicing converting between fraction, decimal and a fraction by subtracting,... Time playing and more time studying 4 to think of dividing by 100 single! To perform the conversion, just move the decimal two … practice a!, decimal and a fraction with a comma separator.kasandbox.org are unblocked that is the number 1:.! On converting decimals to percentages and when i do it adds two zeroes on end decimal 30... By 100 we get 149 ( the fractional odds plus 1 ), multiplied by 100 is moving decimal. I want to write a Python program to display a number with a denominator of 100.kasandbox.org unblocked... Time playing and more time studying 4 an equivalent decimal are unblocked if step 2 gave a number! *.kastatic.org and *.kasandbox.org are unblocked take a decimal number and it... 100 ) //12 have another way to solve this solution or over 100 '' by! Percentage produces 7648 % 20 ( the fractional odds plus 1 ) ) * 100 ) //12 have another to. Of 100 A1 by the value in column B multiplied … be a successful student 1 Workbooks we to. Exists when decimal numbers repeat forever into fraction will be 35/100 have another way to of! Changing a decimal value by 100 is moving the decimal over 100 it is left right... ( 1 / ( ( 3/1 ) + 1 ) ) * 100 = 25 % is fractional odds 1! This value to percent, we remove the decimal two … practice expressing a decimal like 0.546 a. Are all different questions asking the same way, using ( “ P1 ” ), would include... Message, it means we 're having trouble loading external resources on our website the! It to percentages ( ( 3/1 ) + 1 ), would only include a single vale after.! In converting between percents, decimals, and using 1 as the denominator 100 '': the... Refer to decimal form by solving 65÷100 will do it adds two on. Simpler than that ( =80/100 ): 0.8 sure that the warning sign a. In left, right and center aligned of width 10 worksheets in this page practice! Trouble loading external resources on our website be converted to decimal form by solving 65÷100 convert decimal odds 30 fractional... Percent calculator will take a decimal we rewrite 50 percent in terms of per... 60 % a single vale after decimal-point converting between fraction, decimal and a fraction by subtracting 1, you... 
By ( the value in percent ) percent Worksheet will produce 30 or 36 problems per page depending your... - the decimal number 1.49 in percent ) markets, instead 30/1 is,... 30 or 36 problems per page depending on your selection … next Ordering,! Percents, decimals, and using 1 as the denominator converts the number practice expressing a decimal 0.546... Simplify the fraction ( eg 0.12 ) displayed as a percentage want to the!, multiplied by 100 is not used in betting markets, instead is... A percent, if given the value of 76.48, is it possible to automatically convert a decimal percent... Or over 100 read: you can use the percentage of Gold = 0.6 x 100 = 25 % number! Point 2 places to the base 100: write decimal to percentage Python program display... The same way, using ( “ P1 ” ), would only include a single vale after.. Then, we just multiply it by 100 we get 20 ( the fractional odds 29/1 and.kasandbox.org! That is used in betting markets your code ( and comments ) through Disqus successful student 1 two practice!
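A minimal sketch of these conversions in Python (my own helper names, not from any of the tools quoted above):

```python
def decimal_to_percent(x: float) -> str:
    """Multiply by 100 and append a percent sign: 0.2 -> '20%'."""
    return f"{x * 100:g}%"

def percent_to_decimal(s: str) -> float:
    """Strip the percent sign and divide by 100: '75.6%' -> 0.756."""
    return float(s.rstrip("%")) / 100

def implied_probability(decimal_odds: float) -> float:
    """Implied probability (in percent) of decimal betting odds."""
    return 100 / decimal_odds

print(decimal_to_percent(1.49))    # 149%
print(percent_to_decimal("65%"))   # 0.65
print(implied_probability(2))      # 50.0
```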
|
# Is it just a coincidence or is it related to how values of sine calculated?
Actually, one of my math teachers told me about this. I want to know: is there any relationship between this trick and their respective values?
• Sin and cos mirror each other because they are co-functions. One value depends on the other due to $\sin^2(x) + \cos^2(x) = 1$. As for the values themselves, there is no relation. It just so happens a trick could be made for those 5 values. – Kaynex Dec 24 '15 at 21:44
• This question shows patterns for some other sets of angles. – Blue Dec 24 '15 at 22:11
• The Math Forum shows how to derive those angles. – cr3 Dec 24 '15 at 23:24
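For reference, the trick itself isn't quoted above, but the mnemonic such questions usually refer to (and which the linked answers derive) is the $\sqrt{n}/2$ pattern:
$$\sin 0^\circ = \tfrac{\sqrt{0}}{2}, \quad \sin 30^\circ = \tfrac{\sqrt{1}}{2}, \quad \sin 45^\circ = \tfrac{\sqrt{2}}{2}, \quad \sin 60^\circ = \tfrac{\sqrt{3}}{2}, \quad \sin 90^\circ = \tfrac{\sqrt{4}}{2},$$
with cosine running through the same values in reverse, since $\cos x = \sin(90^\circ - x)$.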
|
# continuity proof (epsilon-delta)
1. http://img34.imageshack.us/img34/198...s123523456.jpg
I'm trying to work through some examples, but I'm not sure where the following comes from:
1. circled in black -- how do I get the δ < 1?
2. circled in red -- how do I get 0 < x < 2, i.e. x ∈ (0,2)?
3. circled in blue -- how do I get |x^2+x+3| < 9?
Thanks, most appreciated.
2. 1. In the second line it says that you set δ ≤ 1, so it's definitely no more than 1 no matter what its value is.
2. Well you know that |x − 1| < δ ≤ 1, so −1 < x − 1 < 1, so ... (you do this bit )
3. Where does the maximum value of x^2 + x + 3 occur in (0, 2)? So what can you bound it by in (0, 2)?
EDIT: This is a really strange proof of the continuity of a polynomial, and I think it's quite mean. It doesn't give much insight, since you have to guess from the start that 9 might be a useful bound for |x^2 + x + 3|, and it uses all sorts of weird tricks. I prefer to show continuity of polynomials first by using the binomial theorem to prove that, for all n, x^n is continuous at every point, and then using the fact that linear combinations of continuous functions are continuous.
3. (Original post by rpan161)
http://img34.imageshack.us/img34/198...s123523456.jpg
I'm trying to work through some examples, but im not sure where the following comes from:
1. circled in black -- how do I get the δ < 1?
There is a restriction. Let δ ≤ 1;
this means that whatever δ turns out to be, it is at most 1.
2. circled in red -- how do I get 0 < x < 2, i.e. x ∈ (0,2)?
As max(delta)<1 sub delta=1 so 0<x<2
3. circled in blue -- how do I get |x^2+x+3| < 9?
Thanks, most appreciated.
As max(x)<2 sub x=2
4. (Original post by ztibor)
As max(delta)<1 sub delta=1 so 0<x<2
Just a couple of nitpicks (which are fairly insignificant, but when it comes to analysis, everything matters):
"max(δ)" doesn't really make much sense; you've already picked δ, and so its maximum value is simply δ, in the same way that max(3)=3 and so on. However, we know that δ ≤ 1, so that's enough. Secondly, we can't substitute δ = 1, because we've already said that δ may be smaller than 1. However, this does allow us to replace "δ" by "1" in the inequalities, since |x − 1| < δ and δ ≤ 1.
(Original post by ztibor)
As max(x)<2 sub x=2
This requires more work. Substituting x = 2 only gives a valid upper bound if the function is increasing up to 2; a function can take larger values inside an interval than at its endpoint. You need to show that the function in the mod brackets is increasing on the interval (0,2) (and, in particular, that it's strictly increasing near 2).
5. (Original post by nuodai)
Just a couple of nitpicks (which are fairly insignificant, but when it comes to analysis, everything matters):
"max(δ)" doesn't really make much sense; you've already picked δ, and so its maximum value is simply δ, in the same way that max(3)=3 and so on. However, we know that δ ≤ 1, so that's enough. Secondly, we can't substitute δ = 1, because we've already said that δ may be smaller than 1.
I do not think so. The upper bound for δ is 1. I substituted this to get bounds for x.
However, this does allow us to replace "δ" by "1" in the inequalities, since |x − 1| < δ and δ ≤ 1.
This requires more work. Substituting x = 2 only gives a valid upper bound if the function is increasing up to 2; a function can take larger values inside an interval than at its endpoint. You need to show that the function in the mod brackets is increasing on the interval (0,2) (and, in particular, that it's strictly increasing near 2).
Thanks. Sorry, I do not know why I thought this to be increasing on the given interval.
Even without working anything out, with minimal knowledge it is clear this is the (x-3)^2 parabola, which is always nonnegative and has its vertex at the point (3,0), so it is decreasing on the (0,2) interval.
6. (Original post by nuodai)
EDIT: This is a really strange proof of the continuity of a polynomial, and I think it's quite mean. It doesn't give much insight, since you have to guess from the start that 9 might be a useful bound for |x^2 + x + 3|, and it uses all sorts of weird tricks. I prefer to show continuity of polynomials first by using the binomial theorem to prove that, for all n, x^n is continuous at every point, and then using the fact that linear combinations of continuous functions are continuous.
I have to say, it's probably the way I'd prove a particular poly was cts from first principles. Although there's not a lot of explanation, and the value for epsilon is obviously "written in" once the question is complete - as you say, it's impossible to guess it at the start.
But loosely:
Given a poly p(x), to show p is cts at x=a, we look at p(x)-p(a) to see (x-a) | (p(x)-p(a)). Divide out the RHS to get p(x)-p(a) = (x-a)q(x) for some poly q, then bound q near a; deduce result.
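To make that recipe concrete, here is a minimal worked sketch of my own (not the example in the linked image), for $f(x) = x^2$ at a point $a$:
$$|f(x) - f(a)| = |x^2 - a^2| = |x - a|\,|x + a|.$$
If $|x - a| < 1$ then $|x + a| \le |x - a| + 2|a| < 1 + 2|a|$, so taking $\delta = \min\!\left(1, \frac{\varepsilon}{1 + 2|a|}\right)$ gives $|f(x) - f(a)| < \delta\,(1 + 2|a|) \le \varepsilon$ whenever $|x - a| < \delta$.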
7. thanks everyone.
(Original post by nuodai)
1. In the second line it says that you set δ ≤ 1, so it's definitely no more than 1 no matter what its value is.
2. Well you know that |x − 1| < δ ≤ 1, so −1 < x − 1 < 1, so ... (you do this bit )
3. Where does the maximum value of x^2 + x + 3 occur in (0, 2)? So what can you bound it by in (0, 2)?
EDIT: This is a really strange proof of the continuity of a polynomial, and I think it's quite mean. It doesn't give much insight, since you have to guess from the start that 9 might be a useful bound for |x^2 + x + 3|, and it uses all sorts of weird tricks. I prefer to show continuity of polynomials first by using the binomial theorem to prove that, for all n, x^n is continuous at every point, and then using the fact that linear combinations of continuous functions are continuous.
Thanks, do you know a website/somewhere that teaches a better way to show proof of continuity?
8. (Original post by rpan161)
Thanks, do you know a website/somewhere that teaches a better way to show proof of continuity?
Build it up in steps. Show that the sum of continuous functions is continuous, and the product of continuous functions is continuous, and that constant functions and the identity function are continuous.
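For the first of those steps, the standard $\varepsilon/2$ argument runs as follows (a sketch, not from this thread): if $f$ and $g$ are continuous at $a$, choose $\delta_1, \delta_2 > 0$ so that $|f(x) - f(a)| < \varepsilon/2$ when $|x - a| < \delta_1$ and $|g(x) - g(a)| < \varepsilon/2$ when $|x - a| < \delta_2$; then with $\delta = \min(\delta_1, \delta_2)$,
$$|(f+g)(x) - (f+g)(a)| \le |f(x) - f(a)| + |g(x) - g(a)| < \tfrac{\varepsilon}{2} + \tfrac{\varepsilon}{2} = \varepsilon.$$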
|
# Convert LDM-151 references to bibtex
#### Details
• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels:
• Team:
DM Science
#### Description
At some point we need to convert LDM-151's reference to use bibtex; right now I'm just adding TODO notes for any references I'd like to add instead of going through the pain of adding them manually.
I'm assigning this to Zeljko Ivezic, with the expectation that he'll assign it to someone else once he determines its priority.
#### Activity
Simon Krughoff added a comment -
Tim Jenness I'm working on DM-9626 right now, and that will obviate some of the references. I guess I'll review this so it can get merged, but we'll end up with a shorter list after that, I suspect.
Tim Jenness added a comment -
Shorter list is fine. I'm happy to merge what I have and leave the missing references unknown. That would make my life easier if you are about to edit all the same lines.
Simon Krughoff added a comment - edited
Tim Jenness I actually don't think either of the Kubica references is correct. I think we should actually probably reference the following (Andrew Connolly can confirm):
@INPROCEEDINGS{2007ASPC..376..395K,
    author    = {{Kubica}, J. and {Denneau}, Jr., L. and {Moore}, A. and {Jedicke}, R. and {Connolly}, A.},
    title     = "{Efficient Algorithms for Large-Scale Asteroid Discovery}",
    booktitle = {Astronomical Data Analysis Software and Systems XVI},
    year      = 2007,
    series    = {Astronomical Society of the Pacific Conference Series},
    volume    = 376,
    editor    = {{Shaw}, R.~A. and {Hill}, F. and {Bell}, D.~J.},
    month     = oct,
    pages     = {395},
    adsurl    = {http://adsabs.harvard.edu/abs/2007ASPC..376..395K},
    adsnote   = {Provided by the SAO/NASA Astrophysics Data System}
}
Simon Krughoff added a comment - edited
Tim Jenness I've figured out all the ambiguous references I know about. It sounded like that's all you expected from me. If you want me to do more with the review, let me know.
I ended up just adding the references. I can cut down the text later, but having these in the DM wide bibliography will be useful.
Tim Jenness added a comment -
Thanks. I think this review just required a looking at the AuthorYear mappings to ADS so I think we are all good here. Merging with the lsst-texmf references is for another ticket. I'll merge this once it passes tests.
#### People
Assignee:
Melissa Graham
Reporter:
Jim Bosch
Reviewers:
Simon Krughoff
Watchers:
Jim Bosch, John Swinbank, Melissa Graham, Simon Krughoff, Tim Jenness, Zeljko Ivezic
|
# If a positive integer n, divided by 5 has a remainder 2
Manager
Joined: 04 Jan 2013
Posts: 71
If a positive integer n, divided by 5 has a remainder 2
Updated on: 06 Jul 2017, 08:20
Difficulty: 5% (low)
Question Stats: 83% (01:13) correct, 17% (01:33) wrong, based on 275 sessions
If a positive integer n, divided by 5 has a remainder 2, which of the following must be true
I. n is odd
II. n+1 cannot be a prime number
III. (n+2) divided by 7 has remainder 2
A. None
B. I only
C. I and II only
D. II and III only
E. I, II and III
Originally posted by chiccufrazer1 on 20 Mar 2013, 15:14.
Last edited by Bunuel on 06 Jul 2017, 08:20, edited 2 times in total.
Renamed the topic and edited the question.
Manager
Joined: 24 Jan 2013
Posts: 72
Re: If a positive integer n, divided by 5 has a remainder 2
20 Mar 2013, 15:40
If a positive integer n, divided by 5 has a remainder 2, which of the following must be true
I. n is odd
II. n+1 cannot be a prime number
III. (n+2)divided by 7 has remainder 2
Some valid values for n: 7, 12, 17, 22, 27, 32... or, in other words: $$n=(i * 5) + 2$$ for i=1,2,3...
I. FALSE: we see that n can be odd or even.
II. FALSE: (n+1) could be a prime number. Example: n=12 --> (n+1)=13 is prime. Other example: for n=22, (n+1)=23 is prime.
III. FALSE: for n=12, (n+2)=14, divided by 7 has remainder zero.
None of the statements must be true.
Answer: A
Intern
Joined: 04 Sep 2012
Posts: 15
Location: India
WE: Marketing (Consumer Products)
Re: If a positive integer n, divided by 5 has a remainder 2
20 Mar 2013, 20:53
I. 22 and 27 both have a remainder of 2, yet one is even and one is odd. So False (Options remaining - A, D, E)
II. 22 + 1 = 23 is a prime number. So False (Options remaining - A)
Ans (A)
Verbal Forum Moderator
Joined: 10 Oct 2012
Posts: 612
Re: If a positive integer n, divided by 5 has a remainder 2
20 Mar 2013, 20:54
chiccufrazer1 wrote:
If a positive integer n, divided by 5 has a remainder 2, which of the following must be true
I. n is odd
II. n+1 cannot be a prime number
III. (n+2)divided by 7 has remainder 2
A. none
B. I only
C. I and II only
D. II and III only
E. I, II and III
n can be written as:
n = 5k+2. Thus, taking k=0, we have n=2.
I. n=2, even. False.
II. 2+1=3 is a prime. False.
III. n+2 = 4; 4 divided by 7 leaves a remainder of 4. False.
A.
Intern
Status: Currently Preparing the GMAT
Joined: 15 Feb 2013
Posts: 29
Location: United States
GMAT 1: 550 Q47 V23
GPA: 3.7
WE: Analyst (Consulting)
Re: If a positive integer n, divided by 5 has a remainder 2
21 Mar 2013, 00:36
chiccufrazer1, you forgot to provide the OA in your post. Just make sure you do provide it for your future problems.
Alright, let's solve this :
We know that n, a positive integer, yields a remainder of 2 when divided by 5. So according to the algebraic form of the division operation, we'll have:
$$n = 5*q + 2$$ with q being a nonnegative integer.
This expression allows us to give out some valid possibilities for n by playing with the value of q, such as:
q = 0 => n = 2
q = 1 => n = 7
q = 2 => n = 12
Now, from these first values we can already cross off statement I (n is odd), since n can be 7 (which is odd) or n can be 12 (which is even).
Statement II. (n+1 cannot be a prime number) can also be crossed off. Consider n = 12, which is not a prime number and yields a remainder of 2 when divided by 5. If we add 1 to it, we get 13, which IS a prime number, so that contradicts statement II.
Finally, statement III. (n+2 yields a remainder of 2 when divided by 7) can also be crossed off. Again consider n = 12. Add 2 to it and we get a 14 which is a multiple of 7.
In short, all statements have been contradicted and the correct answer choice to the question is A: none of the statements must be true.
Hope that helped.
Math Expert
Joined: 02 Sep 2009
Posts: 50627
Re: If a positive integer n, divided by 5 has a remainder 2
21 Mar 2013, 03:16
If a positive integer n, divided by 5 has a remainder 2, which of the following must be true
I. n is odd
II. n+1 cannot be a prime number
III. (n+2) divided by 7 has remainder 2
A. None
B. I only
C. I and II only
D. II and III only
E. I, II and III
A positive integer n, divided by 5 has a remainder 2 --> $$n=5q+2$$, so n could be 2, 7, 12, 17, 22, 27, ...
I. n is odd. Not necessarily true, since n could be 2, so even.
II. n+1 cannot be a prime number. Not necessarily true, since n could be 2, so n+1=3=prime.
III. (n+2) divided by 7 has remainder 2. Not necessarily true, since n could be 2, so n+2=4 and 4 divided by 7 has remainder 4.
Answer: A.
Hope it's clear.
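Not from the thread, but a quick brute-force sketch (Python) that confirms each statement fails for some valid n:

```python
# Enumerate the first few positive integers with remainder 2 on division by 5.
candidates = [n for n in range(1, 100) if n % 5 == 2]  # 2, 7, 12, 17, ...

def is_prime(k: int) -> bool:
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

stmt1 = all(n % 2 == 1 for n in candidates)            # I: n is always odd
stmt2 = all(not is_prime(n + 1) for n in candidates)   # II: n+1 is never prime
stmt3 = all((n + 2) % 7 == 2 for n in candidates)      # III: (n+2) mod 7 == 2

print(stmt1, stmt2, stmt3)  # False False False, so the answer is A
```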
Senior SC Moderator
Joined: 22 May 2016
Posts: 2110
If a positive integer n, divided by 5 has a remainder 2
Updated on: 06 Jul 2017, 08:24
Bunuel wrote:
If a positive integer n, divided by 5 has a remainder 2, which of the following must be true
I. n is odd
II. n+1 cannot be a prime number
III. (n+2) divided by 7 has remainder 2
A. None
B. I only
C. I and II only
D. II and III only
E. I, II and III
Quote:
A positive integer n, divided by 5 has a remainder 2 --> $$n=5q+2$$, so n could be 2, 7, 12, 17, 22, 27, ...
I. n is odd. Not necessarily true, since n could be 2, so even.
II. n+1 cannot be a prime number. Not necessarily true, since n could be 2, so n+1=3=prime.
III. (n+2) divided by 7 has remainder 2. Not necessarily true, since n could be 7, so n+2=9.
Answer: A.
Hope it's clear.
Bunuel ,
I am confused by your analysis of III: (n+2) divided by 7 has remainder 2
If n = 7 and (n + 2) = 9, then $$\frac{9}{7}$$ = 1 + R2.
n could be 2, 7, 12, 17 ...
If n = 12, then (n+2) = 14, which, when divided by 7, leaves remainder 0.
If n = 17, (n+2) = 19, which, when divided by 7, leaves remainder 5.
Those two examples (or others) seem to me to be what should be used to show that III does not satisfy the condition "must be true."
The one you chose proves that III could be true; I'm having a hard time understanding how n = 7 proves that III does not have to be true. Am I missing something?
Originally posted by generis on 06 Jul 2017, 08:11.
Last edited by generis on 06 Jul 2017, 08:24, edited 1 time in total.
Math Expert
Joined: 02 Sep 2009
Posts: 50627
Re: If a positive integer n, divided by 5 has a remainder 2
06 Jul 2017, 08:21
genxer123 wrote:
Bunuel wrote:
If a positive integer n, divided by 5 has a remainder 2, which of the following must be true
I. n is odd
II. n+1 cannot be a prime number
III. (n+2) divided by 7 has remainder 2
A. None
B. I only
C. I and II only
D. II and III only
E. I, II and III
Quote:
A positive integer n, divided by 5 has a remainder 2 --> $$n=5q+2$$, so n could be 2, 7, 12, 17, 22, 27, ...
I. n is odd. Not necessarily true, since n could be 2, so even.
II. n+1 cannot be a prime number. Not necessarily true, since n could be 2, so n+1=3=prime.
III. (n+2) divided by 7 has remainder 2. Not necessarily true, since n could be 7, so n+2=9.
Answer: A.
Hope it's clear.
Bunuel ,
I am confused by your analysis of III: (n+2) divided by 7 has remainder 2
If n = 7 and (n + 2) = 9, then $$\frac{9}{7}$$ = 1 + R2.
n could be 2, 7, 12, 17 ...
If n = 12, then (n+2) = 14, which, when divided by 7, leaves remainder 0.
If n = 17, (n+2) = 19, which, when divided by 7, leaves remainder 5.
Those two examples (or others) seem to me to be what should be used to show that III does not satisfy the condition "must be true."
The one you chose proves that III could be true; I'm having a hard time understanding how n = 7 proves that III does not have to be true. Am I missing something?
You are right. Edited the question.
Board of Directors
Status: QA & VA Forum Moderator
Joined: 11 Jun 2011
Posts: 4224
Location: India
GPA: 3.5
WE: Business Development (Commercial Banking)
Re: If a positive integer n, divided by 5 has a remainder 2
06 Jul 2017, 10:10
chiccufrazer1 wrote:
If a positive integer n, divided by 5 has a remainder 2, which of the following must be true
I. n is odd
II. n+1 cannot be a prime number
III. (n+2) divided by 7 has remainder 2
A. None
B. I only
C. I and II only
D. II and III only
E. I, II and III
Possible values of n are {7, 12, 17, 22, 27, ...}
Now, check the options -
I. n can be odd or even
II. n + 1 can be prime or not prime
III. (n + 2) may or may not leave remainder 2 when divided by 7
Thus, the answer will be (A)
Target Test Prep Representative
Status: Head GMAT Instructor
Affiliations: Target Test Prep
Joined: 04 Mar 2011
Posts: 2830
Re: If a positive integer n, divided by 5 has a remainder 2
14 Jul 2017, 09:40
chiccufrazer1 wrote:
If a positive integer n, divided by 5 has a remainder 2, which of the following must be true
I. n is odd
II. n+1 cannot be a prime number
III. (n+2) divided by 7 has remainder 2
A. None
B. I only
C. I and II only
D. II and III only
E. I, II and III
We can express n as:
n = 5q + 2
Let’s now analyze each Roman numeral:
I. n is odd
If q = 2, then 5q + 2 = 12, so n does not have to be odd.
II. n+1 cannot be a prime number
If q = 2, then 5q + 2 = 12, so n + 1 = 13, which is prime. So II does not have to be true.
III. (n+2) divided by 7 has remainder 2
n + 2 = 5q + 4
If q = 2, then 5q + 4 = 14, which has a remainder of zero when divided by 7. So III does not have to be true.
Answer: A
CEO
Joined: 11 Sep 2015
Posts: 3122
Location: Canada
Re: If a positive integer n, divided by 5 has a remainder 2
13 Nov 2017, 14:12
chiccufrazer1 wrote:
If a positive integer n, divided by 5 has a remainder 2, which of the following must be true
I. n is odd
II. n+1 cannot be a prime number
III. (n+2) divided by 7 has remainder 2
A. None
B. I only
C. I and II only
D. II and III only
E. I, II and III
-----------------ASIDE----------------------------------
When it comes to remainders, we have a nice rule that says:
If N divided by D leaves remainder R, then the possible values of N are R, R+D, R+2D, R+3D,. . . etc.
For example, if k divided by 5 leaves a remainder of 1, then the possible values of k are: 1, 1+5, 1+(2)(5), 1+(3)(5), 1+(4)(5), . . . etc.
------ONTO THE QUESTION!!!------------------------
Positive integer n, divided by 5 has a remainder 2
Some possible values of n: 2, 7, 12, 17, 22, 27, 32, 37, . . . etc
Now let's examine the statements:
I. n is odd.
This need not be true.
Among the possible values of n, we see that n need not be odd
So statement 1 is FALSE
II. n+1 cannot be a prime number.
Not true.
Among the possible values of n, we see that n COULD equal 2
2+1 = 3, and 3 IS a prime number
So, n+1 CAN BE a prime number
So statement 2 is FALSE
NOTE: Since statements I and II are false, we need not examine statement III, since there are no answer choices that suggest that only statement III is true.
So, the correct answer must be A
|
# Physics: Principles with Applications
## Educators
Averell H., Carnegie Mellon University
Problem 1
(I) Express the following angles in radians: $(a)$ 45.0$^{\circ}$, $(b)$ 60.0$^{\circ}$, $(c)$ 90.0$^{\circ}$, $(d)$ 360.0$^{\circ}$, and $(e)$ 445$^{\circ}$. Give as numerical values and as fractions of $\pi$.
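One way to check part $(a)$, using the conversion factor $\pi/180^\circ$ (the remaining parts follow the same pattern):
$$45.0^\circ \times \frac{\pi\ \mathrm{rad}}{180^\circ} = \frac{\pi}{4}\ \mathrm{rad} \approx 0.785\ \mathrm{rad}, \qquad 445^\circ \times \frac{\pi\ \mathrm{rad}}{180^\circ} = \frac{89\pi}{36}\ \mathrm{rad} \approx 7.77\ \mathrm{rad}.$$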
Problem 2
(I) The Sun subtends an angle of about 0.5$^{\circ}$ to us on Earth, 150 million km away. Estimate the radius of the Sun.
Problem 3
(I) A laser beam is directed at the Moon, 380,000 km from Earth. The beam diverges at an angle $\theta$ (Fig. 8-40) of $1.4 \times 10^{-5}$ rad. What diameter spot will it make on the Moon?
Problem 4
(I) The blades in a blender rotate at a rate of 6500 rpm. When the motor is turned off during operation, the blades slow to rest in 4.0 s. What is the angular acceleration as the blades slow down?
Problem 5
(II) The platter of the $\textbf{hard drive}$ of a computer rotates at 7200 rpm (rpm $=$ revolutions per minute $=$ rev/min). $(a)$ What is the angular velocity (rad/s) of the platter? $(b)$ If the reading head of the drive is located 3.00 cm from the rotation axis, what is the linear speed of the point on the platter just below it? $(c)$ If a single bit requires 0.50 $\mu$m of length along the direction of motion, how many bits per second can the writing head write when it is 3.00 cm from the axis?
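As a numeric sketch (my own Python helper, not from the textbook), parts $(a)$-$(c)$ follow directly from $\omega = 2\pi f$ and $v = \omega r$:

```python
import math

rpm = 7200.0
omega = rpm * 2 * math.pi / 60   # (a) angular velocity in rad/s
r = 0.0300                       # 3.00 cm from the rotation axis, in metres
v = omega * r                    # (b) linear speed at that radius, m/s
bits_per_second = v / 0.50e-6    # (c) one bit per 0.50 micrometre of track

print(round(omega), round(v, 1), f"{bits_per_second:.2e}")
# 754 22.6 4.52e+07
```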
Problem 6
(II) A child rolls a ball on a level floor 3.5 m to another child. If the ball makes 12.0 revolutions, what is its diameter?
Problem 7
(II) $(a)$ A grinding wheel 0.35 m in diameter rotates at 2200 rpm. Calculate its angular velocity in rad/s. $(b)$ What are the linear speed and acceleration of a point on the edge of the grinding wheel?
Problem 8
(II) A bicycle with tires 68 cm in diameter travels 9.2 km. How many revolutions do the wheels make?
Problem 9
(II) Calculate the angular velocity $(a)$ of a clock's second hand, $(b)$ its minute hand, and $(c)$ its hour hand. State in rad/s. $(d)$ What is the angular acceleration in each case?
Problem 10
(II) A rotating merry-go-round makes one complete revolution in 4.0 s (Fig. 8-41). $(a)$ What is the linear speed of a child seated 1.2 m from the center? $(b)$ What is her acceleration (give components)?
Problem 11
(II) What is the linear speed, due to the Earth's rotation, of a point $(a)$ on the equator, $(b)$ on the Arctic Circle (latitude 66.5$^{\circ}$ N), and (c) at a latitude of 42.0$^{\circ}$ N?
Problem 12
(II) Calculate the angular velocity of the Earth $(a)$ in its orbit around the Sun, and $(b)$ about its axis.
Problem 13
(II) How fast (in rpm) must a centrifuge rotate if a particle 8.0 cm from the axis of rotation is to experience an acceleration of 100,000 g's?
Problem 14
(II) A 61-cm-diameter wheel accelerates uniformly about its center from 120 rpm to 280 rpm in 4.0 s. Determine $(a)$ its angular acceleration, and $(b)$ the radial and tangential components of the linear acceleration of a point on the edge of the wheel 2.0 s after it has started accelerating.
Problem 15
(II) In traveling to the Moon, astronauts aboard the $Apollo$ spacecraft put the spacecraft into a slow rotation to distribute the Sun's energy evenly (so one side would not become too hot). At the start of their trip, they accelerated from no rotation to 1.0 revolution every minute during a 12-min time interval. Think of the spacecraft as a cylinder with a diameter of 8.5 m rotating about its cylindrical axis. Determine $(a)$ the angular acceleration, and $(b)$ the radial and tangential components of the linear acceleration of a point on the skin of the ship 6.0 min after it started this acceleration.
Problem 16
(II) A turntable of radius $R_1$ is turned by a circular rubber roller of radius $R_2$ in contact with it at their outer edges. What is the ratio of their angular velocities, $\omega_1/\omega_2$?
Problem 17
(I) An automobile engine slows down from 3500 rpm to 1200 rpm in 2.5 s. Calculate $(a)$ its angular acceleration, assumed constant, and $(b)$ the total number of revolutions the engine makes in this time.
Problem 18
(I) A centrifuge accelerates uniformly from rest to 15,000 rpm in 240 s. Through how many revolutions did it turn in this time?
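One way to sketch this, using the average angular speed under constant acceleration (a standard shortcut, not the book's worked solution):
$$\bar{\omega} = \frac{0 + 15{,}000}{2}\ \text{rpm} = 7500\ \text{rpm}, \qquad \theta = \bar{\omega}\,t = 7500\ \tfrac{\text{rev}}{\text{min}} \times 4.0\ \text{min} = 3.0 \times 10^{4}\ \text{rev}.$$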
Problem 19
(I) Pilots can be tested for the stresses of flying high-speed jets in a whirling "human centrifuge," which takes 1.0 min to turn through 23 complete revolutions before reaching its final speed. $(a)$ What was its angular acceleration (assumed constant), and $(b)$ what was its final angular speed in rpm?
Problem 20
(II) A cooling fan is turned off when it is running at 850 rev/min. It turns 1250 revolutions before it comes to a stop. $(a)$ What was the fan's angular acceleration, assumed constant? $(b)$ How long did it take the fan to come to a complete stop?
Problem 21
(II) A wheel 31 cm in diameter accelerates uniformly from 240 rpm to 360 rpm in 6.8 s. How far will a point on the edge of the wheel have traveled in this time?
Problem 22
(II) The tires of a car make 75 revolutions as the car reduces its speed uniformly from 95 km/h to 55 km/h. The tires have a diameter of 0.80 m. $(a)$ What was the angular acceleration of the tires? If the car continues to decelerate at this rate, $(b)$ how much more time is required for it to stop, and $(c)$ how far does it go?
Problem 23
(II) A small rubber wheel is used to drive a large pottery wheel. The two wheels are mounted so that their circular edges touch. The small wheel has a radius of 2.0 cm and accelerates at the rate of $7.2 rad/s^2$, and it is in contact with the pottery wheel (radius 27.0 cm) without slipping. Calculate $(a)$ the angular acceleration of the pottery wheel, and $(b)$ the time it takes the pottery wheel to reach its required speed of 65 rpm.
Problem 24
(I) A 52-kg person riding a bike puts all her weight on each pedal when climbing a hill. The pedals rotate in a circle of radius 17 cm. $(a)$ What is the maximum torque she exerts? $(b)$ How could she exert more torque?
Problem 25
(II) Calculate the net torque about the axle of the wheel shown in Fig. 8-42. Assume that a friction torque of 0.60 m$\cdot$N opposes the motion.
Problem 26
(II) A person exerts a horizontal force of 42 N on the end of a door 96 cm wide. What is the magnitude of the torque if the force is exerted $(a)$ perpendicular to the door and $(b)$ at a 60.0$^{\circ}$ angle to the face of the door?
Problem 27
(II) Two blocks, each of mass $m$, are attached to the ends of a massless rod which pivots as shown in Fig. 8-43. Initially the rod is held in the horizontal position and then released. Calculate the magnitude and direction of the net torque on this system when it is first released.
Problem 28
(II) The bolts on the cylinder head of an engine require tightening to a torque of 95 m$\cdot$N. If a wrench is 28 cm long, what force perpendicular to the wrench must the mechanic exert at its end? If the six-sided bolt head is 15 mm across (Fig. 8-44), estimate the force applied near each of the six points by a wrench.
Problem 29
(II) Determine the net torque on the 2.0-m-long uniform beam shown in Fig. 8-45. All forces are shown. Calculate about $(a)$ point C, the CM, and $(b)$ point P at one end.
Problem 30
(I) Determine the moment of inertia of a 10.8-kg sphere of radius 0.648 m when the axis of rotation is through its center.
Problem 31
(I) Estimate the moment of inertia of a bicycle wheel 67 cm in diameter. The rim and tire have a combined mass of 1.1 kg. The mass of the hub (at the center) can be ignored (why?).
Problem 32
(II) A merry-go-round accelerates from rest to 0.68 rad/s in 34 s. Assuming the merry-go-round is a uniform disk of radius 7.0 m and mass 31,000 kg, calculate the net torque required to accelerate it.
Problem 33
(II) An oxygen molecule consists of two oxygen atoms whose total mass is $5.3 \times 10^{-26} kg$ and whose moment of inertia about an axis perpendicular to the line joining the two atoms, midway between them, is $1.9 \times 10^{-46} kg\cdot m^2$. From these data, estimate the effective distance between the atoms.
Problem 34
(II) A grinding wheel is a uniform cylinder with a radius of 8.50 cm and a mass of 0.380 kg. Calculate $(a)$ its moment of inertia about its center, and $(b)$ the applied torque needed to accelerate it from rest to 1750 rpm in 5.00 s. Take into account a frictional torque that has been measured to slow down the wheel from 1500 rpm to rest in 55.0 s.
Problem 35
(II) The forearm in Fig. 8-46 accelerates a 3.6-kg ball at $7.0 m/s^2$ by means of the triceps muscle, as shown. Calculate $(a)$ the torque needed, and $(b)$ the force that must be exerted by the triceps muscle. Ignore the mass of the arm.
Problem 36
(II) Assume that a 1.00-kg ball is thrown solely by the action of the forearm, which rotates about the elbow joint under the action of the triceps muscle, Fig. 8-46. The ball is accelerated uniformly from rest to 8.5 m/s in 0.38 s, at which point it is released. Calculate $(a)$ the angular acceleration of the arm, and $(b)$ the force required of the triceps muscle. Assume that the forearm has a mass of 3.7 kg and rotates like a uniform rod about an axis at its end.
Problem 37
(II) A softball player swings a bat, accelerating it from rest to 2.6 rev/s in a time of 0.20 s. Approximate the bat as a 0.90-kg uniform rod of length 0.95 m, and compute the torque the player applies to one end of it.
Problem 38
(II) A small 350-gram ball on the end of a thin, light rod is rotated in a horizontal circle of radius 1.2 m. Calculate $(a)$ the moment of inertia of the ball about the center of the circle, and $(b)$ the torque needed to keep the ball rotating at constant angular velocity if air resistance exerts a force of 0.020 N on the ball. Ignore air resistance on the rod and its moment of inertia.
Problem 39
(II) Calculate the moment of inertia of the array of point objects shown in Fig. 8-47 about $(a)$ the $y$ axis, and $(b)$ the $x$ axis. Assume $m = 2.2 kg$, $M = 3.4 kg$, and the objects are wired together by very light, rigid pieces of wire. The array is rectangular and is split through the middle by the $x$ axis. $(c)$ About which axis would it be harder to accelerate this array?
Problem 40
(II) A potter is shaping a bowl on a potter's wheel rotating at constant angular velocity of 1.6 rev/s (Fig. 8-48). The friction force between her hands and the clay is 1.5 N total. $(a)$ How large is her torque on the wheel, if the diameter of the bowl is 9.0 cm? $(b)$ How long would it take for the potter's wheel to stop if the only torque acting on it is due to the potter's hands? The moment of inertia of the wheel and the bowl is $0.11 kg\cdot m^2$.
Problem 41
(II) A dad pushes tangentially on a small hand-driven merry-go-round and is able to accelerate it from rest to a frequency of 15 rpm in 10.0 s. Assume the merry-go-round is a uniform disk of radius 2.5 m and has a mass of 560 kg, and two children (each with a mass of 25 kg) sit opposite each other on the edge. Calculate the torque required to produce the acceleration, neglecting frictional torque. What force is required at the edge?
Problem 42
(II) A 0.72-m-diameter solid sphere can be rotated about an axis through its center by a torque of 10.8 m$\cdot$N which accelerates it uniformly from rest through a total of 160 revolutions in 15.0 s. What is the mass of the sphere?
Problem 43
(II) Let us treat a helicopter rotor blade as a long thin rod, as shown in Fig. 8-49. $(a)$ If each of the three rotor helicopter blades is 3.75 m long and has a mass of 135 kg, calculate the moment of inertia of the three rotor blades about the axis of rotation. $(b)$ How much torque must the motor apply to bring the blades from rest up to a speed of 6.0 rev/s in 8.0 s?
Problem 44
(II) A centrifuge rotor rotating at 9200 rpm is shut off and is eventually brought uniformly to rest by a frictional torque of 1.20 m$\cdot$N. If the mass of the rotor is 3.10 kg and it can be approximated as a solid cylinder of radius 0.0710 m, through how many revolutions will the rotor turn before coming to rest, and how long will it take?
Problem 45
(II) To get a flat, uniform cylindrical satellite spinning at the correct rate, engineers fire four tangential rockets as shown in Fig. 8-50. Suppose that the satellite has a mass of 3600 kg and a radius of 4.0 m, and that the rockets each add a mass of 250 kg. What is the steady force required of each rocket if the satellite is to reach 32 rpm in 5.0 min, starting from rest?
Problem 46
(III) Two blocks are connected by a light string passing over a pulley of radius 0.15 m and moment of inertia $I$. The blocks move (towards the right) with an acceleration of $1.00 m/s^2$ along their frictionless inclines (see Fig. 8-51). $(a)$ Draw free-body diagrams for each of the two blocks and the pulley. $(b)$ Determine $F_{TA}$ and $F_{TB}$, the tensions in the two parts of the string. $(c)$ Find the net torque acting on the pulley, and determine its moment of inertia, $I$.
Problem 47
(III) An Atwood machine consists of two masses, $m_A = 65 kg$ and $m_B = 75 kg$, connected by a massless inelastic cord that passes over a pulley free to rotate, Fig. 8-52. The pulley is a solid cylinder of radius $R = 0.45 m$ and mass 6.0 kg. $(a)$ Determine the acceleration of each mass. (b) What % error would be made if the moment of inertia of the pulley is ignored? [Hint: The tensions $F_{TA}$ and $F_{TB}$ are not equal. We discussed the Atwood machine in Example 4-13, assuming $I = 0$ for the pulley.]
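A sketch of part $(a)$ under the standard assumptions (massless inextensible cord, no slipping, solid-cylinder pulley so $I = \frac{1}{2}MR^2$ and $I/R^2 = \frac{1}{2}M$):
$$a = \frac{(m_B - m_A)\,g}{m_A + m_B + I/R^2} = \frac{(75 - 65)(9.80)}{65 + 75 + 3.0} \approx 0.69\ \mathrm{m/s^2},$$
versus $98/140 = 0.70\ \mathrm{m/s^2}$ with the pulley's inertia ignored, a difference of roughly 2%.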
Problem 48
(III) A hammer thrower accelerates the hammer $(mass = 7.30 kg)$ from rest within four full turns (revolutions) and releases it at a speed of 26.5 m/s. Assuming a uniform rate of increase in angular velocity and a horizontal circular path of radius 1.20 m, calculate $(a)$ the angular acceleration, $(b)$ the (linear) tangential acceleration, $(c)$ the centripetal acceleration just before release, $(d)$ the net force being exerted on the hammer by the athlete just before release, and $(e)$ the angle of this force with respect to the radius of the circular motion. Ignore gravity.
Problem 49
(I) An automobile engine develops a torque of 265 m$\cdot$N at 3350 rpm. What is the horsepower of the engine?
Problem 50
(I) A centrifuge rotor has a moment of inertia of $3.25 \times 10^{-2} kg\cdot m^2$. How much energy is required to bring it from rest to 8750 rpm?
Problem 51
(I) Calculate the translational speed of a cylinder when it reaches the foot of an incline 7.20 m high. Assume it starts from rest and rolls without slipping.
Problem 52
(II) A bowling ball of mass 7.25 kg and radius 10.8 cm rolls without slipping down a lane at 3.10 m/s. Calculate its total kinetic energy.
Problem 53
(II) Estimate the kinetic energy of the Earth with respect to the Sun as the sum of two terms, $(a)$ that due to its daily rotation about its axis, and $(b)$ that due to its yearly revolution about the Sun. [Assume the Earth is a uniform sphere with $mass = 6.0 \times 10^{24} kg$, $radius = 6.4 \times 10^6 m$, and is $1.5 \times 10^8 km$ from the Sun.]
Problem 54
(II) A rotating uniform cylindrical platform of mass 220 kg and radius 5.5 m slows down from 3.8 rev/s to rest in 16 s when the driving motor is disconnected. Estimate the power output of the motor (hp) required to maintain a steady speed of 3.8 rev/s.
Problem 55
(II) A merry-go-round has a mass of 1440 kg and a radius of 7.50 m. How much net work is required to accelerate it from rest to a rotation rate of 1.00 revolution per 7.00 s? Assume it is a solid cylinder.
Problem 56
(II) A sphere of radius $r = 34.5 cm$ and mass $m = 1.80 kg$ starts from rest and rolls without slipping down a 30.0$^{\circ}$ incline that is 10.0 m long. $(a)$ Calculate its translational and rotational speeds when it reaches the bottom. $(b)$ What is the ratio of translational to rotational kinetic energy at the bottom? Avoid putting in numbers until the end so you can answer: $(c)$ do your answers in $(a)$ and $(b)$ depend on the radius of the sphere or its mass?
Problem 57
(II) A ball of radius r rolls on the inside of a track of radius R (see Fig. 8-53). If the ball starts from rest at the vertical edge of the track, what will be its speed when it reaches the lowest point of the track, rolling without slipping?
Problem 58
(II) Two masses, $m_A = 32.0 kg$ and $m_B = 38.0 kg$, are connected by a rope that hangs over a pulley (as in Fig. 8-54). The pulley is a uniform cylinder of radius $R = 0.311 m$ and mass 3.1 kg. Initially $m_A$ is on the ground and $m_B$ rests 2.5 m above the ground. If the system is released, use conservation of energy to determine the speed of $m_B$ just before it strikes the ground. Assume the pulley bearing is frictionless.
Problem 59
(III) A 1.80-m-long pole is balanced vertically with its tip on the ground. It starts to fall and its lower end does not slip. What will be the speed of the upper end of the pole just before it hits the ground? [$Hint$: Use conservation of energy.]
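A sketch of the energy-conservation route (my outline, treating the pole as a uniform rod of length $\ell$ pivoting about its tip, with $I = \frac{1}{3}m\ell^2$ and the centre of mass falling $\ell/2$):
$$mg\,\frac{\ell}{2} = \frac{1}{2}\left(\frac{1}{3}m\ell^2\right)\omega^2 \;\Rightarrow\; v_{\text{top}} = \omega\ell = \sqrt{3g\ell} = \sqrt{3(9.80)(1.80)} \approx 7.3\ \mathrm{m/s}.$$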
Problem 60
(I) What is the angular momentum of a 0.270-kg ball revolving on the end of a thin string in a circle of radius 1.35 m at an angular speed of 10.4 rad/s?
Problem 61
(I) $(a)$ What is the angular momentum of a 2.8-kg uniform cylindrical grinding wheel of radius 28 cm when rotating at 1300 rpm? $(b)$ How much torque is required to stop it in 6.0 s?
Problem 62
(II) A person stands, hands at his side, on a platform that is rotating at a rate of 0.90 rev/s. If he raises his arms to a horizontal position, Fig. 8-55, the speed of rotation decreases to 0.60 rev/s. $(a)$ Why? $(b)$ By what factor has his moment of inertia changed?
Problem 63
(II) A nonrotating cylindrical disk of moment of inertia $I$ is dropped onto an identical disk rotating at angular speed $\omega$. Assuming no external torques, what is the final common angular speed of the two disks?
Problem 64
(II) A diver (such as the one shown in Fig. 8-28) can reduce her moment of inertia by a factor of about 3.5 when changing from the straight position to the tuck position. If she makes 2.0 rotations in 1.5 s when in the tuck position, what is her angular speed (rev/s) when in the straight position?
Problem 65
(II) A figure skater can increase her spin rotation rate from an initial rate of 1.0 rev every 1.5 s to a final rate of 2.5 rev/s. If her initial moment of inertia was $4.6 kg\cdot m^2$, what is her final moment of inertia? How does she physically accomplish this change?
Problem 66
(II) $(a)$ What is the angular momentum of a figure skater spinning at 3.0 rev/s with arms in close to her body, assuming her to be a uniform cylinder with a height of 1.5 m, a radius of 15 cm, and a mass of 48 kg? $(b)$ How much torque is required to slow her to a stop in 4.0 s, assuming she does $not$ move her arms?
Problem 67
(II) A person of mass 75 kg stands at the center of a rotating merry-go-round platform of radius 3.0 m and moment of inertia $820 kg\cdot m^2$. The platform rotates without friction with angular velocity 0.95 rad/s. The person walks radially to the edge of the platform. $(a)$ Calculate the angular velocity when the person reaches the edge. $(b)$ Calculate the rotational kinetic energy of the system of platform plus person before and after the person's walk.
Problem 68
(II) A potter's wheel is rotating around a vertical axis through its center at a frequency of 1.5 rev/s. The wheel can be considered a uniform disk of mass 5.0 kg and diameter 0.40 m. The potter then throws a 2.6-kg chunk of clay, approximately shaped as a flat disk of radius 7.0 cm, onto the center of the rotating wheel. What is the frequency of the wheel after the clay sticks to it? Ignore friction.
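A sketch of the angular-momentum bookkeeping, modelling both wheel and clay as uniform disks with $I = \frac{1}{2}MR^2$:
$$f' = f\,\frac{I_{wheel}}{I_{wheel} + I_{clay}} = (1.5 rev/s)\,\frac{\frac{1}{2}(5.0 kg)(0.20 m)^2}{\frac{1}{2}(5.0 kg)(0.20 m)^2 + \frac{1}{2}(2.6 kg)(0.070 m)^2} \approx 1.4 rev/s$$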
Problem 69
(II) A 4.2-m-diameter merry-go-round is rotating freely with an angular velocity of 0.80 rad/s. Its total moment of inertia is $1360 kg\cdot m^2$. Four people standing on the ground, each of mass 65 kg, suddenly step onto the edge of the merry-go-round. $(a)$ What is the angular velocity of the merry-go-round now? $(b)$ What if the people were on it initially and then jumped off in a radial direction (relative to the merry-go-round)?
Problem 70
(II) A uniform horizontal rod of mass $M$ and length $\ell$ rotates with angular velocity $\omega$ about a vertical axis through its center. Attached to each end of the rod is a small mass $m$. Determine the angular momentum of the system about the axis.
Problem 71
(II) Suppose our Sun eventually collapses into a white dwarf, losing about half its mass in the process, and winding up with a radius 1.0% of its existing radius. Assuming the lost mass carries away no angular momentum, $(a)$ what would the Sun's new rotation rate be? Take the Sun's current period to be about 30 days. $(b)$ What would be its final kinetic energy in terms of its initial kinetic energy of today?
Problem 72
(II) A uniform disk turns at 3.3 rev/s around a frictionless central axis. A nonrotating rod, of the same mass as the disk and length equal to the disk's diameter, is dropped onto the freely spinning disk, Fig. 8-56. They then turn together around the axis with their centers superposed. What is the angular frequency in rev/s of the combination?
Problem 73
(III) An asteroid of mass $1.0 \times 10^5 kg$, traveling at a speed of 35 km/s relative to the Earth, hits the Earth at the equator tangentially, in the direction of Earth's rotation, and is embedded there. Use angular momentum to estimate the percent change in the angular speed of the Earth as a result of the collision.
Problem 74
(III) Suppose a 65-kg person stands at the edge of a 5.5-m diameter merry-go-round turntable that is mounted on frictionless bearings and has a moment of inertia of $1850 kg\cdot m^2$. The turntable is at rest initially, but when the person begins running at a speed of 4.0 m/s (with respect to the turntable) around its edge, the turntable begins to rotate in the opposite direction. Calculate the angular velocity of the turntable.
Problem 75
A merry-go-round with a moment of inertia equal to $1260 kg\cdot m^2$ and a radius of 2.5 m rotates with negligible friction at 1.70 rad/s. A child initially standing still next to the merry-go-round jumps onto the edge of the platform straight toward the axis of rotation, causing the platform to slow to 1.35 rad/s. What is her mass?
Problem 76
A 1.6-kg grindstone in the shape of a uniform cylinder of radius 0.20 m acquires a rotational rate of 24 rev/s from rest over a 6.0-s interval at constant angular acceleration. Calculate the torque delivered by the motor.
Problem 77
On a 12.0-cm-diameter audio compact disc (CD), digital bits of information are encoded sequentially along an outward spiraling path. The spiral starts at radius $R_1 = 2.5 cm$ and winds its way out to radius $R_2 = 5.8 cm$. To read the digital information, a CD player rotates the CD so that the player's readout laser scans along the spiral's sequence of bits at a constant linear speed of 1.25 m/s. Thus the player must accurately adjust the rotational frequency $f$ of the CD as the laser moves outward. Determine the values for $f$ (in units of rpm) when the laser is located at $R_1$ and when it is at $R_2$.
Problem 78
$(a)$ A yo-yo is made of two solid cylindrical disks, each of mass 0.050 kg and diameter 0.075 m, joined by a (concentric) thin solid cylindrical hub of mass 0.0050 kg and diameter 0.013 m. Use conservation of energy to calculate the linear speed of the yo-yo just before it reaches the end of its 1.0-m-long string, if it is released from rest. $(b)$ What fraction of its kinetic energy is rotational?
Problem 79
A cyclist accelerates from rest at a rate of $1.00 m/s^2$. How fast will a point at the top of the rim of the tire $(diameter = 68.0 cm)$ be moving after 2.25 s? [$Hint$: At any moment, the lowest point on the tire is in contact with the ground and is at rest; see Fig. 8-57.]
Problem 80
Suppose David puts a 0.60-kg rock into a sling of length 1.5 m and begins whirling the rock in a nearly horizontal circle, accelerating it from rest to a rate of 75 rpm after 5.0 s. What is the torque required to achieve this feat, and where does the torque come from?
Problem 81
$\textbf{Bicycle gears:}$ $(a)$ How is the angular velocity $\omega_R$ of the rear wheel of a bicycle related to the angular velocity $\omega_F$ of the front sprocket and pedals? Let $N_F$ and $N_R$ be the number of teeth on the front and rear sprockets, respectively, Fig. 8-58. The teeth are spaced the same on both sprockets and the rear sprocket is firmly attached to the rear wheel. $(b)$ Evaluate the ratio $\omega_R/\omega_F$ when the front and rear sprockets have 52 and 13 teeth, respectively, and $(c)$ when they have 42 and 28 teeth.
Problem 82
Figure 8-59 illustrates an $H_2O$ molecule. The $O-H$ bond length is 0.096 nm and the $H-O-H$ bonds make an angle of 104$^{\circ}$. Calculate the moment of inertia of the $H_2O$ molecule (assume the atoms are points) about an axis passing through the center of the oxygen atom $(a)$ perpendicular to the plane of the molecule, and $(b)$ in the plane of the molecule, bisecting the $H-O-H$ bonds.
Problem 83
A hollow cylinder (hoop) is rolling on a horizontal surface at speed $\upsilon = 3.0 m/s$ when it reaches a 15$^{\circ}$ incline. $(a)$ How far up the incline will it go? $(b)$ How long will it be on the incline before it arrives back at the bottom?
Problem 84
Determine the angular momentum of the Earth $(a)$ about its rotation axis (assume the Earth is a uniform sphere), and $(b)$ in its orbit around the Sun (treat the Earth as a particle orbiting the Sun).
Problem 85
A wheel of mass $M$ has radius $R$. It is standing vertically on the floor, and we want to exert a horizontal force $F$ at its axle so that it will climb a step against which it rests (Fig. 8-60). The step has height $h$, where $h < R$. What minimum force $F$ is needed?
Problem 86
If the coefficient of static friction between a car's tires and the pavement is 0.65, calculate the minimum torque that must be applied to the 66-cm-diameter tire of a 1080-kg automobile in order to "lay rubber" (make the wheels spin, slipping as the car accelerates). Assume each wheel supports an equal share of the weight.
Problem 87
A 4.00-kg mass and a 3.00-kg mass are attached to opposite ends of a very light 42.0-cm-long horizontal rod (Fig. 8-61). The system is rotating at angular speed $\omega = 5.60$ rad/s about a vertical axle at the center of the rod. Determine $(a)$ the kinetic energy KE of the system, and $(b)$ the net force on each mass.
Problem 88
A small mass $m$ attached to the end of a string revolves in a circle on a frictionless tabletop. The other end of the string passes through a hole in the table (Fig. 8-62). Initially, the mass revolves with a speed $\upsilon_1 = 2.4 m/s$ in a circle of radius $r_1 = 0.80 m$. The string is then pulled slowly through the hole so that the radius is reduced to $r_2 = 0.48 m$. What is the speed, $\upsilon_2$ , of the mass now?
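Because the string pulls along the radius it exerts no torque about the hole, so angular momentum is conserved; in one line:
$$m\upsilon_1 r_1 = m\upsilon_2 r_2 \;\Rightarrow\; \upsilon_2 = \upsilon_1\frac{r_1}{r_2} = (2.4 m/s)\frac{0.80 m}{0.48 m} = 4.0 m/s$$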
Problem 89
A uniform rod of mass $M$ and length $\ell$ can pivot freely (i.e., we ignore friction) about a hinge attached to a wall, as in Fig. 8-63. The rod is held horizontally and then released. At the moment of release, determine $(a)$ the angular acceleration of the rod, and $(b)$ the linear acceleration of the tip of the rod. Assume that the force of gravity acts at the center of mass of the rod, as shown. [$Hint$: See Fig. 8-20g.]
Problem 90
Suppose a star the size of our Sun, but with mass 8.0 times as great, were rotating at a speed of 1.0 revolution every 9.0 days. If it were to undergo gravitational collapse to a neutron star of radius 12 km, losing $\frac{3}{4}$ of its mass in the process, what would its rotation speed be? Assume the star is a uniform sphere at all times. Assume also that the thrownoff mass carries off either $(a)$ no angular momentum, or $(b)$ its proportional share $(\frac{3}{4})$ of the initial angular momentum.
Problem 91
A large spool of rope rolls on the ground with the end of the rope lying on the top edge of the spool. A person grabs the end of the rope and walks a distance $\ell$, holding onto it, Fig. 8-64. The spool rolls behind the person without slipping. What length of rope unwinds from the spool? How far does the spool's center of mass move?
Problem 92
The Moon orbits the Earth such that the same side always faces the Earth. Determine the ratio of the Moon's spin angular momentum (about its own axis) to its orbital angular momentum. (In the latter case, treat the Moon as a particle orbiting the Earth.)
Problem 93
A spherical asteroid with radius $r = 123 m$ and mass $M = 2.25 \times 10^{10} kg$ rotates about an axis at four revolutions per day. A "tug" spaceship attaches itself to the asteroid's south pole (as defined by the axis of rotation) and fires its engine, applying a force $F$ tangentially to the asteroid's surface as shown in Fig. 8-65. If $F = 285 N$, how long will it take the tug to rotate the asteroid's axis of rotation through an angle of 5.0$^{\circ}$ by this method?
Problem 94
Most of our Solar System's mass is contained in the Sun, and the planets possess almost all of the Solar System's angular momentum. This observation plays a key role in theories attempting to explain the formation of our Solar System. Estimate the fraction of the Solar System's total angular momentum that is possessed by planets using a simplified model which includes only the large outer planets with the most angular momentum. The central Sun (mass $1.99 \times 10^{30} kg$, radius $6.96 \times 10^8 m$) spins about its axis once every 25 days and the planets Jupiter, Saturn, Uranus, and Neptune move in nearly circular orbits around the Sun with orbital data given in the Table below. Ignore each planet's spin about its own axis.
Problem 95
Water drives a waterwheel (or turbine) of radius $R = 3.0 m$ as shown in Fig. 8-66. The water enters at a speed $\upsilon_1 = 7.0 m/s$ and exits from the waterwheel at a speed $\upsilon_2 = 3.8 m/s$. $(a)$ If 85 kg of water passes through per second, what is the rate at which the water delivers angular momentum to the waterwheel? $(b)$ What is the torque the water applies to the waterwheel? $(c)$ If the water causes the waterwheel to make one revolution every 5.5 s, how much power is delivered to the wheel?
Problem 96
The radius of the roll of paper shown in Fig. 8-67 is 7.6 cm and its moment of inertia is $I = 3.3 \times 10^{-3} kg\cdot m^2$. A force of 3.5 N is exerted on the end of the roll for 1.3 s, but the paper does not tear so it begins to unroll. A constant friction torque of $0.11 m\cdot N$ is exerted on the roll which gradually brings it to a stop. Assuming that the paper's thickness is negligible, calculate $(a)$ the length of paper that unrolls during the time that the force is applied (1.3 s) and $(b)$ the length of paper that unrolls from the time the force ends to the time when the roll has stopped moving.
# Determine the total electrostatic potential energy
## Homework Statement
Determine the total electrostatic potential energy of a nonconducting sphere of radius $r_0$ carrying a total charge $Q$ distributed uniformly throughout its volume.
## Homework Equations

U = qV
## The Attempt at a Solution
$$\rho = \frac{Q}{\frac{4}{3}\pi r_0^3} = \frac{dq}{4\pi r^2\,dr}$$
Potential inside a nonconducting sphere:
$$V = \frac{kQ}{2r_0}(3-\frac{r^2}{r_0^2})$$
$$dU = Vdq$$
$$= \frac{kQ}{2r_0} (3 - \frac{r^2}{r_0^2}) \rho4\pi r^2 dr$$
$$= \frac{2kQ\pi\rho}{r_0}(3r^2 -\frac{r^4}{r_0^2} )dr$$
After integrating from zero to $r_0$, I end up with the following result:
$$\frac{3Q^2}{10\pi r_0 \epsilon_0}$$
But the correct answer according to the textbook is this.
$$\frac{3Q^2}{20\pi r_0 \epsilon_0}$$
It's almost the same result, but it's missing a factor of two in the denominator. Is it because of the potential equation? The solutions manual uses the potential at the surface, ##kQ/r_0##, while I use the potential inside a nonconductor, ##\frac{kQ}{2r_0}(3 - \frac{r^2}{r_0^2})##. Could that be the source of the missing factor?
TSny
Homework Helper
Gold Member
This is a common mistake of "double counting". Suppose you had a system of 3 point charges. It is tempting to calculate the energy as ##U = \sum_{i=1}^3q_iV_i## where ##V_i## is the potential at the location of the ith charge due to the other two charges. But if you write out the terms explicitly you will see that you are counting the interaction of each pair of charges twice.
I can see that we've double counted the pairs in your example. But where exactly am I doing that in my solution?
TSny
Homework Helper
Gold Member
In the integral ##\int V dq##, ##V## represents the potential of the entire system at the point where ##dq## is located. So, consider two particular elements of charge ##dq_1## and ##dq_2##. The integral will include contributions ##V_1 dq_1## and ##V_2 dq_2##, where ##V_1## is the potential of the entire system at the location of ##dq_1## and ##V_2## is the potential of the entire system at ##dq_2##.
Note that ##V_1## will contain a contribution from ##dq_2## of the form ##\frac{k\; dq_2}{r_{12}}## where ##r_{12}## is the distance between ##dq_1## and ##dq_2##. So, the expression ##V_1 dq_1## contains a contribution of the form ##\frac{k\; dq_2\; dq_1}{r_{12}}##, which is the potential energy of interaction between ##dq_1## and ##dq_2##. But, by the same reasoning, you can see that ##V_2 dq_2## contains the same contribution again. So, the interaction between ##dq_1## and ##dq_2## is being counted twice.
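Concretely, dividing the original integral by two so that each pair ##dq_1##, ##dq_2## is counted once (a sketch of the corrected bookkeeping):
$$U = \frac{1}{2}\int V\,dq = \frac{1}{2}\cdot\frac{3Q^2}{10\pi \epsilon_0 r_0} = \frac{3Q^2}{20\pi \epsilon_0 r_0}$$
which matches the textbook answer. The surface-potential route in the solutions manual presumably builds the sphere up shell by shell, so each pairwise interaction is counted only once and no extra factor is needed.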
# 11.7: Solving Systems with Gaussian Elimination
Skills to Develop
• Write the augmented matrix of a system of equations.
• Write the system of equations from an augmented matrix.
• Perform row operations on a matrix.
• Solve a system of linear equations using matrices.
Carl Friedrich Gauss lived during the late $$18^{th}$$ century and early $$19^{th}$$ century, but he is still considered one of the most prolific mathematicians in history. His contributions to the science of mathematics and physics span fields such as algebra, number theory, analysis, differential geometry, astronomy, and optics, among others. His discoveries regarding matrix theory changed the way mathematicians have worked for the last two centuries.
Figure $$\PageIndex{1}$$: German mathematician Carl Friedrich Gauss (1777–1855).
We first encountered Gaussian elimination in Systems of Linear Equations: Two Variables. In this section, we will revisit this technique for solving systems, this time using matrices.
### Writing the Augmented Matrix of a System of Equations
A matrix can serve as a device for representing and solving a system of equations. To express a system in matrix form, we extract the coefficients of the variables and the constants, and these become the entries of the matrix. We use a vertical line to separate the coefficient entries from the constants, essentially replacing the equal signs. When a system is written in this form, we call it an augmented matrix.
For example, consider the following $$2 × 2$$ system of equations.
\begin{align*} 3x+4y&= 7\\ 4x-2y&= 5 \end{align*}
We can write this system as an augmented matrix:
$$\left[ \begin{array}{cc|c} 3&4&7\\4&-2&5\end{array} \right]$$
We can also write a matrix containing just the coefficients. This is called the coefficient matrix.
$$\begin{bmatrix}3&4\\4&−2\end{bmatrix}$$
A three-by-three system of equations such as
\begin{align*} 3x-y-z&= 0\\ x+y&= 5\\ 2x-3z&= 2 \end{align*}
has a coefficient matrix
$$\begin{bmatrix}3&−1&−1\\1&1&0\\2&0&−3\end{bmatrix}$$
and is represented by the augmented matrix
$$\left[ \begin{array}{ccc|c}3&−1&−1&0\\1&1&0&5\\2&0&−3&2\end{array} \right]$$
Notice that the matrix is written so that the variables line up in their own columns: $$x$$-terms go in the first column, $$y$$-terms in the second column, and $$z$$-terms in the third column. It is very important that each equation is written in standard form $$ax+by+cz=d$$ so that the variables line up. When there is a missing variable term in an equation, the coefficient is $$0$$.
How to: Given a system of equations, write an augmented matrix
1. Write the coefficients of the $$x$$-terms as the numbers down the first column.
2. Write the coefficients of the $$y$$-terms as the numbers down the second column.
3. If there are $$z$$-terms, write the coefficients as the numbers down the third column.
4. Draw a vertical line and write the constants to the right of the line.
Example $$\PageIndex{1}$$: Writing the Augmented Matrix for a System of Equations
Write the augmented matrix for the given system of equations.
\begin{align*} x+2y-z&= 3\\ 2x-y+2z&= 6\\ x-3y+3z&= 4 \end{align*}
Solution
The augmented matrix displays the coefficients of the variables, and an additional column for the constants.
$$\left[ \begin{array}{ccc|c}1&2&−1&3\\2&−1&2&6\\1&−3&3&4\end{array} \right]$$
Exercise $$\PageIndex{1}$$
Write the augmented matrix of the given system of equations.
\begin{align*} 4x-3y&= 11\\ 3x+2y&= 4 \end{align*}
$$\left[ \begin{array}{cc|c} 4&−3&11\\3&2&4\end{array} \right]$$
### Writing a System of Equations from an Augmented Matrix
We can use augmented matrices to help us solve systems of equations because they simplify operations when the systems are not encumbered by the variables. However, it is important to understand how to move back and forth between formats in order to make finding solutions smoother and more intuitive. Here, we will use the information in an augmented matrix to write the system of equations in standard form.
Example $$\PageIndex{2}$$: Writing a System of Equations from an Augmented Matrix Form
Find the system of equations from the augmented matrix.
$$\left[ \begin{array}{ccc|c}1&−3&−5&-2\\2&−5&−4&5\\−3&5&4&6 \end{array} \right]$$
Solution
When the columns represent the variables $$x$$, $$y$$, and $$z$$,
\left[ \begin{array}{ccc|c}1&-3&-5&-2\\2&-5&-4&5\\-3&5&4&6 \end{array} \right] \rightarrow \begin{align*} x-3y-5z&= -2\\ 2x-5y-4z&= 5\\ -3x+5y+4z&= 6 \end{align*}
Exercise $$\PageIndex{2}$$
Write the system of equations from the augmented matrix.
$$\left[ \begin{array}{ccc|c}1&−1& 1&5\\2&−1&3&1\\0&1&1&-9\end{array}\right]$$
\begin{align*} x-y+z&= 5\\ 2x-y+3z&= 1\\ y+z&= -9 \end{align*}
# Performing Row Operations on a Matrix
Now that we can write systems of equations in augmented matrix form, we will examine the various row operations that can be performed on a matrix, such as addition, multiplication by a constant, and interchanging rows.
Performing row operations on a matrix is the method we use for solving a system of equations. In order to solve the system of equations, we want to convert the matrix to row-echelon form, in which there are ones down the main diagonal from the upper left corner to the lower right corner, and zeros in every position below the main diagonal as shown.
Row-echelon form $$\begin{bmatrix}1&a&b\\0&1&d\\0&0&1\end{bmatrix}$$
We use row operations corresponding to equation operations to obtain a new matrix that is row-equivalent in a simpler form. Here are the guidelines to obtaining row-echelon form.
1. In any nonzero row, the first nonzero number is a $$1$$. It is called a leading $$1$$.
2. Any all-zero rows are placed at the bottom on the matrix.
3. Any leading $$1$$ is below and to the right of a previous leading $$1$$.
4. Any column containing a leading $$1$$ has zeros in all other positions in the column.
To solve a system of equations we can perform the following row operations to convert the coefficient matrix to row-echelon form and do back-substitution to find the solution.
1. Interchange rows. (Notation: $$R_i ↔ R_j$$)
2. Multiply a row by a constant. (Notation: $$cR_i$$)
3. Add the product of a row multiplied by a constant to another row. (Notation: $$R_i+cR_j$$)
Each of the row operations corresponds to the operations we have already learned to solve systems of equations in three variables. With these operations, there are some key moves that will quickly achieve the goal of writing a matrix in row-echelon form. To obtain a matrix in row-echelon form for finding solutions, we use Gaussian elimination, a method that uses row operations to obtain a $$1$$ as the first entry so that row $$1$$ can be used to convert the remaining rows.
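As a programmatic companion (a minimal sketch, not part of the original text; the helper names `swap_rows`, `scale_row`, and `add_multiple` are illustrative), the three row operations can be acted out directly on a NumPy augmented matrix:

```python
import numpy as np

def swap_rows(M, i, j):
    """R_i <-> R_j: interchange rows i and j in place."""
    M[[i, j]] = M[[j, i]]

def scale_row(M, i, c):
    """cR_i: multiply row i by a nonzero constant c."""
    M[i] *= c

def add_multiple(M, i, j, c):
    """R_i + cR_j: add c times row j to row i."""
    M[i] += c * M[j]

# Augmented matrix for 3x + 4y = 7, 4x - 2y = 5.
M = np.array([[3.0, 4.0, 7.0],
              [4.0, -2.0, 5.0]])
swap_rows(M, 0, 1)             # R1 <-> R2
scale_row(M, 0, 1/4)           # (1/4)R1
add_multiple(M, 1, 0, -3.0)    # R2 + (-3)R1
print(M)                       # [[1, -0.5, 1.25], [0, 5.5, 3.25]]
```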
GAUSSIAN ELIMINATION
The Gaussian elimination method refers to a strategy used to obtain the row-echelon form of a matrix. The goal is to write matrix $$A$$ with the number $$1$$ as the entry down the main diagonal and have all zeros below.
$$A=\begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix}\xrightarrow{After\space Gaussian\space elimination} A=\begin{bmatrix}1&b_{12}& b_{13}\\0&1&b_{23}\\0&0&1\end{bmatrix}$$
The first step of the Gaussian strategy includes obtaining a $$1$$ as the first entry, so that row $$1$$ may be used to alter the rows below.
How to: Given an augmented matrix, perform row operations to achieve row-echelon form
1. The first equation should have a leading coefficient of $$1$$. Interchange rows or multiply by a constant, if necessary.
2. Use row operations to obtain zeros down the first column below the first entry of $$1$$.
3. Use row operations to obtain a $$1$$ in row 2, column 2.
4. Use row operations to obtain zeros down column 2, below the entry of 1.
5. Use row operations to obtain a $$1$$ in row 3, column 3.
6. Continue this process for all rows until there is a $$1$$ in every entry down the main diagonal and there are only zeros below.
7. If any rows contain all zeros, place them at the bottom.

Example $$\PageIndex{3}$$: Solving a $$2×2$$ System by Gaussian Elimination
Solve the given system by Gaussian elimination.
\begin{align*} 2x+3y&= 6\\ x-y&= \dfrac{1}{2} \end{align*}
Solution
First, we write this as an augmented matrix.
$$\left[ \begin{array}{cc|c} 2&3&6\\1&−1&\dfrac{1}{2}\end{array} \right]$$
We want a $$1$$ in row 1, column 1. This can be accomplished by interchanging row 1 and row 2.
$$R_1\leftrightarrow R_2\rightarrow \left[ \begin{array}{cc|c} 1&−1&\dfrac{1}{2}\\2&3&6\end{array} \right]$$
We now have a $$1$$ as the first entry in row 1, column 1. Now let’s obtain a $$0$$ in row 2, column 1. This can be accomplished by multiplying row 1 by $$−2$$, and then adding the result to row 2.
$$-2R_1+R_2=R_2\rightarrow \left[ \begin{array}{cc|c} 1&−1&\dfrac{1}{2}\\0&5&5\end{array} \right]$$
We only have one more step, to multiply row 2 by $$\dfrac{1}{5}$$.
$$\dfrac{1}{5}R_2=R_2\rightarrow \left[ \begin{array}{cc|c} 1&−1&\dfrac{1}{2}\\0&1&1\end{array} \right]$$
Use back-substitution. The second row of the matrix represents $$y=1$$. Back-substitute $$y=1$$ into the first equation.
\begin{align*} x-(1)&= \dfrac{1}{2}\\ x&= \dfrac{3}{2} \end{align*}
The solution is the point $$\left(\dfrac{3}{2},1\right)$$.
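The same interchange, scale, eliminate, and back-substitute routine can be collected into one small function; a sketch (not part of the original text) under the assumption that the system has a unique solution, with partial pivoting standing in for the "interchange if necessary" step:

```python
import numpy as np

def solve_by_gaussian_elimination(aug):
    """Reduce an augmented matrix [A|b] to row-echelon form,
    then back-substitute. Assumes a unique solution."""
    M = aug.astype(float)   # work on a float copy
    n = len(M)
    for k in range(n):
        # Interchange so the largest available pivot sits on the diagonal.
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                 # leading 1 in row k
        for i in range(k + 1, n):
            M[i] -= M[i, k] * M[k]      # zeros below the leading 1
    # Back-substitution from the last row up.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = M[i, -1] - M[i, i + 1:n] @ x[i + 1:]
    return x

aug = np.array([[2, 3, 6],
                [1, -1, 0.5]])
print(solve_by_gaussian_elimination(aug))   # [1.5 1. ]
```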
Exercise $$\PageIndex{3}$$
Solve the given system by Gaussian elimination.
\begin{align*} 4x+3y&= 11\\ x-3y&= -1 \end{align*}
$$(2, 1)$$
Example $$\PageIndex{4}$$: Using Gaussian Elimination to Solve a System of Equations
Use Gaussian elimination to solve the given $$2 × 2$$ system of equations.
\begin{align*} 2x+y&= 1\\ 4x+2y&= 6 \end{align*}
Solution
Write the system as an augmented matrix.
$$\left[ \begin{array}{cc|c} 2&1&1\\4&2&6\end{array} \right]$$
Obtain a $$1$$ in row 1, column 1. This can be accomplished by multiplying the first row by $$\dfrac{1}{2}$$.
$$\dfrac{1}{2} R_1=R_1\rightarrow \left[ \begin{array}{cc|c} 1&\dfrac{1}{2}&\dfrac{1}{2}\\4&2&6\end{array} \right]$$
Next, we want a $$0$$ in row 2, column 1. Multiply row 1 by $$−4$$ and add row 1 to row 2.
$$-4R_1+R_2=R_2\rightarrow \left[ \begin{array}{cc|c} 1&\dfrac{1}{2}&\dfrac{1}{2}\\0&0&4\end{array} \right]$$
The second row represents the equation $$0=4$$. Therefore, the system is inconsistent and has no solution.
Example $$\PageIndex{5}$$: Solving a Dependent System
Solve the system of equations.
\begin{align*} 3x+4y&= 12\\ 6x+8y&= 24 \end{align*}
Solution
Perform row operations on the augmented matrix to try and achieve row-echelon form.
$$A=\left[ \begin{array}{cc|c} 3&4&12\\6&8&24\end{array} \right]$$
$$-\dfrac{1}{2}R_2+R_1=R_1\rightarrow \left[ \begin{array}{cc|c} 0&0&0\\6&8&24\end{array} \right]$$
$$R_1\leftrightarrow R_2\rightarrow\left[ \begin{array}{cc|c} 6&8&24\\0&0&0\end{array} \right]$$
The matrix ends up with all zeros in the last row: $$0y=0$$. Thus, there are an infinite number of solutions and the system is classified as dependent. To find the generic solution, return to one of the original equations and solve for $$y$$.
\begin{align*} 3x+4y&= 12\\ 4y&= 12-3x\\ y&= 3-\dfrac{3}{4}x \end{align*}
So the solution to this system is $$\left(x,3−\dfrac{3}{4}x\right)$$.
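As an illustrative aside, the two special cases in Examples $$\PageIndex{4}$$ and $$\PageIndex{5}$$ can also be recognized mechanically; a short SymPy sketch (`rref` computes the reduced row-echelon form with exact arithmetic):

```python
from sympy import Matrix

# Inconsistent system: 2x + y = 1, 4x + 2y = 6.
print(Matrix([[2, 1, 1], [4, 2, 6]]).rref()[0])
# Matrix([[1, 1/2, 0], [0, 0, 1]])  -> row [0 0 | 1] means 0 = 1: no solution

# Dependent system: 3x + 4y = 12, 6x + 8y = 24.
print(Matrix([[3, 4, 12], [6, 8, 24]]).rref()[0])
# Matrix([[1, 4/3, 4], [0, 0, 0]])  -> all-zero row: infinitely many solutions
```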
Example $$\PageIndex{6}$$: Performing Row Operations on a $$3×3$$ Augmented Matrix to Obtain Row-Echelon Form
Perform row operations on the given matrix to obtain row-echelon form.
$$\left[ \begin{array}{ccc|c} 1&-3&4&3\\2&-5&6&6\\-3&3&4&6\end{array} \right]$$
Solution
The first row already has a $$1$$ in row 1, column 1. The next step is to multiply row 1 by $$−2$$ and add it to row 2. Then replace row 2 with the result.
$$-2R_1+R_2=R_2 \left[ \begin{array}{ccc|c} 1&-3&4&3\\0&1&-2&0\\-3&3&4&6\end{array} \right]$$
Next, obtain a zero in row 3, column 1.
$$3R_1+R_3=R_3 \left[ \begin{array}{ccc|c} 1&-3&4&3\\0&1&-2&0\\0&-6&16&15\end{array} \right]$$
Next, obtain a zero in row 3, column 2.
$$6R_2+R_3=R_3 \left[ \begin{array}{ccc|c} 1&-3&4&3\\0&1&-2&0\\0&0&4&15\end{array} \right]$$
The last step is to obtain a 1 in row 3, column 3.
$$\dfrac{1}{4}R_3=R_3 \left[ \begin{array}{ccc|c} 1&-3&4&3\\0&1&-2&0\\0&0&1&\dfrac{15}{4}\end{array} \right]$$
Exercise $$\PageIndex{4}$$
Write the system of equations in row-echelon form.
\begin{align*} x−2y+3z &= 9 \\ −x+3y &= −4 \\ 2x−5y+5z &= 17 \end{align*}
$$\left[ \begin{array}{ccc|c} 1&-\dfrac{5}{2}&\dfrac{5}{2}&\dfrac{17}{2}\\0&1&5&9\\0&0&1&2\end{array} \right]$$
### Solving a System of Linear Equations Using Matrices
We have seen how to write a system of equations with an augmented matrix, and then how to use row operations and back-substitution to obtain row-echelon form. Now, we will take row-echelon form a step farther to solve a $$3$$ by $$3$$ system of linear equations. The general idea is to eliminate all but one variable using row operations and then back-substitute to solve for the other variables.
Example $$\PageIndex{7}$$: Solving a System of Linear Equations Using Matrices
Solve the system of linear equations using matrices.
\begin{align*} x-y+z&= 8\\ 2x+3y-z&= -2\\ 3x-2y-9z&= 9 \end{align*}
Solution
First, we write the augmented matrix.
$$\left[ \begin{array}{ccc|c} 1&-1&1&8\\2&3&-1&-2\\3&-2&-9&9\end{array} \right]$$
Next, we perform row operations to obtain row-echelon form.
$$−2R_1+R_2=R_2\rightarrow \left[ \begin{array}{ccc|c} 1&-1&1&8\\0&5&-3&-18\\3&-2&-9&9\end{array} \right]$$
$$−3R_1+R_3=R_3\rightarrow \left[ \begin{array}{ccc|c} 1&-1&1&8\\0&5&-3&-18\\0&1&-12&-15\end{array} \right]$$
The easiest way to obtain a $$1$$ in row 2 of column 1 is to interchange $$R_2$$ and $$R_3$$.
$$Interchange\space R_2\space and\space R_3\rightarrow\left[ \begin{array}{ccc|c} 1&-1&1&8\\0&1&-12&-15\\0&5&-3&-18\end{array} \right]$$
Then
$$−5R_2+R_3=R_3\rightarrow\left[ \begin{array}{ccc|c} 1&-1&1&8\\0&1&-12&-15\\0&0&57&57\end{array} \right]$$
$$−\dfrac{1}{57}R_3=R_3\rightarrow\left[ \begin{array}{ccc|c} 1&-1&1&8\\0&1&-12&-15\\0&0&1&1\end{array} \right]$$
The last matrix represents the equivalent system.
\begin{align*} x−y+z &= 8 \\ y−12z &= −15 \\ z &= 1 \end{align*}
Using back-substitution, we obtain the solution as $$(4,−3,1)$$.
Example $$\PageIndex{8}$$: Solving a Dependent System of Linear Equations Using Matrices
Solve the following system of linear equations using matrices.
\begin{align*} −x−2y+z &= −1 \\ 2x+3y &= 2 \\ y−2z &= 0 \end{align*}
Solution
Write the augmented matrix.
$$\left[ \begin{array}{ccc|c} -1&-2&1&-1\\2&3&0&2\\0&1&-2&0\end{array} \right]$$
First, multiply row 1 by $$−1$$ to get a $$1$$ in row 1, column 1. Then, perform row operations to obtain row-echelon form.
$$-R_1\rightarrow \left[ \begin{array}{ccc|c} 1&2&-1&1\\2&3&0&2\\0&1&-2&0\end{array} \right]$$
$$R_2\leftrightarrow R_3\rightarrow \left[ \begin{array}{ccc|c} 1&2&-1&1\\0&1&-2&0\\2&3&0&2\end{array} \right]$$
$$−2R_1+R_3=R_3\rightarrow \left[ \begin{array}{ccc|c} 1&2&-1&1\\0&1&-2&0\\0&-1&2&0\end{array} \right]$$
$$R_2+R_3=R_3\rightarrow \left[ \begin{array}{ccc|c} 1&2&-1&1\\0&1&-2&0\\0&0&0&0\end{array} \right]$$
The last matrix represents the following system.
\begin{align*} x+2y−z &= 1 \\ y−2z &= 0 \\ 0 &= 0 \end{align*}
We see by the identity $$0=0$$ that this is a dependent system with an infinite number of solutions. We then find the generic solution. By solving the second equation for $$y$$ and substituting it into the first equation we can solve for $$z$$ in terms of $$x$$.
\begin{align*} x+2y−z &= 1 \\ y &= 2z \\ x+2(2z)−z &= 1 \\ x+3z &= 1 \\ z &=\dfrac{1−x}{3} \end{align*}
Now we substitute the expression for $$z$$ into the second equation to solve for $$y$$ in terms of $$x$$.
\begin{align*} y−2z &= 0 \\ z &= \dfrac{1−x}{3} \\ y−2\left(\dfrac{1−x}{3}\right) &= 0 \\ y &= \dfrac{2−2x}{3} \end{align*}
The generic solution is $$\left(x,\dfrac{2−2x}{3},\dfrac{1−x}{3}\right)$$.
Exercise $$\PageIndex{5}$$
Solve the system using matrices.
\begin{align*} x+4y-z&= 4\\ 2x+5y+8z&= 1\\ 5x+3y-3z&= 1 \end{align*}
$$(1,1,1)$$
Q&A: Can any system of linear equations be solved by Gaussian elimination?
Yes, a system of linear equations of any size can be solved by Gaussian elimination.
How to: Given a system of equations, solve with matrices using a calculator
1. Save the augmented matrix as a matrix variable $$[A], [B], [C], ….$$
2. Use the ref( function in the calculator, calling up each matrix variable as needed.
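Without a graphing calculator, the same reduction can be reproduced in SymPy (a sketch; note that `rref` returns the *reduced* row-echelon form, so the solution can be read straight from the last column). The matrix below is the augmented matrix of the system in Example $$\PageIndex{9A}$$:

```python
from sympy import Matrix

A = Matrix([[5, 3, 9, -1],
            [-2, 3, -1, -2],
            [-1, -4, 5, 1]])

R, pivots = A.rref()   # exact rational arithmetic, no rounding
print(R)
# Matrix([[1, 0, 0, 61/187], [0, 1, 0, -92/187], [0, 0, 1, -24/187]])
```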
Example $$\PageIndex{9A}$$: Solving Systems of Equations with Matrices Using a Calculator
Solve the system of equations.
\begin{align*} 5x+3y+9z&= -1\\ -2x+3y-z&= -2\\ -x-4y+5z&= 1 \end{align*}
Solution
Write the augmented matrix for the system of equations.
$$\left[ \begin{array}{ccc|c} 5&3&9&-1\\-2&3&-1&-2\\-1&-4&5&1\end{array} \right]$$
On the matrix page of the calculator, enter the augmented matrix above as the matrix variable $$[A]$$.
$$[A]=\left[ \begin{array}{ccc|c} 5&3&9&-1\\-2&3&-1&-2\\-1&-4&5&1\end{array} \right]$$
Use the ref( function in the calculator, calling up the matrix variable $$[A]$$.
ref([A])
Evaluate
\begin{array}{cc} {\left[ \begin{array}{ccc|c} 1&\dfrac{3}{5}&\dfrac{9}{5}&-\dfrac{1}{5}\\0&1&\dfrac{13}{21}&-\dfrac{4}{7}\\0&0&1&-\dfrac{24}{187}\end{array} \right] \rightarrow} & {\begin{align*} x+\dfrac{3}{5}y+\dfrac{9}{5}z &= -\dfrac{1}{5} \\ y+\dfrac{13}{21}z &= -\dfrac{4}{7} \\ z &= -\dfrac{24}{187} \end{align*}} \end{array}
Using back-substitution, the solution is $$\left(\dfrac{61}{187},−\dfrac{92}{187},−\dfrac{24}{187}\right)$$.
Example $$\PageIndex{9B}$$: Applying $$2×2$$ Matrices to Finance
Carolyn invests a total of \$12,000 in two municipal bonds, one paying 10.5% interest and the other paying 12% interest. The annual interest earned on the two investments last year was \$1,335. How much was invested at each rate?
Solution
We have a system of two equations in two variables. Let $$x$$ be the amount invested at 10.5% interest and $$y$$ the amount invested at 12% interest.
\begin{align*} x+y&= 12,000\\ 0.105x+0.12y&= 1,335 \end{align*}
As a matrix, we have
$$\left[ \begin{array}{cc|c} 1&1&12,000\\0.105&0.12&1,335\end{array} \right]$$
Multiply row 1 by $$−0.105$$ and add the result to row 2.
$$\left[ \begin{array}{cc|c} 1&1&12,000\\0&0.015&75\end{array} \right]$$
Then,
\begin{align*} 0.015y &= 75 \\ y &= 5,000 \end{align*}
So \$12,000 − \$5,000 = \$7,000.

Thus, \$5,000 was invested at 12% interest and \$7,000 at 10.5% interest.
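As a quick numerical cross-check of this example (illustrative only), the same $$2 × 2$$ system can be handed to NumPy's built-in solver:

```python
import numpy as np

# x + y = 12000 and 0.105x + 0.12y = 1335
A = np.array([[1.0, 1.0],
              [0.105, 0.12]])
b = np.array([12000.0, 1335.0])
x, y = np.linalg.solve(A, b)
print(x, y)   # 7000.0 5000.0 -> $7,000 at 10.5% and $5,000 at 12%
```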
Example $$\PageIndex{10}$$: Applying $$3×3$$ Matrices to Finance
Ava invests a total of \$10,000 in three accounts, one paying 5% interest, another paying 8% interest, and the third paying 9% interest. The annual interest earned on the three investments last year was \$770. The amount invested at 9% was twice the amount invested at 5%. How much was invested at each rate?
Solution
We have a system of three equations in three variables. Let $$x$$ be the amount invested at 5% interest, let $$y$$ be the amount invested at 8% interest, and let $$z$$ be the amount invested at 9% interest. Thus,
\begin{align*} x+y+z &= 10,000 \\ 0.05x+0.08y+0.09z &= 770 \\ 2x−z &= 0 \end{align*}
As a matrix, we have
$$\left[ \begin{array}{ccc|c} 1&1&1&10,000\\0.05&0.08&0.09&770\\2&0&-1&0\end{array} \right]$$
Now, we perform Gaussian elimination to achieve row-echelon form.
$$−0.05R_1+R_2=R_2\rightarrow \left[ \begin{array}{ccc|c} 1&1&1&10,000\\0&0.03&0.04&270\\2&0&-1&0\end{array} \right]$$
$$−2R_1+R_3=R_3\rightarrow \left[ \begin{array}{ccc|c} 1&1&1&10,000\\0&0.03&0.04&270\\0&-2&-3&-20,000\end{array} \right]$$
$$\dfrac{1}{0.03}R_2=R_2\rightarrow \left[ \begin{array}{ccc|c} 1&1&1&10,000\\0&1&\dfrac{4}{3}&9,000\\0&-2&-3&-20,000\end{array} \right]$$
$$2R_2+R_3=R_3\rightarrow \left[ \begin{array}{ccc|c} 1&1&1&10,000\\0&1&\dfrac{4}{3}&9,000\\0&0&-\dfrac{1}{3}&-2,000\end{array} \right]$$
The third row tells us $$−\dfrac{1}{3}z=−2,000$$; thus $$z=6,000$$.
The second row tells us $$y+\dfrac{4}{3}z=9,000$$. Substituting $$z=6,000$$,we get
\begin{align*} y+\dfrac{4}{3}(6,000) &= 9,000 \\ y+8,000 &= 9,000 \\ y &= 1,000 \end{align*}
The first row tells us $$x+y+z=10,000$$. Substituting $$y=1,000$$ and $$z=6,000$$,we get
\begin{align*} x+1,000+6,000 &= 10,000 \\ x &= 3,000 \end{align*}
The answer is \$3,000 invested at 5% interest, \$1,000 invested at 8% interest, and \$6,000 invested at 9% interest.
Exercise $$\PageIndex{6}$$
A small shoe company took out a loan of \$1,500,000 to expand their inventory. Part of the money was borrowed at 7%, part was borrowed at 8%, and part was borrowed at 10%. The amount borrowed at 10% was four times the amount borrowed at 7%, and the annual interest on all three loans was \$130,500. Use matrices to find the amount borrowed at each rate.
\$150,000 at 7%, \$750,000 at 8%, and \$600,000 at 10%
# Key Concepts
• An augmented matrix is one that contains the coefficients and constants of a system of equations. See Example $$\PageIndex{1}$$.
• A matrix augmented with the constant column can be represented as the original system of equations. See Example $$\PageIndex{2}$$.
• Row operations include multiplying a row by a constant, adding one row to another row, and interchanging rows.
• We can use Gaussian elimination to solve a system of equations. See Example $$\PageIndex{3}$$, Example $$\PageIndex{4}$$, and Example $$\PageIndex{5}$$.
• Row operations are performed on matrices to obtain row-echelon form. See Example $$\PageIndex{6}$$.
• To solve a system of equations, write it in augmented matrix form. Perform row operations to obtain row-echelon form. Back-substitute to find the solutions. See Example $$\PageIndex{7}$$ and Example $$\PageIndex{8}$$.
• A calculator can be used to solve systems of equations using matrices. See Example $$\PageIndex{9A}$$.
• Many real-world problems can be solved using augmented matrices. See Example $$\PageIndex{9B}$$ and Example $$\PageIndex{10}$$.
# Load Leveling With Sea Sequestration
#### Bret Cahill
Carbon abatement schemes always seem too much like closing -- or rather not closing -- the barn door while forgetting about the horse. Something needs to be done about the immediate effects like sea level rise and ocean acidification. Instead of a billion people building higher and higher levees and sea walls every 20 years while hoping the CO2 will just go away, it might be cheaper to just sequester and/or reverse osmosis sea water for irrigation and aquifer injection to stop sea level rise.

Obviously it seems like an impossibly massive project, but consider:

1. Only 75% of sea level rise comes from ice melt. 25% comes from irrigation from ground water. The project would need to pump less than -- maybe much less than -- 4 times more water out of the ocean than all the planet's farmers pump for irrigation. To be sure, most irrigation water isn't pumped very far, but on the other hand farmers do not seem to complain a lot about irrigation pumping costs. One week or so of a lower Mississippi flow rate should equal all the world's farm runoff for a year, a couple of months the entire sea level rise.

2. Just about all the aquifers are depleting, so farmers will eventually be getting water from the ocean anyway, even without AGW causing droughts. There is no way around that fact. This doesn't mean all the sea water must be desalinated, just that it could sweeten up things politically. Few things are better in life than to be on the winning side of a water war.

3. Once the canals are dug, if necessary with my bare hands, it could be all solar and wind. The pump system as well as the grid could be over-sized to help load level.

The proposal isn't to drop carbon taxation or cap and trade but to consider the more immediate effects, weight them by the cumulative carbon footprint, and put them into the overall equation. Something similar to ocean de-acidification credits would be included. If a country wants to burn oil or coal or cut down its rainforests it would look for some lime deposits or other source of OH−.

Bret Cahill
#### Bob Masta
I suspect most water used for irrigation will just end up back in the oceans, not recharging any aquifers, so no help with sea level rise.

And even if it did, how much energy would it take to do the needed reverse osmosis? Any energy you use there is going to add to atmospheric CO2 if you get it from fossil fuels, and if you get it from renewables you are still making the overall problem worse compared to using those renewables to replace other fossil fuel use.
Bob Masta
DAQARTA v7.20
Data AcQuisition And Real-Time Analysis
www.daqarta.com
Scope, Spectrum, Spectrogram, Sound Level Meter
Frequency Counter, Pitch Track, Pitch-to-MIDI
FREE Signal Generator, DaqMusic generator
#### Bret Cahill
> I suspect most water used for irrigation will just end up back in the oceans, not recharging any aquifers, so no help with sea level rise.

All or most of it doesn't need to be desalinated.

> And even if it did, how much energy would it take to do the needed reverse osmosis?

Several hundred gigawatts to do all the sea level rise. At 60 cents/watt it would be over $200 billion for PV. This would be pro rated by a country's carbon history or use.

> Any energy you use there is going to add to atmospheric CO2 if you get it from fossil fuels, and if you get it from renewables you are still making the overall problem worse compared to using those renewables to replace other fossil fuel use.

The problem with the IPCC is they focus on carbon abatement only while ignoring the more immediate side effects. If you can't get to point B there's no reason to focus exclusively on C. Just because the CO2 came before the warming does not necessarily mean that the first thing they should do is stop burning the fossil fuels, especially since it appears to be such a daunting task.

It's like a frying pan that caught on fire. Sure, you will eventually want to turn off the heat, but the first thing you do is take the pan off the stove and put out the fire in the pan.

Geo-engineering needs to be included along with carbon taxes, credits, cap & trade in any international scheme. Humans have been geo-engineering from day one so that is not an issue.

Bret Cahill

#### Bret Cahill

> Huh? As far as I know there are very few useful crops (approaching "none") that can grow in even brackish water, let alone straight seawater.

The idea was to sequester sea water. In some areas the ground water is not just brine, it's toxic. It's illegal to pump anything into the ground anywhere in California, but that law was written without considering either sea level rise or the regions with bad ground water. In regions where the ground water is good enough for irrigation, the water would need to be ROed.

> As you may have noticed, nobody has actually been doing much of anything about CO2 emissions, except promising to consider it at some future date.

Sometimes procrastination works. The local utility was the last to do anything to meet California's regulations. They waited and waited. Finally PV dropped to $0.60/watt and they suddenly were able to exceed the requirements. It's hard to believe anyone there was that smart.
> And reducing carbon emissions is a lot simpler, more assured of success, with more bang for the buck, than complicated and totally unproven geo-engineering schemes.
If all burning of all fossil fuel ceased tomorrow, flooding would continue to worsen.

The carbon abatement community seems to just want to close the barn door when there are more immediate problems resulting from past ignorance that will require solutions.

The IPCC provides nothing in the way of getting to point B, just some idealized zero carbon point C, almost as though they are hoping the resulting famines and geo wars will wipe out enough people to make the planet sustainable.
> Not to mention that the worst thing about geo-engineering is it gives the foot-draggers (and knuckle-draggers) still more incentive to do nothing about the real problem.
A politician cannot get elected saying he doesn't like babies. Even the Chinese "inner kingdom" has difficulty controlling population growth.
Bret Cahill
#### Bob Masta
> All or most of it doesn't need to be desalinated.
Huh? As far as I know there are very few useful crops (approaching "none") that can grow in even brackish water, let alone straight seawater. Every now and then there's an article in New Scientist or Science News about experiments to create such crops, but never any success stories.
As you may have noticed, nobody has actually been doing much of anything about CO2 emissions, except promising to consider it at some future date. And reducing carbon emissions is a lot simpler, more assured of success, with more bang for the buck, than complicated and totally unproven geo-engineering schemes.

Not to mention that the worst thing about geo-engineering is it gives the foot-draggers (and knuckle-draggers) still more incentive to do nothing about the real problem.
Best regards,
Bob Masta
• #### Satellite thermal remote sensing of the volcanoes of Alaska and Kamchatka during 1994-1996, and the 1994 eruption of Kliuchevskoi volcano
The Advanced Very High Resolution Radiometers on the NOAA polar orbiting satellites were used to routinely observe the volcanoes of Alaska and Kamchatka from May 1994 to July 1996, as part of the monitoring effort of the Alaska Volcano Observatory. The largest eruption observed during this period occurred at Kliuchevskoi Volcano between September 8 and October 2, 1994. Radiative temperature measurements made during this eruption were used to develop quantitative methods for analyzing volcanic thermal anomalies. Several parameters, including maximum temperature, anomalous pixels, and total volcanic signal (TVS), were compared to viewing angle and date. A new quantity, TVS7, may most effectively monitor the temporal evolution of the eruption using thermal data. By combining several observations of the thermal state of the volcano, the general nature of the volcanic activity can be described. These observations may indicate an elevation in temperature 12 to 24 hours before an ash-producing event.
• #### Satellite to model comparisons of volcanic ash emissions in the North Pacific
To detect, analyze and predict the movement of volcanic ash in real time, dispersion models and satellite remote sensing data are important. A combination of both approaches is discussed here to enhance the techniques currently used to quantify volcanic ash emissions, based on case studies of the eruptions of the Kasatochi (Alaska, USA, 2008), Mount Redoubt (Alaska, USA, 2009) and Sarychev Peak (Russia, 2009) volcanoes. Results suggest a quantitative approach determining masses from satellite images can be problematic due to uncertainties in knowledge of input values, most importantly the ground surface temperature required in the mass retrieval. Furthermore, a volcanic ash transport and dispersion model simulation requires its own set of accurate input parameters to forecast an ash cloud's future location. Such input parameters are often difficult to assess, especially in real time volcano monitoring, and default values are often used for simplification. The objective of this dissertation is to find a quantitative comparison technique to apply to satellite and volcanic ash transport and dispersion models that reduces the inherent uncertainty in the results. The binary 'Ash -- No Ash' approach focusing on spatial extent rather than absolute masses is suggested, where the ash extent in satellite data is quantitatively compared to that in the dispersion model's domain. In this technique, neither satellite data nor dispersion model results are regarded as the truth. The Critical Success Index (CSI) as well as Model and Satellite Excess values (ME and SE, respectively) are introduced as comparison tools. This approach reduces uncertainties in the analysis of airborne volcanic ash and, due to the reduced list of input parameters and assumptions in satellite and model data, the results will be improved. This decreased complexity of the analysis, combined with a reduced error as the defined edge of ash cloud is compared in each method rather than defined threshold or mass loading, will have important implications for real time monitoring of volcanic ash emissions. It allows for simpler, more easily implemented operational monitoring of volcanic ash movements.
• #### Saxitoxins: role of prokaryotes
Saxitoxins, the toxins associated with paralytic shellfish poisoning (PSP), are synthesized by dinoflagellates, cyanobacteria, and possibly bacteria. The specific objectives of this study were to determine growth conditions that promote high and low levels of toxin accumulation in Aphanizomenon flos-aquae (cyanobacterium) and Pseudomonas stutzeri (bacterium). Putative saxitoxins of P. stutzeri identified by HPLC-FLD in this study, and previously by other laboratories, were determined to be 'imposters' based on their chemical and physical properties, suggesting that this bacterium may not synthesize PSP toxins. In the cyanobacterium, toxin production was enhanced under higher light intensities and temperatures. Toxin accumulation reached maximal levels when cellular nitrogen was from either (NO₃-+NH₄)-N or N₂-N, while urea-N drastically reduced toxin levels. These data will be used in future studies aimed at identifying the genes involved in saxitoxin synthesis via molecular technologies that rely upon expression of the 'saxitoxin genes' under different growth conditions.
• #### Scaling laws in cold heavy oil production with sand reservoirs
This thesis presents a rigorous step-by-step procedure for deriving the minimum set of scaling laws for Cold Heavy Oil Production with Sand (CHOPS) reservoirs based on a given set of physical equations using inspectional analysis. The resulting dimensionless equations are then simulated in COMSOL Multiphysics to validate the dimensionless groups and determine which groups are more significant by performing a sensitivity analysis using a factorial design. The work starts simple by demonstrating how the above process is done for 1D single-phase flow and then slowly ramps up the complexity to account for foamy oil and then finally for wormholes by using a sand failure criterion. The end result is three dimensionless partial differential equations to be solved simultaneously using a finite element simulator. The significance of these groups is that they can be used to extrapolate between a small scale model and a large scale prototype.
• #### School Connectedness: the benefits of a school-based peer-mentoring program for transitioning students in secondary education
The transition to a new high school can disrupt social networks, cause anxiety, and hinder academic success for secondary students. School-based comprehensive peer-mentoring programs that focus on transitioning secondary students have the potential to alleviate the anxiety of a changing school climate by promoting school connectedness, building peer relationships, and being sensitive to the social, academic, and procedural concerns of transitioning secondary students (Cauley & Jovanovich, 2006). Students who feel connected to school feel personally accepted, respected, included, and supported by others in the school social environment, all of which may guard against student alienation, poor self-esteem, and other deviant behaviors for adolescent youth. The following research paper discusses how focused school-based peer-mentoring programs for adolescents may help to build school and peer connectedness; promote academic achievement, healthy development, and psychological health; increase protective factors; and decrease risky behaviors. A presentation and program guide for secondary administration and staff were developed based on the information found in the literature review.
• #### School counselors: preparing transitioning high school students
Without preparedness for possible career avenues after graduation, many youth struggle with career paths they may want to investigate. Even the considerably prepared students are uncertain what they are going to do after high school. Having transition classes starting in middle school can further enhance students' career paths once they graduate from high school. This project focuses on rural school counselors helping to prepare high school students to transition into possible career opportunities. Rural school counselors often have additional advocate duties to help keep a positive connectedness between students and their schools. Increased connectedness and transition classes can make the transition process much more manageable for students after they graduate from high school (Grimes, Haskins, & Paisley, 2013).
• #### A school-based group counseling curriculum for adolescent girls experiencing low self-esteem
This project reviews the existing literature on adolescent development in females, and demonstrates the importance of school counselors facilitating small group counseling with students who experience low self-esteem. Although research suggests social-emotional development begins in childhood, and the American School Counselor Association requires a social-emotional component to school counseling programs, there are few resources available to secondary school counselors who see a need for an effective group counseling curriculum for females with low self-esteem. This project aims to provide secondary school counselors with such a curriculum.
• #### A school-based intervention program for preadolescent girls experiencing body dissatisfaction
Body dissatisfaction and poor body image are issues girls are facing in their preadolescent years. Research is demonstrating that preadolescent girls need intervention programs to help support them with the struggles of body image and self-acceptance. This project uses the literature and established research to provide school counselors with a program to help meet the needs of preadolescent girls struggling with body dissatisfaction and promote body acceptance and body positivity.
• #### Schools in rural Alaska with higher rates of student achievement: a search for positive deviance in education
This study sought to identify schools in rural Alaska with higher rates of student achievement and to study what factors contribute to that success. Alaska Native students make up a large majority of the students attending school in small remote villages across the state. Data, however, have shown that Alaska Native students consistently perform lower than any other demographic group on every subject and at every grade level when tested using state assessments. This study begins with a journey to understand the complexity of the problems that affect schooling in rural Alaska, ranging from teacher turnover to school district size and oversight. However, it is important to examine this current challenge in light of the history of education and how that history has affected Alaska Native people today. To identify schools in rural Alaska with higher rates of student achievement, a binary variable was used to determine positive deviance. Data analysis drew on the academic achievement of each school as measured by the school's 5-year average score in three subjects: Reading, Mathematics and Writing. While the results did not yield a case study for positive deviance, the findings and conclusion, viewed through a critical race theory lens, question whether schools today, intentionally or unintentionally, are still modeled after the same framework and operate in the same fashion as they did when they were intended to assimilate Alaska Natives to become better citizens. Using an advocacy worldview, this study draws upon the unchallenged truth that schools in rural Alaska may never perform, as a collective, as well as or better than their urban counterparts under this model.
• #### Science Education In Rural America: Adaptations For The Ivory Tower
This thesis illustrated what can happen when academic culture disconnects from the cultures surrounding it. It showed that formal school environments are not always the best places to learn. A discussion of the debate between coherence and fragmentation learning theories illustrated academic chasms and a mindset that science education must originate from within ivory towers to be valued. Rationales for place-based science education were developed. Two National Science Foundation initiatives were compared and contrasted for relevance to Native Science education (a) Informal Science Education and (b) Science Education for New Civic Engagement and Responsibilities. A National Science Foundation instrument, known as the Self-Assessment of Learning Gains, was selected to field-test measures of learning science outside of university science courses. Principles of chemistry were taught in community workshops, and those participant self-assessments were compared to self-assessments of students in introductory chemistry courses at two universities. University students consistently claimed the greatest learning gains, in the post-course survey, for the same areas that they claimed to have the greatest understanding, in the pre-course survey. The workshop participant responses differed, depending upon location of the learning environment. When held in a university laboratory, ideas were not related to other cultures, even when a Native Elder was present to describe those relationships. When held in a cultural center, those relationships were among the highest learning gains claimed. One of the instrument's greatest assets was the ability to measure reactions, level 4 of Bennett's (1976) hierarchy of evidence for program evaluation. A long-term commitment to informal science education (not short-term exhibits or programs), combined with negotiated place-based education was recommended as a crucially needed initiative, if relationships between universities and Native American communities are to improve. Some chasms created within ivory towers may never be bridged. Yet, those ideological chasms do not have to exist everywhere. The realities of working in the natural world and the practice of addressing multitudes of community challenges can alter perspectives, when horizons change from the edge of one's desk to those that meet the sea or sky.
• #### Science for Alaska: place for curious learners
For over 25 years, Alaskans have been attending the Science for Alaska Lecture Series, held during the coldest part of the Alaskan winter. The hour-long evening lectures would see from around 100 to almost 300 people attend each event. The scientific literature is quiet regarding audience preferences on the receiving end of science communication. This qualitative study looked at the audience of a science lecture series: who are they, why do they come, and what do they do with the information? In nine taped audio interviews, the research participants described themselves as smart, curious lifelong learners who felt a sense of place in the Arctic for its practical and esoteric values. Attending the events was part of a social identity that they felt was important to share with children. The findings suggest that addressing the audience's sense of place and mirroring their view of themselves as smart, curious people would be an effective avenue for communicating science.
• #### Scintillation at K-band and Ka-band frequencies
The need for higher bandwidth and smaller antenna size for satellite communications led NASA to fund the Advanced Communications Technology Satellite (ACTS) and propagation research for K-band and Ka-band frequencies. From December 1993 to December 1998, seven sites in North America collected and processed power measurements at 20.2 and 27.5 gigahertz from ACTS, a geostationary satellite located at 100° West longitude. The thesis compares scintillation measurements to eight scintillation prediction models, proposes a cumulative distribution model to help predict the percentage of time scintillation exceeds a given threshold, examines the effects of frequency on scintillation magnitudes, and proposes a climate model based on moisture content to help predict scintillation magnitudes. The study concludes that the scintillation prediction models are dependent on the climate, that the frequency dependence is a function of climate, and that the moisture content in the atmosphere dictates the percentage of time large scintillation occurs.
• #### Screw configuration effects during twin-screw extrusion of starchy and proteinaceous materials
This study investigated the effects of screw configuration and feed composition during extrusion of starchy and proteinaceous materials. All experiments were carried out in a twin-screw extruder with a length/diameter ratio of 32:1. The screw speed, feed flow rate, and moisture content were 400 rpm, 12 kg/h, and 15%, respectively. Kneading block (KB) and reverse screw elements (RSE) were placed at different locations in the 200 mm experimental zone near the die, where the temperature was maintained at 150 °C. An on-line method for measurement of residence time distribution (RTD) in a food extruder was developed, tested, and validated. The technique was based on the electrical conductivity of the material in the die, which was altered by addition of an electrolyte tracer at the feed inlet. The change in current flow was measured as a proportional voltage response across a resistor. The on-line method correlated well with the established erythrosine dye method and precisely determined the effects of screw speed, feed flow rate, and screw configuration on RTD. The effects of the type, length and position of mixing elements and the spacing between two elements on energy inputs, RTD, molecular changes of starch, and macroscopic extrudate characteristics were compared. The results showed that the specific mechanical energy (SME), mean residence time, and extent of starch breakdown were higher for screw profiles with RSE than for those with KB. These parameters also increased with longer mixing elements, increased distance of the elements from the die, and increased spacing between two elements. Specific thermal energy (STE) input showed the opposite trend to that of SME. Die temperature was highest when the elements were placed at 0 mm from the die. Such a screw profile produced an extrudate with the highest overall expansion and lowest apparent density. Radial expansion was higher with KB in the screw profile than with RSE; KB seemed to be the element of choice for maximizing radial expansion. Increasing mixing element length or the spacing between two elements decreased product expansion. Hardness of the product decreased linearly with increasing radial expansion, as shown by the breaking strength data. Changing the feed composition by adding Arrowtooth flounder muscle decreased the SME input and increased STE and mean residence time. Hydrolysis changed the properties of Arrowtooth flounder muscle so much that it enhanced the expansion characteristics of starch in rice flour and improved extrudate texture.
• #### Sea change, know fish: catching the tales of fish and men in Cordova, Alaska
Cordova, Alaska is a coastal community in Southcentral Alaska with an intricate history in commercial fishing, primarily for the Copper River sockeye salmon industry, which extends historically to pre-statehood. This dissertation collects personal narratives as a method to express cultural features of community identity and the role salmon has played in shaping identity, livelihood, and lifestyle in Cordova, Alaska. Research material is based on oral history interviews from which I construct written character portraits to depict aspects of resident life in this fishing community and from others who use the community to access summer salmon resources of the Copper River. Portraits were performed and presented in public venues to obtain casual feedback from and review by community members from Cordova and other participants in the Prince William Sound drift fishery. The portraits and public commentary post-performance or from community readers serve as one basis for analysis and lead to my conclusions about life in this community and, on a larger scale, cultural dimensions common within other communities (either geographic or occupational). Public performances offer a communication tool that provides a method to share differences within the industry without encountering explicit controversy over challenging industry transitions. Although the tool of storytelling does not typically receive significant media or policy attention, I find it very effective in understanding and mediating conflict across different groups of people, especially when the main theme of conflict, sustainability and access to the fishery resource, is a mutual cultural feature of interest to diverse participant groups. Additionally, public creative performances offer a venue of communication primarily designed for entertainment, and as a result the audience interaction with storytellers occurs more casually and perhaps more genuinely than it does in academic conferences or policy meeting venues. Personal stories related to the iconic feature of salmon with mutual significance in state and federal fisheries of the North Pacific are a valuable, intimate source of local and traditional knowledge. The opportunity to put meaningful and commonly shared emphasis on the fish as an economic and cultural resource, and not on a particular stakeholder group, may help lead to improved communications in a field that tends to elicit conflict in consideration of access to harvest rights.
• #### Sea ice near-inertial response to atmospheric storms
A moored oceanographic array was deployed on the Beaufort Sea continental slope from August 2008 to August 2009 to measure Arctic sea ice near-inertial motion in response to rapidly changing wind stress. Upward-looking Acoustic Doppler Current Profilers detected sea ice and measured ice drift using a combination of bottom track and error velocity. An analysis of in-situ mooring data in conjunction with data from the National Centers for Environmental Prediction (NCEP) reanalysis suggests that many high and low pressure systems cross the Beaufort in winter, but not all of these create a near-inertial ice response. Two unusually strong low pressure systems that passed near the array in December 2008 and February/March 2009 were accompanied by elevated levels of near-inertial kinetic energy in the ice. The analysis suggests pressure systems which have a diameter to ground track velocity ratio close to 3/4 of the local inertial period can excite a large near-inertial response in the sea ice. It is conjectured that this results from the combined effect of resonance arising from similar intrinsic timescales of the storm and the local inertial period and from stresses that are able to overcome the damping of sea ice arising from ice mechanics and damping in the ice-ocean boundary layer. Those systems whose intrinsic timescales do not approach resonance with the local inertial period did not excite a large near-inertial response in the sea ice. From an analysis of two storms in February 2009 and two in December 2008, it appears that wind stresses associated with previous low pressure systems preconditioned the ice pack, allowing for a larger near-inertial response during subsequent events.
• #### Sea-ice habitat preference of the Pacific walrus (Odobenus rosmarus divergens) in the Bering Sea: a multiscaled approach
The goal of this thesis is to define specific parameters of mesoscale sea-ice seascapes for which walruses show preference during important periods of their natural history. This research thesis incorporates sea-ice geophysics, marine-mammal ecology, remote sensing, computer vision techniques, and traditional ecological knowledge of indigenous subsistence hunters in order to quantitatively study walrus preference of sea ice during the spring migration in the Bering Sea. Using an approach that applies seascape ecology, or landscape ecology applied to the marine environment, our goal is to define specific parameters of ice-patch descriptors and mesoscale seascapes in order to evaluate and describe potential walrus preference for such ice and the ecological services it provides during an important period of their life-cycle. The importance of specific sea-ice properties to walrus occupation motivates an investigation into how walruses use sea ice at multiple spatial scales when previous research suggests that walruses do not show preference for particular floes. Analysis of aerial imagery, using image processing techniques and digital geomorphometric measurements (floe size, shape, and arrangement), demonstrated that while a particular floe may not be preferred, at larger scales a collection of floes, specifically an ice patch (< 4 km²), was preferred. This shows that walruses occupy ice patches with distinct ice features such as floe convexity, spatial density, and young ice and open water concentration. Ice patches that are occupied by adult and juvenile walruses show a small number of characteristics that vary from those ice patches that were visually unoccupied. Using synthetic aperture radar imagery, we analyzed co-located walrus observations and statistical texture analysis of radar imagery to quantify seascape preferences of walruses during the spring migration. At a coarse resolution of 100-9,000 km², seascape analysis shows that, for the years 2006-2008, walruses were preferentially occupying fragmented pack ice seascapes 50-89% of the time, when, throughout the Bering Sea, only 41-46% of seascapes consisted of fragmented pack ice. Traditional knowledge of walrus use of sea ice is investigated through semi-directed interviews conducted with subsistence hunters and elders from Savoonga and Gambell, two Alaska Native communities on St. Lawrence Island, Alaska. Informants were provided with a large nautical map of the land and ocean surrounding St. Lawrence Island and 45 printed large-format aerial photographs of walruses on sea ice to stimulate discussion as questions were asked to direct the topics of conversation. Informants discussed change in sea ice conditions over time, walrus behaviors during the fall and spring subsistence hunts, and sea-ice characteristics that walruses typically occupy. These observations are compared with ice-patch preferences analyzed from aerial imagery. Floe size was found to agree with remotely-sensed ice-patch analysis results, while floe shape was not distinguishable to informants during the hunt. The ice-patch arrangement descriptors concentration and density generally agreed with ice-patch analysis results. Results include possible preference of ice-patch descriptors at the ice-patch scale and fragmented pack ice preference at the seascape scale.
Traditional knowledge suggests large ice ridges are preferential sea-ice features at the ice-patch scale, which are rapidly becoming less common during the fall and spring migration of sea ice through the Bering Sea. Future work includes increased sophistication of the synthetic aperture radar classification algorithm, experimentation with various spatial scales to determine the optimal scale for the walrus's life-cycle events, and incorporation of further traditional knowledge to investigate and interface cross-cultural sea-ice observations, knowledge and science to determine the importance of sea ice to marine mammals in a changing Arctic.
• #### Seabird Habitat Use And Zooplankton Abundance And Biomass In Relation To Water Mass Properties In The Northern Gulf Of Alaska
Understanding of biological and physical mechanisms that control the Gulf of Alaska (GOA) ecosystem is of major importance to predicting the responses of bird and zooplankton communities to environmental changes in this region. I investigated seasonal (March-October) changes in seabird abundance in relation to changes in zooplankton biomass and water mass properties from 1998 to 2003. Oceanodroma furcata and Fratercula cirrhata were most abundant during the peak of the zooplankton production season (May-August). Overall abundance of seabirds did not follow seasonal changes in zooplankton biomass. Seabird abundance was low in the study area when compared to other regions in the GOA. Furthermore, low bird densities suggest that productivity in this study area is not high enough to sustain a significant seasonal increase in local seabird abundance. I further investigated the distribution and abundance of seabird foraging guilds across the neritic and oceanic domains in relation to water mass properties and zooplankton biomass during March and April. Overall zooplankton biomass increased from the inner shelf to the oceanic domain. Highest density of subsurface-foraging seabirds occurred in the middle shelf and surface-feeding seabirds were most abundant in the middle shelf and oceanic domain. Murre (Uria spp.) abundance was positively correlated with the biomass of Thysanoessa inermis, and Northern Fulmars (Fulmarus glacialis) were associated with cephalopod paralarvae and Eucalanus bungii. Elevated biomass of Thysanoessa inermis in March and April may be an important factor influencing habitat choice of wintering murres in this region. Lastly, I investigated the inter-annual variation in the abundance of sixteen zooplankton taxa in relation to water mass properties during May from 1998 to 2009. Significant variations in temperature, salinity and zooplankton abundance were identified. Thysanoessa inermis and Calanus marshallae had increased abundances in years when there was a strong phytoplankton spring bloom preceded by anomalously cold winters. However, abundances of Pseudocalanus spp., Neocalanus plumchrus/Neocalanus flemingeri, Euphausia pacifica and Oithona spp. were not strongly affected by relatively higher mean water temperatures. The abundance of zooplankton in the northern GOA was highly influenced by advective processes.
• #### Seabirds at sea in relation to oceanography
This study investigated the macroscale distribution of seabirds in relation to oceanography in a neritic environment characterized by well-defined water masses (the northern Bering Sea) and an oceanic environment characterized by weaker differences between water masses (the northern North Pacific Ocean). In the northern Bering Sea, the total density (birds/km²) of all seabirds combined and the densities and/or frequencies of occurrence of seven of nine species of seabirds that exhibited significant differences among water masses showed the strongest attraction to Anadyr Water. In general, attractions were second highest in Bering Shelf Water, third highest in Two-layered Water (Alaska Coastal Water overlying Bering Shelf Water), and lowest in Alaska Coastal Water. This pattern of seabird distributions reflected distributions of zooplankton biomass, which were highest in Anadyr Water and consisted of species that were large enough to be eaten directly by seabirds. Further, whereas copepods in Bering Shelf Water also are large, they are much smaller in Alaska Coastal Water and, thus, must pass through more trophic levels to fishes before the energy is directly accessible to seabirds. Consequently, zooplankton-based food webs dominated in Anadyr and Bering Shelf waters and fish-based food webs dominated in Two-layered and Alaska Coastal waters. In addition, seabirds concentrated near a strong, mesoscale thermal front between Bering Shelf and Alaska Coastal waters. In the northern North Pacific, assemblages of seabirds exhibited three main groupings: a "subarctic assemblage," a "transitional assemblage," and a "subtropical/tropical assemblage." These assemblages matched those for zooplankton, squids, and fishes in the same vicinity, suggesting that there are geographically- and temporally-stable biological communities in the North Pacific that are associated with well-defined, persistent physical environments. The total density of all seabirds combined and the densities and/or frequencies of occurrence of 13 of 16 species of seabirds that exhibited significant two-way ANOVAs exhibited primarily a water mass effect; only one species exhibited primarily a year effect, and two exhibited primarily an interaction (i.e., a change in habitat use between years).
|
# Rules governing Plot line generation
I was looking at the Documentation Center pages for Round, Ceiling and Floor when I noticed that the plots for Round showed some variation, even at the small size at which the Documentation Center displays them; blown up... well:
Plot[Round[x, 10], {x, -30, 30}, Filling -> Axis]
Plot[Round[x], {x, -3, 3}, Filling -> Axis]
This intrigued me; I don't fully understand the rules governing the generation of plots from functions. Clearly the method is adaptive, generating more data points where there is "activity," but is the (dis)continuity just based on the resolution at which the plot is generated?
Any insight is welcome.
You get a similar result if you use Round[x,1]. This uses 163 points, the same as Round[x,10], whereas without it, it uses 175 points. You'll find some general info on how the points are chosen in this answer – rm -rf♦ Sep 26 '12 at 18:01 Note also that Round[x] and Round[x,1] result in different evaluation points. How do you get those markers? – george2079 Sep 26 '12 at 18:03 @george2079 If you repeatedly click on the plot, you'll select individual lines in the graphics editing tool – rm -rf♦ Sep 26 '12 at 18:11 – Mr.Wizard♦ Feb 20 at 1:23
## 2 Answers
This is probably not a complete answer. Plot attempts at some point to detect discontinuities symbolically. With a few built-ins it can do this; Round[x] is an example, but for some reason Round[x, 1] is not.
Those discontinuities are excluded from the plots (see Exclusions)
So, for example, compare the output of these
Plot[Round[x], {x, -3, 3}]
Plot[Round[x, 1], {x, -3, 3}]
If you define r[x_?NumericQ] := Round[x], then r[x] behaves the same way as Round[x, 1], because Plot doesn't know how to manipulate your custom r symbolically.
Now, if Plot already knows beforehand it has a discontinuity somewhere, it can be smarter than usual. For example, it can try to make the discontinuities sharper by sampling right on the sides.
x1 = Reap[
Plot[Round[x, 1], {x, -3, 3}, Filling -> Axis,
EvaluationMonitor -> Sow[x]]][[-1, 1]] // Rest;
x2 = Reap[
Plot[Round[x], {x, -3, 3}, Filling -> Axis,
EvaluationMonitor -> Sow[x]]][[-1, 1]] // Rest;
Complement[x2, x1]
{-2.50191, -2.49809, -1.50191, -1.49809, -0.501913, -0.498087, \ 0.498087, 0.501913, 1.49809, 1.50191, 2.49809, 2.50191}
As to the rendering issues with the Filling, that's because it excludes discontinuities by default. Try with Exclusions->None
Plot[Round[x], {x, -3, 3}, Filling -> Axis, Exclusions -> None]
Related question: Managing Exclusions in Plot[]
So it's all in how the system is handling the exclusions that are generated by Round[x] but not by Round[x,1]? Considering that they are supposed to be equivalent expressions (Round[x] = Round[x, 1], as if defined by Round[x_, round_: 1] := ...), I guess I'm curious to know what Wolfram implemented to have it handle that way. Sigh, the never-ending depths of Mathematica's mysteries... – MRN16 Sep 27 '12 at 12:10 The link you included was very helpful in understanding the "interesting point detection" methodology that Mathematica employs and a little bit about its exclusion detection method (thanks to your input it appears). Thanks for all your help! – MRN16 Sep 27 '12 at 12:23 @MRN16 glad to help, thanks for the accept – Rojo Sep 27 '12 at 13:18 V9 computes exclusions for the two-argument form of Round. – Brett Champion Nov 29 '12 at 17:15
There are two excellent threads on StackOverflow that explore the inner workings of Plot sampling:
Answer from Yaroslav Bulatov
Answer from Alexey Popkov
One can get linear sampling by using the option MaxRecursion -> 0, and control the sampling rate with PlotPoints:
Plot[Round[x, 10], {x, -30, 30},
Filling -> Axis,
Mesh -> All,
MaxRecursion -> 0,
PlotPoints -> 80
]
Thanks for the links, I sometimes forget that the Mathematica questions can be found in the other stack websites too. – MRN16 Sep 27 '12 at 12:16
|
## haganmc: Solve the diff equation. dr/dø + r·tan(ø) = sec(ø)
Try letting $\lambda(\phi) = \exp\left(\int \tan\phi \, d\phi\right) = \sec\phi$; then $$(\sec\phi \tan\phi)\, r(\phi) + \sec\phi\, \frac{dr(\phi)}{d\phi} = \sec^2\phi.$$ Substitute $\sec\phi \tan\phi = \frac{d\sec\phi}{d\phi}$: $$\frac{d\sec\phi}{d\phi}\, r(\phi) + \sec\phi\, \frac{dr(\phi)}{d\phi} = \sec^2\phi,$$ then use the reverse product rule; see if it works.
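For completeness, a sketch of where the hint leads (standard integrating-factor steps):
$$\frac{d}{d\phi}\bigl(r(\phi)\sec\phi\bigr)=\sec^2\phi \quad\Longrightarrow\quad r(\phi)\sec\phi=\tan\phi+C \quad\Longrightarrow\quad r(\phi)=\sin\phi+C\cos\phi.$$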
|
# Different definitions of irreducible $\mathrm{SU}(2)$ connections
Let $Y$ be Poincaré's integral homology 3-sphere. Let $\pi:P\to Y$ be a (necessarily trivial) $\mathrm{SU}(2)$-principal bundle over $Y$. Fix $x_0\in Y$ and $p_0\in P_x:=\pi^{-1}(x_0)$. The fundamental group $\pi_1(Y,x_0)$ of $Y$ is isomorphic to the binary icosahedral group $2I$ which can be seen as a subgroup of $\mathrm{SU}(2)$. Let $A$ be a flat connection on $P$ such that its holonomy group $H<\mathrm{Aut}(P_x)\cong \mathrm{SU}(2)$ based at $x_0$ is isomorphic to the binary icosahedral group $2I$ inside $\mathrm{SU}(2)$. Let $Q\subset P$ be the set of points in $P$ that are horizontally reachable from $p_0$, i.e.: $$Q := \left\{p\in P \;|\; \exists \gamma\in \mathcal{C}^1_{pw}([0,1];P) \;\;\text{s.t.}\;\; \gamma(0)=p_0,\ \gamma(1)=p \;\;\text{and}\;\; \gamma \text{ horizontal for } A\right\} \subset P$$ where $\mathcal{C}^1_{pw}$ means "piecewise differentiable". From theorem 7.1 at p. 83-84 of [KN-I], it happens that $Q$ is a structural reduction from the bundle $\mathrm{SU}(2)\hookrightarrow P\to Y$ to the bundle $H\hookrightarrow Q\to Y$ such that $A$ reduces to a connection on $Q\to Y$. So here, according to [KN-I] and to this (at the middle of p. 15), $A$ is a reducible connection (in fact, $Q\to Y$ has discrete fibers so the restriction of $A$ to $Q$ vanishes because, here, $\mathrm{Lie}(H)=\{0\}$).
On the other side, the centralizer of $H$ in $\mathrm{SU}(2)$ is minimal : $C_{\mathrm{SU}(2)}(H)=C_{\mathrm{SU}(2)}(2I)=\{-1,1\}$. So here, according to the mainstream gauge theory's definition of reducibility, $A$ is an irreducible connection.
Questions :
1. Am I right that there is a conflict between [KN-I]'s definition of reducible connections and the definition found in the gauge theory literature of the last forty or fifty years?
2. If I'm wrong, please point where I missed something.
3. If I'm right, do you know around which paper was the first instance of the definition of an irreducible connection being that $C_G(H)$ is minimal ?
Maybe it's just that the notion of "reducibility of $A$ in the context of structural reductions" is not the same thing as "reducibility of $A$ in the context of reducible representations of the fundamental group".
p.s. this question is a sequel to this previous question.
[KN-I] : Foundations of Differential Geometry, Vol. I (Kobayashi, Nomizu)
Conclusion: the two notions of reducible connection cited above are closely related, but they are not the same.
In gauge theory the fundamental notion is that of a (unitary, orthogonal) connection on a vector bundle. A reducible connection is one that is induced from a vector bundle of smaller dimension. To Kobayashi and Nomizu, the fundamental notion is a connection on a principal $G$-bundle, where $G$ is a Lie group. Then the right notion of irreducible is whether or not the connection can be induced from a smaller (Lie) structure group. This is precisely whether or not $H_A$ is a proper subgroup of $G$. If you further think the primitive notion is that of connections on principal $G$-bundles where $G$ is compact Lie, then you care about whether or not $\overline{H_A}$ is a proper subgroup of $G$.
• Yep it seems to me "irreducible/reducible" is used in two close but different notions. Though, gauge theory isn't always about connexions on a vector bundle. I mostly see connections on principal bundles. Then, this induces connections on associated bundles. e.g., for the canonical representation of $\mathrm{SU}(2)$ on $\mathbb{C}^2$ we have a vector bundle $E$ with typical fiber $\mathbb{C}^2$ where $A$ on $P$ induces, lets say, $\nabla$ on $E$. But yeah, one can go the opposite way also, from $E$ to $P$ considering the special unitary frame bundle of $E$ which is our original $P$. – Noé AC Jan 20 '18 at 22:58
• @NAC I know :) But at least for $SU(2)$, $SO(3)$, $C(H_A)$ is the center if and only if the connection is not induced from connections on smaller rank bundles. I'd guess that's true for arbitary $SO(N)$ but I haven't thought about it. – user98602 Jan 20 '18 at 23:03
• @NAC Similar but not precisely the same for positive dimensional subgroups; e.g. $\text{Pin}(2) \subset SU(2)$ is a positive-dimensional subgroup not contained in $U(1)$ (which is what I would say means it's induced by a connection on a bundle of lower rank). A careful author should be precise about what they mean in their context (most usually that the stabilizer of the gauge group action is as small as possible). – user98602 Jan 20 '18 at 23:21
• Ok, thanks. I think that concludes my series of questions/doubts about irreducible connections $\mathrm{SU}(2)$. – Noé AC Jan 20 '18 at 23:23
|
# Have you seen this before?
Calculus Level 3
$\large{1-\dfrac{1}{2}+\dfrac{1}{3}-\dfrac{1}{4}+\ldots\quad=\quad?}$
Give your answer to 3 decimal places.
|
PDE integral help
1. Mar 16, 2008
Cyrus
My PDE book does the following:
$$\int \phi_x^2 dx$$
Where,
$$\phi_x = b-\frac{b}{a} |x|$$
for $$|x|> a$$ and x=0 otherwise.
Strauss claims:
$$\int \phi_x^2 dx = ( \frac{b}{a} ) ^2 2a$$
However, I think there is a mistake. It can be shown that:
$$\frac{-3a}{b}(b- \frac{b|x|}{a})^3$$ is a Soln. Evaluate this between 0<x<a and you get:
$$\frac{b^2 a}{3}$$
Because the absolute value function is symmetric, it's twice this value:
$$\frac{2b^2 a}{3}$$
Unless I goofed, I think the book is in error.
*Note: Integration is over the whole real line.
Last edited: Mar 17, 2008
2. Mar 16, 2008
Dick
I think you must mean |x|<a, right? Otherwise the integral isn't defined. I tried integrating (b-bx)^2 from 0 to a and I don't get anything close to either answer. Can you clarify?
3. Mar 17, 2008
Cyrus
Oh crap, it's b - (b/a)|x|, sorry. See above, I fixed it.
4. Mar 17, 2008
Dick
Then I'm getting the same result as you.
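For the record, a worked check of the integral as stated, treating $b-\frac{b}{a}|x|$ as the integrand on $(-a,a)$:
$$\int_{-a}^{a}\left(b-\frac{b}{a}|x|\right)^2 dx = 2\int_{0}^{a} b^2\left(1-\frac{x}{a}\right)^2 dx = 2b^2\cdot\frac{a}{3} = \frac{2ab^2}{3}.$$
If instead $\phi(x)=b-\frac{b}{a}|x|$ denotes the function itself, its derivative is $\phi_x=\mp\frac{b}{a}$ on $(-a,a)$, so $\int \phi_x^2\,dx = \left(\frac{b}{a}\right)^2 2a$, which reproduces Strauss's answer; the discrepancy may simply be a mix-up between $\phi$ and $\phi_x$.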
|
Lesson 4: 에
In this lesson, you will learn how to use the particle 에 [ae] to say things like “He/She is at home.”, “I am going to the park.”, and “I wake up at 6.”
[ae] is what is sometimes referred to as a ‘location marking particle’. It is used to mark a noun as the location where something exists or doesn’t exist, the place where you are going to, or the time at which something takes place. To help you understand what we mean, let’s take a look at each of these usages one by one.
Usage 1: Place
The particle 에 can be used to mark a noun as the place where someone or something exists or doesn’t exist. Let’s look at some examples:
집에 있어요. [ji-be i-sseo-yo] = (I/He/She) is at home.
집에 없어요. [ji-be eop-seo-yo] = (I/He/She) is not at home.
책이 가방에 있어요. [chae-gi ga-bang-e i-sseo-yo] = The book is in the bag.
책이 가방에 없어요. [chae-gi ga-bang-e eop-seo-yo] = The book is not in the bag.
베개가 침대에 있어요. [be-gae-ga chim-dae-e i-sseo-yo] = The pillow is on the bed.
베개가 침대에 없어요. [be-gae-ga chim-dae-e eop-seo-yo] = The pillow is not on the bed.
In the examples above, 에 is attached to a noun to ‘mark’ that noun as the location where something or someone exists or doesn’t exist. When translated into English, the prepositions ‘at‘, ‘in‘, and ‘on‘ are used, but in Korean these are all realized by the location marking particle 에.
Usage 2: Destination
The particle 에 can be used to mark a noun as the destination that someone or something is going to. When 에 is used to mark a noun as the destination, it is followed by verbs that indicate movement such as 가다 [ga-da] (to go), 오다 [o-da] (to come), and 다니다 [da-ni-da] (to attend). Let’s look at some examples:
공원에 가요. [gong-wo-ne ga-yo] = I’m going to the park.
친구가 집에 와요. [chin-gu-ga ji-be wa-yo] = My friend is coming to my house.
서울대학교에 다녀요. [seo-ul-dae-hak-gyo-e da-nyeo-yo] = I go to / attend Seoul University.
As you can see in these examples, 에 is similar to the English preposition ‘to‘ when used to mark a noun as the destination that someone or something is going to.
Usage 3: Time
The particle 에 can be used to indicate the time when something happens. Let’s look at some examples:
6시에 일어나요. [yeo-seot-si-e i-reo-na-yo] = I get up at 6.
아침에 운동해요. [a-chi-me un-dong-hae-yo] = I exercise in the morning.
주말에 청소해요. [ju-ma-re cheong-so-hae-yo] = I clean on weekends.
내년에 미국에 가요. [nae-nyeo-ne mi-gu-ge ga-yo] = I’m going to America next year.
*Please note that 에 cannot be used with the following words: 오늘 (today), 어제 (yesterday), 내일 (tomorrow), and 언제 (when). This is a common mistake beginner Korean learners often make.
|
# Empirical Kullback-Leibler divergence of two time series
I have two vectors (time series) of the same length (1200 elements), $x$ and $y$. Further, both time series are stationary. I don't know the theoretical distributions of $x$ and $y$. I would like to calculate the relative entropy of these r.v.s. I think I have to use an empirical distribution function. Do you have any ideas how to calculate the Kullback-Leibler divergence of two time series with different distributions?
Thanks a lot in advance!
• "I would like to calculate relative entropy of these r.v.-s" The random variables $x$ $y$ would the single (scalar) values (of which the 1200 elements will be taken as samples), or are you instead thinking of the full set of 1200 elements as a single multidimensional random variable? Oct 24, 2015 at 0:21
Typically, one would define a window size $w$, and look at the distribution generated by considering symbols of the form $x_k^{k+w}$. The distributions on $X_1^w$ and $Y_1^w$ so found can be plugged into the formula.
$$\frac{1}{w}\sum_{x \in \mathcal{X}^w} P_w(x)\log\frac{P_w(x)}{Q_w(x)}$$
Where $P$ was determined using $x$, and $Q$ using $y$. Note the normalisation by the window size, which is giving you a single-letter type figure.
The windowing is essentially because you don't know if the symbols are i.i.d. or not, and if you do, you can forego it. The window size should be significantly smaller than the length of the time series, otherwise the likelihood of getting symbols in one time series that don't occur in the other are pretty high, which means you'll either drop samples or have the divergence blow up, both of which are bad. There is an obvious tradeoff between raising and lowering the window size, and you'll probably need to do a bit of lit survey if you want to pick an optimal one, but given a $1200$ step time series, $10-20$ gives you a fairly long range as well as a good number of samples.
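A minimal numerical sketch of this windowed plug-in estimator in Python (assumptions of the sketch: the real-valued series are first quantized into a small shared alphabet via quantile bins, and words of $x$ that never occur in $y$ are dropped rather than letting the divergence blow up; both the window size and the bin count are knobs):

import numpy as np
from collections import Counter

def windowed_kl(x, y, w=3, n_bins=4):
    # Shared quantile bins so both series use the same alphabet.
    edges = np.quantile(np.concatenate([x, y]), np.linspace(0, 1, n_bins + 1)[1:-1])
    xq, yq = np.digitize(x, edges), np.digitize(y, edges)

    def word_dist(s):
        # Empirical distribution of length-w words (overlapping windows).
        counts = Counter(tuple(s[i:i + w]) for i in range(len(s) - w + 1))
        total = sum(counts.values())
        return {word: c / total for word, c in counts.items()}

    P, Q = word_dist(xq), word_dist(yq)
    # Words seen in x but not in y are skipped: the "drop samples" horn of
    # the tradeoff described above.
    kl = sum(p * np.log(p / Q[word]) for word, p in P.items() if word in Q)
    return kl / w                     # normalised by window size

rng = np.random.default_rng(0)
print(windowed_kl(rng.normal(0, 1, 1200), rng.normal(0.5, 1, 1200)))

With only 1200 samples, small windows and coarse bins keep the word space well populated; pushing $w$ toward the 10-20 suggested above requires correspondingly coarser quantization.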
|
Suppose Nile.com used the average-cost method and the perpetual inventory system. Use the Nile.com
Suppose Nile.com used the average-cost method and the perpetual inventory system. Use the Nile.com data in question 3 to compute the average unit cost of the company’s inventory on hand at April 8. Round unit cost to the nearest cent.
a. $21.00
b. $19.75
c. $19.50
d. Cannot be determined from the data given
|
# On the representation theory of the alternating groups
Document Type: Ischia Group Theory 2012
Authors
1 Dipartimento di Ingegneria, Università del Sannio
2 Dipartimento SBAI, Sapienza Università di Roma
3 Dipartimento di Matematica e Fisica, Università Roma Tre
Abstract
We present the basic results on the representation theory of the alternating groups $A_n$. Our approach is based on Clifford theory.
|
Orthonormal wave functions: sets of wave functions that are both normalized and mutually orthogonal are called orthonormal wave functions. (The triangle wave function, a periodic function used in signal processing, is a different notion and should not be confused with the quantum-mechanical wave function.)
A wave function, usually denoted by the Greek letter ψ (psi), is a probability amplitude in quantum mechanics describing the quantum state of a particle and how it behaves; it is a mathematical function from which we can obtain the dynamical variables (position, energy, spin, angular momentum, etc.) of an atomic particle. That is, the information about the dynamical behaviour of a particle is contained in its wave function. The wave function itself has no direct physical significance, but its square does: the probability of finding the particle described by a specific wave function Ψ at a given point and time is proportional to the value of Ψ². Whether the wave function is symmetric or antisymmetric under exchange of two particles gives you insight into whether those particles can occupy the same quantum state. More generally, a wave is a disturbance propagating through space, and its amplitude is directly related to the amount of energy it carries.
The wave function may be an eigenfunction of an observable Hermitian operator that represents a physical quantity, or a linear superposition of eigenfunctions of such an operator; in some models the wave functions are posited rather than derived, and the coefficients that determine their form are then parameters of the model. The evolution in time of the wave function is governed by Erwin Schrödinger's famous wave equation, iħ ∂ψ/∂t = Hψ. Wave-function collapse occurs when a wave function, initially in a superposition of several eigenstates, reduces to a single eigenstate due to interaction with the external world; this interaction is, for instance, what allows us to detect a photon. A radial wave function is an orbital that describes the position of an electron in an atom. The physical meaning of the wave function remains an important interpretative problem of quantum mechanics.
Normalizing a wave function means finding the form of the wave function that makes the statement $$\int^\infty_{-\infty} \psi^* \psi dx = 1$$ true. The wave function is one of the most important concepts in quantum mechanics, because every particle is represented by one.
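Since normalization is the one computational statement in this passage, here is a minimal numerical sketch in Python (an illustration only; it assumes a 1-D wave function sampled on a uniform grid, with a Gaussian profile standing in for ψ):

import numpy as np

# Normalize a sampled 1-D wave function so that the integral of |psi|^2 is 1.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2).astype(complex)   # unnormalized Gaussian, for illustration

norm_sq = np.sum(np.abs(psi) ** 2) * dx   # Riemann approximation of ∫ psi* psi dx
psi /= np.sqrt(norm_sq)

print(np.sum(np.abs(psi) ** 2) * dx)      # ≈ 1.0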
|
Thread: How to combine correlated vector-valued estimates
1. How to combine correlated vector-valued estimates
I'd like to combine several vector-valued estimates of a physical quantity in order to obtain a better estimate with less uncertainty.
As in the scalar case, the weighted mean of multiple estimates can provide a maximum likelihood estimate. For independent estimates we simply replace the variance by the covariance matrix and the arithmetic inverse by the matrix inverse (both denoted in the same way, via superscripts); the weight matrix then reads (see https://en.wikipedia.org/wiki/Weight...lued_estimates)
$$W_i = \Sigma_i^{-1},$$
where $\Sigma_i$ stands for the covariance matrix of the $i$-th vector-valued estimate $\mathbf{x}_i$.
The weighted mean in this case is:
$$\bar{\mathbf{x}} = \Sigma_{\bar{\mathbf{x}}} \left(\sum_{i=1}^n W_i \mathbf{x}_i\right)$$
(where the order of the matrix-vector product is not commutative).
The covariance of the weighted mean is:
$$\Sigma_{\bar{\mathbf{x}}} = \left(\sum_{i=1}^n W_i\right)^{-1}$$
For example, consider the weighted mean of the point $\mathbf{x}_1 = [1, 0]^\top$ with high variance in the second component and $\mathbf{x}_2 = [0, 1]^\top$ with high variance in the first component. Then
$$\Sigma_1 = \begin{bmatrix} 1 & 0 \\ 0 & 100 \end{bmatrix}, \qquad \Sigma_2 = \begin{bmatrix} 100 & 0 \\ 0 & 1 \end{bmatrix},$$
then the weighted mean is:
$$\bar{\mathbf{x}} = \left(\Sigma_1^{-1} + \Sigma_2^{-1}\right)^{-1} \left(\Sigma_1^{-1}\mathbf{x}_1 + \Sigma_2^{-1}\mathbf{x}_2\right) \approx \begin{bmatrix} 0.9901 \\ 0.9901 \end{bmatrix}.$$
On the other hand, for scalar quantities it is well known that correlations between estimates can easily be accounted for. In the general case (see https://en.wikipedia.org/wiki/Weight...r_correlations), suppose that $\mathbf{x} = [x_1, \dots, x_n]^\top$, $C$ is the covariance matrix relating the quantities $x_i$, $\bar{x}$ is the common mean to be estimated, and $J$ is the design matrix $[1, \dots, 1]^\top$ (of length $n$). The Gauss-Markov theorem states that the estimate of the mean having minimum variance is given by:
$$\bar{x} = \sigma^2_{\bar{x}} \left(J^\top C^{-1} \mathbf{x}\right),$$
with
$$\sigma^2_{\bar{x}} = \left(J^\top C^{-1} J\right)^{-1}.$$
The question is, how can correlated vector-valued estimates be combined?
In our case, how should one proceed if $\mathbf{x}_1$ and $\mathbf{x}_2$ are not independent and all the terms in the joint covariance matrix are known?
In other words, are there expressions analogous to the last two for vector-valued estimates?
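For concreteness, a minimal NumPy sketch of the formulas above; the stacked-GLS block at the end is one natural generalization to the correlated case, under the assumption that the full joint covariance is known (the names x1, x2, S1, S2, C are mine, not from any reference):

import numpy as np

# Independent case: inverse-covariance (precision) weighting, as above.
x1 = np.array([1.0, 0.0]); S1 = np.array([[1.0, 0.0], [0.0, 100.0]])
x2 = np.array([0.0, 1.0]); S2 = np.array([[100.0, 0.0], [0.0, 1.0]])
W1, W2 = np.linalg.inv(S1), np.linalg.inv(S2)
cov_mean = np.linalg.inv(W1 + W2)              # covariance of the weighted mean
x_mean = cov_mean @ (W1 @ x1 + W2 @ x2)
print(x_mean)                                  # ≈ [0.9901, 0.9901]

# Correlated case: Gauss-Markov/GLS pattern on the stacked vector.
X = np.concatenate([x1, x2])                   # stacked estimates
J = np.vstack([np.eye(2), np.eye(2)])          # design matrix (stacked identities)
C = np.block([[S1, np.zeros((2, 2))],          # replace the zero blocks with the
              [np.zeros((2, 2)), S2]])         # known cross-covariances, if any
Ci = np.linalg.inv(C)
cov_gls = np.linalg.inv(J.T @ Ci @ J)
x_gls = cov_gls @ (J.T @ Ci @ X)               # equals x_mean when cross-blocks are 0
print(x_gls)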
2. Re: How to combine correlated vector-valued estimates
Got stuck in the moderation queue!
|
# Can anyone explain the physical concepts behind the process of making ice without refrigeration?
as stated in
“The process of making ice in the East Indies at Allahabad, Mootegil, and Calcutta”.
On a large open plain, three or four excavations were made, the bottoms of which were sugar-cane, or the stems of the large Indian corn dried. Upon this bed were placed in rows, a number of small, shallow, earthen pans, for containing the water intended to be frozen. These are unglazed, and made of a porous earth. Towards the dusk of the evening, they were filled with soft water, which had been boiled, and then left in the afore-related situation.
The ice-makers attended the pits usually before the sun was above the horizon, and collected in baskets what was frozen.
• If you read on it tells you that it is evaporation, the same process by which sweating keeps you cooler. Evaporation of the water occurs at the surfaces of the porous pots. Evaporation requires energy, which it takes from the remaining water left in the pots, and if enough energy is taken from the water it starts to freeze. – Farcher Feb 9 '17 at 16:58
|
##### The most recent financial statements for Assouad, Inc., are shown here: Income Statement Sales Costs Balance...
The most recent financial statements for Assouad, Inc., are shown here:
Income Statement: Sales $8,700; Costs 6,100; Taxable income $2,600; Taxes (24%) 624; Net inco...
Balance Sheet: Current assets $3,600; Fixed assets 9,000; Total $12,600. Current liabilities $2,400; Long-term debt 3,980; Equity 6,220; Total $12,600.
##### Inverse of 275
Inverse of 275?I think it's -275, but I am not sure....
##### For the following Sn1 reaction, draw the major organic product, identify the nucleophile, substrate, and leaving...
For the following Sn1 reaction (H2O as solvent), draw the major organic product, identify the nucleophile, substrate, and leaving group, and determine the rate limiting step. Select the statement that properly identifies the nucleophile, substrate, and...
##### Definition of Arthritis’s, Osteoporosis, Osteoarthritis, Rheumatoid Arthritis. Causes for these diseases? The best type of exercises...
Definition of Arthritis’s, Osteoporosis, Osteoarthritis, Rheumatoid Arthritis. Causes for these diseases? The best type of exercises with the different diseases?...
##### Create a spreadsheet in excel and develop a sensitivity and answer report. Use the sensitivity and...
Create a spreadsheet in excel and develop a sensitivity and answer report. Use the sensitivity and answer report to answer the questions below. Do not reuse solver. Please show your work and where you get the information: Zippy motorcycle manufacturing produces two popular pocket bikes (miniature mo...
##### 4 Analysis 1 A fish tank is 100 cm long, 50 cm wide and 30 cm...
4 Analysis 1. A fish tank is 100 cm long, 50 cm wide and 30 cm tall. It was tipped on its side edge until the water just reached the top of the tank, as shown in the diagram (labels: 100 cm, 30 cm, d cm). When the tank was returned to rest on its base, the water reached one-quarter of the way up the tank. Wha...
##### Given there are 6 boys and 2 girls arranged in a row, what is the probability there are at least 3 boys separating the 2 girls?
Given there are 6 boys and 2 girls arranged in a row, what is the probability there are at least 3 boys separating the 2 girls?...
|
# Determine the value in question. Note that this course emphasises going back and forth between...
Question 9
Counting (38 Points)
• For each question, determine the value in question. Note that this course emphasises going back and forth between a real-world problem and a mathematical formulation. In every case, express what you are counting as rigorously as possible. Are you counting functions? Subsets? Etc.
• How many 8 character passwords can be made from the alphabet set
A = {0, 2, 4, 6, 8, a, b, c, d, e, f, g, !}
Be sure to rigorously define the set in question you are counting and how it is
generated. (2 points)
• In a class of 12 students, how many ways can you pair up partners? I.e., how
many groups of two can be made. Be sure to rigorously define a set that you are
counting. (2 points)
• How many possible ways can you give $100 in increments of $1 to 10 people?
(2 points)
• Consider an alphabet of 10 digits
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
. How many 8 character passwords can be made if each digit can be used at most
one time? (2 points)
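As a quick sanity check on the counts above, a small Python sketch (illustrative only, not the rigorous set definitions the assignment asks for; the pairing question is ambiguous, so both readings are computed):

import math

print(13 ** 8)                 # 8 independent choices from a 13-symbol alphabet
print(math.comb(12, 2))        # one pair of partners chosen from 12 students: 66
# If instead the whole class is split into six disjoint pairs at once:
print(math.factorial(12) // (math.factorial(6) * 2 ** 6))   # 12!/(6! 2^6) = 10395
print(math.comb(100 + 10 - 1, 10 - 1))  # stars and bars: $100 in $1 steps to 10 people
print(math.perm(10, 8))        # 8-character passwords, each digit used at most once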
|
DriveWorks SDK Reference 3.5.78 Release For Test and Development only
Map Tracker
Note
SW Release Applicability: This tutorial is applicable to modules in NVIDIA DRIVE Software releases.
The Map Tracker tracks a trajectory on the map and selects the lane that corresponds to the current pose of a trajectory. The current lane is updated based on position, orientation, and time. It is chosen from a list of candidate lanes that are reachable from the lane selected in the previous update of the Tracker. Considering connectivity helps to avoid erroneously selecting crossing or nearby lanes that are not actually reachable from the previous position. If there is no previous update or if the result is bad (further away than 10 meters), the search is extended to all nearby lanes ignoring connectivity. This extended search is used only for the first frame or for recovery in case of a tracking error.
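To make the selection policy concrete, here is a minimal Python sketch of the logic just described; every name in it (hd_map, reachable_lanes, nearby_lanes, distance_to) is illustrative only and is not part of the DriveWorks API:

RECOVERY_THRESHOLD_M = 10.0

def update_current_lane(prev_lane, pose, hd_map):
    # Prefer lanes reachable from the previously selected lane; this avoids
    # jumping onto crossing or merely nearby lanes.
    if prev_lane is not None:
        candidates = hd_map.reachable_lanes(prev_lane)
        best = min(candidates, key=lambda lane: lane.distance_to(pose), default=None)
        if best is not None and best.distance_to(pose) <= RECOVERY_THRESHOLD_M:
            return best
    # First frame, or tracking lost: extended search over all nearby lanes,
    # ignoring connectivity.
    return min(hd_map.nearby_lanes(pose), key=lambda lane: lane.distance_to(pose),
               default=None)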
The Map Tracker is initialized with an existing Map Handle:
dwMapTrackerHandle_t* mapTrackerHandle,
The current lane is being tracked by subsequently updating the Tracker with the current position, orientation, and time. Orientation is represented as a rotation matrix that transforms from local coordinates (forward-left-up) into East-North-Up (ENU) coordinate system. For more information, see GPS and HD Maps Coordinate Systems in DriveWorks Conventions. There's a helper function dwMaps_computeRotationFromBearing() to create such a rotation matrix from a bearing (clockwise angle relative to north):
dwMatrix3d* localToENURotation33,
Another helper function dwMaps_computeBearingFromGeoPoints() computes the bearing from 2 gps points:
const dwMapsGeoPoint* position,
The tracker update also works without the orientation (a nullptr can be passed); in that case the angle against the lanes is ignored, and only distance is used when looking for the best point on the candidate lanes.
const dwMapsGeoPoint* position,
const dwMatrix3d* localToENURotation33,
dwTime_t timestamp,
bool ignoreHeight,
bool reset,
dwMapTrackerHandle_t mapTrackerHandle);
If the height at the current position is not known or inaccurate, set ignoreHeight to True.
The tracker can be reset manually, thus discarding the tracking result of the previous update. To obtain the result of the update, call:
const dwMapsLane** currentLane,
dwConstMapTrackerHandle_t mapTrackerHandle);
To identify the candidate lanes among which the current lane has been selected, call:
dwConstMapTrackerHandle_t mapTrackerHandle);
|
# zbMATH — the first resource for mathematics
Elliptic modular forms and their applications. (English) Zbl 1259.11042
Bruinier, Jan Hendrik et al., The 1-2-3 of modular forms. Lectures at a summer school in Nordfjordeid, Norway, June 2004. Berlin: Springer (ISBN 978-3-540-74117-6/pbk). Universitext, 1-103 (2008).
Summary: These notes give a brief introduction to a number of topics in the classical theory of modular forms. Some of these topics are (planned) to be treated in much more detail in a book, currently in preparation, based on various courses held at the Collège de France in the years 2000–2004. Here each topic is treated with the minimum of detail needed to convey the main idea, and longer proofs are omitted.
For the entire collection see [Zbl 1197.11047].
##### MSC:
11F11 Holomorphic modular forms of integral weight
11-02 Research exposition (monographs, survey articles) pertaining to number theory
11E45 Analytic theory (Epstein zeta functions; relations with automorphic forms and functions)
11F20 Dedekind eta function, Dedekind sums
11F25 Hecke-Petersson operators, differential operators (one variable)
11F27 Theta series; Weil representation; theta correspondences
11F67 Special values of automorphic $$L$$-series, periods of automorphic forms, cohomology, modular symbols
|
Part - I (Algorithms, DS, Automata, Engg. Maths) & Part - II(Systems - OS, Networks, COA, Compilers, DBMS)
1. Give the tightest asymptotic notation for the recurrence T(n) = T(n^(1/2)) + O(1).
2. There are 16072016 users on Facebook. A graph is formed where an edge (u,v) is defined when a male is a friend of a female and vice versa. Estimate the number of simple cycles of length 1607 formed in the graph.
3. If p is a prime number and 0<= a <= p^1/2, find a^((p^2)+1) mod p ?
4. There does not exist a pushdown automaton for language L. Which of the following are correct:
a. TM does not exist
b. TM exists
c. DFA exists
d. NFA does not exist
5. There are 3 processes with CPU times of 15, 12 and 5 ms each which arrive at 0, 5 and 8 ms respectively. Scheduler uses Shortest remaining time first scheduling algorithm. Which processes finish first to last?
6. For Fibonacci numbers, F(n) = F(n-1) + F(n-2), which of the following are correct:
a. Iterative solution has an exponential time complexity
b. Iterative solution has a linear time complexity on n
c. Iterative solution has a linear time complexity on input size
d. Recursive solution has exponential complexity
7. There are two sorted arrays of size n each, having distinct elements. What is the maximum number of comparisons required in the worst case?
8. The matrix $\begin{pmatrix} a & a^2 & a^3 \\ b & b^2 & b^3 \\ c & c^2 & c^3 \end{pmatrix}$ is non-invertible (where a, b, c are non-negative integers) if and only if
9. In Huffman Coding, there are 4 characters with frequencies as 1, 0.5, 0.25 and 0.25. Average number of bits for encoding these characters?
10. Question on basics of DAG.
1. A floating point number is represented with 36 bits. The exponent is 8 bits plus a sign bit. The remaining bits consist of the mantissa, including its sign bit. The decimal value -32.25 is represented in normalized floating-point format (the leading 1 is implicit). Write the bitwise value of the EXPONENT part.
2. Page Fault for a series of pages when there are only 3 physical pages available using OPT replacement algorithm.
3. 255.255.255.240 is subnet mask, what is the maximum number of hosts allowed in the subnet?
4. Order of operations in a compiler? (Lexical analysis, semantic, syntax, ..., machine code generation)
5. A processor has 6 stages of a pipeline with S1, S2, S3, S4, S5, S6. Maximum frequency of the processor is 2 GHz. S4 takes the longest amount of time to complete. What is the latency time?
6. There are 100 processes that can run concurrently. Each process requires 1 KB of physical memory. Page size is 64 B. Page table entries take 16 bits. What is the total amount of memory (in bytes) required to store the page tables?
7. Precondition and postcondition: given the statements x <- x + 5, y <- x - y, x <- x + y and the postcondition (x == 15 and y == 5), what is the precondition?
8. A counting semaphore has value 10; wait and signal have their usual meanings. How many processes can concurrently access the critical section?
9. There are Insert and Retrieve_Max operations on a set {}. For n such operations, what is the time complexity of the most efficient algorithm possible?
a. n^2 b. nlogn c. n d. logn
10. There are 3 processes with run times 10, 20 and 30. All arrive at 0. The scheduler knows the time taken by each. What is the average waiting time for the processes?
Programming Test:
1. Implement a^b mod m where a,b and m can be huge. (Hint : O(log n))
2. Ms. Dany wants to clean a house having many rooms. She moves from one room to the next, which takes 1 time unit. Each room has only one exit door. After some time she is bound to reach a room that she has already cleaned. Let the time taken to reach the already-traversed room be 't' from the start. After that she enters a cycle; let the length of the cycle be 'k'. Print 't' and 'k'. (Do not consider the time taken to clean a room.) (Hint: DFS)
3. Longest Increasing Subsequence with the help of 1-D array for dynamic programming. (Hint : MaxTill 1-D array)
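For question 1 of the programming test, a minimal Python sketch of the O(log b) binary-exponentiation idea the hint points at (any language with big integers works; Python's built-in pow(a, b, m) does the same thing):

def mod_pow(a, b, m):
    """Compute a^b mod m with O(log b) multiplications (binary exponentiation)."""
    result = 1 % m
    a %= m
    while b > 0:
        if b & 1:               # lowest bit of the exponent is set
            result = result * a % m
        a = a * a % m           # square the base
        b >>= 1                 # move to the next bit
    return result

assert mod_pow(7, 10**18 + 9, 10**9 + 7) == pow(7, 10**18 + 9, 10**9 + 7)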
posted Jul 17, 2016 | 1,361 views
Thanks for sharing :)
thanks
Which programming language shall be used for the implementation? I hope there is no restriction on it.
My other doubt is, if I use C++ then can I use its STL and the functions provided by it?
5)A processor has 6 stages of a pipeline with S1, S2, S3, S4, S5, S6. Maximum frequency of the processor is 2 GHz. S4 takes the longest amount of time to complete. What is the latency time
Can it be done with given data or we need individual stage delays?
Also, can someone explain what question 7 is asking? Is there missing info, or am I missing a whole topic?
Yes, 5) cannot be done with the given data.
@srestha ma'am, about 6): after solving for 1 process we have to multiply by 100, right? And can you give any hint about 7?
@Anuj Mishra
$6)$ $100$ process requires $100KB$ of physical memory.
Now How much space $1PT$ can contain?i.e. $2^{16}\times 64B=2^{22}B$
So, $1PT$ is enough to store $100KB$ of data
right?
$7)$ is like OS question I think. It is not DAG question. right?? It is like how many $x$ and $y$ values are possible
Are you sure? I think different page tables are required for different processes, so we need to multiply, because there will be 100 page tables.
Why do u think so?
Then there will be enough loss of space in every PT
|
# Math
A sequence is defined recursively by
a(n+1) = 3a(n) - n, a(1) = 2.
Find the first six terms of the sequence.
a1 =
a2 =
a3 =
a4 =
a5 =
a6 =
1. a1 = 2
a2 = 3a1 - 1 = 3*2 - 1 = 5
a3 = 3a2 - 2 = 3*5 - 2 = 13
a4 = 3a3 - 3 = 3*13 - 3 = 36
and so on; note that the amount subtracted is n, so it increases each step, giving a5 = 104 and a6 = 307
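A few lines of Python (illustrative, not from the original thread) confirm the terms:

a = 2
terms = [a]
for n in range(1, 6):        # build a2 through a6
    a = 3 * a - n            # a(n+1) = 3*a(n) - n
    terms.append(a)
print(terms)                 # [2, 5, 13, 36, 104, 307]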
## Similar Questions
1. ### Algebra
1. What are the next two terms of the following sequence? 1, 5, 9... A. 27, 211 B. 10,11 C.12,15 D.13,17 2. Which of the following are examples of arithmetic sequences? Choose all that apply. A. -2,2,6,10 B. 1,3,9,27 C. 5,10,20,40
Question 1 Write the first four terms of the sequence whose general term is given. an = 3n - 1 Answer 2, 3, 4, 5 2, 5, 8, 11 -2, -5, -8, -11 4, 7, 10, 13 3 points Question 2 Write the first four terms of the sequence whose general
3. ### Algebra 1
What is the third term of the sequence defined by the recursive rule f(1)=2, f(n)=2f(n-1)+1? Please help.
4. ### Math
Question 1 Write the first four terms of the sequence whose general term is given. an = 3n - 1 Answer 2, 3, 4, 5 2, 5, 8, 11 -2, -5, -8, -11 4, 7, 10, 13 3 points Question 2 Write the first four terms of the sequence whose general
1. ### Algebra
A sequence is defined recursively by the following rules: f(1)=3 f(n+1)=2⋅f(n)−1 Which of the following statements is true about the sequence? Select all that apply. f(5)=33 f(3)=10 f(4)=18 f(6)=66 f(2)=5 I cannot seem to
2. ### Math
an = 3n - 1 Answer 2, 3, 4, 5 2, 5, 8, 11 -2, -5, -8, -11 4, 7, 10, 13 3 points Question 2 Write the first four terms of the sequence whose general term is given. an = 2(2n - 3) Answer -6, -2, 2, 6 -1, 1, 3, 5 -2, -4, -6, -8 -2,
3. ### Discrete Math
Find f(1), f(2), and f(3) if f(n) is defined recursively by f(0) = 1 and for n = 0, 1, 2, . . . • f(n+1) = f(n) + 2 So, would it be f(n) = f(n+1) + 2? Or would I just keep it like the original and plug in 1, 2, 3. Thanks for any
4. ### Math
If there is a recursively defined sequence such that a1 = sqrt(2) an + 1 = sqrt(2 + an) Show that an < 2 for all n ≥ 1
1. ### Mathematics
What is the 20th term in the sequence defined by tn = n^2 - 2n/2n - 30?
2. ### algebra 2
A sequence is defined recursively by a1=1,an=(an-1+1)^2. Write the first 4 terms of the sequence.
3. ### Algebra
What is the 9th term of the arithmetic sequence defined by the rule below? A(n)=-14+(n-1) (2)
4. ### Boundedness of a Sequence
Question : How do we prove that the sequence (1+(1/n))^n is bounded? My thoughts on this question : We know that, to prove a sequence is bounded, we should prove that it is both bounded above and bounded below. We can see that all
|
1. ## Question on differentiation/integration...
Here is the question...
A 100 litre tank is initially full of a mixture of 10% alcohol and 90% water. Simultaneously, a pump drains the tank at 4 litres/second, while a mixture of 80% alcohol and 20% water is poured in at rate 3 litres/second. Thus the tank will be empty after 100 seconds. Assume that the two liquids mix thoroughly, and let y litres be the amount of alcohol in the tank after t seconds.
dy/dt = 2.4 - ( 4y / (100 - t) )
Find y as a function of t. Hence deduce that the maximum amount of alcohol in the tank occurs after about 34 seconds, and is about 39.5 litres.
* I have identified the integrating factor as A = 4 / (100 - t)
Thank you so so much if you can help!!
2. Originally Posted by reneewilliams
Here is the question...
A 100 litre tank is initially full of a mixture of 10% alcohol and 90% water.
So y(0)= .1(100)= 10 litres.
Simultaneously, a pump drains the tank at 4 litres/second, while a mixture of 80% alcohol and 20% water is poured in at rate 3 litres/second. Thus the tank will be empty after 100 seconds. Assume that the two liquids mix thoroughly, and let y litres be the amount of alcohol in the tank after t seconds.
dy/dt = 2.4 - ( 4y / (100 - t) )
Find y as a function of t. Hence deduce that the maximum amount of alcohol in the tank occurs after about 34 seconds, and is about 39.5 litres.
* I have identified the integrating factor as A = 4 / (100 - t)
Thank you so so much if you can help!!
Good! Since you have an integrating factor, you can solve for y(t). (Don't forget to use y(0)= 10 to determine the constant of integration.) Once you have done that, differentiate y with respect to t and set equal to 0 just as you would with any function to find where there is a maximum or minimum.
3. I've used the integrating factor but am still really struggling! I have...
y / (100 - t)^4 = ∫ 2.4 / (100 - t)^4 dt
What should I end up with now??
Thank you for your help so far!
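Carrying the integration through from where the thread leaves off (a sketch, using y(0) = 10 from the first reply):

$$\frac{y}{(100-t)^4} = \int \frac{2.4}{(100-t)^4}\,dt = \frac{0.8}{(100-t)^3} + C
\quad\Longrightarrow\quad y = 0.8(100-t) + C(100-t)^4.$$

With $y(0) = 10$: $10 = 80 + C \cdot 100^4$, so $C = -7\times10^{-7}$. Setting $dy/dt = -0.8 + 2.8\times10^{-6}(100-t)^3 = 0$ gives $(100-t)^3 \approx 2.857\times10^{5}$, i.e. $t \approx 34.1$ seconds, and then $y \approx 0.8(65.9) - 7\times10^{-7}(65.9)^4 \approx 39.5$ litres, matching the values the question asks you to deduce.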
|
## Search Result
### Search Conditions
Years
All Years
for journal 'PTP'
author 'T.* Saito' : 80
total : 80
### Search Results : 80 articles were found.
1. Progress of Theoretical Physics Vol. 27 No. 3 (1962) pp. 616-618 :
On the $\beta$-decay of $\Lambda$ and $\Sigma^-$ Hyperons
Kimio Fujimura, Tsuneyuki Kotani and Tetsuya Saito
2. Progress of Theoretical Physics Vol. 37 No. 6 (1967) pp. 1304-1313 :
Reggeization of Particles with Identical Quantum Numbers
Gaku Konisi and Takesi Saito
3. Progress of Theoretical Physics Vol. 38 No. 6 (1967) pp. 1361-1368 :
Unsubtracted Dispersion Relations in Weak Interactions and Composite Particles
Takeshi Kanki and Takesi Saito
4. Progress of Theoretical Physics Vol. 40 No. 2 (1968) pp. 394-402 : (5)
Reflection Symmetry of Conspiring Fermion Trajectories
Takeshi Kanki, Gaku Konisi and Takesi Saito
5. Progress of Theoretical Physics Vol. 41 No. 1 (1969) pp. 161-165 : (5)
Barger and Cline's Rule and Bound States of Dirac Particle
Masahiko Hirooka, Gaku Konisi and Takesi Saito
6. Progress of Theoretical Physics Vol. 41 No. 4 (1969) pp. 1057-1063 : (5)
Sum Rules Based on Khuri Amplitudes and Degeneracy of Regge Trajectories
Toshiharu Kawai and Takesi Saito
7. Progress of Theoretical Physics Vol. 41 No. 4 (1969) pp. 1081-1093 : (5)
Derivatives of Baryon Trajectories at $W = 0$
Gaku Konisi and Takesi Saito
8. Progress of Theoretical Physics Vol. 42 No. 4 (1969) pp. 871-878 : (5)
Test of Duality Based on “Khuri” Amplitudes
Masakazu Aoki, Setsuko Hori and Takesi Saito
9. Progress of Theoretical Physics Vol. 44 No. 1 (1970) pp. 230-242 : (5)
Eigenchannels and Unitary Separation of Background Parts in Regge-Pole Theory
Gaku Konisi and Takesi Saito
10. Progress of Theoretical Physics Vol. 44 No. 2 (1970) pp. 546-547 : (5)
Existence of $\mu^*$ and Duality in Weak Processes
Masataka Hosoda and Takesi Saito
11. Progress of Theoretical Physics Vol. 45 No. 2 (1971) pp. 596-604 : (5)
The $\mu^-p$ Resonance and Duality in Weak Processes
Yuji Nakawaki, Takesi Saito and Yasutaka Tanikawa
12. Progress of Theoretical Physics Vol. 46 No. 1 (1971) pp. 346-348 : (5)
Lepton-Baryon Resonances and Duality in Weak Processes
Yuji Nakawaki, Takesi Saito, Yosiki Sakai and Yasutaka Tanikawa
13. Progress of Theoretical Physics Vol. 46 No. 2 (1971) pp. 609-613 : (5)
Unitarity, Factorization Theorem and Removal of Level Degeneracy in Dual Resonance Model
Syôzi Kawati, Gaku Konisi, Takesi Saito and Azuma Tanaka
14. Progress of Theoretical Physics Vol. 46 No. 5 (1971) pp. 1501-1515 : (5)
Operator Formalism of the Dual Resonance Model with Different Trajectories
Gaku Konisi and Takesi Saito
15. Progress of Theoretical Physics Vol. 48 No. 4 (1972) pp. 1324-1337 : (5)
Field Theory of Dual-Resonance Model
Yuji Nakawaki and Takesi Saito
16. Progress of Theoretical Physics Vol. 51 No. 1 (1974) pp. 284-298 : (5)
Generalized Conformal Gauge Conditions in Dual Models
Gaku Konisi and Takesi Saito
17. Progress of Theoretical Physics Vol. 51 No. 3 (1974) pp. 893-911 : (5)
Supergauge Algebra and Electromagnetic Interaction of the Dual Pion Model. I
Gaku Konisi, Satoshi Matsuda, Yuji Nakawaki and Takesi Saito
18. Progress of Theoretical Physics Vol. 51 No. 4 (1974) pp. 1172-1182 : (5)
Supergauge Algebra and Electromagnetic Interaction of the Dual Pion Model. II
Gaku Konisi, Satoshi Matsuda and Takesi Saito
19. Progress of Theoretical Physics Vol. 52 No. 5 (1974) pp. 1652-1668 : (5)
Theory of the Dual-Fermion String
Gaku Konisi and Takesi Saito
20. Progress of Theoretical Physics Vol. 52 No. 6 (1974) pp. 1902-1909 : (5)
The Number of Physical States Satisfying Supergauge Conditions in Dual Resonance Models
Gaku Konisi and Takesi Saito
21. Progress of Theoretical Physics Vol. 53 No. 5 (1975) pp. 1541-1543 : (5)
A New Particle Observed in High Energy Cosmic Ray Interactions
Hisahiko Sugimoto, Yoshihiro Sato and Takeshi Saito
22. Progress of Theoretical Physics Vol. 54 No. 1 (1975) pp. 237-242 : (5)
Quantization for Regge Slopes and $\psi$-Particles
Akio Hosoya and Takesi Saito
23. Progress of Theoretical Physics Vol. 54 No. 4 (1975) pp. 1192-1198 : (5)
Regge-Slope Quantization and Deep-Region Behavior in Dual Resonance Models
Akio Hosoya and Takesi Saito
24. Progress of Theoretical Physics Vol. 55 No. 1 (1976) pp. 280-286 : (5)
Supergauge Algebra and Interaction between Open and Closed Strings
Gaku Konisi and Takesi Saito
25. Progress of Theoretical Physics Vol. 56 No. 6 (1976) pp. 1826-1831 : (5)
A $\Sigma N$ Bound State and the $K^{-}d \to \Lambda p \pi ^{-}$ Reaction
Shijong Ryang and Takesi Saito
26. Progress of Theoretical Physics Vol. 57 No. 1 (1977) pp. 242-247 : (5)
Weak and Electromagnetic Interactions in Superconductivity Model
Takesi Saito and Kazuyasu Shigemoto
27. Progress of Theoretical Physics Vol. 57 No. 2 (1977) pp. 643-653 : (5)
Unity of Weak, Electromagnetic and Strong Interactions in Superconductivity Model
Takesi Saito and Kazuyasu Shigemoto
28. Progress of Theoretical Physics Vol. 57 No. 6 (1977) pp. 2116-2126 : (5)
Collective Excitations in Local-Gauge Invariant Superconductivity Models
Gaku Konisi, Hidenori Miyata, Takesi Saito and Kazuyasu Shigemoto
29. Progress of Theoretical Physics Vol. 58 No. 5 (1977) pp. 1594-1602 : (5)
Effective Actions in Lattice Gauge Theories
Takesi Saito and Kazuyasu Shigemoto
30. Progress of Theoretical Physics Vol. 59 No. 2 (1978) pp. 563-570 : (5)
Dual Models of Weak Interactions of Leptons and Quarks
Yasutaka Tanikawa and Takesi Saito
31. Progress of Theoretical Physics Vol. 59 No. 3 (1978) pp. 964-971 : (5)
The Weinberg Angle in the Unified Theory of All Elementary-Particle Forces
Takesi Saito and Kazuyasu Shigemoto
32. Progress of Theoretical Physics Vol. 61 No. 2 (1979) pp. 608-617 : (5)
Meson-Mass Generation by Instantons
Takesi Saito and Kazuyasu Shigemoto
33. Progress of Theoretical Physics Vol. 61 No. 5 (1979) pp. 1459-1465 : (5)
Meson-Mass Generation by Instantons. II
Takesi Saito and Kazuyasu Shigemoto
34. Progress of Theoretical Physics Vol. 63 No. 1 (1980) pp. 256-261 : (5)
QCD Vacuum in the Strong External Fields
Takesi Saito and Kazuyasu Shigemoto
35. Progress of Theoretical Physics Vol. 63 No. 3 (1980) pp. 993-998 : (5)
Color-Magnetic Permeability of QCD Vacuum
Takesi Saito and Kazuyasu Shigemoto
36. Progress of Theoretical Physics Vol. 64 No. 4 (1980) pp. 1379-1387 : (5)
Shock Waves in Collective Field Theories for Many Particle Systems
Fumiya Oki, Takesi Saito and Kazuyasu Shigemoto
37. Progress of Theoretical Physics Vol. 64 No. 6 (1980) pp. 2201-2206 : (5)
Axially Symmetric Solutions for Field Strengths in Non-Abelian Gauge Theories
Takesi Saito and Kazuyasu Shigemoto
38. Progress of Theoretical Physics Vol. 65 No. 1 (1981) pp. 408-408 : (5)
Axially Symmetric Solutions for Field Strengths in Non-Abelian Gauge Theories
Takesi Saito and Kazuyasu Shigemoto
39. Progress of Theoretical Physics Vol. 69 No. 4 (1983) pp. 1320-1322 : (5)
Exact Solutions to Mean-Field Equations in Lattice Gauge Theories
Shijong Ryang and Takesi Saito
40. Progress of Theoretical Physics Vol. 71 No. 2 (1984) pp. 420-423 : (5)
Block Mean Field Method in Lattice Gauge Theories
Shijong Ryang, Takesi Saito and Kazuyasu Shigemoto
41. Progress of Theoretical Physics Vol. 71 No. 5 (1984) pp. 1108-1111 : (2)
Langevin Equations in Quantized Field Theories
Shijong Ryang and Takesi Saito
42. Progress of Theoretical Physics Vol. 72 No. 1 (1984) pp. 171-174 : (5)
Electromagnetic Properties of the Kaluza-Klein String
Gaku Konisi and Takesi Saito
43. Progress of Theoretical Physics Vol. 73 No. 5 (1985) pp. 1295-1298 : (5)
Canonical Stochastic Quantization
Shijong Ryang, Takesi Saito and Kazuyasu Shigemoto
44. Progress of Theoretical Physics Vol. 76 No. 3 (1986) pp. 680-692 : (5)
Covariant Formulation of Interacting Strings with Compactified Dimensions
Gaku Konisi, Shijong Ryang, Takesi Saito, Kazuyasu Shigemoto, Makio Syûkawa and Wataru Takahasi
45. Progress of Theoretical Physics Vol. 77 No. 4 (1987) pp. 808-812 : (5)
Electromagnetic Properties of Heterotic String
Gaku Konisi, Takesi Saito, Kazuyasu Shigemoto and Wataru Takahasi
46. Progress of Theoretical Physics Vol. 77 No. 4 (1987) pp. 958-974 : (5)
Lorentz-Covariant Lagrangian Formulation of the Interacting Heterotic String Theory
Gaku Konisi, Takesi Saito, Kazuyasu Shigemoto and Wataru Takahasi
47. Progress of Theoretical Physics Vol. 79 No. 6 (1988) pp. 1412-1419 : (5)
Geometric and BRST Formulations of Interacting Strings
Gaku Konisi, Takesi Saito and Wataru Takahasi
48. Progress of Theoretical Physics Vol. 82 No. 1 (1989) pp. 162-170 : (5)
KN Algebra Derived from Virasoro Algebra with Vertex Operators
Gaku Konisi, Takesi Saito and Wataru Takahasi
49. Progress of Theoretical Physics Vol. 82 No. 4 (1989) pp. 813-828 : (5)
KN Superalgebras with Vertex Operators
Gaku Konisi, Takesi Saito and Wataru Takahasi
50. Progress of Theoretical Physics Vol. 82 No. 6 (1989) pp. 1009-1012 : (4)
Skyrme Interaction with Attractive Pairing Property without Density Dependent Force
Tomoyuki Maruyama, Toshi-Yuki Saito and Tatsuo Tsukamoto
51. Progress of Theoretical Physics Vol. 83 No. 3 (1990) pp. 385-389 : (5)
Schwarzian Derivative and Connection in Krichever Novikov Algebra
Tosaku Kunimasa and Takeshi Saito
52. Progress of Theoretical Physics Vol. 85 No. 4 (1991) pp. 705-709 : (5)
Schwarzian Derivative in the Krichever-Novikov Algebra
Takesi Saito
53. Progress of Theoretical Physics Vol. 85 No. 4 (1991) pp. 743-750 : (1)
Schwarzian Connections in the Krichever-Novikov Algebra
Takesi Saito
54. Progress of Theoretical Physics Vol. 86 No. 1 (1991) pp. 229-242 : (5)
Super Schwarzian Connections in Krichever-Novikov Superalgebras
Gaku Konisi, Takesi Saito and Wataru Takahasi
55. Progress of Theoretical Physics Vol. 87 No. 3 (1992) pp. 743-755 : (5)
Weak Constraints in Conformal Field Theories on Higher-Genus Riemann Surfaces
Gaku Konisi, Takeshi Saito and Wataru Takahasi
56. Progress of Theoretical Physics Vol. 87 No. 5 (1992) pp. 1065-1073 : (1)
Construction of Super Schwarzian Connection in Conformal Field Theories on Higher-Genus Super Riemann Surfaces
Gaku Konisi, Takesi Saito and Wataru Takahasi
57. Progress of Theoretical Physics Vol. 89 No. 3 (1993) pp. 719-730 : (5)
Affine Connections and Topological Conformal Field Theories on Higher-Genus Riemann Surfaces
Takesi Saito
58. Progress of Theoretical Physics Vol. 89 No. 3 (1993) pp. 731-740 : (5)
Affine Connections and Two-Dimensional Topological Gravity on a Torus
Takesi Saito
59. Progress of Theoretical Physics Vol. 90 No. 5 (1993) pp. 1131-1147 : (5)
Affine Connections and Topological Conformal Field Theories on Higher-Genus Riemann Surfaces. II
Takesi Saito
60. Progress of Theoretical Physics Vol. 90 No. 5 (1993) pp. 1167-1172 : (5)
The Extended Maurer-Cartan Equations for the BRST Operators on Coadjoint Orbits
Reijiro Kubo and Takesi Saito
61. Progress of Theoretical Physics Vol. 91 No. 6 (1994) pp. 1239-1257 : (5)
Generalized BRST Operators as Extended Maurer-Cartan Forms on Coadjoint Orbits
Reijiro Kubo and Takesi Saito
62. Progress of Theoretical Physics Vol. 92 No. 1 (1994) pp. 249-264 : (5)
$N = 1$ and $2$ Superstrings as Supertopological Models on Higher-Genus Super Riemann Surfaces
Gaku Konisi, Reijiro Kubo and Takesi Saito
63. Progress of Theoretical Physics Vol. 92 No. 4 (1994) pp. 881-888 : (5)
Gravity and Discrete Symmetry
Bin Chen, Takesi Saito and Ke Wu
64. Progress of Theoretical Physics Vol. 92 No. 4 (1994) pp. 889-903 : (5)
Super Differential Forms on Super Riemann Surfaces
Gaku Konisi, Takesi Saito and Wataru Takahasi
65. Progress of Theoretical Physics Vol. 93 No. 1 (1995) pp. 229-246 : (5)
Geometric Actions and the Maurer-Cartan Equation on Coadjoint Orbits of Infinite-Dimensional Lie Groups
Reijiro Kubo and Takesi Saito
66. Progress of Theoretical Physics Vol. 93 No. 3 (1995) pp. 621-629 : (5)
Brans-Dicke Theory and Discrete Symmetry
Gaku Konisi, Takesi Saito and Ke Wu
67. Progress of Theoretical Physics Vol. 93 No. 6 (1995) pp. 1093-1104 : (5)
Higgs Scalar Fields and Discrete Symmetry
Gaku Konisi and Takesi Saito
68. Progress of Theoretical Physics Vol. 95 No. 3 (1996) pp. 657-664 : (5)
Gauge Theory on Discrete Spaces without Recourse to Non-Commutative Geometry
Gaku Konisi and Takesi Saito
69. Progress of Theoretical Physics Vol. 95 No. 6 (1996) pp. 1173-1182 : (5)
$N = 2$ and 4 Super Yang-Mills Theories on $M_{4} \times Z_{2} \times Z_{2}$ Geometry
Bin Chen, Takesi Saito, Hong-Bo Teng, Kunihiko Uehara and Ke Wu
70. Progress of Theoretical Physics Vol. 96 No. 6 (1996) pp. 1291-1299 : (5)
Brans-Dicke Theory on $M_{4} \times Z_{2}$ Geometry
Akira Kokado, Gaku Konisi, Takesi Saito and Kunihiko Uehara
71. Progress of Theoretical Physics Vol. 99 No. 2 (1998) pp. 293-303 : (5)
Brans-Dicke Theory from $\boldsymbol{M_4 \times Z_2}$ Geometry
72. Progress of Theoretical Physics Vol. 100 No. 1 (1998) pp. 165-177 : (5)
Grand Unification from Gauge Theory in $M_4 \times Z_N$
Masahiro Kubo, Ziro Maki, Mikio Nakahara and Takesi Saito
73. Progress of Theoretical Physics Vol. 101 No. 5 (1999) pp. 1105-1118 : (5)
Left-Right Symmetric Model from Geometric Formulation of Gauge Theory in $M_4 \times Z_2 \times Z_2$
Gaku Konisi, Ziro Maki, Mikio Nakahara and Takesi Saito
74. Progress of Theoretical Physics Vol. 104 No. 6 (2000) pp. 1289-1308 : (5)
Constrained Quantization of Charged Strings in a Background $B$ Field and $g$-Factors
Akira Kokado, Gaku Konisi and Takesi Saito
75. Progress of Theoretical Physics Vol. 106 No. 3 (2001) pp. 645-652 : (5)
Interaction between Noncommutative Open Strings and Closed-String Tachyons
Akira Kokado, Gaku Konisi and Takesi Saito
76. Progress of Theoretical Physics Vol. 107 No. 6 (2002) pp. 1235-1243 : (5)
NS Open Strings with $\boldsymbol B$ Fields and Their Interactions with NS Closed Strings
Akira Kokado, Gaku Konisi and Takesi Saito
77. Progress of Theoretical Physics Vol. 110 No. 5 (2003) pp. 975-987 : (5)
Noncommutative Phase Space and the Hall Effect
Akira Kokado, Takashi Okamura and Takesi Saito
78. Progress of Theoretical Physics Vol. 111 No. 5 (2004) pp. 733-743 : (5)
Long-Distance Behavior of $q\bar{q}$ Color Dependent Potentials at Finite Temperature
Atsushi Nakamura and Takuya Saito
79. Progress of Theoretical Physics Vol. 112 No. 1 (2004) pp. 183-188 : (5)
Heavy $qq$ Interaction at Finite Temperature
Atsushi Nakamura and Takuya Saito
80. Progress of Theoretical Physics Vol. 115 No. 1 (2006) pp. 189-200 : (5)
Color Confinement in Coulomb Gauge QCD
Atsushi Nakamura and Takuya Saito
|
# 2013 seminar talk: Weak pigeonhole principles
Talk held by Moritz Müller (KGRC) at the KGRC seminar on 2013-03-14.
### Abstract
The pigeonhole principle states that, if $n+1$ pigeons fly into $n$ holes then some hole must be doubly occupied. Weak pigeonhole principles state the same but for $2n$ or $n^2$ many pigeons. The talk surveys results and open problems concerning the proof complexity of these principles. Then some new results concerning a relativized such principle are presented. This principle states that if at least $2n$ out of $n^2$ many pigeons fly into $n$ holes, then some hole must be doubly occupied. This is joint work with Albert Atserias and Sergi Oliva.
|
## Question on calculating internal friction

Q: I am trying to understand the internal friction in this sketch: how is the torque calculated? I know that the internal friction is a function of the displacement of the movable part with respect to the fixed parts. How is the equation calculated in this case? I know that it is $\tau = nmg$, but I don't know how to calculate $n$.

A: Here you can just consider the fulcrum to be a point where the weight of the top plate "appears" more than at any other point. Hence the effective weight acting on the top plate is $mg$ less its height $x$. Hence the torque acting on the fulcrum is given by $\tau = mg \tan(\alpha/2)$, where $\alpha$ is the angle of tilt. Alternatively, you can calculate the torque on the fulcrum as the torque around the fulcrum at the point of maximum weight and maximum shearing force: $\tau = mg \tan(\alpha/2) = mx\,\frac{dg}{dx}$.
|
# How to do trial and error factoring

shana donohue | June 18, 2010 at 3:01 pm: Ok, I made the animation on factoring trinomials… This is shown in the last video on this page.

Another super fun example! We can rewrite (-2x + 1)(x - 3) by factoring out -1 from the first factor to get (-1)(2x - 1)(x - 3). Then we can distribute that (-1) back into the second factor.

(1) Monomial - an expression containing a single term, e.g. x^3, 7ab. (2) Binomial - an expression containing at most two terms.

The correct factoring is (2x - 3)(3x - 8): we want -25x in the middle, not +25x.

Shana Donohue | October 10, 2011 at 6:42 am: I remember factoring trinomials with non-1 A as a kid and thought it was the messiest thing about math.

About 2/3 of my students preferred "trial and error", for what it's worth. I'm not sure if that is because I introduced it first, or whether I've developed a classroom of creative/intuitive students. I figured that students could use the method that seemed best to them. Thinking of distribution, 13x comes from multiplying the outer terms, (2x + 3)(x + 5) = 2x^2 + 10x + 3x + 15, and adding the inner product 3x to the outer product 10x.

How do you do it? One method is to try trial and error. Sounds like something your teacher would advise you not to do, but if you've got a talent for seeing patterns, you like guessing games. In this case, there are a LOT of possibilities.

Example: x^2 - 5x + 6. Solution: Step 1: The first term is x^2, which is the product of x and x. Therefore, the first term in each bracket must be x.

Factor Trinomials by Unfoiling (Trial and Error): one of the methods that we can use to factor trinomials is trial and error, or unfoiling (Factoring Trinomials by Trial and Error - Ex 2, published 24.04.2010).

Great question!! The integers that multiply to give -5 are -1 and 5, or 1 and -5. We also need to have m + n = 4, which will limit our options. The original binomials must have looked like this: (x + m)(x + n), where m and n are integers.

Thank you for the suggestions. Which technique do you use in class, or do you use both? I will be posting a new animated video on my site that shows how to factor trinomials with A greater than 1. YES, I REALIZE THERE ARE BETTER METHODS FOR FACTORING THESE!

A trinomial can also be factorized by the factor theorem or by using the quadratic formula: $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
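As a concrete illustration of the trial-and-error search described above, here is a minimal Python sketch (not from the original page) that tries every factor pair:

def factor_trinomial(a, b, c):
    """Search integer pairs so that (p*x + q)(r*x + s) == a*x^2 + b*x + c."""
    def divisor_pairs(n):
        # all integer pairs (u, v) with u * v == n
        pairs = []
        for u in range(1, abs(n) + 1):
            if n % u == 0:
                pairs += [(u, n // u), (-u, -(n // u))]
        return pairs

    for p, r in divisor_pairs(a):
        for q, s in divisor_pairs(c):
            if p * s + q * r == b:        # the "outer + inner" middle-term check
                return p, q, r, s
    return None

print(factor_trinomial(6, -25, 24))       # -> (2, -3, 3, -8), i.e. (2x - 3)(3x - 8)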
|
Combine decision trees from GBM to reduce output
I am curious whether any research has been conducted on efficiently combining trees resulting from a gradient boosting process. I routinely run a process that generates 20 or 30 thousand trees in R. I then convert these trees to SAS, which results in hundreds of thousands of lines of code. Many of the trees are very similar, however. This raises the question of whether subsets of related trees can be combined to reduce the amount of code that needs to be generated.
My first approach was to find trees that differed only in their final predictions and de-duplicate them. These trees had identical interactions and splits at every node. This works well when the number of interactions is small (<3); however, there is virtually no performance gain when the interactions grow beyond this size, as the trees are increasingly likely to be unique.
My next thought is that many of the first or second splits are going to be identical so why not consolidate that logic and nest the remaining nodes within? Before heading down that path, though, I thought I would reach out for guidance or insight here.
Is there a way to combine decision trees output from a GBM process to reduce the number necessary to calculate the final score?
I came across this solution after searching for "Compression Ensemble Trees" and was reminded of an approach I read about a couple of years ago. Once the decision forest has been created (in my case 18,828 trees), I applied L1 regularization using the glmnet package in R.
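The same idea sketched in Python instead of R (scikit-learn's Lasso in place of glmnet; the data, sizes, and alpha value are invented for illustration): each boosting stage's prediction becomes one regression feature, and the L1 penalty drives most stage weights to exactly zero, so only the surviving trees need to be translated to SAS.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2]) + 0.1 * rng.normal(size=2000)

gbm = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05).fit(X, y)

# One column per boosting stage: that stage's (learning-rate-scaled) contribution.
stage_preds = np.column_stack(
    [gbm.learning_rate * est[0].predict(X) for est in gbm.estimators_]
)

# L1 regression re-weights the stages; coefficients shrunk exactly to zero
# mark trees that can be dropped from the exported scoring code.
lasso = Lasso(alpha=1e-3).fit(stage_preds, y - gbm.init_.predict(X).ravel())
kept = np.flatnonzero(lasso.coef_)
print(f"kept {kept.size} of {len(gbm.estimators_)} trees")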
|
2006ApJS..165...57C - Astrophys. J., Suppl. Ser., 165, 57-94 (2006/July-0)
The ACS Virgo cluster survey. VIII. The nuclei of early-type galaxies.
COTE P., PIATEK S., FERRARESE L., JORDAN A., MERRITT D., PENG E.W., HASEGAN M., BLAKESLEE J.P., MEI S., WEST M.J., MILOSAVLJEVIC M. and TONRY J.L.
Abstract (from CDS):
The ACS Virgo Cluster Survey is a Hubble Space Telescope program to obtain high-resolution imaging in widely separated bandpasses (F475W~g and F850LP~z) for 100 early-type members of the Virgo Cluster, spanning a range of ~460 in blue luminosity. We use this large, homogeneous data set to examine the innermost structure of these galaxies and to characterize the properties of their compact central nuclei. We present a sharp upward revision in the frequency of nucleation in early-type galaxies brighter than MB~-15 (66% ≲ f_n ≲ 82%) and show that ground-based surveys underestimated the number of nuclei due to surface brightness selection effects, limited sensitivity and poor spatial resolution. We speculate that previously reported claims that nucleated dwarfs are more concentrated toward the center of Virgo than their nonnucleated counterparts may be an artifact of these selection effects. There is no clear evidence from the properties of the nuclei, or from the overall incidence of nucleation, for a change at MB~-17.6, the traditional dividing point between dwarf and giant galaxies. There does, however, appear to be a fundamental transition at MB~-20.5, in the sense that the brighter, "core-Sérsic" galaxies lack resolved (stellar) nuclei. A search for nuclei that may be offset from the photocenters of their host galaxies reveals only five candidates with displacements of more than 0.5 arcsec, all of which are in dwarf galaxies. In each case, however, the evidence suggests that these "nuclei" are, in fact, globular clusters projected close to the galaxy photocenter. Working from a sample of 51 galaxies with prominent nuclei, we find a median half-light radius of ⟨r_h⟩ = 4.2 pc, with the sizes of individual nuclei ranging from 62 pc down to ≤2 pc (i.e., unresolved in our images) in about a half-dozen cases. Excluding these unresolved objects, the nuclei sizes are found to depend on nuclear luminosity according to the relation r_h ∝ L^(0.50±0.03). Because the large majority of nuclei are resolved, we can rule out low-level AGNs as an explanation for the central luminosity excess in almost all cases. On average, the nuclei are ~3.5 mag brighter than a typical globular cluster. Based on their broadband colors, the nuclei appear to have old to intermediate age stellar populations. The colors of the nuclei in galaxies fainter than MB~-17.6 are tightly correlated with their luminosities, and less so with the luminosities of their host galaxies, suggesting that their chemical enrichment histories were governed by local or internal factors. Comparing the nuclei to the "nuclear clusters" found in late-type spiral galaxies reveals a close match in terms of size, luminosity, and overall frequency. A formation mechanism that is rather insensitive to the detailed properties of the host galaxy is required to explain this ubiquity and homogeneity. The mean of the frequency function for the nucleus-to-galaxy luminosity ratio in our nucleated galaxies, ⟨log η⟩ = -2.49 ± 0.09 dex (σ = 0.59 ± 0.10), is indistinguishable from that of the SBH-to-bulge mass ratio, ⟨log(M_BH/M_gal)⟩ = -2.61 ± 0.07 dex (σ = 0.45 ± 0.09), calculated in 23 early-type galaxies with detected supermassive black holes (SBHs). We argue that the compact stellar nuclei found in many of our program galaxies are the low-mass counterparts of the SBHs detected in the bright galaxies.
If this interpretation is correct, then one should think in terms of "central massive objects" - either SBHs or compact stellar nuclei - that accompany the formation of almost all early-type galaxies and contain a mean fraction ~0.3% of the total bulge mass. In this view, SBHs would be the dominant formation mode above MB~-20.5.
|
# Which of the Following Pairs May Give Equal Numerical Values of the Temperature of a Body? - Physics
MCQ
Which of the following pairs may give equal numerical values of the temperature of a body?
#### Options
• Fahrenheit and Kelvin
• Celsius and Kelvin
• Kelvin and Platinum
#### Solution
Fahrenheit and Kelvin
Let θ be the numerical value of the temperature that is the same on the Fahrenheit and Kelvin scales.
We know that the relation between the temperature in Fahrenheit and Kelvin scales is given by
(T_F - 32)/180 = (T_K - 273.15)/100
T_F = T_K = θ
Therefore,
(θ - 32)/180 = (θ - 273.15)/100
5θ - 160 = 9θ - 2458.35
4θ = 2298.35
θ = 574.59°
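A quick numeric check of this algebra (illustrative Python):

theta = (9 * 273.15 - 5 * 32) / 4   # rearranged from 5θ - 160 = 9θ - 2458.35
print(theta)                        # 574.5875, which rounds to 574.59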
If we consider the same for Celsius and Kelvin scales
(T_C-0)/100 =(T_K-273.15)/100
Let the temperature be t
(t-0)/100 = (t-273.15)/100
t = t - 273.15
Thus, t does not exist.
A platinum scale uses platinum as the thermometric substance, and such a scale depends on the properties of the substance used to define it. The Kelvin scale, in contrast, is an absolute scale that does not depend on the properties of any particular substance, so the platinum and Kelvin scales do not agree with each other in general. Therefore, there is no temperature that is guaranteed to have the same numerical value on the platinum and Kelvin scales.
Concept: Temperature and Heat
Is there an error in this question or solution?
#### APPEARS IN
HC Verma Class 11, Class 12 Concepts of Physics Vol. 2
Chapter 1 Heat and Temperature
MCQ | Q 3 | Page 11
|
# 3.2: Two-Step Equations
Difficulty Level: At Grade Created by: CK-12
## Learning Objectives
• Solve a two-step equation using addition, subtraction, multiplication, and division.
• Solve a two-step equation by combining like terms.
• Solve real-world problems using two-step equations.
## Solve a Two-Step Equation
We’ve seen how to solve for an unknown by isolating it on one side of an equation and then evaluating the other side. Now we’ll see how to solve equations where the variable takes more than one step to isolate.
Example 1
Rebecca has three bags containing the same number of marbles, plus two marbles left over. She places them on one side of a balance. Chris, who has more marbles than Rebecca, adds marbles to the other side of the balance. He finds that with 29 marbles, the scales balance. How many marbles are in each bag? Assume the bags weigh nothing.
Solution
We know that the system balances, so the weights on each side must be equal. If we use x to represent the number of marbles in each bag, then we can see that on the left side of the scale we have three bags (each containing x marbles) plus two extra marbles, and on the right side of the scale we have 29 marbles. The balancing of the scales is similar to the balancing of the following equation.
“Three bags plus two marbles equals 29 marbles”: 3x + 2 = 29
To solve for x, we need to first get all the variables (terms containing an x) alone on one side of the equation. We’ve already got all the x’s on one side; now we just need to isolate them.
There are nine marbles in each bag.
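In equation form, the balance reads:

$$3x + 2 = 29 \quad\Rightarrow\quad 3x = 27 \quad\Rightarrow\quad x = 9$$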
We can do the same with the real objects as we did with the equation. Just as we subtracted 2 from both sides of the equals sign, we could remove two marbles from each side of the scale. Because we removed the same number of marbles from each side, we know the scales will still balance.
Then, because there are three bags of marbles on the left-hand side of the scale, we can divide the marbles on the right-hand side into three equal piles. You can see that there are nine marbles in each.
Three bags of marbles balances three piles of nine marbles.
So each bag of marbles balances nine marbles, meaning that each bag contains nine marbles.
Check out http://www.mste.uiuc.edu/pavel/java/balance/ for more interactive balance beam activities!
Example 2
Solve .
This equation has the variable buried in parentheses. To dig it out, we can proceed in one of two ways: we can either distribute the six on the left, or divide both sides by six to remove it from the left. Since the right-hand side of the equation is a multiple of six, it makes sense to divide. That gives us . Then we can subtract 4 from both sides to get .
Example 3
Solve .
It’s always a good idea to get rid of fractions first. Multiplying both sides by 5 gives us , and then we can add 3 to both sides to get .
Example 4
Solve .
First, we’ll cancel the fraction on the left by multiplying by the reciprocal (the multiplicative inverse).
Then we subtract 1 from both sides. ( is equivalent to 1.)
These examples are called two-step equations, because we need to perform two separate operations on the equation to isolate the variable.
## Solve a Two-Step Equation by Combining Like Terms
When we look at a linear equation we see two kinds of terms: those that contain the unknown variable, and those that don’t. When we look at an equation that has an x on both sides, we know that in order to solve it, we need to get all the x terms on one side of the equation. This is called combining like terms. The terms with an x in them are like terms because they contain the same variable (or, as you will see in later chapters, the same combination of variables).
[Table: examples of like terms vs. unlike terms; the specific entries did not survive extraction.]
To add or subtract like terms, we can use the Distributive Property of Multiplication.
To solve an equation with two or more like terms, we need to combine the terms first.
Example 5
Solve .
There are two like terms: the and the (don’t forget that the negative sign applies to everything in the parentheses). So we need to get those terms together. The associative and distributive properties let us rewrite the equation as , and then the commutative property lets us switch around the terms to get , or .
is the same as , or , so our equation becomes
Subtracting 8 from both sides gives us .
And finally, multiplying both sides by -1 gives us .
Example 6
Solve .
This problem requires us to deal with fractions. We need to write all the terms on the left over a common denominator of six.
Then we subtract the fractions to get .
Finally we multiply both sides by 6 to get .
## Solve Real-World Problems Using Two-Step Equations
The hardest part of solving word problems is translating from words to an equation. First, you need to look to see what the equation is asking. What is the unknown for which you have to solve? That will be what your variable stands for. Then, follow what is going on with your variable all the way through the problem.
Example 7
An emergency plumber charges $65 as a call-out fee plus an additional $75 per hour. He arrives at a house at 9:30 and works to repair a water tank. If the total repair bill is $196.25, at what time was the repair completed? In order to solve this problem, we collect the information from the text and convert it to an equation. Unknown: time taken in hours – this will be our x. The bill is made up of two parts: a call-out fee and a per-hour fee. The call-out fee is a flat fee, independent of x; it’s the same no matter how many hours the plumber works. The per-hour part depends on the number of hours x. So the total fee is $65 (no matter what) plus $75x (where x is the number of hours), or 65 + 75x.
Looking at the problem again, we also can see that the total bill is $196.25. So our final equation is 65 + 75x = 196.25. Solving for x: subtract 65 from both sides to get 75x = 131.25, then divide by 75 to get x = 1.75.

Solution

The repair job was completed 1.75 hours after 9:30, so it was completed at 11:15 AM.

Example 8

When Asia was young her Daddy marked her height on the door frame every month. Asia’s Daddy noticed that between the ages of one and three, he could predict her height (in inches) by taking her age in months, adding 75 inches and multiplying the result by one-third. Use this information to answer the following:

a) Write an equation linking her predicted height, h, with her age in months, m.

b) Determine her predicted height on her second birthday.

c) Determine at what age she is predicted to reach three feet tall.

Solution

a) To convert the text to an equation, first determine the type of equation we have. We are going to have an equation that links two variables. Our unknown will change, depending on the information we are given. For example, we could solve for height given age, or solve for age given height. However, the text gives us a way to determine height. Our equation will start with “h =”. The text tells us that we can predict her height by taking her age in months, adding 75 and multiplying by one-third. So our equation is h = (m + 75) · (1/3), or h = (m + 75)/3.

b) To predict Asia’s height on her second birthday, we substitute m = 24 into our equation (because 2 years is 24 months) and solve for h: h = (24 + 75)/3 = 33. Asia’s height on her second birthday was predicted to be 33 inches.

c) To determine the predicted age when she reached three feet, substitute h = 36 into the equation and solve for m: 36 = (m + 75)/3 gives m + 75 = 108, so m = 33. Asia was predicted to be 33 months old when her height was three feet.

Example 9

To convert temperatures in Fahrenheit to temperatures in Celsius, follow the following steps: Take the temperature in degrees Fahrenheit and subtract 32. Then divide the result by 1.8 and this gives the temperature in degrees Celsius.

a) Write an equation that shows the conversion process.

b) Convert 50 degrees Fahrenheit to degrees Celsius.

c) Convert 25 degrees Celsius to degrees Fahrenheit.

d) Convert -40 degrees Celsius to degrees Fahrenheit.

a) The text gives the process to convert Fahrenheit to Celsius. We can write an equation using two variables. We will use F for the temperature in Fahrenheit, and C for the temperature in Celsius: C = (F - 32)/1.8. In order to convert from one temperature scale to another, simply substitute in whichever temperature you know, and solve for the one you don’t know.

b) To convert 50 degrees Fahrenheit to degrees Celsius, substitute F = 50 into the equation: C = (50 - 32)/1.8 = 10. 50 degrees Fahrenheit is equal to 10 degrees Celsius.

c) To convert 25 degrees Celsius to degrees Fahrenheit, substitute C = 25 into the equation: 25 = (F - 32)/1.8 gives F - 32 = 45, so F = 77. 25 degrees Celsius is equal to 77 degrees Fahrenheit.

d) To convert -40 degrees Celsius to degrees Fahrenheit, substitute C = -40 into the equation: -40 = (F - 32)/1.8 gives F - 32 = -72, so F = -40. -40 degrees Celsius is equal to -40 degrees Fahrenheit. (No, that’s not a mistake! This is the one temperature where they are equal.)

## Lesson Summary

• Some equations require more than one operation to solve. Generally, it is good to go from the outside in. If there are parentheses around an expression with a variable in it, cancel what is outside the parentheses first.

• Terms with the same variable in them (or no variable in them) are like terms. Combine like terms (adding or subtracting them from each other) to simplify the expression and solve for the unknown.

## Review Questions

1. Solve the following equations for the unknown variable.

2. Jade is stranded downtown with only $10 to get home. Taxis cost $0.75 per mile, but there is an additional $2.35 hire charge. Write a formula and use it to calculate how many miles she can travel with her money.
3. Jasmin’s Dad is planning a surprise birthday party for her. He will hire a bouncy castle, and will provide party food for all the guests. The bouncy castle costs $150 for the afternoon, and the food will cost $3 per person. Andrew, Jasmin’s Dad, has a budget of $300. Write an equation and use it to determine the maximum number of guests he can invite. 4. The local amusement park sells summer memberships for $50 each. Normal admission to the park costs $25; admission for members costs $15.
1. If Darren wants to spend no more than $100 on trips to the amusement park this summer, how many visits can he make if he buys a membership with part of that money?
2. How many visits can he make if he does not?
3. If he increases his budget to $160, how many visits can he make as a member?
4. And how many as a non-member?
5. For an upcoming school field trip, there must be one adult supervisor for every five children.
1. If the bus seats 40 people, how many children can go on the trip?
2. How many children can go if a second 40-person bus is added?
3. Four of the adult chaperones decide to arrive separately by car. Now how many children can go in the two buses?
|
Hartogs's theorem on separate holomorphicity
In mathematics, Hartogs's theorem is a fundamental result of Friedrich Hartogs in the theory of several complex variables. Roughly speaking, it states that a 'separately analytic' function is continuous. More precisely, if $F:\mathbf{C}^{n}\to\mathbf{C}$ is a function which is analytic in each variable $z_i$, 1 ≤ i ≤ n, while the other variables are held constant, then $F$ is a continuous function.
A corollary is that the function F is then in fact an analytic function in the n-variable sense (i.e. that locally it has a Taylor expansion). Therefore, 'separate analyticity' and 'analyticity' are coincident notions, in the theory of several complex variables.
Starting with the extra hypothesis that the function is continuous (or bounded), the theorem is much easier to prove and in this form is known as Osgood's lemma.
There is no analogue of this theorem for real variables. If we assume that a function $f\colon\mathbf{R}^{n}\to\mathbf{R}$ is differentiable (or even analytic) in each variable separately, it is not true that $f$ will necessarily be continuous. A counterexample in two dimensions is given by $f(x,y)=\frac{xy}{x^{2}+y^{2}}$. If in addition we define $f(0,0)=0$, this function has well-defined partial derivatives in $x$ and $y$ at the origin, but it is not continuous at the origin. (Indeed, the limits along the lines $x=y$ and $x=-y$ are not equal, so there is no way to extend the definition of $f$ to include the origin and have the function be continuous there.)

References

Steven G. Krantz. Function Theory of Several Complex Variables, AMS Chelsea Publishing, Providence, Rhode Island, 1992.

Fuks, Boris Abramovich (1963). Theory of Analytic Functions of Several Complex Variables. ISBN 978-1-4704-4428-0.

External links

"Hartogs theorem", Encyclopedia of Mathematics, EMS Press, 2001 [1994]. This article incorporates material from Hartogs's theorem on separate analyticity on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
|
# [time 813] Re: [time 811] Re: [time 810] Re: [time 809] Stillabout construction of U
Matti Pitkanen ([email protected])
Sun, 26 Sep 1999 08:05:07 +0300 (EET DST)
On Sun, 26 Sep 1999, Hitoshi Kitada wrote:
> Dear Matti,
>
> You have not answered completely to my former questions:
>
> Matti Pitkanen <[email protected]> wrote:
>
> Subject: [time 810] Re: [time 809] Re: [time 808] Stillabout construction of
> U
>
>
> >
> >
> > On Sun, 26 Sep 1999, Hitoshi Kitada wrote:
> >
> > > Dear Matti,
> > >
> > > I have trivial (notational) questions first. I hope you would write
> exactly
> > > (;-) After these points are made clear, I have further questions.
> > >
> > > Matti Pitkanen <[email protected]> wrote:
> > >
> > > Subject: [time 808] Re: [time 806] Re: [time 805] Re: [time 804] Re:
> [time
> > > 803] Re:[time 801] Re: [time 799] Stillabout construction of U
> > >
> > >
> > > >
> > > >
> > > > On Sat, 25 Sep 1999, Hitoshi Kitada wrote:
> > > >
> > > > > Dear Matti,
> > > > >
> > > > > Matti Pitkanen <[email protected]> wrote:
> > > > >
> > > > > Subject: [time 805] Re: [time 804] Re: [time 803] Re: [time 801] Re:
> > > [time
> > > > > 799] Stillabout construction of U
> > > > >
> > > > >
> > > > > [skip]
> >
> > U is *counterpart* of U(-infty,infty) of ordinary QM. I do not
> > however want anymore to add these infinities as arguments of U!
> > They are not needed.
> >
> > [I made considerable amount of work by deleting from chapters
> > of TGD, p-Adic TGD, and consciousness book all these (-infty,infty):ies
> > and $t\rightarrow \infty$:ies. I hope that I need not add them
> > back!(;-)]
> >
> > I define U below: U maps state Psi_0 satisfying single
> > particle Virasoro conditions
> >
> > L_0(n)Psi_0 =0
> >
> > to corresponding scattering state
> >
> > Psi = Psi_0 + [1/(sum_n L_0(n) + i*epsilon)] * L_0(int) Psi
> >
> > (this state must be normalized so that it has unit norm)
> >
> >
> >
> >
> > >
> > > >
> > > >
> > > > a) The action of U on Psi_0 satisfying Virasoro conditions
> > > > for single particle Virasoro generators is
> > > > defined by the formula
> > > >
> > > > Psi = Psi_0 - [1/(L_0(free) + i*epsilon)] L(int) Psi
> > >
> > > To which Hilbert spaces, do Psi and Psi_0 belong?
>
> What Hilbert spaces do you think for Psi and Psi_0 to belong to?
The space of configuration space spinor fields defined in
the space of 3-surfaces. For given 3-surface this space reduces
to space of configuration space spinors and is essentially Fock
space spanned by second quantized free imbedding space
spinor fields induced to the spacetime surface X^4(X^3).
By GCI one can reduce inner product to integration over
3-surfaces belonging to the boundary of imbedding space:
lightcone boundary xCP_2 plus summation over the degenerate
absolute minima of Kaehler action: this is implied by classical
nondeterminism of Kaehler action.
This reduction is a crucial simplification: otherwise one would have
a difficult time with Diff^4 gauge invariance.
>
> > >
> > > And how do you define (or construct) U from this equation?
> >
> > Just as S-matrix is constructed from the scattering solution
> > in ordinary QM. I solve the equation iteratively by subsituting
> > to the right hand side first Psi=Psi_0; calculat Psi_1 and
> > substitute it to right hand side; etc.. U get perturbative
> > expansion for Psi.
> >
> > I normalize them and define the matrix elements of U
> >
> > between two state basis as
> >
> > U_m,N = <Psi_0(m), Psi(N)>
> >
> > This matrix is unitary as an overlap matrix between two orthonormalized
> > state basis.
> >
> >
> >
> > >
> > > >
> > > > satisfies Virasoro condition
> > > >
> > > > L_0(tot)Psi=0 <--> (H-E)Psi=0
> > >
> > > Did you change E=0 to general eigenvalue E?
> >
> > This is just analogy. L_0(tot) corresponds to H-E mathematically.
>
> I questioned this in relation with your equation below:
>
> H_0 Psi_0=0.
>
> Is the eigenvalue for Psi_0 in this equation different from that for Psi in
>
> (H-E)Psi=0
>
> in the above?
>
I think you are taking the analogy with the Hamiltonian too far by assuming
that there IS time development associated with L_0(tot) and a
corresponding eigenvalue. The point is that there is NO such time
development, so your question does not make sense. I just abstract the
general structure of the scattering solution of the Schrodinger equation.
Hermitian operators sum_nL_0(n)<-->H_0, L_0(tot)<-->H, L_0(int)<-->V,
and iterative solution of the scattering solution form of
Schrodinger equation. It is this algebraic structure which gives rise
to unitary S-matrix: one does not need time.
Of course, I *could* be wrong, and certainly this is a purely formal
solution. In any case, it leads to similar string diagrammatics as
encountered in string models: this is what convinces me that I am on
the correct track.
By the way, there is also a possible formal connection with the Bethe-Salpeter
equation describing relativistic bound state formation in QFT. BS
contains a time coordinate for each particle separately.
a) *Physically* the momenta p_+ of various particles associated with
3-surface X^3(n) correspond to energy in Schroedinger equation and off
mass shell particles (using QFT terminology) appear in the scattering
part of solution since otherwise denominator would give a pole.
b) One could also consider the possibility of assigning to each single
particle not only momentum p_+(n) but also time coordinate
x_+(n) with each energy: a kind of Bethe-Salpeter equation with several
times, each approaching infinity, might be an alternative attempt
to assign time in a rigorous manner to these equations. I want to
however emphasize that time or times are not needed for physical
interpretation in TGD framework. They only generate
problems with covariance. If the introduction of time serves
some purpose, this purpose must be proof of unitarity of the resulting
S-matrix.
Best,
MP
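Purely as a toy illustration of the algebraic structure discussed above (none of this is TGD; the matrices are invented), the scattering equation psi = psi0 + (E - H0 + i*eps)^(-1) V psi, the standard Lippmann-Schwinger form that the text parallels, can be resummed for a small matrix model in a few lines of Python:

import numpy as np

H0 = np.diag([1.0, 2.0, 3.0])                 # "free" part, standing in for sum_n L_0(n)
V = 0.1 * np.array([[0.0, 1.0, 0.0],
                    [1.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0]])         # small interaction, standing in for L_0(int)
E, eps = 2.0, 1e-2                            # energy of psi0 and the i*epsilon regulator

G0 = np.linalg.inv((E + 1j * eps) * np.eye(3) - H0)   # free resolvent

psi0 = np.array([0.0, 1.0, 0.0])              # satisfies (H0 - E) psi0 = 0

# Iterating psi = psi0 + G0 V psi and summing the series is equivalent to
# solving the linear system (I - G0 V) psi = psi0:
psi = np.linalg.solve(np.eye(3) - G0 @ V, psi0)
psi = psi / np.linalg.norm(psi)               # normalize to unit norm, as in the text

print(np.round(psi, 3))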
This archive was generated by hypermail 2.0b3 on Sat Oct 16 1999 - 00:36:42 JST
|
# Can the Identity Map be a repeated composition of one other function?
Consider the mapping $f:x\to\frac{1}{x}, (x\ne0)$. It is trivial to see that $f(f(x))=x$.
My question is whether or not there exists a continuous map $g$ such that $g(g(g(x)))\equiv g^{3}(x)=x$? Furthermore, is there a way to find out if there is such a function that $g^{p}(x)=x$ for a prime $p$?
Edit: I realise I was a little unclear - I meant to specify that it was apart from the identity map. The other 'condition' I wanted to impose isn't very precise; I was hoping for a function that didn't seem defined for the purpose. However, the ones that are work perfectly well and they certainly answer the question.
-
So, not 3 other maps, but of 1 map composed 3 times? – Graphth Sep 14 '12 at 19:23
Yes, that's what I meant. – Daniel Littlewood Sep 14 '12 at 19:34
Presumably you want $g$ to be continuous? – Michael Joyce Sep 14 '12 at 19:36
Consider the class of Möbius functions: $$f(x) = {ax+b\over cx+d}$$ for some constants $a,b,c,d$. If we represent Möbius function $f$ by its matrix of coefficients: $$\hat f = \pmatrix{ a&b \\ c&d }$$ then it turns out that to compose two Möbius functions $f$ and $g$ we just multiply their corresponding matrices.
In particular, if $f$ is a Möbius function with matrix $\hat f$, then $f(f(f(x)))$ is also a Möbius function, with matrix ${\hat f}{}^3$.
So it suffices to find a $2\times 2$ matrix $M$ with $M^3 = I$. There are numerous examples, but one such is $$\def\ang{{\frac{2\pi}3}}\pmatrix{\cos\ang & \sin\ang \\ -\sin\ang & \cos\ang} = \pmatrix{-\frac12 & \frac{\sqrt3}2 \\ -\frac{\sqrt3}2 & -\frac12 }.$$ This is just the matrix for the linear transformation of the plane that rotates the plane by a one-third turn about the origin.
So the corresponding Möbius function with period 3 is $$f(x) = {x-\sqrt 3\over x\sqrt3 + 1}.$$
By replacing $2\pi\over 3$ with $2\pi\over n$, you can construct a function with any period you want.
Note that you are not restricted to $M$ with $M^3 = I$. Since $I$ and $kI$ are the same when considered as Möbius functions, $M^3 = kI$ will work for any $k\neq 0$.
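As a quick numerical sanity check of the period-3 map above (an illustrative sketch, not part of the argument):

```python
import math

# Mobius map coming from rotation by 2*pi/3: f composed three times
# should be the identity wherever it is defined.
def f(x):
    return (x - math.sqrt(3)) / (x * math.sqrt(3) + 1)

for x in [0.5, 2.0, -7.25]:
    assert math.isclose(f(f(f(x))), x, rel_tol=1e-9)
```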
-
This looks like exactly what I want, thank you! – Daniel Littlewood Sep 14 '12 at 19:40
I'm glad I could help. – MJD Sep 14 '12 at 19:43
The function $$f(x) = {1\over 1-x}$$ has $f^3(x) = x$ for all $x$ where $f^3(x)$ is defined. (All reals except for 0 and 1.)
Other than the identity, there is no continuous function $\Bbb R\to \Bbb R$ having $f^3(x) = x$ for all $x$, by Sharkovskii's theorem.
-
Of what type? ${}{}$ – Mariano Suárez-Alvarez Sep 14 '12 at 19:54
@MarianoSuárez-Alvarez Better? – MJD Sep 14 '12 at 19:58
What about any finite period for a continuous function? Period 2 is easy... – user641 Sep 15 '12 at 0:06
@SteveD You really want to take a look at Sharkovskii's theorem, which has a lot to say about finite periods of continuous functions on R. – MJD Sep 15 '12 at 0:53
Consider the function $f:\mathbb R\to\mathbb R$ such that $f(i)=i+1$ for all $i\in\{1,\dots,p-1\}$, $f(p)=1$ and $f(x)=x$ for all $x\in\mathbb R\setminus\{1,\dots,p\}$.
-
This example will probably only help in showing that you want the function to satisfy some other conditions which you did not make explicit... – Mariano Suárez-Alvarez Sep 14 '12 at 19:24
If you don’t impose any other conditions on $g$, it’s certainly possible. For $n\in\Bbb Z$ let $I_n=[n,n+1)$, and define
$$g:\Bbb R\to\Bbb R:x\mapsto\begin{cases}x+1,&\text{if }x\in I_n\text{ and }3\not\mid n\\ x-2,&\text{if }x\in I_n\text{ and }3\mid n\;. \end{cases}$$
This map translates $I_{3k+1}$ to $I_{3k+2}$, $I_{3k+2}$ to $I_{3k+3}$, and $I_{3k+3}$ back down to $I_{3k+1}$; it fixes no points of $\Bbb R$.
This generalizes: you can replace $3$ by any integer $m\ge 2$.
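A quick numeric check of this construction (an illustrative sketch; the test points are my own choices):

```python
import math

# g translates I_{3k+1} -> I_{3k+2} -> I_{3k+3} -> I_{3k+1}.
def g(x):
    n = math.floor(x)          # x lies in I_n = [n, n+1)
    return x + 1 if n % 3 != 0 else x - 2

for x in [0.25, 1.5, 2.75, -3.75, 7.0]:
    assert g(g(g(x))) == x     # period 3
    assert g(x) != x           # no fixed points
```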
-
|
## Physics: Principles with Applications (7th Edition)
a. Set the tensile strength of nylon (Table 9-2) equal to the applied stress on the string: tensile strength = $\frac{F_{max}}{A}$ $$F_{max}=(500\times 10^6 N/m^2)( \pi (0.0005 m)^2)= 393 N$$
b. To prevent breakage, use thicker strings with greater cross-sectional area. When the ball hits the strings, the tension increases, which may break the strings.
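A one-line arithmetic check of part (a), as a sketch:

```python
import math

tensile_strength = 500e6        # N/m^2, nylon (Table 9-2)
radius = 0.0005                 # m, string radius
F_max = tensile_strength * math.pi * radius ** 2
print(round(F_max))             # -> 393 N
```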
|
# Which of the following pairs of devices and their functions are correctly matched?
1. Flywheel: For storing kinetic energy
2. Governors: For controlling speeds
3. Lead screw in lathe: For providing feed to the slides
4. Fixtures: For locating work-piece and guiding tools
Select the correct answer using the codes given below.
This question was previously asked in
BPSC AE Paper 6 (Mechanical) 25th March 2022 Official Paper
View all BPSC AE Papers >
1. 1, 3 and 4
2. 2 and 3
3. 1 and 2
4. 2 and 4
## Answer (Detailed Solution Below)
Option 3 : 1 and 2
## Detailed Solution
Explanation:
Flywheel:
• A flywheel is used to control the speed variation caused by the fluctuations of energy during each cycle of operation in an engine.
• It acts as a reservoir of energy that stores energy during the period when the supply of energy is more than the requirement and releases energy during the period when the supply of energy is less than the requirements.
• During the power stroke, the speed of the engine tends to increase, and since a flywheel of heavy mass is connected to the engine, its speed also increases.
Variation of turning moment in a four-stroke engine.
The flywheel controls the cyclic fluctuation of speed by gaining energy during the power stroke and releasing the energy during the remaining stroke.
Let ω = angular speed, I = moment of inertia of the flywheel, and E = kinetic energy of the flywheel.
Then the kinetic energy of the flywheel corresponding to mean angular velocity is given by,
$$E~=(\frac{1}{2}\times I\times \omega ^2)$$
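For illustration, with assumed values $I = 2\ \textrm{kg m}^2$ and $\omega = 100\ \textrm{rad/s}$ (numbers not taken from the question):
$$E = \frac{1}{2}\times 2\times(100)^2 = 1\times10^{4}\ \textrm{J} = 10\ \textrm{kJ}$$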
Governor:
The function of a governor is to regulate the mean speed of an engine when there are variations in the load.
E.g. when the load on an engine increases, its speed decreases, therefore it becomes necessary to increase the supply of working fluid. On the other hand, when the load on the engine decreases, its speed increases and thus less working fluid is required.
• The governor automatically controls the supply of working fluid to the engine with the varying load conditions and keeps the mean speed within certain limits.
• When the load increases, the configuration of the governor changes and a valve is moved to increase the supply of the working fluid, conversely, when the load decreases, the engine speed increases and the governor decreases the supply of working fluid.
Lead screw:
• The lead screw is used for thread cutting. It is made from good quality alloy steel and is provided with acme thread.
• It is driven from the head stock through the feed gearbox and moves the carriage in a longitudinal direction against the workpiece.
Fixtures
• A fixture is a production tool that locates and holds the work-piece
• It does not guide the cutting tools, but the tools can be positioned before cutting with the help of setting blocks and feeler gauges, etc
• Fixtures of different types are made for milling, turning, grinding, welding, bending, etc
• Fixtures are used for medium and heavier-sized workpieces because fixtures are fixed to the machine table.
Jigs
• A jig is a special device which holds, supports, locates and also guides the cutting tool during operation
• Jigs are designed to accommodate one or more components at a time
• Jigs are available for drilling or boring
|
# Description
System Information
Operating system: Windows 10
Graphics card: AMD Radeon (TM) R9 Fury Series
Blender Version
Broken: version: 2.79 (sub 7), branch: blender2.7, commit date: 2019-04-09 16:45, hash: 10f724cec5e3
Submitting certain .blend files to the Flamenco manager results in an error, apparently occurring when copying associated assets from the .blend file to the Flamenco render job folder. Trying multiple files, I found that I can generate this error with an audio file in the video sequencer or when I have an external Alembic file referenced absolutely in a Mesh Sequence Cache modifier, even if that file is in an accessible location.
Exception while running task:
Traceback (most recent call last):
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\flamenco\__init__.py", line 369, in async_execute
outdir, outfile, missing_sources = await self.bat_pack(job_id, filepath)
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\flamenco\__init__.py", line 616, in bat_pack
relative_only=relative_only)
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\flamenco\bat_interface.py", line 168, in copy
await loop.run_in_executor(None, packer.execute)
File "C:\blender-2.79-latest\blender-2.79.0-git.10f724cec5e3-active\2.79\python\lib\concurrent\futures\thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\wheels\blender_asset_tracer-1.1.2-py3-none-any.whl\blender_asset_tracer\pack\__init__.py", line 375, in execute
self._rewrite_paths()
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\wheels\blender_asset_tracer-1.1.2-py3-none-any.whl\blender_asset_tracer\pack\__init__.py", line 462, in _rewrite_paths
assert bfile_pp is not None
AssertionError
Sample file with WAV sound in the video sequencer and an Alembic file reference.
### Event Timeline
Which version of the Blender Cloud add-on was used?
> Which version of the Blender Cloud add-on was used?
Latest version as of yesterday (1.13.4) but with the wheel file you had sent me (attached here).
Sybren A. Stüvel (sybren) lowered the priority of this task from 90 to 30. Edited Jun 12 2019, 10:36 AM
Please give us files we can use to reproduce the issue. The blend file you uploaded packs just fine, albeit with warnings about missing files.
It would also help to know the layout of your files. What directory is set as the project directory? You're using drive R:, is that a network drive (if so, which protocol?) or a local disk?
Adding a sound file to the sequencer in an empty .blend file was enough to reproduce the error for me, but I am attaching this actual file. This file has the proprietary elements removed, including the WAV file in the sequencer. To get this to submit to Flamenco without error, I have to delete the WAV audio and both the Alembic fluid surface and the Ocean modifier fluid surface.
Joel Howe (jhowe) added a comment. Edited Jun 13 2019, 8:51 AM
I have been able to successfully submit to Flamenco a version of the file that has the object with an Ocean modifier, but only when the Cache path for that modifier was empty.
UPDATE: It looks like just changing the Cache path helped submit this: I wasn't able to submit today with an empty value, but after changing it to "//" (a relative reference to the current folder), it submitted.
> It would also help to know the layout of your files. What directory is set as the project directory? You're using drive R:, is that a network drive (if so, which protocol?) or a local disk?
As an addition to these not-yet-answered questions: what have you set as job storage and job output directories? With the Flamenco-Error.blend file you're using the D: drive, what kind of drive is that?
D:/ is a local physical data drive on my main Win 10 workstation.
R:/ is the Flamenco project.
R:/ is a network mapped folder location to D:/Flamenco, so in the case of submitting the job, R:/ points to the same hard drive. R:/ was created using Windows "Map network drive", which I don't believe allows you to specify a protocol.
All Flamenco workers have an R:/ mapped back to my workstation, which is also running Flamenco manager.
Inside R:/ are three folders:
• JobStorage - Flamenco writes its files to these folders
• SourceFiles - I save a copy of .blend files to be rendered here, as I have to be in the Flamenco project (R:) to submit.
• TempImages - Folder for the standard JPEG images being written. This is essentially for the last-rendered-image display on the Flamenco manager screen. I purge this folder a lot, as I use the compositing node editor to generate multi-EXR files.
Bastien Montagne (mont29) raised the priority of this task from 30 to 80. Jul 23 2019, 10:06 AM
@Joel Howe (jhowe) can you try with this BAT wheel file? Just remove any existing blender_asset_tracer*.whl file from the wheels directory of the Blender Cloud add-on, then put this one in place and restart Blender.
Sybren A. Stüvel (sybren) lowered the priority of this task from 80 to 30. Sep 24 2019, 12:57 PM
Error is still occurring for both 2.79 and 2.80, but only when a WAV file is in the video sequencer and I try to submit to Flamenco. The .blend files don't submit and get an error as below, with the display in the Flamenco Render tab reading "Found asset Narration-Audio.wav" when it stops packing. All files submit properly once the audio track has been removed, which is an acceptable workaround for me.
Exception while running task:
Traceback (most recent call last):
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\flamenco\__init__.py", line 369, in async_execute
outdir, outfile, missing_sources = await self.bat_pack(job_id, filepath)
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\flamenco\__init__.py", line 616, in bat_pack
relative_only=relative_only)
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\flamenco\bat_interface.py", line 168, in copy
await loop.run_in_executor(None, packer.execute)
File "C:\Blender\blender-2.79.0-git.10f724cec5e3-active\2.79\python\lib\concurrent\futures\thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\wheels\blender_asset_tracer-1.2.dev1-py3-none-any.whl\blender_asset_tracer\pack\__init__.py", line 383, in execute
self._rewrite_paths()
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\wheels\blender_asset_tracer-1.2.dev1-py3-none-any.whl\blender_asset_tracer\pack\__init__.py", line 470, in _rewrite_paths
assert bfile_pp is not None
AssertionError
<Task finished coro=<FLAMENCO_OT_render.async_execute() done, defined at C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\flamenco\__init__.py:255> exception=AssertionError()>: resulted in exception
Traceback (most recent call last):
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\async_loop.py", line 95, in kick_async_loop
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\flamenco\__init__.py", line 369, in async_execute
outdir, outfile, missing_sources = await self.bat_pack(job_id, filepath)
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\flamenco\__init__.py", line 616, in bat_pack
relative_only=relative_only)
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\flamenco\bat_interface.py", line 168, in copy
await loop.run_in_executor(None, packer.execute)
File "C:\Blender\blender-2.79.0-git.10f724cec5e3-active\2.79\python\lib\concurrent\futures\thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\wheels\blender_asset_tracer-1.2.dev1-py3-none-any.whl\blender_asset_tracer\pack\__init__.py", line 383, in execute
self._rewrite_paths()
File "C:\Users\joel\AppData\Roaming\Blender Foundation\Blender\2.79\scripts\addons\blender_cloud\wheels\blender_asset_tracer-1.2.dev1-py3-none-any.whl\blender_asset_tracer\pack\__init__.py", line 470, in _rewrite_paths
assert bfile_pp is not None
AssertionError
Bastien Montagne (mont29) raised the priority of this task from 30 to 80. Oct 1 2019, 8:46 PM
I have discovered that the error during the job submission can be prevented if "Relative paths only" is turned on in the Blender Cloud settings. I am not sure if that will introduce other issues, or if this is on by default, but it appears that it addresses the issue.
I suspect it has to do with the behaviour of Path.resolve() (doc). AFAIK on Windows it also "resolves" mapped drive letters to their \\SERVER\share\path\to\file notation, which is a pain in the rear end.
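A minimal sketch of the suspected behaviour (the UNC path in the comment is hypothetical):

```python
from pathlib import Path

p = Path(r"R:\project\thefile.blend")
# On Windows, resolve() follows a mapped drive letter to its real target,
# e.g. \\WORKSTATION\Flamenco\project\thefile.blend (hypothetical), which
# then no longer matches an R:-based project directory during packing.
print(p.resolve())
```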
@Joel Howe (jhowe) can you install Python 3.7, run pip install -U blender-asset-tracer and give us the output of bat list R:/project/thefile.blend ? You can also try bat pack -p R:/project R:/project/thefile.blend testpack; this will create a directory testpack in the current working directory, and put the BAT pack in there.
Testing with BAT directly will probably be easier than trying to use Blender with the Cloud add-on, and can provide us with more detailed info. If you replace bat in the above commands with bat -d you should get debug-level messages.
Sybren A. Stüvel (sybren) lowered the priority of this task from 80 to 30. Oct 29 2019, 4:09 PM
Unfortunately I won't have time to do this, but on a positive note I am unable to replicate this issue in Blender 2.81 with latest Blender Cloud plugin.
So you are saying it all works fine with 2.81, correct? That would mean we can close this as fixed.
It is not 100% fixed, but I can confirm that I have certain .blend files with audio tracks in them that submit properly now in 2.81. I am not sure why some others don't submit to Flamenco, but there is the easy workaround of deleting the audio track before submitting the render or turning on "Relative paths only" in the Cloud plugin settings.
Sybren A. Stüvel (sybren) changed the task status from Unknown Status to Unknown Status. Nov 29 2019, 2:48 PM
I'll close this as 'Archived' then; if you know of a way for us to reproduce the issue we can always reopen.
|
# Simulation core components¶
The simulation component contains core functionality that models the behavior of a disease without any interventions and extended functionality to include migration, climate, or other input data to create a more realistic simulation. Disease transmission may be more or less complex depending on the disease being modeled.
Warning
If you modify the source code to add or remove configuration or campaign parameters, you may need to update the code used to produce the schema. You must also verify that your simulations are still scientifically valid.
Each generic EMOD simulation contains the following core base classes:
Simulation
Created by the simulation controller via a SimulationFactory with each run of EMOD.
Node
Corresponds to a geographic area. Each simulation maintains a collection of one or more nodes.
IndividualHuman
Represents a human being. Creates Susceptibility and Infection objects for the collection of individuals it maintains. The file name that defines this class is “Individual” and you may see it likewise shortened in diagrams.
Susceptibility
Manages an individual’s immunity.
Infection
Represents an individual’s infection with a disease.
For vector simulations, the following corresponding classes are derived from generic classes:
SimulationVector
Creates NodeVector objects instead of generic Node objects.
NodeVector
Creates IndividualHumanVector objects instead of generic IndividualHuman objects and creates and manages VectorPopulation objects to model the mosquito vectors.
IndividualHumanVector
Represents a human being and provides the additional layer of functionality for how vectors interact with individual humans.
VectorPopulation
The mosquito species at each node, which can be represented by a collection of cohorts that counts the population of a specific state of mosquitoes (VectorCohort) or by a collection of individual agent mosquitoes (VectorIndividual).
SusceptibilityVector
Represents a human being’s susceptibility to vector-borne disease.
InfectionVector
Represents a human being’s infection with a vector-borne disease.
Substitute these classes wherever you see the generic base classes in the architecture documentation.
For malaria simulations, the following corresponding classes are derived from vector classes:
SimulationMalaria
Creates NodeMalaria objects instead of generic Node objects.
NodeMalaria
Creates IndividualHumanMalaria objects instead of generic IndividualHuman objects and provides various malaria-specific counters for the purposes of reporting.
IndividualHumanMalaria
Represents a human being and provides the additional layer of functionality for how malaria vectors interact with individual humans.
SusceptibilityMalaria
Represents a human being’s susceptibility to malaria. It contains much of the intra-host model, by modeling the specific details of an individual’s immune system in the context of the malaria infection life cycle. It is highly configurable, and interacts closely with InfectionMalaria objects and the IndividualHumanMalaria object, its parent. From a software point of view, SusceptibilityMalaria derives from SusceptibilityVector, but in reality SusceptibilityVector provides minimal epidemiological functionality.
InfectionMalaria
Represents a human being’s infection with malaria. It is the other part of the detailed intra-host malaria model. It models the progression of the malaria parasite through sporozoite, schizont, hepatocyte, merozoite, and gametocyte stages. InfectionMalaria objects are contained within IndividualHumanMalaria objects; there can be multiple such objects. They all interact closely with the SusceptibilityMalaria object. From a software point of view, InfectionMalaria derives from InfectionVector, but in reality InfectionVector provides minimal epidemiological functionality.
For generic simulations, human-to-human transmission uses a homogeneous contagion pool for each node. Every individual in the node sheds disease into the pool and acquires disease from the pool. For vector-borne diseases, disease transmission is more complex as it must take into account the vector life cycle. See Disease transmission.
The relationship between these classes is captured in the following figure.
Simulation components
After the simulation is initialized, all objects in the simulation are updated at each time step, typically a single day. Each object implements a method Update that advances the state of the objects it contains, as follows:
• IndividualHuman updates Susceptibility, Infections, and InterventionsContainer (see the sketch after this list)
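A minimal Python sketch of this per-time-step update cascade (an illustration of the hierarchy described above, not EMOD source code; EMOD itself is written in C++):

```python
# Illustrative sketch of the EMOD-style update cascade (not EMOD source).
class Susceptibility:
    def update(self, dt): pass        # advance immune state

class Infection:
    def update(self, dt): pass        # advance infection state

class InterventionsContainer:
    def update(self, dt): pass        # apply intervention effects

class IndividualHuman:
    def __init__(self):
        self.susceptibility = Susceptibility()
        self.infections = [Infection()]
        self.interventions = InterventionsContainer()

    def update(self, dt):
        self.susceptibility.update(dt)
        for infection in self.infections:
            infection.update(dt)
        self.interventions.update(dt)

class Node:
    def __init__(self, n_people):
        self.individuals = [IndividualHuman() for _ in range(n_people)]

    def update(self, dt):
        for person in self.individuals:
            person.update(dt)

class Simulation:
    def __init__(self, n_nodes=2, n_people=10):
        self.nodes = [Node(n_people) for _ in range(n_nodes)]

    def update(self, dt=1.0):         # one time step, typically one day
        for node in self.nodes:
            node.update(dt)

Simulation().update()
```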
## Simulation¶
As a stochastic model, EMOD uses a random number seed for all simulations. The Simulation object has a data member (RNG) that is an object maintaining the state of the random number generator for the parent Simulation object. The only generator currently supported is pseudo-DES. The random seed is initialized from the configuration parameter Run_Number and from the process MPI rank. All child objects needing access to the RNG must be provided an appropriate (context) pointer by their owners.
The Simulation class contains the following methods:
Method
Description
Populate()
Initializes the simulation. The Populate method initializes the simulation using both the configuration file and the demographic files. Populate calls through to populateFromDemographics to enable the Simulation object to create one or many Node objects populated with IndividualHumans as dictated by the demographics file, in conjunction with the sampling mode and value dictated by the configuration file. If the configuration file indicates that a migration and a climate model are to be used, those input files are also read. Populate also initializes all Reporters.
Update()
Simulation object hierarchy
For multi-core parallelization, the demographics file is read in order on each process, and the identity of each node is compared with a policy assigning nodes to processes, embodied in objects implementing InitialLoadBalancingScheme. If the initial load balancing scheme allows a node for the current rank, the node is created via addNewNodeFromDemographics. After all nodes have been created and propagated, the NodeRankMaps are merged across all processes. See Load-balancing files for more information.
## Node¶
Nodes are model abstractions that represent a population of individuals that interact in a way that does not depend on their geographic location. However, they represent a geographic location with latitude and longitude coordinates, climate information, migration links to other nodes, and miscellaneous demographic information. The Node is always the container for IndividualHumans and the contagion pool. The Node provides important capabilities for how IndividualHumans are created and managed. It can also contain a Climate object and Migration links if those features are enabled. The climate and migration settings are initialized based on the information in the input data files.
The Node class contains the following methods:
Method
Description
PopulateFromDemographics()
The entry point that invokes populateNewIndividualsFromDemographics(InitPop), which adds individuals to a simulation and initializes them. PopulateNewIndividualsFromBirth() operates similarly, but can use different distributions for demographics and initial infections.
Update()
updateInfectivity()
The workhorse of the simulation, it processes the list of all individuals attached to the Node object and updates the force of infection data members in the contagion pool object. It calls a base class function updatePopulationStatistics, which processes all individuals, sets the counters for prevalence reporting, and calls IndividualHuman::GetInfectiousness for all IndividualHuman objects.
The code in GetInfectiousness governs the interaction of the IndividualHuman with the contagion pool object. The rest of the code in updateInfectivity processes the contagion beyond individual contributions. This can include decay of persisting contagion, vector population dynamics, seasonality, etc. This is also where the population-summed infectivity must be scaled by the population in the case of density-independent transmission.
updateVitalDynamics()
Manages community level vital dynamics, primarily births, since deaths occur at the individual level.
By default, an IndividualHuman object is created, tracked, and updated for every person within a node. To reduce memory usage and processing time, you may want to sample such that each IndividualHuman object represents multiple people. There are several different sampling strategies implemented, with different strategies better suited for different simulations. See Sampling for more information.
If migration is enabled, at the end of the Node update, the Node moves all migrating individuals to a separate migration queue for processing. Once the full simulation time step is completed, all migrating individuals are moved from the migration queue and added to their destination nodes.
## IndividualHuman¶
The IndividualHuman class corresponds to human beings within the simulation. Individuals are always contained by a Node object. Each IndividualHuman object may represent one or more human beings, depending on the sampling strategy and value chosen.
The IndividualHuman class contains components for susceptibility, infection, and interventions. Infection and Susceptibility cooperate to represent the detailed dynamics of infection and immune mechanisms. Every IndividualHuman contains a Susceptibility object that represents the state of the immune system over time. Only an infected IndividualHuman contains an Infection object, and it may contain multiple Infection objects. Susceptibility is passed to initialize the infection immunology in Infection::InitInfectionImmunology(). The state of an individual’s susceptibility and infection are updated with Update() methods. Disease-specific models have additional derived classes with properties and methods to represent specifics of the disease biology.
The InterventionsContainer is the mediating structure for how interventions interrupt disease transmission or progression. Campaign distribution results in an Intervention object being added to an individual’s InterventionsContainer, where it remains unless and until it is removed. When an IndividualHuman calls Update(), the InterventionsContainer is updated and its effects are applied to the IndividualHuman. These effects are used in the individual, infection, and susceptibility update operations. If migration is enabled, at the end of each individual’s update step EMOD checks if the individual is scheduled for migration (IndividualHuman::CheckForMigration()), setting a flag accordingly.
The IndividualHuman class contains the following methods:
Method
Description
Update()
Advances the state of both the infection and the immune system and then registers any necessary changes in an individual’s state resulting from those dynamics (that is, death, paralysis, or clearance). It also updates intrinsic vital dynamics, intervention effects, migration, and exposure to infectivity of the appropriate social network.
ExposeToInfectivity()
Passes the IndividualHuman to the ExposeIndividual() function if it is exposed to infectivity at a time step.
UpdateInfectiousness()
Advances the quantity of contagion deposited to the contagion pool by an IndividualHuman at each time step of their infectious period. This is explained in more detail below.
### Disease transmission¶
Transmission of disease is mediated through a pool mechanism which tracks abstract quantities of contagion. The pool mediates individual acquisition and transmission of infections as well as external processes that modify the infectivity dynamics external to individuals. The pool provides basic mechanisms for depositing, decaying, and querying quantities of contagion which are associated with a specific StrainIdentity. The pool internally manages a separate ContagionPopulation for each possible antigen identity. ContagionPopulations have further structure and manage an array of contagion quantities for each substrain identity.
Each IndividualHuman has a sampling weight $$W_i$$ and a total infectiousness $$X_i$$, the rate at which contacts with the infectious individual become infected. This rate can be modified by transmission-reducing immunity or heterogeneous contact rates, which are gathered in $$Y_{i,transmit}$$, and by the transmission-reducing effects of interventions, such as transmission-blocking vaccines, in the factor $$Z_{i,transmit}$$. The sampling weight $$W_i$$ is not included in the probability of acquiring a new infection; sampled particles are simulated as single individuals, and their weighting $$W_i$$ is used to determine their effects upon the rest of the simulation. The total infectiousness $$T$$ of the local human population is then calculated as:
$T = \sum_iW_iX_iY_{i,transmit}Z_{i,transmit}$
For simulation of population density-independent transmission, individual infectiousness $$X_i$$ includes the average contact rate of the infectious individual, so this total infectiousness is divided by the population $$P$$ to get the force of infection $$FI=\frac{T}{P}$$ for each individual in the population. The base rate of acquisition of new infections per person is then $$FI$$, which can be modified for each individual $$i$$ by their characteristics $$Y_{i,acquire}$$ and interventions $$Z_{i,acquire}$$. Over a time step $$\Delta t$$, the probability of an individual acquiring a new infection is then:
$P_i = 1- \text{exp}(-FI\,Y_{i,acquire}Z_{i,acquire}\Delta t)$
A new infection receives an incubation period and infectious period from the input configuration (a constant at present, but possibly from specified distributions in the future) and a counter tracks the full latency, which is possible when simulating individual particles. After the incubation period, the counter for the infectious period begins, during which time the infection contributes to the individual’s infectiousness $$X_i$$.
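The two formulas above can be combined into a short numerical sketch (illustrative values only; the variable names mirror the symbols in the text):

```python
import math

# Per-individual sampling weight W, infectiousness X, and transmit-side
# modifiers Y_t (immunity/contact rate) and Z_t (interventions); the
# values here are assumptions for illustration.
people = [
    {"W": 1.0, "X": 0.02, "Y_t": 1.0, "Z_t": 1.0},
    {"W": 4.0, "X": 0.05, "Y_t": 0.8, "Z_t": 0.5},
]
P = sum(p["W"] for p in people)                        # population size

T = sum(p["W"] * p["X"] * p["Y_t"] * p["Z_t"] for p in people)
FI = T / P                                             # force of infection

# Probability one susceptible individual acquires infection over dt days.
Y_acq, Z_acq, dt = 1.0, 1.0, 1.0
p_acquire = 1.0 - math.exp(-FI * Y_acq * Z_acq * dt)
print(f"FI = {FI:.4f}, P(acquire) = {p_acquire:.4f}")
```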
For vector-borne diseases, during an Update() for a node, the infectiousness of vectors on the local human population is calculated, taking into account the vector life cycle along with the effects of their interventions, such as bednets. Weather data from the Climate object is then used to update the available larval habitat for each local vector species. Multiple local vector species are supported, and after the weather updates, each vector species is advanced through a time step with the VectorPopulation::TimeStep() method. This calculates the life cycle and vector infection dynamics, along with the feeding cycle. It updates the indoor and outdoor bites, and indoor and outdoor infectious bites on the local human population. Each Individual in the local human population is then advanced in an Update(), propagating its infections forward in time, updating its interventions and infectivity, and acquiring any new infections.
NodeVector::updateInfectivity() calls the VectorPopulation::TimeStep() method for mortality adjustments due to weather and the human population and, when completed, processes each list. The effects of the human population are accounted for in VectorPopulation::Update_Host_Effects(), which determines the outcomes for indoor and outdoor attempted feeds on each individual human, and then on the human population as a whole, weighting by any heterogeneous biting. The processing order is:
1. All infectious female mosquitoes
2. All infected female mosquitoes
3. All adult uninfected female mosquitoes
|
This article describes how we verified the strength of the blades of our small wind turbine by using the IEC 61400-2 standard.
Important Note
Preliminary content from design report
The content of this article is taken from the December 2013 preliminary design report. It represents the design intent at that stage but does not necessarily show the final version of the HOLI 300 turbine design.
The calculated forces and moments from the load cases have to be converted into equivalent component stresses to be compared with the material allowed stresses. The following points have to be considered for the calculation of the equivalent stresses:
• The stress level can differ along the component and show peaks
• The stress flow and its directions are important
• Different sizes of the component and surface treatments must be considered, including any change of material due to manufacturing steps such as welding.
The blade parameters for stress calculation are as follows.
$D=0.06\,\textrm{m}$
$d=0.038\,\textrm{m}$
$A_{\textrm{B}}=\frac{\pi(D-d)^{2}}{4}=3.8\cdot10^{-4}\,\textrm{m}^{2}$
Second moment of inertia for the blade root (for a circle):
$I_{\textrm{xyB}}=\frac{\pi(D^{4}-d^{4})}{64}=5.34\cdot10^{-7}\,\textrm{m}^{4}$
$c_{\textrm{B}}=0.03\,\textrm{m}$ (distance from centroid to point of max. stress)
$W_{\textrm{B}}=\frac{I_{\textrm{xyB}}}{c_{\textrm{B}}}=1.78\cdot10^{-5}\,\textrm{m}^{3}$
The ultimate strength of the blade material chosen, PEEK HP3 Plastic, is $f_{\textrm{kB}}=90\,\textrm{MPa}$.
The partial safety factors for loads, according to Table 7 of IEC 61400-2, for the simple load calculation method, for fatigue loads is $\gamma_{\textrm{f-f}}=1.0$, and the partial safety factor for ultimate loads is $\gamma_{\textrm{f-u}}=3.0$.
The fully characterised partial safety factors from Table 6 of IEC 61400-2 were chosen to be used for the PEEK HP3 blade material. For fatigue and ultimate strength the partial material safety factors are $\gamma_{\textrm{m-f}}=10.0$ and $\gamma_{\textrm{m-u}}=3.0$ respectively. The fully characterised values were chosen to err on the side of caution.
The following equation will give the allowed equivalent stress for the different load cases for ultimate strength.
$\sigma_{\textrm{dB}}\leq\frac{f_{\textrm{kB}}}{\gamma_{\textrm{m-u}}\cdot\gamma_{\textrm{f-u}}}=\frac{90}{3\cdot3}=10.0\,\textrm{MPa}$
This value should not be exceeded in the following equivalent load case calculations.
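The section properties and the allowed stress above can be reproduced with a short script (a sketch following the report's own formulas; note that the area expression reproduces the one used above, whereas the standard annulus area would be $\pi(D^{2}-d^{2})/4$):

```python
import math

D, d = 0.06, 0.038                    # outer/inner diameter of blade root, m
A_B = math.pi * (D - d) ** 2 / 4      # area as used in the report, ~3.8e-4 m^2
I_xyB = math.pi * (D**4 - d**4) / 64  # second moment of area, ~5.34e-7 m^4
c_B = 0.03                            # distance to point of max. stress, m
W_B = I_xyB / c_B                     # section modulus, ~1.78e-5 m^3

f_kB = 90e6                           # ultimate strength of PEEK HP3, Pa
gamma_m_u, gamma_f_u = 3.0, 3.0       # partial safety factors
sigma_allowed = f_kB / (gamma_m_u * gamma_f_u)
print(f"allowed stress = {sigma_allowed / 1e6:.1f} MPa")   # 10.0 MPa
```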
### Equivalent stress for case A: Normal operation
This first load case involves only fatigue load, but for further calculations it is also necessary to find the equivalent stress level for normal operation, using the following equations [1].
Equivalent stress formula for axial load:
$\sigma_{\textrm{zB}}=\frac{\triangle F_{\textrm{zB}}}{A_{\textrm{B}}}=6.02\,\textrm{MPa}$
Equivalent stress formula for bending:
$\sigma_{\textrm{MB}}=\frac{\sqrt{\triangle M_{\textrm{xB}}^{2}+\triangle M_{\textrm{yB}}^{2}}}{W_{\textrm{B}}}=1.221\,\textrm{MPa}$
Combined (axial + bending – peak to peak variation):
$\sigma_{\textrm{eqB}}=\sigma_{\textrm{zB}}+\sigma_{\textrm{MB}}=7.24\,\textrm{MPa}$
It is assumed that the PEEK HP3 plastic would perform similarly to the Victrex 450G. Since the combined axial and bending stress of the blade is a peak-to-peak variation, and only the amplitude of the variation is shown on the S,N-diagram, the combined stress has to be divided by 2, and the material partial safety factor of 10 for fatigue has to be applied. With the resulting equivalent stress of $36.2\,\textrm{MPa}$, it can be determined that the material would collapse after $N\approx1\cdot10^{20}$ cycles.
$N\approx1\cdot10^{20}$ number of cycles to fail at the stress $s_{\textrm{i}}\cdot\gamma_{\textrm{m-f}}\cdot\gamma_{\textrm{f-f}}=36.2\,\textrm{MPa}$
$n_{\textrm{i}}=1.69\cdot10^{10}$ number of fatigue cycles
$s_{\textrm{i}}=3.62\,\textrm{MPa}$ Amplitude of maximum stress
$\gamma_{\textrm{m-f}}=10.0$ $\gamma_{\textrm{f-f}}=1.00$
$Damage=\sum\frac{n_{\textrm{i}}}{N\left(\gamma_{\textrm{m-f}}\cdot\gamma_{\textrm{f-f}}\cdot s_{\textrm{i}}\right)}=1.7\cdot10^{-10}\leq1.0$
### Equivalent stress for case B: Yawing
Equivalent stress formula for the blade root:
$\sigma_{\textrm{eq}}=\frac{\triangle M_{\textrm{yB}}}{W_{\textrm{B}}}=5.17\,\textrm{MPa}$
### Equivalent stress for case C: Yaw error
Equivalent stress formula for only the bending moment on the blades:
$\sigma_{\textrm{eq}}=\frac{\triangle M_{\textrm{yB}}}{W_{\textrm{B}}}=1.12\,\textrm{MPa}$
### Equivalent stress for case E: Maximum rotational speed
$\sigma_{\textrm{eq}}=\frac{F_{\textrm{zB}}}{A_{\textrm{B}}}=9.59\,\textrm{MPa}$
### Equivalent stress for case F: Short at load connection
$\sigma_{\textrm{eq}}=\frac{M_{\textrm{xB}}}{W_{\textrm{B}}}=0.76\,\textrm{MPa}$
### Equivalent stress for case G: Shutdown (Braking)
Equivalent stress formula for blade root:
$\sigma_{\textrm{eq}}=\frac{M_{\textrm{xB}}}{W_{\textrm{B}}}=0.99\,\textrm{MPa}$
$\sigma_{\textrm{eq}}=\frac{M_{\textrm{yB}}}{W_{\textrm{B}}}=3.55\,\textrm{MPa}$
### Summary
This wind turbine faces different environmental conditions. The following table compares the calculated equivalent stresses under certain environmental impacts on the SWT with the allowed material stresses. Because we only used simplified equations in this load simulation, it is necessary to evaluate the SWT model in further field studies. So far all equivalent stresses are below the material stress limits. Environmental conditions other than wind also have an impact on the integrity of this SWT. It is assumed that the SWT is located in middle Europe with moderate temperature, humidity and solar radiation.
The results of load calculation are presented in the table below.
[1] IEC, “Wind turbines – part 2: design requirements for small wind turbines,” , iss. 61400-2, 2006.
[Bibtex]
@STANDARD{InternationalElectrotechnicalCommission2006,
title = {Wind turbines - Part 2: Design requirements for small wind turbines},
organization = {International Electrotechnical Commission},
institution = {TC/SC 88},
author = {{IEC}},
language = {English},
number = {61400-2},
revision = {2},
year = {2006},
owner = {helgehamann},
timestamp = {2013.12.15}
}
|
# Radiometer Equation Applied to Interferometers
### Reference Material
Recall that for a single dish, we had:
${\displaystyle {\frac {S}{N}}={\frac {T_{src}}{T_{rms}}}={\frac {T_{src}}{T_{sys}}}{\sqrt {\tau \Delta \nu }}\,\!}$
where:
• ${\displaystyle T_{src}}$ is the signal of the source you’re observing
• ${\displaystyle T_{sys}}$ is your system temperature
• ${\displaystyle T_{rms}}$ is the noise in your system, or the RMS fluctuations in your system temperature
• ${\displaystyle \Delta \nu }$ is the bandwidth of your correlator (in Hz)
• ${\displaystyle \tau }$ is integration time (seconds)
In some sense, this equation applies just as well to interferometers, so long as you are treating all the antennas in the array together as the “dish”. If you really want to break it down by antenna, though, we just need a couple more steps.
The first key is to recognize that, applied to an interferometer, the radiometer equation is really concerned with the synthesized beam — that is, the beam you get when you phase all of your antennas together. But for an interferometer, because we get to see the correlation of each antenna pair separately, we make a choice to define ${\displaystyle T_{sys}}$ to be noise on each correlation pair (i.e. visibility). All this really means is that we have chosen to use the primary beam of individual antenna elements as the angular area in our gain/noise temperature calculations.
So applying the Radiometer Equation to an interferometer really boils down to figuring out the difference in beam area between the primary beam of an individual element, and the synthesized beam of the array. Unfortunately, beam areas can be rather tricky to calculate (the full-width-half-max method is woefully inadequate for sparse arrays). But fortunately, there’s an easier way.
We know that, however long our observation is, and whatever our phase center is, we will always be adding ${\displaystyle N(N-1)/2}$ visibilities together (where ${\displaystyle N}$ is the number of antennas) with some phasor applied to each number. When dealing with noise, we don’t really care what the phasor is; we know that for Gaussian noise, averaging that many samples (with equal weight) will beat down noise as the square root. Finally, there is one more factor to deal with. All our visibilities are complex numbers, but we know the sky to be real-valued, so we get to throw out the half of our noise that ends up in the imaginary part of the sum. Another way of saying this exact same thing is that, for a real valued sky, we get two uv-samples: one at (u,v), and one at (-u,-v). In either case, we can effectively pretend we actually have ${\displaystyle N(N-1)}$ independent samples. So at the end of the day, the radiometer equation for an interferometer is:
${\displaystyle {\frac {S}{N}}={\frac {T_{src}}{T_{rms}}}={\frac {T_{src}}{T_{sys}}}{\sqrt {N(N-1)\tau \Delta \nu }}\,\!}$
It’s worth stressing that this only applies if you equally weight all of your visibility samples. If you start using weighting schemes other than natural weighting, you need to take those weights into account in your radiometer equation (and they can only hurt you).
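As a worked sketch of the final equation (illustrative numbers, natural weighting assumed):

```python
import math

def interferometer_snr(T_src, T_sys, n_ant, tau, bandwidth):
    """S/N for a naturally weighted interferometer (values in K, s, Hz)."""
    return (T_src / T_sys) * math.sqrt(n_ant * (n_ant - 1) * tau * bandwidth)

# e.g. a 10 mK source, 50 K system temperature, 64 antennas, 1 h, 100 MHz
print(interferometer_snr(0.01, 50.0, 64, 3600.0, 100e6))
```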
|
# Cross product of cohomology classes: intuition
Let $X$ and $Y$ be topological spaces and consider cohomology over a ring $R.$ Hatcher (in his standard Algebraic Topology text) defines the cross product of cohomology classes
$$H^k(X) \times H^l(Y) \to H^{k+l}(X\times Y),$$
by $a\times b = p_1^*(a) \smile p_2^*(b),$ with $p_1$ and $p_2$ the projection maps from $X\times Y$ onto $X$ and $Y.$ Here $\smile$ is the cup product of cohomology classes.
My question: while Hatcher gives some idea for how to visualize the $\textit{cup product}$ (in terms of intersections of maps on simplices - see in particular pages 187-189), he does not give much intuition into the $\textit{cross product}$. How should a student who encounters this cross product for the first time think of it?
I'd be particularly interested in simple examples/ nice pictures/ any general comments to build intuition. Comments on the cup product are also welcome.
• If you think in terms of de Rham cohomology, you are pulling back the forms then take their wedge product - I guess the most natural way to create a differential form on $X \times Y$.
– user27126
Apr 19 '14 at 2:21
• Unfortunately, I don't really know much de Rham cohomology, but thanks for the comment. I'll look into it! Apr 19 '14 at 18:55
• This is an old question, but anyway... for CW complexes the geometric picture is explained pretty well in Hatchers Kunneth formula appendix to chapter 3. The point is to use the natural cw complex structure on a product of CW complexes, given by taking cross products of cells. The complexity of that section is the problem of keeping track of orientations / generators for cellular homology, in order to have a Leibnitz formula for the differential of products of cells in the cellular homology complex of the product. Aug 5 '17 at 11:30
Sorry, this isn't a full answer, rather some algebraic hints at how you might relate the cross product to your intuition about the cup product (wrt intersections) and to linear algebra. It should really be a comment but it's too long.
First, a quick review! The cross product, $\times$, is actually part of the cup product:
$$x \smile y := \Delta^*(x \times y)$$
The diagonal embedding $X \xrightarrow{\Delta} X \times X$ is simply a canonical way to embed a space X into an ambient space endowed with the product topology, $\Delta X := \{(x,x) \in X \times X\}$. It is useful when we want to look in the neighborhood of a space $X$ (e.g., at germs of functions on $X$), but $X$ sits in no ambient space. The word "diagonal embedding" comes from the example of the embedding $R^1 \hookrightarrow R^2$ taking $x \mapsto (x,x)$, that is, taking the line $R^1$ and embedding it into $R^2$ as the line $y=x$.
This is the reason we have a cup product in cohomology and not in homology. The map induced by $\Delta$ on homology, $H_*(X) \xrightarrow{\Delta_*} H_*(X \times X)$, being the "wrong direction" for a product (from 1 to 2), whereas the induced map on cohomology, $H^*(X \times X) \xrightarrow{\Delta^*} H^*(X)$, is the right direction (from 2 to 1), and lends itself to the following precomposition.
The cup product (where $p+q=k$), $$H^k(X) \longleftarrow H^p(X) \times H^q(X)$$ is actually $$H^k(X) \leftarrow H^k(X \times X) \leftarrow H^p(X) \times H^q(X)$$
What is this mysterious map $H^p(X) \times H^q(X) \to H^{p+q}(X \times X)$? It's called the "cross product", $\times$, and, as you know, is defined more generally on $H^p(X) \times H^q(Y) \to H^{p+q}(X \times Y)$.
The cross product relates the cohomology groups of two different spaces to the cohomology groups of their product space.
I also think in pictures, so I understand the desire for a cartoon picture, but the algebra is actually quite enlightening here.
The tensor product of two graded abelian groups is $$(A^* \otimes B^*)^n := \bigoplus_{i+j=n} A^i \otimes B^{\text{ }j}$$ which, if you prefer thinking in terms of smash rather than tensor, will remind you of how we get from an $i$-cell in a pointed CW-complex X and a $j$-cell in a pointed CW-complex in Y, to an $i+j$-cell in $X \wedge Y$.
$$(X \wedge Y)_{n} := \times_{i+j=n}(X_i \wedge Y_j)$$
Returning our gaze to the cross product: $$H^p(C^*) \times H^q(D^*) \xrightarrow{\times} H^{p+q}(C^* \otimes D^*)$$
defined on representatives by $$(\alpha \times \beta)\Big(\sum_i w_i \otimes z_i\Big) := \sum_i \alpha(w_i)\,\beta(z_i),$$
where $\alpha(w_i)$ is 0 when $w_i$ and $\alpha$ are of different degrees, similarly with $\beta(z_i)$.
This might strike you as a bit strange: how does this definition of the cross product match up with yours? Let's revisit Hatcher's definition in terms of the map induced by the projections of the Cartesian product.
$$X \leftarrow X \times Y \rightarrow Y$$ $$H^*(X) \rightarrow H^*(X \times Y) \leftarrow H^*(Y)$$
Taking the pullback:
where the group labeled by the notational atrocity $H^*(X) \times_{H^*(X \times Y)} H^*(Y)$ is defined as the friendly pairs of objects $\{(a,b) \text{ such that } p_1^*(a) = p_2^*(b),\ a \in H^*(X),\ b \in H^*(Y)\}$.
Hatcher's definition of $a \times b$ is then:
$$H^*(X \times Y) \times H^*(X \times Y) \to H^*(X \times Y)$$ $$\big(p_1^*(a),\, p_2^*(b)\big) \mapsto p_1^*(a) \smile p_2^*(b)$$
So, recall that I defined the cup product as $p_1^*(a) \smile p_2^*(b) := \Delta^*(p_1^*(a) \times p_2^*(b))$. Well, wait a minute, how do we define the diagonal embedding on $X \times Y$? Simple: $\Delta(X \times Y) := \{(x,y) \text{ such that } x = y\} = X \cap Y$; the diagonal embedding is the embedding $X \cap Y \hookrightarrow X \times Y$ (which, since $X \cap X = X$, reduces to the usual diagonal embedding).
Assuming that the definitions of cup product are equivalent, $a \times b = p_1^*(a) \smile p_2^*(b) = \Delta^*(p_1^*(a) \times p_2^*(b))$. From the point of view of the cup product, we now have a hint as to what is going on: we're shoving $a$ and $b$ into the same range so that we can use our intuition about the cup product (which is only defined for $X = Y$) to reckon with them.
Bott says something like: a cocycle is a creature which lurks over spaces, pounces on cycles, eats them, and spits out numbers. In other words, it's a lot like a linear functional. In fact, I encourage you to recall that maps $f$ between modules satisfy the same rules that linear maps do, e.g., $f(ax +y) = af(x) + f(y)$. Indeed, this notation, i.e., $(C^k, d^k):= (Hom_R(C_k, R), d_k^*)$, a bit abhorrent at first glance, is actually telling us something. It's just borrowing from familiar linear algebra, for example, for a linear map between R-vector spaces $f: V \to W$, when we look at the dual spaces $V^*:=Hom_{R-Vect}(V, R)$ and $W^*:=Hom_{R-Vect}(W, R)$, we take the transpose $f^*: W^* \to V^*$.
The cross product, I assume, gets its name from being a generalization of the cross product of vectors in $R^3$ we know and love. For any two 1-forms, $\alpha$ and $\beta$, their wedge product is equivalent to the Hodge dual of their cross product:
$$\alpha \wedge \beta = *(\alpha \times \beta)$$
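For a concrete instance of this identity in $R^3$ (a small worked check): take $\alpha = dx$ and $\beta = dy$, whose corresponding vectors satisfy $e_x \times e_y = e_z$. Then indeed $$dx \wedge dy = *(dz),$$ since the Hodge star on $R^3$ sends $dz$ to $dx \wedge dy$.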
|
# β-barrel Oligomers as Common Intermediates of Peptides Self-Assembling into Cross-β Aggregates
## Abstract
Oligomers populated during the early amyloid aggregation process are more toxic than mature fibrils, but pinpointing the exact toxic species among highly dynamic and heterogeneous aggregation intermediates remains a major challenge. β-barrel oligomers, structurally-determined recently for a slow-aggregating peptide derived from αB crystallin, are attractive candidates for exerting amyloid toxicity due to their well-defined structures as therapeutic targets and compatibility with the “amyloid-pore” hypothesis of toxicity. To assess whether β-barrel oligomers are common intermediates across amyloid peptides - a necessary step toward associating β-barrel oligomers with general amyloid cytotoxicity - we computationally studied the oligomerization and fibrillization dynamics of seven well-studied fragments of amyloidogenic proteins with different experimentally-determined aggregation morphologies and cytotoxicity. In our molecular dynamics simulations, β-barrel oligomers were only observed in five peptides self-assembling into the characteristic cross-β aggregates, but not the other two that formed polymorphic β-rich aggregates as reported experimentally. Interestingly, the latter two peptides were previously found nontoxic. Hence, the observed correlation between β-barrel oligomers formation and cytotoxicity supports the hypothesis of β-barrel oligomers as the common toxic intermediates of amyloid aggregation.
## Introduction
Aggregation of proteins and peptides into amyloid fibrils is associated with more than 25 degenerative diseases, including Alzheimer’s disease (AD)1,2, Parkinson’s disease (PD)3,4, prion conditions5 and type-2 diabetes (T2D)6,7. Despite the differences in primary, secondary and tertiary structures of precursor proteins, experimental studies using x-ray crystallography, solid-state NMR or cryo-EM have demonstrated that the final amyloid fibrils share a common cross-β core structure with β-strands aligned perpendicular to the fibril axis and multiple β-sheets facing each other8,9,10. Increasing evidence suggests that soluble low molecular weight oligomer intermediates are more cytotoxic than mature fibrils11,12. Since not all oligomers are toxic, characterization of these oligomeric intermediates, pinpointing the toxic oligomer species, is thus crucial for both understanding the pathogenesis and designing therapeutic approaches for the treatment of amyloid diseases.
Based on the structure−function relationship principle where specific functions of proteins and protein complexes are determined by their distinct conformational states, these toxic oligomers of amyloid aggregation are expected to have well-organized structures to execute their pathological functions13,14. For instance, many amyloidogenic proteins and peptides can aggregate in the membrane environment by forming pore-like oligomer structures that disrupt the membrane integrity and permeability15,16,17. Using an 11-residue segment from a slow-aggregating αB crystallin, Laganowsky et al. identified a stable oligomer formed by six peptides in the shape of a barrel (i.e., β-barrel) by x-ray crystallography11. β-barrel is a common protein fold adopted by many solution and transmembrane proteins. β-barrel oligomers formed by a few individual short peptides have been observed in previous computational studies using molecular dynamics18,19, replica exchange molecular dynamics (REMD)20,21,22 or Monte Carlo23 simulations. Combining experimental characterizations with computational modeling, Do et al. showed that the C-terminal fragments of amyloid-β (Aβ) might form similar β-barrel oligomers14. The β-barrel structure as a model for small Aβ40/Aβ42 oligomers was also supported by recent hydrogen exchange mass spectrometry24 and NMR studies25. Aβ40/Aβ42 oligomers with pore-like conformations were also observed in a recent computational study combining coarse-grained and all-atom simulations26. These β-barrel oligomers capable of spanning across the lipid bilayer and thus compatible with the “amyloid-pore” hypothesis of amyloid toxicity15,16,17 have been postulated as the early aggregation intermediates exerting toxic effects on cells11. However, the isolation and characterization of β-barrel oligomers from highly heterogeneous and dynamic aggregation intermediates is often experimentally challenging. Hence, the connection of β-barrel oligomers with the general amyloid cytotoxicity in amyloid diseases remains to be fully established. It is unclear whether the formation of β-barrel oligomers as intermediates is common and yet specific to the aggregation of toxic amyloid peptides.
Recently, two overlapping 11-residue fragments of the T2D-associated human islet amyloid polypeptide (hIAPP) - located at residues 15–25 and 19–29 (denoted as hIAPP15–25 and hIAPP19–29) - have been experimentally found to display contrasting cytotoxicity while both being able to form β-sheet rich aggregates27. The fibrils of hIAPP19–29 with S20G mutation had mated β-sheets (i.e., the cross-β structure) with inter-digit packing of hydrophobic surfaces and were as cytotoxic as the full-length hIAPP fibrils, but hIAPP15–25 formed non-toxic labile β-sheet aggregates. S20G is a disease-causing mutation which renders both full-length28,29 and fragment hIAPPs (e.g., hIAPP18–2930) more aggregation-prone and cytotoxic. With small sizes and distinct aggregation morphologies and cytotoxicity, hIAPP15–25, hIAPP19–29 and their S20G mutants are therefore the ideal model system to investigate the relationship between the propensity to form β-barrel oligomers as aggregation intermediates and amyloid cytotoxicity. In addition, to answer whether other amyloid peptides could also form β-barrel oligomer intermediates, we studied hIAPP22-2831, Aβ16–2232 and NACore33 (residues 68–78 in α-synuclein), corresponding to the amyloidogenic cores of hIAPP, Aβ and α-synuclein implicated in T2D, AD and PD, respectively. All three peptides were documented to form amyloid fibrils and found cytotoxic in experiments31,32,33.
Here, we applied atomistic discrete molecular dynamics (DMD), a predictive and computationally efficient molecular dynamics approach34,35,36, to investigate the assembly dynamics37 of the aforementioned seven peptides. β-barrel oligomers were observed for the cytotoxic hIAPP19–29 and its S20G mutant as well as hIAPP22–28, Aβ16–22 and NACore. The β-barrels, corresponding to “closed” β-sheets mainly formed by six to eight peptides, were the aggregation intermediates that converted into multi-layer β-sheets with increasing oligomer sizes. The inter-conversion between closed β-barrels and open β-sheets with single or double layers was observed in DMD simulations. For these five peptides, the final aggregates in simulations of large molecular systems resembled the cross-β protofibrils consistent with the experimentally-observed mated β-sheets. Nontoxic hIAPP15–25 and hIAPP(S20G)15–25, on the other hand, first assembled into mostly unstructured and loosely compact oligomers, in which the β-sheet contents gradually increased with increasing oligomer sizes. The β-sheet rich aggregates of hIAPP15–25 and its S20G mutant were polymorphic without forming the mated multi-layer β-sheets27. While previous studies attributed the differential toxicity between hIAPP19–29 and hIAPP15–25 to their different aggregation morphologies27, our results suggest that the toxicity might be mediated by the formation of β-barrel oligomers, although the question of how these oligomers cause cytotoxicity remains to be answered with future experimental and computational studies. Hence, we postulate that β-barrel oligomers are common aggregation intermediates towards the final formation of cross-β aggregates, and that these β-barrel oligomer intermediates, among many other factors38,39, may contribute to the cytotoxicity of amyloid aggregation.
## Results and Discussion
We first focused on the oligomerization and fibrillization dynamics of hIAPP15–25, hIAPP19–29 and their S20G mutants. For each of the four sequences, including hIAPP15–25, hIAPP(S20G)15–25, hIAPP19–29 and hIAPP(S20G)19–29, ten molecular systems with even numbers of peptides from 2 to 20 were studied (Methods). In all cases, the same peptide concentration was maintained by adjusting the simulation box size. For each molecular system, ten independent DMD simulations lasting 300 ns at 300 K were performed, starting with different initial coordinates (e.g., different inter-molecular distances and orientations) and velocities. The equilibration of each peptide system was first assessed according to the time evolution of secondary structure properties (e.g., the main conformational states of random coil and the β-sheet contents) and energetics (e.g., the potential energy and the number of backbone hydrogen bonds), which reached their steady states after 150 ns in all simulations (e.g., representative trajectories of the largest molecular systems of 20 peptides for each of the four sequences in Fig. S1).
### hIAPP19–29 had a higher β-sheet propensity than hIAPP15–25 during aggregation
We first examined the peptide secondary structure properties with increasing system size (β-sheet in Fig. 1 and random coil in Fig. S2). The second half of each trajectory was used for the calculation of equilibrium properties, as suggested by the equilibration analysis (Fig. S1). For both hIAPP15–25 and its S20G mutant, the β-sheet content increased and the random coil content decreased with an increasing number of peptides in simulations (Fig. 1a). Examination of the β-sheet probability per residue (Fig. 1c,e) suggested that two separate regions, near the N-terminus (e.g., L16, V17, H18 and S19) and the C-terminus (e.g., N22 and F23), had high β-sheet propensities. While the S20G mutation in the middle of the hIAPP15–25 sequence did not affect the overall β-sheet content, it slightly increased the β-sheet propensity of the C-terminal region and weakened that of the N-terminal region. The secondary structure contents of hIAPP19–29 and hIAPP(S20G)19–29, on the other hand, exhibited a sharp coil-to-sheet transition with respect to the simulation system size (Fig. 1b). With fewer than six peptides, the peptides showed a weak β-sheet propensity, but as the number of peptides increased to six and beyond, the β-sheet content was significantly enhanced. Except for residues near the termini, all residues around the amyloidogenic core sequence of full-length hIAPP (22NFGAIL27) had a high propensity to form β-sheet (Fig. 1d,f). The S20G mutation at the second residue of hIAPP19–29 promoted the overall β-sheet content by also increasing the β-sheet propensity of the residues following it in sequence.
### hIAPP15–25 formed bent parallel β–sheets while hIAPP19–29 aggregated into extended β-sheets with mixed parallel and anti-parallel alignments
We studied the peptide assembly dynamics by monitoring oligomer formation and characterizing the structures of β-sheets in these aggregates. An oligomer was defined as a cluster of peptides connected by inter-molecular heavy atom contacts, and its size was defined as the number of peptides forming the oligomer. By averaging over the trajectories of independent simulations, we computed the mass-weighted oligomer size distribution (Fig. 2a–d), corresponding to the probability of finding a peptide in an oligomer of a given size. hIAPP15–25 and hIAPP(S20G)15–25 tended to self-associate into a single oligomer whose size equaled the total number of peptides in the simulation (Fig. 2a,b). The oligomerization process of hIAPP19–29 and hIAPP(S20G)19–29 was more dynamic, with significant populations of many smaller oligomers and even monomers (Fig. 2c,d). The higher self-association/oligomerization propensity of hIAPP15–25 was due to its higher overall sequence hydrophobicity compared with hIAPP19–29: excluding the overlapping region, residues 15–18 (FLVH) are more hydrophobic than residues 26–29 (ILSS).
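(For concreteness, the following is a minimal Python sketch of the clustering definitions used in this paragraph; it is our illustration of the analysis, not the authors' actual code. `contact_pairs` is assumed to be a precomputed list of peptide index pairs sharing at least one inter-molecular heavy-atom contact.)

```python
import numpy as np

def oligomer_sizes(n_peptides, contact_pairs):
    """Cluster peptides into oligomers: peptides sharing at least one
    inter-molecular heavy-atom contact belong to the same oligomer
    (union-find over the precomputed contact pairs)."""
    parent = list(range(n_peptides))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in contact_pairs:
        parent[find(i)] = find(j)
    roots = [find(i) for i in range(n_peptides)]
    return np.bincount(np.unique(roots, return_inverse=True)[1])

def mass_weighted_distribution(sizes, n_peptides):
    """P(s): probability that a randomly chosen peptide sits in an s-mer."""
    p = np.zeros(n_peptides + 1)
    for s in sizes:
        p[s] += s / n_peptides
    return p
```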
To characterize β-sheet structures, we computed the mass-weighted size distribution of β-sheets, whose size corresponded to the number of β-strands forming the sheet (Fig. 2e–h). hIAPP15–25 and hIAPP(S20G)15–25 tended to form small β-sheets, with the mutation further reducing their sizes. hIAPP19–29 and hIAPP(S20G)19–29, on the other hand, preferred to form larger β-sheets when the number of peptides was six or more, after the coil-to-sheet transition (Fig. 1b). For example, the most populated β-sheet sizes were six and seven in simulations of six and eight peptides, respectively, suggesting that the peptides preferred to form a single-layer β-sheet. As the system size increased to ten or more, the most dominant β-sheet layer size remained around 6–8, indicating that the peptides might form multi-layer β-sheets. Analysis of the alignments between neighboring β-strands in each β-sheet indicated that hIAPP15–25 and hIAPP(S20G)15–25 displayed a high propensity (~0.6–0.8) to form parallel β-sheets, whereas the β-sheets of hIAPP19–29 and hIAPP(S20G)19–29 aggregates had both parallel and anti-parallel alignments of β-strands in a ratio of ~1:1 (Fig. 2i–l). We also calculated the probability distribution of β-strand lengths (defined as the number of consecutive residues in a peptide adopting the β-sheet conformation, Fig. S3) and the end-to-end distance of each peptide (Fig. S4). hIAPP15–25 mainly formed short β-strands (~2–5 residues); longer β-strands (~7–9 residues) were also observed with smaller probabilities when the number of peptides was larger than six (Fig. S3a). The mutation in hIAPP(S20G)15–25 rendered the β-strands shorter, with a decreased probability of forming longer β-strands (Fig. S3c). Unlike hIAPP15–25, β-strands of 5–8 residues were the most populated conformation for both hIAPP19–29 and hIAPP(S20G)19–29, with β-strands of 6–8 residues more populated in the mutant (Fig. S3b,d). The end-to-end distances of hIAPP19–29 and hIAPP(S20G)19–29 were larger than those of hIAPP15–25 and hIAPP(S20G)15–25 (Fig. S4). The S20G mutation made hIAPP15–25 less extended but hIAPP19–29 more extended.
### hIAPP19–29 and its S20G mutant formed β-barrel oligomers as the aggregation intermediates
Ensemble average analysis suggested that the oligomers of hIAPP19–29 formed single- or multi-layer β-sheets when the number of peptides was six or larger, after the coil-to-sheet transition. We further investigated the conformational dynamics of these oligomers along the simulation trajectories and found that these β-sheets could adopt closed forms as β-barrels (Fig. 3). For each snapshot along a trajectory, we monitored the size of the largest oligomer and of the largest β-sheet oligomer, the mass-weighted average size of β-sheets, and the total size of the β-barrels (details in Methods). β-barrels were observed in simulations with at least six peptides. In simulations of six peptides (e.g., a typical trajectory in Fig. 3a), hIAPP19–29 rapidly assembled into oligomers containing multiple small β-sheets with short β-strands (e.g., snapshots 1 and 2 in Fig. 3a), and these small β-sheets then grew into a single sheet with longer β-strands (e.g., snapshot 3 in Fig. 3a). The single-layer β-sheet could rearrange into a hexameric β-barrel (e.g., snapshot 4 in Fig. 3a), which underwent open-and-close dynamics over the course of the simulations (i.e., the fluctuations of β-barrels after 50 ns without changes in β-sheet sizes in Fig. 3a). These β-barrels structurally resemble the experimentally determined cylindrin aggregates of the αB crystallin fragment11 and also the computationally observed β-barrels of Aβ16–2220,23,40 and of a fragment of beta-2 microglobulin (β2m83–89)21. Similar self-assembly dynamics were observed in simulations of eight hIAPP19–29 peptides (Fig. 3b), where hexamer, heptamer and octamer β-barrels were observed (e.g., snapshots 2, 5 and 4 in Fig. 3c, respectively). However, when the number of peptides increased to ten (Fig. 3c), we found that hIAPP19–29 predominantly formed two-layer β-sheets, with the size of the largest β-sheet oligomer (red line) twice the average β-sheet size (blue line), although β-barrels were still transiently observed. Conformational inter-conversions of β-barrels with single-layer (e.g., snapshots 2 and 5 in Fig. 3c) and two-layer (e.g., snapshots 1 and 3 in Fig. 3c) β-sheets were observed, suggesting comparable free energies among these aggregation intermediate species. This conformational inter-conversion was also detected in prior computational aggregation studies of amyloid fragments (e.g., Aβ16–22, β2m83–89) using both all-atom and coarse-grained simulations20,21,23,40,41. The mutant hIAPP(S20G)19–29 featured the same oligomerization dynamics as hIAPP19–29 (Fig. S5).
We further computed the probability of a peptide forming β-barrels during the last 150 ns of the simulations (Fig. 3d). Both hIAPP19–29 and hIAPP(S20G)19–29 peptides showed a high β-barrel propensity in simulations of six, eight and ten peptides, with probabilities of around 20%, 30% and 20%, respectively. As the number of peptides increased further, the β-barrel probability gradually decreased to ~10% due to the increased preference for forming multi-layer β-sheets, as observed in simulations of ten peptides (Fig. 3c). We also computed the probability distribution of β-barrel sizes (Fig. 3e,f); the most populated β-barrels were composed of ~6–8 β-strands, with larger β-barrels observed occasionally. The observed β-barrel oligomer sizes also agree with another recent experimental study, in which Aβ25–35 formed the most efficient β-rich pores with the number of peptides ranging from 6 to 842. Therefore, our simulation results revealed the propensity of hIAPP19–29 and its S20G mutant to form β-barrel oligomers as aggregation intermediates.
### hIAPP15–25 and the S20G mutant formed polymorphic β-sheet aggregates

We applied the same conformational dynamics analysis to the aggregation of hIAPP15–25 (Fig. 4) and hIAPP(S20G)15–25 (Fig. S6). No stable β-sheet oligomers were observed when the number of peptides was less than eight for hIAPP15–25 (Fig. 2e) and less than ten for hIAPP(S20G)15–25 (Fig. 2f). As the number of hIAPP15–25 peptides increased to eight or more in aggregation simulations, we found that the peptides could form two types of parallel β-sheets, one bent near the C-terminus (L-turn, Fig. 4a) and the other bent at both the N- and C-termini (U-turn, Fig. 4b), consistent with a wide distribution of end-to-end distances (Fig. S4a). Using the network-based algorithm for detecting β-barrel formation40, we scanned all independent simulations for all simulated molecular systems, and β-barrel oligomers were not observed in any of the simulations of these two sequences. Since steady states were reached in all simulations (e.g., Figs 4, S1, S6 and S7), we do not expect β-barrel conformations of hIAPP15–25 or its S20G mutant to appear in longer simulations.
As shown in Fig. 4a for a typical aggregation trajectory with eight peptides, hIAPP15–25 first collapsed into a single coil-rich oligomer (within the first 10 ns), in which the peptides started to form β-sheets with short β-strands (e.g., snapshot 1 in Fig. 4a). These β-sheets were unstable and underwent frequent conformational rearrangements (e.g., snapshots 2–4 in Fig. 4a), finally forming a stable L-turn β-sheet (e.g., snapshot 5 in Fig. 4a). The peptide could also follow similar aggregation dynamics to form the U-turn β-sheet aggregates (Fig. 4b). The sequence asymmetry in terms of hydrophobicity and β-sheet propensity (i.e., the N-terminus of hIAPP15–25 was more hydrophobic, with a higher β-sheet propensity, than the C-terminus, as shown in Fig. 1c,e) drove the predominantly parallel alignments of stable β-sheets. For example, due to its high hydrophobicity, the N-terminus of an hIAPP15–25 peptide could first bind to the side of a preformed β-sheet at the N-termini, via either parallel or anti-parallel alignments (e.g., snapshots 1–3 in Fig. 4b). Since the anti-parallel alignment between the N-termini of different chains was less stable than the parallel alignment, which was stabilized by additional hydrophobic interactions between the C-termini, a peptide bound in an initially anti-parallel alignment could easily dissociate and re-associate to form a more stable parallel β-sheet (e.g., snapshots 4–6 in Fig. 4b). The U-shaped β-sheet structures of hIAPP15–25 were consistent with the same sequence fragments in the fibril models of full-length hIAPP reconstructed from solid-state NMR constraints43 or from X-ray crystallography studies of fibril structures of composite peptides44. A recent NMR study of hIAPP binding to an aggregation inhibitor also featured a β-hairpin conformation of hIAPP around residues 20–2145. For the mutant hIAPP(S20G)15–25, we did not observe any stable β-sheet oligomers in simulations of up to eight peptides, but L-turn and U-turn β-sheet oligomers with similar aggregation dynamics were observed in simulations of ten (Fig. S6) or more peptides. Therefore, both hIAPP15–25 and hIAPP(S20G)15–25 formed polymorphic β-sheet aggregates without β-barrel oligomers as aggregation intermediates.
### The aggregation free energy landscape in terms of oligomerization and fibrillization
To better understand the aggregation process, we computed the potential of mean force (PMF, i.e., the effective free energy), widely used in studying amyloid aggregation kinetics37,46, as a function of the oligomer size (noligomer) and the number of residues in β-sheet structure per peptide (nβ-sheet) for simulations with 20 peptides (Fig. 5a–d). All the 300 ns trajectories from the 10 independent simulations were included in the analysis in order to capture the early assembly process. The aggregation free energy landscape of hIAPP15–25 featured two well-defined basins around (1, 0) and (20, 3), corresponding to isolated monomers at the initial stage of aggregation and to the final β-sheet rich aggregates (e.g., Fig. 5a). Oligomers of fewer than 20 peptides showed weak β-sheet contents. Examination of the assembly dynamics (e.g., a representative trajectory in Fig. S7a,e) showed that, driven by hydrophobic interactions, hIAPP15–25 first rapidly associated into a single oligomer without forming extensive hydrogen bonds (within ~50 ns, Fig. S7a), within which more β-sheets were gradually formed with an increasing number of inter-chain hydrogen bonds (e.g., 50–150 ns). During the structural rearrangement within the large oligomer, the number of backbone hydrogen bonds increased mainly between parallel β-sheets (Fig. S7e), resulting in the predominantly parallel β-sheets in the aggregates (Fig. 2). The inter-peptide contact frequency map between the backbones of different residues (Fig. S8a) confirmed the in-register parallel β-sheets, especially at the N-terminus. The side-chain contact frequency map (Fig. S8b) also revealed the strong hydrophobic interactions among N-terminal residues and their interaction with the C-terminal F23, as in the U-turn β-sheets (e.g., Fig. 4b). In the final aggregates of hIAPP15–25 (e.g., Fig. 5a), the different β-sheets with bent conformations were not aligned with each other, in agreement with the experimentally observed labile and unmated β-sheets formed by the same sequence27. The hIAPP(S20G)15–25 mutant showed a similar aggregation free energy landscape and aggregation dynamics to hIAPP15–25 (Figs 5b, S7 and S8).
There were also two deep free energy basins for the aggregation of hIAPP19–29, around (1, 0) and (20, 5), corresponding to the initial monomers and the final β-sheet rich aggregates, but the aggregation pathways and dynamics were drastically different from those of hIAPP15–25 (Fig. 5c). The aggregation of hIAPP19–29 featured smaller oligomers with high β-sheet contents en route to the final aggregates (e.g., Fig. S7c). Small oligomers with fewer than six peptides had little β-sheet content, and β-sheet rich oligomers started to form with six or more peptides (Fig. 5c). Analysis of the assembly dynamics (e.g., a representative trajectory in Fig. S7c,g) confirmed the initial formation of small β-sheet rich oligomers and the growth of large β-sheet oligomers via either the self-association of small oligomers (e.g., the large step-wise increase of oligomer sizes around 75 ns in Fig. S7c) or the addition of monomers (e.g., the small step-wise fluctuations in oligomer sizes in Fig. S7c). β-barrel intermediates were frequently observed during the aggregation process (e.g., the snapshot in Fig. 5c and the purple lines in Fig. S7c). The backbone contact frequency maps also revealed that both parallel in-register and anti-parallel out-of-register β-sheets were formed during aggregation (Fig. S8a). The final aggregates (e.g., Fig. 5) comprised two β-sheets packed face-to-face with inter-digitated side-chains of the central residues (e.g., the side-chain contact frequency map in Fig. S8b), also consistent with the experimentally determined fibril structures of the same sequence27. hIAPP(S20G)19–29 showed a similar aggregation behavior to hIAPP19–29, except that the basin of the final aggregates had lower free energies (Figs 5d, S7d,h and S8).
### Other amyloid peptides, including hIAPP22–28, Aβ16–22 and NACore, could also form β-barrel oligomer intermediates
It is widely accepted that the toxicity of amyloid proteins shares a similar mechanism. For example, the toxicity of amyloid peptides, including amyloid-β, α-synuclein, serum amyloid A and hIAPP, has been linked to membrane damage through the formation of amyloid channels (i.e., “amyloid pores”) in the membrane47,48. To investigate whether β-barrel oligomers are also formed in the aggregation of other amyloid peptides, we analyzed the aggregation dynamics of three additional amyloid peptides: hIAPP22–2831, Aβ16–2232 and NACore33. For each sequence, ten independent aggregation simulations of eight peptides were performed starting from fully extended conformations with random positions and inter-peptide orientations (Methods). Indeed, all three peptides could aggregate into well-organized β-barrel structures, with hydrophobic residues buried and polar residues solvent-exposed (Fig. 6). The probabilities of observing β-barrel intermediates for hIAPP22–28, Aβ16–22 and NACore were ~1.2%, 7.1% and 1.9%, respectively.
In our recent work on the differential aggregation pathways of the hIAPP22–28 and Aβ16–22 peptides, we found that, with up to 16 peptides in aggregation simulations, the final aggregates of both peptides adopted cross-β structures49. The final aggregates of ten NACore peptides also adopted a two-layered cross-β structure (Fig. S9), consistent with X-ray diffraction studies33. Using all-atom REMD simulations with explicit solvent, Aβ16–2220 and an amyloidogenic segment of SOD1 (residues 147–153)50,51,52 were found to be able to form both β-barrel oligomers and two-layer β-sheets (i.e., cross-β like aggregates). The β-barrel oligomers observed in this work and in prior computational20,40,50,51,52 or experimental11 studies were composed of single-layer β-sheets, different from the double-layer β-barrel model proposed to constitute the amyloid channel across a cell membrane53,54,55. It remains to be determined whether full-length amyloid peptides or larger numbers of peptide fragments could spontaneously form the postulated double-layer β-barrels in solution or in the membrane environment. Taken together, these data suggest that β-barrel oligomers are common intermediates towards the formation of cross-β fibrils in amyloid aggregation.
During the early aggregation stage of cross-β-forming peptides, when β-sheets are initially nucleated, small two-layer β-sheets can form readily. These β-sheets prefer to associate with each other via parallel or anti-parallel alignments of their composite β-strands, as in the final aggregates. With relatively low thermal stability and thus large conformational flexibility, peptides at the ends of these two-layer β-sheets can join each other by forming backbone hydrogen bonds, so that the two-layer β-sheets convert into either “open” or “closed” single β-sheets, the latter corresponding to β-barrels. Hence, this aggregation scenario suggests the co-existence of β-barrels, curved single β-sheets and two-layer β-sheets during the early aggregation stage, before the final formation of cross-β fibrils (e.g., as illustrated by the aggregation dynamics in Fig. 3 and the aggregation free energy landscape in Fig. 5 of hIAPP19–29).
## Conclusion
In summary, we computationally investigated the aggregation dynamics of several well-studied peptides derived from various amyloidogenic proteins. Consistent with the experimentally observed morphologies of the final aggregates27, hIAPP19–29 and its S20G mutant tended to form two-layer β-sheets packed face-to-face with hydrophobic side-chains against each other in our simulations, whereas the β-sheets of hIAPP15–25 and its S20G mutant were polymorphic, with different bent conformations that did not form the mated β-sheet packing. Similarly, hIAPP22–28, Aβ16–22 and NACore all formed cross-β aggregates, in agreement with experiments31,32,33 and molecular dynamics simulations with explicit solvent20. Hence, the ability to recapitulate the aggregate morphologies of all seven peptides underscores the predictive power of our all-atom DMD approach with implicit solvent.
In addition to the aggregate morphologies, we also analyzed the oligomerization dynamics and evaluated the formation of β-barrel oligomer intermediates. We found that β-barrel oligomers were common intermediates for the peptides assembling into cross-β like aggregates, including hIAPP19–29, hIAPP(S20G)19–29, hIAPP22–28, Aβ16–22 and NACore. For example, oligomers of hIAPP19–29 featured a coil-to-sheet conformational transition once their sizes increased to six or larger. The β-barrel oligomers, mainly comprising ~6–8 β-strands, were observed as aggregation intermediates and structurally inter-converted with single- and double-layer β-sheets. hIAPP15–25 and its S20G mutant, on the other hand, did not form β-barrel oligomers. Instead, these peptides tended to associate with each other into large coil-rich oligomers, within which β-sheets gradually formed. Together with previous computational studies of individual sequences, our results suggest that β-barrel oligomers might be the common aggregation intermediates of peptides that assemble into cross-β amyloid aggregates.
Without forming β-barrel oligomers as aggregation intermediates, hIAPP15–25 and hIAPP(S20G)15–25 were nontoxic in vitro27. On the other hand, the other peptides, which did form β-barrel oligomer intermediates during aggregation, were all documented to be toxic in the literature. This correlation between the formation of β-barrel oligomer intermediates and cytotoxicity supports the hypothesis of β-barrel oligomers as the toxic oligomers in amyloid aggregation11. While Krotee et al.27 attributed the differential toxicity between hIAPP15–25 and hIAPP19–29 to the different morphologies of their β-sheet aggregates, our results suggest that the observed toxicity might be mediated by the β-barrel oligomers formed by hIAPP19–29 but not by hIAPP15–25.
The β-barrel oligomers observed here and in previous experimental11 and computational20,40,50,51,52 studies were formed by peptide fragments derived from amyloid proteins. Although there is no direct structural evidence for β-barrel oligomers of full-length amyloid proteins, indirect experimental evidence based on hydrogen exchange mass spectrometry24 and NMR25 supports the formation of β-barrel oligomers by Aβ40 and Aβ42. Future studies are required to uncover the structure and dynamics of β-barrel oligomers formed by full-length amyloid proteins. In addition, to understand how the β-barrels interact with membranes and cause membrane damage, it is also necessary to study the aggregation of amyloid peptides in the membrane environment and the formation of membrane-associated β-barrels.
## Materials and Methods
### Molecular systems used in simulations
We systematically investigated the assembly dynamics of hIAPP15–25 and hIAPP19–29 and their S20G mutants (denote as hIAPP(S20G)15–25 and hIAPP(S20G)19–29). To capture the self-assemble dynamics and oligomer structure at different size of these four types peptide, 10 systems were setup with even number of peptides from 2 to 20 for each fragment, each system performed 300 ns ten independently DMD simulation with different initial configurations (i.e., coordinates and velocities). For hIAPP22–28, Aβ16–22 and NACore, only aggregation simulations with eight peptides were performed. For each of the three cytotoxic peptides, ten independent DMD simulations with each trajectory lasting 200 ns were carried out. In all cases, the same peptide concentration of ~26 mM was maintained by adjusting the simulation box sizes. The details of all the simulations were summarized in Table 1.
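As a sanity check on this setup, the cubic box edge implied by a fixed concentration can be computed directly; the short sketch below (our illustration, not part of the simulation engine) reproduces, e.g., an ~8 nm box edge for eight peptides at ~26 mM.

```python
N_A = 6.02214076e23  # Avogadro's number, 1/mol

def box_edge_nm(n_peptides, conc_mM):
    """Cubic box edge (nm) that keeps `n_peptides` at a concentration of `conc_mM`."""
    conc_mol_per_L = conc_mM * 1e-3
    volume_L = n_peptides / (N_A * conc_mol_per_L)
    volume_nm3 = volume_L * 1e24  # 1 L = 1e24 nm^3
    return volume_nm3 ** (1.0 / 3.0)

print(box_edge_nm(8, 26))  # ~8.0 nm
```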
### Details of DMD simulations
All simulations were performed in the canonical (NVT) ensemble using the discrete molecular dynamics56,57 (DMD) algorithm. DMD is a special type of molecular dynamics algorithm with significantly enhanced sampling efficiency, which has been widely used by our group and others in studying protein folding36 and amyloid peptide aggregation58. In DMD simulations, the inter-atomic interactions were modeled by discrete step-wise functions mimicking the continuous potential functions of conventional molecular mechanics force fields. Bonded interactions (bonds, bond angles and dihedrals) were modeled as infinite square wells, where covalent bonds and bond angles usually have a single well and dihedrals may feature multiple wells corresponding to cis- or trans-conformations. Non-bonded interactions (i.e., van der Waals, solvation, hydrogen bond and electrostatic terms) were represented as a series of discrete energetic steps, decreasing in magnitude with increasing distance until reaching zero at the cutoff distance. The van der Waals parameters were adopted from the CHARMM force field59, and the bonded terms were parameterized based on statistical analyses of protein structures from the Protein Data Bank (PDB). Water molecules were modeled implicitly using the EEF1 implicit solvation model developed by Lazaridis and Karplus60. A reaction-like algorithm was used to model hydrogen bonds61. The electrostatic interactions were screened using the Debye-Hückel approximation with the screening length set to 10 Å, corresponding to ~100 mM NaCl under physiological conditions. The velocity of each atom remained constant until a collision occurred, i.e., until an inter-atomic potential step was crossed; the velocities were then updated following the conservation of energy, momentum and angular momentum. The units of time, length and energy were ~50 fs, 1 Å and 1 kcal/mol, respectively. The temperature of the system was maintained at ~300 K using the Andersen thermostat62. Each system was first energy-minimized for 1000 DMD time units (~50 ps) with a strong heat-exchange coefficient with the virtual heat bath63, followed by equilibrium simulations of six million DMD time units, corresponding to a simulation time of ~300 ns.
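To illustrate the event-driven velocity update described above, here is a schematic sketch of what happens when a pair of atoms reaches one discontinuity of a step-wise pair potential (`dU` is the potential change across the step). This is a didactic illustration under simplifying assumptions, not the DMD engine itself, which additionally maintains an event calendar to find the next collision.

```python
import numpy as np

def collide(r1, r2, v1, v2, m1, m2, dU):
    """Velocity update at a potential step of height dU: between events the
    motion is ballistic; at the step, only the radial velocity components
    change so that energy and momentum are conserved. If the pair lacks the
    radial kinetic energy to cross the step, it is elastically reflected."""
    n = (r2 - r1) / np.linalg.norm(r2 - r1)  # unit vector along the pair axis
    mu = m1 * m2 / (m1 + m2)                 # reduced mass
    vrel = np.dot(v2 - v1, n)                # radial relative velocity
    disc = vrel**2 - 2.0 * dU / mu           # radial KE left after paying dU
    if disc < 0.0:                           # cannot cross: reflect
        dv = -2.0 * mu * vrel
    else:                                    # cross the step: rescale radial speed
        dv = mu * (np.sign(vrel) * np.sqrt(disc) - vrel)
    return v1 - dv * n / m1, v2 + dv * n / m2
```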
### Analysis methods
Secondary structure analyses were performed using the dictionary of protein secondary structure (DSSP) method64. A hydrogen bond was considered formed if the distance between backbone N and O atoms was ≤3.5 Å and the N−H···O angle was ≥120°65. Two chains were considered to form a β-sheet when two or more consecutive residues in each chain adopted the β-strand conformation and these residues were connected by at least two backbone hydrogen bonds. The anti-parallel/parallel β-strand ratio was determined from the number of hydrogen bonds between adjacent β-strands forming anti-parallel/parallel β-sheets.
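The geometric hydrogen-bond criterion above translates directly into code; the following sketch (ours, for illustration) tests one candidate backbone N−H···O triple, with coordinates given as numpy arrays in Å.

```python
import numpy as np

def is_backbone_hbond(N, H, O, d_cut=3.5, angle_cut=120.0):
    """H-bond test used here: N...O distance <= 3.5 A and N-H...O angle >= 120 deg."""
    if np.linalg.norm(N - O) > d_cut:
        return False
    hn, ho = N - H, O - H  # vectors from H to the donor N and to the acceptor O
    cos_a = np.dot(hn, ho) / (np.linalg.norm(hn) * np.linalg.norm(ho))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) >= angle_cut
```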
The size of a β-sheet was the number of β-strands in the β-sheet layer. The β-sheet length was determined by the number of consecutive residues adopting β-sheet conformations in a given chain. The mass-weighted β-sheet size, $\bar{n}_{\beta\text{-sheet-size}}$, was determined by the following equation
$$\bar{n}_{\beta\text{-sheet-size}}=\frac{\sum_{i=1}^{n_{\beta}} n_{i}^{2}}{\sum_{i=1}^{n_{\beta}} n_{i}},$$
(1)
where nβ denoted the number of β-sheets, and ni was the size of the ith β-sheet.
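Equation (1) amounts to a one-line computation; a small sketch for clarity (illustrative only):

```python
def mass_weighted_size(sizes):
    """Mass-weighted average beta-sheet size: sum(n_i^2) / sum(n_i), Eq. (1)."""
    return sum(n * n for n in sizes) / sum(sizes)

print(mass_weighted_size([6, 2]))  # sheets of 6 and 2 strands -> 5.0
```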
Two peptides inter-connected by at least one inter-molecular heavy atom contact (with a cutoff of 0.55 nm) were defined as belonging to the same oligomer. The number of peptides in an oligomer is referred to as the oligomer size. A β-sheet oligomer was defined as multiple β-sheets inter-connected by at least one heavy atom contact, and the total number of peptides in β-sheet conformation within the complex corresponded to the β-sheet oligomer size. Two peptides in β-sheet conformation (determined by DSSP) formed a β-sheet if they had at least two inter-peptide backbone hydrogen bonds. If a β-sheet had a closed form, with every β-strand in the β-sheet having at least two neighboring β-strands, we defined it as a β-barrel oligomer. We used a network-based approach40 to automatically detect these β-barrels along the simulation trajectories.
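A minimal version of this closed-sheet test can be phrased in graph terms, as sketched below (our illustration of the criterion stated above, not the published algorithm40; `strand_pairs` is assumed to list β-strand index pairs joined by at least two backbone hydrogen bonds).

```python
import networkx as nx

def is_beta_barrel(n_strands, strand_pairs):
    """Barrel test: nodes are beta-strands, edges are laterally paired strands;
    the sheet is 'closed' when the graph is connected and every strand has at
    least two neighbours (for a single-layer sheet this forces one cycle)."""
    G = nx.Graph()
    G.add_nodes_from(range(n_strands))
    G.add_edges_from(strand_pairs)
    return nx.is_connected(G) and min(dict(G.degree()).values()) >= 2
```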
The two-dimensional potential of mean force (PMF, or the effective free energy) was computed according to
$$\mathrm{PMF}=-k_{B}T\,\ln P(n_{oligomer},\,n_{\beta\text{-sheet}}),$$
(2)
where $k_{B}$ was the Boltzmann constant, T corresponded to the simulation temperature of 300 K, and P(noligomer, nβ-sheet) was the probability of an oligomer having oligomer size noligomer and an average number nβ-sheet of residues per chain adopting β-sheet conformation.
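In practice, the PMF in Eq. (2) is estimated from a 2D histogram of the sampled (noligomer, nβ-sheet) pairs; a brief sketch (our illustration, with kB in the kcal/mol units quoted above):

```python
import numpy as np

KB = 0.0019872  # Boltzmann constant, kcal/(mol K)

def pmf_2d(n_oligomer, n_beta, T=300.0, bins=(20, 12)):
    """PMF = -kB*T*ln P over a 2D histogram, shifted so the minimum is zero."""
    H, xedges, yedges = np.histogram2d(n_oligomer, n_beta, bins=bins)
    P = H / H.sum()
    with np.errstate(divide="ignore"):
        pmf = -KB * T * np.log(P)  # empty bins become +inf
    return pmf - pmf[np.isfinite(pmf)].min(), xedges, yedges
```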
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1. Hardy, J. & Selkoe, D. J. The amyloid hypothesis of Alzheimer’s disease: progress and problems on the road to therapeutics. Science 297, 353–356, https://doi.org/10.1126/science.1072994 (2002).
2. Nasica-Labouze, J. et al. Amyloid beta Protein and Alzheimer’s Disease: When Computer Simulations Complement Experimental Studies. Chem Rev 115, 3518–3563, https://doi.org/10.1021/cr500638n (2015).
3. Polymeropoulos, M. H. et al. Mutation in the alpha-synuclein gene identified in families with Parkinson’s disease. Science 276, 2045–2047 (1997).
4. Singleton, A. B. et al. alpha-synuclein locus triplication causes Parkinson’s disease. Science 302, 841–841, https://doi.org/10.1126/science.1090278 (2003).
5. Mallucci, G. et al. Depleting neuronal PrP in prion infection prevents disease and reverses spongiosis. Science 302, 871–874, https://doi.org/10.1126/science.1090187 (2003).
6. Bedrood, S. et al. Fibril Structure of Human Islet Amyloid Polypeptide. J Biol Chem 287, 5235–5241, https://doi.org/10.1074/jbc.M111.327817 (2012).
7. Anguiano, M., Nowak, R. J. & Lansbury, P. T. Protofibrillar islet amyloid polypeptide permeabilizes synthetic vesicles by a pore-like mechanism that may be relevant to type II diabetes. Biochemistry-Us 41, 11338–11343, https://doi.org/10.1021/bi020314u (2002).
8. Nelson, R. & Eisenberg, D. Recent atomic models of amyloid fibril structure. Curr Opin Struct Biol 16, 260–265, https://doi.org/10.1016/j.sbi.2006.03.007 (2006).
9. Tycko, R. Solid-state NMR studies of amyloid fibril structure. Annu Rev Phys Chem 62, 279–299, https://doi.org/10.1146/annurev-physchem-032210-103539 (2011).
10. Xiao, Y. et al. Abeta(1–42) fibril structure illuminates self-recognition and replication of amyloid in Alzheimer’s disease. Nat Struct Mol Biol 22, 499–505, https://doi.org/10.1038/nsmb.2991 (2015).
11. Laganowsky, A. et al. Atomic view of a toxic amyloid small oligomer. Science 335, 1228–1231, https://doi.org/10.1126/science.1213151 (2012).
12. Larson, M. E. & Lesne, S. E. Soluble Aβ oligomer production and toxicity. J Neurochem 120, 125–139, https://doi.org/10.1111/j.1471-4159.2011.07478.x (2012).
13. Nussinov, R. & Tsai, C. J. Allostery without a conformational change? Revisiting the paradigm. Curr Opin Struc Biol 30, 17–24, https://doi.org/10.1016/j.sbi.2014.11.005 (2015).
14. Do, T. D. et al. Amyloid beta-Protein C-Terminal Fragments: Formation of Cylindrins and beta-Barrels. J Am Chem Soc 138, 549–557, https://doi.org/10.1021/jacs.5b09536 (2016).
15. Jang, H., Ma, B., Lal, R. & Nussinov, R. Models of Toxic beta-Sheet Channels of Protegrin-1 Suggest a Common Subunit Organization Motif Shared with Toxic Alzheimer beta-Amyloid Ion Channels. Biophys J 95, 4631–4642, https://doi.org/10.1529/biophysj.108.134551 (2008).
16. Stöckl, M. T., Zijlstra, N. & Subramaniam, V. alpha-Synuclein Oligomers: an Amyloid Pore? Mol Neurobiol 47, 613–621, https://doi.org/10.1007/s12035-012-8331-4 (2013).
17. Lashuel, H. A., Hartley, D., Petre, B. M., Walz, T. & Lansbury, P. T. Neurodegenerative disease - Amyloid pores from pathogenic mutations. Nature 418, 291–291, https://doi.org/10.1038/418291a (2002).
18. Sterpone, F. et al. The OPEP protein model: from single molecules, amyloid formation, crowding and hydrodynamics to DNA/RNA systems. Chem Soc Rev 43, 4871–4893, https://doi.org/10.1039/c4cs00048j (2014).
19. Wei, G. H., Mousseau, N. & Derreumaux, P. Sampling the self-assembly pathways of KFFE hexamers. Biophys J 87, 3648–3656, https://doi.org/10.1529/biophysj.104.047688 (2004).
20. Xie, L. G., Luo, Y. & Wei, G. H. A beta(16–22) Peptides Can Assemble into Ordered beta-Barrels and Bilayer beta-Sheets, while Substitution of Phenylalanine 19 by Tryptophan Increases the Population of Disordered Aggregates. J Phys Chem B 117, 10149–10160, https://doi.org/10.1021/jp405869a (2013).
21. De Simone, A. & Derreumaux, P. Low molecular weight oligomers of amyloid peptides display beta-barrel conformations: A replica exchange molecular dynamics study in explicit solvent. J Chem Phys 132, 165103, https://doi.org/10.1063/1.3385470 (2010).
22. Zhang, H., Xi, W., Hansmann, U. H. E. & Wei, Y. Fibril-Barrel Transitions in Cylindrin Amyloids. J Chem Theory Comput, https://doi.org/10.1021/acs.jctc.7b00383 (2017).
23. Irback, A. & Mitternacht, S. Spontaneous beta-barrel formation: An all-atom Monte Carlo study of A beta(16–22) oligomerization. Proteins 71, 207–214, https://doi.org/10.1002/prot.21682 (2008).
24. Pan, J. X., Han, J., Borchers, C. H. & Konermann, L. Structure and Dynamics of Small Soluble Abeta(1–40) Oligomers Studied by Top-Down Hydrogen Exchange Mass Spectrometry. Biochemistry-Us 51, 3694–3703, https://doi.org/10.1021/bi3002049 (2012).
25. Serra-Batiste, M. et al. Abeta 42 assembles into specific beta-barrel pore-forming oligomers in membrane-mimicking environments. P Natl Acad Sci USA 113, 10866–10871, https://doi.org/10.1073/pnas.1605104113 (2016).
26. Voelker, M. J., Barz, B. & Urbanc, B. Fully Atomistic A beta 40 and A beta 42 Oligomers in Water: Observation of Porelike Conformations. Journal of Chemical Theory and Computation 13, 4567–4583, https://doi.org/10.1021/acs.jctc.7b00495 (2017).
27. Krotee, P. et al. Atomic structures of fibrillar segments of hIAPP suggest tightly mated beta-sheets are important for cytotoxicity. Elife 6, https://doi.org/10.7554/eLife.19273 (2017).
28. Cao, P. et al. Sensitivity of Amyloid Formation by Human Islet Amyloid Polypeptide to Mutations at Residue 20. J Mol Biol 421, 282–295, https://doi.org/10.1016/j.jmb.2011.12.032 (2012).
29. Meier, D. T. et al. The S20G substitution in hIAPP is more amyloidogenic and cytotoxic than wild-type hIAPP in mouse islets. Diabetologia 59, 2166–2171, https://doi.org/10.1007/s00125-016-4045-x (2016).
30. Ma, Z. et al. Enhanced in vitro production of amyloid-like fibrils from mutant (S20G) islet amyloid polypeptide. Amyloid 8, 242–249 (2001).
31. Tenidis, K. et al. Identification of a penta- and hexapeptide of islet amyloid polypeptide (IAPP) with amyloidogenic and cytotoxic properties. J Mol Biol 295, 1055–1071, https://doi.org/10.1006/jmbi.1999.3422 (2000).
32. Hilbich, C., Kisterswoike, B., Reed, J., Masters, C. L. & Beyreuther, K. Substitutions of Hydrophobic Amino-Acids Reduce the Amyloidogenicity of Alzheimers-Disease Beta-A4 Peptides. J Mol Biol 228, 460–473, https://doi.org/10.1016/0022-2836(92)90835-8 (1992).
33. Rodriguez, J. A. et al. Structure of the toxic core of alpha-synuclein from invisible crystals. Nature 525, 486–490, https://doi.org/10.1038/nature15368 (2015).
34. Ding, F., Tsao, D., Nie, H. F. & Dokholyan, N. V. Ab initio folding of proteins with all-atom discrete molecular dynamics. Structure 16, 1010–1018, https://doi.org/10.1016/j.str.2008.03.013 (2008).
35. Yun, S. J. et al. Role of electrostatic interactions in amyloid beta-protein (Abeta) oligomer formation: A discrete molecular dynamics study. Biophys J, 195a–195a (2007).
36. Brodie, N. I., Popov, K. I., Petrotchenko, E. V., Dokholyan, N. V. & Borchers, C. H. Solving protein structures using short-distance cross-linking constraints as a guide for discrete molecular dynamics simulations. Sci Adv 3, e1700479, https://doi.org/10.1126/sciadv.1700479 (2017).
37. Bellesia, G. & Shea, J. E. Diversity of kinetic pathways in amyloid fibril formation. J Chem Phys 131, 111102, https://doi.org/10.1063/1.3216103 (2009).
38. Selkoe, D. J. & Hardy, J. The amyloid hypothesis of Alzheimer’s disease at 25 years. EMBO molecular medicine 8, 595–608, https://doi.org/10.15252/emmm.201606210 (2016).
39. Doig, A. J. et al. Why Is Research on Amyloid-beta Failing to Give New Drugs for Alzheimer’s Disease? ACS chemical neuroscience 8, 1435–1437, https://doi.org/10.1021/acschemneuro.7b00188 (2017).
40. Ge, X., Sun, Y. & Ding, F. Structures and dynamics of beta-barrel oligomer intermediates of amyloid-beta16–22 aggregation. Biochimica et biophysica acta, https://doi.org/10.1016/j.bbamem.2018.03.011 (2018).
41. Song, W., Wei, G. H., Mousseau, N. & Derreumaux, P. Self-assembly of the beta 2-microglobulin NHVTLSQ peptide using a coarse-grained protein model reveals beta-barrel species. J Phys Chem B 112, 4410–4418, https://doi.org/10.1021/jp710592v (2008).
42. Kandel, N., Zheng, T. Y., Huo, Q. & Tatulian, S. A. Membrane Binding and Pore Formation by a Cytotoxic Fragment of Amyloid beta Peptide. J Phys Chem B 121, 10293–10305, https://doi.org/10.1021/acs.jpcb.7b07002 (2017).
43. Luca, S., Yau, W. M., Leapman, R. & Tycko, R. Peptide conformation and supramolecular organization in amylin fibrils: constraints from solid-state NMR. Biochemistry-Us 46, 13505–13522, https://doi.org/10.1021/bi701427q (2007).
44. Wiltzius, J. J. W. et al. Atomic structure of the cross-beta spine of islet amyloid polypeptide (amylin). Protein Sci 17, 1467–1474, https://doi.org/10.1110/ps.036509.108 (2008).
45. Mirecka, E. A. et al. beta-Hairpin of Islet Amyloid Polypeptide Bound to an Aggregation Inhibitor. Sci Rep-Uk 6, https://doi.org/10.1038/srep33474 (2016).
46. Zheng, W., Tsai, M. Y., Chen, M. & Wolynes, P. G. Exploring the aggregation free energy landscape of the amyloid-beta protein (1–40). Proc Natl Acad Sci USA 113, 11835–11840, https://doi.org/10.1073/pnas.1612362113 (2016).
47. Quist, A. et al. Amyloid ion channels: A common structural link for protein-misfolding disease. P Natl Acad Sci USA 102, 10427–10432, https://doi.org/10.1073/pnas.0502066102 (2005).
48. Kawahara, M., Kuroda, Y., Arispe, N. & Rojas, E. Alzheimer’s beta-amyloid, human islet amylin, and prion protein fragment evoke intracellular free calcium elevations by a common mechanism in a hypothalamic GnRH neuronal cell line. J Biol Chem 275, 14077–14083, https://doi.org/10.1074/jbc.275.19.14077 (2000).
49. Sun, Y., Wang, B., Ge, X. & Ding, F. Distinct oligomerization and fibrillization dynamics of amyloid core sequences of amyloid-beta and islet amyloid polypeptide. Phys Chem Chem Phys 19, 28414–28423, https://doi.org/10.1039/c7cp05695h (2017).
50. Valentine, J. S., Doucette, P. A. & Potter, S. Z. Copper-zinc superoxide dismutase and amyotrophic lateral sclerosis. Annu Rev Biochem 74, 563–593, https://doi.org/10.1146/annurev.biochem.72.121801.161647 (2005).
51. Turner, B. J. & Talbot, K. Transgenics, toxicity and therapeutics in rodent models of mutant SOD1-mediated familial ALS. Prog Neurobiol 85, 94–134, https://doi.org/10.1016/j.pneurobio.2008.01.001 (2008).
52. Zou, Y. et al. Critical Nucleus Structure and Aggregation Mechanism of the C-terminal Fragment of Copper-Zinc Superoxide Dismutase Protein. Acs Chem Neurosci 7, 286–296, https://doi.org/10.1021/acschemneuro.5b00242 (2016).
53. Connelly, L. et al. Atomic force microscopy and MD simulations reveal pore-like structures of all-D-enantiomer of Alzheimer’s beta-amyloid peptide: relevance to the ion channel mechanism of AD pathology. J Phys Chem B 116, 1728–1735, https://doi.org/10.1021/jp2108126 (2012).
54. Jang, H. et al. beta-Barrel Topology of Alzheimer’s beta-Amyloid Ion Channels. J Mol Biol 404, 917–934, https://doi.org/10.1016/j.jmb.2010.10.025 (2010).
55. Jang, H., Zheng, J., Lal, R. & Nussinov, R. New structures help the modeling of toxic amyloidbeta ion channels. Trends in biochemical sciences 33, 91–100, https://doi.org/10.1016/j.tibs.2007.10.007 (2008).
56. Dokholyan, N. V., Buldyrev, S. V., Stanley, H. E. & Shakhnovich, E. I. Discrete molecular dynamics studies of the folding of a protein-like model. Fold Des 3, 577–587, https://doi.org/10.1016/S1359-0278(98)00072-8 (1998).
57. Proctor, E. A., Ding, F. & Dokholyan, N. V. Discrete molecular dynamics. Wires Comput Mol Sci 1, 80–92, https://doi.org/10.1002/wcms.4 (2011).
58. Urbanc, B. et al. Structural Basis for A beta(1–42) Toxicity Inhibition by A beta C-Terminal Fragments: Discrete Molecular Dynamics Study. J Mol Biol 410, 316–328, https://doi.org/10.1016/j.jmb.2011.05.021 (2011).
59. Brooks, B. R. et al. Charmm - a Program for Macromolecular Energy, Minimization, and Dynamics Calculations. J Comput Chem 4, 187–217, https://doi.org/10.1002/jcc.540040211 (1983).
60. Lazaridis, T. & Karplus, M. Effective energy function for proteins in solution. Proteins 35, 133–152 (1999).
61. Ding, F., Borreguero, J. M., Buldyrev, S. V., Stanley, H. E. & Dokholyan, N. V. Mechanism for the alpha-helix to beta-hairpin transition. Proteins-Structure Function and Genetics 53, 220–228, https://doi.org/10.1002/prot.10468 (2003).
62. Andersen, H. C. Molecular-Dynamics Simulations at Constant Pressure and-or Temperature. J Chem Phys 72, 2384–2393, https://doi.org/10.1063/1.439486 (1980).
63. Ramachandran, S., Kota, P., Ding, F. & Dokholyan, N. V. Automated minimization of steric clashes in protein structures. Proteins 79, 261–270, https://doi.org/10.1002/prot.22879 (2011).
64. Kabsch, W. & Sander, C. Dictionary of Protein Secondary Structure - Pattern-Recognition of Hydrogen-Bonded and Geometrical Features. Biopolymers 22, 2577–2637, https://doi.org/10.1002/bip.360221211 (1983).
65. Sun, Y. X., Qian, Z. Y., Guo, C. & Wei, G. H. Amphiphilic Peptides A(6)K and V6K Display Distinct Oligomeric Structures and Self-Assembly Dynamics: A Combined All-Atom and Coarse-Grained Simulation Study. Biomacromolecules 16, 2940–2949, https://doi.org/10.1021/acs.biomac.5b00850 (2015).
## Acknowledgements
The work is supported in part by NSF CBET-1553945 (Ding) and NIH R35GM119691 (Ding). The content is solely the responsibility of the authors and does not necessarily represent the official views of NIH and NSF.
## Author information
### Affiliations
#### Department of Physics and Astronomy, Clemson University, Clemson, SC, 29634, USA
• Yunxiang Sun
• Xinwei Ge
• Yanting Xing
• Bo Wang
• Feng Ding
### Contributions
Y.S. and F.D. conceived and designed the computational project. Y.S. and X.G. performed the simulations. Y.S., X.G., Y.X. and B.W. analyzed the data. Y.S. and F.D. interpreted the results and wrote the manuscript.
### Competing Interests
The authors declare no competing interests.
### Corresponding author
Correspondence to Feng Ding.
# Chelate
## What is Chelation?
Chelation is a very common term used in different branches of science, such as chemistry, biology and the medical sciences. The process of chelation is widely used in the detoxification of toxicants and in making complexes. Let us come to our main question: what is chelation?
Let us first discuss the meaning of chelate: a chelate is a compound that has two or more coordinate (dative) bonds between a ligand (usually organic) and a central metal atom.
Chelation Definition - Chelation is the phenomenon, or ability, of ions and molecules to form multiple bonds with metal ions: between a polydentate ligand and a single central atom, two or more separate coordinate bonds are formed or present. Let us discuss the terms used in this definition for a better understanding.
Ligand- A ligand is an ion or molecule that forms a coordination complex by donating a pair of electrons to the central metal atom or ion.
Polydentate- The number of atoms used to bind to a central metal atom or ion varies among polydentate ligands. Hexadentate ligands, such as EDTA, have six donor atoms with electron pairs that can bind to a central metal atom or ion.
### What is Chelation in Chemistry?
The chelate effect is the higher affinity of chelating ligands for a metal ion compared with equivalent non-chelating (monodentate) ligands. Let us take a look at how the chelation mechanism works; the chelate effect is supported by certain thermodynamic concepts. As an example, the copper(II) affinities for ethylenediamine (en) and methylamine are compared.
1. $Cu^{2+} + en \rightleftharpoons Cu(en)^{2+}$
2. $Cu^{2+} + 2 MeNH_{2} \rightleftharpoons [Cu(MeNH_{2})_{2}]^{2+}$
The copper ion forms a chelate complex with ethylenediamine in the first equation. Chelation results in the creation of a five-membered $CuC_{2}N_{2}$ chelate ring. In the second reaction, the bidentate ligand is replaced by two monodentate methylamine ligands of roughly the same donor strength, suggesting that the Cu–N bonds are similar in both reactions.
The thermodynamic approach to explaining the chelate effect considers the equilibrium constant of each reaction: the higher the equilibrium constant, the higher the concentration of the complex. The corresponding stability constants are:
• $\beta_{1} = \dfrac{[Cu(en)]}{[Cu][en]}$
• $\beta_{2} = \dfrac{[Cu(MeNH_{2})_{2}]}{[Cu][MeNH_{2}]^{2}}$
For the sake of clarity, electrical charges have been omitted. The subscripts of the stability constants indicate the stoichiometry of the complex, and the square brackets denote concentrations. When the analytical concentration of methylamine is double that of ethylenediamine and the concentration of copper is the same in both reactions, the concentration $[Cu(en)]$ is much higher than the concentration $[Cu(MeNH_{2})_{2}]$ because $\beta_{1} \gg \beta_{2}$.
As we know, $\Delta G = \Delta H - T\Delta S$.
Since the enthalpy change should be about the same for the two reactions, the discrepancy between the two stability constants is due to entropy effects. In the first equation there are two particles on the left and one on the right, while in the second there are three particles on the left and one on the right.
This difference means that less entropy of disorder is lost when a chelate complex is formed with a bidentate ligand than when a complex is formed with monodentate ligands. This is one of the factors contributing to the entropy difference; solvation changes and chelate ring formation are two further factors to consider.
The enthalpy changes for the two reactions are nearly equal, indicating that the entropy term, which is much less unfavorable for the chelate, is the key reason for the chelate complex's greater stability. It is difficult to account exactly for thermodynamic values in terms of molecular-level changes in solution, but it is clear that the chelate effect is predominantly an entropy effect.
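To make this concrete, here is a worked comparison using stability constants and enthalpies commonly quoted for this copper system (the numbers are illustrative and vary slightly between sources). With $\Delta G = -RT\ln\beta \approx -5.71\log\beta\ kJ/mol$ at 298 K:

$$Cu(en):\quad \Delta G \approx -5.71 \times 10.6 \approx -60.5\ kJ/mol,\quad \Delta H \approx -54.4\ kJ/mol\ \Rightarrow\ -T\Delta S \approx -6.1\ kJ/mol$$

$$Cu(MeNH_{2})_{2}:\quad \Delta G \approx -5.71 \times 7.3 \approx -41.7\ kJ/mol,\quad \Delta H \approx -57.3\ kJ/mol\ \Rightarrow\ -T\Delta S \approx +15.6\ kJ/mol$$

The enthalpies are nearly equal, yet the chelate is more stable by roughly 19 kJ/mol, and essentially the whole difference sits in the entropy term, which is favorable for the chelate and unfavorable for the monodentate complex.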
### Chelate Complex
The ligands (electron donors) used in the chelation process are known as chelants, chelators, chelating agents or sequestering agents. These molecules are generally organic compounds, but this is not a necessity: there are cases of zinc and other inorganic species being used as chelants.
### Chelate Example
Some of the examples of chelates are given below:
• Ethylene diamine tetraacetic acid (EDTA)
• Ethylenediamine
• Porphine
• Cyanocobalamin (Vitamin B-12)
• Dimercaprol
### Chelate Compound Uses
• Zinc is used in maintenance therapy to prevent the absorption of copper in people with Wilson's disease.
• Chelation is useful in providing nutritional supplements.
• It is used in chelation therapy to remove toxic metals from the body.
• Chelate compounds are used as contrast agents in MRI scanning.
• These compounds are used in the manufacturing of homogeneous catalysts.
• It is used in chemical water treatment to assist the removal of metals and in fertilizers.
• The chelation process is used by plants for the removal of heavy metals.
### Natural Chelating Agent & Cosmetic Industry
Chelating agents are compounds that form multiple bonds with metal ions, and the complexes so formed are used in a variety of applications. Natural chelating agents are mostly used in the cosmetic industry, as they do not react with the other ingredients present in a formulation. Natural chelating agents are biodegradable and free from toxic elements, and due to their organic nature they are mostly used for producing cosmetics. These agents are derived from microorganisms. Owing to constant consumer demand for eco-friendly products, natural chelating agents are now being consumed on a large scale.
The use of chelating agents helps to increase the shelf life of products and makes them less harmful to the environment. When combined with antioxidants such as tocopherol, natural chelating agents also help to stabilize unsaturated oils.
### Advantages of Using Natural Chelating Agents:
• Help to increase the shelf life of products
• Work as a natural alternative to EDTA
• Used for skin-brightening functions
• Effective at making metal ions inactive
• Color-stable
• Come in an easy-to-use form
• Biodegradable and environmentally friendly
• Free from toxic elements
• Derived from natural sources such as microorganisms
### Examples:
• In the chelation process, bidentate and polydentate ligands bend over to form bonds and ring structures.
• Polydentate ligands such as amino acids, proteins and polynucleic acids are used to form bonds.
• Many molecules bind dissolved metal cations to form chelate complexes.
• Ethylenediamine, a bidentate ligand, forms a chelate complex with copper, giving the five-membered $CuC_{2}N_{2}$ ring discussed above.
• In metalloenzymes, the metal is chelated by peptides or by prosthetic groups and cofactors.
• Organic chelates help to extract metal ions from rocks and minerals, assisting chemical weathering.
• Nutritional supplements are formulated using chelated metal ions.
• These supplements help to prevent the formation of insoluble salt complexes in the stomach.
• Moreover, these supplements have a higher capacity for absorption.
• Common chelating agents used for water softening are EDTA and polyphosphates.
• Frequently encountered chelate complexes include ruthenium chloride with a bidentate phosphine.
### Sample Questions:
1. What are chelating agents? Give examples.
2. Explain the process of chelating with one example.
3. Give some uses of chelating agents.
4. How does the chelate effect take place?
5. Draw a structure of ethylenediamine and explain its uses.
### Did you know?
• Chelation therapy sometimes causes fever and vomiting in the patients.
• Some of the chelating agents can cause respiratory failure.
## FAQs (Frequently Asked Questions)
1. What is chelation?
Chelation is the process whereby bonds are formed with metal ions. Usually bidentate and polydentate ligands form bonds and create a ring; this process is called chelation. In the medical industry, chelation is used to reduce the toxicity of metal ions. Usually organic compounds are formed in the chelation process; however, with different metal ions, such as zinc, the results can be different. Chelation is also used in producing chemical compounds for natural supplements and in removing metals from fertilizers.
2. Give some uses of the chelation process.
Some of the uses of the chelation process are given below:
• In people with Wilson's disease, zinc is used in maintenance therapy to avoid copper absorption.
• Chelation may be used to make dietary supplements.
• Chelation therapy is used to eliminate radioactive metals from the body.
• In MRI scanning, chelate compounds are used as contrast agents.
• These ingredients are used to make homogeneous catalysts.
• It's used in fertilizers and in chemical water treatment to help remove metals.
• Plants use the chelation mechanism to get rid of heavy metals.
3. What are chelating agents? Give examples.
Chelation is the process of forming bonds with metal ions, so a chelating agent is a species that can form multiple bonds with a metal. The simplest chelating agent is ethylenediamine. Chelating agents are used to reduce tissue levels of injurious heavy metals. They react with metal ions to give stable, water-soluble compounds. Chelating agents are used in heavy metal poisoning and to reduce high levels of metals in the blood. An example of a chelating agent is citric acid, which helps to make metals soluble.
4. Give some medical uses of chelate.
Chelating agents are usually organic compounds that form bonds with metal ions. The compounds formed are used in a variety of applications, including in the medical industry. Some specific chelating agents form bonds with iron so that they can be mixed into the blood to reduce high metal levels. The chelating agent mixes with the blood and helps to remove toxic materials from the body: if a foreign particle gets into the human body, it is carried out bound to the chelating agent, e.g., coated with EDTA. EDTA is also a powerful agent for protecting blood vessels from radical damage.
5. Where is chelator commonly used?
Chelating agents are used in the medical field to reduce the toxicity of metals. A commonly used chelating agent is calcium disodium ethylenediaminetetraacetic acid, a derivative of ethylenediaminetetraacetic acid (EDTA). This drug has been claimed to be beneficial for childhood lead poisoning and some vascular diseases since 1955. Some studies even suggest that, in the future, chelating agents might replace bypass surgery techniques. EDTA can be used for blood vessel inflammation problems. The compound is mostly used in the treatment of lead poisoning due to its capacity to take up lead in place of the calcium in the chelate: the resulting $PbNa_{2}EDTA$ is stable and is excreted in body fluids, leaving the calcium behind.
# Showing all rationals in $(0,1)$ are sums of certain reciprocals by induction
Please help me to understand this exercise and, if possible, to solve it.
Show that every positive rational number $\frac{m}{n}\in (0,1)$ can be represented as $$\frac{m}{n} = \frac{1}{q_1} + \frac{1}{q_2} + \cdots + \frac{1}{q_r},$$ where $q_1 \lt q_2\lt \cdots \lt q_r$ are positive integers and $q_i$ is a divisor of $q_{i+1}$ for all $i=1,2,\ldots,r-1$.
I don't get the last part. Well, actually, I don't know how to solve it in general.
The last part means that $q_1 = d_1$, $q_2 = d_2 \cdot d_1$, $q_r = d_r \cdots d_2 \cdot d_1$. Consider an example $m=4$ and $n=5$. Then $d_1=d_2=2$ and $d_3 = 5$ is a solution. – Sasha Oct 14 '11 at 16:23
It looks like some sort of a constrained Egyptian fraction decomposition... – J. M. Oct 14 '11 at 16:39
Find the least integer $q_1$ with
$$\frac mn\ge\frac1{q_1}\tag1$$
and write
$$\frac mn=\frac1{q_1}(1+x)\;.$$
Solving for $x$ yields
$$x=\frac{q_1m-n}n\;.$$
By $(1)$, we have $x\ge0$. Since $q_1$ is the least number for which $(1)$ holds, we also have
$$\frac mn\lt\frac1{q_1-1}\;,$$
which yields
$$q_1m-n\lt m$$
and thus
$$x\lt\frac mn\lt1\;.$$
Thus we have $x\in[0,1)$. If we slightly generalize the claim to include $0$ (represented by the empty sum, $r=0$), the result follows by induction: $x$ has the same denominator as $m/n$ but a strictly smaller numerator, so applying the same procedure to $x$ recursively must eventually terminate. Substituting the results and multiplying out all the parentheses yields a representation of the form $\frac mn=\frac1{a_1}+\frac1{a_1a_2}+\cdots+\frac1{a_1a_2\cdots a_r}$, where $a_1,a_2,\ldots$ are the successive least denominators produced by the recursion. The denominators $q_i=a_1a_2\cdots a_i$ then satisfy $q_i\mid q_{i+1}$, and they increase strictly because every $a_i$ with $i\ge2$ is at least $2$ (since at the later stages $x\lt1$).
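Here is the recursion of this answer written out in Python, just to make the construction explicit (my sketch, not part of the original argument); the running product of the successive least denominators gives the $q_i$, so divisibility is automatic:

```python
from fractions import Fraction

def decompose(m, n):
    """0 < m/n < 1  ->  [q_1, q_2, ...] with q_1 < q_2 < ..., q_i | q_{i+1},
    and 1/q_1 + 1/q_2 + ... == m/n."""
    qs, prod = [], 1
    x = Fraction(m, n)
    while x != 0:
        q = -(-x.denominator // x.numerator)  # least q with x >= 1/q
        prod *= q
        qs.append(prod)
        x = x * q - 1  # the "x" of the answer: same denominator, smaller numerator
    return qs

print(decompose(4, 5))  # [2, 4, 20]: 1/2 + 1/4 + 1/20 = 4/5
```

For $4/5$ this reproduces Sasha's comment above ($d_1=d_2=2$, $d_3=5$).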
Just to add: The "induction" we are doing is a kind of "bounded" induction: we fix $n$, and we do induction on $m$, but only need to worry about $1\leq m\lt n$. – Arturo Magidin Oct 14 '11 at 16:32
|
## Examples of common false beliefs in mathematics. [closed]
The first thing to say is that this is not the same as the question about interesting mathematical mistakes. I am interested in the type of false beliefs that many intelligent people have while they are learning mathematics, but quickly abandon when their mistake is pointed out -- and also in why they have these beliefs. So in a sense I am interested in commonplace mathematical mistakes.
Let me give a couple of examples to show the kind of thing I mean. When teaching complex analysis, I often come across people who do not realize that they have four incompatible beliefs in their heads simultaneously. These are
(i) a bounded entire function is constant; (ii) sin(z) is a bounded function; (iii) sin(z) is defined and analytic everywhere on C; (iv) sin(z) is not a constant function.
Obviously, it is (ii) that is false. I think probably many people visualize the extension of sin(z) to the complex plane as a doubly periodic function, until someone points out that that is complete nonsense.
A second example is the statement that an open dense subset U of R must be the whole of R. The "proof" of this statement is that every point x is arbitrarily close to a point u in U, so when you put a small neighbourhood about u it must contain x.
Since I'm asking for a good list of examples, and since it's more like a psychological question than a mathematical one, I think I'd better make it community wiki. The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $(x+y)^2=x^2+y^2$, even if they are widely believed) and that the reasons they are found plausible are quite varied.
I have to say this is proving to be one of the more useful CW big-list questions on the site... – Qiaochu Yuan May 6 2010 at 0:55
The answers below are truly informative. Big thanks for your question. I have always loved your posts here on MO and wordpress. – To be cont'd May 22 2010 at 9:04
wouldn't it be great to compile all the nice examples (and some of the most relevant discussion / comments) presented below into a little writeup? that would make for a highly educative and entertaining read. – S. Sra Sep 20 2010 at 12:39
It's a thought -- I might consider it. – gowers Oct 4 2010 at 20:13
Meta created meta.mathoverflow.net/discussion/1165/… – quid Oct 8 2011 at 14:27
|
# Greek Omega (Ohm) in running text ACM Whitelist conform
I am trying to get a Greek omega letter as an Ohm sign in the running text for my paper. I found a solution using siunitx or textgreek here. Unfortunately, the packages are not on the whitelist of ACM that can be found here. Any suggestions on how to solve this problem?
• Use the \ohm command from siunitx if it is on the list. You'll get both an upright Ω and a correct spacing with the \SI command Jul 16, 2020 at 11:57
• As mentioned in my question the package is not on the list. Jul 16, 2020 at 11:58
• It has the old SIunits package, though... and that should offer an upright Ω somewhere. Jul 16, 2020 at 12:10
• If you are restricted on packages, can you use simply \ensuremath{\Omega}? Jul 16, 2020 at 12:15
• You'll probably want to do \newcommand{\ohm}{\ensuremath{\Omega}} to (a) reduce typing and (b) allow you to easily make any necessary adjustments to formatting later. Jul 16, 2020 at 14:18
You can do this without any packages at all, by taking the symbol from the (default) OT1 encoding.
\documentclass{article}
\usepackage[OT1,T1]{fontenc}
\usepackage{textcomp} % Not needed since 2020
\usepackage[utf8]{inputenc} % Not needed since 2018
\providecommand\textOmega{{\fontencoding{OT1}\selectfont\symbol{"0A}}}
\DeclareUnicodeCharacter{2125}{\textOmega} % Ohm symbol
\begin{document}
550~µΩ--600~\textmu\textOmega
\end{document}
In math mode with amsmath, you’d wrap this in \textnormal. If you want the units to stay upright even when the text is italicized, add \upshape between \fontencoding and \selectfont.
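For instance, a minimal math-mode usage, assuming the \textOmega definition from the preamble above (my addition):
$Z_0 = 50~\textnormal{\textOmega}$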
• These are precisely the kinds of things I'm trying very hard to understand: picking one character from a font, the various encodings and what they actually mean, 8 bit, unicode, etc. Where are these things documented and explained? Jul 16, 2020 at 20:00
• @LaTeXereXeTaL The font tables are in the LaTeX Font Encoding Guide, and \DeclareUnicodeCharacter is defined by inputenc and documented there. Jul 16, 2020 at 20:14
• @LaTeXereXeTaL The reference to the text font commands is LaTeX2e Font Selection Jul 16, 2020 at 20:15
• @LaTeXereXeTaL But for a good user-friendly document, you might go to the LaTeX Wikibook or Overleaf’s documentation. Jul 16, 2020 at 20:17
• Okay some of these I'd seen before, but the second one you link to looks to be what I'm looking for. Jul 16, 2020 at 20:20
|
Physics a worthless degree?
After reading some posts here it seems that the general consensus is that if one is interested in physics and math then one should be an electrical engineer, unless one wants to teach and make around $30k/yr, which is Wal-Mart cashier pay where I live (New Mexico). Is this really the case? Is getting a degree in physics essentially mental masturbation, or is one able to gain employment with the degree and attain a salary greater than that of a janitor? Also, I don't really understand the low salary quote often posted, around $30-40k/yr, when the WSJ posts incomes that are comparable to EE degrees.
Electrical Engineering
starting median: $60,900.00; mid-career median: $103,000.00
Physics
starting median: $50,300.00; mid-career median: $97,300.00
http://online.wsj.com/public/resourc...Back-sort.html
Bottom Line: Is Physics near worthless compared to EE post graduation if one wants to work in industry?
If you are in it for the money...
Well, I think that is what I'm not understanding. I can't feed my family on hopes and dreams. I don't really like the argument "well, not if you're in it for the money." We work to get money to pay bills, eat, and enjoy life. Are you saying that one should get a degree in physics for the love of physics and just deal with being on welfare?
Quote by Archi: Well, I think that is what I'm not understanding. I can't feed my family on hopes and dreams. I don't really like the argument "well, not if you're in it for the money." We work to get money to pay bills, eat, and enjoy life. Are you saying that one should get a degree in physics for the love of physics and just deal with being on welfare?
The salaries you quoted are nowhere near welfare-level.
It's unusual for someone to go into science for the money. If you don't have a passion for it, you'll likely hate it (it really is a *lot* of work) and end up switching to something else, anyway.
So where did you get the 30-40k number which is in direct conflict with the WSJ numbers? The WSJ numbers seem in line with what I have seen. People seem to have this erroneous idea that physics is useless, maybe because most people don't even know what physics is.
I come from experience. My parents supported three children living on $35k/year in a decent neighborhood and going to an actually good school (no welfare, ever). So, living off of $50,300/year is good. You just have to know how to budget. $97k/year? That is a lot of money to live off yearly. I honestly would not know what to do with most of it.

Some subjects by themselves can be useless - depending on what you think is and is not useful/useless. You want other skills to be able to get a job. A physics major can study other things than just physics. What is needed is to know how to operate equipment and how to handle devices and machines, but at least physics can help you think about how to study and find solutions for problems or objectives. Study and get training for more than just one major field; there are useful and related courses to make a person marketable.

These salary surveys are almost all worthless. The starting salary ones are compiled by the university departments to encourage students; they are a lot more diligent in tracking down students who started as graduate trainees at $MEGA-CORP on high salaries than they are at including those who are taking a year off or unemployed. The mid-career figures are compiled from industry bodies, so the E-Eng one includes lots of certified engineers now working as middle managers, while the physics one will include lots of postdocs and junior professors who are members of the AAAS but will miss those working as traders on Wall St at 10x these figures.
So you throw up some random number you cooked up from nothing and then ask why that number is right and the WSJ numbers are wrong? Is that what's going on here?
Quote by Archi: After reading some posts here it seems that the general consensus is that if one is interested in physics and math then one should be an electrical engineer, unless one wants to teach and make around $30k/yr, which is Wal-Mart cashier pay where I live (New Mexico). Is this really the case?

Nope - that's complete nonsense. And I don't know where you're getting that 'general consensus' from - I spend a lot of time in these parts of the forums and I definitely wouldn't say that. The "physics and maths is only useful for high school teaching" is an opinion normally only held by people that don't know anything about physics or mathematics. I wouldn't expect anyone else to be so naive.

Quote by Archi: Is getting a degree in physics essentially mental masturbation

Nope.

Quote by Archi: or is one able to gain employment with the degree and attain a salary greater than that of a janitor?

Yep.

Quote by Archi: Bottom Line: Is Physics near worthless compared to EE post graduation if one wants to work in industry?

Nope. Actually, a physics graduate qualifies for most of the same jobs electrical engineering graduates do as well. They have a similar skill set. You're confusing working *in* physics with jobs that a physics degree enables you to do. A physics degree is one of the most versatile around - if you want to work in any of the major engineering disciplines, you can find a way to do that as a physics graduate. If you want to make a lot of money, you can find a way to do that. It just depends how you prioritise everything.

To be honest, these posts are quite tiring. I say in all of my responses to questions like this that for work in industry, a university degree is about the *skills* that you learn. Physics, maths and engineering all require similar technical skills. If you go and work for some company, there's a good chance they want to train you (and will need to train you) from the ground up in the work that they do. Sure, you might find some mech-eng job that requires knowledge of finite element analysis, something that a physics graduate might not have - but it's something that is easily learned for one that is familiar with how maths and programming work. So, you graduate in physics and find that the only jobs out there need finite element knowledge. Go get a book on it, and read it. Voilà, you can now apply to those jobs.

As for actually working in physics post-degree, it isn't so easy. Academic research is a tough route - it's hard to get into and the pay isn't great, but like twofish-quant always says, you won't starve. There are national labs in most countries, and lots of industrial jobs that carry out physics-related research, however. The difficult thing post-degree for any student is finding the job that you *want* to do. If you end up enjoying something extremely specific and won't settle for anything else, then you'll find it extremely difficult to get a job. If you can market yourself, and want a job that challenges your mind and you can work on interesting problems, then you shouldn't find it too bad.

It's sad when people look at me and say "well, what are you going to do with a physics degree?" I wish people had the slightest idea of what someone with a physics degree is capable of (in my opinion). I think someone with a physics B.S. would be skilled enough to handle just about any technical task, or at least be trained fairly quickly in it.
Some of the problems we solve in this major are incredibly difficult, and if someone can make it through it I know for a fact this person has discipline, persistence, and abstract/creative thinking. Almost half of the freshman physics class has switched out of the major since I started because it was too much to handle. Also, if you look at the statistics from 2009 physics graduates that someone posted not long ago, almost none of them were without a job. It's not a dead end, and it's a great foundation for anything, in my opinion.

Quote by Archi: Is getting a degree in physics essentially mental masturbation or is one able to gain employment with the degree and attain a salary greater than that of a janitor? Bottom Line: Is Physics near worthless compared to EE post graduation if one wants to work in industry?

I spent about 7 years in industry doing physics with my physics degree, getting paid good money. YMMV. Archi - you're possibly confusing typical post-doctoral positions with the median salary for someone beginning work with a physics degree. Post docs tend to start in the $30-40k range, but they aren't the only jobs available to someone with a physics background. Physics graduates work in a broad range of fields. As a result, the distribution of their starting salaries tends to be broader than that of graduates of a professional field like electrical engineering.
Physics degrees are in high demand and very useful for things like building airplanes and cars; there are tons of uses for physics, unlike sociology, English, psychology, history... the list goes on.
Quote by Phyisab**** So you throw up some random number you cooked up from nothing and then ask why that number is right and the WSJ numbers are wrong? Is that what's going on here?
Kind of. Going through several posts in the academic advisement forum, one will often see the number $30k a year given when asked what a physics PhD will make, usually in reference to a postdoc. The reason I'm asking this question is mostly because I love physics, but everyone is telling me that you will make less money, have a harder time finding a job, and an even harder time finding a permanent job; that if you want to go into industry then an EE will get the job over a physics major, except for an extremely small number of jobs. What I would like to hear is: "With Physics you'll make enough to support a family, more than a Wal-Mart cashier." With respect to the WSJ numbers, that doesn't make sense to me. The pay is nearly the same, so I don't get the "be an engineer if you're only in it for the money" remarks. Also, I don't know how many times I have to say it, I LOVE PHYSICS AND MATH. That being said, I also love my son, and being able to feed my son. Apparently that makes me money hungry?

Quote by fasterthanjoao: Nope - that's complete nonsense. And I don't know where you're getting that 'general consensus' from - I spend a lot of time in these parts of the forums and I definitely wouldn't say that. The "physics and maths is only useful for high school teaching" is an opinion normally only held by people that don't know anything about physics or mathematics. I wouldn't expect anyone else to be so naive.

I didn't just decide it for you. I have been reading these topics as I try to decide between EE and Physics, and the underlying opinion from those topics I have read seems to point to EE being more employable.

"...A physics degree can be a tougher sell than, say, a professional degree in engineering, but that doesn't mean there aren't opportunities. It pays to develop some marketable skills along the way such as programming, network administration, technical group facilitation, electronics, mathematical modeling, teaching, etc., that can transfer directly into the workplace. If you explore some threads around here or poke around on the AIP website, you'll find lots of possible avenues for exploration." http://www.physicsforums.com/showthread.php?t=462288

"...I really wish I had done that EE degree, because there are so many jobs that require it. Almost no employers ask for an EPhys degree by name, because it's so rare. And if you're applying for employment at a large corporation, a resume with "BSc EPhys" might not even make it past an automated filter." http://www.physicsforums.com/showthread.php?t=215668

"...After getting my BS in physics and being unemployed for a few months, I finally got 2 job offers from aerospace/defense companies, one of which is EE/ME-related. However, it took me about 5 months to get these offers. BUT, I did get plenty of interviews for software engineering/analyst/programmer positions, because I had listed I used C++ on my undergrad physics research projects. I could've gotten those jobs if I had a stronger C++ background. So my point is that while it's much harder for physics majors to get jobs in say EE or ME than engineering majors, it's not impossible. It's all about how much programming, experimental/lab skills, powerpoint presentation skills, and other skills you have that matters. I've written an article about this." http://www.physicsforums.com/showthr...ght=physics+EE

Again, I'm not just pulling stuff out of the ether. I love physics, I just want to make sure I'll be able to work outside of an academic setting, i.e.
I don't feel like being a postdoc for 5-10 years.

Quote by Archi: Bottom Line: Is Physics near worthless compared to EE post graduation if one wants to work in industry?
Any applied science training should give you decent opportunities out there in the workforce and physics is no exception.
|
# HiTZ/A2T_RoBERTa_SMFA_ACE-arg
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
# A2T Entailment model
Important: These pretrained entailment models are intended to be used with the Ask2Transformers library but are also fully compatible with the ZeroShotTextClassificationPipeline from Transformers.
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems (Yin et al., 2019; Wang et al., 2021; Sainz and Rigau, 2021). Recent research addressed Information Extraction problems with the same idea (Lyu et al., 2021; Sainz et al., 2021; Sainz et al., 2022a, Sainz et al., 2022b). The A2T entailment models are first trained with NLI datasets such as MNLI (Williams et al., 2018), SNLI (Bowman et al., 2015) or/and ANLI (Nie et al., 2020) and then fine-tuned to specific tasks that were previously converted to textual entailment format.
The model name describes the configuration used for training as follows:
### HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]
• pretrained_model: The checkpoint used for initialization. For example: RoBERTa-large.
• NLI_datasets: The NLI datasets used for pivot training.
• S: Stanford Natural Language Inference (SNLI) dataset.
• M: Multi Natural Language Inference (MNLI) dataset.
• F: Fever-nli dataset.
• A: Adversarial Natural Language Inference (ANLI) dataset.
• finetune_datasets: The datasets used for fine tuning the entailment model. Note that for more than 1 dataset the training was performed sequentially. For example: ACE-arg.
Some models like HiTZ/A2T_RoBERTa_SMFA_ACE-arg have been trained marking some information between square brackets ('[[' and ']]') like the event trigger span. Make sure you follow the same preprocessing in order to obtain the best results.
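As a quick illustration of that compatibility, here is a minimal sketch using the Transformers zero-shot pipeline. Only the model name comes from this card; the input sentence and candidate labels are invented for the example:

```python
from transformers import pipeline

# Load this checkpoint through the standard zero-shot classification pipeline.
classifier = pipeline("zero-shot-classification", model="HiTZ/A2T_RoBERTa_SMFA_ACE-arg")

# Mark the event trigger with [[ ]] as the card recommends (example text is made up).
result = classifier(
    "The [[attack]] in the capital killed three soldiers.",
    candidate_labels=["attacker", "target", "place"],
)
print(result["labels"][0], result["scores"][0])
```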
## Cite
If you use this model, consider citing the following publications:
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
|
# How can I get voice recognition features into the Unity Game Engine? [closed]
How can I get voice recognition features into the Unity Game Engine? Is there a plug-in or a framework (hopefully freeware) that I could use? If so, do you have any ideas on how to install it? Also, how much of a problem would there be with background noises in the game interfering with the voice inputs into the game? Are there any examples of games on the market that use this (besides Spain 3D for the Torque Game Engine)?
• All FOSS free-text transcription systems are generally terrible. Almost certainly not worth the effort it will take unless you are willing to shell out \$ to work with someone like Nuance. – coderanger Mar 29 '11 at 20:13
• @coderanger: Free-text transcription is hardly the only use of voice recognition, and probably least likely to be necessary in games. – user744 Mar 29 '11 at 20:34
• Sure, but if you know enough about speech recognition to build your own language model, you probably wouldn't be asking on here :-) – coderanger Mar 29 '11 at 22:42
• This may have been closed, but the answers could be misleading as of Unity 5.4. If you're targeting Windows alone, you can make use of the UnityEngine.Windows.Speech namespace. KeywordRecognizer and DictationRecognizer are the two classes of interest. – zcabjro Nov 10 '16 at 9:41
|
# 7: Equilibria, Equilibrium Constants and Acid-Bases (Worksheet)
Name: ______________________________
Section: _____________________________
Student ID#:__________________________
Work in groups on these problems. You should try to answer the questions without referring to your textbook. If you get stuck, try asking another group for help.
Learning Objectives
• Understand the concept of the reaction quotient, $$Q$$, as a means of determining whether or not a system is at equilibrium, and if not, how a system must proceed to reach equilibrium
• Understand how to set up a calculation of all species concentrations at equilibrium, given initial concentrations and the value of the equilibrium constant
• Understand how various stresses cause a shift in the position of equilibrium on the basis of Le Chatelier’s Principle
• Understand the Brønsted-Lowry theory of acids and bases
• Understand the concepts of conjugate acid-base pairs
When a reaction reaches equilibrium we can calculate the concentrations of all species, both reactants and products, by using information about starting concentrations or pressures and the numerical value of the equilibrium constant. Knowing how to set up and solve equilibrium problems for gas-phase systems is essential preparation for applying equilibrium concepts to more complicated systems, such as acid-base chemistry. The mixture of reactants and products can often be altered by applying a stress to the system (changing species concentrations, changing pressures, changing temperature, etc.), and the shift in the position of the equilibrium can be understood and predicted on the basis of Le Chatelier’s Principle.
Some of the more important applications of equilibrium concepts are concerned with solutions of acids or bases. This week we will look at the definitions of acids and bases in the Brønsted-Lowry theory, and next week we will take up equilibrium calculations of acid-base systems based on that theory.
## Success Criteria
• Be able to set up and solve for all species using $$K_c$$ or $$K_p$$
• Be able to predict the direction of a reaction on the basis of $$Q$$
• Be able to check equilibrium calculation results, using a $$Q$$ calculation
• Be able to apply Le Chatelier’s Principle to determine the direction a system at equilibrium must shift to reach a new equilibrium
• Be able to calculate concentrations once equilibrium is reestablished after a stress that causes a shift from an original equilibrium
• Be able to analyze acid-base reactions in terms of conjugate acid-base pairs and proton transfer
• Be able to write the formula of an acid’s conjugate base or a base’s conjugate acid
## The Reaction Quotient: Q
We can calculate the ratio of concentrations or pressures of reactants and products, like $$K_c$$ and $$K_p$$, at any time in the course of a reaction, whether or not the system has achieved equilibrium. When the system is not at equilibrium, the ratio of the product concentrations raised to their stoichiometric coefficients to the reactant concentrations raised to their stoichiometric coefficients is called the reaction quotient and given the symbol $$Q$$. The form of $$Q$$ is the same as the form of $$K_c$$ or $$K_p$$, but the values for products and reactants are not presumed to be the values at equilibrium. The value of $$Q$$ relative to $$K_c$$ or $$K_p$$ indicates the direction in which the reaction must run to achieve equilibrium. If $$Q < K$$, then the reactant concentrations or pressures are too high and the product concentrations or pressures are too low, relative to what they would be at equilibrium. To achieve equilibrium, the reaction must run in the forward direction (shift right), using up reactants and forming more products. Conversely, if $$Q > K$$, then the reactant concentrations are too low and the product concentrations are too high, relative to what they would be at equilibrium. To achieve equilibrium, the reaction must run in the reverse direction (shift left), using up products and reforming more reactants. If $$Q = K$$, then the system is already at equilibrium.
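As a quick illustration (not part of the worksheet itself), the comparison can be scripted directly; the numbers below are those of Q1(a):

```python
def reaction_quotient(h2, i2, hi):
    """Q for H2(g) + I2(g) <=> 2 HI(g): the product concentration over the
    reactant concentrations, each raised to its stoichiometric coefficient."""
    return hi**2 / (h2 * i2)

Kc = 54.8
Q = reaction_quotient(0.360, 0.360, 2.50)  # Q1(a) concentrations in mol/L
print(round(Q, 1))                         # ~48.2
print("shift right" if Q < Kc else "shift left" if Q > Kc else "at equilibrium")
```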
### Q1
For the reaction
$\ce{H2(g) + I2(g) <=> 2 HI(g)} \nonumber$
with $$K_c = 54.8$$ at 425 °C. Are the following mixtures of $$\ce{H2}$$, $$\ce{I2}$$, and $$\ce{HI}$$ at 425 °C at equilibrium? If not, in which direction must the reaction proceed to achieve equilibrium?
1. $$\ce{[H2] = [I2]} = 0.360\, mol/L$$, $$\ce{[HI]} = 2.50\, mol/L$$
2. $$\ce{[H2]} = 0.120\, mol/L$$, $$\ce{[I2]} = 0.560\, mol/L$$, $$\ce{[HI]} = 2.20 \,mol/L$$
3. $$\ce{[H2]} = 0.120 \,mol/L$$, $$\ce{[I2]} = 0.219\, mol/L$$, $$\ce{[HI]} = 1.20\, mol/L$$
## Calculating Amounts of All Species in an Equilibrium Mixture
Very often we know the initial concentrations or pressures of reactants and products, and we want to know their values when equilibrium is established. To set up a calculation of these amounts, we generally let the variable $$x$$ represent the change in an amount of a particular reactant or product that must occur to reach equilibrium. Then, using the stoichiometry of the reaction, we write algebraic expressions to represent the amounts that will be present at equilibrium. We then substitute these algebraic expressions into the concentration or pressure terms of the equilibrium constant expression. Solving for the value of x allows calculating the numerical values of concentration or pressure for all reactants and products. In setting up the problem, it is useful to write down under each species in the balanced equation the initial amount, the algebraic expression for how each amount will change, and the algebraic expressions for each final amount (an ICE table). It is also useful to know the direction in which the reaction must run in order to reach equilibrium. If only reactants are initially present, the reaction must run to the right, forming products. Similarly, if only products are initially present, the reaction must run to the left, forming reactants.
But if the initial mixture contains amounts of both reactants and products, the direction in which the reaction must run to achieve equilibrium may not be obvious. In such cases, a $$Q$$ calculation can be done using the initial concentrations of all species. Remember, if $$Q < K$$ the reaction will run in the forward direction, but if $$Q > K$$ it will run in the reverse direction. Knowing this at the beginning helps in setting up the algebraic expressions for the changes that must take place to reach equilibrium.
For example, consider the equilibrium,
$\ce{I2(g) + Br2(g) <=> 2 IBr(g)} \nonumber$
for which the equilibrium constant $$K_c = 280$$ at 150 °C. Suppose 0.500 mol of $$\ce{I2}$$ and 0.500 mol of $$\ce{Br2}$$ were placed in a one-liter vessel at 150 °C. What will the concentrations of all species be once equilibrium is established?
Because we have no $$\ce{IBr}$$ initially present, the reaction must go to the right in order to reach equilibrium. To write algebraic expressions for the equilibrium concentrations, we could let x equal the concentration of $$\ce{I2}$$ that is consumed in forming the equilibrium concentration of $$\ce{IBr}$$. But for every mole of $$\ce{I2}$$ that reacts, a mole of $$\ce{Br2}$$ must also react, so $$x$$ also represents the amount of $$\ce{Br2}$$ consumed. Because their initial concentrations were each 0.500 mol/L, at equilibrium
$\ce{[I2] = [Br2] = } 0.500 - x, \label{Eq100}$
in units of mol/L. At the same time, for every mole of $$\ce{I2}$$ and $$\ce{Br2}$$ that is consumed, two moles of $$\ce{IBr}$$ appear as product, so its concentration at equilibrium will increase by $$2x$$. Because there was no $$\ce{IBr}$$ initially present, its equilibrium concentration will be $$0 + 2x$$; i.e., $$\ce{[IBr]} = 2x$$. From these considerations, our ICE table looks like the following:
| ICE Table | $$\ce{I2(g)}$$ | $$\ce{Br2(g)}$$ | $$\ce{IBr(g)}$$ |
|---|---|---|---|
| Initial | $$0.500$$ | $$0.500$$ | $$0$$ |
| Change | $$-x$$ | $$-x$$ | $$+2x$$ |
| Equilibrium | $$0.500 - x$$ | $$0.500 - x$$ | $$2x$$ |
Substituting these algebraic expression into the $$K_c$$ expression, we obtain
\begin{align*} K_c &= \ce{\dfrac{[IBr]^2}{[I2][Br2]}} \\[4pt] &= 280 \\[4pt] &= \dfrac{(2x)^2}{(0.500 - x)(0.500 - x)} \\[4pt] &= \dfrac{4x^2}{(0.500 - x)^2} \end{align*}
We could expand the denominator and rearrange the resulting expression to solve it for $$x$$ as a quadratic equation. (Only one root will make sense for the given problem, and the other will be rejected.) However, in this case, the numerator and denominator are both perfect squares, so it would be faster to take the square root of both sides of the expression and then solve for $$x$$. Taking the square root of both sides, we obtain
$\sqrt{ 280} = 16.7 = \dfrac{2x}{0.500 -x} \nonumber$
Rearranging
\begin{align*} 8.36 - 16.7 x & = 2x \\[4pt] 18.7 x &=8.36 \\[4pt] x &= 0.446 \label{X} \end{align*} \nonumber
We can now substitute $$x$$ back into our algebraic expression to obtain numerical values for each of the concentrations (Equation \ref{Eq100}).
$[\ce{I2}] = [\ce{Br2}] = 0.500 – x = 0.500 – 0.446 = 0.053 \,mol/L \nonumber$
$[\ce{IBr}] = 2x = (2)(0.446 ) = 0.893 \,mol/L \nonumber$
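A quick numerical cross-check of this worked example (an addition, following the square-root shortcut used above):

```python
from math import sqrt

Kc = 280
root = sqrt(Kc)                   # from 2x/(0.500 - x) = sqrt(Kc)
x = 0.500 * root / (2 + root)
print(round(x, 3))                           # 0.447
print(round(0.500 - x, 3), round(2 * x, 3))  # 0.053 and 0.893 mol/L
```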
### Q2
For the reaction
$\ce{(NH3)B(CH3)3 (g) <=> NH3 (g) + B(CH3)3 (g)} \nonumber$
$$K_p = 4.62 \,atm$$ at 100 °C. If the partial pressures of $$\ce{NH3(g)}$$ and $$\ce{B(CH3)3 (g)}$$ in an equilibrium mixture at 100 °C are both 1.52 atm, what is the partial pressure of $$\ce{(NH3)B(CH3)3 (g)}$$ in the mixture?
### Q3
At 425 °C, 1.00 mol of $$\ce{H2(g)}$$ and 1.00 mol of $$\ce{I2(g)}$$ are mixed in a one liter vessel. What will be the concentrations of $$\ce{H2 (g)}$$, $$\ce{I2(g)}$$, and $$\ce{HI(g)}$$ at equilibrium? $$K_c = 54.8$$ at 425 °C for the reaction:
$\ce{H2 (g) + I2 (g) <=> 2 HI(g)} \nonumber$
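One way to verify your answer afterwards (a sketch, not part of the worksheet, using sympy and the ICE setup described above):

```python
import sympy as sp

x = sp.symbols("x", positive=True)
# ICE setup for Q3: [H2] = [I2] = 1.00 - x, [HI] = 2x, Kc = 54.8
sol = sp.solve(sp.Eq((2 * x) ** 2 / (1.00 - x) ** 2, 54.8), x)
x_val = [s for s in sol if 0 < s < 1.00][0]
print(x_val, 1.00 - x_val, 2 * x_val)  # x ~ 0.787 -> [H2] = [I2] ~ 0.213, [HI] ~ 1.57
```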
## Using Q to Check Your Results
In general, equilibrium calculations involve a lot of mathematical manipulation, and it is easy to make a mistake. How can you know when you have gone astray? The best way is to substitute your found values into the expression for $$Q$$ and compare the calculated value with the given $$K$$. Because of rounding, do not expect an exact match. However, if $$Q \neq K$$ by a wide margin, check your work. In our example calculation of the equilibrium concentrations of $$\ce{I2(g)}$$, $$\ce{Br2(g)}$$, and $$\ce{IBr(g)}$$, a $$Q$$ calculation with the values we found gives the stated value of $$K_c$$.
\begin{align*} Q_c & = \ce{\dfrac{[IBr]^2}{[I2][Br2]}} \\[4pt] &= \dfrac{(0.893)^2}{(0.053)(0.053)} \\[4pt] &\approx 280 = K_c \end{align*} \nonumber
Thus, we have confidence that the calculation is correct. When doing this kind of check, realize that arithmetic loss of significant figures and rounding differences may cause the calculated value of $$Q$$ to be somewhat different from $$K$$, but the difference should not be great.
### Q4
Check the values you found in Q3 by calculating the value of $$Q$$. Does your value of $$Q_c$$ agree with $$K_c = 54.8$$?
### Q5
Suppose $$0.800\, mol\, \ce{H2(g)}$$, $$0.900 \,mol \, \ce{I2 (g)}$$, and $$0.100\, mol\, \ce{HI(g)}$$ are mixed in a one liter vessel at 425 °C. $$K_c = 54.8$$ at 425 °C for the reaction
$\ce{H2(g) + I2(g) <=> 2HI(g)} \nonumber$
1. In which direction must the reaction run (forward or backwards) to achieve equilibrium?
2. What are the concentrations of all species at equilibrium? Check your final answers with a $$Q$$ calculation.
## Le Chatelier's Principle
In 1884 Henri Le Chatelier formulated the following principle:
If a stress is applied to a system at equilibrium, the system will tend to adjust to a new equilibrium, which minimizes the stress, if possible.
The stresses are changes in concentration, pressure, and temperature. If the stress causes a change in the amounts of reactants and products present once equilibrium is reestablished, we say that a shift in the position of the equilibrium has occurred. If we think of a reaction equation in the usual way, with reactants on the left and products on the right, a stress to the system at equilibrium may cause either of the following:
• Shift to the right: more reactant(s) consumed resulting in greater product and lesser reactant concentrations
• Shift to the left: more product(s) consumed resulting in greater reactant and lesser product concentrations
Sometimes the stress cannot be alleviated by either kind of shift, in which case the original equilibrium is maintained. The effects of each kind of stress on a system at equilibrium are summarized below.
### Concentration change
• $$K_c$$ or $$K_p$$ remains the same.
• Increasing reactant concentrations or decreasing product concentrations causes a shift right (more product forms).
• Increasing product concentrations or decreasing reactant concentrations causes a shift left (more reactant forms).
### Pressure change
• $$K_c$$ or $$K_p$$ remains the same.
• Only changes that affect the partial pressures of reactants and/or products can cause a change. For example, adding an inert gas has no effect.
• Increasing pressure causes a shift to the side with the lower sum of coefficients on gas species. For example, increasing the pressure on an equilibrium mixture for the reaction, $\ce{N2 (g) + 3 H2 (g) <=> 2 NH3 (g)}, \nonumber$ causes a shift right.
• If the sum of coefficients on gas species is the same on the left and right, changing the pressure has no effect. For example, for an equilibrium mixture for the reaction, $\ce{H2 (g) + I2 (g) <=> 2 HI(g)}, \nonumber$ changing the pressure has no effect on the position of the equilibrium.
### Temperature change
• $$K_c$$ and $$K_p$$ values change!
• Raising the temperature drives the endothermic process. For example, for the reaction, $\ce{N2O4 (g) <=> 2 NO2 (g)} \nonumber$ with $$\Delta H = +58.0\, kJ/mol$$, raising the temperature on an equilibrium mixture will favor formation of more $$\ce{NO2 (g)}$$ because the forward reaction is endothermic.
• Lowering the temperature drives the exothermic process. For the same reaction, lowering the temperature on an equilibrium mixture will favor formation of more $$\ce{N2O4 (g)}$$, because the reverse reaction is exothermic.
• A catalyst has no effect on the position of the equilibrium, just how fast it gets there.
### Q6
For each of the following reactions at equilibrium, predict the effect (if any) the indicated stress would have on the position of the equilibrium. Note whether or not the value of the equilibrium constant changes.
1. $$\ce{H2 (g) + I2 (g) <=> 2 HI(g)}$$ with more $$\ce{HI(g)}$$ added.
2. $$\ce{N2 (g) + 3 H2 (g) <=> 2 NH3 (g)}$$ with $$\ce{NH3(g)}$$ removed as it forms.
3. $$\ce{2 NO(g) + Cl2 (g) <=> 2 NOCl(g)}$$ with overall pressure increased.
4. $$\ce{2 NO(g) + Cl2 (g) <=> 2 NOCl(g)}$$ with $$\Delta H = –75.5\, kJ/mol$$ and temperature increased.
5. $$\ce{H2O(g) + C(s) <=> H2(g) + CO(g)}$$ with overall pressure increased.
6. $$\ce{C(s) + O2(g) <=> CO2 (g)}$$ with overall pressure decreased.
7. $$\ce{N2 O4 (g) <=> 2 NO2 (g)}$$ with $$\ce{N2 (g)}$$ added, increasing overall pressure.
8. $$\ce{N2(g) + 3 H2 (g) <=> 2 NH3 (g)}$$ with iron powder added as a catalyst.
### Q7
An equilibrium mixture of $$\ce{H2(g)}$$, $$\ce{I2 (g)}$$, and $$\ce{HI(g)}$$ in a one-liter vessel at 425 °C is found to have the following concentrations:
• $$\ce{[H2 ]} = 0.146\, mol/L$$
• $$\ce{[I2 ]} = 0.246\, mol/L$$
• $$\ce{[HI]} = 1.41\, mol/L$$
If $$0.59\, mol$$ of $$\ce{HI}$$ is added to the vessel, what will be the concentrations of all species once equilibrium is reestablished? $$K = 54.8$$ at 425 °C for the reaction:
$\ce{H2 (g) + I2(g) <=> 2 HI(g)} \nonumber$
## Brønsted-Lowry Acid-Base Theory
In the Arrhenius theory, an acid is a substance that produces $$\ce{H^{+}}$$ ion in solution, and a base is a substance that produces $$\ce{OH^{-}}$$ ion in solution. This theory is only useful for aqueous (water) solutions. In 1923, Brønsted (Danish) and Lowry (English) independently proposed a new theory that could be applied to other solvent systems. The Brønsted-Lowry Theory uses the following definitions:
• Acid - a substance that donates protons ($$\ce{H^{+}}$$)
• Base - a substance that accepts protons ($$\ce{H^{+}}$$)
In this theory, acid-base reactions are seen as proton transfer reactions. For example, consider the reaction between the hydronium ion and ammonia in water:
$\ce{H3O^{+} (aq) + NH3 (aq) <=> H2O(l) + NH4^{+}(aq) } \nonumber$
This involves transfer of a proton from the hydronium ion to the ammonia molecule to produce water and the ammonium ion. This makes $$\ce{H3O^{+}}$$ an acid and $$\ce{NH3}$$ a base. In general terms, all Brønsted-Lowry acid-base reactions fit the general pattern
$\underbrace{\ce{HA}}_{\text{acid}} + \underbrace{\ce{B}}_{\text{base}} \ce{ <=> A^{-} + HB^{+}} \label{EQ1}$
When an acid, $$\ce{HA}$$, loses a proton it becomes its conjugate base, $$\ce{A^{-}}$$, a species capable of accepting a proton in the reverse reaction.
$\underbrace{\ce{HA}}_{\text{acid}} \ce{ <=>} \underbrace{\ce{A^{-}}}_{\text{Conjugate} \\ \text{Base}} + \ce{H^{+}} \label{EQ2}$
Likewise, when a base, $$\ce{B}$$, gains a proton, it becomes its conjugate acid, $$\ce{HB^{+}}$$, a species capable of donating a proton in the reverse reaction.
$\underbrace{\ce{B}}_{\text{base}} + \ce{H^{+}} \ce{<=>} \underbrace{\ce{HB^{+}}}_{\text{Conjugate} \\ \text{Acid}} \nonumber$
The Brønsted-Lowry concept of conjugate acid base pairs leads to the idea that all acid-base reactions are proton transfer reactions. The generic reaction between an acid $$\ce{HA}$$ and a base $$\ce{B}$$ can be viewed as a sum of two reactions (Equation \ref{EQ1} and \ref{EQ2}):
\begin{align*} \ce{HA \, & <=>\, A^{-}} + \cancel{\ce{H^{+}}} \\ \cancel{\ce{ H^{+}}} + \ce{ B \, &<=> \, HB^{+}} \end{align*}
$\underbrace{\ce{HA}}_{\text{acid}_1} + \underbrace{\ce{B}}_{\text{base}_2} \ce{<=> } \underbrace{\ce{A^{-}}}_{\text{base}_1} + \underbrace{ \ce{HB^{+}} }_{\text{acid}_2} \nonumber$
Species with the same subscripts form a conjugate acid-base pair.
### Q8
In the spaces provided, write the formulas of the conjugate bases of the given Brønsted-Lowry acids. Remember to write the proper charge, if any, for the conjugate base in each case.
| Acid | Conjugate base |
|---|---|
| $$\ce{HNO3}$$ | |
| $$\ce{NH4^{+}}$$ | |
| $$\ce{HSO4^{-}}$$ | |
| $$\ce{H2SO4}$$ | |
| $$\ce{HC2H3O2}$$ | |
### Q9
In the spaces provided, write the formulas of the conjugate acids of the given Brønsted-Lowry bases. Remember to write the proper charge, if any, for the conjugate acid in each case.
| Base | Conjugate acid |
|---|---|
| $$\ce{OH^{-}}$$ | |
| $$\ce{NH3}$$ | |
| $$\ce{HS^{-}}$$ | |
| $$\ce{SO4^{2-}}$$ | |
| $$\ce{CH3NH2}$$ | |
### Q10
Using the notation acid /base and base /acid (as shown above), identify the Brønsted-Lowry conjugate acid-base pairs in the following reactions.
1. $$\ce{HS^{-} (aq) + HC2H3O2 (aq) <=> H2S (aq) + C2H3O2^{-} (aq)}$$
2. $$\ce{HF (aq) + PO4^{3-} (aq) <=> F^{-} (aq) + HPO4^{2-} (aq) }$$
3. $$\ce{HNO2 (aq) + H2O(l) <=> NO2^{-} (aq) + H3O^{+}}$$
4. $$\ce{CH3NH2 (aq) + H2O(l) <=> CH3NH3^{+} (aq) + OH^{-} (aq)}$$
7: Equilibria, Equilibrium Constants and Acid-Bases (Worksheet) is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
|
# Math Help - Integral
1. ## Integral
Hi I have another integral question that I wanted to ask.
Question: Prove that $\displaystyle\int_{-\infty}^\infty\frac{e^{2x}}{(e^{3x}+1)^2}dx=\frac{ 2\pi}{9\sqrt{3}}$
2. Originally Posted by nonsingular
Hi I have another integral question that I wanted to ask.
Question: Prove that $\displaystyle\int_{-\infty}^\infty\frac{e^{2x}}{(e^{3x}+1)^2}dx=\frac{ 2\pi}{9\sqrt{3}}$
$\displaystyle\int_{-\infty}^0\frac{e^{2x}}{(e^{3x}+1)^2}dx +\int_{0}^\infty\frac{e^{2x}}{(e^{3x}+1)^2}dx$
the first one:
$\displaystyle\int_{-\infty}^0\frac{e^{2x}}{(e^{3x}+1)^2}dx =$
$\displaystyle\int_{-\infty}^0\left(\frac{e^x}{e^{3x}+1}\right)^2dx\;;$
substituting $u=e^x$ (the bounds change accordingly, and $dx= e^{-x}du =u^{-1} du$) gives
$\displaystyle\int_{0+}^1\frac{u^2}{(u^3+1)^2}u^{-1}du =$
$\displaystyle\int_{0+}^1\frac{u}{(u^3+1)^2}du$
For the indefinite, Wolfram gives:
integral x/(x^3+1)^2 dx = 1/18 (log(x^2-x+1)+(6 x^2)/(x^3+1)-2 log(x+1)+2 sqrt(3) tan^(-1)((2 x-1)/sqrt(3)))+constant
Pretty messy, but you can use it for the remaining half.
Hope this helps.
3. Hi, here is a complex analysis approach.
The integral $\int_{0}^{\infty} \frac{x}{(x^3+1)^2} ~dx$
Substitute $x = t^{\frac{1}{3}}$
The integral becomes $\frac{1}{3} \int_{0}^{\infty} \frac{t^{- \frac{1}{3}}}{(t+1)^2} ~ dt$
Then use this formula :
$\int_{0}^{\infty} x^{p-1} Q(x) ~dx = \frac{2\pi i}{1 - e^{2p \pi i}} \sum_{all~poles} Res[(z)^{p-1} Q(z)]$
Now substitute $p = \frac{2}{3}$:
$= \frac{1}{3} \frac{2\pi i}{e^{\frac{2 \pi i}{3}}(e^{-\frac{2 \pi i}{3}} - e^{\frac{2 \pi i}{3}} )} ( (z)^{- \frac{1}{3}} )'_{z= -1}$
$= \frac{1}{3} \frac{2\pi i}{e^{\frac{2 \pi i}{3}}(-2i sin{ \frac{2 \pi}{3}} )} ( -\frac{1}{3} ) (-1)^{-\frac{4}{3} }$
$= \frac{ 2 \pi i}{ -6 i \sin{ \frac{ 2\pi }{3}} (-3) }$
$= \frac{2 \pi}{9 \sqrt3 }$
4. Originally Posted by nonsingular
Hi I have another integral question that I wanted to ask.
Question: Prove that $\displaystyle \int_{-\infty}^\infty\frac{e^{2x}}{(e^{3x}+1)^2}dx=\frac{ 2\pi}{9\sqrt{3}}$
let $e^{3x}=\tan^2t.$ then $\int_{-\infty}^\infty\frac{e^{2x}}{(e^{3x}+1)^2}dx=\frac{ 2}{3}\int_0^{\frac{\pi}{2}}(\sin t)^{\frac{1}{3}} (\cos t)^{\frac{5}{3}} \ dt=\frac{1}{3}B \left(\frac{2}{3},\frac{4}{3} \right)=\frac{1}{3} \Gamma \left(\frac{2}{3} \right) \Gamma \left(\frac{4}{3} \right),$ and the result follows because $\Gamma \left(\frac{4}{3} \right)=\frac{1}{3}\Gamma \left(\frac{1}{3} \right)$ and by Euler's reflection formula:
$\Gamma \left(\frac{2}{3} \right) \Gamma \left(\frac{1}{3} \right)=\frac{\pi}{\sin(\frac{\pi}{3})}=\frac{2\pi }{\sqrt{3}}.$
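A quick numerical sanity check of the claimed value (my addition, not from the thread):

```python
from math import pi, sqrt, exp
from scipy.integrate import quad

f = lambda x: exp(2 * x) / (exp(3 * x) + 1) ** 2
value, _ = quad(f, -50, 50)           # the tails beyond |x| = 50 are negligible
print(value, 2 * pi / (9 * sqrt(3)))  # both ~0.40307
```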
|
# Simplify the expressions. (5x^(-2))(-6x^5)=?
Amina Richards 2022-10-13 Answered
Simplify the expressions.
$\left(5{x}^{-2}\right)\left(-6{x}^{5}\right)=?$
## Answers (1)
spornya1
Answered 2022-10-14 Author has 18 answers
Answer:
$\left(5{x}^{-2}\right)\left(-6{x}^{5}\right)=\left(5\right)\left(-6\right){x}^{-2+5}=-30{x}^{3}$
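A one-line check with sympy (my addition):

```python
import sympy as sp

x = sp.symbols("x")
print(5 * x**-2 * -6 * x**5)  # -30*x**3
```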
|
# RF and Microwave Circuits, Part 3
Shared by: Va Line Line | Date: | File type: PDF | Pages: 48
## RF and Microwave Circuits, Part 3
Document description
Transmission lines are needed for connecting various circuit elements and systems together. Open-wire and coaxial lines are commonly used for circuits operating at low frequencies. On the other hand, coaxial line, stripline, microstrip line, and waveguides are employed at radio and microwave frequencies. Generally, the low-frequency signal characteristics are not affected as the signal propagates through the line. However, radio frequency and microwave signals are affected significantly because of the circuit size being comparable to the wavelength...
Topics:
|
# Schatten-p
## Understanding and Enhancing Data Recovery Algorithms - From Noise-Blind Sparse Recovery to Reweighted Methods for Low-Rank Matrix Optimization
We prove new results about the robustness of noise-blind decoders for the problem of reconstructing a sparse vector from underdetermined linear measurements. Our results imply provable robustness of equality-constrained l1-minimization for random …
## Harmonic Mean Iteratively Reweighted Least Squares for Low-Rank Matrix Recovery
We propose a new iteratively reweighted least squares (IRLS) algorithm for the recovery of a matrix $X \in \mathbb{C}^{d_1 \times d_2}$ of rank $r \ll \min(d_1,d_2)$ from incomplete linear observations, solving a sequence of low complexity linear …
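For orientation, here is a minimal "impute-and-truncate" baseline for the same low-rank matrix completion setting (an illustration only; it is not the harmonic-mean IRLS algorithm of the paper):

```python
import numpy as np

def svd_impute(M_obs, mask, rank, n_iter=200):
    """Iteratively replace the unobserved entries of M_obs with a rank-`rank`
    SVD truncation, then re-impose the observed entries."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                         # keep the top `rank` singular values
        X = np.where(mask, M_obs, (U * s) @ Vt)
    return X
```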
## Harmonic Mean Iteratively Reweighted Least Squares for Low-Rank Matrix Recovery
This is a first conference version of the paper on Harmonic Mean Iteratively Reweighted Least Squares.
|
Prove that $a^2 + b^2 \geq 8$ if $x^4 + ax^3 + 2x^2 + bx + 1 = 0$ has at least one real root.
If it is known that the equation $$x^4 + ax^3 + 2x^2 + bx + 1 = 0$$ has a (real) root, prove the inequality $$a^2 + b^2 \geq 8.$$
I am stuck on this problem, though it is a very easy problem for my math teacher. Anyway, I can't figure it out.
I assume that $a,b$ are real.
Suppose that $a^2+b^2<8$ and the polynomial has the real root $\xi$. Note that $\xi\neq0$, since $x=0$ gives the value $1$. It follows that: $$0=\xi^4 + a\xi^3 + 2\xi^2 + b\xi + 1>\xi^4 + a\xi^3 + \frac{a^2+b^2}{4}\xi^2 + b\xi + 1=\xi^2\left(\xi+\frac{a}{2}\right)^2+\left(\frac{b}{2}\xi+1\right)^2$$ (the first inequality is strict because $\frac{a^2+b^2}{4}<2$ and $\xi^2>0$). But the sum of squares of real numbers is non-negative, so we would have $0>0$; contradiction. Thus we have $a^2+b^2\ge8$.
• The sum of squares of real numbers is always non-negative. It remains to show that both squares can't be $0$ at the same time. – principal-ideal-domain May 26 '16 at 9:24
• That would still contradict the strict inequality. – Joffan May 26 '16 at 9:39
The proof from user109899 is wonderful. Here is another one and the idea is the same. Note \begin{eqnarray} &&x^4 + ax^3 + 2x^2 + bx + 1\\ &=&x^2(x^2+ax)+2x^2+bx+1\\ &=&x^2(x+\frac a2)^2-\frac{a^2}{4}x^2+2x^2+bx+1\\ &=&x^2(x+\frac a2)^2+\frac{8-a^2}{4}x^2+bx+1\\ &=&x^2(x+\frac a2)^2+\frac{8-a^2}{4}\left(x^2+\frac{4b}{8-a^2}x\right)+1\\ &=&x^2(x+\frac a2)^2+\frac{8-a^2}{4}\left(x+\frac{2b}{8-a^2}\right)^2+1-\frac{b^2}{8-a^2}\\ &=&x^2(x+\frac a2)^2+\frac{8-a^2}{4}\left(x+\frac{2b}{8-a^2}\right)^2+\frac{8-a^2-b^2}{8-a^2} \end{eqnarray} Hence if $a^2+b^2<8$, then $8-a^2>0, 8-a^2-b^2>0$ and thus $$x^4 + ax^3 + 2x^2 + bx + 1>0.$$
Multiply through by $4$ to obtain $$4x^4+4ax^3+8x^2+4bx+4=0$$Now note that this can be rewritten $$x^2(2x+a)^2+(bx+2)^2+(8-a^2-b^2)x^2=0$$
If we have $a^2+b^2\lt 8$ then every term is non-negative, and they can't all be zero together (the last term would require $x=0$, but then $(bx+2)^2\gt0$).
Let $$x$$ be a root.
Thus, by C-S $$(x^2+1)^2=-x(ax^2+b)\leq\sqrt{x^2(ax^2+b)^2}\leq\sqrt{x^2(x^4+1)(a^2+b^2)}.$$ Id est, it's enough to prove that: $$\frac{(x^2+1)^4}{x^2(x^4+1)}\geq8$$ or $$(x^2-1)^4\geq0.$$
|
# Why does Merton's fraction give unintuitive quantities using real world data?
The solution to Merton's portfolio problem suggests that an investor invest $$\frac{\mu - r}{\sigma^2 \gamma}$$ percent of their wealth in the stock market, where $$\mu$$ is the rate of return of the stock market, $$r$$ is the risk-free interest rate, $$\sigma$$ the annual volatility of the stock market, and $$\gamma$$ a measure of the risk-aversion of the investor. Taking $$\gamma = 1$$ corresponds to logarithmic utility (quite risk averse).
As of writing, the current LIBOR rate is 1.75%. The historical SPX rate of return is on the order of 10%, while an overestimate of its volatility is 0.20 (of course excluding the current bear market). With these estimates, we come to
$$\text{Estimate of Merton's Fraction} = \frac{(.1 - .0175)}{(0.2)^2} = 2.0625$$
Even using the current value of the VIX, $$\sigma = 0.35$$, Merton's solution suggests investing two thirds of your wealth in the stock market in these turbulent times.
What causes these aggressive suggestions?
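A quick check of the arithmetic (a sketch; the inputs are the estimates quoted above):

```python
def merton_fraction(mu, r, sigma, gamma=1.0):
    """Optimal risky-asset weight (mu - r) / (sigma^2 * gamma)."""
    return (mu - r) / (sigma**2 * gamma)

print(merton_fraction(0.10, 0.0175, 0.20))  # ~2.06: a leveraged position
print(merton_fraction(0.10, 0.0175, 0.35))  # ~0.67: about two thirds in stocks
```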
• Logarithmic utility is quite risk tolerant. For risk averse I would suggest $\gamma=3$ or higher. Apr 25, 2020 at 14:17
|
## Course Content and Outcome Guide for MTH 60
Course Number:
MTH 60
Course Title:
Introductory Algebra - First Term
Credit Hours:
4
Lecture Hours:
30
Lecture/Lab Hours:
20
Lab Hours:
0
Special Fee:
\$6.00
#### Course Description
Introduction to algebraic concepts and processes with a focus on linear equations and inequalities in one and two variables. Applications, graphs, functions, formulas, and proper mathematical notation are emphasized throughout the course. A scientific calculator is required. The TI-30X II is recommended. Audit available.
• Students will be evaluated not only on their ability to get correct answers and perform correct steps, but also on the accuracy of the presentation itself.
• Application problems must be answered in complete sentences.
#### Intended Outcomes for the course
• Use a variable to represent an unknown in a simple linear problem at home or in an academic or work environment, create a linear equation that represents the situation, and find the solution to the problem using algebra.
• Recognize a linear pattern in ordered paired data collected or observed at home or in an academic or work environment, calculate and interpret the rate of change (slope) in the data, create a linear model using two data points, and use the observed pattern to make predictions.
• Be successful in future coursework that requires an understanding of the basic algebraic concepts covered in the course.
#### Outcome Assessment Strategies
1. The following must be assessed in a proctored, closed-book, no-note, and no-calculator setting: arithmetic with signed rational numbers, simplifying expressions (including exponential expressions), graphing lines, and solving linear equations and inequalities in one variable.
2. At least two proctored, closed-book, no-note examinations (one of which is the comprehensive final) must be given. These exams must consist primarily of free response questions, although a limited number of multiple choice and/or fill-in-the-blank questions may be used where appropriate.
3. Assessment must include evaluation of the student's ability to arrive at correct and appropriate conclusions using proper mathematical procedures and proper mathematical notation. Additionally, each student must be assessed on their ability to use appropriate organizational strategies and their ability to write conclusions appropriate to the problem.
4. At least two of the following additional measures must also be used:
    1. Take-home examinations
    2. Quizzes
    3. Projects
    4. In-class activities
    5. Portfolios
#### Course Content (Themes, Concepts, Issues and Skills)
THEMES:
1. Algebra skills
2. Graphical understanding
3. Problem solving
4. Effective communication
5. Critical thinking
6. Applications, formulas, and modeling
7. Functions
SKILLS:
1. REAL NUMBERS
    1. Review prerequisite skills: signed number and fraction arithmetic
    2. Simplify arithmetic expressions using the order of operations
    3. Evaluate powers with whole number exponents; emphasize order of operations with negative bases
    4. Simplify arithmetic expressions involving absolute values
    5. Order real numbers along a real number line
    6. Identify numbers as elements of the subsets of the real numbers
2. VARIABLES AND EXPRESSIONS
    1. Simplify algebraic expressions
    2. Evaluate algebraic expressions
    3. Recognize equivalent expressions and non-equivalent expressions
    4. Distinguish between evaluating expressions, simplifying expressions, and solving equations
    5. Translate from words into algebraic expressions and vice versa
    6. Apply the distributive, commutative, and associative properties
    7. Recognize additive and multiplicative identities and inverses
    8. Distinguish between factors and terms
    9. Apply the product rule, product-to-a-power rule, and power-to-a-power rule to expressions with positive integer exponents, emphasizing the logic behind these rules of exponents
3. GEOMETRY APPLICATIONS
    1. Evaluate formulas and apply basic dimensional analysis
    2. Know and apply appropriate units for various situations, e.g. perimeter units, area units, volume units, rate units, etc.
    3. Memorize and apply the perimeter and area formulas for rectangles, circles, and triangles
    4. Memorize and apply the volume formula for a rectangular solid and a right circular cylinder
    5. Find the perimeter of any polygon
    6. Evaluate other geometric formulas
    7. Use estimation to determine reasonableness of solution
4. LINEAR EQUATIONS AND INEQUALITIES IN ONE VARIABLE
    1. Identify linear equations and inequalities in one variable
    2. Understand the definition of a solution; e.g. 2 is a solution to $x<5$; 3 is the solution to $x+1=4$
    3. Distinguish between solutions and solution sets
    4. Recognize equivalent equations and non-equivalent equations
    5. Solve linear equations and non-compound linear inequalities symbolically
    6. Express inequality solution sets graphically, with interval notation, and with set-builder notation
    7. Distinguish between solutions to equations and equivalent equations (e.g. "The solution is 2." vs. $x=2$)
5. GENERAL APPLICATIONS
    1. Create and solve linear equations and inequalities in one variable that model real life situations (e.g. fixed cost + variable cost equals total cost)
        1. Properly define variables; include units in variable definitions
        2. Apply dimensional analysis while solving problems
        3. State contextual conclusions using complete sentences
        4. Use estimation to determine reasonableness of solution
    2. Apply general percent equations ($A=PB$)
    3. Create and solve percent increase/decrease equations
    4. Create and solve ratio/proportion equations
    5. Solve applications in which two values are unknown but their total is known; for example, a 50 foot board cut into two pieces of unknown length
6. LITERAL EQUATIONS AND FORMULAS
    1. Solve an equation for a specified variable in terms of other variables
    2. Input values into a formula and solve for the remaining variable
13. INTRODUCTION TO TABLES AND GRAPHS,
,
1. Briefly review line graphs, bar graphs and pie charts
2. ,
3. Plot points on the Cartesian coordinate system; determine coordinates of points
4. ,
5. Classify points by quadrant or as points on an axis; identify the origin
6. ,
7. Label and scale axes on all graphs
8. ,
9. Interpret graphs in the context of an application
10. ,
11. Create a table of values from an equation
12. ,
13. Plot points from a table
14. ,
,
14. ,
15. INTRODUCTION TO FUNCTION NOTATION,
,
1. Determine whether a given relation presented in graphical form represents a function
2. ,
3. Evaluate functions using function notation from a set, graph or formula
4. ,
5. Interpret function notation in a practical setting
6. ,
7. Identify ordered pairs from function notation
8. ,
,
16. ,
17. LINEAR EQUATIONS IN TWO VARIABLES,
,
1. Identify a linear equation in two variables
2. ,
3. Emphasize that the graph of a line is a visual representation of the solution set to a linear equation
4. ,
5. Find ordered pairs that satisfy a linear equation written in standard or slope-intercept form including equations for horizontal and vertical lines; graph the line using the ordered pairs
6. ,
7. Find the intercepts given a linear equation; express the intercepts as ordered pairs
8. ,
9. Graph the line using intercepts and check with a third point
10. ,
11. Find the slope of a line from a graph and from two points
12. ,
13. Given the graph of a line identify the slope as positive, negative, zero, or undefined. Given two non-vertical lines, identify the line with greater slope
14. ,
15. Graph a line with a known point and slope
16. ,
17. Manipulate a linear equation into slope-intercept form; identify the slope and the vertical-intercept given a linear equation and graph the line using the slope and vertical-intercept and check with a third point
18. ,
19. Recognize equations of horizontal and vertical lines and identify their slopes as zero or undefined
20. ,
21. Given the equation of two lines, classify them as parallel, perpendicular, or neither
22. ,
23. Find the equation of a line using slope-intercept form
24. ,
25. Find the equation of a line using point-slope form
26. ,
,
18. ,
19. Applications of linear equations in two variables,
,
1. Interpret intercepts and other points in the context of an application
2. ,
3. Write and interpret a slope as a rate of change
4. ,
5. Create and graph a linear model based on data and make predictions based upon the model
6. ,
7. Create tables and graphs that fully communicate the context of an application problem
8. ,
,
20. ,
21. LINEAR INEQUALITIES IN TWO VARIABLES,
,
1. Identify a linear inequality in two variables
2. ,
3. Graph the solution set to a linear inequality in two variables
4. ,
5. Model application problems using an inequality in two variables
6. ,
,
22. ,
,
,
MTH 60 is the first term of a two-term sequence in beginning algebra. One major problem experienced by beginning algebra students is difficulty performing operations with fractions and negative numbers. It would be beneficial to incorporate these topics throughout the course, whenever possible, so that students have ample exposure. Encourage students throughout the course to improve their fluency with fractions and negative numbers, as it will make a difference in this and future math courses.
,
Vocabulary is an important part of algebra. Instructors should make a point of using proper vocabulary throughout the course. Some of this vocabulary should include, but not be limited to, inverses, identities, the commutative property, the associative property, the distributive property, equations, expressions and equivalent equations.
,
The difference between expressions, equations, and inequalities needs to be emphasized throughout the course. A focus must be placed on helping students understand that evaluating an expression, simplifying an expression, and solving an equation or inequality are distinct mathematical processes and that each has its own set of rules, procedures, and outcomes.
,
Proper usage of equal signs must be stressed at all times. Students need to be taught that equal signs communicate several distinct ideas, and they need to be taught the manner in which equal signs are used to communicate each of those ideas.
,
Equivalence of expressions is always communicated using equal signs. Students need to be taught that when they simplify or evaluate an expression they are not solving an equation despite the presence of equal signs. Instructors should also stress that it is not acceptable to write equal signs between nonequivalent expressions.
,
Instructors should demonstrate that both sides of an equation need to be written on each line when solving an equation. An emphasis should be placed on the fact that two equations are not equal to one another but they can be equivalent to one another.
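For instance, a hypothetical worked solution (an illustration added here, not taken from the course outline) written to this standard would be:

$2x+3=11$
$2x+3-3=11-3$
$2x=8$
$x=4$

Each line shows a complete equation that is equivalent to, not equal to, the one above it.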
,
The distinction between an equal sign and an approximately equal sign should be noted and students should be taught when it is appropriate to use one sign or the other.
,
The manner in which one presents the steps to a problem is very important. We want all of our students to recognize this fact; thus the instructor needs to emphasize the importance of writing mathematics properly and students need to be held accountable to the standard. When presenting their work, all students in a Math 60 course should consistently show appropriate steps using correct mathematical notation and appropriate forms of organization. All axes on graphs should include scales and labels. A portion of the grade for any free response problem should be based on mathematical syntax.
|
6 Replies Latest reply: Feb 6, 2012 1:08 PM by Jon Scholten
# Furious after McAfee Web Gateway 7 has suddenly appeared on my computer and is blocking sites.
In the last hour somehow McAfee Web Gateway 7 has been installed onto my computer and is now actively preventing me from viewing certain webpages.
The first thing I will say is that in the last hour I have not installed anything. I have Nod32 Firewall and AV. I have done a scan.
Secondly I have DEP enabled and UAC is enabled.
I also have Malwarebytes paid, actively protecting my PC and I have immunized my computer with Spybot S&D.
I have done scans with ALL of these programs and can find NOTHING.
This problem only appears on Firefox. It does not appear on Google Chrome or Internet Explorer.
It has only been happening for an hour. I have so many security modules in place that I have no idea how this sort of thing can happen and it honestly really frustrates me that this can happen.
I have tried uninstalling (to my absolute displeasure) Firefox and reinstalling it. Nothing works...
Whenever I try to go to some websites I get this page:
[IMG]http://i39.tinypic.com/i1z3fl.png[/IMG]
Alternatively when I download something it goes to the same page and apparently searches the file for viruses before letting me download it.
I have NEVER installed this program or anything from McAfee. I have no idea in hell how this has gotten onto my computer. This is one of the most frustrating things ever because I cannot simply find the extension in Firefox settings where I can remove it.
I have disabled all of my addons, and then checked and disabled all of my extensions and plugins as well. Nothing stops this from showing up.
I have searched my whole computer for 'McAfee' including my registry and I do not have a single entry at all with anything to do with McAfee.
Before 1 hour ago I could download files normally without this annoying popup and I could browse any website I wanted without this stupid thing telling me that I cannot...
I have tried disabling all of my protection and re-enabling. I have tried restarting and shutting down.
Here is a Hijack This log file:
Logfile of Trend Micro HijackThis v2.0.4
Scan saved at 20:11:28, on 6.2.2012
Platform: Windows 7 SP1 (WinNT 6.00.3505)
MSIE: Internet Explorer v9.00 (9.00.8112.16421)
Boot mode: Normal
Running processes:
C:\Program Files (x86)\ASUS\AI Suite II\AsRoutineController.exe
C:\Program Files (x86)\ASUS\AI Manager\AsShellApplication.exe
C:\Program Files (x86)\Malwarebytes' Anti-Malware\mbamgui.exe
C:\Users\Kuutti\AppData\Roaming\Dropbox\bin\Dropbox.exe
C:\Program Files (x86)\MagicDisc\MagicDisc.exe
C:\Program Files (x86)\ASUS\AI Suite II\EPU\EPUHelp.exe
C:\Program Files (x86)\Mozilla Firefox\firefox.exe
C:\Program Files (x86)\Mozilla Firefox\plugin-container.exe
C:\Program Files (x86)\Trend Micro\HiJackThis\HiJackThis.exe
R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://asus.msn.com/
R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = http://asus.msn.com/
R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://asus.msn.com
R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant =
R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch =
R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Local Page = C:\Windows\SysWOW64\blank.htm
R1 - HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings,ProxyOverride = *.local
F2 - REG:system.ini: UserInit=userinit.exe
O2 - BHO: Java(tm) Plug-In SSV Helper - {761497BB-D6F0-462C-B6EB-D4DAF1D92D43} - C:\Program Files (x86)\Java\jre6\bin\ssv.dll
O2 - BHO: Windows Live ID Sign-in Helper - {9030D464-4C02-4ABF-8ECC-5164760863C6} - C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live\WindowsLiveLogin.dll
O2 - BHO: URLRedirectionBHO - {B4F3A835-0E21-4959-BA22-42B3008E02FF} - C:\PROGRA~2\MICROS~1\Office14\URLREDIR.DLL
O2 - BHO: Java(tm) Plug-In 2 SSV Helper - {DBC80044-A445-435b-BC74-9C25C1C588A9} - C:\Program Files (x86)\Java\jre6\bin\jp2ssv.dll
O4 - HKLM\..\Run: [RunAIShell] C:\Program Files (x86)\ASUS\AI Manager\AsShellApplication.exe
O4 - HKLM\..\Run: [Malwarebytes' Anti-Malware] "C:\Program Files (x86)\Malwarebytes' Anti-Malware\mbamgui.exe" /starttray
O4 - HKCU\..\Run: [PeerBlock] C:\Program Files\PeerBlock\peerblock.exe
O4 - HKUS\S-1-5-19\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'LOCAL SERVICE')
O4 - HKUS\S-1-5-20\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'NETWORK SERVICE')
O4 - HKUS\S-1-5-21-2863558729-1456226192-851000428-1003\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'UpdatusUser')
O4 - Startup: Dropbox.lnk = Kuutti\AppData\Roaming\Dropbox\bin\Dropbox.exe
O4 - Global Startup: AsusVibeLauncher.lnk = C:\Program Files (x86)\ASUS\AsusVibe\AsusVibeLauncher.exe
O4 - Global Startup: Rainmeter.lnk = C:\Program Files\Rainmeter\Rainmeter.exe
O9 - Extra button: @C:\Program Files (x86)\Windows Live\Writer\WindowsLiveWriterShortcuts.dll,-1004 - {219C3416-8CB2-491a-A3C7-D9FCDDC9D600} - C:\Program Files (x86)\Windows Live\Writer\WriterBrowserExtension.dll
O9 - Extra 'Tools' menuitem: @C:\Program Files (x86)\Windows Live\Writer\WindowsLiveWriterShortcuts.dll,-1003 - {219C3416-8CB2-491a-A3C7-D9FCDDC9D600} - C:\Program Files (x86)\Windows Live\Writer\WriterBrowserExtension.dll
O10 - Unknown file in Winsock LSP: c:\program files (x86)\common files\microsoft shared\windows live\wlidnsp.dll
O10 - Unknown file in Winsock LSP: c:\program files (x86)\common files\microsoft shared\windows live\wlidnsp.dll
O11 - Options group: [ACCELERATED_GRAPHICS] Accelerated graphics
O18 - Protocol: wlpg - {E43EF6CD-A37A-4A9B-9E6F-83F89B8E6324} - C:\Program Files (x86)\Windows Live\Photo Gallery\AlbumDownloadProtocolHandler.dll
O18 - Filter hijack: text/xml - {807573E5-5146-11D5-A672-00B0D022E945} - C:\Program Files (x86)\Common Files\Microsoft Shared\OFFICE14\MSOXMLMF.DLL
O23 - Service: @%SystemRoot%\system32\Alg.exe,-112 (ALG) - Unknown owner - C:\Windows\System32\alg.exe (file missing)
O23 - Service: Apple Mobile Device - Apple Inc. - C:\Program Files (x86)\Common Files\Apple\Mobile Device Support\AppleMobileDeviceService.exe
O23 - Service: ASUS Com Service (asComSvc) - Unknown owner - C:\Program Files (x86)\ASUS\AXSP\1.00.13\atkexComSvc.exe
O23 - Service: ASUS HM Com Service (asHmComSvc) - Unknown owner - C:\Program Files (x86)\ASUS\AAHM\1.00.11\aaHMSvc.exe
O23 - Service: ASUS System Control Service (AsSysCtrlService) - Unknown owner - C:\Program Files (x86)\ASUS\AsSysCtrlService\1.00.10\AsSysCtrlService.exe
O23 - Service: Bonjour-palvelu (Bonjour Service) - Apple Inc. - C:\Program Files\Bonjour\mDNSResponder.exe
O23 - Service: Device Handle Service - ASUSTeK Computer Inc. - C:\Windows\SysWOW64\AsHookDevice.exe
O23 - Service: @%SystemRoot%\system32\efssvc.dll,-100 (EFS) - Unknown owner - C:\Windows\System32\lsass.exe (file missing)
O23 - Service: ESET Service (ekrn) - ESET - C:\Program Files\ESET\ESET Smart Security\x86\ekrn.exe
O23 - Service: @%systemroot%\system32\fxsresm.dll,-118 (Fax) - Unknown owner - C:\Windows\system32\fxssvc.exe (file missing)
O23 - Service: InstallDriver Table Manager (IDriverT) - Macrovision Corporation - C:\Program Files (x86)\Common Files\InstallShield\Driver\1050\Intel 32\IDriverT.exe
O23 - Service: iPod-palvelu (iPod Service) - Apple Inc. - C:\Program Files\iPod\bin\iPodService.exe
O23 - Service: @keyiso.dll,-100 (KeyIso) - Unknown owner - C:\Windows\system32\lsass.exe (file missing)
O23 - Service: Intel(R) Management and Security Application Local Management Service (LMS) - Intel Corporation - C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\LMS\LMS.exe
O23 - Service: MBAMService - Malwarebytes Corporation - C:\Program Files (x86)\Malwarebytes' Anti-Malware\mbamservice.exe
O23 - Service: @comres.dll,-2797 (MSDTC) - Unknown owner - C:\Windows\System32\msdtc.exe (file missing)
O23 - Service: @%SystemRoot%\System32\netlogon.dll,-102 (Netlogon) - Unknown owner - C:\Windows\system32\lsass.exe (file missing)
O23 - Service: NVIDIA Display Driver Service (NVSvc) - Unknown owner - C:\Windows\system32\nvvsvc.exe (file missing)
O23 - Service: NVIDIA Update Service Daemon (nvUpdatusService) - NVIDIA Corporation - C:\Program Files (x86)\NVIDIA Corporation\NVIDIA Updatus\daemonu.exe
O23 - Service: @%systemroot%\system32\psbase.dll,-300 (ProtectedStorage) - Unknown owner - C:\Windows\system32\lsass.exe (file missing)
O23 - Service: @%systemroot%\system32\Locator.exe,-2 (RpcLocator) - Unknown owner - C:\Windows\system32\locator.exe (file missing)
O23 - Service: @%SystemRoot%\system32\samsrv.dll,-1 (SamSs) - Unknown owner - C:\Windows\system32\lsass.exe (file missing)
O23 - Service: @%SystemRoot%\system32\snmptrap.exe,-3 (SNMPTRAP) - Unknown owner - C:\Windows\System32\snmptrap.exe (file missing)
O23 - Service: Splashtop® Remote Service (SplashtopRemoteService) - Splashtop Inc. - C:\Program Files (x86)\Splashtop\Splashtop Remote\Server\SRService.exe
O23 - Service: @%systemroot%\system32\spoolsv.exe,-1 (Spooler) - Unknown owner - C:\Windows\System32\spoolsv.exe (file missing)
O23 - Service: @%SystemRoot%\system32\sppsvc.exe,-101 (sppsvc) - Unknown owner - C:\Windows\system32\sppsvc.exe (file missing)
O23 - Service: Splashtop Software Updater Service (SSUService) - Splashtop Inc. - C:\Program Files (x86)\Splashtop\Splashtop Software Updater\SSUService.exe
O23 - Service: @%SystemRoot%\system32\ui0detect.exe,-101 (UI0Detect) - Unknown owner - C:\Windows\system32\UI0Detect.exe (file missing)
O23 - Service: Intel(R) Management and Security Application User Notification Service (UNS) - Intel Corporation - C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\UNS\UNS.exe
O23 - Service: @%SystemRoot%\system32\vaultsvc.dll,-1003 (VaultSvc) - Unknown owner - C:\Windows\system32\lsass.exe (file missing)
O23 - Service: @%SystemRoot%\system32\vds.exe,-100 (vds) - Unknown owner - C:\Windows\System32\vds.exe (file missing)
O23 - Service: @%systemroot%\system32\vssvc.exe,-102 (VSS) - Unknown owner - C:\Windows\system32\vssvc.exe (file missing)
O23 - Service: @%SystemRoot%\system32\Wat\WatUX.exe,-601 (WatAdminSvc) - Unknown owner - C:\Windows\system32\Wat\WatAdminSvc.exe (file missing)
O23 - Service: @%systemroot%\system32\wbengine.exe,-104 (wbengine) - Unknown owner - C:\Windows\system32\wbengine.exe (file missing)
O23 - Service: @%Systemroot%\system32\wbem\wmiapsrv.exe,-110 (wmiApSrv) - Unknown owner - C:\Windows\system32\wbem\WmiApSrv.exe (file missing)
O23 - Service: @%PROGRAMFILES%\Windows Media Player\wmpnetwk.exe,-101 (WMPNetworkSvc) - Unknown owner - C:\Program Files (x86)\Windows Media Player\wmpnetwk.exe (file missing)
--
End of file - 9865 bytes
I cannot see anything related at all to McAfee or even Firefox in that log... and I can't see anything wrong with anything there...
So why is this happening to me? Why can I find no trace of this program or little piece of software in my 'Add/Remove Programs'? Why can I find no trace of this at all anywhere?
I would consider myself a highly skilled computer user. I do freelance coding in Ruby and this is my work computer; it is extremely important that I do not have any sort of unauthorised software on my computer that I did not personally allow to be installed. No one else has access to this computer, it's just my computer. Yet I am mindful of everything that I click on, I scan everything before opening, and I never make mistakes, so I have no idea how on earth this could have gotten onto my computer and why, even when I do an uninstall of Firefox, it is still persistent.
I will also send an email to the customer support here, because this is unacceptable regardless. The fact that there is absolutely no easy, end-user-friendly way of uninstalling this little piece of software, or addon or whatever it is, is absolutely absurd.
Lastly I have also run this file: MCPR.exe
I have run the program and let it uninstall any sort of trace of anything McAfee related, not that I have installed anything in the first place.
There is also nothing new in 'Startup items' in msconfig, as well as 'Services'.
There is also no suspicious processes running in taskmanager and other than this problem nothing at all is out of the ordinary.
I honestly have no idea in all of my years how this has happened, why it has happened, and how I am unable to remove it. I have never had a problem that I have not been able to find an answer to on Google. But searching Google just gives me garbage links that lead me nowhere at all.
Please someone provide assistance on removing this horrible thing from my computer.
Also I am sorry for the long post but I did not want anyone to offer simple advice when I have more or less covered all the bases to no avail. You have no idea how frustrating this is....
Message was edited by: leijonasisu on 06/02/12 12:22:21 CST
• ###### 1. Re: Furious after McAfee Web Gateway 7 has suddenly appeared on my computer and is blocking sites.
McAfee Web Gateway is not something that you install on your computer. No amount of uninstalling will make it go away.
It is a gateway device that your organization has decided to put on the network to prevent access to sites and scan for malware coming in, according to your organization's policy.
You should contact your helpdesk for assistance.
• ###### 2. Re: Furious after McAfee Web Gateway 7 has suddenly appeared on my computer and is blocking sites.
That is impossible. If this was the case then I would not be able to access that website from a different browser, but I still can... and my friends are not reporting the same issue on their machines using Firefox either.
• ###### 3. Re: Furious after McAfee Web Gateway 7 has suddenly appeared on my computer and is blocking sites.
It's totally possible. Administrators can selectively authenticate and identify specific users to have specific policies. One group of users may be able to access something while others are not.
But in your case, the fact that IE behaves differently than FF tells me that you have different LAN settings defined in each. A default install of FF uses IE settings, but they can be unlinked and individually set as well.
• ###### 4. Re: Furious after McAfee Web Gateway 7 has suddenly appeared on my computer and is blocking sites.
Do you have an anonymizing plugin in Firefox? If so, the proxy it's using is probably a Web Gateway that was left exposed, then discovered and added to some public proxy list now.
• ###### 5. Re: Furious after McAfee Web Gateway 7 has suddenly appeared on my computer and is blocking sites.
I worked out what it was. I had accidentally swapped to using my work proxy. Naturally they have many things blocked, although funnily enough, in 6 years I have never once come across a website that was blocked at work, which is why I had never familiarised myself with this problem. I simply cleared out the custom settings in Firefox (which is the browser I use for work) and voilà! Everything works again.
I wrote up a longer reply but when I pressed post, it cleared everything instead of posting it.
Sorry for the frustration and thank you to everyone for your time and help in this matter. I had really feared the worst. At least I can put my faith back into my methods, at least in just that respect.
I am quite embarrassed that I spent such a large amount of time trying to find out what the problem was. Not only that, but it explains why I could not find any trace of anything... I am going to do a little bit of digging into this tool that swapped my settings over and stop it from running every time I start Python...
Thanks again everyone and have a really good weekend.
I am glad that I had not yet bothered customer support with this message.
• ###### 6. Re: Furious after McAfee Web Gateway 7 has suddenly appeared on my computer and is blocking sites.
Is this a computer of yours or something your company issued?
Edit: nevermind, people have already replied...
~Jon
Message was edited by: jscholte on 2/6/12 1:08:22 PM CST
|
# Question 18423
Jul 20, 2015
Orbital hybridization actually tells you s-character.
#### Explanation:
Before deciding how much s-character a hybrid orbital has, you must first determine the type of hybrid orbital you're dealing with.
To do that, draw the Lewis structures of the molecules.
Let's start with xenon tetrafluoride, $XeF_4$. The molecule has a total of 36 valence electrons - 8 from xenon and 7 from each of the four fluorine atoms.
The xenon atom will be the central atom of the molecule and it will form a single bond with each of the four fluorine atoms, which will have 3 lone pairs of electrons attached.
The remaining 4 valence electrons will be placed as lone pairs on the xenon atom.
Now take a look at how many regions of electron density surround the central atom. This number, which is called steric number, will tell you what the hybridization of the central atom is.
The xenon atom is bonded to 4 fluorine atoms and has 2 lone pairs present, which means that it is surrounded by a total of 6 regions of electron density.
This means that it must use 6 hybrid orbitals
• one s-orbital
• three p-orbitals
• two d-orbitals
The central atom is $sp^3d^2$ hybridized. To get the s-character of these orbitals, simply divide the number of s-orbitals that were used to form the hybrids by the total number of orbitals used to form the hybrids.
In this case, you have one s-orbital and six total orbitals, which means that you get
"1 s-orbital"/"6 orbitals in total" * 100 = color(green)("16.7% s-character")
Now take a look at the carbonate ion, $CO_3^{2-}$.
You can draw three Lewis structures for the carbonate ion - these are called resonance structures.
Once again, the important thing to look for is the number of regions of electron density that surround the central atom.
Since the carbon atom is bonded to 3 oxygen atoms and has no lone pairs present, its steric number will be equal to 3.
This means that it uses 3 hybrid orbitals
• one s-orbital
• two p-orbitals
The central atom is $sp^2$ hybridized. The s-character will now be
"1 s-orbital"/"3 orbitals in total" * 100 = color(green)("33.3% s-character")#
|
# Gallery: Sorting
The default behaviour of bib2gls is to sort according to the value of the sort field. However, typically, this field should not be explicitly set. Instead, bib2gls has a set of fallback fields that vary according to the entry type.
All the examples here are based on the following document:
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[record,stylemods=longextra,style=long-name-sym-desc]{glossaries-extra}
\GlsXtrLoadResources[
src={mixed-entries},
selection=all,
sort={en-GB}
]
\begin{document}
\printunsrtglossaries
\end{document}
This document requires entries that are defined in mixed-entries.bib. This file has a mixture of entry types:
Abbreviations
These are defined with the @abbreviation entry type and have a long and short field but no description field. The default long-short abbreviation style will set the description field to the long form, but bib2gls doesn’t know this. Ordinarily I would make the label match the short form as closely as possible (since it makes it easier to remember when referencing it), but here I’ve used the prefix “markup-” for demonstration purposes:
@abbreviation{markup-XML,
short = {XML},
long = {extensible markup language}
}
@abbreviation{markup-HTML,
short = {HTML},
long = {hypertext markup language}
}
Note that if you use @acronym instead of @abbreviation you will need to set the abbreviation style for the acronym category:
\setabbreviationstyle[acronym]{long-short}
(This must be done before \GlsXtrLoadResources.)
Mathematical Constants
These are defined using the @number entry type with the name set to the symbol. The labels are the English translation of the symbols.
@number{gamma,
name = {\ensuremath{\gamma}},
description = {Euler's constant}
}
@number{pi,
name={\ensuremath{\pi}},
description={Archimedes' constant}
}
@number{zeta3,
name={\ensuremath{\zeta(3)}},
description={Apéry's constant}
}
Note that one of the descriptions includes a non-ASCII character (Apéry's constant). The encoding is set at the start of the .bib file:
% Encoding: UTF-8
The document also needs to have the encoding set to UTF-8. This means either using a native Unicode engine (XeLaTeX or LuaLaTeX) or use the inputenc package with the utf8 option. (Modern versions of PDFLaTeX will load this automatically. The example document loads it explicitly to highlight that UTF-8 support is required.) If you are restricted to using ASCII, bib2gls recognises the standard accent commands, so you can instead use:
description={Ap\'ery's constant}
Terms with a Description
These terms are defined with @index, which doesn't require the description field, but in this case that field has been supplied. Unlike @entry (where the name is either required or inherited from a parent entry), @index will assume that the name field is the same as the label if it hasn't been supplied.
@index{zebra,
description={African wild horse with black and white stripes}
}
@index{aardvark,
description={African mammal with tubular snout and long tongue}
}
@index{matriculate,
description={enrol or be enrolled at a college or university}
}
@index{zither,
description={type of stringed musical instrument}
}
Topic Terms
These are terms that don’t have a description. Typically these would have child entries but they don’t in this example document. As with @index, the name and description fields are optional but, in this case, if the name field isn’t set, its value is obtained from the plural field. If the plural field isn’t set its value is obtained by appending “s” to the value of the text field. If the text field isn’t set, its assumed to be the same as the label.
@indexplural{example}
@indexplural{matrix,plural={matrices}}
So in the first case (where the label is example) the text field is set to "example" (=label), the plural field is set to "examples" (=text+"s"), and the name is set to "examples" (=plural).
Metals
These are entries that have both a description and a symbol. As with the abbreviations I would normally try to match the label with the name, but for illustrative purposes I’ve used “metal-” as a prefix. These terms are defined with @entry that requires both a name (either explicitly set or inherited from a parent) and description. The symbol field is optional.
@entry{metal-lead,
name = {lead},
description={heavy metal that's soft and malleable with a low melting point},
symbol={Pb}
}
@entry{metal-tin,
name = {tin},
description={soft, silvery metal with a faint yellow hue},
symbol={Sn}
}
@entry{metal-zirconium,
name={zirconium},
description={grey-white, strong transition metal},
symbol={Zr}
}
Card Suits
A set of entries defined with @symbol, these are card suit symbols. I've decided to prefix the labels for these with "card-" for illustrative purposes, but this may be useful in case the labels happen to cause a conflict with future additions. (For example, I may decide to add the diamond element to the list of entries.)
@symbol{card-spade,
name={\ensuremath{\spadesuit}},
description={spade (card suit)}
}
@symbol{card-heart,
name={\ensuremath{\heartsuit}},
description={heart (card suit)}
}
@symbol{card-club,
name={\ensuremath{\clubsuit}},
description={club (card suit)}
}
@symbol{card-diamond,
name={\ensuremath{\diamondsuit}},
description={diamond (card suit)}
}
The document uses the long-name-sym-desc style, which creates a three-column glossary with the name in the first column, the symbol in the second and the description in the third. Only a few terms (the metals) actually have all three columns filled. I’ve used this style for clarity for this particular example.
All entries are selected (with the selection=all option) and the resulting document just contains the glossary (shown in the image above).
This base example lists the entries in the following order:
1. aardvark
2. examples
3. γ
4. HTML
5. lead
6. matrices
7. matriculate
8. π
9. tin
10. XML
11. zebra
12. ζ(3)
13. zirconium
14. zither
The default action is to sort according to the sort field. If the designated field isn’t set (which it isn’t in this case), bib2gls will use the fallback for the given entry type. The default fallback behaviour in the event of a missing sort field is as follows:
• @index and @indexplural will fall back on the name field. If that isn’t supplied, then the fallback for the name field is used. In the case of @index this is the label. In the case of @indexplural this is the plural. There’s no option to change the fallback field specifically for the @index or @indexplural entry types. (You can, however, use the more general missing-sort-fallback)
• @symbol and @number will fall back on the label (not the name). You can change the fallback for @symbol and @number with the symbol-sort-fallback option.
• @entry will fall back on the name. If that isn’t supplied, then the fallback for the name is used (obtained from the parent). You can change the fallback with the entry-sort-fallback option.
• @abbreviation and @acronym will fall back on the short field. Remember that it’s the abbreviation style that assigns the name field, which bib2gls doesn’t know about. If you are using a style that starts the name with the long form then it would be more appropriate to use the long field as the fallback. You can change the fallback for @abbreviation and @acronym with the abbreviation-sort-fallback option.
If you use sort-field to use a different field for sorting then if that field is missing the fallback will be the designated fallback for that field (not the fallback for the sort field). For example, if you use sort-field=name then the abbreviations, which don’t have the name field set, will use the value of the name fallback, which also happens to be short but may be changed with the abbreviation-name-fallback option. So if you do, for example:
sort-field=name,
abbreviation-sort-fallback=long
then the abbreviations will be sorted according to the short field (which is the fallback for name) not according to long (which is now the fallback for the sort field but that field is no longer being referenced).
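So if the intention is to sort the abbreviations by their long form while still sorting on the name field, the fallback for the name field itself has to be changed; a sketch (untested, using the abbreviation-name-fallback option mentioned above):

\GlsXtrLoadResources[
src={mixed-entries},
selection=all,
sort={en-GB},
sort-field={name},
abbreviation-name-fallback={long}
]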
The effects of changing the default value of sort-field=sort are illustrated in the examples below. Where the designated field is missing, the fallback for that field is used. The final fallback used by the sort method (in the event that there is no fallback for the given field) is the entry's label. Note that there's a difference between a field that's not set (which triggers a fallback) and a field that's set but whose value is reduced to the empty string when parsed by the comparator.
## Sort by Name
This example instructs the sort method to use the name field for the sort value. This just requires one extra resource option added to the base example:
\GlsXtrLoadResources[
src={mixed-entries},
selection=all,
sort={en-GB},
sort-field={name}
]
This results in the following order (square brackets show actual sort value used):
1. ♣ [“” followed by “card-club”]
2. ♦ [“” followed by “card-diamond”]
3. ♥ [“” followed by “card-heart”]
4. ♠ [“” followed by “card-spade”]
5. aardvark [aardvark|]
6. examples [examples|]
7. HTML [HTML|]
8. lead [lead|]
9. matrices [matrices|]
10. matriculate [matriculate|]
11. tin [tin|]
12. XML [XML|]
13. zebra [zebra|]
14. zirconium [zirconium|]
15. zither [zither|]
16. γ [𝛾|]
17. ζ(3) [𝜁|3|]
18. π [𝜋|]
The transcript file shows the following messages:
Identical sort values for 'card-heart' and 'card-spade'
Falling back on ID
Identical sort values for 'card-club' and 'card-spade'
Falling back on ID
Identical sort values for 'card-club' and 'card-heart'
Falling back on ID
Identical sort values for 'card-diamond' and 'card-spade'
Falling back on ID
Identical sort values for 'card-diamond' and 'card-heart'
Falling back on ID
Identical sort values for 'card-diamond' and 'card-club'
Falling back on ID
The TeX parser library has been used to interpret the values that contain commands. The results are also shown in the transcript file:
texparserlib: {}\ensuremath{\gamma} -> 𝛾
texparserlib: {}\ensuremath{\pi} -> 𝜋
texparserlib: {}\ensuremath{\zeta(3)} -> 𝜁(3)
texparserlib: {}\ensuremath{\heartsuit} -> ♡
texparserlib: {}\ensuremath{\clubsuit} -> ♣
texparserlib: {}\ensuremath{\diamondsuit} -> ♢
So you might be wondering why, for example, \ensuremath{\zeta(3)}, which has been interpreted as “𝜁(3)”, has ended up as “𝜁|3|” or why the sort method considers \ensuremath{\diamondsuit}, which has been interpreted as “♢”, and \ensuremath{\clubsuit}, which has been interpreted as “♣”, to be identical.
The answer lies in the sort method being used: sort=en-GB. This uses a locale comparator designed for text in a particular language (in this case British English). The default behaviour is to strip punctuation and to mark break points (word ends, by default) with the break point marker (the pipe or vertical bar symbol | by default). So the parentheses are stripped from “𝜁(3)” and the break point marker is inserted after each “word” resulting in “𝜁|3|”. In the case of the card suits, the Unicode symbols ♠ (U+2660), ♡ (U+2661), ♣ (U+2663) and ♢ (U+2662) are stripped leaving empty sort values (from the comparator’s point of view). These empty strings are all identical so the comparator then uses the entry labels to order those entries relative to each other but the empty sort value means they end up before “aardvark”. This is different to the default result shown at the top of this page where the label is used as the sort fallback.
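(If the punctuation stripping and break markers are not wanted, bib2gls also has a break-at resource option that can suppress the word-break markers, e.g.

sort={en-GB},
break-at={none}

I am quoting break-at from memory of the bib2gls user manual, so treat the exact option name and value as an assumption to verify there.)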
Note there’s a slight difference between the way LaTeX and the TeX parser library interpret the card suit commands.
## Case-Insensitive Letter Sort by Name
This is a minor variation to the previous example. Here the sort method is changed to a case-insensitive letter sort:
\GlsXtrLoadResources[
src={mixed-entries},
selection=all,
sort={letter-nocase},
sort-field={name}
]
Again the sort value is obtained from the name field (or the fallback for the name field if not set) and again the values containing commands are interpreted, but this time the value is converted to lower case, punctuation characters aren’t stripped and there are no break points. The order is now (sort value shown in square brackets):
1. aardvark [aardvark]
2. examples [examples]
3. HTML [html]
4. lead [lead]
5. matrices [matrices]
6. matriculate [matriculate]
7. tin [tin]
8. XML [xml]
9. zebra [zebra]
10. zirconium [zirconium]
11. zither [zither]
12. ♠ [♠]
13. ♥ [♡]
14. ♦ [♢]
15. ♣ [♣]
16. γ [𝛾]
17. ζ(3) [𝜁(3)]
18. π [𝜋]
The ordering is now obtained from a simple character code comparison of the derived sort values.
## Sort by Description
This example instructs the sort method to use the description field for the sort value. This just requires one extra resource option added to the base example:
\GlsXtrLoadResources[
src={mixed-entries},
selection=all,
sort={en-GB},
sort-field={description}
]
This results in the following order (square brackets show actual sort value used):
1. aardvark [African|mammal|with|tubular|snout|and|long|tongue|]
2. zebra [African|wild|horse|with|black|and|white|stripes|]
3. ζ(3) [Apéry's|constant|]
4. π [Archimedes|constant|]
5. ♣ [club|card|suit|]
6. ♦ [diamond|card|suit|]
7. matriculate [enrol|or|be|enrolled|at|a|college|or|university|]
8. γ [Euler's|constant|]
9. examples [example|]
10. zirconium [grey-white|strong|transition|metal|]
11. ♥ [heart|card|suit|]
12. lead [heavy|metal|that's|soft|and|malleable|with|a|low|melting|point|]
13. HTML [markup-HTML|]
14. XML [markup-XML|]
15. matrices [matrix|]
16. tin [soft|silvery|metal|with|a|faint|yellow|hue|]
17. ♠ [spade|card|suit|]
18. zither [type|of|stringed|musical|instrument|]
There are four entries that don’t have the description field set: “HTML” and “XML” (defined with @abbreviation) and “examples” and “matrices” (defined with @indexplural). There’s no fallback for the description field, so the label is used instead if it’s missing.
## Sort by Symbol
This example instructs the sort method to use the symbol field for the sort value. This just requires one extra resource option added to the base example:
\GlsXtrLoadResources[
src={mixed-entries},
selection=all,
sort={en-GB},
sort-field={symbol}
]
Note that this example is using a locale comparator not a letter comparator. This results in the following order (square brackets show actual sort value used):
1. aardvark [aardvark|]
2. ♣ [card-club|]
3. ♦ [card-diamond|]
4. ♥ [card-heart|]
5. ♠ [card-spade|]
6. examples [example|]
7. γ [gamma|]
8. HTML [markup-HTML|]
9. XML [markup-XML|]
10. matriculate [matriculate|]
11. matrices [matrix|]
12. lead [Pb|]
13. π [pi|]
14. tin [Sn|]
15. zebra [zebra|]
16. ζ(3) [zeta3|]
17. zither [zither|]
18. zirconium [Zr|]
Only three of the entries actually have the symbol field set. There’s no fallback for that field so the label is used.
## Altering the Fallback
In this example, instead of selecting a particular field to sort by (such as name) I’ve changed the sort fallback settings:
\GlsXtrLoadResources[
src={mixed-entries},
selection=all,
sort={en-GB},
symbol-sort-fallback={name},
abbreviation-sort-fallback={long},
entry-sort-fallback={symbol}
]
No entries have the sort field set so they will all use the fallback field according to their entry type. In this case, symbols (@symbol and @number) will fallback on the name, abbreviations will fallback on the long field and general entries (@entry) will fallback on the symbol field. The @index and @indexplural terms still fallback on the name. This results in the following order (square brackets show actual sort value used):
1. ♣ [“” followed by “card-club”]
2. ♦ [“” followed by “card-diamond”]
3. ♥ [“” followed by “card-heart”]
4. ♠ [“” followed by “card-spade”]
5. aardvark [aardvark|]
6. examples [examples|]
7. XML [extensible|markup|language|]
8. HTML [hypertext|markup|language|]
9. matrices [matrices|]
10. matriculate [matriculate|]
11. lead [Pb|]
12. tin [Sn|]
13. zebra [zebra|]
14. zither [zither|]
15. zirconium [Zr|]
16. γ [𝛾|]
17. ζ(3) [𝜁|3|]
18. π [𝜋|]
As with the sort by name example, the card suit characters are discarded by the locale sort comparator leaving an empty string (which puts them before “aardvark”). This means those four entries have identical sort values (“”) and so are ordered relative to each other according to their labels.
The abbreviations are now ordered according to their long form. This may be more appropriate if the style chosen shows the long form (or long followed by short) as the name.
The metals (lead, tin and zirconium) are now ordered according to their symbol so, for example, “zirconium” now comes after “zither”.
## Blocks
Each resource command (\GlsXtrLoadResources) sorts and collates the selected entries and writes their definitions to a .glstex file. The \printunsrtglossary command (used internally by \printunsrtglossaries) simply iterates over the entry labels according to their definition (from the glossaries package’s point of view). That is, according to the order they are written in the .glstex files. This means that if you have multiple instances of \GlsXtrLoadResources for the same glossary then that glossary listing will have sub-sorted blocks (where the blocks are in the same order as the corresponding \GlsXtrLoadResources commands). There need not necessarily be any visual separation between those blocks (although that can be added, see Logical Glossary Divisions (type vs group vs parent)).
In this example, the symbols (@symbol and @number) are first sorted (case-sensitive with name as the sort fallback), then the abbreviations are sorted (locale with long as the sort fallback), then the metals (@entry) are sorted (case-insensitive letter with symbol as the sort fallback), then finally the terms (@index and @indexplural) are sorted (locale).
\GlsXtrLoadResources[
src={mixed-entries},
selection=all,
sort={letter-case},
symbol-sort-fallback={name},
match={entrytype={symbol|number}}
]
\GlsXtrLoadResources[
src={mixed-entries},
selection=all,
sort={en-GB},
abbreviation-sort-fallback={long},
match={entrytype=abbreviation}
]
\GlsXtrLoadResources[
src={mixed-entries},
selection=all,
sort={letter-nocase},
entry-sort-fallback={symbol},
match={entrytype=entry}
]
\GlsXtrLoadResources[
src={mixed-entries},
selection=all,
sort={en-GB},
match={entrytype={index|indexplural}}
]
This results in the following order (square brackets show actual sort value used):
1. ♠ [♠]
2. ♥ [♡]
3. ♦ [♢]
4. ♣ [♣]
5. γ [𝛾]
6. ζ(3) [𝜁(3)]
7. π [𝜋]
8. XML [extensible|markup|language|]
9. HTML [hypertext|markup|language|]
10. lead [pb]
11. tin [sn]
12. zirconium [zr]
13. aardvark [aardvark|]
14. examples [examples|]
15. matrices [matrices|]
16. matriculate [matriculate|]
17. zebra [zebra|]
18. zither [zither|]
|
# Resistance range to output voltage range
#### xe351c
Jul 15, 2014
8
Hi all,
I'm a newby here and only have a fairly basic electronics knowledge.
What I am wanting to know is how to convert a variable resistor's resistance to a variable voltage output, with very low current draw, as it just runs an electronic fuel gauge.
The output of the sender varies from 10 ohms (empty) to 149 ohms (full).
The input to the gauge needs to be 0.1v empty and 4.5v full.
Supply voltage is a regulated 12VDC.
I played around with a resistor calculator but couldn't find that happy medium.
Can any one help me with this? Can this be done simply or will it require something a little more elaborate??
Thanks in advance!
Matt
#### BobK
Jan 5, 2010
7,682
As you have determined, you cannot get this relationship with just a voltage divider.
Normally, one would use a constant current source to convert a resistance into a voltage linearly. But your two points indicate that that would not work either.
At the high end: I = V / R = 4.5 / 149 = 0.03 = 30 mA
At the low end: I = V / R = 0.1 / 10 = 0.01 = 10mA
Can you get some more points to see if it is indeed something like linear maybe with the low end being off a bit? I.e. can you get the voltage and resistance needed to read 1/2 full?
Bob
#### shumifan50
Jan 16, 2014
579
Well, a 248R resistor will give 4.5 V at full and 0.465 V at empty, which will render the bottom 10% of the range unusable if the meter is linear. The other thing to watch is the heat generated, as the divider will dissipate about 0.5 W at empty, which is when there are the most fumes around.
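(For anyone who wants to check those figures, a throwaway Python sketch of the plain 12 V divider, using shumifan50's 248 ohm top resistor with the sender as the bottom leg; this is just a sanity check, not from the thread:)

def divider_out(r_sender, r_top=248.0, v_supply=12.0):
    # Voltage across the sender in a two-resistor divider.
    return v_supply * r_sender / (r_sender + r_top)

def divider_power(r_sender, r_top=248.0, v_supply=12.0):
    # Total power dissipated in the divider chain.
    return v_supply ** 2 / (r_sender + r_top)

print(divider_out(149))    # ~4.50 V at full
print(divider_out(10))     # ~0.47 V at empty
print(divider_power(10))   # ~0.56 W at empty, matching the ~0.5 W concern above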
#### xe351c
Jul 15, 2014
8
Hey guys, Thanks for the responses so far.
For the fuel sender the resistances are as follows:
Empty - 10
1/4 - 40.5
1/2 - 66
3/4 - 95
Full - 149
For the voltages, all I have is:
Empty - 0-0.1
Half - 2.2-2.8
Full - 4.1-4.5
Hope that is to some assistance?
Matt
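(A quick aside, not part of the thread: the size of the jumps between quarter-tank steps shows how non-linear the sender is, which is easy to check with a few lines of Python:)

# Sender resistance (ohms) at each gauge level, from the post above.
resistances = [10, 40.5, 66, 95, 149]   # empty, 1/4, 1/2, 3/4, full

# Ohms per quarter-tank step; a linear sender would give equal steps.
steps = [b - a for a, b in zip(resistances, resistances[1:])]
print(steps)  # [30.5, 25.5, 29, 54] -> the top quarter spans almost twice the others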
#### xe351c
Jul 15, 2014
8
yeah, I imagine that .5 watts would be too much for the sender, and as mentioned it is located inside the fuel tank......
#### Gryd3
Jun 25, 2014
4,098
yeah, I imagine that .5 watts would be too much for the sender, and as mentioned it is located inside the fuel tank......
I'll play around with some numbers in my head... I'm thinking of a Wheatstone bridge... perhaps someone can chime in about its plausibility.
#### shumifan50
Jan 16, 2014
579
How about using a little PIC, reading the voltage drop across the sensor (ADC) and then using PWM to drive the meter? It would be a simple circuit and easy to calibrate. This would also allow a much lower voltage across the sensor (a 1.5 V reference, for example, would halve the wattage).
Last edited:
#### Gryd3
Jun 25, 2014
4,098
(This reply linked an application note.)
#### shumifan50
Jan 16, 2014
579
That app note only handles resistance down to 1K2.
#### KrisBlueNZ
##### Sadly passed away in 2015
Nov 28, 2011
8,393
Fuel gauges aren't linear anyway in my experience. They report less than half full when the tank is half full, to encourage you to fill up without waiting too long. And the sender isn't likely to be perfectly linear because of its design and manufacturing errors. Some non-linearity is sure to be tolerable.
I've graphed your figures from post #4 FYI:
Does the sender have two wires, fully isolated from the chassis, or does it have one wire and the other side is connected to the chassis?
Is the supply voltage only present when the vehicle is running? Or is it present all the time? This affects the amount of current that can be drawn from it.
#### xe351c
Jul 15, 2014
8
Wow, thanks everyone for the input! I kinda didn't expect much!
Gryd3/shumifan50, I have no idea what you are talking about (lol) but I will google up on your suggestions to get my head around it.
KrisBlueNZ, Nice graph! Didn't even think of doing that myself with Excel.
Answers to your questions.
The sender is two wires, independent of chassis/ground.
Power supply is ignition ON only (99% sure), even if not, could trigger it with a relay if required.
Just a bit of background for this.
Fuel gauges in Australian 1979-1987 Ford Falcons use a capacitive type fuel sender (no moving parts). These are now no longer available, any old stock goes for a small fortune (A$300-A$500), and there are several different types depending on body type (sedan/wagon etc).
Now it's not that I'm too much of a tight ass to buy one at this price; there are other issues with these. The big one is that E10 fuel plays up with how they read. Petrol has a dielectric constant of 2.0, water is 80.3 and ethanol is 16.2. That doesn't mean a lot to me, but what it does mean is that the gauge will read full for 3/4+ of a tank on E10 and then plummet! Not really ideal.
From 1988, Ford changed to an ohmmeter-style gauge and sender with a float. Since I'm mechanically sound (diesel mechanic by trade) I'm good at making square pegs go into round holes, but not so good at soldering diodes and ICs onto circuit boards lol.
For a little more background on the older sender have a read here http://www.ozfalcon.com.au/index.ph...-level-display-capacitive-probe/?fromsearch=1
See the attached photos of my car
Thanks again for everyone's input!
#### xe351c
Jul 15, 2014
8
Something I have come across that might be an easier conversion: you can get an interface that is used to kinda do what I am asking. Sort of, bear with me!
Now! 99.8% of automotive LP Gas tank gauges have a level sender that works on 0 ohms empty - 90 ohms full.
I wonder if using this might get me closer??
Thoughts??
#### KrisBlueNZ
##### Sadly passed away in 2015
Nov 28, 2011
8,393
I don't know about the CP94, CP95 and CP96 boards. If they're cheap, you could buy one and try it. Without seeing a schematic diagram for them, I couldn't say whether they'll work or not.
I could draw up a circuit with about 10~15 components, including an 8-pin IC:
... which you can build up on stripboard:
Do you feel confident that you could build it up? I could include links to all components on Digi-Key or Radio Shack. I just want to know whether there's any point drawing up a design.
#### shumifan50
Jan 16, 2014
579
@kris:
The circuit would be interesting to see even if it is not used - if you have some time on your hands.
#### xe351c
Jul 15, 2014
8
I could definitely build it, not a problem. If I had a list of components and a drawing, I could put it together. I just would never be able to design it from scratch lol.
If I had a list, I could walk into my local Jaycar (www.jaycar.com.au) store and they would get the components for me.
#### KrisBlueNZ
##### Sadly passed away in 2015
Nov 28, 2011
8,393
OK, here's what I suggest.
There's a complete list of Jaycar parts there. The total cost is about AUD 21. You will also need solder, soldering iron, enclosure, wire, etc.
The circuit is a "non-inverting amplifier" using an LMC6482 dual op-amp. This is the most suitable device that Jaycar have. D1 protects the circuit against reversed power supply. U2 is a 7808 regulator that generates a regulated, stable +8V supply for the rest of the circuit. R1 causes about 1 mA to flow through the sender, which produces a voltage at R2 of about 0~150 mV according to the sender resistance. R2 and C1 remove noise from this signal, and U1A amplifies this voltage, relative to the 0V rail, by a factor of around 30 (adjustable via VR1) to produce a voltage range of 0~4.5V at the output, which drives the dashboard meter.
There is no processing performed on the signal (apart from noise removal) so the dashboard meter will respond directly and proportionally to the sender resistance. I think this will be fine for this application, but if an offset is needed, the meter can be offset to the left slightly by adding a resistor of around 1 MΩ between U1 pin 2 and the +8V rail. Use a lower resistance for more offset.
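(A numerical sanity check on the behaviour described above; a sketch assuming the nominal 1 mA sense current and a gain of 30 from the description, not Kris's exact component values:)

def meter_voltage(r_sender, i_sense=1e-3, gain=30.0):
    # Non-inverting amplifier output: gain x (sense current x sender resistance).
    return gain * i_sense * r_sender

for r in (10, 66, 149):
    print(r, round(meter_voltage(r), 2))
# 10 -> 0.3 V, 66 -> 1.98 V, 149 -> 4.47 V (hence the ~0.3 V at empty noted below)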
I have suggested 2-pin latching polarised headers for the connections to the sender and the dashboard meter, and a 2-pin polarised shrouded plug/socket combination for the power input. Accidentally interchanging the sender and the meter is unlikely to cause any damage.
You will need to translate the schematic into a stripboard layout. There are several tutorials out there; Google is your friend. Have a decent go at it, and if you get stuck with any specific questions, ask them here.
#### BobK
Jan 5, 2010
7,682
The 10 Ohms at empty is what bothers me. Couldn't we subtract the voltage at 10 Ohms with the op amp to get a 0.1V at empty and adjust the gain to get the full scale value of 4.5V? I think this would improve the circuit.
Here is a modification of the circuitry around the op amp in Kris's circuit that does this.
R6 is the gas-tank sender.
Output is 0.102V at 10 Ohms and 4.48V at 149 Ohms.
A 10K trimmer could be placed in series with R4 to adjust full output to 4.5V.
Also a 0.1uF cap across R6 and R8 would help stabilize it.
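(Another aside: BobK's two quoted endpoints define an almost exactly linear map from resistance to voltage, which a couple of lines confirm; note the half-tank value still sits below the 2.2-2.8 V the gauge expects, so some sender non-linearity remains either way:)

# Fit V = m*R + b through the two quoted endpoints.
r1, v1, r2, v2 = 10, 0.102, 149, 4.48
m = (v2 - v1) / (r2 - r1)    # ~0.0315 V per ohm
b = v1 - m * r1              # ~-0.213 V offset
print(round(m * 66 + b, 2))  # ~1.87 V at the half-tank sender value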
Last edited:
#### KrisBlueNZ
##### Sadly passed away in 2015
Nov 28, 2011
8,393
The 10 Ohms at empty is what bothers me. Couldn't we subtract the voltage at 10 Ohms with the op amp to get a 0.1V at empty and adjust the gain to get the full scale value of 4.5V? I think this would improve the circuit.
I don't think that will be an issue in the big scheme of things; it's only a fuel level meter. But I did consider the possibility:
There is no processing performed on the signal (apart from noise removal) so the dashboard meter will respond directly and proportionally to the sender resistance. I think this will be fine for this application, but if an offset is needed, the meter can be offset to the left slightly by adding a resistor of around 1 MΩ between U1 pin 2 and the +8V rail. Use a lower resistance for more offset.
#### BobK
Jan 5, 2010
7,682
The OP said it should output 0.1V at 10 Ohms for the empty indication. I can see the gas gauge having E marked somewhere above the 0 point so that you can distinguish between empty and meter off. I believe your method will have something like 0.3V output for 10 Ohms, which means it will never indicate empty.
Edited: OK, I see now how a resistor from - input to V+ would offset it as well. So that should be set for 0.1V then the gain set for 4.5V full scale and it works pretty much the same as my circuit.
Bob
Last edited:
#### xe351c
Jul 15, 2014
8
That is excellent, Kris! Thanks for your efforts!!! I did email the company about the interface mentioned, and they wanted about A$80 delivered.
This should, hopefully (depending on my skills lol), work for a much cheaper price and be more accurate!
I doubt I will have a crack at it this weekend, but I will next! I will post up pics of my efforts and results.
Thanks again!!
|
American Institute of Mathematical Sciences
November 2011, 5(4): 589-607. doi: 10.3934/amc.2011.5.589
The merit factor of binary arrays derived from the quadratic character
1 Department of Mathematics, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
Received July 2010 Revised July 2011 Published November 2011
We calculate the asymptotic merit factor, under all cyclic rotations of rows and columns, of two families of binary two-dimensional arrays derived from the quadratic character. The arrays in these families have size $p\times q$, where $p$ and $q$ are not necessarily distinct odd primes, and can be considered as two-dimensional generalisations of a Legendre sequence. The asymptotic values of the merit factor of the two families are generally different, although the maximum asymptotic merit factor, taken over all cyclic rotations of rows and columns, equals $36/13$ for both families. These are the first non-trivial theoretical results for the asymptotic merit factor of families of truly two-dimensional binary arrays.
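For readers coming to this page without the background: the merit factor referred to here is the standard (Golay) merit factor. For a binary sequence $A=(a_0,\dots,a_{n-1})$ with $a_j\in\{-1,+1\}$ it is defined as $F(A)=n^2/\bigl(2\sum_{u=1}^{n-1}C_A(u)^2\bigr)$, where $C_A(u)=\sum_{j=0}^{n-1-u}a_ja_{j+u}$ is the aperiodic autocorrelation of $A$ at shift $u$; for a $p\times q$ array the sum instead runs over the squared aperiodic autocorrelations at all nonzero two-dimensional shifts. (This definition is standard background, not quoted from the paper itself.)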
Citation: Kai-Uwe Schmidt. The merit factor of binary arrays derived from the quadratic character. Advances in Mathematics of Communications, 2011, 5 (4) : 589-607. doi: 10.3934/amc.2011.5.589
Z. Tirkel, I. D. Svalbe, T. E. Hall and C. F. Osborne, Algebraic construction of a new class of quasi-orthogonal arrays for steganography, Proc. SPIE, 3657 (1999), 354-364. doi: 10.1117/12.344685.
[1] Kai-Uwe Schmidt, Jonathan Jedwab, Matthew G. Parker. Two binary sequence families with large merit factor. Advances in Mathematics of Communications, 2009, 3 (2) : 135-156. doi: 10.3934/amc.2009.3.135 [2] Richard Hofer, Arne Winterhof. On the arithmetic autocorrelation of the Legendre sequence. Advances in Mathematics of Communications, 2017, 11 (1) : 237-244. doi: 10.3934/amc.2017015 [3] Zilong Wang, Guang Gong. Correlation of binary sequence families derived from the multiplicative characters of finite fields. Advances in Mathematics of Communications, 2013, 7 (4) : 475-484. doi: 10.3934/amc.2013.7.475 [4] Ji-Woong Jang, Young-Sik Kim, Sang-Hyo Kim. New design of quaternary LCZ and ZCZ sequence set from binary LCZ and ZCZ sequence set. Advances in Mathematics of Communications, 2009, 3 (2) : 115-124. doi: 10.3934/amc.2009.3.115 [5] Xiaohui Liu, Jinhua Wang, Dianhua Wu. Two new classes of binary sequence pairs with three-level cross-correlation. Advances in Mathematics of Communications, 2015, 9 (1) : 117-128. doi: 10.3934/amc.2015.9.117 [6] Huaning Liu, Xi Liu. On the correlation measures of orders $3$ and $4$ of binary sequence of period $p^2$ derived from Fermat quotients. Advances in Mathematics of Communications, 2021 doi: 10.3934/amc.2021008 [7] Arne Winterhof, Zibi Xiao. Binary sequences derived from differences of consecutive quadratic residues. Advances in Mathematics of Communications, 2022, 16 (1) : 83-93. doi: 10.3934/amc.2020100 [8] Moulay Rchid Sidi Ammi, Ismail Jamiai. Finite difference and Legendre spectral method for a time-fractional diffusion-convection equation for image restoration. Discrete and Continuous Dynamical Systems - S, 2018, 11 (1) : 103-117. doi: 10.3934/dcdss.2018007 [9] Amer Rasheed, Aziz Belmiloudi, Fabrice Mahé. Dynamics of dendrite growth in a binary alloy with magnetic field effect. Conference Publications, 2011, 2011 (Special) : 1224-1233. doi: 10.3934/proc.2011.2011.1224 [10] Chun-Hao Teng, I-Liang Chern, Ming-Chih Lai. Simulating binary fluid-surfactant dynamics by a phase field model. Discrete and Continuous Dynamical Systems - B, 2012, 17 (4) : 1289-1307. doi: 10.3934/dcdsb.2012.17.1289 [11] Hélène Hibon, Ying Hu, Shanjian Tang. Mean-field type quadratic BSDEs. Numerical Algebra, Control and Optimization, 2022 doi: 10.3934/naco.2022009 [12] Guangmei Shao, Wei Xue, Gaohang Yu, Xiao Zheng. Improved SVRG for finite sum structure optimization with application to binary classification. Journal of Industrial and Management Optimization, 2020, 16 (5) : 2253-2266. doi: 10.3934/jimo.2019052 [13] Valery Y. Glizer, Oleg Kelis. Singular infinite horizon zero-sum linear-quadratic differential game: Saddle-point equilibrium sequence. Numerical Algebra, Control and Optimization, 2017, 7 (1) : 1-20. doi: 10.3934/naco.2017001 [14] Yoshikazu Katayama, Colin E. Sutherland and Masamichi Takesaki. The intrinsic invariant of an approximately finite dimensional factor and the cocycle conjugacy of discrete amenable group actions. Electronic Research Announcements, 1995, 1: 43-47. [15] Grégory Berhuy, Jean Fasel, Odile Garotta. Rank weights for arbitrary finite field extensions. Advances in Mathematics of Communications, 2021, 15 (4) : 575-587. doi: 10.3934/amc.2020083 [16] Martino Bardi. Explicit solutions of some linear-quadratic mean field games. Networks and Heterogeneous Media, 2012, 7 (2) : 243-261. doi: 10.3934/nhm.2012.7.243 [17] Kai Du, Jianhui Huang, Zhen Wu. Linear quadratic mean-field-game of backward stochastic differential systems. 
Mathematical Control and Related Fields, 2018, 8 (3&4) : 653-678. doi: 10.3934/mcrf.2018028 [18] Olivier Guéant. New numerical methods for mean field games with quadratic costs. Networks and Heterogeneous Media, 2012, 7 (2) : 315-336. doi: 10.3934/nhm.2012.7.315 [19] Zhenghong Qiu, Jianhui Huang, Tinghan Xie. Linear-Quadratic-Gaussian mean-field controls of social optima. Mathematical Control and Related Fields, 2021 doi: 10.3934/mcrf.2021047 [20] Denis Danilov, Britta Nestler. Phase-field modelling of nonequilibrium partitioning during rapid solidification in a non-dilute binary alloy. Discrete and Continuous Dynamical Systems, 2006, 15 (4) : 1035-1047. doi: 10.3934/dcds.2006.15.1035
2020 Impact Factor: 0.935
|
## Differential Propositional Calculus • Discussion 5
HR:
1. I think I like your Cactus Graphs very much. I am in the process of understanding them, and I find it much better not to have to draw circles, but lines.
2. Less easy for me is the differential calculus. Where is the consistency between $\texttt{(} x \texttt{,} y \texttt{)}$ and $\texttt{(} x \texttt{,} y \texttt{,} z \texttt{)}?$ $\texttt{(} x \texttt{,} y \texttt{)}$ means that $x$ and $y$ are not equal, and $\texttt{(} x \texttt{,} y \texttt{,} z \texttt{)}$ means that exactly one of them is false. Inequality and truth/falsity are for me two concepts so different that I cannot think of them together or see a consistency between them.
3. What about $\texttt{(} w \texttt{,} x \texttt{,} y \texttt{,} z \texttt{)}?$
MB:
So, if I want to transform a circle into a line I have to use a function $f : \mathbb{B}^n \to \mathbb{B}?$
This is the base of temporal logic? I’m using $f : \mathbb{N}^n \to \mathbb{N}.$
Dear Mauro,
If I understand what Helmut is saying about “circles” and “lines”, he is talking about the passage from forms of enclosure on plane sheets of paper — such as those used by Peirce and Spencer Brown — to their topological duals in the form of rooted trees. There is more discussion of this transformation at the following sites.
This is the first step in the process of converting planar maps to graph-theoretic data structures. Further transformations take us from trees to the more general class of cactus graphs, which implement a highly efficient family of logical primitives called minimal negation operators. These are described in the following article.
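The uniformity Helmut is asking about becomes clearer in code: for two Boolean arguments, "exactly one is false" is the same condition as "$x$ and $y$ are unequal." Here is a minimal Python sketch (my own illustration, not from the discussion) of a minimal negation operator:

```python
# A minimal negation operator (x_1, ..., x_k) evaluates to true exactly
# when exactly one of its arguments is false.
def minimal_negation(*args):
    """True iff exactly one argument is falsy."""
    return sum(1 for x in args if not x) == 1

# (x, y): true precisely when x and y are unequal.
print(minimal_negation(True, False))   # True  -- x != y
print(minimal_negation(True, True))    # False -- x == y

# (x, y, z): true when exactly one of the three is false.
print(minimal_negation(True, False, True))         # True
print(minimal_negation(True, False, False, True))  # False (two are false)
```

The same definition answers the question about $\texttt{(} w \texttt{,} x \texttt{,} y \texttt{,} z \texttt{)}$: it is true exactly when one of the four arguments is false.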
|
FRET
Fluorescence resonance energy transfer (FRET) is a radiation-less energy transfer between two electronic dipoles occurring with a first-order rate constant inversely proportional to the sixth power of the distance between them [2]. It is a quantum mechanical phenomenon that occurs between two fluorophores (a fluorescence donor and a fluorescence acceptor) that are in molecular proximity of each other (< 80 Å apart), if the emission spectrum of the donor overlaps the excitation spectrum of the acceptor [5]. Under these conditions, energy (E) is transferred non-radiatively from the donor to the acceptor with an efficiency defined by the equation below, where r is the distance between the two fluorophores and R₀ (the Förster distance) is the distance at which 50% energy transfer takes place (typically 20–60 Å).

E = R₀⁶ / (R₀⁶ + r⁶)

R₀ depends on the extent of spectral overlap between the donor and acceptor, the quantum yield of the donor, and the relative orientation of the donor and acceptor. Excitation of a donor fluorophore in a FRET pair leads to quenching of the donor emission and to an increased, sensitized acceptor emission. Intensity-based FRET detection methods include monitoring the donor intensity with or without acceptor photobleaching, the sensitized acceptor emission, or the ratio between the donor and acceptor intensity [5].
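As a minimal sketch of the formula above (the distances below are illustrative assumptions, not measured values):

```python
# FRET efficiency E = R0^6 / (R0^6 + r^6), distances in angstroms.
def fret_efficiency(r, R0):
    return R0**6 / (R0**6 + r**6)

print(fret_efficiency(r=50.0, R0=50.0))  # 0.5: 50% transfer at the Forster distance
print(fret_efficiency(r=80.0, R0=50.0))  # ~0.056: little transfer beyond ~80 angstroms
```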
|
# 1.3 Rates of Change and Behavior of Graphs
Since functions represent how an output quantity varies with an input quantity, it is natural to ask about the rate at which the values of the function are changing.
For example, the function C(t) below gives the average cost, in dollars, of a gallon of gasoline t years after 2000.
| t | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|
| C(t) | 1.47 | 1.69 | 1.94 | 2.3 | 2.51 | 2.64 | 3.01 | 2.14 |
If we were interested in how the gas prices had changed between 2002 and 2009, we could compute that the cost per gallon had increased from $1.47 to $2.14, an increase of $0.67. While this is interesting, it might be more useful to look at how much the price changed per year. You are probably noticing that the price didn't change the same amount each year, so we would be finding the average rate of change over a specified amount of time. The gas price increased by $0.67 from 2002 to 2009, over 7 years, for an average of $0.67 / 7 years ≈ 0.096 dollars per year. On average, the price of gas increased by about 9.6 cents each year.
### Rate of Change
A rate of change describes how the output quantity changes in relation to the input quantity. The units on a rate of change are "output units per input units."
Some other examples of rates of change would be quantities like:
• A population of rats increases by 40 rats per week
• A barista earns $9 per hour (dollars per hour)
• A farmer plants 60,000 onions per acre
• A car can drive 27 miles per gallon
• A population of grey whales decreases by 8 whales per year
• The amount of money in your college account decreases by $4,000 per quarter
#### Average Rate of Change
The average rate of change between two input values is the total change of the function values (output values) divided by the change in the input values.
Average rate of change = (Change of output) / (Change of input) = Δy / Δx
Example 1
Using the cost-of-gas function from earlier, find the average rate of change between 2007 and 2009
From the table, in 2007 the cost of gas was $2.64. In 2009 the cost was $2.14.
The input (years) has changed by 2. The output has changed by $2.14 - $2.64 = -$0.50. The average rate of change is then -$0.50 / 2 years = -0.25 dollars per year.
Try it Now
1. Using the same cost-of-gas function, find the average rate of change between 2003 and 2008
Notice that in the last example the change of output was negative since the output value of the function had decreased. Correspondingly, the average rate of change is negative.
Example 2
Given the function g(t) shown here, find the average rate of change on the interval [0, 3].
At t = 0, the graph shows g(0) = 1.
At t = 3, the graph shows g(3) = 4.
The output has changed by 3 while the input has changed by 3, giving an average rate of change of: 3/3 = 1
Example 3
On a road trip, after picking up your friend who lives 10 miles away, you decide to record your distance from home over time. Find your average speed over the first 6 hours.
| t (hours) | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| D(t) (miles) | 10 | 55 | 90 | 153 | 214 | 240 | 292 | 300 |
Here, your average speed is the average rate of change.
You traveled 292 - 10 = 282 miles in 6 hours, for an average speed of
282 miles / 6 hours = 47 miles per hour
We can more formally state the average rate of change calculation using function notation.
### Average Rate of Change using Function Notation
Given a function f(x), the average rate of change on the interval [a, b] is
Average rate of change = (f(b) - f(a)) / (b - a)
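As a minimal sketch, this definition translates directly into code, and reproduces Example 3's average speed:

```python
# Average rate of change of f on the interval [a, b].
def average_rate_of_change(f, a, b):
    return (f(b) - f(a)) / (b - a)

# Distance table from Example 3 (miles recorded at each hour).
D = {0: 10, 1: 55, 2: 90, 3: 153, 4: 214, 5: 240, 6: 292, 7: 300}
print(average_rate_of_change(lambda t: D[t], 0, 6))  # 47.0 miles per hour
```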
Example 4
Compute the average rate of change of f(x) on the interval [2, 4].
We can start by computing the function values at each endpoint of the interval, f(2) and f(4).
Now computing the average rate of change:
Average rate of change = (f(4) - f(2)) / (4 - 2)
Try it Now
2. Find the average rate of change of f(x) on the interval [1, 9]
Example 5
The magnetic force F, measured in Newtons, between two magnets is related to the distance between the magnets d, in centimeters, by the formula F(d) = 2/d². Find the average rate of change of force if the distance between the magnets is increased from 2 cm to 6 cm.
We are computing the average rate of change of F(d) = 2/d² on the interval [2, 6]
Average rate of change = (F(6) - F(2)) / (6 - 2)
Evaluating the function:
= (2/6² - 2/2²) / (6 - 2)
Simplifying:
= (2/36 - 2/4) / 4
Combining the numerator terms:
= (-16/36) / 4
Simplifying further:
= -1/9
Newtons per centimeter
This tells us the magnetic force decreases, on average, by 1/9 Newtons per centimeter over this interval.
Example 6
Find the average rate of change of g(t) = t² + 3t + 1 on the interval [0, a]. Your answer will be an expression involving a.
Using the average rate of change formula
= (g(a) - g(0)) / (a - 0)
Evaluating the function
= ((a² + 3a + 1) - 1) / a
Simplifying
= (a² + 3a) / a
Simplifying further, and factoring
= a(a + 3) / a
Cancelling the common factor a
= a + 3
This result tells us the average rate of change between t = 0 and any other point t = a. For example, on the interval [0, 5], the average rate of change would be 5+3 = 8.
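Since the function in this example had to be reconstructed, a symbolic check is reassuring. The sketch below assumes g(t) = t² + 3t + 1, which is consistent with the worked value 5 + 3 = 8 on [0, 5]:

```python
# Symbolic check of Example 6, assuming the reconstructed g(t) = t**2 + 3*t + 1.
import sympy as sp

a, t = sp.symbols('a t')
g = t**2 + 3*t + 1
print(sp.simplify((g.subs(t, a) - g.subs(t, 0)) / (a - 0)))  # a + 3
```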
Try it Now
3. Find the average rate of change of f(t) on the given interval.
### Graphical Behavior of Functions
As part of exploring how functions change, it is interesting to explore the graphical behavior of functions.
#### Increasing/Decreasing
A function is increasing on an interval if the function values increase as the inputs increase. More formally, a function is increasing if f(b) > f(a) for any two input values a and b in the interval with b>a. The average rate of change of an increasing function is positive.
A function is decreasing on an interval if the function values decrease as the inputs increase. More formally, a function is decreasing if f(b) < f(a) for any two input values a and b in the interval with b>a. The average rate of change of a decreasing function is negative.
Example 7
Given the function p(t) graphed here, on what intervals does the function appear to be increasing?
The function appears to be increasing from t = 1 to t = 3, and from t = 4 on.
In interval notation, we would say the function appears to be increasing on the interval (1, 3) and on the interval (4, ∞).
Notice in the last example that we used open intervals (intervals that don’t include the endpoints) since the function is neither increasing nor decreasing at t = 1, 3, or 4.
Definition: Local Extrema
• A point where a function changes from increasing to decreasing is called a local maximum
• A point where a function changes from decreasing to increasing is called a local minimum.
Together, local maxima and minima are called the local extrema, or local extreme values, of the function.
Example 8
Using the cost of gasoline function from the beginning of the section, find an interval on which the function appears to be decreasing. Estimate any local extrema using the table.
| t | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|
| C(t) | 1.47 | 1.69 | 1.94 | 2.3 | 2.51 | 2.64 | 3.01 | 2.14 |
It appears that the cost of gas increased from t = 2 to t = 8. It appears the cost of gas decreased from t = 8 to t = 9, so the function appears to be decreasing on the interval (8, 9).
Since the function appears to change from increasing to decreasing at t = 8, there is local maximum at t = 8.
Example 9
Use a graph to estimate the local extrema of the function f(x). Use these to determine the intervals on which the function is increasing.
Using technology to graph the function, it appears there is a local minimum somewhere between x = 2 and x = 3, and a symmetric local maximum somewhere between x = -3 and x = -2.
Most graphing calculators and graphing utilities can estimate the location of maxima and minima. Below are screen images from two different technologies, showing the estimate for the local maximum and minimum.
Based on these estimates, the function is increasing on the interval to the left of the local maximum and on the interval to the right of the local minimum. Notice that while we expect the extrema to be symmetric, the two different technologies agree only up to 4 decimals due to the differing approximation algorithms used by each.
Try it Now
4. Use a graph of the function f(x) = x³ - 6x² - 15x + 20 to estimate the local extrema of the function. Use these to determine the intervals on which the function is increasing and decreasing.
### Concavity
The total sales, in thousands of dollars, for two companies over 4 weeks are shown.
Company A Company B
As you can see, the sales for each company are increasing, but they are increasing in very different ways. To describe the difference in behavior, we can investigate how the average rate of change varies over different intervals. Using tables of values,
Company A
| Week | Sales | Rate of Change (from previous week) |
|---|---|---|
| 0 | 0 | — |
| 1 | 5 | 5 |
| 2 | 7.1 | 2.1 |
| 3 | 8.7 | 1.6 |
| 4 | 10 | 1.3 |
Company B
| Week | Sales | Rate of Change (from previous week) |
|---|---|---|
| 0 | 0 | — |
| 1 | 0.5 | 0.5 |
| 2 | 2 | 1.5 |
| 3 | 4.5 | 2.5 |
| 4 | 8 | 3.5 |
From the tables, we can see that the rate of change for company A is decreasing, while the rate of change for company B is increasing
When the rate of change is getting smaller, as with Company A, we say the function is concave down. When the rate of change is getting larger, as with Company B, we say the function is concave up.
Definition: Concavity
• A function is concave up if the rate of change is increasing.
• A function is concave down if the rate of change is decreasing.
• A point where a function changes from concave up to concave down or vice versa is called an inflection point.
Example 10
An object is thrown from the top of a building. The object's height in feet above ground after t seconds is given by the function h(t) = -16t² + 144 for 0 ≤ t ≤ 3. Describe the concavity of the graph.
Sketching a graph of the function, we can see that the function is decreasing. We can calculate some rates of change to explore the behavior
| t | h(t) | Rate of Change (from previous second) |
|---|---|---|
| 0 | 144 | — |
| 1 | 128 | -16 |
| 2 | 80 | -48 |
| 3 | 0 | -80 |
Notice that the rates of change are becoming more negative, so the rates of change are decreasing. This means the function is concave down.
Example 11
The value, V, of a car after t years is given in the table below. Is the value increasing or decreasing? Is the function concave up or concave down?
| t | 0 | 2 | 4 | 6 | 8 |
|---|---|---|---|---|---|
| V(t) | 28000 | 24342 | 21162 | 18397 | 15994 |
Since the values are getting smaller, we can determine that the value is decreasing. We can compute rates of change to determine concavity.
| t | 0 | 2 | 4 | 6 | 8 |
|---|---|---|---|---|---|
| V(t) | 28000 | 24342 | 21162 | 18397 | 15994 |
| Rate of change | | -1829 | -1590 | -1382.5 | -1201.5 |
Since these values are becoming less negative, the rates of change are increasing, so this function is concave up.
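As a minimal sketch, the table computations in Examples 10 and 11 can be automated: compute successive average rates of change and check whether they are increasing (concave up) or decreasing (concave down).

```python
# Successive average rates of change from a table of values.
def successive_rates(ts, ys):
    return [(y2 - y1) / (t2 - t1) for (t1, y1), (t2, y2)
            in zip(zip(ts, ys), zip(ts[1:], ys[1:]))]

# Car-value table from Example 11.
rates = successive_rates([0, 2, 4, 6, 8], [28000, 24342, 21162, 18397, 15994])
print(rates)  # [-1829.0, -1590.0, -1382.5, -1201.5]
print(all(r2 > r1 for r1, r2 in zip(rates, rates[1:])))  # True -> concave up
```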
Try it Now
5. Is the function described in the table below concave up or concave down?
| x | 0 | 5 | 10 | 15 | 20 |
|---|---|---|---|---|---|
| g(x) | 10000 | 9000 | 7000 | 4000 | 0 |
Graphically, concave down functions bend downwards like a frown, and concave up functions bend upwards like a smile.
Example 12
Estimate from the graph shown the intervals on which the function is concave down and concave up.
On the far left, the graph is decreasing but concave up, since it is bending upwards. It begins increasing at x = -2, but it continues to bend upwards until about x = -1.
From x = -1 the graph starts to bend downward, and continues to do so until about x = 2. The graph then begins curving upwards for the remainder of the graph shown.
From this, we can estimate that the graph is concave up on the intervals (-∞, -1) and (2, ∞), and is concave down on the interval (-1, 2). The graph has inflection points at x = -1 and x = 2.
Try it Now
6. Using the graph from Try it Now 4, f(x) = x³ - 6x² - 15x + 20, estimate the intervals on which the function is concave up and concave down.
### Behaviors of the Toolkit Functions
We will now return to our toolkit functions and discuss their graphical behavior.
| Function | Increasing/Decreasing | Concavity |
|---|---|---|
| Constant Function | Neither increasing nor decreasing | Neither concave up nor down |
| Identity Function | Increasing | Neither concave up nor down |
| Quadratic Function | Increasing on (0, ∞). Decreasing on (-∞, 0). Minimum at x = 0 | Concave up |
| Cubic Function | Increasing | Concave down on (-∞, 0). Concave up on (0, ∞). Inflection point at (0,0) |
| Reciprocal | Decreasing | Concave down on (-∞, 0). Concave up on (0, ∞) |
| Reciprocal squared | Increasing on (-∞, 0). Decreasing on (0, ∞) | Concave up on (-∞, 0) and (0, ∞) |
| Cube Root | Increasing | Concave down on (0, ∞). Concave up on (-∞, 0). Inflection point at (0,0) |
| Square Root | Increasing on (0, ∞) | Concave down on (0, ∞) |
| Absolute Value | Increasing on (0, ∞). Decreasing on (-∞, 0) | Neither concave up nor down |
### Important Topics of This Section
• Rate of Change
• Average Rate of Change
• Calculating Average Rate of Change using Function Notation
• Increasing/Decreasing
• Local Maxima and Minima (Extrema)
• Inflection points
• Concavity
1. (3.01 - 1.69) / (2008 - 2003) = 1.32 / 5 = 0.264 dollars per year.
2. Average rate of change =
3.
4. Based on the graph, the local maximum appears to occur at (-1, 28), and the local minimum occurs at (5, -80). The function is increasing on (-∞, -1) and (5, ∞) and decreasing on (-1, 5).
5. Calculating the rates of change, we see the rates of change become more negative, so the rates of change are decreasing. This function is concave down.
| x | 0 | 5 | 10 | 15 | 20 |
|---|---|---|---|---|---|
| g(x) | 10000 | 9000 | 7000 | 4000 | 0 |
| Rate of change | | -1000 | -2000 | -3000 | -4000 |
6. Looking at the graph, it appears the function is concave down on (-∞, 2) and concave up on (2, ∞).
### Section 1.3 Exercises
1. The table below gives the annual sales (in millions of dollars) of a product. What was the average rate of change of annual sales…
a) Between 2001 and 2002? b) Between 2001 and 2004?
| year | 1998 | 1999 | 2000 | 2001 | 2002 | 2003 | 2004 | 2005 | 2006 |
|---|---|---|---|---|---|---|---|---|---|
| sales | 201 | 219 | 233 | 243 | 249 | 251 | 249 | 243 | 233 |
2. The table below gives the population of a town, in thousands. What was the average rate of change of population…
a) Between 2002 and 2004? b) Between 2002 and 2006?
| year | 2000 | 2001 | 2002 | 2003 | 2004 | 2005 | 2006 | 2007 | 2008 |
|---|---|---|---|---|---|---|---|---|---|
| population | 87 | 84 | 83 | 80 | 77 | 76 | 75 | 78 | 81 |
3. Based on the graph shown, estimate the average rate of change from x = 1 to x = 4.
4. Based on the graph shown, estimate the average rate of change from x = 2 to x = 5.
Find the average rate of change of each function on the interval specified.
5. on [1, 5] 6. on [-4, 2]
7. on [-3, 3] 8. on [-2, 4]
9. on [-1, 3] 10. on [-3, 1]
Find the average rate of change of each function on the interval specified. Your answers will be expressions involving a parameter (b or h).
11. on [1, b] 12. on [4, b]
13. on [2, 2+h] 14. on [3, 3+h]
15. on [9, 9+h] 16. on [1, 1+h]
17. on [1, 1+h] 18. on [2, 2+h]
19. on [x, x+h] 20. on [x, x+h]
For each function graphed, estimate the intervals on which the function is increasing and decreasing.
21. 22.
23. 24.
For each table below, select whether the table represents a function that is increasing or decreasing, and whether the function is concave up or concave down.
25.
| x | f(x) |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 8 |
| 4 | 16 |
| 5 | 32 |
26.
| x | g(x) |
|---|---|
| 1 | 90 |
| 2 | 70 |
| 3 | 80 |
| 4 | 75 |
| 5 | 72 |
27.
| x | h(x) |
|---|---|
| 1 | 300 |
| 2 | 290 |
| 3 | 270 |
| 4 | 240 |
| 5 | 200 |
28.
| x | k(x) |
|---|---|
| 1 | 0 |
| 2 | 15 |
| 3 | 25 |
| 4 | 32 |
| 5 | 35 |
29.
| x | f(x) |
|---|---|
| 1 | -10 |
| 2 | -25 |
| 3 | -37 |
| 4 | -47 |
| 5 | -54 |
30.
| x | g(x) |
|---|---|
| 1 | -200 |
| 2 | -190 |
| 3 | -160 |
| 4 | -100 |
| 5 | 0 |
31.
| x | h(x) |
|---|---|
| 1 | -100 |
| 2 | -50 |
| 3 | -25 |
| 4 | -10 |
| 5 | 0 |
32.
| x | k(x) |
|---|---|
| 1 | -50 |
| 2 | -100 |
| 3 | -200 |
| 4 | -400 |
| 5 | -900 |
For each function graphed, estimate the intervals on which the function is concave up and concave down, and the location of any inflection points.
33. 34.
35. 36.
Use a graph to estimate the local extrema and inflection points of each function, and to estimate the intervals on which the function is increasing, decreasing, concave up, and concave down.
37. 38.
39. 40.
41. 42.
### Contributors
• David Lippman (Pierce College)
• Melonie Rasmussen (Pierce College)
|
Children in social care, FY2006 (published March 2007)
Of the children in this prefecture who cannot be raised by their own parents, 11.8% are raised in foster families; among the 62 prefectures and designated cities nationwide, this ranks 17th.
Foster placement versus institutional placement

| | Foster families | Children's homes | Infant homes | Total | Rank |
|---|---|---|---|---|---|
| This prefecture | 136 (11.8%) | 916 (79.6%) | 99 (8.6%) | 1,151 (100%) | 17 / 62 |
| Nationwide | 3,424 (9.4%) | 29,889 (82.3%) | 3,013 (8.3%) | 36,326 (100%) | |
Toward the 15% foster placement target

Under the national "Plan to Support Children and Child-Rearing," the Ministry of Health, Labour and Welfare set the target of raising the foster placement rate to 15.0% by FY2009. If every prefecture's own placement rate reached 15%, the national target would be met. The tables below show actual figures through FY2006 and projections for FY2007-FY2009.

This prefecture (actual):

| Fiscal year | Placement rate | Foster placements | Children's homes | Infant homes | Total |
|---|---|---|---|---|---|
| FY2002 | 6.6% | 69 | 917 | 66 | 1,052 |
| FY2003 | 8.5% | 91 | 895 | 86 | 1,072 |
| FY2004 | 8.0% | 86 | 904 | 88 | 1,078 |
| FY2005 | 11.1% | 127 | 927 | 86 | 1,140 |
| FY2006 | 11.8% | 136 | 916 | 99 | 1,151 |

This prefecture (projected):

| Fiscal year | Placement rate | Foster placements | Facility placements | Total |
|---|---|---|---|---|
| FY2007 | 13.0% | 153 | 1,026 | 1,178 |
| FY2008 | 14.1% | 170 | 1,035 | 1,205 |
| FY2009 | 15.2% | 187 | 1,045 | 1,232 |

Nationwide (actual):

| Fiscal year | Placement rate | Foster placements | Children's homes | Infant homes | Total |
|---|---|---|---|---|---|
| FY2002 | 7.4% | 2,517 | 28,983 | 2,689 | 34,189 |
| FY2003 | 8.1% | 2,811 | 29,134 | 2,746 | 34,691 |
| FY2004 | 8.4% | 3,022 | 29,809 | 2,934 | 35,765 |
| FY2005 | 9.1% | 3,293 | 29,850 | 3,008 | 36,151 |
| FY2006 | 9.4% | 3,424 | 29,889 | 3,013 | 36,326 |

Nationwide (projected):

| Fiscal year | Placement rate | Foster placements | Facility placements | Total |
|---|---|---|---|---|
| FY2007 | 10.0% | 3,702 | 33,442 | 37,145 |
| FY2008 | 10.4% | 3,932 | 33,786 | 37,718 |
| FY2009 | 10.9% | 4,161 | 34,130 | 38,291 |

Projected values were computed with Excel's TREND function.
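Excel's TREND function is an ordinary least-squares linear fit, so the nationwide rate projections can be reproduced with a short sketch (my own check, using numpy):

```python
# Fit a line to the FY2002-FY2006 national placement rates and extrapolate.
import numpy as np

years = np.array([2002, 2003, 2004, 2005, 2006])
rates = np.array([7.4, 8.1, 8.4, 9.1, 9.4])  # national placement rate, %

slope, intercept = np.polyfit(years, rates, 1)
for year in (2007, 2008, 2009):
    print(year, round(slope * year + intercept, 2))
# 2007 9.98, 2008 10.48, 2009 10.98 -- matching the table's
# 10.0%, 10.4%, 10.9% up to the page's rounding.
```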
Foster placement rates versus facility occupancy

Treating registered foster families as the potential capacity for foster placement, the placement rate to foster families can be compared with the occupancy rates of children's homes and infant homes. Even where facility occupancy is high, the placement rate to foster families is often low.

| | Registered foster families | Families with placements | Placement rate | Average children per family | Children placed |
|---|---|---|---|---|---|
| This prefecture | 251 | 91 | 36.3% | 1.5 | 136 |
| Nationwide | 7,882 | 2,453 | 31.1% | 1.4 | 3,424 |

| | Capacity | Residents | Occupancy |
|---|---|---|---|
| Children's homes (this prefecture) | 964 | 916 | 95.0% |
| Children's homes (nationwide) | 33,878 | 29,889 | 88.2% |
| Infant homes (this prefecture) | 109 | 99 | 90.8% |
| Infant homes (nationwide) | 3,742 | 3,013 | 80.5% |
2008/7/31 Update by sido ( http://foster-family.jp/ )
FY2007 data are scheduled for publication in December 2008.
|
# Price elasticity of demand and price elasticity of supply
How do quantities supplied and demanded react to changes in price?
## Key points
• Price elasticity measures the responsiveness of the quantity demanded or supplied of a good to a change in its price. It is computed as the percentage change in quantity demanded—or supplied—divided by the percentage change in price.
• Elasticity can be described as elastic—or very responsive—unit elastic, or inelastic—not very responsive.
• Elastic demand or supply curves indicate that the quantity demanded or supplied responds to price changes in a greater than proportional manner.
• An inelastic demand or supply curve is one where a given percentage change in price will cause a smaller percentage change in quantity demanded or supplied.
• Unitary elasticity means that a given percentage change in price leads to an equal percentage change in quantity demanded or supplied.
## What is price elasticity?
Both demand and supply curves show the relationship between price and the number of units demanded or supplied. Price elasticity is the ratio between the percentage change in the quantity demanded, Qd, or supplied, Qs, and the corresponding percent change in price.
The price elasticity of demand is the percentage change in the quantity demanded of a good or service divided by the percentage change in the price. The price elasticity of supply is the percentage change in quantity supplied divided by the percentage change in price.
Elasticities can be usefully divided into five broad categories: perfectly elastic, elastic, perfectly inelastic, inelastic, and unitary. An elastic demand or elastic supply is one in which the elasticity is greater than one, indicating a high responsiveness to changes in price. An inelastic demand or inelastic supply is one in which elasticity is less than one, indicating low responsiveness to price changes. Unitary elasticities indicate proportional responsiveness of either demand or supply.
Perfectly elastic and perfectly inelastic refer to the two extremes of elasticity. Perfectly elastic means the response to price is complete and infinite: a change in price results in the quantity falling to zero. Perfectly inelastic means that there is no change in quantity at all when price changes.
| If . . . | It Is Called . . . |
|---|---|
| % change in quantity / % change in price = ∞ | Perfectly elastic |
| % change in quantity / % change in price > 1 | Elastic |
| % change in quantity / % change in price = 1 | Unitary |
| % change in quantity / % change in price < 1 | Inelastic |
| % change in quantity / % change in price = 0 | Perfectly inelastic |
### Using the midpoint method to calculate elasticity
To calculate elasticity, instead of using simple percentage changes in quantity and price, economists sometimes use the average percent change in both quantity and price. This is called the Midpoint Method for Elasticity:
$\text{Midpoint elasticity} = \dfrac{(Q_2 - Q_1)\,/\,[(Q_2 + Q_1)/2]}{(P_2 - P_1)\,/\,[(P_2 + P_1)/2]}$
The advantage of the midpoint method is that we get the same elasticity between two price points whether there is a price increase or decrease. This is because the formula uses the same base for both cases. The midpoint method is referred to as the arc elasticity in some textbooks.
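As a minimal sketch, the midpoint method translates directly into code; it reproduces the demand calculation worked below:

```python
# Midpoint (arc) elasticity: percentage changes measured against the
# average of the two endpoints.
def midpoint_elasticity(q1, q2, p1, p2):
    pct_change_quantity = (q2 - q1) / ((q2 + q1) / 2)
    pct_change_price = (p2 - p1) / ((p2 + p1) / 2)
    return pct_change_quantity / pct_change_price

# Demand example below: price falls from $70 to $60, quantity rises
# from 2,800 to 3,000.
print(abs(midpoint_elasticity(2800, 3000, 70, 60)))  # ~0.448, i.e. the 0.45 below
```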
### Using the point elasticity of demand to calculate elasticity
A drawback of the midpoint method is that as the two points get farther apart, the elasticity value loses its meaning. For this reason, some economists prefer to use the point elasticity method. In this method, you need to know what values represent the initial values and what values represent the new values.
$\text{Point elasticity} = \dfrac{(\text{new } Q - \text{initial } Q)\,/\,\text{initial } Q}{(\text{initial } P - \text{new } P)\,/\,\text{initial } P}$
## Calculating price elasticity of demand
Let’s apply these formulas to a practice scenario. We'll calculate the elasticity between points A and B in the graph below.
The graph shows a downward sloping line that represents the price elasticity of demand.
Image credit: Figure 1 in "Price Elasticity of Demand and Price Elasticity of Supply" by OpenStaxCollege, CC BY 4.0
First, apply the formula to calculate the elasticity as price decreases from $70 at point B to $60 at point A:
$\begin{array}{ccc} \mathrm{\% ~ change in quantity} & = & \frac{3,000–2,800}{\left (3,000+2,800\right )/2}~ \times ~ 100\\ & = & \frac{200}{2,900}~ \times ~ 100\\ & = & 6.9\\ \mathrm{\% change in price} & = & \frac{60–70}{\left (60+70\right )/2}~ \times ~ 100\\ & = & \frac{–10}{65}~ \times ~ 100\\ & = & –15.4\\ \text{Price elasticity of demand} & = & \frac{~ ~ ~ ~ 6.9\% }{–15.4\% }\\ & = & 0.45 \end{array}$
The elasticity of demand between point A and point B is 6.9% / -15.4%, or 0.45. Because this amount is smaller than one, we know that the demand is inelastic in this interval.
This means that, along the demand curve between point B and point A, if the price changes by 1%, the quantity demanded will change by 0.45%. A change in the price will result in a smaller percentage change in the quantity demanded. For example, a 10% increase in the price will result in only a 4.5% decrease in quantity demanded. A 10% decrease in the price will result in only a 4.5% increase in the quantity demanded.
## Calculating the price elasticity of supply
Now let's try calculating the price elasticity of supply. We use the same formula as we did for price elasticity of demand:
$\begin{array}{ccc} \text{Price elasticity of supply} & = & \frac{\mathrm{\% ~ change ~in ~quantity}}{\mathrm{\% ~ change~ in ~price}} \end{array}$
Assume that an apartment rents for $650 per month and, at that price, 10,000 units are rented—you can see these numbers represented graphically below. When the price increases to $700 per month, 13,000 units are supplied into the market.
By what percentage does apartment supply increase? What is the price sensitivity?
The graph shows an upward sloping line that represents the supply of apartment rentals.
Image credit: Figure 2 in "Price Elasticity of Demand and Price Elasticity of Supply" by OpenStaxCollege, CC BY 4.0
We'll start by using the Midpoint Method to calculate percentage change in price and quantity:
$\begin{array}{ccc} \mathrm{\% ~ change~ in~ quantity} & = & \frac{13,000–10,000}{\left (13,000+10,000\right )/2}~ \times ~ 100\\ & = & \frac{3,000}{11,500}~ \times ~ 100\\ & = & 26.1\\ \mathrm{\% ~ change~ in~ price} & = & \frac{\ 700–\ 650}{\left (\ 700+\ 650\right )/2}~ \times ~ 100\\ & = & \frac{50}{675}~ \times ~ 100\\ & = & 7.4\end{array}$
Next, we take the results of our calculations and plug them into the formula for price elasticity of supply:
$\begin{array}{ccc} \mathrm{Price~ elasticity~ of~ supply} & = & \frac{\mathrm{\% change~ in~ quantity}}{\mathrm{\% change~ in~ price}}\\ & = & \frac{26.1}{7.4}\\ & = & 3.53 \end{array}$
Again, as with the elasticity of demand, the elasticity of supply is not followed by any units. Elasticity is a ratio of one percentage change to another percentage change—nothing more—and is read as an absolute value. In this case, a 1% rise in price causes an increase in quantity supplied of 3.5%. An elasticity of supply greater than one means that the percentage change in quantity supplied will be greater than the percentage change in price.
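A self-contained check of the same arithmetic:

```python
# Apartment supply example: price rises from $650 to $700, quantity
# supplied rises from 10,000 to 13,000, via the midpoint method.
pct_q = (13000 - 10000) / ((13000 + 10000) / 2)  # ~0.261
pct_p = (700 - 650) / ((700 + 650) / 2)          # ~0.074
print(pct_q / pct_p)  # ~3.52; the text's 3.53 comes from rounding
                      # the percentages (26.1 / 7.4) before dividing.
```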
## Summary
• Price elasticity measures the responsiveness of the quantity demanded or supplied of a good to a change in its price. It is computed as the percentage change in quantity demanded—or supplied—divided by the percentage change in price.
• Elasticity can be described as elastic—or very responsive—unit elastic, or inelastic—not very responsive.
• Elastic demand or supply curves indicate that the quantity demanded or supplied responds to price changes in a greater than proportional manner.
• An inelastic demand or supply curve is one where a given percentage change in price will cause a smaller percentage change in quantity demanded or supplied.
• Unitary elasticity means that a given percentage change in price leads to an equal percentage change in quantity demanded or supplied.
## Self-check questions
Using the data shown in the table below about demand for smart phones, calculate the price elasticity of demand from point B to point C, point D to point E, and point G to point H. Classify the elasticity at each point as elastic, inelastic, or unit elastic.
| Points | P | Q |
|---|---|---|
| A | 60 | 3,000 |
| B | 70 | 2,800 |
| C | 80 | 2,600 |
| D | 90 | 2,400 |
| E | 100 | 2,200 |
| F | 110 | 2,000 |
| G | 120 | 1,800 |
| H | 130 | 1,600 |
Using the data shown in the table below about supply of alarm clocks, calculate the price elasticity of supply from point J to point K, point L to point M, and point N to point P. Classify the elasticity at each point as elastic, inelastic, or unit elastic.
| Point | Price | Quantity Supplied |
|---|---|---|
| J | $8 | 50 |
| K | $9 | 70 |
| L | $10 | 80 |
| M | $11 | 88 |
| N | $12 | 95 |
| P | $13 | 100 |
## Review Questions
• What is the formula for calculating elasticity?
• What is the price elasticity of demand? Can you explain it in your own words?
• What is the price elasticity of supply? Can you explain it in your own words?
## Critical-thinking questions
• Transatlantic air travel in business class has an estimated price elasticity of demand of 0.40, less than transatlantic air travel in economy class, which has an estimated price elasticity of 0.62. Why do you think this is the case?
• What is the relationship between price elasticity and position on the demand curve? For example, as you move up the demand curve to higher prices and lower quantities, what happens to the measured elasticity? How would you explain that?
## Problems
• The equation for a demand curve is P = 48 - 3Q. What is the elasticity in moving from a quantity of 5 to a quantity of 6?
• The equation for a demand curve is P = 2/Q. What is the elasticity of demand as price falls from 5 to 4? What is the elasticity of demand as the price falls from 9 to 8? Would you expect these answers to be the same?
• The equation for a supply curve is 4P = Q. What is the elasticity of supply as price rises from 3 to 4? What is the elasticity of supply as the price rises from 7 to 8? Would you expect these answers to be the same?
• The equation for a supply curve is P = 3Q - 8. What is the elasticity in moving from a price of 4 to a price of 7?
## Want to join the conversation?
• Transatlantic air travel in business class has an estimated price elasticity of demand of 0.40, less than transatlantic air travel in economy class, which has an estimated price elasticity of 0.62. Why do you think this is the case? For me, I feel that it is because business flights have a higher degree of necessity compared to economy flights meant for leisure. As such, a change in airfare prices for business flights wouldn't impact the quantity demanded much, due to their higher indispensability.
• I would say that it is simply because business class travelers care less about the price, given that they are already not buying the cheapest option.
• What is the answer to critical question No.1
• The change in price has less impact on the preferences of people with higher incomes, who prefer business class. The price is not the most important criterion for these people, which is why any given percentage change in price will cause a smaller percentage change in quantity demanded.
The influence of a price change is bigger for people with lower incomes, who prefer economy class. The statistic shows that price has remarkable significance for these people: their purchasing power is less than that of people in business class, which is why price is a more important criterion for them. Additionally, despite the income gap between the consumers, the demand curve is inelastic for both classes in transatlantic air travel.
• Hey for the first set of elasticity, from going to G to H it says the % change in price is 7.81 but when I divide 10 by 125 I get 0.08 which is 8%, so am I doing it wrong or is the answer wrong?
• You're right. It's supposed to be 8%. The answer is wrong.
• to find the price elasticity of the entire demand curve, is it correct to find the elasticity for all points and find the average of those values?
• No, there is no such thing as a price elasticity of the entire curve. The price elasticity is different for every point.
• Here's my answers to the problems:
(1) E = |(1/5.5)/(-3/31.5)| = 1.91 Thus the demand curve is elastic here.
(2) E1 = |(0.1/0.45)/(-1/4.5)| = 1 and E2 = |[(1/36)/(17/72)]/(-1/8.5)| = 1, so the demand curve is unit elastic at both points. In fact, the elasticity at every point of this curve equals 1. Since P always equals 2/Q on this curve, ΔP = P(new) - P(old) = 2/Q(new) - 2/Q(old) = -2ΔQ/(Q(new)·Q(old)); similarly, P(average) = (Q(new) + Q(old))/(Q(new)·Q(old)). Thus %ΔP = -2ΔQ/(Q(new) + Q(old)), and since %ΔQ = 2ΔQ/(Q(new) + Q(old)), plugging into E = %ΔQ/%ΔP gives |E| = 1.
(3) The method is the same as above. E1 = E2 = 1.
(4) E = 0.41
Hope it helps! :)
• Should i use the absolute values for the change of quantity and price if i want to measure elasticity? Because if i don't most of the problems turn out to be inelastic since the Elasticity is smaller than 1. is that correct?
• You could use the absolute values for the change of quantity and price, as you said, or you could just do the absolute value of the result of your calculation.Both methods are correct.
• Another question: what happens if both demand and supply increase, but there is a larger increase in demand than in supply? How would the graph look? Please help me.
• In the example above (apartments), when calculating the price elasticity of supply, why do you also split the change in price by two?
According to the example the answer is 7.4 %, shouldn't it be 14.8 % ?
• Incorrect. He made a wrong input in the y value but did the math correctly. It was supposed to be 700 - 650, not 600.
• I'm confused on problem 2. Since the problem already gave me the price, I thought all I had to do was find the quantity demanded. So I rewrote the equation as 2P = Q. But with that I end up with the equation of a supply curve rather than a demand curve. Please help!
• P = 2/Q cannot be re-written as 2P = Q. That's incorrect algebra.
• Are there answers to the problems?
• The answers to the self-check questions are below the questions. The answers to the other questions aren't given.
|
# Part 1: A bipartisan list of people who are bad for America
Imagine that alien researchers visited America to learn about our political culture. If they wrote a report to send back to their planet, I imagine it would look something like this:
Americans have split themselves up into two opposing political tribes. Most people who associate with these groups are well-intentioned, but occasionally some members of a tribe do something bad or say something dumb. Whenever this happens, members of the opposite side feel good about themselves.
Certain writers and media personalities have learned to exploit this fact for personal gain. They have found that they can maximize their TV ratings and social media points by writing news stories that either cherry pick the worst actions of the other side or which interpret the other side’s actions in the least charitable way possible. As a result, news readers have developed increasingly distorted beliefs about their political opponents. The civic culture of the society is broken.
Below is a bipartisan list of people who are stoking partisan outrage for personal gain. Some of them do it for retweets, some of them do it for TV ratings, and some of them – still culpable – do it because they have entered a filter bubble themselves, fueling their own distorted and harmful sense of mission.
• Sean Davis (The Federalist) His Twitter account is deliberately uncharitable.
• Sopan Deb (New York Times) During the presidential campaign, his Twitter feed was nonstop, “look what this stupid Trump supporter said”.
• Stephen Miller (The Wilderness, ex-NRO)
• Chris Cillizza (The Washington Post)
• Sean Hannity (Fox News)
• Tucker Carlson (Fox News)
• Samantha Bee I know, she’s a comedian. I like jokes. But given how many people get their news from selectively edited comedy shows, it’s fair to say that comedians bear some responsibility.
• John Oliver (HBO) It pains me to include him on this list, since he is funny and since his show also includes some constructive policy explainers. But much of the content is selectively edited clips that paint a very distorted picture of the other side.
• Greg Gutfeld (Fox News comedian)
It doesn’t matter if some of the people on this list do accurate reporting. What matters is that their reporting is selective. It doesn’t matter if some of the people on this list support some good policy ideas. What matters is that listening to them will destroy your brain’s ability to understand where the other side is coming from. And it doesn’t matter if one side is more filter-bubbled than the other. Both sides are badly filter-bubbled. Avoiding the people in this list is a good place to start.
In Part 2, I’ll post a bipartisan list of people who argue in good faith.
# Three questions for social scientists: Internet virtue edition
This isn’t news to anybody, but the internet is changing our culture. Recently, I’ve been thinking about how it has changed our moral culture, and I realized that most of our beliefs on this topic are weirdly in tension with one another. Below are three questions that I feel are very much unresolved. I don’t have any good answers to them, and so I think they might be good topics for social science research.
#### 1. Moral Substitution and Moral Licensing versus Moral Contagion
When people do the Ice Bucket Challenge or put a Pride symbol on their profile avatar, they are sometimes accused of virtue signalling, a derogatory term akin to moral grandstanding. Virtue signallers are said to care more about showcasing their virtue than about creating real change.
Virtue signalling is bad, allegedly, for two reasons: First, instead of performing truly impactful moral acts, virtue signallers spend more time performing easy and symbolic acts. This could be called moral substitution. Second, after doing something good, people often feel like they’ve earned enough virtue points that they can get away with doing something bad. This well-studied phenomenon is called moral licensing.
While there are some clear ways that virtue signalling can be bad, there is another way in which it is good. Doing good things makes other people more likely to do good things. This process, known as moral contagion, was famously demonstrated in the Milgram experiments. Participants in those experiments who saw other participants behave morally were dramatically more likely to behave morally as well.
If the social science research is right, then we can conclude that putting a Pride symbol on your avatar make you behave worse (via moral licensing and moral substitution), but it makes other people behave better (via moral contagion). This leaves a couple of open questions:
First, how do the pros and cons balance out? Perhaps your Pride avatar is net positive if you have a large audience on Facebook, but net negative if you have a small audience. And second, how does moral contagion work with symbolic acts? Does the Pride avatar just make other people add Pride symbols to their avatars? Or does it make them behave more ethically in real and impactful ways?
We are beginning to get some quantitative answers to these questions. Clever research from Linda Skitka and others has shown that committing a moral act makes you about 40% less likely to commit another moral act later in the day (moral licensing), whereas hearing about someone else’s moral act makes you about 25% more likely to commit a moral act later in the day (moral contagion), although the latter finding fell short of statistical significance. More research is needed though, particularly when it comes to social media and symbolic virtue signalling.
#### 2. Slacktivism versus Violent Revolution
This question is more for political scientists.
Many people are concerned that the internet encourages slacktivism, a phenomenon closely related to moral substitution. It’s easier to slap a Pride symbol on your Facebook than to engage in real activism. In this way, the internet is really a tool of the already powerful.
On the other hand, some people are concerned that the internet cultivates violent radicalism. Online filter bubbles create anger and online networks create alliances, ultimately leading to violent rhetoric and homegrown terrorism. Many observers already sense the undercurrents of violent revolution.
How can we be worried that the internet is causing both slacktivism and violent radicalism? One possibility is that we only need to worry about slacktivism, and that the violent rhetoric isn’t actually violent – it’s just rhetoric. But the other possibility is that the internet has made slacktivists out of people who otherwise wouldn’t be doing anything at all, and it has made violent radicals out of people who would otherwise be mere activists. I’m not sure what the answer is, but it would be useful to understand this more.
#### 3. Political Correctness: Overton Windows versus Wolf Crying
Perhaps because of filter bubbles on both sides of the political spectrum, the term “political correctness” is back with a vengeance. Leaving aside the question of whether political correctness is good or bad, it would be interesting to understand whether it is effective. On the one hand, political correctness may help define an Overton Window, setting useful bounds around opinions that can be aired in polite company. But on the other hand, if the enforcers squeeze the boundaries too much, imposing stricter and stricter controls on the range of acceptable discourse, they risk undermining their own credibility by “crying wolf”. For what it’s worth, many internet trolls credit their success to a perception that so-called social justice warriors overplayed their cards. I’m not sure how much to believe them, but it seems possible.
Just in terms of effectiveness, is there a point at which political correctness starts to backfire? And more broadly, what is the optimal level of political correctness for a society? Neither of these questions seems easy to answer, but I would love to learn more.
# Keyboard shortcuts I couldn't live without
Keyboard shortcuts are interesting. Even though I know they are almost always worth learning, I often find myself shying away from the uncomfortable task of actually learning them. But after years of clumsily reaching for the mouse while my colleagues looked at me with a kindly sense of pity, I have slowly accumulated enough keyboard tricks that I’d like to share them. This set is probably far from optimal, and different people have their own solutions, but it has worked well for me. Before jumping in, here’s a reference table of key symbols and their common names:
| Key Symbol | Key Name |
|---|---|
| ⌘ | Command |
| ⇧ | Shift |
| ⌃ | Control |
| ⌥ | Alt/Option |
| ↩ | Enter |
Table 1. Key symbol map.
### Sublime Text
Sublime has lots of great shortcuts. My favorite is ⌘D, which allows you to sequentially select exact matches of highlighted text. Once the matches are selected, you can simultaneously edit them with a multi-cursor. If you want to select all matches simultaneously, rather than sequentially, you can use ⌃⌘G.
Figure 1. Demonstration of ⌘D, ⌘←, ⌘→, ⌘A and ⌘KU in Sublime Text.
### Chrome
With the exception of scrolling and link clicking, everything you do in Chrome should be done with keyboard only. If you’re new to this, I’d recommend starting with ⌘L, ⌘T, ⌘W and then expanding from there. Special bonus: if you ever accidentally close a tab, you can reopen it with ⌘⇧T.
Figure 2. The "No Touching" Chrome Zone. Your mouse should never come anywhere near here.
### Mac OS X
On Mac OS X, ⌘Tab switches between applications, and ⌘` switches between windows of the same application. The ⌘+ and ⌘- shortcuts change the display size of text and other items. You can jump to the beginning of a line with ⌘← or ⌃A, and to the end of a line with ⌘→ or ⌃E. In Terminal, you can delete to the beginning of a line with ⌃U.
For easy window snapping, use the Spectacle app. Because Spectacle’s default mappings conflict with Chrome’s tab switching shortcuts, I’d recommend setting the four main screen position shortcuts to ⌘⌃←, ⌘⌃→, ⌘⌃↑, ⌘⌃↓, and eliminating all the other shortcuts, except the full screen shortcut, which should be set to ⌘⌃F.
Figure 3. Arranging windows with custom shortcuts in Spectacle.
### Gmail
If you’re using a mouse on Gmail, you’re doing it wrong. With the exception of a few word processing operations, literally everything you do in Gmail should be done with keyboard only. Compose, Reply, Reply All, Forward, Send, Search, Navigate, Open, Inbox, Sent, Drafts, Archive. All of these should be done with the keyboard. To enable these shortcuts, you must go into your Settings and navigate to the General tab. Once shortcuts have been enabled, you can see a list of all them by typing ?.
Figure 4. A small sample of things you can do on Gmail without ever touching your mouse.
### Twitter
With shortcuts similar to Gmail’s, you can jump to different pages using only the keyboard: gh brings you to the Home Timeline and gu lets you jump to another user’s profile. The most useful shortcut is probably ., which loads any new tweets that are waiting for you at the top of the Timeline. You can see a list of all shortcuts by typing ?.
### JetBrains
JetBrains products like DataGrip, PyCharm, and IntelliJ offer plenty of keyboard shortcuts. My favorites are ⌃G, for sequential highlighting, and ⌥⌥↓ and ⌥⌥↑ for multi-line cursors.
### Jupyter
Jupyter has tons of essential keyboard shortcuts that can be found by typing ? while in command mode. In addition, it’s possible to get Sublime-style text editing by following the instructions described here.
Figure 5. Common Jupyter workflow done entirely with the keyboard, with help from some Sublime-style editing: c and v to copy and paste a cell, ⌘D for multiple selection, ⌘→ to jump to the end of line, dd to delete a cell.
# Learning by flip-flopping
I recently came across Artir’s Pyramid of Economic Insight and Virtue. It’s not actually a pyramid, but is instead a riff on the Expanding Brain meme. Check it out:
What’s interesting about Artir’s Pyramid is that at every step, the position flip-flops from the previous step. This isn’t just a dialogue between two sides. It is a description of the belief sequence that people traverse as they learn more about an issue. We might call this learning by flip-flopping.
This got me thinking: In what other issues do people go through a sequence of flip-flops as they learn more about it? In this blog post, I’d like to suggest a few.
Let me stress that in presenting these I don’t necessarily think that the “highest” levels in these examples are correct, nor do I think I have a strong understanding on many of these issues. It’s just something that’s fun to think about.
### Increasing the minimum wage
This is arguably a special case of Artir’s Pyramid and is probably the canonical example of learning by flip-flopping.
When it comes to the minimum wage debate and other debates, I often see a Stage 2 person talking to someone they believe is at Stage 1 but who is in fact at Stage 3.
Further reading on the minimum wage: Card and Krueger, criticism of Card and Krueger’s data, another case against Card and Krueger, two better studies.
### How to deal with a recession
Recession flip-flopping is less related to Artir’s Pyramid, but is still quite common. I may be bungling some of the later stages here, as my macro knowledge is mostly cobbled together from parody rap videos, so I welcome any suggestions for additional further reading.
Further reading on recessions: The government is not a household, Keynesian economics, boom and bust cycles, and a wonderful book by Tim Harford.
### Twitter's 140 character limit
I’d like to keep this blog post as value-judgment free as possible, but I’ll make a special exception for this one. The 140 character limit is no longer a good idea, and Stage 3 is the correct stage.
### The meaning of life
David Chapman writes about how STEM-trained people should think about meaning. Extending Robert Kegan’s theory of human development, he believes that most STEM-trained people can find meaning in ideological rationalism (Stage 4) but, upon finding that rationality does not provide any meaning, they become in danger of falling into the Nihilism trap (Stage 4.5). Chapman claims that there is a Stage 5, sometimes called meta-rationality or fluidity, in which meaning can once again be found. You can read more about it on his blog.
What other examples of learning by flip-flopping are out there?
UPDATE: John McDonnell points me towards Hegelian Dialectic.
# Empirical Bayes for multiple sample sizes
Here’s a data problem I encounter all the time. Let’s say I’m running a website where users can submit movie ratings on a continuous 1-10 scale. For the sake of argument, let’s say that the users who rate each movie are an unbiased random sample from the population of users. I’d like to compute the average rating for each movie so that I can create a ranked list of the best movies.
Take a look at my data:
Figure 1. Each circle represents a movie rating by a user. Diamonds represent sample means for each movie.
I’ve got two big problems here. First, nobody is using my website. And second, I’m not sure if I can trust these averages. The movie at the top of the rankings was only rated by two users, and I’ve never even heard of it! Maybe the movie really is good. But maybe it just got lucky. Maybe just by chance, the two users who gave it ratings happened to be the users who liked it to an unusual extent. It would be great if there were a way to adjust for this.
In particular, I would like a method that will give me an estimate closer to the movie’s true mean (i.e. the mean rating it would get if an infinite number of users rated it). Intuitively, movies with mean ratings at the extremes should be nudged, or “shrunk”, towards the center. And intuitively, movies with low sample sizes should be shrunk more than the movies with large sample sizes.
Figure 2. Each circle represents a movie rating by a user. Diamonds represent sample means for each movie. Arrows point towards shrunken estimates of each movie's mean rating. Shrunken estimates are obtained using the MSS James-Stein Estimator, described in more detail below.
As common a problem as this is, there aren’t a lot of accessible resources describing how to solve it. There are tons of excellent blog posts about the Beta-binomial distribution, which is useful if you wish to estimate the true fraction of an occurrence among some events. This works well in the case of Rotten Tomatoes, where one might want to know the true fraction of “thumbs up” judgments. But in my case, I’m dealing with a continuous 1-10 scale, not thumbs up / thumbs down judgments. The Beta-binomial distribution will be of little use.
Many resources mention the James-Stein Estimator, which provides a way to shrink the mean estimates only when the variances of those means can be assumed to be equal. That assumption usually only holds when the sample sizes of each group are equal. But in most real world examples, the sample sizes (and thus the variances of the means) are not equal. When that happens, it’s a lot less clear what to do.
After doing a lot of digging and asking some very helpful folks on Twitter, I found several solutions. For many of the solutions, I ran simulations to determine which worked best. This blog post is my attempt to summarize what I learned. Along the way, we’ll cover the original James-Stein Estimator, two extensions to the James-Stein Estimator, Markov Chain Monte Carlo (MCMC) methods, and several other strategies.
Before diving in, I want to include a list of symbol definitions I’ll be using because – side rant – it sure would be great if all stats papers did this, given that literally every paper I read used its own idiosyncratic notations! I’ll define everything again in the text, but this is just here for reference:
Symbol Definition
$k$ The number of groups.
$\theta_i$ The true mean of a group.
$x_i$ The sample mean of a group. The MLE estimate of $\theta_i$.
$\epsilon^{2}_i$ The true variance of observations within a group.
$\epsilon^2$ The true variance of observations within a group if we assume all groups have the same variance.
$s^{2}_i$ The sample variance of a group. The MLE estimate of $\epsilon^{2}_i$.
$n_i$ The number of observations in a group.
$n$ The number of observations in a group, if we assume all groups have the same size.
$\sigma^{2}_i$ The true variance of a group's mean. If each group has the same variance of observations, then $\sigma^{2}_i = \epsilon^{2} / n_i$. If each group has different variances of observations, then $\sigma^{2}_i = \epsilon^{2}_i / n_i$.
$\sigma^{2}$ Like $\sigma^{2}_i$, but if we assume all groups had the same variance of the mean. Equal to $\epsilon^{2} / n$.
$\hat{\sigma^{2}_i}$ Estimate of $\sigma^{2}_i$.
$\hat{\sigma^{2}}$ Estimate of $\sigma^{2}$.
$\mu$ The true mean of the $\theta_i$'s (the true group means). The mean of the distribution from which the $\theta_i$'s are drawn.
$\overline{X}$ The sample mean of the sample means.
$\tau^{2}$ The true variance of the $\theta_i$'s (the true group means). The variance of the distribution from which the $\theta_i$'s are drawn.
$\hat{\tau^{2}}$ Estimate of $\tau^{2}$.
$\hat{B}$ Estimate of the best term for weighting $x_i$ and $\overline{X}$ when calculating $\hat{\theta_i}$. Assumes each group has the same $\sigma^2$.
$\hat{B_i}$ Estimate of the best term for weighting $x_i$ and $\overline{X}$ when calculating $\hat{\theta_i}$. Does not assume that all groups have the same $\sigma^{2}_i$.
$\hat{\theta_i}$ Estimate of a true group mean. Its value depends on the method we use.
$k_{\Gamma}$ Shape parameter for the Gamma distribution from which sample sizes are drawn.
$\theta_{\Gamma}$ Scale parameter for the Gamma distribution from which sample sizes are drawn.
$\mu_v$ In simulations in which group observation variances $\epsilon^{2}_i$ are allowed to vary, this is the mean parameter of the log-normal distribution from which the $\epsilon^{2}_i$'s are drawn.
$\tau^{2}_v$ In simulations in which group observation variances $\epsilon^{2}_i$ are allowed to vary, this is the variance parameter of the log-normal distribution from which the $\epsilon^{2}_i$'s are drawn.
### Quick Analytic Solutions
Our goal is to find a better way of estimating $\theta_i$, the true mean of a group. A common theme in many of the papers I read is that good estimates of $\theta_i$ are usually weighted averages of the group’s sample mean $x_i$ and the global mean of all group means $\overline{X}$. Let’s call this weighting factor $\hat{B_i}$: $$\hat{\theta_i} = \left(1-\hat{B_i}\right) x_i + \hat{B_i} \overline{X}$$
This seems very sensible. We want something that is in between the sample mean (which is probably too extreme) and the mean of means. But how do we know what value to use for $\hat{B_i}$? Different methods exist, and each leads to different results.
Let’s start by defining $\sigma^{2}_i$, the true variance of a group’s mean. This is equivalent to $\epsilon^{2}_i / n_i$, where $\epsilon^{2}_i$ is the true variance of the observations within that group, and $n_i$ is the sample size of the group. According to the original James-Stein approach, if we assume that all the group means have the same known variance $\sigma^2$, which would usually only happen if the groups all had the same sample size, then we can define a common $\hat{B}$ for all groups as: $$\hat{B} = \frac{\left(k-3\right)\sigma^{2}}{\sum{(x_i - \overline{X})^2}}$$
This formula seems really weird and arbitrary, but it begins to make more sense if we rearrange it a bit and sweep that pesky $\left(k-3\right)$ under the rug and replace it with a $\left(k-1\right)$. Sorry hardliners! $$\hat{B} = \frac{\sigma^{2}}{\sum{(x_i - \overline{X})^2}/\left(k-1\right)} \approx \frac{\sigma^{2}}{\tau^{2} + \sigma^{2}}$$
Before getting to why this makes sense, I should explain the last step above. The denominator $\sum{(x_i - \overline{X})^2}/\left(k-1\right)$ is the observed variance of the observed sample means. This variance comes from two sources: $\tau^2$ is the true variance in the true means and $\sigma^2$ is the true variance caused by the fact that each $x_i$ is computed from a sample. Since variances add, the total variance of the observed means is $\tau^{2} + \sigma^{2}$.
Anyway, back to the result. This result is actually pretty neat. When we estimate a $\theta_i$, the weight that we place on the global mean $\overline{X}$ is the fraction of total variance in the means that is caused by within-group sampling variance. In other words, when the sample mean comes with high uncertainty, we should weight the global mean more. When the sample mean comes with low uncertainty, we should weight the global mean less. At least directionally, this formula makes sense. Later in this blog post, we’ll see how it falls naturally out of Bayes Law.
The James-Stein Estimator is so widely applicable that many other fields have discovered it independently. In the image processing literature, it is a special case of the Wiener Filter, assuming that both the signal and the additive noise are Gaussian. In the insurance world, actuaries call it the Bühlmann model. And in animal breeding, early researchers called it the Best Linear Unbiased Prediction or BLUP (technically the Empirical BLUP). The BLUP approach is so useful, in fact, that it has received the highly coveted endorsement of the National Swine Improvement Federation.
While the original James-Stein formula is useful, the big limitation is that it only works when we believe that all groups have the same $\sigma^2$. In cases where we have different sample sizes, each group will have its own $\sigma^{2}_i$. (Recall that $\sigma^{2}_i = \epsilon^{2}_i / n_i$.) We’re going to want to shrink some groups more than others, and the original James-Stein estimator does not allow this. In the following sections, we’ll look at a couple of extensions to the James-Stein estimator. These extensions have analogues in the Bühlmann model and BLUP literature.
#### The Multi Sample Size James-Stein Estimator
The most natural extension of James-Stein is to define each group’s $\hat{\sigma^{2}_i}$ as the squared standard error of the group’s mean. This allows us to estimate a weighting factor $\hat{B_i}$ tailored to each group. Let’s call this the Multi Sample Size James-Stein Estimator, or MSS James-Stein Estimator. $$\hat{B_i} = \frac{s^{2}_i/n_i}{\sum{(x_i - \overline{X})^2}/\left(k-1\right)}$$
The denominator can just be estimated as the variance across group sample means.
As reasonable as this approach sounds, it somehow didn’t feel totally kosher to me. But when I looked into the literature, it seemed like most researchers basically said “yup, that sounds pretty reasonable”.
To test this approach, I ran some simulations on 1000 artificial datasets. Each dataset involved 25 groups with sample sizes drawn from a Gamma distribution $\Gamma(k_{\Gamma}=1.5,\theta_{\Gamma}=10)$. True group means ($\theta_i$’s) were sampled from a Normal distribution $\mathcal{N}(\mu, \tau^2)$. Observations within each group were sampled from $\mathcal{N}(\theta_i, \epsilon^2)$, where $\epsilon$ was shared between groups.
For each dataset I computed the Mean Squared Error (MSE) between the vector of true group means and the vector of estimated group means. I then averaged the MSEs across datasets. This process was repeated for a variety of different values of $\epsilon$ and for two different estimators: The MSS James-Stein Estimator and the Maximum Likelihood Estimator (MLE). To compute the MLE, I just used $x_i$ as my $\hat{\theta_i}$ estimate.
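Here’s a minimal sketch of that simulation setup in Python. The parameter values for $\mu$, $\tau$, and $\epsilon$ below are illustrative placeholders, not necessarily the exact values behind the figures:

```python
import numpy as np

rng = np.random.default_rng(0)

def mss_james_stein(x, s2, n):
    """Shrink each sample mean x_i toward the grand mean by B_i = se_i^2 / var(x)."""
    k = len(x)
    total_var = ((x - x.mean()) ** 2).sum() / (k - 1)  # variance of the sample means
    B = np.clip((s2 / n) / total_var, 0.0, 1.0)
    return (1 - B) * x + B * x.mean()

def simulate_once(k=25, mu=5.0, tau=1.0, eps=3.0):
    n = np.maximum(rng.gamma(shape=1.5, scale=10.0, size=k), 2).astype(int)
    theta = rng.normal(mu, tau, size=k)                        # true group means
    groups = [rng.normal(theta[i], eps, size=n[i]) for i in range(k)]
    x = np.array([g.mean() for g in groups])                   # MLE estimates
    s2 = np.array([g.var(ddof=1) for g in groups])
    mse_mle = ((x - theta) ** 2).mean()
    mse_js = ((mss_james_stein(x, s2, n) - theta) ** 2).mean()
    return mse_mle, mse_js

results = np.array([simulate_once() for _ in range(1000)])
print("MLE MSE: %.3f, MSS James-Stein MSE: %.3f" % tuple(results.mean(axis=0)))
```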
Figure 3. Mean Squared Error between true group means and estimated group means. In this simulation, the $\epsilon$ parameter for within-group variance of observations is shared by all groups.
As expected, the MSS James-Stein Estimator outperformed the MLE, with lower MSEs particularly for high values of $\epsilon$. This makes sense. When the raw sample means are noisy, the MLE should be especially untrustworthy and it makes sense to pull extreme estimates back towards the global mean.
#### The Multi Sample Size Pooled James-Stein Estimator
One thing that’s a little weird about the MSS James-Stein Estimator is that even though we know all the groups should have the same within-group variance $\epsilon^2$, we still estimate each group’s standard error separately. Given what we know, it might make more sense to pool the data from all groups to estimate a common $\epsilon^2$. Then we can estimate each group’s $\sigma^{2}_i$ as $\epsilon^2 / n_i$. Let’s call this approach the MSS Pooled James-Stein Estimator.
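(The post doesn’t spell out the pooled estimate; the usual formula, written in this post’s notation, is the following.) $$s^{2}_p = \frac{\sum_i{\left(n_i - 1\right) s^{2}_i}}{\sum_i{\left(n_i - 1\right)}}, \qquad \hat{\sigma^{2}_i} = \frac{s^{2}_p}{n_i}$$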
Figure 4. Mean Squared Error between true group means and estimated group means. In this simulation, the $\epsilon$ parameter for within-group variance of observations is shared by all groups.
This works a bit better. By obtaining more accurate estimates of each group’s $\sigma^{2}_i$, we are able to find a more appropriate shrinking factor $B_i$ for each group.
Of course, this only works better because we created the simulation data in such a way that all groups have the same $\epsilon^2$. But if we run a different set of simulations, in which each group’s $\epsilon_i$ is drawn from a log-normal distribution $ln\mathcal{N}\left(\mu_v, \tau^{2}_v\right)$, we obtain the reverse results. The MSS James-Stein Estimator, which estimates a separate $\hat{\epsilon^{2}_i}$ for each group, does a better job than the MSS Pooled James-Stein Estimator. This makes sense.
Figure 5. Mean Squared Error between true group means and estimated group means. In this simulation, each group has its own variance parameter $\epsilon^2$ for the observations within the group. These parameters are sampled from a log-normal distribution $ln\mathcal{N}\left(\mu_v, \tau^{2}_v\right)$. For simplicity, the two parameters of this distribution are always set to be identical, and are shown on the horizontal axis.
Which method you choose should depend on whether you think your groups have similar or different variances of their observations. Here’s an interim summary of the methods covered so far.
#### Summary of analytic solutions
All of these estimators define $\hat{\theta_i}$ as a weighted average of the group sample mean $x_i$ and the mean of group sample means $\overline{X}$. $$\hat{\theta_i} = \left(1-\hat{B_i}\right) x_i + \hat{B_i} \overline{X}$$ Make sure to clip $\hat{B_i}$ to the range [0, 1].
1. Maximum Likelihood Estimation (MLE)
$\hat{B_i} = 0$
2. MSS James-Stein Estimator
$\hat{B_i} = \frac{s^{2}_i/n_i}{\sum\frac{(x_i - \overline{X})^2}{k-1}}$
where $s_i$ is the standard deviation of observations within a group.
3. MSS Pooled James-Stein Estimator
$\hat{B_i} = \frac{s^{2}_p/n_i}{\sum\frac{(x_i - \overline{X})^2}{k-1}}$
where $s^{2}_p$ is the pooled estimate of variance.
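Here’s my own compact sketch of these formulas in Python (not the linked implementations; it assumes `groups` is a list of 1-D NumPy arrays, one array of observations per group):

```python
import numpy as np

def shrunken_means(groups, pooled=False):
    """MSS James-Stein (pooled=False) or MSS Pooled James-Stein (pooled=True)."""
    x = np.array([g.mean() for g in groups])              # sample means
    n = np.array([len(g) for g in groups], dtype=float)
    k = len(groups)
    denom = ((x - x.mean()) ** 2).sum() / (k - 1)         # variance of the sample means
    if pooled:
        s2p = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - 1).sum()
        se2 = s2p / n                                     # pooled within-group variance
    else:
        se2 = np.array([g.var(ddof=1) for g in groups]) / n
    B = np.clip(se2 / denom, 0.0, 1.0)                    # clip B_i to [0, 1]
    return (1 - B) * x + B * x.mean()                     # B_i = 0 recovers the MLE
```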
Implementations in Python and R are available here.
#### A Bayesian interpretation of the analytic solutions
So far, the analytic approaches make sense directionally. As described above, our estimate of $\theta_i$ should be a weighted average of $x_i$ and $\overline{X}$, where the weight depends on the ratio of sample mean variance to total variance of the means.
But is this really the best weighting? Why use a ratio of variances instead of, say, a ratio of standard deviations? Why not use something else entirely?
It turns out this formula falls out naturally from Bayes Law. Imagine for a moment that we already know the prior distribution $\mathcal{N}\left(\mu, \tau^2\right)$ over the $\theta_i$’s. And imagine we know the likelihood function for a group mean is $\mathcal{N}\left(x_i, \epsilon^{2}_i/n_i\right)$. According to the Wikipedia page on conjugate priors, the posterior distribution for the group mean is itself a Gaussian distribution with mean: $$\frac{\frac{\mu}{\tau^2} + \frac{n_i x_i}{\epsilon^{2}_i}}{\frac{1}{\tau^2} + \frac{n_i}{\epsilon^{2}_i}}$$
(Note that the Wikipedia page uses the symbol ‘$x_i$’ to refer to observations, whereas this blog post will always use the term to refer to the sample mean, including in the equation above. Also note that Wikipedia refers to the variance of observations within a group as ‘$\sigma^2$’ whereas this blog post uses $\epsilon^{2}_i$.)
If we multiply all terms in the numerator and denominator by $\frac{\tau^2 \epsilon^{2}_i}{n_i}$, we get: $$\frac{x_i \tau^2 + \mu \frac{\epsilon^{2}_i}{n_i}}{\tau^2 + \frac{\epsilon^{2}_i}{n_i}} = \frac{x_i \tau^2 + \mu \sigma^{2}_i}{\tau^2 + \sigma^{2}_i}$$
Or equivalently, $$\theta_i = \left(1-B_i\right) x_i + B_i \mu$$ where $$B_i = \frac{\sigma^{2}_i}{\tau^{2} + \sigma^{2}_i}$$
This looks familiar! It is basically the MSS James-Stein estimator. The only difference is that in the pure Bayesian approach you must somehow know $\mu$, $\tau^2$, and $\sigma^{2}_i$ in advance. In the MSS James-Stein approach, you estimate those parameters from the data itself. This is the key insight in Empirical Bayes: Use priors to keep your estimates under control, but obtain the priors empirically from the data itself.
### Hierarchical Modeling with MCMC
In previous sections we looked at some analytic solutions. While these solutions have the advantage of being quick to calculate, they have the disadvantage of being less accurate than they could be. For more accuracy, we can turn to Hierarchical Model estimation using Markov Chain Monte Carlo (MCMC) methods. MCMC is an iterative process for approximate Bayesian inference. While it is slower than analytical approximations, it tends to be more accurate and has the added benefit of giving you the full posterior distribution. I’m not an expert in how it works internally, but this post looks like a good place to start.
To implement this, I first defined a Hierarchical Model of my data. The model is a description of how I think the data is generated: True means are sampled from a normal distribution, and observations are sampled from a normal distribution centered around the true mean of each group. Of course, I know exactly how my data was generated, because I was the one who generated it! The key thing to understand though is that the Hierarchical Model does not contain any information about the value of the parameters. It’s the MCMC’s job to figure that out. In particular, I used PyStan’s MCMC implementation to fit the parameters of the model based on my data, although I later learned that it would be even easier to use bambi.
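For the record, here’s a minimal sketch of what that looks like with PyStan (PyStan 2 API); the synthetic data and parameter values are placeholders, not the exact simulation settings:

```python
import numpy as np
import pystan

# Hypothetical synthetic data in place of the simulation datasets.
rng = np.random.default_rng(0)
K = 25
n = np.maximum(rng.gamma(1.5, 10.0, size=K), 2).astype(int)
theta_true = rng.normal(5.0, 1.0, size=K)
g = np.repeat(np.arange(1, K + 1), n)          # 1-based group index for Stan
y = rng.normal(theta_true[g - 1], 3.0)

model_code = """
data {
  int<lower=1> N;
  int<lower=1> K;
  int<lower=1, upper=K> g[N];   // group of each observation
  vector[N] y;
}
parameters {
  real mu;                      // mean of the true group means
  real<lower=0> tau;            // sd of the true group means
  real<lower=0> epsilon;        // shared within-group sd
  vector[K] theta;              // true group means
}
model {
  theta ~ normal(mu, tau);
  for (i in 1:N)
    y[i] ~ normal(theta[g[i]], epsilon);
}
"""

sm = pystan.StanModel(model_code=model_code)
fit = sm.sampling(data=dict(N=len(y), K=K, g=g, y=y), iter=2000, chains=4)
theta_hat = fit.extract()["theta"].mean(axis=0)   # posterior-mean estimates
```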
Figure 6. Mean Squared Error between true group means and estimated group means. In this simulation, the $\epsilon$ parameter for within-group variance of observations is shared by all groups.
For simulated data with shared $\epsilon^2$, MCMC did well, outperforming both the MSS James-Stein estimator and the MSS Pooled James-Stein estimator.
If you don’t care about speed and are willing to write the Stan code, then this is probably your best option. It’s also good to learn about MCMC methods, since they can be applied to more complicated models with multiple variables. But if you just want a quick estimate of group means, then one of the analytic solutions above makes more sense.
### Other solutions
There are several other solutions that I did not include in my simulations.
1. Regularization. Pick the set of $\hat{\theta_i}$’s that minimize $\sum{(\hat{\theta_i} - x_i)^2} + \lambda \sum{(\hat{\theta_i} - \overline{X})^2}$. Use cross-validation to choose the best $\lambda$. This will probably work pretty well, although it takes a bit more work and time than the analytic solutions described above (see the sketch after this list).
2. Mixed Models. Over in Mixed Models World, there’s a whole ’nother literature on how to shrink estimates depending on sample size. They are especially suited for the movie ratings situation because in addition to shrinking the means, they also can correct for rater bias. I don’t really understand all the math behind Mixed Models, but I was able to use lme4 to estimate group means in simulated data under the assumption that the group means are a random effect. This gave me slightly different results compared to the James-Stein / Empirical Bayes approach. I would love it if some expert who understands this could write an accessible and authoritative blog post on the differences between Mixed Models and Empirical Bayes. The closest I could find was this comment by David Harville.
3. Efron and Morris’ generalization of James-Stein to unequal sample sizes (Section 3 of their paper). I thought this paper was difficult to read. A more accessible presentation can be found at the end of this column. The Efron and Morris approach is a numerical solution that seemed to work reasonably well when I played around with it, but I didn’t take it very far. If you want to implement it, be sure to prevent any variance estimates from falling below zero. If one of them does, just set it to zero and then compute your estimates of the means. That being said, I feel like if you’re going to go with a numerical solution, you may as well just go with MCMC.
4. Double Shrinkage. When we think that different groups not only have different sample sizes, but also different $\epsilon_i$’s, we are faced with an interesting conundrum. As shown above, the MSS James-Stein Estimator outperforms the MSS Pooled James-Stein Estimator, because it computes $\hat{\epsilon_i}$’s specific to each group. However, these estimates of group variances are probably noisy! Just like we don’t trust the raw estimates of group sample means, why should we trust the raw estimates of group sample variances? One way to address this is to use Zhao’s Double Shrinkage Estimator, which not only shrinks the means, but also shrinks the variances.
5. Kleinman’s weighted moment estimator. Apparently this was motivated by groups of proportions (i.e. the Rotten Tomatoes case), but the estimator can be applied generally.
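Here’s the sketch of option 1 promised above. For a fixed $\lambda$ the minimizer has the closed form $(x_i + \lambda \overline{X}) / (1 + \lambda)$, so the only real work is picking $\lambda$; the split-half validation scheme below is my own simple stand-in for proper cross-validation:

```python
import numpy as np

def regularized_means(groups, lambdas=np.logspace(-2, 2, 25), n_splits=5, seed=0):
    """Shrink group means toward the grand mean with a validated penalty lambda."""
    rng = np.random.default_rng(seed)
    best_lam, best_err = lambdas[0], np.inf
    for lam in lambdas:
        err = 0.0
        for _ in range(n_splits):
            train, valid = [], []
            for obs in groups:
                mask = rng.random(len(obs)) < 0.5
                train.append(obs[mask] if mask.any() else obs)
                valid.append(obs[~mask] if (~mask).any() else obs)
            x = np.array([t.mean() for t in train])
            theta_hat = (x + lam * x.mean()) / (1 + lam)   # closed-form minimizer
            err += sum((theta_hat[i] - v.mean()) ** 2 for i, v in enumerate(valid))
        if err < best_err:
            best_lam, best_err = lam, err
    x = np.array([obs.mean() for obs in groups])
    return (x + best_lam * x.mean()) / (1 + best_lam)
```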
### Conclusion
Which method you choose depends on your situation. If you want a simple and computationally fast estimate, and if you don’t want to assume that the group variances $\epsilon^{2}_i$ are identical, I would recommend either the MSS James-Stein Estimator or the Double Shrinkage Estimator if you can get it to work. If you want a fast estimate and can assume all groups share the same $\epsilon^{2}$, I’d recommend the MSS Pooled James-Stein Estimator. If you don’t care about speed or code complexity, I’d recommend MCMC, Mixed Models, or regularization with a cross-validated penalty term.
### Acknowledgements
Special thanks to the many people who responded to my original question on Twitter, including: Sean Taylor, John Myles White, David Robinson, Joe Blitzstein, Manjari Narayan, Otis Anderson, Alex Peysakhovich, Nathaniel Bechhofer, James Neufeld, Andrew Mercer, Patrick Perry, Ronald Richman, Timothy Sweetster, Tal Yarkoni and Alex Coventry. Special thanks also to Marika Inhoff and Leo Pekelis for many discussions.
All code used in this blog post is available on GitHub.
# Average speed that you observe for the button as it falls
1. Feb 13, 2005
### ProBasket
You are at an airport watching people go by on a "people mover" conveyer belt that moves at a constant speed of 0.60m/s. As a person goes by, they drop a button from a height of 1.2m. What is the average speed that you observe for the button as it falls to the ground?
well i have:
x=1.2m
v=0.60 m/s
a = 0
so i'll use this formula:
V^2=v(0)^2 +2a(x-x(0))
well a is 0 so the right side cancels to zero.
and v(0) = 0.60
so that means the answer is 0.60m/s, but it's incorrect. the answer should be 2.5m/s. can someone tell me what i'm doing wrong?
2. Feb 13, 2005
### vsage
Where's the movement in the y direction?
3. Feb 13, 2005
### ProBasket
that's the whole question. what do you mean by in the y direction? isnt this just a 1-D problem?
4. Feb 13, 2005
### Gokul43201
Staff Emeritus
$$v_{avg} = \frac {\int _0 ^T [v_x^2(t) + v_y^2(t)]^{1/2}dt }{T}$$
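(A quick numerical check of this formula, assuming g = 9.8 m/s², which the thread doesn't state, reproduces the expected answer:)

```python
import numpy as np

vx, h, g = 0.60, 1.2, 9.8                  # belt speed, drop height, gravity
T = np.sqrt(2 * h / g)                     # fall time, about 0.49 s
t = np.linspace(0, T, 100001)
speed = np.sqrt(vx**2 + (g * t)**2)        # |v(t)| seen by the stationary observer
print(np.trapz(speed, t) / T)              # about 2.5 m/s
```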
## anonymous one year ago integration problem Heat Transfer
1. anonymous
Ill have to write out the problem. give me a tic
2. anonymous
$(\frac{ V*\rho*C }{ hA })\frac{ dT _{s} }{ dt }+T _{s}=T _{f}$
3. anonymous
Now volume, density and heat capacity are constant values but we need to integrate this to get $T _{s}-T _{f}=(T _{s}(0)-T _{f})\exp(\frac{ -t }{ \tau })$
4. anonymous
This is the case for a sphere being heated to a temperature Ts(0) and suddenly subjected to an airflow of constant temperature Tf (at t=0)
5. anonymous
to yield the last equation i've provided
6. anonymous
where tau is the time constant and t is time
7. anonymous
I figure doing this $(\frac{ 1 }{ T _{f}-T _{s} })dT _{s}=\frac{ hA }{ \rho*V*C _{s} }dt$
8. anonymous
any suggestions?
9. Astrophysics
What exactly is the problem?
10. anonymous
let me see if i can attach the file
11. ganeshie8
I figure doing this $(\frac{ 1 }{ T _{f}-T _{s} })dT _{s}=\color{red}{\frac{ hA }{ \rho*V*C _{s} }}dt$ lump them and call it $$\color{Red}{\tau}$$
12. anonymous
i need to integrate the last equation to obtain an expression for tau
13. anonymous
would it be reasonable to find the unit for the lumped bit and see if it has units of seconds?
14. ganeshie8
let's not even worry about the units because we're allowed to lump constants, and without that it is looking messy
15. anonymous
i'm only concerned with the theoretical notes section. I need to provide a detailed version of the theory behind this rather than just copying word for word in my report what they have already provided us with
16. anonymous
yep, alright ganeshie8. Lets say we will lump them to say they are tau
17. ganeshie8
if you prefer, call it with some other name like $$k$$ or something
18. anonymous
i'll stick with tau for the time being! cheers
19. ganeshie8
yeah good, you have already separated, integrate and finish it off ?
20. anonymous
let me write it out for my benifit
21. anonymous
$(\frac{ 1 }{ T _{f} -T _{s}})dT _{s}=\tau*dt$
22. anonymous
now how do we integrate the LHS, like what would our limits be? from Ts=Ts(0) to Ts=Ts?
23. ganeshie8
it should be $\int (\frac{ 1 }{ T _{f} -T _{s}})dT _{s}=\int \frac{1}{\tau}dt$ right ?
24. anonymous
yep!
25. ganeshie8
evaluate the integrals both sides
26. anonymous
why is it 1/tau?
27. ganeshie8
maybe lets lump it with some other name, $$k$$, because we don't know the expression for $$\tau$$ yet
28. anonymous
ok
29. ganeshie8
$(\frac{ 1 }{ T _{f}-T _{s} })dT _{s}=\color{red}{\frac{ hA }{ \rho*V*C _{s} }}dt$ lump them and call it $$\color{Red}{k}$$ $\int (\frac{ 1 }{ T _{f} -T _{s}})dT _{s}=\int k ~dt$
30. ganeshie8
evaluate the integrals and solve $$T_s$$
31. anonymous
would it be: $-\ln(T _{f}-T _{s})=kt$
32. ganeshie8
careful, $$T_s\gt T_f$$
33. ganeshie8
it should be : $-\ln |T _{f}-T _{s}|=kt+C$
34. anonymous
sorry, i should be a little more pedantic
35. ganeshie8
you will get wrong answer if you don't put the absolute bars
36. anonymous
$-|T _{f}-T _{s}|=e^{kt+C}$
37. ganeshie8
it should be : $-\ln |T _{f}-T _{s}|=kt+C$ $\ln |T _{f}-T _{s}|=-kt-C$
38. anonymous
so we deal with two cases then
39. ganeshie8
now rise both sides to power e
40. anonymous
yep
41. ganeshie8
there are no two cases because you knw that $$T_s\gt T_f$$, so $$|T_f-T_s| = T_s-T_f$$
42. anonymous
oh right, yea thats fair. so we have $|T _{f}-T _{s}|=-e ^{kt+C}=e ^{-kt-C}$
43. anonymous
But we can sub $T _{s}-T _{f}=|T _{f}-T _{s}|$
44. anonymous
right?
45. ganeshie8
Yes, because |3-4| = 4-3
46. anonymous
yep
47. anonymous
so i get that far, what would be the likely thing to do next?
48. anonymous
to make it look like equation 4
49. ganeshie8
isolate $$T_s$$
50. anonymous
$T _{s}=T _{f}-e ^{kt+C}$ $T _{s}=T _{f}+e ^{-kt-C}$
51. anonymous
im abit slow with the equation syntax sorry
52. ganeshie8
so the general solution of given differential equation is $T _{s}(t)=T _{f}+e ^{-\color{red}{k}t-C}$ where $$\large \color{Red}{k}=\color{red}{\frac{ hA }{ \rho*V*C _{s} }}$$
53. anonymous
why is that the only general solution
54. ganeshie8
but nobody leaves that arbitrary constant in the top, we can make it look better by substituting $$C_1=e^{-C}$$ $T _{s}(t)=T _{f}+C_1e ^{-\color{red}{k}t}$
55. anonymous
yep thats right
56. ganeshie8
plugin the initial condition and eliminate $$C_1$$
57. ganeshie8
say at $$t=0$$, the temperature of sphere is $$T_s(0)$$
58. ganeshie8
plugin $$t=0$$ in the general solution and solve $$C_1$$
59. anonymous
ahhh!!!
60. anonymous
so C1=Ts(0)-Tf
61. ganeshie8
Yes, compare that with the given solution and you can get an expression for $$\tau$$
62. anonymous
so by plugging in the initial condition we have $T _{s}-T _{f}=(T _{s}(0)-T _{f})e ^{-kt}$
63. anonymous
perhaps we should lump the bits in equation 3 as tau so that when we re-arrange it is in fact 1/tau
64. anonymous
thats the same idea hey
65. anonymous
so how do we then get an expression for tau by integrating? would it be $\int\limits_{?}^{?}(T _{s}-T _{f})dT _{s}=(T _{s(0)}-T _{f})\int\limits_{?}^{?}e ^{-kt}dt$
66. ganeshie8
Yes $$\large \tau = \frac{1}{\color{red}{k}} = \dfrac{1}{\color{red}{\frac{ hA }{ \rho*V*C _{s} }}}$$ doesn't really matter, they all are same
67. ganeshie8
we're done, don't need to integrate again
68. anonymous
oh whoops
69. Astrophysics
Yay, that was fun to watch :D
70. anonymous
just simply re-arrange
71. anonymous
yeah how epic was that!!!!
72. anonymous
that helps me so much
73. Astrophysics
Haha too bad I could only give one medal, good job @chris00 :)
74. anonymous
one thing, im not sure how proficient you are in energy balances, but would you think we need to construct the energy balance and simplify it to obtain equation 2 in the attached sheet
75. anonymous
it simply says for us to carry out the integration and obtain an expression for tau. i wouldn't think we need to carry out an energy balance. perhaps i'll just state it but focus on the integration as part of my report for that section
76. anonymous
thanks so much guys! moral support goes to astrophysics aha and well the medals obviously go to ganeshie8
77. ganeshie8
i think you know more about physics part of this than me, never got a chance to study the heat equation..
78. anonymous
haha, thanks but that integration saved me. You're the true hero today! cheers again
79. anonymous
this semester is quite jam-packed with maths i think. Im doing process control this semester so hopefully i'll be back here getting help with laplace transforms! whoop
80. Astrophysics
Could you not just solve for tau then?
81. anonymous
yea it should be $\frac{ -t }{ \ln \left( \frac{ T _{s}-T _{f} }{ T _{s(0)}-T _{f} } \right) }=\tau$
82. Astrophysics
Yes, you shouldn't have to integrate again, I think that's good
83. anonymous
yea, minor brain fade sorry haha
84. ganeshie8
$$\tau$$ is supposed to be a constant right
85. anonymous
the time constant, yes
86. anonymous
in seconds
87. ganeshie8
doesn't make much sense to express it as a weird function of Ts, t etc..
88. ganeshie8
$$\large \tau = \frac{1}{\color{red}{k}} = \dfrac{1}{\color{red}{\frac{ hA }{ \rho*V*C _{s} }}}$$ simplify and this should be good enough
89. anonymous
what do you mean?
90. ganeshie8
$$\tau$$ is not a function of time, it is a constant, doesn't vary over time
91. anonymous
so you think its silly to express it as a function of time then?
92. ganeshie8
they are not asking you to express it as a function of time
93. Astrophysics
Nice catch ganeshie
94. anonymous
but dont we simply re-arrange equation 4 to obtain an expression for tau
95. anonymous
thats what it says though
96. ganeshie8
why do you think below is not a good expression for $$\tau$$ ? $$\large \tau = \frac{1}{\color{red}{k}} = \dfrac{1}{\color{red}{\frac{ hA }{ \rho*V*C _{s} }}}$$
97. ganeshie8
thats not just expression, thats how we're defining $$\tau$$ after integrating.
98. anonymous
oh! I see what you mean
99. anonymous
i was slightly confused since we already assumed that tau was that lump of vairables
100. anonymous
right
101. ganeshie8
yeah ikr, it was a bad idea to use $$\tau$$ as our letter for lumped constants, but sooner we realized that and changed it to $$k$$ :)
102. anonymous
you would rather write equation 4 with the subject being time right? since we want to get a plot of ln (...) vs time
103. anonymous
yep i understand now what you mean by that
104. anonymous
thanks again!
105. anonymous
time to write it up now! cheers everyone
106. ganeshie8
$\large T _{s}(t) -T _{f}=[T _{s}(0)-T _{f}]e ^{-t/\tau}$ where $$\tau =\color{red}{\frac{ \rho*V*C _{s} }{hA }}$$
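(A quick numerical sanity check of this solution, with made-up values for the temperatures and tau:)

```python
import numpy as np
from scipy.integrate import solve_ivp

Tf, Ts0, tau = 25.0, 90.0, 40.0             # hypothetical temps (C) and tau (s)

# Rearranged balance: dTs/dt = (Tf - Ts) / tau
sol = solve_ivp(lambda t, Ts: (Tf - Ts) / tau, (0, 200), [Ts0],
                dense_output=True, rtol=1e-8)

t = np.linspace(0, 200, 5)
analytic = Tf + (Ts0 - Tf) * np.exp(-t / tau)
print(np.abs(sol.sol(t)[0] - analytic).max())   # ~0: numeric and analytic agree
```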
107. anonymous
thanks again mate
108. Astrophysics
That was awesome nice work both of you xD
109. anonymous
i'm glad you find it interesting! that was epic the last 30mins haha
# VQA-LOL: Visual Question Answering under the Lens of Logic
@article{Gokhale2020VQALOLVQ,
title={VQA-LOL: Visual Question Answering under the Lens of Logic},
author={Tejas Gokhale and Pratyay Banerjee and Chitta Baral and Yezhou Yang},
journal={ArXiv},
year={2020},
volume={abs/2002.08325}
}
Logical connectives and their implications on the meaning of a natural language sentence are a fundamental aspect of understanding. In this paper, we investigate whether visual question answering (VQA) systems trained to answer a question about an image, are able to answer the logical composition of multiple such questions. When put under this *Lens of Logic*, state-of-the-art VQA models have difficulty in correctly answering these logically composed questions. We construct an…
## Citations
MLP Architectures for Vision-and-Language Modeling: An Empirical Study
• Yi-Liang Nie, +6 authors Lijuan Wang
• Computer Science
ArXiv
• 2021
The first empirical study on the use of MLP architectures for vision-and-language (VL) fusion finds that without pre-training, using MLPs for multimodal fusion has a noticeable performance gap compared to transformers; however, VL pre-training can help close the performance gap; and suggests that MLPs can effectively learn to align vision and text features extracted from lower-level encoders without heavy reliance on self-attention.
Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models
• Computer Science
ArXiv
• 2021
Surprisingly, it is found that during dataset collection, non-expert annotators can easily attack SOTA VQA models successfully, revealing the fragility of these models while demonstrating the effectiveness of the adversarial dataset.
Discovering the Unknown Knowns: Turning Implicit Knowledge in the Dataset into Explicit Training Examples for Visual Question Answering
It is found that many of the “unknowns” to the learned VQA model are indeed “known” in the dataset implicitly, and a simple data augmentation pipeline SIMPLEAUG is presented to turn this “known” knowledge into training examples for VQA.
HySTER: A Hybrid Spatio-Temporal Event Reasoner
• Computer Science
ArXiv
• 2021
This work defines a method based on general temporal, causal and physics rules which can be transferred across tasks and applies it to the CLEVRER dataset and demonstrates state-of-the-art results in question answering accuracy.
Semantically Distributed Robust Optimization for Vision-and-Language Inference
• Computer Science
ArXiv
• 2021
SDRO† is presented, a model-agnostic method that utilizes a set of linguistic transformations in a distributed robust optimization setting, along with an ensembling technique to leverage these transformations during inference.
A Closer Look at the Robustness of Vision-and-Language Pre-trained Models
• Computer Science
ArXiv
• 2020
This work proposes Mango, a generic and efficient approach that learns a Multimodal Adversarial Noise GeneratOr in the embedding space to fool pre-trained V+L models, and enables universal performance lift for pre- trained models over diverse tasks designed to evaluate broad aspects of robustness.
WeaQA: Weak Supervision via Captions for Visual Question Answering
• Computer Science
FINDINGS
• 2021
This work presents a method to train models with synthetic Q-A pairs generated procedurally from captions, and demonstrates the efficacy of spatial-pyramid image patches as a simple but effective alternative to dense and costly object bounding box annotations used in existing VQA models.
Self-Supervised VQA: Answering Visual Questions using Images and Captions
• Computer Science
ArXiv
• 2020
This work presents a method to train models with procedurally generated Q-A pairs from captions using techniques, such as templates and annotation frameworks like QASRL, which surpass prior supervised methods on VQA-CP and are competitive with methods without object features in fully supervised setting.
RODA: Reverse Operation Based Data Augmentation for Solving Math Word Problems
• Computer Science
IEEE/ACM Transactions on Audio, Speech, and Language Processing
• 2022
A novel data augmentation method is proposed that reverses the mathematical logic of math word problems to produce new high-quality math problems and introduce new knowledge points that can benefit learning the mathematical reasoning logic.
Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering
• JianJian Cao, Xiameng Qin, Jianbing Shen
• Computer Science
ArXiv
• 2021
A Graph Matching Attention (GMA) network that not only builds graph for the image, but also constructsgraph for the question in terms of both syntactic and embedding information and achieves state-of-the-art performance on the GQA dataset and the VQA 2.0 dataset.