url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
---|---|---|---|
https://indico.ijclab.in2p3.fr/event/6175/
|
Pour vous authentifier, privilégiez eduGAIN / To authenticate, prefer eduGAIN
Thèses
# Anne MEYER, "Experimental study of the 13N(a,p)16O and 30P(p,g)31S reactions, and impact on extremely high 13C, 15N and 30Si isotopic abundances in presolar grains" (Pôle Physique Nucléaire)
Europe/Paris
100/-1-A900 - Auditorium Joliot Curie (IJCLab)
100
Show room on map
Description
Title of Defense
### Experimental study of the 13N(a,p)16O and 30P(p,g)31S reactions, and impact on extremely high 13C, 15N and 30Si isotopic abundances in presolar grains.
Abstract
Primitive meteorites contain several types of dust grains that condensed in different stellar environments and survived destruction in the early Solar System. The stellar sources where these presolar grains come from are identified through comparisons between measurements of isotopic abundances and predictions by stellar models. In this manuscript is presented a detailed analysis of two experiments performed at the ALTO facility, using the split-pole magnetic spectrometer, aiming at reducing the nuclear uncertainties associated to two reactions which rate uncertainty affects the synthesis of isotopes used to identify putative novae grains. These grains are characterised by extremely high 13C, 15N and 30Si isotopic abundances, but isotopic signatures found in a few grains indicate also a possible core-collapse supernovae (CCSN) origin. We first study the impact of the 13N(a,p)16O reaction rate uncertainty on 13C abundances predicted by recent CCSN models. We perform a re-evaluation of this reaction rate using a Monte Carlo approach to obtain meaningful statistical uncertainties. Alpha partial widths of states in the 17F compound nucleus are determined using the spectroscopic informations of the analog states in the 17O mirror nucleus that were measured using the 13C(7Li,t)17O alpha-transfer reaction. We then study the 30P(p,g)31S reaction, which is one of the few remaining reactions which rate uncertainty has a strong impact on classical novae model predictions, in particular for 30Si abundances. To reduce the nuclear uncertainties associated to this reaction, we studied the 31P(3He,t)31S reaction. Triton and proton decays from the populated states in 31S were detected simultaneously using the spectrometer and silicon strip detectors. The study of the angular correlations of proton decays is presented and branching ratios are extracted.
Organized by
Membres du jury :
Fairouz HAMMACHE, IJCLab, CNRS, Directeur de thèse
François DE OLIVEIRA, GANIL, Rapporteur
Georges MEYNET, Université de Genève, Rapporteur
Elias KHAN, IJCLab, Université Paris-Saclay, Président
Sandrine COURTIN, IPHC, CNRS, Examinateur
Nicolas DE SEREVILLE, IJCLab, Université Paris-Saclay, Examinateur
|
2022-08-14 22:05:34
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8056491613388062, "perplexity": 8869.36876325249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572077.62/warc/CC-MAIN-20220814204141-20220814234141-00087.warc.gz"}
|
https://ch.mathworks.com/help/reinforcement-learning/ug/train-dqn-agent-to-swing-up-and-balance-pendulum.html
|
# Train DQN Agent to Swing Up and Balance Pendulum
This example shows how to train a deep Q-learning network (DQN) agent to swing up and balance a pendulum modeled in Simulink®.
For more information on DQN agents, see Deep Q-Network (DQN) Agents. For an example that trains a DQN agent in MATLAB®, see Train DQN Agent to Balance Cart-Pole System.
### Pendulum Swing-up Model
The reinforcement learning environment for this example is a simple frictionless pendulum that initially hangs in a downward position. The training goal is to make the pendulum stand upright without falling over using minimal control effort.
Open the model.
```mdl = "rlSimplePendulumModel"; open_system(mdl)```
For this model:
• The upward balanced pendulum position is 0 radians, and the downward hanging position is `pi` radians.
• The torque action signal from the agent to the environment is from –2 to 2 N·m.
• The observations from the environment are the sine of the pendulum angle, the cosine of the pendulum angle, and the pendulum angle derivative.
• The reward ${\mathit{r}}_{\mathit{t}}$, provided at every timestep, is
`${\mathit{r}}_{\mathit{t}}=-\left({{\theta }_{\mathit{t}}}^{2}+0.1{\stackrel{˙}{{\theta }_{\mathit{t}}}}^{2}+0.001{\mathit{u}}_{\mathit{t}-1}^{2}\right)$`
Here:
• ${\theta }_{\mathit{t}}$ is the angle of displacement from the upright position.
• $\stackrel{˙}{{\theta }_{\mathit{t}}}$ is the derivative of the displacement angle.
• ${\mathit{u}}_{\mathit{t}-1}$ is the control effort from the previous time step.
### Create Environment Interface
Create a predefined environment interface for the pendulum.
`env = rlPredefinedEnv("SimplePendulumModel-Discrete")`
```env = SimulinkEnvWithAgent with properties: Model : rlSimplePendulumModel AgentBlock : rlSimplePendulumModel/RL Agent ResetFcn : [] UseFastRestart : on ```
The interface has a discrete action space where the agent can apply one of three possible torque values to the pendulum: –2, 0, or 2 N·m.
To define the initial condition of the pendulum as hanging downward, specify an environment reset function using an anonymous function handle. This reset function sets the model workspace variable `theta0` to `pi`.
`env.ResetFcn = @(in)setVariable(in,"theta0",pi,"Workspace",mdl);`
Get the observation and action specification information from the environment
`obsInfo = getObservationInfo(env)`
```obsInfo = rlNumericSpec with properties: LowerLimit: -Inf UpperLimit: Inf Name: "observations" Description: [0x0 string] Dimension: [3 1] DataType: "double" ```
`actInfo = getActionInfo(env)`
```actInfo = rlFiniteSetSpec with properties: Elements: [3x1 double] Name: "torque" Description: [0x0 string] Dimension: [1 1] DataType: "double" ```
Specify the simulation time `Tf` and the agent sample time `Ts` in seconds.
```Ts = 0.05; Tf = 20;```
Fix the random generator seed for reproducibility.
`rng(0)`
### Create DQN Agent
A DQN agent approximates the long-term reward, given observations and actions, using a parametrized Q-value function critic.
For DQN agents with a discrete action space, you have the option to create a vector (that is a multi-output) Q-value function critic, which is generally more efficient than a comparable single-output critic. A vector Q-value function is a mapping from an environment observation to a vector in which each element represents the expected discounted cumulative long-term reward when an agent starts from the state corresponding to the given observation and executes the action corresponding to the element number (and follows a given policy afterwards).
To model the Q-value function within the critic, use a deep neural network. The network must have one input layer (which receives the content of the observation channel, as specified by `obsInfo`) and one output layer (which returns the vector of values for all the possible actions). Note that `prod(obsInfo.Dimension)` returns the number of dimensions of the observation space (regardless of whether they are arranged as a row vector, column vector, or matrix, while `numel(actInfo.Elements)` returns the number of elements of the discrete action space.
Define the network as an array of layer objects.
```criticNet = [ featureInputLayer(prod(obsInfo.Dimension)) fullyConnectedLayer(24) reluLayer fullyConnectedLayer(48) reluLayer fullyConnectedLayer(numel(actInfo.Elements))];```
Convert to `dlnetwork` and display the number of weights.
```criticNet = dlnetwork(criticNet); summary(criticNet)```
``` Initialized: true Number of learnables: 1.4k Inputs: 1 'input' 3 features ```
View the critic network configuration.
`plot(criticNet)`
For more information on creating value functions that use a deep neural network model, see Create Policies and Value Functions.
Create the critic using `criticNet`, as well as observation and action specifications. For more information, see `rlVectorQValueFunction`.
`critic = rlVectorQValueFunction(criticNet,obsInfo,actInfo);`
Specify options for the critic optimizer using `rlOptimizerOptions`.
`criticOpts = rlOptimizerOptions(LearnRate=0.001,GradientThreshold=1);`
To create the DQN agent, first specify the DQN agent options using `rlDQNAgentOptions`.
```agentOptions = rlDQNAgentOptions(... SampleTime=Ts,... CriticOptimizerOptions=criticOpts,... ExperienceBufferLength=3000,... UseDoubleDQN=false);```
Then, create the DQN agent using the specified critic and agent options. For more information, see `rlDQNAgent`.
`agent = rlDQNAgent(critic,agentOptions);`
### Train Agent
To train the agent, first specify the training options. For this example, use the following options.
• Run each training for at most 1000 episodes, with each episode lasting at most 500 time steps.
• Display the training progress in the Episode Manager dialog box (set the `Plots` option) and disable the command line display (set the `Verbose` option to `false`).
• Stop training when the agent receives an average cumulative reward greater than –1100 over five consecutive episodes. At this point, the agent can quickly balance the pendulum in the upright position using minimal control effort.
• Save a copy of the agent for each episode where the cumulative reward is greater than –1100.
For more information, see `rlTrainingOptions`.
```trainingOptions = rlTrainingOptions(... MaxEpisodes=1000,... MaxStepsPerEpisode=500,... ScoreAveragingWindowLength=5,... Verbose=false,... Plots="training-progress",... StopTrainingCriteria="AverageReward",... StopTrainingValue=-1100,... SaveAgentCriteria="EpisodeReward",... SaveAgentValue=-1100);```
Train the agent using the `train` function. Training this agent is a computationally intensive process that takes several minutes to complete. To save time while running this example, load a pretrained agent by setting `doTraining` to `false`. To train the agent yourself, set `doTraining` to `true`.
```doTraining = false; if doTraining % Train the agent. trainingStats = train(agent,env,trainingOptions); else % Load the pretrained agent for the example. load("SimulinkPendulumDQNMulti.mat","agent"); end```
### Simulate DQN Agent
To validate the performance of the trained agent, simulate it within the pendulum environment. For more information on agent simulation, see `rlSimulationOptions` and `sim`.
```simOptions = rlSimulationOptions(MaxSteps=500); experience = sim(env,agent,simOptions);```
|
2023-03-23 07:35:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8683992624282837, "perplexity": 1454.084039008093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945030.59/warc/CC-MAIN-20230323065609-20230323095609-00467.warc.gz"}
|
http://www.math.muni.cz/aktuality/archiv-aktualit.html?start=8
|
Archiv aktualit
Záznam z habilitační přednášky Lenky Zalabové, Ph.D.
Přednáška se konala ve středu 11.3.2020 v 16.00 v posluchárně M1 - záznamy:
#### Symmetric spaces and their filtered generalizations
Abstract: We focus on the role of symmetries in geometry and geometric control theory. We introduce symmetric spaces as important examples of geometries with many symmetries and we study consequences of existence of special symmetries. Finally we study various generalizations of symmetric spaces.
Aktualizováno Pátek, 13 Březen 2020 13:34
Online seminář z algebry - 7.5.2020
Další seminář z algebry se koná 7.5.2020 od 13.00 online na platformě ZOOM. Informace pro připojení a další program semináře je zde.
#### Christian Espindola
Topos-theoretic completeness theorems
Abstrakt:
In this talk we will delve into the background details of the previous talk by introducing syntactic proof systems and their categorical semantics, including the construction of syntactic categories and $\kappa$-classifying toposes, as well as the role of certain properties of Grothendieck topologies and Kripke-Joyal semantics. We will then study some topos-theoretic completeness theorems for certain infinitary logics that generalize results of Deligne and Joyal.
Aktualizováno Pondělí, 04 Květen 2020 10:43
Více článků...
|
2020-07-04 05:46:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47333824634552, "perplexity": 5024.67988186865}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655884012.26/warc/CC-MAIN-20200704042252-20200704072252-00374.warc.gz"}
|
https://www.mathemania.com/lesson/naming-decimal-places/
|
# Naming decimal places
Naming decimal places can be defined as an expression of place value in words.
Apart from integers whose value has to be greater than “negative infinity” (symbol: $-\infty$) and smaller than “positive infinity” or just “infinity” (symbol: $+\infty$), there are also decimal numbers that represent the number of equal portions between two adjacent integers (adjacent integers are numbers that have been placed before and after the representing number). A decimal number is made of an integer part, placed on the left side of a decimal point, and a fractional part, placed on the right side of a decimal point. As a matter a fact, decimals are numbers which tells us how many parts of a whole we have. We use them to mark measure units of things that are not completely whole.
Decimals are numbers which tells us how many parts of a whole we have. We use them to mark measure units of things that are not completely whole.
Unit is a part of a decimal which tells us how many whole parts we have while mantissa tells us how many parts of a whole do we have left.
The decimal or a decimal number can be written as a fraction. As an example of a decimal number, let’s take the number $28.531$. Now, number $28$ (the left side of the decimal point) represents the integer part of the decimal number and $.531$ (the right side of the decimal point) represents the decimal part of the number. The $.531$ represents a value which is smaller than $1$, but larger than $0$ and can also be represented as a fraction $\frac{531}{1000}$. No matter how many digits there are after the decimal, their combined value is always less than $1$ but more than $0$. Thus, the value of the number $\ 28.531$ is greater than the value of the whole number $28$, but lesser than the value of the whole number $29$.
Naming decimal places plays an important role in the representation of the number as a whole. Since the decimal system we use is a positional numeric system, all of the digits in a decimal number is termed according to their position in respect to the decimal point and it is important to name the decimal places properly. The entire decimal system is completely based on number $10$ and all of the digits, before and after the decimal point, are defined in terms of ten because of that.
The digit placed furthest to the right of the decimal point has the smallest value. Hence, in the number $28.531$, the digit 5 is placed furthest from the decimal point and hence has the smallest value. The entire number can be defined as $2\cdot 10+8 \cdot 1+5 \cdot \frac{1}{10}+3 \cdot \frac{1}{100}+1 \cdot \frac{1}{1000}$. The number after the decimal point can be collectively pronounced as $6$ tenths, $4$ hundredths and $5$ thousandths or, more simply, as $645$ thousandths.
All the place values of the numbers depend on position on the left or right side of a decimal point. Look at the example with more digits.
Let’s take a look at, for example the number $1,987,654,321.123456$,
The first digit before the decimal point represents the ones (number $1$),
-the second stands for the tens (number $2$), the third for the hundreds (number $3$),
-the fourth for the thousands (number $4$, after the comma),
-the fifth for the ten thousands (number $5$),
-the sixth for the hundred thousands (number $6$),
-the seventh for the millions (number $7$, after the second comma),
-the eight for the ten millions (number $8$),
-the ninth for the hundred millions (number $9$) and
-the tenth for the billions (number $1$, after the third comma).
All digits after the decimal point are called decimals.
-The first digit represents tenths (number $1$),
-the second digit stands for the hundredths (number $2$),
-third for the thousandths (number $3$),
-fourth for the ten thousandths (number $4$),
-fifth for the hundred thousandths (number $5$),
-sixth for the millionths (number $6$)
There are larger and smaller place values, but these ones are used the most. It doesn’t matter how large the number of digits is, they can be read and understood with ease. Test the knowledge with worksheets.
Addition of decimal numbers is pretty much the same as the one with the whole numbers, only with a decimal point.
Example: Solve:
$1.5555 + 1.7$
Write one beneath the other, in a way that the decimal points align. This is the most important step; this is how you know which parts you’re adding.
As in any other addition you start from the left, and if you gain more than ten you simply transfer one to the other side. The same goes for the subtraction.
Multiplication
$2.56 \cdot 1.5$
First step in multiplication of two decimal numbers is to multiply each one of them with $10$, $100$, $1000$ and so on to get whole numbers.
For our example that would be $256 \cdot 15$
And this is something we know $2568 \cdot 150 = 3840$
The last step is to put the decimal point in place. Where would that be? This depends on the number of elements in mantissas of numbers you’re multiplying; their sum will be the number of elements in mantissa in their product.
$2.56$ – number of elements in mantissa is $2$
$1.5$ – number of elements in mantissa is $1$
$2.56 \cdot 1.5$ – numbers of elements in mantissa is $3$.
This means that in the number $3870$ we have to put the decimal point in the third place from the right.
This leads us to our final solution:
$2.56 \cdot 1.5 = 3.84$ (the last zero can be disregarded)
Second way to do the multiplication:
You multiply as you always multiplied, but just when you finish take down the decimal point:
### Division
Division of two decimal numbers is similar to division of two whole numbers; but there are few changes.
The quotient won’t change if you multiply each number with the same number. This means that you can transform your decimal numbers into whole numbers, and with them you already know how to calculate.
Example 1:
First you multiply it with $10$, $100$, $1000$ to get both numbers whole. In this example we’ll multiply both numbers with number $1000$:
$2.514 : 1.257$
$2514 : 1257 = 2$ and this is your solution.
Example 2:
$2.5 : 1.25 = ?$
$2.5\cdot 100$
$1.25\cdot 100$
$250 : 125 = 2$
Example 3:
How about when you have two whole numbers, but divisor is greater than dividend?
$1 : 2 = 0$
As you already know, one does not contain any two’s. this means that our decimal number will be something in a form of $0.*$
This is the point where you put a decimal place behind that zero, and then you simply add zeros to the left side and continue your division. We can do that because we know that every whole number can be written as a unit and infinitely many zeros behind the decimal point, which means that our one becomes $1.0000$ as many zeros we need.
$1 : 2 = 0.5$ (now one zero comes down)
Example 4:
$3 : 4 = ?$
Let’s explain decimals in an example.
You’re eating in a restaurant and order a pie. Now you have one whole pie.
How many pies do you have if you eat one half?
As you know you have one half, or $\frac{1}{2}$ pie, now you have to transform it into a decimal. Since we learned that this should be easy.
You have $0.5$ pie.
And if you buy another pie, how many pies in decimals do you have?
Now you have $\ 1 + 0.5 = 1.5$ pies.
Let’s remember how we could write any whole number using decomposition in thousands, hundreds, tens and ones.
For example number $2 554$ can be written as:
$2 554 = 2000 + 500 +50 + 4$
Which means that number $2 554$ contains two thousands, $5$ hundreds, $5$ tens and $4$ ones.
What if we try to do that with decimals?
For example, number 3.14 can be represented as $\ 3 \cdot 1 + \frac{1}{10} + \frac{4}{100}$
That is the rule:
Considering this, we can manipulate decimal numbers in any way we want.
Number $3.14$ can also be written as:
$3\cdot 1 + \frac{14}{100}$,
$\frac{314}{1000}$,
$\frac{3140}{10000}$…
How do you do that?
Any decimal number can be written as a fraction whose numerator and denominator are whole numbers. The easiest way to remember this is: as many decimal places does your number have, that’s how many zeros in a denominator you’ll have:
$3.14 = \frac{314}{100}$
$3.145 = \frac{3145}{1000}$
$31.5 = \frac{315}{10}$
$512.512 = \frac{512}{1000}$
### Comparing decimals
One decimal is greater than the other if one has greater value.
How would you know which one is greater?
You simply go by the decimals and the first different digit will tell you. If that digit is greater than the other one, than that whole number is greater.
Example: Compare two numbers.
$A = 1.23457$
$B = 1.23456$
You go digit one by one and see that first five digits are the same. But sixth digit in number $A$ is greater than the sixth digit in number $B$. that means that
$A > B$.
Example 2:
$C = 2.12345678954545$
$D = 1. 12345678954545$
Here we have no problem, because the numbers differ in the first digit which means that $C > D$
Example 3:
$E = 12.35478$
$F = 1.235478$
On first look you might thing these two numbers are the same, but be careful about the position of the decimal point. $E > F$.
### Decimals on the number line
Decimal numbers are, just like whole numbers, divided on the positive ones, and negative ones.
Positive decimal numbers are found on the right side of the point of origin, and negative ones on the left.
Between any two numbers on the number line lies infinitely many decimal numbers.
The safest way to be precise about placing a decimal number on the number line is to convert it into a fraction.
Example:
Place number $0.25$ on the number line.
As we already learned we can transform this number into a fraction:
$0.25 = \frac{25}{100}$
And we can shorten this fraction into $\frac{1}{4}$.
This means that this point is $\frac{1}{4}$ away from the point of origin to the right. First we’ll divide our segment from 0 to 1 into four parts and take the first dot.
Example 2:
Place number $1.2$ on the number line.
We’ll again transform it into a fraction: $1.2 = \frac{12}{10} = \frac{6}{5} = \frac{11}{5}$
This means that our number is located between 1 and 2, $\frac{1}{5}$ away from $1$.
Example 3:
Place number $- 2.45$ on a number line.
$-2.45 = – \frac{245}{100} = – \frac{49}{20} = – 1\frac{29}{20}$
For numbers with many decimals or decimal number that are not obvious, like fractions $\frac{1}{2}$ , $\frac{1}{4}$ and so on, you can use approximated place. For example, this number is very close to $-2,5$ or a fraction $-2 \frac{1}{2}$ so we’ll draw it close to it, but slightly to the right, because number $-2.45 > -2.5$.
## Naming decimal places worksheets
Naming decimal places
Ones - tens - hundreds - thousands (79.6 KiB, 1,659 hits)
Thousands - ten thousands - hundred thousands - millions (78.7 KiB, 1,070 hits)
Millions - ten millions - hundred millions - billions (83.2 KiB, 1,170 hits)
Ones - tenths - hundredths - thousandths (76.7 KiB, 1,586 hits)
Thousandths - ten thousandths - hundred thousandths - millionths (77.4 KiB, 1,040 hits)
Millionths - billionths (151.3 KiB, 1,103 hits)
Two positive decimals (178.5 KiB, 1,028 hits)
Three positive decimals (207.1 KiB, 961 hits)
Four positive decimals (244.3 KiB, 941 hits)
Two decimals (240.5 KiB, 971 hits)
Three decimals (296.2 KiB, 856 hits)
Four decimals (352.7 KiB, 851 hits)
Subtractions
Two positive decimals (84.6 KiB, 899 hits)
Three positive decimals (97.7 KiB, 827 hits)
Four positive decimals (105.8 KiB, 802 hits)
Two decimals (109.5 KiB, 853 hits)
Three decimals (130.8 KiB, 818 hits)
Four decimals (152.6 KiB, 917 hits)
Multiplication
Two positive decimals (59.0 KiB, 858 hits)
Three positive decimals (64.1 KiB, 876 hits)
Four positive decimals (68.1 KiB, 864 hits)
Two decimals (88.9 KiB, 886 hits)
Three decimals (104.6 KiB, 792 hits)
Four decimals (121.0 KiB, 794 hits)
Division
Two positive decimals (77.4 KiB, 984 hits)
Two decimals (130.0 KiB, 950 hits)
|
2020-06-06 08:16:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6900686621665955, "perplexity": 806.6536139939766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348511950.89/warc/CC-MAIN-20200606062649-20200606092649-00017.warc.gz"}
|
https://zbmath.org/?q=an:0722.14032
|
## Local properties of algebraic group actions.(English)Zbl 0722.14032
Algebraische Transformationsgruppen und Invariantentheorie, DMV Semin. 13, 63-75 (1989).
[For the entire collection see Zbl 0682.00008.]
Let G be a connected linear algebraic group and X be a normal G-variety over an algebraically closed field of characteristic zero. The authors give a new proof of the following result of Sumihiro:
Let $$Y\subset X$$ be an orbit in X. There is a finite-dimensional rational representation $$G\to GL(V)$$ and a G-stable open neighborhood U of Y in X which is G-equivariantly isomorphic to a G-stable locally closed subvariety of the projective space P(V).
The main technical ingredients are G-linearizations of line bundles and the study of the Picard group of a linear algebraic group.
### MSC:
14L30 Group actions on varieties or schemes (quotients) 20G05 Representation theory for linear algebraic groups 14C22 Picard groups
### Keywords:
Picard group of a linear algebraic group
Zbl 0682.00008
|
2022-08-08 16:49:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6021958589553833, "perplexity": 722.0643034889973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570868.47/warc/CC-MAIN-20220808152744-20220808182744-00736.warc.gz"}
|
https://runestone.academy/ns/books/published/APEX/sec_limit_analytically.html
|
# APEX Calculus
## Section1.3Finding Limits Analytically
In Section 1.1 we explored the concept of the limit without a strict definition, meaning we could only make approximations. In the previous section we gave the definition of the limit and demonstrated how to use it to verify our approximations were correct. Thus far, our method of finding a limit is
1. make a really good approximation either graphically or numerically, and
2. verify our approximation is correct using a $$\varepsilon$$-$$\delta$$ proof.
Recognizing that $$\varepsilon$$-$$\delta$$ proofs are cumbersome, this section gives a series of theorems which allow us to find limits much more quickly and intuitively.
Suppose that $$\lim_{x\to 2} f(x)=2$$ and $$\lim_{x\to 2} g(x) = 3\text{.}$$ What is $$\lim_{x\to 2}(f(x)+g(x))\text{?}$$ Intuition tells us that the limit should be $$5\text{,}$$ as we expect limits to behave in a nice way. The following theorem states that already established limits do behave nicely.
We apply the theorem to an example.
### Example1.3.3.Using basic limit properties.
Let
\begin{align*} \lim_{x\to 2} f(x)\amp=2\amp\lim_{x\to 2} g(x)\amp= 3\amp p(x)\amp = 3x^2-5x+7\text{.} \end{align*}
Find the following limits:
1. $$\displaystyle \lim\limits_{x\to 2}(f(x) + g(x))$$
2. $$\displaystyle \lim\limits_{x\to 2}(5f(x) + g(x)^2)$$
3. $$\displaystyle \lim\limits_{x\to 2}p(x)$$
Solution 1. Video solution
Solution 2.
1. Using the Sums/Differences property, we know that
\begin{align*} \lim_{x\to 2}(f(x) + g(x)) \amp = \lim_{x\to 2}f(x) + \lim_{x\to 2}g(x)\\ \amp = 2+3 =5\text{.} \end{align*}
2. Using the Scalar Multiples, Sums/Differences, and Powers properties, we find that
\begin{align*} \lim_{x\to 2}(5f(x) + g(x)^2) \amp = \lim_{x\to 2}(5f(x))+\lim_{x\to 2}(g(x)^2)\\ \amp = 5\lim_{x\to 2}f(x) + \mathopen{}\left(\lim_{x\to 2}g(x)\right)^2\mathclose{}\\ \amp = 5\cdot 2 + 3^2 = 19\text{.} \end{align*}
3. Here we combine the Powers, Scalar Multiples, Sums/Differences and Constants properties. We show quite a few steps, but in general these can be omitted:
\begin{align*} \lim_{x\to 2} p(x) \amp = \lim_{x\to 2}\left(3x^2-5x+7\right)\\ \amp = \lim_{x\to 2}\mathopen{}\left(3x^2\right)\mathclose{}-\lim_{x\to 2}(5x)+\lim_{x\to 2}7\\ \amp = 3\bigl(\lim_{x\to 2}x\bigr)^2-5\lim_{x\to 2}(x) +7\\ \amp = 3\cdot 2^2 - 5\cdot 2+7\\ \amp = 9 \end{align*}
Part c of the previous example demonstrates how the limit of a quadratic polynomial can be determined using the properties of Theorem 1.3.1. Not only that, recognize that
\begin{equation*} \lim_{x\to 2} p(x) = 9 = p(2); \end{equation*}
i.e., the limit at $$2$$ could have been found just by plugging $$2$$ into the function. This holds true for all polynomials, and also for rational functions (which are quotients of polynomials), as stated in the following theorem.
### Example1.3.6.Finding a limit of a rational function.
Using Theorem 1.3.4, find
\begin{equation*} \lim_{x\to -1} \frac{3x^2-5x+1}{x^4-x^2+3}\text{.} \end{equation*}
Solution.
Using Theorem 1.3.4, we can quickly state that
\begin{align*} \lim_{x\to -1}\frac{3x^2-5x+1}{x^4-x^2+3} \amp = \frac{3(-1)^2-5(-1)+1}{(-1)^4-(-1)^2+3}\\ \amp = \frac{9}{3} =3\text{.} \end{align*}
It was likely frustrating in Section 1.2 to do a lot of work with $$\varepsilon$$ and $$\delta$$ to prove that
\begin{equation*} \lim_{x\to 2} x^2 = 4 \end{equation*}
as it seemed fairly obvious. The previous theorems state that many functions behave in such an “obvious” fashion, as demonstrated by the rational function in Example 1.3.6.
Polynomial and rational functions are not the only functions to behave in such a predictable way. The following theorem gives a list of functions whose behavior is particularly “nice” in terms of limits. In Section 1.5, we will give a formal name to these functions that behave “nicely.”
### Example1.3.8.Evaluating limits analytically.
Evaluate the following limits.
1. $$\displaystyle \lim\limits_{x\to \pi} \cos(x)$$
2. $$\displaystyle \lim\limits_{x\to 3} \left(\sec^2(x) - \tan^2(x)\right)$$
3. $$\displaystyle \lim\limits_{x\to \pi/2}(\cos(x)\sin(x))$$
4. $$\displaystyle \lim\limits_{x\to 1} e^{\ln(x)}$$
5. $$\displaystyle \lim\limits_{x\to 0} \dfrac{\sin(x)}{x}$$
Solution.
1. This is a straightforward application of Theorem 1.3.7: $$\lim\limits_{x\to \pi} \cos(x) = \cos(\pi) = -1\text{.}$$
2. We can approach this in at least two ways. First, by directly applying Theorem 1.3.7, we have:
\begin{equation*} \lim_{x\to 3} \left(\sec^2(x) - \tan^2(x)\right) = \sec^2(3)-\tan^2(3)\text{.} \end{equation*}
Using the Pythagorean Theorem, this last expression is $$1\text{;}$$ therefore
\begin{equation*} \lim_{x\to 3} \left(\sec^2(x) - \tan^2(x)\right) = 1\text{.} \end{equation*}
We can also use the Pythagorean Theorem from the start.
\begin{equation*} \lim_{x\to 3} \left(\sec^2(x) - \tan^2(x)\right) = \lim_{x\to 3} 1 = 1\text{,} \end{equation*}
using the Constants rule. Either way, we find the limit is $$1\text{.}$$
3. Applying the Products rule and Theorem 1.3.7 gives
\begin{equation*} \lim\limits_{x\to \pi/2} \cos(x)\sin(x) = \cos(\pi/2)\sin(\pi/2) = 0\cdot 1 = 0\text{.} \end{equation*}
4. Again, we can approach this in two ways. First, we can use the exponential/logarithmic identity that $$e^{\ln(x)} = x$$ and evaluate $$\lim\limits_{x\to 1} e^{\ln(x)} = \lim\limits_{x\to 1} x = 1\text{.}$$
We can also use the Compositions rule. Using Theorem 1.3.7, we have $$\lim\limits_{x\to 1}\ln(x) = \ln(1) = 0$$ and $$\lim_{x\to 0} e^x= e^0=1\text{,}$$ satisfying the conditions of the Compositions rule. Applying this rule,
\begin{equation*} \lim_{x\to 1} e^{\ln(x)} = e^{\lim_{x\to 1} \ln(x)}=e^{\ln(1)} = e^0 = 1\text{.} \end{equation*}
Both approaches are valid, giving the same result.
5. We encountered this limit in Section 1.1. Applying our theorems, we attempt to find the limit as
\begin{equation*} \lim_{x\to 0}\frac{\sin(x)}{x}\rightarrow \frac{\sin(0) }{0}\text{,} \end{equation*}
which is of the form $$\frac{0}{0}\text{.}$$ This, of course, violates a condition of the Quotients rule, as the limit of the denominator is not allowed to be $$0\text{.}$$ Therefore, we are still unable to evaluate this limit with tools we currently have at hand.
Based on what we've done so far, this section could have been titled “Using Known Limits to Find Unknown Limits.” By knowing certain limits of functions, we can find limits involving sums, products, powers, etc., of these functions. We further the development of such comparative tools with the Squeeze Theorem, a clever and intuitive way to find the value of some limits.
Before stating this theorem formally, suppose we have functions $$f\text{,}$$ $$g\text{,}$$ and $$h$$ where $$g$$ always takes on values between $$f$$ and $$h\text{;}$$ that is, for all $$x$$ in an interval,
\begin{equation*} f(x) \leq g(x) \leq h(x)\text{.} \end{equation*}
If $$f$$ and $$h$$ have the same limit at $$c\text{,}$$ and $$g$$ is always “squeezed” between them, then $$g$$ must have the same limit as well. That is what the Squeeze Theorem states. This is illustrated in Figure 1.3.9.
It can take some work to figure out appropriate functions by which to “squeeze” a given function. However, that is generally the only place where work is necessary; the theorem makes the “evaluating the limit part” very simple.
The Squeeze Theorem can be used to show that limits of $$\sin(x)$$ can be done by direct substitution, as the videos in Figure 1.3.12 illustrate.
We use the Squeeze Theorem in the following example to finally prove that $$\lim\limits_{x\to 0} \frac{\sin(x)}{x} = 1\text{.}$$
### Example1.3.13.Using the Squeeze Theorem.
Use the Squeeze Theorem to show that
\begin{equation*} \lim_{x\to 0} \frac{\sin(x)}{x} = 1\text{.} \end{equation*}
Solution 1. Video solution
Solution 2.
We begin by considering the unit circle. Each point on the unit circle has coordinates $$(\cos(\theta),\sin(\theta))$$ for some angle $$\theta$$ as shown in Figure 1.3.14. Using similar triangles, we can extend the line from the origin through the point to the point $$(1,\tan(\theta))\text{,}$$ as shown. (Here we are assuming that $$0\leq \theta \leq \pi/2\text{.}$$ Later we will show that we can also consider $$\theta \leq 0\text{.}$$)
Figure 1.3.14 shows three regions have been constructed in the first quadrant, two triangles and a sector of a circle, which are also drawn below. The area of the large triangle is $$\frac{1}{2}\tan(\theta)\text{;}$$ the area of the sector is $$\theta/2\text{;}$$ the area of the triangle contained inside the sector is $$\frac{1}{2}\sin(\theta)\text{.}$$ It is then clear from Figure 1.3.15 that
\begin{equation*} \frac{\tan(\theta)}{2}\geq\frac{\theta}{2}\geq\frac{\sin(\theta)}{2}\text{.} \end{equation*}
(You may need to recall that the area of a sector of a circle is $$\frac{1}{2}r^2 \theta$$ with $$\theta$$ measured in radians.)
Multiply all terms by $$\frac{2}{\sin(\theta)}\text{,}$$ giving
\begin{equation*} \frac{1}{\cos(\theta)} \geq \frac{\theta}{\sin(\theta)} \geq 1\text{.} \end{equation*}
Taking reciprocals reverses the inequalities, giving
\begin{equation*} \cos(\theta) \leq \frac{\sin(\theta)}{\theta} \leq 1\text{.} \end{equation*}
(These inequalities hold for all values of $$\theta$$ near $$0\text{,}$$ even negative values, since $$\cos(-\theta) = \cos(\theta)$$ and $$\sin(-\theta) = -\sin(\theta)\text{.}$$)
Now take limits.
\begin{align*} \lim_{\theta\to 0} \cos(\theta)\amp \leq \lim_{\theta\to 0} \frac{\sin(\theta)}{\theta} \leq \lim_{\theta\to 0} 1\\ \cos(0) \amp\leq \lim_{\theta\to 0} \frac{\sin(\theta)}{\theta} \leq 1\\ 1\amp \leq \lim_{\theta\to 0} \frac{\sin(\theta)}{\theta} \leq 1 \end{align*}
Clearly this means that $$\lim\limits_{\theta\to 0} \frac{\sin(\theta)}{\theta}=1\text{.}$$
With the limit $$\lim\limits_{\theta\to 0} \frac{\sin(\theta)}{\theta}=1$$ finally established, we can move on to other limits involving trigonometric functions, as the video in Figure 1.3.16 demonstrates.
Two notes about the Example 1.3.13 are worth mentioning. First, one might be discouraged by this application, thinking “I would never have come up with that on my own. This is too hard!” Don't be discouraged; within this text we will guide you in your use of the Squeeze Theorem. As one gains mathematical maturity, clever proofs like this are easier and easier to create.
Second, this limit tells us more than just that as $$x$$ approaches $$0\text{,}$$ $$\sin(x)/x$$ approaches $$1\text{.}$$ Both $$x$$ and $$\sin(x)$$ are approaching $$0\text{,}$$ but the ratio of $$x$$ and $$\sin(x)$$ approaches $$1\text{,}$$ meaning that they are approaching $$0$$ in essentially the same way. Another way of viewing this is: for small $$x\text{,}$$ the functions $$y=x$$ and $$y=\sin(x)$$ are essentially indistinguishable.
We include this special limit, along with three others, in the following theorem.
A short word on how to interpret the latter three limits. We know that as $$x$$ goes to $$0\text{,}$$ $$\cos(x)$$ goes to $$1\text{.}$$ So, in the second limit, both the numerator and denominator are approaching $$0\text{.}$$ However, since the limit is $$0\text{,}$$ we can interpret this as saying that “$$\cos(x)$$ is approaching $$1$$ faster than $$x$$ is approaching $$0\text{.}$$
In the third limit, inside the parentheses we have an expression that is approaching $$1$$ (though never equaling $$1$$), and we know that $$1$$ raised to any power is still $$1\text{.}$$ At the same time, the power is growing toward infinity. What happens to a number near $$1$$ raised to a very large power? In this particular case, the result approaches Euler's number, $$e\text{,}$$ approximately $$2.718\text{.}$$
In the fourth limit, we see that as $$x\to 0\text{,}$$ $$e^x$$ approaches $$1$$ “just as fast” as $$x\to 0\text{,}$$ resulting in a limit of $$1\text{.}$$
The special limits stated in Theorem 1.3.17 are called indeterminate forms; in this case they are of the form $$0/0\text{,}$$ except the third limit, which is of a different form. You'll learn techniques to find these limits exactly using calculus in Section 6.7.
Our final theorem for this section will be motivated by the following example.
### Example1.3.18.Using algebra to evaluate a limit.
Evaluate the following limit:
\begin{equation*} \lim_{x\to 1}\frac{x^2-1}{x-1}\text{.} \end{equation*}
Solution.
We begin by attempting to apply Theorem 1.3.4 and substituting $$1$$ for $$x$$ in the quotient. This gives:
\begin{equation*} \lim_{x\to 1}\frac{x^2-1}{x-1} = \frac{1^2-1}{1-1} \end{equation*}
which is of the form $$\frac{0}{0}\text{,}$$ an indeterminate form. We cannot apply the theorem.
By graphing the function, as in Figure 1.3.19, we see that the function seems to be linear, implying that the limit should be easy to evaluate. Recognize that the numerator of our quotient can be factored:
\begin{equation*} \frac{x^2-1}{x-1} = \frac{(x-1)(x+1)}{x-1}\text{.} \end{equation*}
The function is not defined when $$x=1\text{,}$$ but for all other $$x\text{,}$$
\begin{align*} \frac{x^2-1}{x-1}\amp = \frac{(x-1)(x+1)}{x-1}\\ \amp = \frac{\cancel{(x-1)}(x+1)}{\cancel{(x-1)} }\\ \amp = x+1, \quad \text{if } x \neq 1 \end{align*}
Clearly $$\lim\limits_{x\to 1}(x+1) = 2\text{.}$$ Recall that when considering limits, we are not concerned with the value of the function at $$1\text{,}$$ only the value the function approaches as $$x$$ approaches $$1\text{.}$$ Since $$(x^2-1)/(x-1)$$ and $$x+1$$ are the same at all points except at $$x=1\text{,}$$ they both approach the same value as $$x$$ approaches $$1\text{.}$$ Therefore we can conclude that
\begin{align*} \lim_{x \to 1} \frac{x^2-1}{x-1} \amp =\lim_{x \to 1} (x+1)\\ \amp =2 \end{align*}
The key to Example 1.3.18 is that the functions $$y=(x^2-1)/(x-1)$$ and $$y=x+1$$ are identical except at $$x=1\text{.}$$ Since limits describe a value the function is approaching, not the value the function actually attains, the limits of the two functions are always equal.
The Fundamental Theorem of Algebra tells us that when dealing with a rational function of the form $$g(x)/f(x)$$ and directly evaluating the limit $$\lim\limits_{x\to c} \frac{g(x)}{f(x)}$$ returns “0/0”, then $$(x-c)$$ is a factor of both $$g(x)$$ and $$f(x)\text{.}$$ One can then use algebra to factor this binomial out, cancel, then apply Theorem 1.3.20. We demonstrate this once more.
### Example1.3.21.Evaluating a limit using Theorem 1.3.20.
Evaluate
\begin{equation*} \lim\limits_{x\to 3} \frac{x^3-2 x^2-5 x+6}{2 x^3+3 x^2-32 x+15}\text{.} \end{equation*}
Solution 1. Video solution
Solution 2.
We attempt to apply Theorem 1.3.4 by substituting $$3$$ for $$x\text{.}$$ This returns the familiar indeterminate form of “0/0”. Since the numerator and denominator are each polynomials, we know that $$(x-3)$$ is factor of each. Using whatever method is most comfortable to you, factor out $$(x-3)$$ from each (using polynomial division, synthetic division, a computer algebra system, etc.). We find that
\begin{equation*} \frac{x^3-2 x^2-5 x+6}{2 x^3+3 x^2-32 x+15} = \frac{(x-3)\left(x^2+x-2\right)}{(x-3)\left(2 x^2+9 x-5\right)}\text{.} \end{equation*}
We can cancel the $$(x-3)$$ factors as long as $$x\neq 3\text{.}$$ Using Theorem 1.3.20 we conclude:
\begin{align*} \lim_{x\to 3} \frac{x^3-2 x^2-5 x+6}{2 x^3+3 x^2-32 x+15} \amp = \lim_{x\to 3}\frac{(x-3)\left(x^2+x-2\right)}{(x-3)\left(2 x^2+9 x-5\right)}\\ \amp =\lim_{x\to 3} \frac{x^2+x-2}{2 x^2+9 x-5}\\ \amp = \frac{10}{40}\\ \amp = \frac{1}{4}\text{.} \end{align*}
### Example1.3.22.Evaluating a Limit with a Hole.
Evaluate
\begin{equation*} \lim\limits_{x\to 9} \frac{\sqrt{x}-3}{x-9}\text{.} \end{equation*}
Solution 1. Video solution
Solution 2.
We begin by trying to apply the Quotients limit rule, but the denominator evaluates to zero. In fact, this limit is of the indeterminate form $$0/0\text{.}$$ We will do some algebra to resolve the indeterminate form. In this case, we multiply the numerator and denominator by the conjugate of the numerator.
\begin{align*} \frac{\sqrt{x}-3}{x-9} \amp = \frac{\sqrt{x}-3}{x-9} \cdot \frac{\left(\sqrt{x}+3\right)}{\left(\sqrt{x}+3\right)}\\ \amp= \frac{x-9}{(x-9)(\sqrt{x}+3)} \end{align*}
We can cancel the $$(x-9)$$ factors as long as $$x\neq 9\text{.}$$ Using Theorem 1.3.20 we conclude:
\begin{align*} \lim_{x\to 9} \frac{\sqrt{x}-3}{x-9} \amp =\lim_{x\to 9} \frac{x-9}{(x-9)\left(\sqrt{x}+3\right)}\\ \amp = \lim_{x\to 9 }\frac{1}{\sqrt{x}+3}\\ \amp = \frac{1}{\lim_{x\to 9}\sqrt{x}+\lim_{x \to 9}3}\\ \amp = \frac{1}{\sqrt{\lim_{x\to 9}x}+3}\\ \amp = \frac{1}{\sqrt{3+3}}\\ \amp = \frac{1}{6}\text{.} \end{align*}
We end this section by revisiting a limit first seen in Section 1.1, a limit of a difference quotient. Let $$f(x) = -1.5x^2+11.5x\text{;}$$ we approximated the limit $$\lim\limits_{h\to 0}\frac{f(1+h)-f(1)}{h}\approx 8.5\text{.}$$ We formally evaluate this limit in the following example.
### Example1.3.23.Evaluating the limit of a difference quotient.
Let $$f(x) = -1.5x^2+11.5x\text{;}$$ find $$\lim\limits_{h\to 0}\frac{f(1+h)-f(1)}{h}\text{.}$$
Solution.
Since $$f$$ is a polynomial, our first attempt should be to employ Theorem 1.3.4 and substitute $$0$$ for $$h\text{.}$$ However, we see that this gives us “$$0/0\text{.}$$” Knowing that we have a rational function hints that some algebra will help. Consider the following steps:
\begin{align*} \lim_{h\to 0}\frac{f(1+h)-f(1)}{h}\amp = \lim_{h\to 0}\frac{-1.5(1+h)^2 + 11.5(1+h) - \left(-1.5(1)^2+11.5(1)\right)}{h}\\ \amp =\lim_{h\to 0}\frac{-1.5(1+2h+h^2) + 11.5+11.5h - 10}{h}\\ \amp =\lim_{h\to 0}\frac{-1.5h^2 +8.5h}{h}\\ \amp =\lim_{h\to 0}\frac{h(-1.5h+8.5)}h\\ \amp =\lim_{h\to 0}(-1.5h+8.5) \quad (\text{using } \knowl{./knowl/thm_limit_allbut1.html}{\text{Theorem 1.3.20}}, \text{ as } h\neq 0 )\\ \amp =8.5 \quad (\text{using } \knowl{./knowl/thm_poly_rat.html}{\text{Theorem 1.3.4}} ) \end{align*}
This matches our previous approximation.
This section contains several valuable tools for evaluating limits. One of the main results of this section is Theorem 1.3.7; it states that many functions that we use regularly behave in a very nice, predictable way. In Section 1.5 we give a name to this nice behavior; we label such functions as continuous. Defining that term will require us to look again at what a limit is and what causes limits to not exist.
### ExercisesExercises
#### Terms and Concepts
##### 1.
Explain in your own words, without using $$\varepsilon$$-$$\delta$$ formality, why $$\lim\limits_{x\to c}b=b\text{.}$$
##### 2.
Explain in your own words, without using $$\varepsilon$$-$$\delta$$ formality, why $$\lim\limits_{x\to c}x=c\text{.}$$
##### 3.
What does the text mean when it says that certain functions’ “behavior is ‘nice’ in terms of limits”? What, in particular, is “nice”?
##### 4.
Sketch a graph that visually demonstrates the Squeeze Theorem.
##### 5.
You are given the following information:
\begin{equation*} \begin{aligned} \lim_{x\to 1}f(x)\amp=0\amp\lim_{x\to 1}g(x)\amp=0\amp\lim_{x\to 1}\frac{f(x)}{g(x)}\amp=2 \end{aligned} \end{equation*}
What can be said about the relative sizes of $$f(x)$$ and $$g(x)$$ as $$x$$ approaches $$1\text{?}$$
##### 6.
• True
• False
$$\lim\limits_{x\to 1}\ln x = 0\text{.}$$
#### Problems
##### Exercise Group.
Use the following information to evaluate the given limit, when possible.
\begin{align*} \lim\limits_{x\to9}f(x)\amp=6\amp\lim\limits_{x\to6}f(x)\amp=9\amp f(9)\amp=6\\ \lim\limits_{x\to9}g(x)\amp=3\amp\lim\limits_{x\to6}g(x)\amp=3\amp g(6)\amp=9 \end{align*}
###### 7.
$$\lim\limits_{x\to 9}(f(x)+g(x))$$
###### 8.
$$\lim\limits_{x\to 9}\left(\frac{3f(x)}{g(x)}\right)$$
###### 9.
$$\lim\limits_{x\to 9}\left(\frac{f(x)-2g(x)}{g(x)}\right)$$
###### 10.
$$\lim\limits_{x\to 6}\left(\frac{f(x)}{3-g(x)}\right)$$
###### 11.
$$\lim\limits_{x\to 9}g(f(x))$$
###### 12.
$$\lim\limits_{x\to 6}f(g(x))$$
###### 13.
$$\lim\limits_{x\to 6}g(f(f(x)))$$
###### 14.
$$\lim\limits_{x\to 6}\left(f(x)g(x)-f(x)^2+g(x)^2\right)$$
##### Exercise Group.
Use the following information to evaluate the given limit, when possible. If it is not possible to determine the limit, state why not.
\begin{align*} \lim_{x\to1}f(x)\amp=2\amp\lim_{x\to10}f(x)\amp=1\amp f(1)\amp=1/5\\ \lim_{x\to1}g(x)\amp=0\amp\lim_{x\to10}g(x)\amp=\pi\amp g(10)\amp=\pi \end{align*}
###### 15.
$$\lim\limits_{x\to 1}\left(f(x)g(x)\right)$$
###### 16.
$$\lim\limits_{x\to 10}\cos(g(x))$$
###### 17.
$$\lim\limits_{x\to 1}g(5f(x))$$
###### 18.
$$\lim\limits_{x\to 1}5^{g(x)}$$
##### Exercise Group.
Evaluate the given limit.
###### 19.
$$\lim\limits_{x\to 6}\left({x^{2}-3x+5}\right)$$
###### 20.
$$\lim\limits_{x\to\pi}{\left(\frac{x-5}{x-8}\right)^{4}}$$
###### 21.
$$\lim\limits_{x\to {\frac{\pi }{6}}}\cos(x)\sin(x)$$
###### 22.
$$\lim\limits_{x\to6}{\frac{-\left(5x+2\right)}{x+4}}$$
###### 23.
$$\lim\limits_{x\to0}\ln(x)$$
###### 24.
$$\lim\limits_{x\to 2}{4^{x^{3}-2x}}$$
###### 25.
$$\lim\limits_{x\to {\frac{\pi }{3}}}\csc(x)$$
###### 26.
$$\lim\limits_{x\to0}{\ln\!\left(4+x\right)}$$
###### 27.
$$\lim\limits_{x\to\pi}{\frac{x^{2}-4x-2}{2x^{2}-2x+1}}$$
###### 28.
$$\lim\limits_{x\to\pi}{\frac{2x-4}{5x-5}}$$
###### 29.
$$\lim\limits_{x\to 5}{\frac{x^{2}-11x+30}{x^{2}-14x+45}}$$
###### 30.
$$\lim\limits_{x\to0}{\frac{x^{2}-7x}{x^{2}+2x}}$$
###### 31.
$$\lim\limits_{x\to 9}{\frac{x^{2}-x-72}{x^{2}-14x+45}}$$
###### 32.
$$\lim\limits_{x\to -8}{\frac{x^{2}+3x-40}{x^{2}+13x+40}}$$
###### 33.
$$\lim\limits_{x\to -6}{\frac{x^{2}+8x+12}{x^{2}+3x-18}}$$
###### 34.
$$\lim\limits_{x\to -4}{\frac{x^{2}+13x+36}{x^{2}+12x+32}}$$
##### Exercise Group.
Use the Squeeze Theorem to evaluate the limit.
###### 35.
$$\lim\limits_{x\to0}\left(x\sin\mathopen{}\left(\frac{1}{x}\right)\mathclose{}\right)$$
###### 36.
$$\lim\limits_{x\to0}\left(\sin(x)\cos\mathopen{}\left(\frac{1}{x^2}\right)\mathclose{}\right)$$
###### 37.
$$\lim\limits_{x\to1} f(x)\text{,}$$ where $$3x-2\leq f(x) \leq x^3$$
###### 38.
$$\lim\limits_{x\to3} f(x)\text{,}$$ where $$6x-9\leq f(x) \leq x^2$$
##### Exercise Group.
The following exercises challenge your understanding of limits but can be evaluated using the knowledge gained in Section 1.3.
###### 39.
$$\lim\limits_{x\to0}{\frac{\sin\!\left(8x\right)}{x}}$$
###### 40.
$$\lim\limits_{x\to0}{\frac{\sin\!\left(9x\right)}{8x}}$$
###### 41.
$$\lim\limits_{x\to0}\frac{\ln(1+x)}{x}$$
###### 42.
$$\lim\limits_{x\to0}\frac{\sin(x)}{x}\text{,}$$ where $$x$$ is measured in degrees, not radians.
##### 43.
Let $$f(x)=0$$ and $$g(x)=\frac{x}{x}\text{.}$$
1. Explain why $$\lim\limits_{x\to2}f(x)=0\text{.}$$
2. Explain why $$\lim\limits_{x\to0}g(x)=1\text{.}$$
3. Explain why $$\lim\limits_{x\to2} g(f(x))$$ does not exist.
4. Explain why the previous statement does not violate the Composition Rule of Theorem 1.3.1.
|
2023-02-07 04:59:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9750127792358398, "perplexity": 1058.2820797659483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500384.17/warc/CC-MAIN-20230207035749-20230207065749-00159.warc.gz"}
|
https://cambridgedistrictscoutarchive.com/20th-cambridge-district-outline/
|
# 20th Cambridge District: Outline
Cambridge District Scout Archive
Note
• Cambridge District troops are those outside Cambridge Town boundaries. Some troops altered from Cambridge District to Cambridge as the boundaries altered. This system ended in steps between 1928 and 1935 and all Groups used Cambridge numbers.
• Packs were not formally attached to troops until the move to the Group system in 1928.
• Many of these District troops are poorly recorded.
*
20th Cambridge District 1915
A 20th Cambridge District can only be assumed by mention of 19th and 21st in 1915. No name or location is named.
Histon Lone Patrol is listed in 20th place in the list of Jan 1923 but not given a number. It was registered as IHQ 9848 in 1922 but did not return census figures for 1923.
The only other known but un-numbered Cambridge District Troop is Six Mile Bottom.
JWR Archivist Jan 2023
|
2023-03-31 22:39:25
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8712318539619446, "perplexity": 9450.254885255448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949689.58/warc/CC-MAIN-20230331210803-20230401000803-00414.warc.gz"}
|
https://proxieslive.com/tag/exists/
|
## Why to use captcha when SOP exists? [closed]
I just read about SOP (same origin policy), and I tried to learn this more and learn how it works. I tried to reset my password (In a chat forum) using HTTP Requests in Python, and most of them have captcha, why do websites need captcha when SOP exists?
The SOP doesn’t allow other places to perform those tasks, right? And how can attackers bypass the SOP and try to reset other people’s passwords?
## What happens when a banished creature would return to an extradimensional space that no longer exists?
Consider the following scenarios.
### 1. Banished from a portable hole, portable hole is destroyed.
A portable hole is described as a ten foot deep, six foot diameter extradimensional space. Suppose I jump into my portable hole after spreading it out on the ground, and I am followed by an enemy. Once we are both inside my portable hole, I cast banishment:
If the target is native to a different plane of existence than the one you’re on, the target is banished with a faint popping noise, returning to its home plane. If the spell ends before 1 minute has passed, the target reappears in the space it left or in the nearest unoccupied space if that space is occupied. Otherwise, the target doesn’t return.
My enemy is banished to its home plane. Next, I climb out of my hole, get a safe distance away, and toss in my bag of holding:
Placing a bag of holding inside an extradimensional space created by a handy haversack, portable hole, or similar item instantly destroys both items and opens a gate to the Astral Plane.
The portable hole is destroyed, and finally I break my concentration on banishment before the full minute has passed.
### 2. Banished from a rope trick right before the spell ends.
Rope trick says:
an invisible entrance opens to an extradimensional space that lasts until the spell ends. […] Anything inside the extradimensional space drops out when the spell ends.
So I cast rope trick while I’m being chased, and my pursuer pursues me into my little rope trick room, where I am patiently holding banishment. I banish my pursuer, climb out of my rope trick room, and and cast dispel magic on the rope:
Choose one creature, object, or magical effect within range. Any spell of 3rd level or lower on the target ends.
Again, no space to return to as I break my concentration on banishment before the one minute is up.
What happens to the banished creature when banishment ends? Banishment is very specific that the creature returns to the space it left from. Both the actual extradimensional space and the 5 foot square space the creature previously occupied is gone, as well as all nearest unoccupied spaces. What happens?
## Show that for every language $A$, this language $B$ exists
I came across this problem that I could not figure out… For every language $$A$$, there is supposed to be a language $$B$$ such that:
$$A \leq_T B$$
but:
$$B \not \leq_T A$$
If it is $$A \leq_TB$$ and $$B \leq_T A$$, this is easy since we can just let $$B := \bar{A}$$, but for the above I could not think of anything. Any help ?
## Is this language L = {w $\in$ {a,b}$^*$ : ($\exists n \in \mathbb{N}$)[$w|_b = 5^n$]} regular?
Let’s say we have the language L = {w $$\in$$ {a,b}$$^*$$ : ($$\exists n \in \mathbb{N}$$)[$$w|_b = 5^n$$]}. I want to know if this is a regular language or not. How do I go about doing this? I’m familiar with the Myhill-Nerode theorem but I don’t know how to apply it.
## How are scientific research projects planned? In particular computer science, but possibly there exists processes for all research?
Possibly some context: Take any research endeavor. Finding a vaccine. Going to the moon. Clean energy. I don’t have a background in these scientific areas so picking one closer to home (Comp Sci) might be better. But the idea is how to battle the Unknowns? How to do it economically? How to make progress while not getting analysis paralysis?
Some might suggest a scrum, or an agile process try to solve these questions. I’m not certain that they address the same level or kinds of Unknowns.
Are there previous experiences that work, and those that don’t work? And why? The questions grows on the way forward through unknowns, and by definition the Unknown doesn’t exactly have a road map, and new context is developed regularly.
This maybe simply the question of the ‘meta’ variety: Is there research on comp sci research? If so does Comp Sci research have approaches or techniques that they rely on?
## Mysql, Getting all rows in which field ends in a specific character, and another field exists that is the same but doesn’t end in that character
I need to get all rows which end in a specific character, P for example, but in which a similar key sans the P exists. I have no idea how to approach this one in MySQL.
This is very small example, my actual data is huge with other columns also.
+------------+ | key | +------------+ | value_100 | | value_100P | | value_101 | | value_101 | | value_102 | | value_102P | | value_103P | | value_104P | +------------+
The query would output,
+------------+ | key | +------------+ | value_100P | | value_102P | +------------+
## Proving a certain primitive recursive function exists
Assume $$f\colon ω × ω → ω$$ is a computable function. How can we prove that there is a primitive recursive function $$g\colon ω × ω → ω$$ where the following holds:
$$∀n [∃s(f(n, s) = 1) ↔ ∃k(g(n, k) = 1)]$$
So for every $$n$$, there is an $$s$$ such that $$f(n, s) = 1$$ if and only if there is a $$k$$ such that $$g(n, k) = 1$$.
## Given a CFG $G=(V_N, V_T, R, S)$ and one of its nonterminals $v$ determine if there exists a production chain $S \Rightarrow^* v \alpha$?
I am supposed to find an algorithm solving the following problem:
Given a CFG $$\;G=(V_N, V_T, R, S)$$ and a nonterminal $$v \in V_N$$ determine if there exists a production chain $$S \Rightarrow^* v \alpha$$, where $$\alpha = (V_N + V_T)^*$$.
Not sure if that’s the right term, but in other words we are trying to check if you can yield $$v$$ from $$S$$ – the starting symbol.
I don’t know anything about the form of the grammar and I can’t convert it into Chomsky’s form as it would introduce new nonterminals and possibly remove $$v$$. Where do I start with this? Any suggestions?
Thanks
## Does a set of formal languages exist such that any DFA for it has $\Omega(c^k)$ states and there exists an NFA for it with $O(k)$ states?
Given an alphabet $$\Sigma : |\Sigma|=c$$, can a set of languages $$\{L_k\}$$ be created, such that any DFA for $$L_k$$ has $$\Omega(c^k)$$ states and a NFA for $$L_k$$ exists with $$O(k)$$ states?
I’m having trouble creating an $$L_k$$ such that any DFA for it has $$\Omega(c^k)$$ states. Is a language of strings with a suffix of $$s_k, |s_k|=k$$ such a language? Following is a draft proof of that.
Proof by contradiction: let a DFA $$\langle Q, \Sigma, \delta, q_0, F\rangle$$ have $$|Q|. Let $$a, b$$ be strings of length $$k$$ and $$a_k=(s_k)_1\not=b_k$$
Let $$q_a$$ and $$q_b$$ denote $$\delta(q_0, a)$$ and $$\delta(q_0, b)$$, respectively.
There are two cases:
I. there are no $$a,b$$ such that $$q_a=q_b$$. Then each string corresponds to a different state, but there are $$c^{k-1}$$ such strings, therefore $$|Q|\geq c^{k-1}$$, which is not possilbe.
II. There are $$a,b$$ such that $$q_a=q_b$$. Then $$\delta(q_a, s_2s_3\ldots s_k)=\delta(q_b, s_2s_3\ldots s_k)=q_c$$. $$as_2s_3\ldots s_k$$ should be accepted and $$bs_2s_3\ldots s_k$$ shouldn’t, therefore $$q_c$$ is both an accepting state and not an accepting state, which is not possible.
This seems to prove that any DFA for $$L_k$$ has at least $$c^{k-1}$$ nodes, which is sufficient for $$\Omega(c^k)$$. If my proof is correct, the only task left is to prove that a NFA containing $$O(k)$$ nodes exists for $$L_k$$.
The simplest way to do this is to create such a NFA, however I’m not sure how to do that. $$O(k)$$ suggests that $$i$$-th node should correspond to the state of “prefix of $$s$$ of length $$i$$ matches the suffix of the input string”, however I do not follow how such a NFA can be created.
## show that this decidable set $C$ exists
I came across this problem which says that given disjoint sets $$A$$ and $$B$$ s.t $$\bar{A}$$ and $$\bar{B}$$ are both computably enumerable (c.e.), there exists a decidable set $$C$$ s.t. $$A \subseteq C$$ and $$A \cap B = \emptyset$$.
I tried out an attempt, but there’s a part of the attempt of which am not so sure… So here’s my attempt:
Since both $$\bar{A}$$ and $$\bar{B}$$ are c.e. and the fact that $$A$$ and $$B$$ are disjoint, we have $$\bar{A}-\bar{B}=B$$. We claim that $$B$$ is also c.e. (and this is the part which am not sure of). To show that $$\bar{A}-\bar{B}=B$$ is c.e., we can have this enumerator $$E$$ for $$\bar{A}-\bar{B}$$:
$$E$$: ignore any input
enumerate words $$\{s_1,s_2,s_3,…,s_k\} \in \Sigma^*$$ lexicographically from $$k=1$$ to $$\infty$$
run recognizer $$M_{\bar{A}}$$ on input words $$\{s_1,s_2,s_3,…,s_k\}$$ for $$k$$ steps
run recognizer $$M_{\bar{B}}$$ on input words $$\{s_1,s_2,s_3,…,s_k\}$$ for $$k$$ steps
if both $$M_{\bar{A}}$$ and $$M_{\bar{B}}$$ accept an input word $$s \in \{s_1,s_2,s_3,…,s_k\}$$, print $$s$$
Assuming that $$E$$ correctly enumerates $$\bar{A}-\bar{B}$$, since $$B=\bar{A}-\bar{B}$$, we have that $$B$$ is c.e.. Since both $$B$$ and $$\bar{B}$$ are c.e., it follows that $$B$$ is decidable.
So to construct $$C$$, we can just set $$\bar{B}=C$$. It follows that $$A \subseteq C$$ and $$C$$ is decidable.
Is this attempt correct or am I missing something?
|
2020-08-13 02:47:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 104, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46301567554473877, "perplexity": 1456.342466701846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738950.61/warc/CC-MAIN-20200813014639-20200813044639-00514.warc.gz"}
|
https://projecteuclid.org/euclid.ecp
|
## Electronic Communications in Probability
The Electronic Communications in Probability (ECP) publishes short research articles in probability theory. Its sister journal, the Electronic Journal of Probability (EJP), publishes full-length articles in probability theory. Short papers, those less than 12 pages, should be submitted to ECP first. EJP and ECP share the same editorial board, but with different Editors in Chief.
EJP and ECP are free access official journals of the Institute of Mathematical Statistics (IMS) and the Bernoulli Society.
Author or publication fees are not required. Voluntary fees or donations to the Open Access Fund are accepted. Copyright for all articles in ECP is CC BY 4.0.
On multiplication in $q$-Wiener chaoses
|
2018-05-26 23:39:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39385801553726196, "perplexity": 7578.421685075183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867949.10/warc/CC-MAIN-20180526225551-20180527005551-00564.warc.gz"}
|
https://ncatlab.org/nlab/show/real+projective+space
|
Contents
Idea
The real projective space $\mathbb{R}P^n$ is the projective space of the real vector space $\mathbb{R}^{n+1}$.
Equivalently this is the Grassmannian $Gr_1(\mathbb{R}^{n+1})$.
Properties
Cell structure
Proposition
(CW-complex structure)
For $n \in \mathbb{N}$, the real projective space $\mathbb{R}P^n$ admits the structure of a CW-complex.
Proof
Use that $\mathbb{R}P^n \simeq S^n/(\mathbb{Z}/2)$ is the quotient space of the Euclidean n-sphere by the $\mathbb{Z}/2$-action which identifies antipodal points.
The standard CW-complex structure of $S^n$ realizes it via two $k$-cells for all $k \in \{0, \cdots, n\}$, such that this $\mathbb{Z}/2$-action restricts to a homeomorphism between the two $k$-cells for each $k$. Thus $\mathbb{R}P^n$ has a CW-complex structure with a single $k$-cell for all $k \in \{0,\cdots, n\}$.
Relation to classifying space
The infinite real projective space $\mathbb{R}P^\infty \coloneqq \underset{\longrightarrow}{\lim}_n \mathbb{R}P^n$ is the classifying space for real line bundles. It has the homotopy type of the Eilenberg-MacLane space $K(\mathbb{Z}/2,1) = B \mathbb{Z}/2$.
References
Last revised on June 11, 2017 at 10:44:58. See the history of this page for a list of all contributions to it.
|
2019-10-14 01:43:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 18, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8623980283737183, "perplexity": 226.9963869126581}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648481.7/warc/CC-MAIN-20191014003258-20191014030258-00449.warc.gz"}
|
https://journal.psych.ac.cn/acps/EN/10.3724/SP.J.1041.2016.00578
|
ISSN 0439-755X
CN 11-1911/B
Acta Psychologica Sinica ›› 2016, Vol. 48 ›› Issue (5): 578-587.
### The impact of interpersonal relationship on social responsibility
HUANG SiLin1; HAN MingYue2; ZHANG Mei2
1. (1 Institute of Developmental Psychology, Beijing Normal University, Beijing 100875, China) (2 Department of Psychology, Central University of Finance and Economics, Beijing 100081, China)
• Received:2015-08-10 Published:2016-05-25 Online:2016-05-25
• Contact: HUANG SiLin, E-mail: [email protected]
Abstract:
Social responsibility predicts job performance, academic achievement, personality, coping with frustration, self-acceptance and altruism. Previous studies mainly focused on social responsibility’s concept, psychological structure, influencing factors, and instruction. Findings from recent correlational studies suggest that interpersonal relationship influences social responsibility. This study examined the causal relationship between the two variables. Furthermore, prior results postulate that empathy may play an important mediating role in this causal mechanism. Therefore, the present study was designed to test the causal impact of interpersonal relationship on social responsibility and the mediating function of empathy in the association between the two variables. Three studies were conducted. Study 1 was a survey study testing the relationship between interpersonal relationship and social responsibility, and whether empathy played a mediating role, among 335 undergraduates. Study 2 tested the effect of the utility of relationship on social responsibility. 234 undergraduates were randomly assigned to three groups: high utility, low utility and control group. Based on the result of Study 2, Study 3 further manipulated the intimacy of relationship in which participants were asked to imagine a close friend or a classmate who was newly introduced, and examined the effect of utility and intimacy of relationship on social responsibility simultaneously. 192 undergraduates were randomly assigned to four groups classified by the utility (high vs. low) and intimacy (high vs. low) of relationship. The results showed that: (1) Interpersonal relationship correlated positively with social responsibility, and empathy acted as a partial mediating variable. (2) The manipulation of utility of relationship significantly impacted social responsibility. The high utility group showed significantly a higher level of social responsibility than did the low utility and the control group, and the low utility group showed an even lower level of social responsibility than did the control group. (3) The manipulation of intimacy of relationship also impacted social responsibility. In contrast to the low intimacy group, the high intimacy group showed significantly superior level of social responsibility. More importantly, the interaction between utility and intimacy of relationship was significant. Specifically, for the low intimacy group, those with high utility exhibited a significant higher level of social responsibility than did those with low utility. However, in the high intimacy group, no difference in social responsibility was found between the two levels of utility. In conclusion, the present study for the first time confirmed the causal impact of interpersonal relationship on social responsibility and the partial mediating role of empathy. The present results are consistent with the “pattern of difference sequence” account of social responsibility in China.
|
2023-01-27 04:57:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22696347534656525, "perplexity": 5626.988238581915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494936.89/warc/CC-MAIN-20230127033656-20230127063656-00688.warc.gz"}
|
https://cqcl.github.io/tket/pytket/api/optype.html
|
# pytket.circuit.OpType
Enum for available operations compatible with the tket Circuit class.
Warning
All parametrised OpTypes which take angles (e.g. Rz, CPhase, FSim) expect parameters in multiples of pi (half-turns). This may differ from other quantum programming tools you have used, which have specified angles in radians, or perhaps even degrees. Therefore, for instance circuit.add_gate(OpType.Rx, 1, [0]) is equivalent in terms of the unitary to circuit.add_gate(OpType.X, [0])
class pytket.circuit.OpType
Enum for available operations compatible with tket Circuit s.
Members:
Z : Pauli Z: $$\left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right]$$
X : Pauli X: $$\left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right]$$
Y : Pauli Y: $$\left[ \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right]$$
S : $$\left[ \begin{array}{cc} 1 & 0 \\ 0 & i \end{array} \right] = \mathrm{U1}(\frac12)$$
Sdg : $$\mathrm{S}^{\dagger} = \left[ \begin{array}{cc} 1 & 0 \\ 0 & -i \end{array} \right] = \mathrm{U1}(-\frac12)$$
T : $$\left[ \begin{array}{cc} 1 & 0 \\ 0 & e^{i\pi/4} \end{array} \right] = \mathrm{U1}(\frac14)$$
Tdg : $$\mathrm{T}^{\dagger} = \left[ \begin{array}{cc} 1 & 0 \\ 0 & e^{-i\pi/4} \end{array} \right] = \mathrm{U1}(-\frac14)$$
V : $$\frac{1}{\sqrt 2} \left[ \begin{array}{cc} 1 & -i \\ -i & 1 \end{array} \right] = \mathrm{Rx}(\frac12)$$
Vdg : $$\mathrm{V}^{\dagger} = \frac{1}{\sqrt 2} \left[ \begin{array}{cc} 1 & i \\ i & 1 \end{array} \right] = \mathrm{Rx}(-\frac12)$$
SX : $$\frac{1}{2} \left[ \begin{array}{cc} 1 + i & 1 - i \\ 1 - i & 1 + i \end{array} \right] = e^{\frac{i\pi}{4}}\mathrm{Rx}(\frac12)$$
SXdg : $$\mathrm{SX}^{\dagger} = \frac{1}{2} \left[ \begin{array}{cc} 1 - i & 1 + i \\ 1 + i & 1 - i \end{array} \right] = e^{\frac{-i\pi}{4}}\mathrm{Rx}(-\frac12)$$
H : Hadamard gate: $$\frac{1}{\sqrt 2} \left[ \begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array} \right]$$
Rx : $$(\alpha) \mapsto e^{-\frac12 i \pi \alpha \mathrm{X}} = \left[ \begin{array}{cc} \cos\frac{\pi\alpha}{2} & -i\sin\frac{\pi\alpha}{2} \\ -i\sin\frac{\pi\alpha}{2} & \cos\frac{\pi\alpha}{2} \end{array} \right]$$
Ry : $$(\alpha) \mapsto e^{-\frac12 i \pi \alpha \mathrm{Y}} = \left[ \begin{array}{cc} \cos\frac{\pi\alpha}{2} & -\sin\frac{\pi\alpha}{2} \\ \sin\frac{\pi\alpha}{2} & \cos\frac{\pi\alpha}{2} \end{array} \right]$$
Rz : $$(\alpha) \mapsto e^{-\frac12 i \pi \alpha \mathrm{Z}} = \left[ \begin{array}{cc} e^{-\frac12 i \pi\alpha} & 0 \\ 0 & e^{\frac12 i \pi\alpha} \end{array} \right]$$
U1 : $$(\lambda) \mapsto \mathrm{U3}(0, 0, \lambda) = e^{\frac12 i\pi\lambda} \mathrm{Rz}(\lambda)$$. U-gates are used by IBM. See https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.html for more information on U-gates.
U2 : $$(\phi, \lambda) \mapsto \mathrm{U3}(\frac12, \phi, \lambda) = e^{\frac12 i\pi(\lambda+\phi)} \mathrm{Rz}(\phi) \mathrm{Ry}(\frac12) \mathrm{Rz}(\lambda)$$, defined by matrix multiplication
U3 : $$(\theta, \phi, \lambda) \mapsto \left[ \begin{array}{cc} \cos\frac{\pi\theta}{2} & -e^{i\pi\lambda} \sin\frac{\pi\theta}{2} \\ e^{i\pi\phi} \sin\frac{\pi\theta}{2} & e^{i\pi(\lambda+\phi)} \cos\frac{\pi\theta}{2} \end{array} \right] = e^{\frac12 i\pi(\lambda+\phi)} \mathrm{Rz}(\phi) \mathrm{Ry}(\theta) \mathrm{Rz}(\lambda)$$
TK1 : $$(\alpha, \beta, \gamma) \mapsto \mathrm{Rz}(\alpha) \mathrm{Rx}(\beta) \mathrm{Rz}(\gamma)$$
TK2 : $$(\alpha, \beta, \gamma) \mapsto \mathrm{XXPhase}(\alpha) \mathrm{YYPhase}(\beta) \mathrm{ZZPhase}(\gamma)$$
CX : Controlled $$\mathrm{X}$$ gate
CY : Controlled $$\mathrm{Y}$$ gate
CZ : Controlled $$\mathrm{Z}$$ gate
CH : Controlled $$\mathrm{H}$$ gate
CV : Controlled $$\mathrm{V}$$ gate
CVdg : Controlled $$\mathrm{V}^{\dagger}$$ gate
CSX : Controlled $$\mathrm{SX}$$ gate
CSXdg : Controlled $$\mathrm{SX}^{\dagger}$$ gate
CRz : $$(\alpha) \mapsto$$ Controlled $$\mathrm{Rz}(\alpha)$$ gate
CRx : $$(\alpha) \mapsto$$ Controlled $$\mathrm{Rx}(\alpha)$$ gate
CRy : $$(\alpha) \mapsto$$ Controlled $$\mathrm{Ry}(\alpha)$$ gate
CU1 : $$(\lambda) \mapsto$$ Controlled $$\mathrm{U1}(\lambda)$$ gate. Note that this is not equivalent to a $$\mathrm{CRz}(\lambda)$$ up to global phase, differing by an extra $$\mathrm{Rz}(\frac{\lambda}{2})$$ on the control qubit.
CU3 : $$(\theta, \phi, \lambda) \mapsto$$ Controlled $$\mathrm{U3}(\theta, \phi, \lambda)$$ gate. Similar rules apply.
CCX : Toffoli gate
ECR : $$\frac{1}{\sqrt 2} \left[ \begin{array}{cccc} 0 & 0 & 1 & i \\0 & 0 & i & 1 \\1 & -i & 0 & 0 \\-i & 1 & 0 & 0 \end{array} \right]$$
SWAP : Swap gate
CSWAP : Controlled swap gate
noop : Identity gate. These gates are not permanent and are automatically stripped by the compiler
Barrier : Meta-operation preventing compilation through it. Not automatically stripped by the compiler
Label : Label for control flow jumps. Does not appear within a circuit
Branch : A control flow jump to a label dependent on the value of a given Bit. Does not appear within a circuit
Goto : An unconditional control flow jump to a Label. Does not appear within a circuit.
Stop : Halts execution immediately. Used to terminate a program. Does not appear within a circuit.
BRIDGE : A CX Bridge over 3 qubits. Used to apply a logical CX between the first and third qubits when they are not adjacent on the device, but both neighbour the second qubit. Acts as the identity on the second qubit
Measure : Z-basis projective measurement, storing the measurement outcome in a specified bit
Reset : Resets the qubit to $$\left|0\right>$$
CircBox : Represents an arbitrary subcircuit
PhasePolyBox : An operation representing arbitrary circuits made up of CX and Rz gates, represented as a phase polynomial together with a boolean matrix representing an additional linear transformation.
Unitary1qBox : Represents an arbitrary one-qubit unitary operation by its matrix
Unitary2qBox : Represents an arbitrary two-qubit unitary operation by its matrix
Unitary3qBox : Represents an arbitrary three-qubit unitary operation by its matrix
ExpBox : A two-qubit operation corresponding to a unitary matrix defined as the exponential $$e^{itA}$$ of an arbitrary 4x4 hermitian matrix $$A$$.
PauliExpBox : An operation defined as the exponential $$e^{-\frac{i\pi\alpha}{2} P}$$ of a tensor $$P$$ of Pauli operations.
QControlBox : An arbitrary n-controlled operation
CustomGate : $$(\alpha, \beta, \ldots) \mapsto$$ A user-defined operation, based on a Circuit $$C$$ with parameters $$\alpha, \beta, \ldots$$ substituted in place of bound symbolic variables in $$C$$, as defined by the CustomGateDef.
Conditional : An operation to be applied conditionally on the value of some classical register
ISWAP : $$(\alpha) \mapsto e^{\frac14 i \pi\alpha (\mathrm{X} \otimes \mathrm{X} + \mathrm{Y} \otimes \mathrm{Y})} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \cos\frac{\pi\alpha}{2} & i\sin\frac{\pi\alpha}{2} & 0 \\ 0 & i\sin\frac{\pi\alpha}{2} & \cos\frac{\pi\alpha}{2} & 0 \\ 0 & 0 & 0 & 1 \end{array} \right]$$
PhasedISWAP : $$(p, t) \mapsto \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \cos\frac{\pi t}{2} & i\sin\frac{\pi t}{2}e^{2i\pi p} & 0 \\ 0 & i\sin\frac{\pi t}{2}e^{-2i\pi p} & \cos\frac{\pi t}{2} & 0 \\ 0 & 0 & 0 & 1 \end{array} \right]$$ (equivalent to: Rz(p)[0]; Rz(-p)[1]; ISWAP(t); Rz(-p)[0]; Rz(p)[1])
XXPhase : $$(\alpha) \mapsto e^{-\frac12 i \pi\alpha (\mathrm{X} \otimes \mathrm{X})} = \left[ \begin{array}{cccc} \cos\frac{\pi\alpha}{2} & 0 & 0 & -i\sin\frac{\pi\alpha}{2} \\ 0 & \cos\frac{\pi\alpha}{2} & -i\sin\frac{\pi\alpha}{2} & 0 \\ 0 & -i\sin\frac{\pi\alpha}{2} & \cos\frac{\pi\alpha}{2} & 0 \\ -i\sin\frac{\pi\alpha}{2} & 0 & 0 & \cos\frac{\pi\alpha}{2} \end{array} \right]$$
YYPhase : $$(\alpha) \mapsto e^{-\frac12 i \pi\alpha (\mathrm{Y} \otimes \mathrm{Y})} = \left[ \begin{array}{cccc} \cos\frac{\pi\alpha}{2} & 0 & 0 & i\sin\frac{\pi\alpha}{2} \\ 0 & \cos\frac{\pi\alpha}{2} & -i\sin\frac{\pi\alpha}{2} & 0 \\ 0 & -i\sin\frac{\pi\alpha}{2} & \cos\frac{\pi\alpha}{2} & 0 \\ i\sin\frac{\pi\alpha}{2} & 0 & 0 & \cos\frac{\pi\alpha}{2} \end{array} \right]$$
ZZPhase : $$(\alpha) \mapsto e^{-\frac12 i \pi\alpha (\mathrm{Z} \otimes \mathrm{Z})} = \left[ \begin{array}{cccc} e^{-\frac12 i \pi\alpha} & 0 & 0 & 0 \\ 0 & e^{\frac12 i \pi\alpha} & 0 & 0 \\ 0 & 0 & e^{\frac12 i \pi\alpha} & 0 \\ 0 & 0 & 0 & e^{-\frac12 i \pi\alpha} \end{array} \right]$$
XXPhase3 : A 3-qubit gate XXPhase3(α) consists of pairwise 2-qubit XXPhase(α) interactions. Equivalent to XXPhase(α)[0, 1] XXPhase(α)[1, 2] XXPhase(α)[0, 2].
PhasedX : $$(\alpha,\beta) \mapsto \mathrm{Rz}(\beta)\mathrm{Rx}(\alpha)\mathrm{Rz}(-\beta)$$ (matrix-multiplication order)
NPhasedX : $$(\alpha, \beta) \mapsto \mathrm{PhasedX}(\alpha, \beta)^{\otimes n}$$ (n-qubit gate composed of identical PhasedX in parallel.
CnRy : $$(\alpha)$$ := n-controlled $$\mathrm{Ry}(\alpha)$$ gate.
CnX : n-controlled X gate.
ZZMax : $$e^{-\frac{i\pi}{4}(\mathrm{Z} \otimes \mathrm{Z})}$$, a maximally entangling ZZPhase
ESWAP : $$\alpha \mapsto e^{-\frac12 i\pi\alpha \cdot \mathrm{SWAP}} = \left[ \begin{array}{cccc} e^{-\frac12 i \pi\alpha} & 0 & 0 & 0 \\ 0 & \cos\frac{\pi\alpha}{2} & -i\sin\frac{\pi\alpha}{2} & 0 \\ 0 & -i\sin\frac{\pi\alpha}{2} & \cos\frac{\pi\alpha}{2} & 0 \\ 0 & 0 & 0 & e^{-\frac12 i \pi\alpha} \end{array} \right]$$
FSim : $$(\alpha, \beta) \mapsto \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \cos \pi\alpha & -i\sin \pi\alpha & 0 \\ 0 & -i\sin \pi\alpha & \cos \pi\alpha & 0 \\ 0 & 0 & 0 & e^{-i\pi\beta} \end{array} \right]$$
Sycamore : $$\mathrm{FSim}(\frac12, \frac16)$$
ISWAPMax : $$\mathrm{ISWAP}(1) = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & i & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right]$$
ClassicalTransform : A general classical operation where all inputs are also outputs
WASM : Op containing a classical wasm function call
SetBits : An operation to set some bits to specified values
CopyBits : An operation to copy some bit values
RangePredicate : A classical predicate defined by a range of values in binary encoding
ExplicitPredicate : A classical predicate defined by a truth table
ExplicitModifier : An operation defined by a truth table that modifies one bit
MultiBit : A classical operation applied to multiple bits simultaneously
ClassicalExpBox : A box for holding compound classical operations on Bits.
static from_name(arg0: json)
Construct from name
property name
|
2022-08-09 05:40:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6334725022315979, "perplexity": 5761.6034248738515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570901.18/warc/CC-MAIN-20220809033952-20220809063952-00557.warc.gz"}
|
https://axiomsofchoice.org/multilinear_functional
|
## Multilinear functional
### Set
context $X$…$\mathcal F$-vector space context $n\in \mathbb N$
definiendum $M\in \mathrm{MultiLin}(X^n)$
context $M:X^n \to \mathcal F$
$X^n$ being the cartesian product of $n$ instances of the vector space $X$.
$a,b\in \mathcal F$ $v_1,\dots,v_n,w\in X$ $1\le j\le n$
postulate $M(v_1,\dots,a\cdot v_j+b\cdot w,\dots,v_n)=a\ M(v_1,\dots,v_j,\dots,v_n)+b\ M(v_1,\dots,w,\dots,v_n)$
|
2019-02-19 11:25:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9106414914131165, "perplexity": 1943.2148155952939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247489933.47/warc/CC-MAIN-20190219101953-20190219123953-00402.warc.gz"}
|
https://indico.hiskp.uni-bonn.de/event/40/contributions/604/
|
# The 39th International Symposium on Lattice Field Theory (Lattice 2022)
Aug 8 – 13, 2022
Hörsaalzentrum Poppelsdorf
Europe/Berlin timezone
## Reformulation of anomaly inflow on the lattice and construction of lattice chiral gauge theories
Aug 10, 2022, 2:20 PM
20m
CP1-HSZ/1st-1.001 - HS5 (CP1-HSZ)
### CP1-HSZ/1st-1.001 - HS5
#### CP1-HSZ
50
Oral Presentation Theoretical Developments and Applications beyond Particle Physics
### Speaker
Juan William Pedersen (The University of Tokyo)
### Description
This research aims to analyze the integrability condition of the chiral determinant of 4D overlap fermions and construct lattice chiral gauge theories.
$\quad$ We formulate the integrability condition with 5D and 6D lattice domain wall fermions. Our formulation parallels the recent cobordism classification of the global ‘t Hooft anomaly using the $\eta$-invariant based on the Dai-Freed theorem and the Atiya-Patodi-Singer index theorem in the continuum theory.
$\quad$ The necessary and sufficient condition for constructing a lattice chiral gauge theory comes down to the statement that "$\exp ( 2\pi i \eta ) = 1$ for any gauge configurations satisfying the admissibility condition in 5D lattice space.", where $\exp ( 2\pi i \eta )$ is defined as the phase of the partition function of the 5D domain wall fermion.
### Primary authors
Juan William Pedersen (The University of Tokyo) Prof. Yoshio Kikukawa (The University of Tokyo)
|
2023-02-02 02:43:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6651185154914856, "perplexity": 2344.6459243056647}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499954.21/warc/CC-MAIN-20230202003408-20230202033408-00294.warc.gz"}
|
https://books.byui.edu/bus_115_business_app/conditional_formattiF?book_nav=true
|
# Conditional Formatting 2
Data bar formatting can be modified from the preset to function more appropriately for the data. In this chapter, we’ll work through different settings that can be applied to the formatting rules.
Use this workbook for the chapter.
First, we need to select the cells to be formatted, press the Conditional Formatting button, and select any preset Data Bar format.
Notice the data bar fills cells based on the highest value in the selected range by default. In this case, the highest value is 16, but what if we want the data bar to be based on a higher value like 100? We will need to modify the format rule.
1. Press the Conditional Formatting button on the ribbon toolbar.
2. Select Manage Rules….
3. Select the data bar rule to be modified and press Edit Rule… (See Figure 23.2)
The window that opens provides options for changing the rule type, format style, minimum and maximum values, and bar appearance settings. For this example, we’ll change the value settings by selecting the type to be Number and input 0 for the minimum and 100 for the maximum. Confirm the change to close the window. The data bars will update to reflect each cell’s value in relation to the maximum 100.
## Color Scale Rules
Color scaling, also known as a heat map, will fill cells with one of three colors based on the value in the cell. This style of formatting is used to visually represent low to high values. (See Figure 23.3)
For example, we can select the Green - Yellow - Red Color Scale to format high numbers with green, low numbers in red, mid-average numbers in yellow, and everything between varying in shade. (See Figure 23.4)
What if we need to use red for high numbers and green for low or simply different colors? We can select the Red - Yellow - Green Color Scale instead, or edit the formatting rules to customize the color scheme and value settings. (See Figure 23.5)
Other color scaling options are available to use only two colors, such as the Green - White Color Scale to format the highest values in a dark color that progressively lightens toward white for low values.
## Icon Set Rules
Icon sets are useful for showing a change in value. If we want to see an increase or decrease in price, we’ll need individual cells with the old and new values. For this example, we’ll set up a random number generator to represent the change value. (See Figure 23.6)
1. Type =RANDBETWEEN and a left parenthesis ( ( ) in a new cell.
2. Type a bottom value, then a comma ( , ).
1. In this example, we’ll insert -10 as the bottom value.
3. Type a top value, then a right parenthesis ( ) ).
1. In this example, we’ll insert 100 as the top value.
4. Press Enter to complete the formula.
Next, we’ll calculate the price difference by using a formula to multiply the old price by the randomly generated value as a percentage.
1. Type an equals sign ( = ) to begin the formula.
2. Select the cell containing the old price.
1. In this example, the old price is contained in L3.
3. Type an asterisk ( * ) to multiply.
4. Select the cell containing the randomly generated value.
1. In this example, the random value is contained in M3.
5. Type a forward slash ( / ) to divide.
6. Type 100.
7. Press Enter to complete the formula. (See Figure 23.7)
Note: We can copy and paste the price difference value over the completed formulas by using Paste Value without the formula to prevent continuous changes based on the randomly generated number.
Next, we need to add the old price with the price difference to calculate the new price in a separate cell.
Finally, we determine the percentage change by dividing the new price by the old price minus one.
1. Type an equals sign ( = ) to begin the formula.
2. Select the cell containing the new price.
1. In this example, the new price is contained in O3.
3. Type a forward slash ( / ) to divide.
4. Select the cell containing the old price.
1. In this example, the new price is contained in L3.
5. Type a hyphen ( - ) to subtract.
6. Type a 1 and press Enter to complete the formula. (See Figure 23.8)
Now we can format the price change percentage cells to include icon sets. The icons can visually show positive or negative changes. However, the preset settings don’t appropriately demonstrate increase or decrease, so we’ll need to edit the rule.
The formatting rule settings include icon style and value designations to determine when and what icon is displayed. (See Figure 23.9)
## Format by Formula
Up to this point, we have learned how to format a cell based on the value it contains. Now we’ll learn how to format a cell based on another cell’s value using formulas. In this example, we want to format a list of part numbers based on related data cells. (See Figure 23.10)
We need to create a rule by selecting the cells to be formatted.
1. Press the Conditional Formatting button in the ribbon toolbar.
2. Select New Rule…
3. Select the Use a formula to determine which cells to format rule type.
4. Press the Format… button to choose a format.
We’re going to format the part number cells with a light-yellow fill based on if the part’s class is A.
1. Type an equals sign ( = ) in the Format values where this formula is true input field.
2. Select the cell containing the appropriate class value.
1. The selected cell will be anchored by default in the formula. However, we do not want the cell to be anchored for this formula, so we’ll remove the anchor by pressing F4 or deleted the dollar signs ( $). 3. Type an equals sign ( = ) to determine if the selected cell matches the next input. 4. Type double quotation marks ( " ) to indicate a textual string. 5. Type the letter A. 6. Type double quotation marks ( " ) to close the text string and complete the formula (=W3="A"). (See Figure 23.11) Note: To format the full row of cells associated with the appropriate part number, the formatting rule must apply to the whole range of data, and the formula must anchor the column (=$W3="A"). (See Figure 23.12)
If we want to format the rows of parts over a specific price, we can create an input cell containing the threshold value. Then, we'll add a conditional formatting formula to check the part's price in relation to the input cell and format the cells with a red border.
1. Type an equals sign ( = ) in the Format values where this formula is true input field.
2. Select the cell containing the part's price.
1. The selected cell needs to anchor the column, but not the row to work properly.
3. Type an greater than sign ( > ).
4. Select the input cell containing the desired threshold value.
5. Confirm formatting rule formula. (See Figure 23.13)
### End-of-Chapter Survey
: How would you rate the overall quality of this chapter?
1. Very Low Quality
2. Low Quality
3. Moderate Quality
4. High Quality
5. Very High Quality
Comments will be automatically submitted when you navigate away from the page.
|
2023-03-22 06:52:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39118027687072754, "perplexity": 1446.7759366697492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943750.71/warc/CC-MAIN-20230322051607-20230322081607-00260.warc.gz"}
|
https://www.scipost.org/submissions/scipost_202101_00011v1/
|
# High Efficiency Configuration Space Sampling -- probing the distribution of available states
### Submission summary
As Contributors: Paweł Jochym · Jan Łażewski Preprint link: scipost_202101_00011v1 Date submitted: 2021-01-20 09:59 Submitted by: Jochym, Paweł Submitted to: SciPost Physics Academic field: Physics Specialties: Condensed Matter Physics - Theory Approaches: Theoretical, Computational
### Abstract
Substantial acceleration of research and more efficient utilization of resources can be achieved in modelling investigated phenomena by identifying the limits of system's accessible states instead of tracing the trajectory of its evolution. The proposed strategy uses the Metropolis-Hastings Monte-Carlo sampling of the configuration space probability distribution coupled with physically-motivated prior probability distribution. We demonstrate this general idea by presenting a high performance method of generating configurations for lattice dynamics and other computational solid state physics calculations corresponding to non-zero temperatures. In contrast to the methods based on molecular dynamics where only a small fraction of obtained data is consumed, the proposed scheme is distinguished by a considerably higher, reaching even 80%, acceptance ratio.
###### Current status:
Has been resubmitted
### Submission & Refereeing History
Resubmission scipost_202101_00011v3 on 21 May 2021
Resubmission scipost_202101_00011v2 on 26 April 2021
Submission scipost_202101_00011v1 on 20 January 2021
## Reports on this Submission
### Report 2 by Bjorn Wehinger on 2021-3-10 Post-Editorial Recommendation Report (Invited Report)
• Cite as: Bjorn Wehinger, Report on arXiv:scipost_202101_00011v1, delivered 2021-03-10, doi: 10.21468/SciPost.Report.2676
### Strengths
Original new approach for the study of lattice lattice dynamics at finite temperatures.
### Weaknesses
Presentation should be improved.
### Report
The authors of the manuscript "High Efficiency Configuration Space Sampling – probing the distribution of available states" present a new method for studying lattice dynamics at finite temperatures.
Their approach is based on configuration sampling the distribution of available states using the Metropolis-Hasting algorithm with a prior probability distribution derived from harmonic lattice dynamics.
The authors compute anharmonic phonon dispersions and lifetimes for 3C-SiC to validate their approach and claim high computational performance due to a large observed acceptance ratio.
The idea is highly original and I expect significant impact for the study of thermal properties at finite temperatures in large crystalline systems containing many atoms per unit cells.
In order to fully convince the reader and justify publication in SciPost Physics, I recommend the authors to address and clarify the following:
1. The lattice dynamics of 3C-SiC at room temperature can be described fairly well by the harmonic approximation. It seems thus no big surprise that a prior probability distribution derived from harmonic lattice dynamics converges successfully and quickly. But how well does it work for a more anharmonic situation?
Although the chosen potentials might not be accurate close to melting it would be a very nice illustration to compare the dispersions and lifetimes to molecular dynamics simulations at a temperature where anharmonic effects are more important.
2. The performance of the new approach is based on comparing its acceptance ratio to molecular dynamics simulations. How do actual computation times compare? How does the performance (computation time) scale with system size including the possibility to run calculation in parallel on many cores?
3. Presentation. Title and abstract suggest application of the method to a wild variety of problems in solid state physics. However, such are mentioned only marginally in the conclusions while the main text fully focuses on the application to lattice dynamics.
Experts in lattice dynamics may thus overlook this work and its relevance if not highlighted better.
At several points the manuscript would profit from more quantitative statements.
For instance,
Lines 80-81: Limitations and applications should be discussed in more details.
Lines 102-103: Phase transitions are excluded by the "reasonable" class. This should me mentioned and the application of the approach to different kind of phase transitions could be addressed.
Lines 114-115: "too wild" and "very quick" should be quantified.
Lines 136- 137: "barely noticeable" and "hardly visible" obviously depend on how the data is plotted. Please quantify.
Figs. 1 and 2. correlations between $E_{k var}$ and $E_{p var}$ could be discussed.
Lines 186-189: Asymptotic production of target distribution for any non-vanishing prior distribution requires a citation.
Lines 213-214: Please explain why parameters are independent and their values not critical. Are there limitations?
Figures 5 and 6 should be discussed in more detail. Agreement and differences need to be pointed out. Fig. 5 is confusing. It's caption suggest that molecular dynamics was used to extract harmonic phonon frequencies, while the text states that higher order (anharmonic) force constants were extracted. It would be nice to compare harmonic phonon frequencies to both anharmonic phonon frequencies obtained from molecular dynamics and from the new method and discuss agreement and differences in detail for at least two different temperatures.
Fig. 6 is lacking information on lifetimes of the acoustic branches with small momenta and small energies close to the $\Gamma$-point. It would be nice to discuss convergence and numerical limitations for these.
Both figures are very difficult to read because they are small and contain too much data. Splitting into sub-panels where the same number of samples are compared could help.
In summary, the presented approach is highly innovative and worth to be published but the presentation needs to be improved to make it convincing.
### Requested changes
• validity: high
• significance: high
• originality: top
• clarity: good
• formatting: reasonable
• grammar: reasonable
### Author: Paweł Jochym on 2021-04-26 [id 1383]
(in reply to Report 2 by Bjorn Wehinger on 2021-03-10)
Category:
remark
# Reply to the report of Dr Wehinger
We would like to thank Dr Wehinger for careful reading of the manuscript and his positive opinion on our work.
The authors of the manuscript "High Efficiency Configuration Space Sampling – probing the distribution of available states" present a new method for studying lattice dynamics at finite temperatures. Their approach is based on configuration sampling the distribution of available states using the Metropolis-Hasting algorithm with a prior probability distribution derived from harmonic lattice dynamics. The authors compute anharmonic phonon dispersions and lifetimes for 3C-SiC to validate their approach and claim high computational performance due to a large observed acceptance ratio. The idea is highly original and I expect significant impact for the study of thermal properties at finite temperatures in large crystalline systems containing many atoms per unit cells. In order to fully convince the reader and justify publication in SciPost Physics, I recommend the authors to address and clarify the following:
We would like to point out that the presented approach does not depend on strict harmonicity of the system. The Eq. (4) and its description (l. 78-87) explicitly point to the impact of the anharmonicity on the formulas used in the proposed method. In particular, the normality of the distribution is not impacted - since it originates from the Central Limit Theorem (CLT, Eq. 5). What may be influenced is the value of the mean and the variance of the distribution - which will skew the temperature scale and possibly diminish the fidelity of our approximation of the thermal equilibrium state. Since both referees missed this point we have expanded our explanation of this issue to make it more clear to the reader.
Additionally, while we use lattice dynamics as an example in the text, the potential applicability of the proposed method is broader - it may be useful in other places where we need to reproduce the configuration of the system of atoms in thermal equilibrium in non-zero temperature. This fact is mentioned in the abstract but we will expand the conclusions by mentioning it there as well.
1. The lattice dynamics of 3C-SiC at room temperature can be described fairly well by the harmonic approximation. It seems thus no big surprise that a prior probability distribution derived from harmonic lattice dynamics converges successfully and quickly. But how well does it work for a more anharmonic situation? Although the chosen potentials might not be accurate close to melting it would be a very nice illustration to compare the dispersions and lifetimes to molecular dynamics simulations at a temperature where anharmonic effects are more important.
Indeed, the chosen system is not strongly anharmonic at T=300K. But still there is enough anharmonicity in the model to produce 5ps phonon lifetimes plotted in Fig. 6. Also, the Tersoff potential selected for the study is not a simplistic, harmonic, two-body potential. It is a published, effective model of interactions in the Si-C compounds.
We would like to stress that the closeness of the prior distribution to the target (Figs 3 and 4) originates from the size of the system and careful selection of the prior generating algorithm (Eq. 6 and description in l. 172-183). As we noted in the reply to the first referee the extreme cases of anharmonicity dominating the right hand side of the Eq. 4 for all, or most coordinates may be beyond the direct applicability of the proposed method. To illustrate the point, we have added to Figures 3, 4, 5, and 6 the calculations performed for higher temperatures (up to T=2000K) closer to the melting point of 3C-SiC demonstrating effectiveness of the proposed approach even in high temperatures.
1. The performance of the new approach is based on comparing its acceptance ratio to molecular dynamics simulations. How do actual computation times compare? How does the performance (computation time) scale with system size including the possibility to run calculation in parallel on many cores?
The computational cost is essentially proportional to the number of requested configurations plus necessary burn-in samples (1-10, can be limited to 1-2 with careful selection of initial displacement variation). Due to the details of the Metropolis-Hastings algorithm this cost is independent of the acceptance ratio. Low acceptance ratio leads to low quality of the generated distribution, not a direct increase in computational cost. This increase stems from the fact that with low acceptance more samples are required for the reasonable fidelity of the produced distribution. In comparison with MD calculations, each generated configuration is equivalent to one time step in trajectory. However, in case of the DFT-based calculations, the MD procedure can be optimized by starting each step from the charge density/wave functions converged in the previous step. Due to the fact that samples generated by HECSS are independent, this optimization is not easily available in DFT-based calculations. This amounts to approximately twice as many electronic SCF steps per evaluated configuration. Thus, n-configurations HECSS run is equivalent to approximately 2*(n+10) time steps of the MD calculation. In our experience this is not enough to provide even single, well-thermalized sample for n<500.
Regarding the parallel computation: In current implementation each configuration evaluation may be run on multiple cores but the sample generation is strictly serial. The near-independence of generated samples provides opportunity for future splitting of the computation to multiple processes. Naturally, each temperature scan may be run as a separate process with full linear scaling.
We will add analysis of the computational cost of the HECSS approach to the final paragraph of the text.
1. Presentation. Title and abstract suggest application of the method to a wild variety of problems in solid state physics. However, such are mentioned only marginally in the conclusions while the main text fully focuses on the application to lattice dynamics. Experts in lattice dynamics may thus overlook this work and its relevance if not highlighted better. At several points the manuscript would profit from more quantitative statements.
We will expand the abstract to better reflect our focus - which is indeed, at this moment, on lattice dynamics applications. The other applications mentioned in the abstract are our suggestions of other fields where this type of procedure may be beneficial.
Lines 80-81: Limitations and applications should be discussed in more details. Lines 102-103: Phase transitions are excluded by the "reasonable" class. This should me mentioned and the application of the approach to different kind of phase transitions could be addressed.
Following the comment of the first referee we have expanded the description of probable limitations of the proposed method (phase transitions, highly anharmonic systems).
Lines 114-115: "too wild" and "very quick" should be quantified.
Lines 136- 137: "barely noticeable" and "hardly visible" obviously depend on how the data is plotted. Please quantify.
We have replaced these imprecise phrases with quantitative description showing the speed of convergence and cited the appropriate literature.
Figs. 1 and 2. correlations between E_kvar and E_pvar could be discussed.
The correlation between variances of the kinetic and potential distribution comes directly from energy conservation and statistical mechanics. Both energies are part of the Hamiltonian and sum up to the total energy. Thus, due to the energy conservation their variances should match. We have added the appropriate sentence to the discussion at the end of section 2.
Lines 186-189: Asymptotic production of target distribution for any non-vanishing prior distribution requires a citation.
Appropriate citation has been added to the list of references.
Lines 213-214: Please explain why parameters are independent and their values not critical. Are there limitations?
The independence from the system (supercell) size stems from the connection with the displacement distribution - it is our conclusion drawn from the experience gained during the development of the HECSS scheme. If the interactions are reproduced reasonably well in the small supercell (e.g. single crystallographic unit cell) the average size of thermal displacement is expected to be the same as in larger supercell due to the same energy per degree of freedom (i.e. temperature) and very similar shape of the potential. The independence and the practical ranges of the parameters cited in the text are derived from the multiple tests run during the development of the HECSS code. We have added a sentence explaining this property and rephrased the surrounding text to make this issue more clear to the reader.
Figures 5 and 6 should be discussed in more detail. Agreement and differences need to be pointed out. Fig. 5 is confusing. It's caption suggest that molecular dynamics was used to extract harmonic phonon frequencies, while the text states that higher order (anharmonic) force constants were extracted. It would be nice to compare harmonic phonon frequencies to both anharmonic phonon frequencies obtained from molecular dynamics and from the new method and discuss agreement and differences in detail for at least two different temperatures.
The phonon frequencies presented in Fig. 5 are derived by fitting of a third order anharmonic model to both datasets and the frequencies are derived from this model. The lifetimes from the Fig. 6 are obtained from the same model using ALAMODE to compute anharmonic self-energy and phonon lifetimes from the third order coefficients in the fit (using relaxation time approximation). We have corrected a misleading description of Figs 5 and 6 and expanded the description to make the point clear. We have also included the RMS differences between phonon frequencies derived by both methods.
Fig. 6 is lacking information on lifetimes of the acoustic branches with small momenta and small energies close to the Γ-point. It would be nice to discuss convergence and numerical limitations for these.
The access to the vicinity of the zone-center is limited by the supercell size used in the calculation. The closest point provided by the supercell used in the paper (5x5x5, 1000 atoms) and reciprocal space sampling grid (20x20x20) is located at 1/10 of the zone size from the center. All data between this point and the zone center are interpolated from the fitted force constant matrices in real space. We will add information about the reciprocal space sampling to the text. Additionally we have expanded the presented data to include more temperatures: 100K 600K and 2000K.
Both figures are very difficult to read because they are small and contain too much data. Splitting into sub-panels where the same number of samples are compared could help.
Figure 5 is intended to show small difference between frequencies computed from both data sets. We agree that the presentation in both Fig. 5 and 6 will benefit from such split and we have replaced both figures by separate panels containing data sets of the same size. We have also added additional temperatures - as mentioned above. The description has been modified appropriately.
In summary, the presented approach is highly innovative and worth to be published but the presentation needs to be improved to make it convincing.
We hope that the above explanations and corrections to the text make our paper convincing and clarify all the issues raised by Dr Wehinger.
Paweł T. Jochym, Jan Łażewski
### Anonymous Report 1 on 2021-2-26 (Invited Report)
• Cite as: Anonymous, Report on arXiv:scipost_202101_00011v1, delivered 2021-02-26, doi: 10.21468/SciPost.Report.2627
### Report
The authors of the manuscript "High Efficiency Configuration Space Sampling – probing the distribution of available states" present a configuration sampling approach based on the Metropolis-Hastings algorithm.
The authors claim that their approach is more efficient than other established approaches based on molecular dynamics (MD) propagation. They validate their numerical approach by computing the phonon dispersion and lifetimes
of the cubic 3C-SiC crystal and comparing against MD benchmark results.
I have found a few gaps in the presentation which prevent a full grasp of their assumptions and numerical validation. Their arguments and derivations cannot be fully reproduced by qualified experts
For instance, the authors claim they sampled at finite temperature, but the value of the temperature is not provided anywhere in the text.
If the temperature is much smaller than the melting temperature (~3,000 K for SiC), the harmonic approximation is expected to be quite accurate and the the Gaussian "prior" sampling described in Sec. 3 should be almost exact. This seems to agree with the large acceptance ratio (up to 80%) observed by the authors.
A very high acceptance ratio is not necessarily an advantage of the approach as it may imply large correlation in the sample. The authors should discuss the position autocorrelation function obtained from their approach and compare against the one obtained using canonical MD sampling. The best acceptance rate is the one which minimises the autocorrelation time.
The agreement between the Monte Carlo and MD phonon dispersion shown in Fig. 5 is to be expected if the anharmonicity is negligible. The agreement on the phonon lifetimes is a tougher check, but it is hard to draw any quantitative conclusion from Fig. 6. The points are pretty scattered over a semi-log plot, which means that the error can be rather large.
The authors should discuss a more quantitative estimator, e.g., the square root of the sum over the wave-vectors and bands of the square deviation of the Monte Carlo and MD phonon lifetimes.
Lines 34-35: The authors mention "running a 30000 steps MD". Why exactly this number of steps?
Eq. (1): Not all the symbols have been introduced in the text.
Eq. (2): The authors implicitly assume a two-body force field. What would happen in the case of a many-body force field like the embedded atom model or Tersoff potentials?
Lines 80-81: The sentence "Only experience can tell us how good this approximation is and how wide its applicability range is" is not correct and underrate the role of a large body of numerical analysis.
Lines 102-103: The sentence "The reasonable class is very broad here, certainly containing all physically interesting cases by virtue of requiring a finite variance and a well-defined mean" is not correct, unless phase transitions are excluded. The variance of several quantities diverges close to a phase transition.
Eq. (5): The equal sign is not correct as for N\to\infty the variance of the distribution of the right-hand side is zero.
Line 125: A "single server" is not a well-defined object: how many CPU's were used?
In comparing the computational efficiency of MD and Monte Carlo methods, one has to take into consideration the time spent generating random numbers, which may be more computationally demanding than propagating a trajectory, given the same number of force-field evaluations.
In conclusion, I do not feel comfortable in suggesting the manuscript in its current form.
• validity: -
• significance: -
• originality: -
• clarity: -
• formatting: -
• grammar: -
### Author: Paweł Jochym on 2021-03-04 [id 1286]
(in reply to Report 1 on 2021-02-26)
We thank the referee for reading our paper and spotting omissions and mistakes in our text. We would like to immediately address the referee's comments and correct mistakes pointed in his report. Naturally, we intend to include these corrections in the resubmitted manuscript. We hope that the following clarifications will enable full understanding of our approach. We address the comments paragraph by paragraph quoting the referee before our response.
I have found a few gaps in the presentation which prevent a full grasp of their assumptions and numerical validation. Their arguments and derivations cannot be fully reproduced by qualified experts For instance, the authors claim they sampled at finite temperature, but the value of the temperature is not provided anywhere in the text.
The text was intended as a demonstration of the proposed method not as a reproduction or prediction of a particular experimental result. This may perhaps explain unfortunate omission of sampled temperature in the text. While the temperature can in fact be extracted from the average energy in Fig. 3 and 4, it should also be stated in the text explicitly. The sampling temperature for all presented data is 300 K, chosen as standard ambient temperature. However, we have tested our algorithm for a number of other temperatures, up to 2000 K and there was no qualitative difference in its performance or properties.
If the temperature is much smaller than the melting temperature (~3,000 K for SiC), the harmonic approximation is expected to be quite accurate and the the Gaussian "prior" sampling described in Sec. 3 should be almost exact. This seems to agree with the large acceptance ratio (up to 80%) observed by the authors.
The shape of the energy distribution is not determined by harmonicity of the potentials but by the Central Limit Theorem and the size of the system. The difference between prior distribution (which comes from our approximation of the displacement distribution) and the target distribution does not stem from the anharmonicity of the potential but mainly from the fact that the displacements of the atoms in the crystal are not independent (clearly stated in our text). Please note that we have no direct access to the potential energy of the system - we can only specify the geometry and calculate the resulting energy. Thus, we have no ability to directly generate the target energy distribution from Fig. 3. The high acceptance ratio comes from our selection of the displacement distribution and tuning algorithm described in the paper. The Metropolis-Hastings (M-H) algorithm can generate a target distribution from any prior which is non-zero over the domain. However, the acceptance ratio may be very low if you fail to use the prior which is a good approximation of target distribution. It is a well-known fact in the numerical statistics community that the good selection of the prior distribution is the key to an effective use of probability distribution sampling algorithms.
A very high acceptance ratio is not necessarily an advantage of the approach as it may imply large correlation in the sample. The authors should discuss the position autocorrelation function obtained from their approach and compare against the one obtained using canonical MD sampling. The best acceptance rate is the one which minimises the autocorrelation time.
The issue of sample correlation would be indeed important if we used a 'random walk'-type algorithm for the prior generation (which is a popular variant of the M-H algorithm). Instead, we use independent samples and the only possible correlation between them arises from a very small change (less than 2%) in the variance of the position distribution. We discuss this issue in the paragraph starting in line 196. On the other hand, the MD derived data has obvious autocorrelations - note that we can derive phonon frequencies from the Fourier transform of the velocity autocorrelation function along the MD trajectory. Thus, it is necessary to separate sampling points on the trajectory by substantial intervals allowing for these correlations to die out. Nevertheless, all time steps between the sampling points still need to be calculated, which leads to large inefficiency of the MD as a configuration generator. To further clarify the issue we suggest expanding the explanation in the text by the following paragraph:
'The possible correlations introduced in the HECSS generated data result only from the fact that if the n-th sample leads to exceptionally small or large energy, the next sample is drawn from the positional distribution with variance increased or reduced, respectively, by a small amount (no more than $(1+\delta)$ times in extreme cases). Thus, the probability of larger energy following the exceptionally small energy in the sampling chain (or the other way around: a smaller sample following an exceptionally large one) is slightly increased. Note, however, that this does not introduce any correlations in any particular coordinate. In the MD trajectory the correlations arise from the non-random character of the particle trajectory.'
The agreement between the Monte Carlo and MD phonon dispersion shown in Fig. 5 is to be expected if the anharmonicity is negligible. The agreement on the phonon lifetimes is a tougher check, but it is hard to draw any quantitative conclusion from Fig. 6. The points are pretty scattered over a semi-log plot, which means that the error can be rather large. The authors should discuss a more quantitative estimator, e.g., the square root of the sum over the wave-vectors and bands of the square deviation of the Monte Carlo and MD phonon lifetimes.
Indeed, for the purely harmonic system the phonon frequency test is not useful since phonon frequencies are independent from the displacement size. However, if the system considered in the paper were close to harmonic, we would expect to obtain very long phonon lifetimes (since they are infinite in the harmonic system). The data in Fig. 6 demonstrates that most of the phonon modes exhibit lifetimes below 10ps - showing non-negligible anharmonicity in the model. This fact provides justification for the validity of the phonon frequency test. The phonon lifetimes are very sensitive to the accuracy of the model. This is especially true in case of large values which indicate small deviations from harmonicity and usually carry large error bars. Unfortunately, the large range (close to two orders of magnitude) of the values of lifetimes makes the simple RMS measure of differences very misleading - since the differences at high end of the range will dominate the sum. Thus, we are going to consider using RMS of logarithms of lifetimes as a better measure of relative changes in the values obtained using MD and HECSS approaches.
Lines 34-35: The authors mention "running a 30000 steps MD". Why exactly this number of steps?
The number of steps (30 000) used in the introduction was a typical relaxation time of a long-run MD suggested by the often used "rule of thumb" in MD calculations (50 times period of typical vibrations in the system). For 3C-SiC: $f\approx 10$THz = $10^{13}$Hz $\Rightarrow t=10^{-13}$s = 100fs; 50 * 100fs = 5ps. With 1fs time step that equals 5000 steps minimum run where we can use at most half of it for actual data (you need to provide time to obtain thermal equilibrium). If we need approx. 30 data points (as required by anharmonic calculations, see Fig. 6) and they should be separated by at least 1ps interval (at least 10 typical vibrations) we get approximately 30 000 steps. The cited number itself has no 'magical' value and results from the setup of the calculations presented in the paper. To avoid impression that the number 30 000 has any special meaning, we are going to replace the number by the phrase: "thousands of MD steps".
Eq. (1): Not all the symbols have been introduced in the text.
Eq. (1): The sentence introducing missing symbols will be added to the revised text: $x_n$ - generalized coordinate, $H$ - Hamiltonian, $T$ - temperature, $k_B$ - Boltzmann constant, $\delta_{mn}$ - Kronecker delta
Eq. (2): The authors implicitly assume a two-body force field. What would happen in the case of a many-body force field like the embedded atom model or Tersoff potentials?
Eq. (2): We make no two-body assumption, neither implied nor explicit. The formulation of equipartition theorem, Eq. (1), explicitly concerns single coordinates (the only non-zero term due to the Kronecker delta) and makes no assumption on the form of the Hamiltonian H. The Taylor expansion, Eq. (2), is not a complete expansion in all coordinates $q$ (note the scalar $q$ symbol). It is the Taylor expansion in single coordinate with coefficients ($C_n$ - proportional to partial derivatives of energy with respect to this coordinate) which are functions of all the other coordinates in the system. Furthermore, the calculations presented in the paper use the mentioned Tersoff potential developed in refs 17, 18. We understand that due to the formulation of the surrounding text this may not be entirely clear and may confuse the reader. To avoid this we are going to add a clarifying sentence before Eq. (2).
Lines 80-81: The sentence "Only experience can tell us how good this approximation is and how wide its applicability range is" is not correct and underrate the role of a large body of numerical analysis.
The unfortunate sentence 80-81 brings nothing of importance to the text. We are going to remove it in the resubmitted text.
Lines 102-103: The sentence "The reasonable class is very broad here, certainly containing all physically interesting cases by virtue of requiring a finite variance and a well-defined mean" is not correct, unless phase transitions are excluded. The variance of several quantities diverges close to a phase transition.
Variance of several quantities is indeed divergent in some phase transitions. In cases where the transition involves divergent heat capacity this includes energy variance. Thus, our phrase :"...all physically interesting cases..." was indeed wrong. The sentence is going to be corrected. We are going to specify where we can use the described procedure and clearly state that in cases where energy variance diverges, the procedure cannot be used. We thank the referee for spotting this important fact.
Eq. (5): The equal sign is not correct as for N\to\infty the variance of the distribution of the right-hand side is zero.
The Eq. (5) was an attempt to formally write asymptotic relation of the Central Limit Theorem described in the paragraph 99-102. The CLT is indeed not a limit relation but asymptotic distribution convergence relation and the Eq. (5) should use appropriate notation for such relations as convergence in distribution:
$$\sqrt{3N}\left(\frac{1}{N} \sum_i E_i -\langle E\rangle \right) \xrightarrow{d}\mathcal{N}(0, \sigma).$$
The mistake in notation has no consequences for the arguments and conclusions presented in the text. The Eq. (5) is going to be corrected in the revised text.
Line 125: A "single server" is not a well-defined object: how many CPU's were used?
"Single server" mentioned in line 125 was used as a rough indication of the computational effort involved in the described task. It is nothing out of ordinary: 2x4 cores CPU and 32GB RAM. This is actually a fairly under-powered and old machine, less powerful than some of newer generation laptops. The information will be added to the sentence in revised text.
In comparing the computational efficiency of MD and Monte Carlo methods, one has to take into consideration the time spent generating random numbers, which may be more computationally demanding than propagating a trajectory, given the same number of force-field evaluations.
In some cases the random number generation may be indeed fairly expensive but in the case of typical systems of tens of atoms, the energy and forces evaluation is much more time-consuming. For instance, the random number generator we have used (from SciPy.stats library) takes 180$\mu$s to generate 3000 random numbers required to create one sample for the 5x5x5 supercell of 3C-SiC. The single evaluation of energy for the same cell (1000 atoms) takes 4ms (20 times longer) using ASAP3 with OpenKIM model from our calculations. A more sophisticated interaction model is bound to be even more time-consuming. Furthermore, molecular dynamics requires calculating multiple time steps per every generated sample. Considering this facts, we maintain that the proposed HECSS approach offers substantial advantage over MD as a source of configuration data.
|
2022-10-04 16:00:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6157123446464539, "perplexity": 791.3459033661247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00112.warc.gz"}
|
https://r.tiid.org/loading-data-and-subsetting.html
|
In this section we will learn how to load and operate data. Later, Section 3 will expand on subsetting data sets and show you how to make loops.
trade_data <- read.csv("https://r.tiid.org/R_tute_data2007-2017.csv")
It is advised to work in R by setting a working directory. Set your working directory from which your data frames will be imported and to which the resulting files will be exported by using the function setwd. This is a very important step, as it will allow you to import files from this location without having to indicate the full path to this directory. And RStudio will save all output and backup files into that directory automatically.
You can also copy the path from your “File explorer”. Note for PC users: when inputting the path to any directory in RStudio, it is necessary to replace all backslashes \ with forward slashes /, or you will get an error message.
setwd("C:/path/to_your/working_directory")
You can check your working directory using getwd.
getwd()
## [1] "C:/path/to_your/working_directory"
Once you have set your working directory, we can read the CSV file in R using the command read.csv by just referencing the name of the CSV file’s name. (Remember you can run ?read.csv to see all the options of this R function.)
trade_data <- read.csv("data/R_tute_data2007-2017.csv")
It is possible to run into the following error, if so don’t forget to put “.csv”, or make sure the location of your file is correct.
Error in file(file, “rt”) : cannot open the connection
In file(file, “rt”) :
cannot open file ‘C:/path/to_your/working_directory/r_tute_data2007-2017.csv’: No such file or directory
The trade_data file we uploaded on R is a spreadsheet, in R called a “data.frame”.
class(trade_data)
## [1] "data.frame"
We check the variable names of the data set with the names command:
names(trade_data)
## [1] "reporter" "flow" "partner" "year" "value"
We can also open the data set by clicking on it. Remember from the Introduction, your variables and data sets are located in the upper right corner.
From the names function we know that our trade_data has a reporter column which shows the country reporting the trade values and partner is the destination country. flow specifies whether it is an export or an import. year represents the year the trade value was recorded.
Let’s now get into some data subsetting!
## 2.2 Basic data subsetting
We check the first few rows of a data set with the head command:
head(trade_data)
## reporter flow partner year value
## 1 213 I 186 2008 156130202
## 2 213 E 174 2008 117661208
## 3 213 E 172 2008 31986252
## 4 213 E 134 2008 1507966544
## 5 213 I 181 2008 2407260
## 6 213 I 182 2008 80414681
The head function is very useful when we just want to get a small glimpse of the data - for instance what are the variables included. If you need to see more than just the first rows, you can also open it by clicking on it’s name.
If we want to check a specific number of rows, we can do so by specifying this number as in below:
head(trade_data,10)
## reporter flow partner year value
## 1 213 I 186 2008 156130202
## 2 213 E 174 2008 117661208
## 3 213 E 172 2008 31986252
## 4 213 E 134 2008 1507966544
## 5 213 I 181 2008 2407260
## 6 213 I 182 2008 80414681
## 7 213 I 676 2008 991884
## 8 213 E 258 2008 107246580
## 9 614 I 110 2008 15302709034
## 10 213 I 311 2008
This way we can create a smaller subset of the data with the first 10 rows to perform initial tests:
first10 <- head(trade_data,10)
There is also another way to access row data. We can select the entire first row also like this:
first10[1,]
## reporter flow partner year value
## 1 213 I 186 2008 156130202
Notice that we use square brackets because this is a data set. If you run the above code but with () you will get an error:
first10(1,)
Error in first10(1, ) : could not find function “first10”.
This is because we use the round brackets () to specify the options for functions and square brackets [] to specify the options for data sets.
If we want to select the first five rows, both ways give the same result. Notice in the second way 1:5 means from row 1 to row 5 inclusive.
head(first10, 5)
## reporter flow partner year value
## 1 213 I 186 2008 156130202
## 2 213 E 174 2008 117661208
## 3 213 E 172 2008 31986252
## 4 213 E 134 2008 1507966544
## 5 213 I 181 2008 2407260
first10[1:5,]
## reporter flow partner year value
## 1 213 I 186 2008 156130202
## 2 213 E 174 2008 117661208
## 3 213 E 172 2008 31986252
## 4 213 E 134 2008 1507966544
## 5 213 I 181 2008 2407260
We can also get specific rows, by specifying them inside c(). For instance the first and the fifth row only:
first10[c(1,5),]
## reporter flow partner year value
## 1 213 I 186 2008 156130202
## 5 213 I 181 2008 2407260
In fact, when referring to the data set like this, before the comma is the specification for the row(s) and after - for the column(s).
For example this gives you the first column:
first10[,1]
## [1] 213 213 213 213 213 213 213 213 614 213
We can get the first and third column (reporter and partner columns respectively) by index:
first10[,c(1,3)]
## reporter partner
## 1 213 186
## 2 213 174
## 3 213 172
## 4 213 134
## 5 213 181
## 6 213 182
## 7 213 676
## 8 213 258
## 9 614 110
## 10 213 311
Since the columns have names, we can replace the indices with the column/variable names:
first10[,c("reporter","partner")]
## reporter partner
## 1 213 186
## 2 213 174
## 3 213 172
## 4 213 134
## 5 213 181
## 6 213 182
## 7 213 676
## 8 213 258
## 9 614 110
## 10 213 311
There is a third way to select a column from a data set: by specifying the name of the data set and including a $ before the name of the column. For example, in this case our data set is called first10 and we want the column named “flow”: first10$flow
## [1] I E E E I I I E I I
## Levels: E I
To summarize these three ways are all equivalent in getting the reporter column:
first10[,1]
## [1] 213 213 213 213 213 213 213 213 614 213
first10[,"reporter"]
## [1] 213 213 213 213 213 213 213 213 614 213
first10$reporter ## [1] 213 213 213 213 213 213 213 213 614 213 The usage of either of these depends on personal preference, but they are all equivalent. In cases where variable names are helpful to be shown in the code, the second and third way are preferred. If we had a column/variable name from a CSV file with a space in it, we have to include the quotation marks: first10$'reporter code'
These two lines are equivalent:
first10$'flow' ## [1] I E E E I I I E I I ## Levels: E I first10$"flow"
## [1] I E E E I I I E I I
## Levels: E I
We can also check the class for a column:
class(first10$flow) ## [1] "factor" In this case “flow” is a factor. A factor is a variable in R with limited amount of different values, referred to as “levels” in R. A factor is also known as a categorical variable. Let’s combine the knowledge about selecting rows and columns. We can select just the first five reporters: first10[1:5,"reporter"] ## [1] 213 213 213 213 213 Or the first 5 reporters and partners, the two codes are equivalent: first10[1:5,c(1,3)] ## reporter partner ## 1 213 186 ## 2 213 174 ## 3 213 172 ## 4 213 134 ## 5 213 181 first10[1:5,c("reporter", "partner")] ## reporter partner ## 1 213 186 ## 2 213 174 ## 3 213 172 ## 4 213 134 ## 5 213 181 ### Quiz Let’s do a quick practice! Remember to keep R open so you can run the possible answers and see which one is correct. Which answer: - creates a data set called mydata with the first 20 rows selected from the trade_data - and then creates an object called element which chooses the 5th row element from the 2nd column? (Hint: Check how we got the first10 data and how to select elements. Hint 2: Remember, when selecting elements from a data frame - row comes before column, hence if we want row a and column b, this would be: data[a,b]) A. mydata <- head(trade_data,20) element <- mydata[2,5] #Answer A is incorrect because we choose our element from the second row mydata[5,2], fifth column, instead of fifth row, second column mydata[2,5]. Hence the correct is Answer B. B. mydata <- head(trade_data,20) element <- mydata[5,2] #Answer B is correct! C. mydata <- head(trade_data) element <- mydata[7,2] #Answer C is incorrect because we don't specify the first 20 rows to choose in the head() command. Just running the head function without specifying 20 rows, will only give us six rows. Hence the correct is Answer B. ## 2.3 Logical operators Logical operators are used in programming to check whether statements are TRUE or FALSE. They will be very helpful when we expand on data subsetting in the next section. If we want to check if a value is equal to another value, we use ==. Let’s check if the value of “a” (which we just declared above) is 34: a <- 34 #Just in case you cleared your workspace a == 34 ## [1] TRUE If we check for 35, it gives FALSE. a == 35 ## [1] FALSE We can also check if a value is not equal with !=: a != 34 ## [1] FALSE Notice how for 34, we get FALSE as the value of a is 34. For 35, we get TRUE because our value is different from 35. a != 35 ## [1] TRUE Other logical operators are >: greater than, <: smaller than, >=: greater or equal and <=: smaller or equal. a < 34 ## [1] FALSE a <= 34 ## [1] TRUE We can use logical operators also with data: one2five <- 1:5 three2seven <- 3:7 We can now check if the number one is in the sequence: one2five == 1 ## [1] TRUE FALSE FALSE FALSE FALSE three2seven == 1 ## [1] FALSE FALSE FALSE FALSE FALSE If we use the equal logical operator to compare the two sequences we created, it gives us FALSE. This is because it compares row by row from the first to the last number. one2five==three2seven ## [1] FALSE FALSE FALSE FALSE FALSE Fortunately, there is another way to check if any of the numbers in one2five are in three2seven. We use the %in% operator. one2five %in% three2seven ## [1] FALSE FALSE TRUE TRUE TRUE Logical operators are very helpful when subsetting data. For example, let’s select only the row where the reporter value is 614 from our previous data first10. We do so by specifying the data set from which we want to subset and include square brackets, in this case first10[]. Then we specify the condition and put a comma. This means that it applies to all columns. If we wanted the value of a particular column, we could specify it after the comma. first10[first10$reporter==614,]
## reporter flow partner year value
## 9 614 I 110 2008 15302709034
Just by itself the chunk of the code evaluates logically whether the statement is true for values only for the column where the condition is specified. In this case for the reporter column.
first10$reporter==614 ## [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE We can also select only the rows with exports (flow==“E”). first10[first10$flow=="E",]
## reporter flow partner year value
## 2 213 E 174 2008 117661208
## 3 213 E 172 2008 31986252
## 4 213 E 134 2008 1507966544
## 8 213 E 258 2008 107246580
We can have the criteria above and also select rows with value > 100,000,000 by using &:
first10[first10$flow=="E"& first10$value>100000000,]
## Warning in Ops.factor(first10$value, 1e+08): '>' not meaningful for factors ## reporter flow partner year value ## NA NA <NA> NA NA <NA> ## NA.1 NA <NA> NA NA <NA> ## NA.2 NA <NA> NA NA <NA> ## NA.3 NA <NA> NA NA <NA> The & operator stands for “and”. Make sure to include the & operator after the first line of code, otherwise R does not know that the next line is part of the same expression. This give us an error because column “value” is factor. class(first10$value)
## [1] "factor"
We can change the value column from being factor to numeric like this:
first10$value <- as.numeric(as.character(first10$value))
When converting from class factor to numeric, we first need to convert to the character class first then to numeric, otherwise it will take the index in the factor dictionary rather than the actual value. We convert to the character class by using the as.character command. Then to convert the values to numeric - we use as.numeric.
Now the class is numeric:
class(first10$value) ## [1] "numeric" So we can subset by value with the > operator: first10[first10$flow=="E"&
first10$value>100000000,] ## reporter flow partner year value ## 2 213 E 174 2008 117661208 ## 4 213 E 134 2008 1507966544 ## 8 213 E 258 2008 107246580 We can also subset the exports value for more than 100 million for partner 174: first10[first10$flow=="E"&
first10$value>100000000& first10$partner==174,]
## reporter flow partner year value
## 2 213 E 174 2008 117661208
Same as above, but partner is 174 or 134:
first10[first10$flow=="E"& first10$value>100000000&
first10$partner %in% c(174,134),] ## reporter flow partner year value ## 2 213 E 174 2008 117661208 ## 4 213 E 134 2008 1507966544 Select any flows of any value of partner 174 or 134, where “|” is the “or” operator: first10[first10$partner==174|first10$partner==134,] ## reporter flow partner year value ## 2 213 E 174 2008 117661208 ## 4 213 E 134 2008 1507966544 The “or” (|) operator means we select both values if they exist. In fact, these two lines of code are equivalent in this case: first10[first10$partner %in% c(174,134),]
## reporter flow partner year value
## 2 213 E 174 2008 117661208
## 4 213 E 134 2008 1507966544
first10[first10$partner==174|first10$partner==134,]
## reporter flow partner year value
## 2 213 E 174 2008 117661208
## 4 213 E 134 2008 1507966544
### Quiz
Okay, that was a lot. This is a very important part, so let’s try to practice it!
Which answer correctly selects all the conditions listed below correctly:
- import “I” flow
- trade values larger than (>) a 1000
- select both partners with codes 186 and 181 ?
(Hint: We just did something similar, look at the code!)
A. first10[first10$flow=="I"& first10$value>1000&
first10$partner %in% c(186,181),] #Answer A is correct. Good job! :). B. first10[first10$partner==186|first10\$partner==181,]
#Answer B is incorrect because it only selects the partners correctly (186 and 181), but does not select the import ("I") flow and the trade value to be higher than 1000. Hence, answer A is the correct answer.
This concludes Section 2! From now on we will work with different data sets and in order to have more space on R, you can always run the rm command in order to delete the data sets from your environment in R that are not in use. In this case we delete the first10 data set:
rm(first10)
|
2021-09-28 01:54:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.330029159784317, "perplexity": 2518.723665500451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058589.72/warc/CC-MAIN-20210928002254-20210928032254-00046.warc.gz"}
|
https://www.chase2learn.com/coursera-machine-learning-week-9-quiz-answers-recommender-systems/
|
# Coursera machine learning week 9 Quiz answers Recommender Systems | Andrew NG
In this article, you will find Coursera machine learning week 9 Quiz answers Recommender Systems. Use “Ctrl+F” To Find Any Questions or Answers. For Mobile Users, You Just Need To Click On Three dots In Your Browser & You Will Get A “Find” Option There. Use These Options to Get Any Random Questions Answer.
Try to solve all the assignments by yourself first, but if you get stuck somewhere then feel free to browse the code. Don’t just copy-paste the code for the sake of completion. Even if you copy the code, make sure you understand the code first.
### Coursera machine learning week 9 Quiz answers Recommender Systems| Andrew NG
1. Suppose you run a bookstore, and have ratings (1 to 5 stars) of books. Your collaborative filtering algorithm has learned a parameter vector $inline&space;large&space;theta^{(j)}$ for user j, and a feature vector $inline&space;large&space;x^{(i)}$ for each book. You would like to compute the “training error”, meaning the average squared error of your system’s predictions on all the ratings that you have gotten from your users. Which of these are correct ways of doing so (check all that apply)?For this problem, let m be the total number of ratings you have gotten from your users. (Another way of saying this is that $inline&space;large&space;m&space;=&space;sum_{i=1}^{n_m}&space;sum_{j=1}^{n_u}&space;r(i,j)$).
[Hint: Two of the four options below are correct.]
2. In which of the following situations will a collaborative filtering system be the most appropriate learning algorithm (compared to linear or logistic regression)?
• You manage an online bookstore and you have the book ratings from many users. You want to learn to predict the expected sales volume (number of books sold) as a function of the average rating of a book.
• You’re an artist and hand-paint portraits for your clients. Each client gets a different portrait (of themselves) and gives you 1-5 star rating feedback, and each client purchases at most 1 portrait. You’d like to predict what rating your next customer will give you.
• You run an online bookstore and collect the ratings of many users. You want to use this to identify what books are “similar” to each other (i.e., if one user likes a certain book, what are other books that she might also like?)
• You own a clothing store that sells many styles and brands of jeans. You have collected reviews of the different styles and brands from frequent shoppers, and you want to use these reviews to offer those shoppers discounts on the jeans you think they are most likely to purchase.
• You’ve written a piece of software that has downloaded news articles from many news websites. In your system, you also keep track of which articles you personally like vs. dislike, and the system also stores away features of these articles (e.g., word counts, name of author). Using this information, you want to build a system to try to find additional new articles that you personally will like.
• You run an online news aggregator, and for every user, you know some subset of articles that the user likes and some different subset that the user dislikes. You’d want to use this to find other articles that the user likes.
• You manage an online bookstore and you have the book ratings from many users. For each user, you want to recommend other books she will enjoy, based on her own ratings and the ratings of other users.
3. You run a movie empire, and want to build a movie recommendation system based on collaborative filtering. There were three popular review websites (which we’ll call A, B and C) which users to go to rate movies, and you have just acquired all three companies that run these websites. You’d like to merge the three companies’ datasets together to build a single/unified system. On website A, users rank a movie as having 1 through 5 stars. On website B, users rank on a scale of 1 – 10, and decimal values (e.g., 7.5) are allowed. On website C, the ratings are from 1 to 100
. You also have enough information to identify users/movies on one website with users/movies on a different website. Which of the following statements is true?
• You can merge the three datasets into one, but you should first normalize each dataset’s ratings (say rescale each dataset’s ratings to a 0-1 range).
• You can combine all three training sets into one as long as your perform mean normalization and feature scaling after you merge the data.
• Assuming that there is at least one movie/user in one database that doesn’t also appear in a second database, there is no sound way to merge the datasets, because of the missing data.
• It is not possible to combine these websites’ data. You must build three separate recommendation systems.
• You can merge the three datasets into one, but you should first normalize each dataset separately by subtracting the mean and then dividing by (max – min) where the max and min (5-1) or (10-1) or (100-1) for the three websites respectively.
4. Which of the following are true of collaborative filtering systems? Check all that apply.
• Suppose you are writing a recommender system to predict a user’s book preferences. In order to build such a system, you need that user to rate all the other books in your training set.
• Even if each user has rated only a small fraction of all of your products (so r(i, j) = 0 for the vast majority of (i, j) pairs), you can still build a recommender system by using collaborative filtering.
5. Suppose you have two matrices A and B, where is 5×3 and is 3×5. Their product is C = AB, a 5×5 matrix. Furthermore, you have a 5×5 matrix R where every entry is 0 or 1. You want to find the sum of all elements C(i, j) for which the corresponding R(i, j) is 1, and ignore all elements C(i, j) where R(i, j)=0. One way to do so is the following code:
Which of the following pieces of Octave code will also correctly compute this total?
Check all that apply. Assume all options are in code.
• total = sum(sum((A * B) .* R))
• C = (A * B) .* R; total = sum(C(:));
• total = sum(sum((A * B) * R));
• C = (A * B) * R; total = sum(C(:));
• C = A * B; total = sum(sum(C(R == 1)));
• total = sum(sum(A(R == 1) * B(R == 1));
Disclaimer: Hopefully, this article will be useful for you to find all the Coursera machine learning week 9 Quiz answers Recommender Systems and grab some premium knowledge with less effort.
Finally, we are now, in the end, I just want to conclude some important message for you, Feel free to ask doubts in the comment section. I
will try my best to answer it. If you find this helpful by any means like, comment, and share the post. Please share our posts on social media platforms and also suggest to your friends to Join Our Groups. Don’t forget to subscribe. This is the simplest way to encourage me to keep doing such work.
### FAQs
Is Andrew Ng’s Machine Learning course good?
It is the Best Course for Supervised Machine Learning! Andrew Ng Sir has been like always has such important & difficult concepts of Supervised ML with such ease and great examples, Just amazing!
How do I get answers to coursera assignment?
Use “Ctrl+F” To Find Any Questions Answered. & For Mobile Users, You Just Need To Click On Three dots In Your Browser & You Will Get A “Find” Option There. Use These Options to Get Any Random Questions Answer.
How long does it take to finish coursera Machine Learning?
this specialization requires approximately 3 months with 75 hours of materials to complete, and I finished it in 3 weeks and spent an additional 1 week reviewing the whole course.
How do you submit assignments on Coursera Machine Learning?
Submit a programming assignment Open the assignment page for the assignment you want to submit. Read the assignment instructions and download any starter files. Finish the coding tasks in your local coding environment. Check the starter files and instructions when you need to. Reference
Sharing Is Caring
|
2023-03-25 18:23:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19484621286392212, "perplexity": 1336.2352504333376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945368.6/warc/CC-MAIN-20230325161021-20230325191021-00283.warc.gz"}
|
https://tel.archives-ouvertes.fr/tel-00070844
|
# L'histoire cosmique des baryons dans un univers hierarchique
Abstract : In the framework of the hierarchical model of galaxy formation, small primordial density fluctuations observed on the cosmological microwave background are amplified by gravitational instability leading to the formation of larger and larger halos. The gas collapses and cools in these dark matter potential wells and forms cold centrifugally supported gas discs. These discs are converted into stellar discs that is to say galaxies. The problem in this scenario is the so-called overcooling problem'': the resulting amount of stars is greater than the observed one by a factor of four.
I have therefore studied the evolution of baryons (hydrogen and helium gas) in the Universe using high resolution hydrodynamical simulations. Based on these results, I have developed a simple analytical model for computing the baryons mass fraction in each of the following phases: stars, cold gas in galactic discs, hot gas in clusters and diffuse gas in the intergalactic medium. The comparison of model results to observations shows us that cosmology controls the cosmic history of star formation. The important cosmological role of galactic winds is also shed to light. They eject the cold gas from discs to hot halos, overcoming the overcooling problem. Finally, I have studied the implication of baryon physics onto the diffuse gamma-ray background from light dark matter particles.
Keywords :
Document type :
Theses
https://tel.archives-ouvertes.fr/tel-00070844
Contributor : Yann Rasera <>
Submitted on : Monday, May 22, 2006 - 3:26:13 AM
Last modification on : Friday, April 10, 2020 - 5:09:46 PM
Long-term archiving on: : Sunday, April 4, 2010 - 8:18:44 PM
### Identifiers
• HAL Id : tel-00070844, version 1
### Citation
Yann Rasera. L'histoire cosmique des baryons dans un univers hierarchique. Astrophysique [astro-ph]. Université Paris-Diderot - Paris VII, 2005. Français. ⟨tel-00070844⟩
Record views
|
2020-11-27 06:47:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2953922748565674, "perplexity": 4048.4373874533403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189141.23/warc/CC-MAIN-20201127044624-20201127074624-00628.warc.gz"}
|
https://db0nus869y26v.cloudfront.net/en/Potency_(pharmacology)
|
Concentration-response curves illustrating the concept of potency. For a response of 0.25a.u., Drug B is more potent, as it generates this response at a lower concentration. For a response of 0.75a.u., Drug A is more potent. a.u. refers to "arbitrary units".
The IUPHAR has stated that 'potency' is "an imprecise term that should always be further defined",[1] for instance as ${\displaystyle {\ce {EC_{50))))$, ${\displaystyle {\ce {IC_{50))))$, ${\displaystyle {\ce {ED_{50))))$, ${\displaystyle {\ce {LD_{50))))$ and so on.
|
2022-06-25 20:59:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5966297388076782, "perplexity": 2852.1354072438626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036099.6/warc/CC-MAIN-20220625190306-20220625220306-00524.warc.gz"}
|
http://marktheballot.blogspot.com/2012/12/more-on-house-effects-over-time.html
|
## Sunday, December 9, 2012
### More on house effects over time
Early last decade, Simon Jackman published his Bayesian approach to poll aggregation. It allowed the house effects (systemic biases) of a polling house to be calibrated (either absolutely in terms of a known election outcome, or relatively against the average of all the other polling houses).
Jackman's approach was two-fold. He theorised that voting intention typically did not change much day-to-day (although his model allows for occasional larger movement in public opinion). On most days, the voting intention of the public is much the same as it was on the previous day. In his model, he identified the most likely path that voting intention took each and every day through the period under analysis. This day-to-day track of likely voting intention then must line up (as best it can) with the published polls as they occurred during this period. To help the modeled day-to-day walk of public opinion line up with the published polls, Jackman's approach assumed that each polling house had a systemic bias which is normally distributed around a constant number of percentage points above or below the actual population's voting intention.
Jackman's approach works brilliantly over the short run. In the next chart, which is based on a 100,000 simulation of possible walks that satisfies the constraints in the model, we pick out the median pathway for each day over the last six months. The result is a reasonably smooth curve. While I have not labeled the end point in the median series, it was 47.8 per cent.
However, over longer periods, Jackman's model is less effective. The problem is the assumption that the distribution of house effects remains constant over time. This is not the case. In the next chart, we apply the same 100,000 simulation approach as above, but to the data since the last election. The end point for this chart is 47.7 per cent.
It looks like the estimated population voting intention line is more choppy (because the constantly distributed house effects element of the model is contributing less to the analysis over the longer run). Previously I noted that over the last three years, Essential's house effect has moved around somewhat in comparison to the other polling houses.
All of this got me wondering whether it was possible to design a model that identified this movement in house effects over time - on (say) a six month rolling average basis. My idea was to take the median line from Jackman's model and use it to benchmark the polling houses. I also wondered whether I could then use the newly identified time-varying house-effect to better identify the underlying population voting intention.
The first step of taking a six month rolling average against the original Jackman line was simple as can be seen in the next chart (noting this is a 10,000 run simulation).
However, designing a model where the fixed and variable sides of the model informed each other proved more challenging than I had anticipated (in part because the JAGS program requires the specification of a directed acyclic graph). At first, I could not find an easy way for the fixed effect side of the model to inform variable effects side of the model and for the variable effects side to inform the fixed effects side, without the whole model becoming a cyclical graph.
When I finally solved the problem, a really nice chart for population voting intention popped out the other end (after 2.5 hours of computer time for the 100,000 run simulation).
Also, the six-monthly moving average for the house effects (which is measured against the line) looked a touch smoother (but this may be the result of a 100,000 run versus a 10,000 run for the earlier chart).
This leads me to another observation. A number of other blogs interested in poll aggregation ignore or down-weight the Morgan face-to-face poll series. I have been asked why I use it.
I use the Morgan face to face series because it is fairly consistent in respect of the other polls. It is a bit like comparing a watch that is consistently five minutes slow with a watch that is sometimes a minute or two fast and at other times a minute or two slow, but which moves randomly between theses two states. A watch that is consistently slow is more informative once it has been benchmarked than a watch that might be closer to the actual time, but whose behaviour around the actual time is random. In short, I think the people who ignore or down-play this Morgan series are not taking advantage of really useful information.
Back to the model: All of the volatility ended up in the variable effects daily walk, which is substantially influenced by the outliers.
For the nerds: My JAGS code for this is a bit more complicated than for earlier models. The variables y and y2 are the polling observations over the period (the series are identical - this is how I ensured the graph was acyclical). The observations are ordered in date order. The lower and upper variables map the range of the six-month centred window for estimating the variable effects against the fixed effects (this is calculated in R before handing to JAGS for the MCMC simulation). The lines marked with a triple $sign are the lines that allow the fixed and variable elements of the model to inform each other. model { ## -- temporal model for voting intention (VI) for(i in 2:PERIOD) { # for each day under analysis ... VI[i] ~ dnorm(VI[i-1], walkVIPrecision) # fixed effects walk VI2[i] ~ dnorm(VI2[i-1], walkVIPrecision2) # $$} ## -- initial fixed house-effects observational model for(i in 1:NUMPOLLS) { # for each poll result ... roundingEffect[i] ~ dunif(-houseRounding[i], houseRounding[i]) yhat[i] <- houseEffects[ house[i] ] + VI[ day[i] ] + roundingEffect[i] ## system y[i] ~ dnorm(yhat[i], samplePrecision[i]) ## distribution } ## -- variable effects 6-month window adjusted observational model for(i in 1:NUMPOLLS) { # for each poll result ... count[i] <- sum(house[ lower[i]:upper[i] ] == house[i]) adjHouseEffects[i] <- sum( (y[ lower[i]:upper[i] ] - VI[ day[i] ]) * (house[ lower[i]:upper[i] ] == house[i]) ) / count[i] roundingEffect2[i] ~ dunif(-houseRounding[i], houseRounding[i]) #$$$
yhat2[i] <- adjHouseEffects[i] + VI2[ day[i] ] + roundingEffect2[i] # $$y2[i] ~ dnorm(yhat2[i], samplePrecision[i]) #$$$} ## -- point-in-time sum-to-zero constraint on constant house effects houseEffects[1] <- -sum( houseEffects[2:HOUSECOUNT] ) ## -- priors for(i in 2:HOUSECOUNT) { ## vague normal priors for house effects houseEffects[i] ~ dnorm(0, pow(0.1, -2)) } sigmaWalkVI ~ dunif(0, 0.01) ## uniform prior on std. dev. walkVIPrecision <- pow(sigmaWalkVI, -2) ## for the day-to-day random walk VI[1] ~ dunif(0.4, 0.6) ## initialisation of the voting intention daily walk sigmaWalkVI2 ~ dunif(0, 0.01) ## $$walkVIPrecision2 <- pow(sigmaWalkVI2, -2) ##$$$
VI2[1] ~ dunif(0.4, 0.6) ## \$
}
I suspect this is more complicated than it needs to be; any help in simplifying the approach would be appreciated.
|
2018-01-23 01:56:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5155225992202759, "perplexity": 1664.2558239862205}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891705.93/warc/CC-MAIN-20180123012644-20180123032644-00192.warc.gz"}
|
https://byorgey.wordpress.com/2022/12/03/competitive-programming-in-haskell-infinite-2d-array-level-1/
|
## Competitive programming in Haskell: Infinite 2D array, Level 1
In my previous post, I challenged you to solve Infinite 2D Array using Haskell. As a reminder, the problem specifies a two-parameter recurrence $F_{x,y}$, given by
• $F_{0,0} = 0$
• $F_{0,1} = F_{1,0} = 1$
• $F_{i,0} = F_{i-1,0} + F_{i-2,0}$ for $i \geq 2$
• $F_{0,i} = F_{0,i-1} + F_{0,i-2}$ for $i \geq 2$
• $F_{i,j} = F_{i-1,j} + F_{i,j-1}$ for $i,j \geq 1$.
We are given particular values of $x$ and $y$, and asked to compute $F_{x,y} \bmod (10^9 + 7)$. The problem is that $x$ and $y$ could be as large as $10^6$, so simply computing the entire $x \times y$ array is completely out of the question: it would take almost 4 terabytes of memory to store a $10^6 \times 10^6$ array of 32-bit integer values. In this post, I’ll answer the Level 1 challenge: coming up with a general formula for $F_{x,y}$.
We need to be more clever about computing a given $F_{x,y}$ without computing every entry in the entire 2D array, so we look for some patterns. It’s pretty obvious that the array has Fibonacci numbers along both the top two rows and the first two columns, though it’s sadly just as obvious that we don’t get Fibonacci numbers anywhere else. The last rule, the rule that determines the interior entries, says that each interior cell is the sum of the cell above it and the cell to the left. This looks a lot like the rule for generating Pascal’s triangle, i.e. binomial coefficients; in fact, if the first row and column were specified to be all 1’s instead of Fibonacci numbers, then we would get exactly binomial coefficients.
I knew that binomial coefficients can also be thought of as counting the number of paths from one point in a grid to another which can only take east or south steps, and this finally gave me the right insight. Each interior cell is a sum of other cells, which are themselves sums of other cells, and so on until we get to the edges, and so ultimately each interior cell can be thought of as a sum of a bunch of copies of numbers on the edges, i.e. Fibonacci numbers. How many copies? Well, the number of times each Fibonacci number on an edge contributes to a particular interior cell is equal to the number of paths from the Fibonacci number to the interior cell (with the restriction that the paths’ first step must immediately be into the interior of the grid, instead of taking a step along the first row or column). For example, consider $F_{3,2} = 11$. The two 1’s along the top row contribute 3 times and 1 time, respectively, whereas the 1’s and 2 along the first column contribute 3 times, 2 times, and once, respectively, for a total of $11$:
The number of paths from $F_{0,k}$ to $F_{x,y}$ is the number of grid paths from $(1,k)$ to $(x,y)$, which is $\binom{(x-1) + (y-k)}{y-k}$. Likewise the number of paths from $F_{k,0}$ to $F_{x,y}$ is $\binom{(x-k) + (y-1)}{x-k}$. All together, this yields the formula
$\displaystyle F_{x,y} = \left(\sum_{1 \leq k \leq x} F_k \binom{x-k+y-1}{x-k}\right) + \left(\sum_{1 \leq k \leq y} F_k \binom{y-k+x-1}{y-k}\right) \pmod{P}$
Commenter Soumik Sarkar found a different formula,
$\displaystyle F_{x,y} = F_{x+2y} + \sum_{1 \leq k \leq y} (F_k - F_{2k}) \binom{y-k+x-1}{y-k} \pmod{P}$
which clearly has some similarity to mine, but I have not been able to figure out how to derive it, and Soumik did not explain how they found it. Any insights welcome!
In any case, both of these formulas involve a sum of only $O(x+y)$ terms, instead of $O(xy)$, although the individual terms are going to be much more work to compute. The question now becomes how to efficiently compute Fibonacci numbers and binomial coefficients modulo a prime. I’ll talk about that in the next post!
|
2023-02-03 04:21:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 33, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7977718114852905, "perplexity": 171.93166777054486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500042.8/warc/CC-MAIN-20230203024018-20230203054018-00557.warc.gz"}
|
https://dial.uclouvain.be/pr/boreal/object/boreal:163889
|
# Existence results for parametric boundary value problems involving the mean curvature operator
## Primary tabs
Add to cart Print Copyright (Sherpa/Romeo) References21
Bibliographic reference Bonanno, Gabriele ; Livrea, Roberto ; Mawhin, Jean. Existence results for parametric boundary value problems involving the mean curvature operator. In: No D E A - Nonlinear Differential Equations and Applications, Vol. 22, no. 3, p. 411-426 (2014) http://hdl.handle.net/2078.1/163889
1. Bereanu C., Mawhin J.: Boundary value problems with non-surjective ϕ-Laplacian and one-side bounded nonlinearity. Adv. Differ. Eqs. 11, 35–60 (2006)
2. Bonanno Gabriele, A critical point theorem via the Ekeland variational principle, 10.1016/j.na.2011.12.003
3. Bonanno G., D’Aguì G.: Critical nonlinearities for elliptic Dirichlet problems. Dyn. Syst. Appl. 22, 411–417 (2013)
4. Bonanno, G., Livrea, R.: Existence and multiplicity of periodic solutions for second order Hamiltonian systems depending on a parameter. J. Convex Anal. 20(4), 1075–1094 (2013)
5. Bonanno Gabriele, Pizzimenti Pasquale F., Existence results for nonlinear elliptic problems, 10.1080/00036811.2011.625013
6. BONANNO GABRIELE, SCIAMMETTA ANGELA, AN EXISTENCE RESULT OF ONE NONTRIVIAL SOLUTION FOR TWO POINT BOUNDARY VALUE PROBLEMS, 10.1017/s0004972711002255
7. Bonheure D., Habets P., Obersnel F., Omari P.: Classical and non-classical positive solutions of a prescribed curvature equation with singularities. Rend. Istit. Math. Univ. Trieste 39, 63–85 (2007)
8. Bonheure Denis, Habets Patrick, Obersnel Franco, Omari Pierpaolo, Classical and non-classical solutions of a prescribed curvature equation, 10.1016/j.jde.2007.05.031
9. Brezis H.: Analyse fonctionnelle. Masson, Paris (1987)
10. Capietto Anna, Dambrosio Walter, Zanolin Fabio, Infinitely many radial solutions to a boundary value problem in a ball, 10.1007/bf02505953
11. Chang K. C., Zhang Tan, MULTIPLE SOLUTIONS OF THE PRESCRIBED MEAN CURVATURE EQUATION, Nankai Tracts in Mathematics (2006) ISBN:9789812700612 p.113-128, 10.1142/9789812772688_0005
12. Cid J., Torres Pedro, Solvability for some boundary value problems with $\phi$-Laplacian operators, 10.3934/dcds.2009.23.727
13. D’Aguì G.: Existence results for a mixed bonundary value probelm with Sturm-Liouville equation. Adv. Pure Appl. Math. 2, 237–248 (2011)
14. D’Aguì, G.: Multiplicity results for nonlinear mixed boundary value problem. Bound. Val. Problems 2012 (2012:134) 12pp.
15. Faraci F.: A note on the existence of infinitely many solutions for the one dimensional prescribed curvature equation. Stud. Univ. Babes Bolyai Math. 55(4), 83–90 (2010)
16. HABETS PATRICK, OMARI PIERPAOLO, MULTIPLE POSITIVE SOLUTIONS OF A ONE-DIMENSIONAL PRESCRIBED MEAN CURVATURE PROBLEM, 10.1142/s0219199707002617
17. Le Vy Khoi, Some Existence Results on Nontrivial Solutions of the Prescribed Mean Curvature Equation, 10.1515/ans-2005-0201
18. Obersnel Franco, Classical and Non-Classical Sign-Changing Solutions of a One-Dimensional Autonomous Prescribed Curvature Equation, 10.1515/ans-2007-0409
19. Obersnel Franco, Omari Pierpaolo, Positive solutions of the Dirichlet problem for the prescribed mean curvature equation, 10.1016/j.jde.2010.07.001
20. Obersnel Franco, Omari Pierpaolo, Multiple non-trivial solutions of the Dirichlet problem for the prescribed mean curvature equation, 10.1090/conm/540/10664
21. Pan Hongjing, One-dimensional prescribed mean curvature equation with exponential nonlinearity, 10.1016/j.na.2008.01.027
|
2018-03-23 19:10:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7451729774475098, "perplexity": 11109.56342808555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648431.63/warc/CC-MAIN-20180323180932-20180323200932-00069.warc.gz"}
|
https://tex.stackexchange.com/questions/linked/18910
|
14 questions linked to/from Multiple citations with pages using BibLaTeX
221 views
### Cite multiple authors with page references [duplicate]
Possible Duplicate: Multiple citations with pages using BibLaTeX How can I best include citations with multiple authors and page references? That is, I would like the output to look similar to ...
18 views
### Adding page numbers to footnotes using style=verbose-ibid style [duplicate]
I've just started using LaTeX and would be really grateful for some help in with citations. I have been using the verbose-ibid style and understand how to add page numbers when referring to a single ...
17 views
### Biblatex: chicago-authordate with multiple sources and specifications [duplicate]
Using biblatex with the chicago-authordate style, I want to typeset a citation with multiple sources and page specifications as "(Hacking 1975, p. 49; Hald 1990, p. 31)". Ideally, it would something ...
128k views
### Cite multiple references with Bibtex? [duplicate]
So I use bibtex and I have set my \bibliographystyle to be unsrt. I want Latex to render something like this: ref [1]-[5] support this claim. but if i write: \cite{x1} - \cite{x5} blah blah ...
27k views
### Natbib: Multiple citations with page numbers in one bracket
I'm using natbib with bibliography style "apalike". I can cite two different papers in one bracket; for instance \citep{adams03,collier09} gives me (Adams and Fournier, 2003; Collier et al., 1990). ...
8k views
### How to cite two references and include their pages with natbib?
\cite[p.~11]{Author1:2003a} This inserts the number of page only for one author like "(Author1, 2003, p. 11)" However, I would like something like "(Author1, 2003, p. 11, Author2, 2003, p. 22)" Any ...
6k views
### Sorting citations using \cites command in biblatex
I am using biblatex for citations and bibliography. Now I often have lists of multiple citations, for which biblatex has the \cites command. Now I can get citations to automatically sort if I don't ...
4k views
### multiple citations with individual page numbers using natbib
I am struggling to combine multiple citations with individual page numbers. I have used this natbib reference sheet, but I am stuck (with what is displayed below). It seems as I cannot combine ...
412 views
### citation call-outs separated with a semicolon
I am using the following: \citep[vgl. bspw. ][S. 380]{Bortz2006}\citep[S. 40]{Fayyad1996}\citep[S. 11]{Maimon2010} to get the output: [vgl. bspw. BD06, S. 380][FPSS96, S. 40][MR10, S. 11] Is it ...
115 views
### Multiple citations with multiple comments
I often use multiple citations to blocks I write. So one example could be: This is one part of my text \cite{cit01, cit02, cit03}. Resulting to: This is one part of my text (Kaplan 1996; Norton ...
217 views
### Mimic behavior of \cites without BibLaTeX?
Per this answer, BibLaTeX allows for the \cites command to combine multiple citations with page numbers. I was wondering if there's a solution to allow for similar (read: identical) functionality ...
108 views
### Citing a book chapter in a multiple entry citation?
I would like to be able to cite a specific chapter in a book within a multi-entry citation and have the additional citation information map correctly (full MWE follows this explanation). I found this ...
|
2019-12-15 15:14:53
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9376154541969299, "perplexity": 6327.080332305858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541308604.91/warc/CC-MAIN-20191215145836-20191215173836-00101.warc.gz"}
|
https://www.oercommons.org/browse?batch_start=40&f.alignment=CCSS.Math.Content.7.G.A.1
|
# 59 Results
View
Selected filters:
• CCSS.Math.Content.7.G.A.1
Only Sharing Permitted
CC BY-NC-ND
Rating
This lesson unit is intended to help assess how well students are able to interpret and use scale drawings to plan a garden layout. This involves using proportional reasoning and metric units.
Subject:
Algebra
Geometry
Ratios and Proportions
Material Type:
Assessment
Lesson Plan
Provider:
Shell Center for Mathematical Education
U.C. Berkeley
Provider Set:
Mathematics Assessment Project (MAP)
04/26/2013
Only Sharing Permitted
CC BY-NC-ND
Rating
This lesson unit is intended to help you assess how well students are able to: recognize and use common 2D representations of 3D objects and identify and use the appropriate formula for finding the circumference of a circle.
Subject:
Education
Mathematics
Geometry
Material Type:
Assessment
Lesson Plan
Provider:
Shell Center for Mathematical Education
U.C. Berkeley
Provider Set:
Mathematics Assessment Project (MAP)
Author:
http://map.mathshell.org/
04/26/2013
Only Sharing Permitted
CC BY-NC-ND
Rating
This lesson unit is intended to help you assess how well students are able to use geometric properties to solve problems. In particular, it will support you in identifying and helping students who have the following difficulties: Solving problems relating to using the measures of the interior angles of polygons; and solving problems relating to using the measures of the exterior angles of polygons.
Subject:
Geometry
Material Type:
Assessment
Lesson Plan
Provider:
Shell Center for Mathematical Education
U.C. Berkeley
Provider Set:
Mathematics Assessment Project (MAP)
04/26/2013
Only Sharing Permitted
CC BY-NC-ND
Rating
This lesson unit is intended to help you assess how well students are able to: solve simple problems involving ratio and direct proportion; choose an appropriate sampling method; and collect discrete data and record them using a frequency table.
Subject:
Education
Geometry
Measurement and Data
Ratios and Proportions
Material Type:
Assessment
Lecture Notes
Lesson Plan
Teaching/Learning Strategy
Provider:
Shell Center for Mathematical Education
U.C. Berkeley
Provider Set:
Mathematics Assessment Project (MAP)
Author:
http://map.mathshell.org/
04/26/2013
Conditional Remix & Share Permitted
CC BY-NC-SA
Rating
This task was developed by high school and postsecondary mathematics and design/pre-construction educators, and validated by content experts in the Common Core State Standards in mathematics and the National Career Clusters Knowledge & Skills Statements. It was developed with the purpose of demonstrating how the Common Core and CTE Knowledge & Skills Statements can be integrated into classroom learning - and to provide classroom teachers with a truly authentic task for either mathematics or CTE courses.
Subject:
Architecture and Design
Geometry
Ratios and Proportions
Material Type:
Activity/Lab
Assessment
Homework/Assignment
Lesson Plan
Provider:
National Association of State Directors of Career Technical Education Consortium
Provider Set:
Career Technical Education
03/05/2012
Only Sharing Permitted
CC BY-NC-ND
Rating
This lesson unit is intended to help you assess how well students are able to: Model a situation; make sensible, realistic assumptions and estimates; and use assumptions and estimates to create a chain of reasoning, in order to solve a practical problem.
Subject:
Geometry
Material Type:
Assessment
Lesson Plan
Provider:
Shell Center for Mathematical Education
U.C. Berkeley
Provider Set:
Mathematics Assessment Project (MAP)
04/26/2013
Only Sharing Permitted
CC BY-NC-ND
Rating
This lesson unit is intended to help you assess how well students are able to: Interpret a situation and represent the variables mathematically; select appropriate mathematical methods to use; explore the effects on the area of a rectangle of systematically varying the dimensions whilst keeping the perimeter constant; interpret and evaluate the data generated and identify the optimum case; and communicate their reasoning clearly.
Subject:
Geometry
Material Type:
Assessment
Lesson Plan
Provider:
Shell Center for Mathematical Education
U.C. Berkeley
Provider Set:
Mathematics Assessment Project (MAP)
04/26/2013
Conditional Remix & Share Permitted
CC BY-NC
Rating
Students will join the buildings together to form a city with streets and sidewalks running between the buildings. Student groups will make their presentations, provide feedback to other students’ presentations, and get evaluated on their listening skills.Key ConceptsIn this culminating event, students present their project plan and solution to the class. The presentation allows students to explain their problem-solving plan, communicate their reasoning, and construct a viable argument about a mathematical problem.Students also listen to other project presentations and provide feedback to the presenters. Listeners have the opportunity to critique the mathematical reasoning of others.GoalsPresent projects and demonstrate understanding of the unit concepts.Clarify any misconceptions or difficult areas from the Final Assessment.Give feedback on other project presentations.Exhibit good listening skills.Review the concepts from the unit.
Subject:
Mathematics
Material Type:
Lesson Plan
Provider:
Pearson
09/21/2015
Rating
The evil Scaleo has escaped from prison and is transforming the length, width, and height of objects until they become useless – or dangerous.
Who can put things right? Superheroine Scale Ella uses the power of scale factor to foil the villain.
Subject:
Mathematics
Material Type:
Lecture
Provider:
Learning Games Lab
Author:
NMSU Learning Games Lab
07/15/2015
Educational Use
Rating
Students use bearing measurements to triangulate and determine objects' locations. Working in teams of two or three, they must put on their investigative hats as they take bearing measurements to specified landmarks in their classroom (or other rooms in the school) from a "mystery location." With the extension activity, students are challenged with creating their own maps of the classroom or other school location and comparing them with their classmates' efforts.
Subject:
Education
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Janet Yowell
Malinda Schaefer Zarske
Matt Lippis
10/14/2015
Educational Use
Rating
Acting as if they are biomedical engineers, students design and print 3D prototypes of pressure sensors that measure the pressure of the eyes of people diagnosed with glaucoma. After completing the tasks within the associated lesson, students conduct research on pressure gauges, apply their understanding of radio-frequency identification (RFID) technology and its components, iterate their designs to make improvements, and use 3D software to design and print 3D prototypes. After successful 3D printing, teams present their models to their peers. If a 3D printer is not available, use alternate fabrication materials such as modeling clay, or end the activity once the designs are complete.
Subject:
Engineering
Health, Medicine and Nursing
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Janelle Orange
10/14/2015
Educational Use
Rating
Accuracy of measurement in navigation depends very much on the situation. If a sailor's target is an island 200 km wide, sailing off center by 10 or 20 km is not a major problem. But, if the island were only 1 km wide, it would be missed if off just the smallest bit. Many of the measurements made while navigating involve angles, and a small error in the angle can translate to a much larger error in position when traveling long distances.
Subject:
Education
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Janet Yowell
Jeff White
Malinda Schaefer Zarske
Matt Lippis
10/14/2015
Educational Use
Rating
Students learn how to identify the major features in a topographical map. They learn that maps come in a variety of forms: city maps, road maps, nautical maps, topographical maps, and many others. Map features reflect the intended use. For example, a state map shows cities, major roads, national parks, county lines, etc. A city map shows streets and major landmarks for that city, such as hospitals and parks. Topographical maps help navigate the wilderness by showing the elevation, mountains, peaks, rivers and trails.
Subject:
Education
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Janet Yowell
Malinda Schaefer Zarske
Matt Lippis
10/14/2015
Educational Use
Rating
Students follow the steps of the engineering design process while learning more about assistive devices and biomedical engineering applied to basic structural engineering concepts. Their engineering challenge is to design, build and test small-scale portable wheelchair ramp prototypes for fictional clients. They identify suitable materials and demonstrate two methods of representing design solutions (scale drawings and simple models or classroom prototypes). Students test the ramp prototypes using a weighted bucket; successful prototypes meet all the student-generated design requirements, including support of a predetermined weight.
Subject:
Engineering
Health, Medicine and Nursing
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Jared R. Quinn
Kristen Billiar
Terri Camesano
09/18/2014
Unrestricted Use
CC BY
Rating
The purpose of this task is for students to translate between measurements given in a scale drawing and the corresponding measurements of the object represented by the scale drawing.
Subject:
Mathematics
Geometry
Material Type:
Activity/Lab
Provider:
Illustrative Mathematics
Provider Set:
Illustrative Mathematics
Author:
Illustrative Mathematics
05/01/2012
Conditional Remix & Share Permitted
CC BY-NC
Rating
Students will explore scale and use it to find measurements in scale drawings.Key ConceptsScale drawings are drawn proportionally so that there is a ratio between a given length on the drawing and the actual length. This ratio is used to set up a proportion to find other measurements.GoalsUnderstand that scale drawings are proportional.Use scale to find actual measurements.ELL: Define these terms in the context of the discussion:scalescale drawingscaled to fitproportionalAllow ELLs to use the dictionary if they wish.
Subject:
Geometry
Ratios and Proportions
Material Type:
Lesson Plan
Provider:
Pearson
09/21/2015
Educational Use
Rating
Students learn about two-axis rotations, and specifically how to rotate objects both physically and mentally about two axes. A two-axis rotation is a rotation of an object about a combination of x, y or z-axes, as opposed to a single-axis rotation, which is about a single x, y or z-axis. Students practice drawing two-axis rotations through an exercise using simple cube blocks to create shapes, and then drawing on triangle-dot paper the shapes from various x-, y- and z-axis rotation perspectives. They use the right-hand rule to explore the rotations of objects. A worksheet is provided. This activity is part of a multi-activity series towards improving spatial visualization skills. At activity end, students re-take the 12-question quiz they took in the associated lesson (before conducting four associated activities) to measure how their spatial visualizations skills improved.
Subject:
Mathematics
Material Type:
Activity/Lab
Provider:
TeachEngineering
Author:
Emily Breidt
Jacob Segil
02/07/2017
Educational Use
Rating
Spatial visualization is the study of two- and three-dimensional objects and the practice of mental manipulation of objects. Spatial visualization skills are important in a range of subjects and activities like mathematics, physics, engineering, art and sports! In this lesson, students are introduced to the concept of spatial visualization and measure their spatial visualization skills by taking the provided 12-question quiz. Following the lesson, students complete the four associated spatial visualization activities and then re-take the quiz to see how much their spatial visualization skills have improved.
Subject:
Mathematics
Material Type:
Lesson
Provider:
TeachEngineering
Author:
Emily C. Gill
Jacob Segil
02/07/2017
Conditional Remix & Share Permitted
CC BY-NC
Rating
During this problem-based blended learning module students will be designing their dream bedroom as well as creating a scale drawing of the items they chose to be in their bedroom. The launch activity introduces the students to Scale City, which is a video that explores scale models in the real world. Students are then given dimensions for a fictional bedroom to furnish with items of their choosing. Price is not considered in this module, but a budget could be introduced as an extension of the module. Students will then spend time researching items that they would want to place in their bedroom with the area constraints given. Students will have the opportunity to provide each other peer feedback on their bedroom designs. Once students have a rough idea of their bedroom design, they will spend some time creating a scale drawing of their bedroom on graph paper. This will give students the opportunity to use a scale factor to create a scale drawing. Students will again be provided feedback on their designs and be given time to reflect and redesign as needed. If students need extra time to practice using a scale factor and creating scale models, a station rotation lesson has been included as an optional resource.
Subject:
Mathematics
Material Type:
Lesson Plan
Author:
Blended Learning Teacher Practice Network
07/27/2018
Unrestricted Use
CC BY
Rating
This is a task from the Illustrative Mathematics website that is one part of a complete illustration of the standard to which it is aligned. Each task has at least one solution and some commentary that addresses important aspects of the task and its potential use.
Subject:
Geometry
Material Type:
Activity/Lab
Provider:
Illustrative Mathematics
Provider Set:
Illustrative Mathematics
Author:
Illustrative Mathematics
08/06/2015
Conditional Remix & Share Permitted
CC BY-NC
Rating
Four full-year digital course, built from the ground up and fully-aligned to the Common Core State Standards, for 7th grade Mathematics. Created using research-based approaches to teaching and learning, the Open Access Common Core Course for Mathematics is designed with student-centered learning in mind, including activities for students to develop valuable 21st century skills and academic mindset.
Subject:
Mathematics
Material Type:
Full Course
Provider:
Pearson
10/06/2016
Educational Use
Rating
Students explore orbit transfers and, specifically, Hohmann transfers. They investigate the orbits of Earth and Mars by using cardboard and string. Students learn about the planets' orbits around the sun, and about a transfer orbit from one planet to the other. After the activity, students will know exactly what is meant by a delta-v maneuver!
Subject:
Engineering
Astronomy
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Janet Yowell
Malinda Schaefer Zarske
10/14/2015
Educational Use
Rating
Students construct model landfill liners using tape and strips of plastic, within resource constraints. The challenge is to construct a bag that is able to hold a cup of water without leaking. This represents similar challenges that environmental engineers face when piecing together liners for real landfills that are acres and acres in size.
Subject:
Engineering
Environmental Science
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Janet Yowell
Malinda Schaefer Zarske
Melissa Straten
10/14/2015
Educational Use
Rating
In this activity, students explore the importance of charts to navigation on bodies of water. Using one worksheet, students learn to read the major map features found on a real nautical chart. Using another worksheet, students draw their own nautical chart using the symbols and identifying information learned.
Subject:
Education
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Denise Carlson
Janet Yowell
Malinda Schaefer Zarske
Matt Lippis
10/14/2015
Unrestricted Use
CC BY
Rating
This is a task from the Illustrative Mathematics website that is one part of a complete illustration of the standard to which it is aligned. Each task has at least one solution and some commentary that addresses important aspects of the task and its potential use.
Subject:
Geometry
Material Type:
Activity/Lab
Provider:
Illustrative Mathematics
Provider Set:
Illustrative Mathematics
Author:
Illustrative Mathematics
08/06/2015
Rating
Two besotted rulers must embrace proportional units in order to unite their lands. It takes mathematical reasoning to identify the problem, and solution, when engineers from Queentopia and Kingopolis build a bridge to meet in the middle of the river.
Subject:
Mathematics
Material Type:
Lecture
Provider:
Learning Games Lab
Author:
NMSU Learning Games Lab
07/15/2015
Educational Use
Rating
Students learn about one-axis rotations, and specifically how to rotate objects both physically and mentally to understand the concept. They practice drawing one-axis rotations through a group exercise using cube blocks to create shapes and then drawing those shapes from various x-, y- and z-axis rotation perspectives on triangle-dot paper (isometric paper). They learn the right-hand rule to explore rotations of objects. A worksheet is provided. This activity is part of a multi-activity series towards improving spatial visualization skills.
Subject:
Mathematics
Material Type:
Activity/Lab
Provider:
TeachEngineering
Author:
Emily Breidt
Jacob Segil
02/07/2017
Educational Use
Rating
As students learn more about the manufacturing process, they use the final prototypes created in the previous activity to evaluate, design and manufacture final products. Teams work with more advanced materials and tools, such as plywood, Plexiglas, metals, epoxies, welding materials and machining tools. (Note: Conduct this activity in the context of a design project that students are working on; this activity is Step 6 in a series of six that guide students through the engineering design loop.)
Subject:
Engineering
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Denise W. Carlson
Lauren Cooper
Malinda Schaefer Zarske
09/18/2014
Educational Use
Rating
Students learn how different characteristics of shapes—side lengths, perimeter and area—change when the shapes are scaled, either enlarged or reduced. Student pairs conduct a “scaling investigation” to measure and calculate shape dimensions (rectangle, quarter circle, triangle; lengths, perimeters, areas) from a bedroom floorplan provided at three scales. They analyze their data to notice the mathematical relationships that hold true during the scaling process. They see how this can be useful in real-world situations like when engineers design wearable or implantable biosensors. This prepares students for the associated activity in which they use this knowledge to help them reduce or enlarge their drawings as part of the process of designing their own wearables products. Pre/post-activity quizzes, a worksheet and wrap-up concepts handout are provided.
Subject:
Career and Technical Education
Mathematics
Measurement and Data
Numbers and Operations
Material Type:
Lesson
Provider:
TeachEngineering
Author:
Denise W. Carlson
Evelynne Pyne
Lauchlin Blue
02/07/2017
Educational Use
Rating
Students are introduced to renewable energy, including its relevance and importance to our current and future world. They learn the mechanics of how wind turbines convert wind energy into electrical energy and the concepts of lift and drag. Then they apply real-world technical tools and techniques to design their own aerodynamic wind turbines that efficiently harvest the most wind energy. Specifically, teams each design a wind turbine propeller attachment. They sketch rotor blade ideas, create CAD drawings (using Google SketchUp) of the best designs and make them come to life by fabricating them on a 3D printer. They attach, test and analyze different versions and/or configurations using a LEGO wind turbine, fan and an energy meter. At activity end, students discuss their results and the most successful designs, the aerodynamics characteristics affecting a wind turbine's ability to efficiently harvest wind energy, and ideas for improvement. The activity is suitable for a class/team competition. Example 3D rotor blade designs are provided.
Subject:
Career and Technical Education
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
AMPS
Gisselle Cunningham
Lindrick Outerbridge
Russell Holstein
02/17/2017
Conditional Remix & Share Permitted
CC BY-NC
Rating
Lesson OverviewStudent groups make their presentations, provide feedback on other students’ presentations, and get evaluated on their listening skills.Key ConceptsIn this culminating event, students present their project plan and solution to the class. The presentation allows students to explain their problem-solving plan, communicate their reasoning, and construct a viable argument about a mathematical problem.Students also listen to other project presentations and provide feedback to the presenters. Listeners have the opportunity to critique the mathematical reasoning of others.GoalsPresent project to the class.Give feedback on other project presentations.Exhibit good listening skills.Reflect on the problem-solving process.
Subject:
Mathematics
Material Type:
Lesson Plan
Provider:
Pearson
09/21/2015
Rating
Zooming In On Figures
Unit Overview
Type of Unit: Concept; Project
Length of Unit: 18 days and 5 days for project
Prior Knowledge
Students should be able to:
Find the area of triangles and special quadrilaterals.
Use nets composed of triangles and rectangles in order to find the surface area of solids.
Find the volume of right rectangular prisms.
Solve proportions.
Lesson Flow
After an initial exploratory lesson that gets students thinking in general about geometry and its application in real-world contexts, the unit is divided into two concept development sections: the first focuses on two-dimensional (2-D) figures and measures, and the second looks at three-dimensional (3-D) figures and measures.
The first set of conceptual lessons looks at 2-D figures and area and length calculations. Students explore finding the area of polygons by deconstructing them into known figures. This exploration will lead to looking at regular polygons and deriving a general formula. The general formula for polygons leads to the formula for the area of a circle. Students will also investigate the ratio of circumference to diameter ( pi ). All of this will be applied toward looking at scale and the way that length and area are affected. All the lessons noted above will feature examples of real-world contexts.
The second set of conceptual development lessons focuses on 3-D figures and surface area and volume calculations. Students will revisit nets to arrive at a general formula for finding the surface area of any right prism. Students will extend their knowledge of area of polygons to surface area calculations as well as a general formula for the volume of any right prism. Students will explore the 3-D surface that results from a plane slicing through a rectangular prism or pyramid. Students will also explore 3-D figures composed of cubes, finding the surface area and volume by looking at 3-D views.
The unit ends with a unit examination and project presentations.
Subject:
Mathematics
Geometry
Material Type:
Unit of Study
Provider:
Pearson
Conditional Remix & Share Permitted
CC BY-NC
Rating
Students will resume their project and decide on dimensions for their buildings. They will use scale to calculate the dimensions and areas of their model buildings when full size. Students will also complete a Self Check in preparation for the Putting It Together lesson.Key ConceptsThe first part of the project is essentially a review of the unit so far. Students will find the area of a composite figure—either a polygon that can be broken down into known areas, or a regular polygon. Students will also draw the figure using scale and find actual lengths and areas.GoalsRedraw a scale drawing at a different scale.Find measurements using a scale drawing.Find the area of a composite figure.SWD: Consider what supplementary materials may benefit and support students with disabilities as they work on this project:Vocabulary resource(s) that students can reference as they work:List of formulas, with visual supports if appropriateClass summaries or lesson artifacts that help students to recall and apply newly introduced skillsChecklists of expectations and steps required to promote self-monitoring and engagementModels and examplesStudents with disabilities may take longer to develop a solid understanding of newly introduced skills and concepts. They may continue to require direct instruction and guided practice with the skills and concepts relating to finding area and creating and interpreting scale drawings. Check in with students to assess their understanding of newly introduced concepts and plan review and reinforcement of skills as needed.ELL: As academic vocabulary is reviewed, be sure to repeat it and allow students to repeat after you as needed. Consider writing the words as they are being reviewed. Allow enough time for ELLs to check their dictionaries if they wish.
Subject:
Geometry
Material Type:
Lesson Plan
Provider:
Pearson
09/21/2015
Conditional Remix & Share Permitted
CC BY-NC
Rating
This problem-based learning module is designed to engage students in solving a real problem within the community. The question being “How can I help my community get digitally connected?” Students will choose to investigate one of three solutions of making wifi available in our school district to the most populated areas. They will either choose to put Wifi on bus, placing hotspots in the community or using kajeet. The students will be using Google Earth Pro to place circles on a map and calculating the area of these circles. Students will make a model of these circles onto a hard copy using scale factor. At the conclusion, the students will present findings to administration, the board of education, state and local leaders as well as their peers. These findings can be presented through the choice of a display board, flyer, video production or prezi.This blended module includes teacher-led discussion, group-led investigation and discussions along with technology integration.
Subject:
Mathematics
Geometry
Material Type:
Lesson Plan
Author:
Blended Learning Teacher Practice Network
11/21/2017
Conditional Remix & Share Permitted
CC BY-NC-SA
Rating
In Module 4, students deepen their understanding of ratios and proportional relationships from Module 1 by solving a variety of percent problems. They convert between fractions, decimals, and percents to further develop a conceptual understanding of percent and use algebraic expressions and equations to solve multi-step percent problems. An initial focus on relating 100% to the whole serves as a foundation for students. Students begin the module by solving problems without using a calculator to develop an understanding of the reasoning underlying the calculations. Material in early lessons is designed to reinforce students understanding by having them use mental math and basic computational skills. To develop a conceptual understanding, students use visual models and equations, building on their earlier work with these. As the lessons and topics progress and students solve multi-step percent problems algebraically with numbers that are not as compatible, teachers may let students use calculators so that their computational work does not become a distraction.
Subject:
Ratios and Proportions
Material Type:
Module
Provider:
New York State Education Department
Provider Set:
EngageNY
01/02/2014
Educational Use
Rating
Student teams locate a contaminant spill in a hypothetical site by measuring the pH of soil samples. Then they predict the direction of groundwater flow using mathematical modeling. They also use the engineering design process to come up with alternative treatments for the contaminated water.
Subject:
Engineering
Environmental Science
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Ben Heavner
Janet Yowell
Malinda Schaefer Zarske
Melissa Straten
10/14/2015
Educational Use
Rating
In this activity, students will learn how to actually triangulate using a compass, topographical (topo) map and view of outside landmarks. It is best if a field trip to another location away from school is selected. The location should have easily discernable landmarks (like mountains or radio towers) and changes in elevation (to illustrate the topographical features) to enhance the activity. A national park is an ideal location, and visiting a number of parks, especially parks with hiking trails, is especially beneficial.
Subject:
Education
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Janet Yowell
Malinda Schaefer Zarske
Matt Lippis
10/14/2015
Unrestricted Use
CC BY
Rating
This is a task from the Illustrative Mathematics website that is one part of a complete illustration of the standard to which it is aligned. Each task has at least one solution and some commentary that addresses important aspects of the task and its potential use.
Subject:
Geometry
Material Type:
Activity/Lab
Provider:
Illustrative Mathematics
Provider Set:
Illustrative Mathematics
Author:
Illustrative Mathematics
08/06/2015
Conditional Remix & Share Permitted
CC BY-NC
Rating
Students are introduced to real-world applications of geometry and measurement by looking at architectural plans. Students also begin to get familiar with reading architectural plans and thinking about scale.Key ConceptsSince this lesson is exploratory, all of the mathematics discussed will be informal. However, most of the mathematics that students will see in the unit is introduced in this lesson. Students look at length, area, surface area, and volume and examine how these measurements pertain to architectural plans and determining building costs. Students will also consider scale and how scale is used in architectural plans and math drawings.GoalsThink about what measurements are needed to build a building.Think about what measurements determine the cost of a building.Think about how scale is used in math drawings.SWD: Check in to ensure that students understand the meaning of domain-specific vocabulary terms such as dimensions, scale, and area. You may also need to clarify the meaning of the word contract for some students.ELL: Consider having students compile a list or resource with key vocabulary terms for this unit.
Subject:
Geometry
Material Type:
Lesson Plan
Provider:
Pearson
09/21/2015
Educational Use
Rating
In this unit, students learn the very basics of navigation, including the different kinds of navigation and their purposes. The concepts of relative and absolute location, latitude, longitude and cardinal directions are explored, as well as the use and principles of maps and a compass. Students discover the history of navigation and learn the importance of math and how it ties into navigational techniques. Understanding how trilateration can determine one's location leads to a lesson on the global positioning system and how to use a GPS receiver. The unit concludes with an overview of orbits and spacecraft trajectories from Earth to other planets.
Subject:
Engineering
Physical Geography
Material Type:
Full Course
Provider:
TeachEngineering
Provider Set:
TeachEngineering
10/14/2015
Conditional Remix & Share Permitted
CC BY-NC
Rating
Students critique their work from the Self Check and redo the task after receiving feedback. Students then take a quiz to review the goals of the unit.Key ConceptsStudents understand how to find the area of figures such as rectangles and triangles. They have applied that knowledge to finding the area of composite figures and regular polygons. The area of regular polygons was extended to understand the area of a circle. Students also applied ratio and proportion to interpret scale drawings and redraw them at a different scale.GoalsCritique and revise student work.Apply skills learned in the unit.Understand two-dimensional measurements:Area of composite figures, including regular polygons.Area and circumference of circles.Interpret scale drawings and redraw them at a different scale.SWD: Make sure all students have the prerequisite skills for the activities in this lesson.Students should understand these domain-specific terms:composite figuresregular polygonsareacircumferencescale drawingstwo dimensionalIt may be helpful to preteach these terms to students with disabilities.ELL: As academic vocabulary is reviewed, be sure to repeat it and allow students to repeat after you as needed. Consider writing the words as they are being reviewed. Allow enough time for ELLs to check their dictionaries if they wish.
Subject:
Geometry
Material Type:
Lesson Plan
Provider:
Pearson
09/21/2015
Conditional Remix & Share Permitted
CC BY-NC
Rating
Lesson OverviewStudents will work on the final portion of their project which includes creating the nets for the sides, making a slice in one of their buildings, and putting their buildings together. Once their two model buildings are complete, they will find the surface area and volume for their models and the full-size buildings their models represent.Key ConceptsThe second part of the project is essentially a review of the second half of the unit, while still using scale drawings. Students will find the surface area of a prism as well as the surface area of a truncated prism. The second prism will require estimating and problem solving to figure out the net and find the surface area. Students will also be drawing the figure using scale to find actual surface area.GoalsRedraw a scale drawing at a different scale.Find measurements using a scale drawing.Find the surface area of a prism.SWD: Students with disabilities may have a more challenging time identifying areas of improvement to target in their projects. It may be helpful to model explicitly for students (using an example project or student sample) how to review a project using the rubric to assess and plan for revisions based on that assessment.Students with fine motor difficulties may require grid paper with a larger scale. Whenever motor tasks are required, consider adaptive tools or supplementary materials that may benefit students with disabilities.Students with disabilities may struggle to recall prerequisite skills as they move through the project. It may be necessary to check in with students to review and reinforce estimation skills.
Subject:
Geometry
Material Type:
Lesson Plan
Provider:
Pearson
09/21/2015
Conditional Remix & Share Permitted
CC BY-NC
Rating
In this problem-based learning module, students will work collaboratively to improve the accessibility or safety of their school or community. For example, students could identify that accessibility ramps need to be added to the school property or additional sidewalks need to be created/repaired to increase the safety of students as they walk to school. Students would work together to create models of these improvements and create a communications plan that informs the stakeholders of the materials needed to create these improvements (i.e. using volume to determine the amount of concrete, using angles to determine measurements for ramps, etc..).
Subject:
Geometry
Material Type:
Lesson Plan
Author:
Blended Learning Teacher Practice Network
11/21/2017
Conditional Remix & Share Permitted
CC BY-NC
Rating
Throughout this problem-based learning module students will address real world skills. Students will be asked to brainstorm ideas and think innovatively both independently and collaboratively in addressing a real-world problem that is relevant to their daily lives and surroundings. Students/teams will be encouraged to use the internet for research purposes in their design phase. What components should be included for a modern, updated classroom? Students will utilize various online platforms to design an ideal, modern, 21st century “dream classroom”. Students will incorporate components that would meet the needs of all learners and a classroom that would be able to integrate technology. These classrooms can be shared with relevant individuals in the community and others in the school building.
Subject:
Mathematics
Material Type:
Lesson Plan
Author:
Blended Learning Teacher Practice Network
07/27/2018
Conditional Remix & Share Permitted
CC BY-NC-SA
Rating
In this 30-day Grade 7 module, students build upon sixth grade reasoning of ratios and rates to formally define proportional relationships and the constant of proportionality. Students explore multiple representations of proportional relationships by looking at tables, graphs, equations, and verbal descriptions. Students extend their understanding about ratios and proportional relationships to compute unit rates for ratios and rates specified by rational numbers. The module concludes with students applying proportional reasoning to identify scale factor and create a scale drawing.
Subject:
Ratios and Proportions
Material Type:
Module
Provider:
New York State Education Department
Provider Set:
EngageNY
05/14/2013
Educational Use
Rating
Students use scaling from real-world data to obtain an idea of the immense size of Mars in relation to the Earth and the Moon, as well as the distances between them. Students calculate dimensions of the scaled versions of the planets, and then use balloons to represent their relative sizes and locations.
Subject:
Engineering
Astronomy
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Chris Yakacki
Daria Kotys-Schwartz
Geoffrey Hill
Janet Yowell
Malinda Schaefer Zarske
09/18/2014
Educational Use
Rating
In this activity, students will learn how to read a topographical map and how to triangulate with just a map. True triangulation requires both a map and compass, but to simplify the activity and make it possible indoors, the compass information is given. Students will practice converting a compass measurement to a protractor measurement, as well as reverse a bearing direction (i.e., if they know a tree's bearing is 100 degrees from you, they can determine what bearing they are from the tree). Students will use the accompanying worksheets to take a bearing of certain landmarks and then start at those landmarks to work backwards to figure out where they are.
Subject:
Education
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Janet Yowell
Malinda Schaefer Zarske
Matt Lippis
10/14/2015
Educational Use
Rating
Students capture and examine air particles to gain an appreciation of how much dust, pollen and other particulate matter is present in the air around them. Students place "pollution detectors" at various locations to determine which places have a lot of particles in the air and which places do not have as many. Quantifying and describing these particles is a first step towards engineering methods of removing contaminants from the air.
Subject:
Engineering
Environmental Science
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Ben Heavner
Janet Yowell
Malinda Schaefer Zarske
Melissa Straten
10/14/2015
Unrestricted Use
CC BY
Rating
This is a task from the Illustrative Mathematics website that is one part of a complete illustration of the standard to which it is aligned. Each task has at least one solution and some commentary that addresses important asects of the task and its potential use. Here are the first few lines of the commentary for this task: On the map below, $\frac14$ inch represents one mile. Candler, Canton, and Oteen are three cities on the map. If the distance between the real towns of...
Subject:
Mathematics
Material Type:
Activity/Lab
Provider:
Illustrative Mathematics
Provider Set:
Illustrative Mathematics
Author:
Illustrative Mathematics
09/08/2013
Unrestricted Use
CC BY
Rating
This is a task from the Illustrative Mathematics website that is one part of a complete illustration of the standard to which it is aligned. Each task has at least one solution and some commentary that addresses important aspects of the task and its potential use.
Subject:
Geometry
Material Type:
Activity/Lab
Provider:
Illustrative Mathematics
Provider Set:
Illustrative Mathematics
Author:
Illustrative Mathematics
08/06/2015
Unrestricted Use
CC BY
Rating
Students learn about the history of tangrams. They will learn about each piece in the tangram puzzle and analyze the shapes to complete geometric puzzles and mathematics problems.
Subject:
Mathematics
Geometry
Material Type:
Lesson Plan
Author:
Lynn Ann Wiscount
Vince Mariner
Erin Halovanic
07/13/2020
Educational Use
Rating
Students apply their knowledge of scale and geometry to design wearables that would help people in their daily lives, perhaps for medical reasons or convenience. Like engineers, student teams follow the steps of the design process, to research the wearable technology field (watching online videos and conducting online research), brainstorm a need that supports some aspect of human life, imagine their own unique designs, and then sketch prototypes (using Paint®). They compare the drawn prototype size to its intended real-life, manufactured size, determining estimated length and width dimensions, determining the scale factor, and the resulting difference in areas. After considering real-world safety concerns relevant to wearables (news article) and getting preliminary user feedback (peer critique), they adjust their drawn designs for improvement. To conclude, they recap their work in short class presentations.
Subject:
Career and Technical Education
Mathematics
Measurement and Data
Numbers and Operations
Material Type:
Activity/Lab
Provider:
TeachEngineering
Author:
Denise W. Carlson
Evelynne Pyne
Lauchlin Blue
02/07/2017
Conditional Remix & Share Permitted
CC BY-NC
Rating
Students further explore scale, taking a scale drawing floor plan and redrawing it at a different scale.Key ConceptsStudents explore change from one scale to another, focusing on the ratios. Students will draw a scale model of a house.GoalsRedraw a scale drawing at a different scale.Find measurements using a scale drawing.
Subject:
Geometry
Ratios and Proportions
Material Type:
Lesson Plan
Provider:
Pearson
09/21/2015
Unrestricted Use
CC BY
Rating
This is a task from the Illustrative Mathematics website that is one part of a complete illustration of the standard to which it is aligned. Each task has at least one solution and some commentary that addresses important aspects of the task and its potential use.
Subject:
Geometry
Material Type:
Activity/Lab
Provider:
Illustrative Mathematics
Provider Set:
Illustrative Mathematics
Author:
Illustrative Mathematics
08/06/2015
Educational Use
Rating
Students design their own logo or picture and use a handheld GPS receiver to map it out. They write out a word or graphic on a field or playground, walk the path, and log GPS data. The results display their "art" on their GPS receiver screen.
Subject:
Education
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Janet Yowell
Malinda Schaefer Zarske
Matt Lundberg
10/14/2015
Educational Use
Rating
Celestial navigation is the art and science of finding one's geographic position by means of astronomical observations, particularly by measuring altitudes of celestial objects sun, moon, planets or stars. This activity starts with a basic, but very important and useful, celestial measurement: measuring the altitude of Polaris (the North Star) or measuring the latitude.
Subject:
Engineering
Physical Geography
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Janet Yowell
Malinda Schaefer Zarske
Matt Lippis
10/14/2015
Educational Use
Rating
Normally we find things using landmark navigation. When you move to a new place, it may take you awhile to explore the new streets and buildings, but eventually you recognize enough landmarks and remember where they are in relation to each other. However, another accurate method for locating places and things is using grids and coordinates. In this activity, students will come up with their own system of a grid and coordinates for their classroom and understand why it is important to have one common method of map-making.
Subject:
Education
Material Type:
Activity/Lab
Provider:
TeachEngineering
Provider Set:
TeachEngineering
Author:
Janet Yowell
Jeff White
Malinda Schaefer Zarske
Matt Lippis
10/14/2015
Educational Use
Rating
Students learn how to create two-dimensional representations of three-dimensional objects by utilizing orthographic projection techniques. They build shapes using cube blocks and then draw orthographic and isometric views of those shapes—which are the side views, such as top, front, right—with no depth indicated. Then working in pairs, one blindfolded partner describes a shape by feel alone as the other partner draws what is described. A worksheet is provided. This activity is part of a multi-activity series towards improving spatial visualization skills.
Subject:
Mathematics
Material Type:
Activity/Lab
Provider:
TeachEngineering
Author:
Emily Breidt
Jacob Segil
02/07/2017
Educational Use
Rating
Students learn about isometric drawings and practice sketching on triangle-dot paper the shapes they make using multiple simple cubes. They also learn how to use coded plans to envision objects and draw them on triangle-dot paper. A PowerPoint® presentation, worksheet and triangle-dot (isometric) paper printout are provided. This activity is part of a multi-activity series towards improving spatial visualization skills.
Subject:
Mathematics
Material Type:
Activity/Lab
Provider:
TeachEngineering
Author:
Emily Breidt
Jacob Segil
|
2020-09-23 03:49:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2069745510816574, "perplexity": 3369.713674019398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400209665.4/warc/CC-MAIN-20200923015227-20200923045227-00124.warc.gz"}
|
http://meta.mathoverflow.net/discussion/691/brackets/
|
• ## Discussion Feed
Vanilla 1.1.9 is a product of Lussumo. More Information: Documentation, Community Support.
1.
Hi, I'm sure this has been answered a few times, but I couldn't find where. How do I get the curly brackets {} to show up in a formula? (The LateX \{ command does not seem to work)
2.
The command \{ works when the math expression is surrounded by backticks. For example, `\$V = \{x : x = x\}\$` works as expected.
3.
Thanks!
4.
Alternatively, use \lbrace and \rbrace.
5.
Hehe. Thanks Gerry. I always forget that option.
6.
Or double the backslashes: `\\{`.
|
2013-05-20 15:48:29
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9661146402359009, "perplexity": 2461.820723418052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699068791/warc/CC-MAIN-20130516101108-00094-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://chemistry.stackexchange.com/questions/29152/what-are-isodiaphers
|
# What are isodiaphers?
We know that isodiaphers have same difference in neutrons and protons, but I encountered another definition:
Atoms which have same isotopic axis are called isodiaphers.
I don't know what it means by "isotopic axis". Please try to explain in simple words.
Isotopes are nuclides with a common number of protons $Z$.
Isobars are nuclides with a common number of nucleons $A$.
Isotones are nuclides with a common number of neutrons $N$.
Isodiaphers are nuclides with a common neutron excess $N-Z=A-2Z$.
|
2020-06-01 19:39:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34670165181159973, "perplexity": 1696.1631156141018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419593.76/warc/CC-MAIN-20200601180335-20200601210335-00202.warc.gz"}
|
http://raflausnir.is/triton/100/7681496751cb12d1eebb-velocity-problems-with-solution
|
Problem 9 Two balls A and B of masses 100 grams and 300 grams respectively are pushed horizontally from a table of height 3 meters. Velocity Weight solutions have different products that bring solutions to target unsightly and health-threatening obesity and its related problems. The average velocity is given by Eq. (g=10m/s2) Slope of the graph gives us velocity of the box. If the initial velocity of the ball was 0.8 m/sec and the . Two seconds later it reaches the bottom of the hill with a velocity of 26 m/s. =angular velocity, =linear velocity, and =radius of the circle.
hape spaceship accessories; count of monte cristo quotes life is a storm. Answer: Find distance and displacement of the car. is velocity. The average velocity of an object is equal to its instantaneous velocity if its acceleration is zero. Suppose two trains A and B are moving with uniform velocities along parallel tracks but in opposite directions. Suppose two trains A and B are moving with uniform velocities along parallel tracks but in opposite directions. Solve for speed . In everyday use and in kinematics, the speed of an object is the magnitude of its velocity (the rate of change of its position) - scalar. This section on the velocity problem is very short because in fact we have already solved the velocity problem by solving the tangent problem. A short summary of this paper. Let the velocity of train A be 40 km h-1 due east and that of train B be 40 km h-1 due west. Download solution Kinematics - 2-D and 3-D problems involving instantaneous velocity, average velocity, and average speed - Senior high school and first year college/university Problem # F-1: A plane flies 350 mi east from airport A to airport B in 50 min, and then flies 500 mi south from airport B to airport C in 1.8 h. The objects initial velocity is v (0) = j 3k v ( 0) = j 3 k and the objects initial position is r (0) = 5i +2j 3k r Position time graph of the box is given below. Solution: Given v 1 = 40 km/hr t 1 = Problem statement: A steel ball bearing of mass 3.3 10 5 kg and radius 1.0 mm displaces 4.1 10 5 N of water when fully immersed. This value can be found using the formula: v rms = [3RT/M] 1/2 where v rms = average velocity or root mean square velocity 20 m/s to the south). Solution to Problem 8. A police in a siren car moving with a velocity 20 ms-1 chases a thief who is moving in a car with a velocity v 0 ms-1.
Solution to problem 7:a) average speed = = = 22 km + 12 km + 14 km0.5 hour= 96 km/hb) The shift is the distance between the starting point and the final point and is the hypotenuse DA of the right triangle DAE and is calculated using the Pythagora theorem as follows AE = 22 - 14 = 8 kmDA2 = AE2 + ED2 = 82 + 122 = 64 + 144
The ticket was justified.
The app Athla Velocity comes from the developer Athla LLC and is usually this responsible for fixing problems. R 2 = 10 000 km 2 /hr 2 + 625 km 2 /hr 2. Relative velocity problem.
Back to Problem List. In problem #5, what was Georges velocity in meters per second? Solution. If a ball is travelling in a circle of diameter with velocity , find the angular velocity of the ball. (hint: draw a picture to find his displacement) 7. The ball is allowed to fall through the water until it reaches its terminal velocity.
Calculate the relative velocities of the trains. . Find the average velocity and average speed during the overall time interval.
The ability to solve a few core problems, like the tangent problem, and then recycle the ideas and computational methods discovered for them when solving various other problems, is one key to the efficiency and utility of calculus. If he takes the working escalator while standing still, it takes.
A ball has a momentum of P, collide a wall and reflected. Identify which are the initial conditions of the problem: what is the initial position of the particle, its initial velocity and its acceleration. There cannot be any change in speed or direction of an object if its acceleration is zero. Simple velocity vector problem; IS MY SOLUTION MANUAL WRONG OR AM I?! solve for distance and solve for time. Your first 5 questions are on us! Relative velocity of A with respect to B, vAB = 80 km h1 due east. Solutions for Chapter 6.3 Problem 42E: The velocity of a particle that moves in a straight line under the influence of viscous forces is v(t) = cekt, where c and k are positive constants. It took you one hour and fifteen minutes, or 1.25 hours, to travel 90.0 miles. o'donnell middle school dress code; Calculate the relative velocities of the trains. How do you solve a physics problem? Now we come to the Athla Velocity problems & troubleshooting that can arise for a variety of reasons. Letters with no subscript indicate the quantity value after some time, t t t. So, in the first equation, v v v is the velocity of an object that began at velocity v 0 v_0 v 0 and has moved with constant acceleration a a a for an amount of time t t t. A. The velocity of car = 20ms-1. (These assumptions will often be taken for granted, and not restated, in later problems.)
Establish a clear mental image of the problem. Find the friction constant between box and surface? The average velocity of an object is equal to its instantaneous velocity if its acceleration is zero. Our name says it all: Velocity, which is synonymous with moving quickly, exemplifies our ability to nimbly meet business challenges and swiftly and proactively develop solutions. Section 1-11 : Velocity and Acceleration. Look at the given pictures and find which one of the vectors given in the second figure is the relative velocity of A with respect to B. If the velocity profile of a fluid over a flat plate is parabolic with free stream velocity of 120 cm/s occurring at 20 cm from the plate. In the simplest kind of projectile motion problems, there is no initial velocity. Velocity Procurement . 1. What is the acceleration of . Back to Problem List. when is the average velocity of an object equal to the instantaneous velocity? (g=10m/s2) Slope of the graph gives us velocity of the box. The average velocity for this case is v = v 0 + vv 0 2 = v+v 0 2 (1.3) Other useful equations can be derived from these elementary relations.
Solved Problems in Linear Motion - Distance and displacement 1. Ans: A clock will lose 4 hours per day. Often, it is more meaningful to customize the basic mathematical flow velocity calculation to express a specific distance and/or express a different time increment. After 2 seconds, bird moves as far as 2 x 16 = 32 meters. Distances and times are known: v = x 1 + x 2 + x 3 + t 1 + t 2 + t 3 + \bar {v}=\frac Velocities and times are known: v = v 1 t 1 + v 2 t 2 + v 3 t 3 + t 1 + t 2 + Distances and velocities are known: v = x 1 + x 2 + x 3 + x 1 v 1 + x 2 v 2 + x 3 Calculate the speed in which thief is moving. If a man walks up the broken escalator, it takes. A cheetah is capable of speeds up to 31 m/s (70 mph) for brief periods. After 1 second, bird moves as far as 16 meters. Last Post; Sep 26, 2006; Replies 4 Views 7K. james blake: friends that break your heart. What are its speed and velocity? We only have the displacement. State, for each of the following physical quantities, if it is a scalar or a vector: volume, mass, speed, acceleration, density, number of moles, velocity, angular frequency, displacement, angular velocity. Since the observer is B, we find relative velocity of A with respect to B with following formula; VAB=VA-VB. Related Threads on Difficult relative velocity problem Difficult relative velocity question involving boats. Next lesson. What are the main two different ways to calculate average velocity?V = average velocity.Vf = final velocity.Vi = initial velocity Solution : Think of cars and deer moving at a constant velocity. Instantaneous speed and velocity. Determine the vector velocity of the particle as a function of time. toward the Earths surface). by jaspersoft studio tutorial art of problem solving calculus pdf. In this section we need to take a look at the velocity and acceleration of a moving object. Solution: Circumference of a.) In fall river high school enrollment by March 21, 2021fall river high school enrollment by March 21, 2021 London, UK 3rd Floor 86-90 Paul Street London EC2A 4NE.
Consider a differential element of lenght dl of the ring whose radius vector makes an angle with the x-axis .If the angle subtended by the length dl is d at the center then,dl=Rd. \Delta x_1=750\,\hat {j} x1. The angle between the velocity of the wind and that of the plane is 90. An objects acceleration is given by a = 3ti 4etj +12t2k a = 3 t i 4 e t j + 12 t 2 k . This equation contains velocity, momentum and mass, so it can help in calculation of final velocity when mass and momentum is known. (a) Show that the acceleration is proportional to the velocity. . Watch Recitation 2: Velocity and Acceleration in Translating and Rotating Frames. Homework Equations In a subway station, there are two escalators going upwards toward the exit. The velocity acquired by an object moving with uniform acceleration is 60 m/s in 3 seconds and 120 m/s in 6 seconds. Draw another arrow to the left (west) starting from the previous one (arranged head to tail).
The three-toed sloth is the slowest land mammal. As a result, acceleration of the box becomes zero. Kolomna, Russia. On the ground, the sloth moves at an average speed of 0.23 m/s (0.5 mph). a) Find the initial velocity and the angle at which the projectile is launched.
Solution. The velocity of the train. Displacement from time and velocity example. Practice problems for speed and velocity. Solution: Here, Density, Cross sectional Area, A=25. Solution: First we must find the overall time. Identivs Hirsch Velocity Software is an integrated security management system that manages access control and security operations in hundreds of different IoT Use Cases See the top solutions to IoT problems. Our mission is to provide a free, world-class education to anyone, anywhere. Determine the tangential and normal components of acceleration for the object whose position is given by r (t) = cos(2t),sin(2t),4t r ( t) = cos. . In this case the radius is 5 (half of the diameter) and linear velocity is 20 m/s. To find the instantaneous velocity at any position, we let t 1 = t t 1 = t and t 2 = t + t t 2 = t + t. After inserting these expressions into the equation for the average velocity and taking the limit as t 0 t 0, we find the expression for the instantaneous velocity: 8. Last Post; Sep 22, 2015; Replies 2 Weight Loss solutions. 1 Solutions to Velocities Problems This section will consist of the solutions to problems from problem 1-9 of the handout. Graphs of Motion. 3. But not all problems that occur with Athla Velocity are due to errors by the developer. It took you one hour and fifteen minutes, or 1.25 hours, to travel 90.0 miles. A meteoroid changed velocity from 1.0 km/s to 1.8 km/s in 0.03 seconds. The objects initial velocity is v (0) = j 3k v ( 0) = j 3 k and the objects initial position is r (0) = 5i +2j 3k r ( 0) = 5 i + 2 j 3 k . The two most commonly used graphs of motion are velocity (distance v. time) and acceleration (velocity v. time). Using the Pythagorean theorem, the resultant velocity can be calculated as, R 2 = (100 km/hr) 2 + (25 km/hr) 2. 1. The frequency of car = 300Hz Serpil Kurt.
37 Full PDFs related Calculating average speed and velocity edited. Problem solving with velocity and acceleration - Basic. Contact us today at (910) 254-9383. The velocity of an object is the time rate of change of its position. 10. It then travels along a straight road so that its distance from the light is given by x (t) = bt2 ct3 where b = 2.40 m/s 2 and c = 0.120 m/s 3.
By choosing the $10 tier on Patreon you can immediately unlock all solutions. Inventors Solve Problems, Win Prizes at CSMs First VelocityX Hackathon Team 'The @n0NyM0u5 H@wK$' took home first place in CSMs Tech Eval Challenge. Problem 1: Calculate the acceleration of a bicycle which accelerates from 0 m/s to 75 m/s in 15 s. Solution: Given data: Initial velocity of a bicycle, v i = 0 m/s Final velocity of a bicycle, v f = 75 m/s Time taken by a bicycle, t = 15 s Acceleration of a bicycle, a = ? Athla Velocity problems & Troubleshoot. If the acceleration is a given, you will have to integrate it to obtain the velocity and integrate this velocity to obtain the position vector. A satellite revolves around a small planet in a circular orbit of a radius of $$\;100\;{\rm{km}}.$$ It takes $$\;10\;{\rm{h}}$$to complete one revolution. As a result, acceleration of the box becomes zero. The merry-go-round is rotating at a constant angular velocity of w radians/second, and the ball is released at a radius r from the center of the merry-go-round. Numerical problem on terminal velocity with solution.
|
2022-10-04 23:48:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6188127398490906, "perplexity": 631.1046122092516}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00353.warc.gz"}
|
http://mindymallory.com/PriceAnalysis/commodity-price-analysis-and-forecasting.html
|
# Chapter 2 Commodity Price Analysis and Forecasting
A commodity is a good that can be supplied without qualitative differences. A bushel of wheat is regarded as a bushel of wheat everywhere. Commodities are fully or partially fungible so that the market treats a unit of good the same no matter who produced it or where it was produced. Think of grain elevators, for example. Farmers bring their grain to an elevator at harvest. Sometimes they sell it outright to the elevator, but sometimes they pay the elevator to store it for them. When the farmer decides to come get his grain out of storage do you think he gets the exact same kernels he brought in? Of course that would be impractical. The elevator just gives him back the same amount of grain he brought in of the same quality. The farmer is happy because the wheat is fungible. The grain he will be able to sell the grain he took out just as easily as the grain he put in. This is in stark contrast to differentiated goods where branding and quality make important distinctions between goods, resulting in differentiated demands. Just try to find someone indifferent between iPhone and Android!
Since commodities are fungible, it also makes sense that prices of commodities are determined by the entire (often global) market for the good. They tend to be basic resources such as agricultural and food products, metals, energy, an fibers. The fungibility of commodities enables the commodity to be traded in centralized spot and futures markets.
### 2.0.1 Trasformation Over Space, Time, and Form
Commodities can undergo various transformations. Standard price analysis usually groups these into three broad categories: Space, Time, and Form. Studying a commodity’s transformation over space comes about from the fact that the production of a commodity is often concentrated in a specific geographic location, while consumption of commodities is usually dispersed. In order for traders to have incentive to move a commodity from one location to another, a certain patter of prices must prevail. In short, traders must be able to make a profit, or at least break even in the business of moving a commodity from one location to another.
Studying a commodity’s transformation through time considers the nature of prices required to provide incentive to store the commodity for use at a later date (if it is possible to store the commodity - more on that below), or incentive to bring the commodity to market. Using the example of grain again, grain is produced once per year (in the United States), but consumption of grains occurs all year long. In order for the market to coordinate just the right amount of grain to be stored through time, prices through time give incentive for those holding stocks of grain to bring them to market or hold on longer.
Commodities can be transformed into completely different goods. Sometimes this transformation creates new commodities; for example soybeans are crushed into soybean oil and soybean meal - both of which are considered commodities. Other times the transformation creates products that are no longer considered commodities, where quality and differentiation matters. Meat products are a good example of this. Feeder cattle and live cattle are commodities, but through the slaughter and processing process, the commodity becomes differentiated products - different cuts of meat at the grocery store. Another example is coffee. Green coffee beans are considered a commodity, but once they enter the supply chain companies start transforming it by roasting, grinding, and brewing the coffee. Starbucks, for example, does not sell a commodity. Their product is highly differentiated and they market the fact that their product is highly differentiated in the marketplace.
## 2.1 Storable and Nonstorable
A key difference among commodities is their degree of storability. Some can be stored for long periods of time:
• Corn
• Soybeans
• Wheat
• Peanuts
• Crude Oil
• Natural Gas
Others are highly perishable or otherwise non-storeable :
• Hogs
• Cattle
• Milk
• Potatoes
• Apples
• Tomatoes
• Electricity
The storability of a commodity has profound implications on market prices. With storable commodities, they can be stored from one period to the next. This means the prices in one period must be related to prices in another period because those holding stocks of the commodity will constantly calculating their expectation of when best to sell - now or later. With non-storable commodities, prices can only be affected by the current supply of the commodity, since past supply cannot be brought forward.
## 2.2 Commodity Prices
Commodity prices are important both economically and politically in almost all countries. Commodity prices strongly influence farm income, and this can be quite volatile from year-to-year. The United States has a long history of policies aimed at smoothing out the price volatility and income volatility for farmers.
• Price supports
• Revenue supports
• Subsidized crop insurance programs
Some countries’ economies rely heavily on the export of various kinds of commodities. This leaves their economic growth and prosperity subject to volatility in commodity prices. In other countries, particularly in the developing world, a large share of the population for still engages in agricultural production for their livelihood. For these people, commodity prices determine the bulk of their income, and incomes of the poor is a primary concern in developing economies.
### 2.2.1 Forecasting Commodity Prices in Business
Some companies business model leaves them exposed to risk that comes from price volatility and spend considerable resources forecasting prices. These tend to be companies that deal directly in commodities and need to hedge risks. Some examples include:
• Cargill
• Caterpillar
• ConAgra
• Kraft
• Weyerhauser
There are consistent employment opportunities for students trained in price analysis and forecasting, and a growing interest in expertise in risk management strategies.
### 2.2.2 Price Analysis versus Forecasting
Price analysis and price forecasting are not exactly the same thing. Price analysis tends to be backward looking, while price forecasting is forward looking.
Price Analysis: - Goal is to understand the complex array of forces that influence the level and behavior of commodity prices - Aids in understanding performance of commodity markets - Aids in the development of policy, and is a key component of the policy analysis that leads the a policy’s promotion or demise
Price Forecasting: - Goal is to reliably and accurately forecast future price levels of commodities - The forecasts can be used in marketing and speculative strategies
## 2.3 Forecasting Basics
1. All meaningful forecasts guide decisions
• An awareness of the nature of the decisions will impact the design, use, and evaluation of the forecasting process
2. Form of forecast statement
Directional forecast
Fed steer prices for the first quarter of 2016 will be down compared to the same quarter last year.
Simple point forecast
Fed steer prices for the fist quarter of 2016 = $150/cwt. Interval forecast Fed steer prices for the first quarter of 2016 =$140-$160/cwt Confidence interval forecast We are 80% confident that fed steer prices fore the first quarter of 2016 will be between$140-\$160/cwt
Density forecast
Provides entire probability distribution of forecast price.
3. Forecast horizon
• Forecast horizon is the number of periods between today and the date of the forecast made.
• If dealing with monthly data:
• 1-step ahead = One month beyond the current month
• 2-step ahead = two months beyond the current month
• h-step ahead = h months beyond the current month
• More complex situations are common in crop market forecasting
• Typical unit of time is a ‘marketing year’.[^More on the marketing year in Chapter 3]
• Forecasts are typically updated monthly.
4. Parsimony principle
• Other things equal, simple approaches are preferred
• Also known as Occam’s Razor
The Principle States that among competing hypotheses that predict equally well, the one with the fewest assumptions should be selected. Other, more complicated solutions may ultimately prove to provide better predictions, but - in the absence of differences in predictive ability - the fewer assumptions that are made, the better. (Source: Wikipedia)
• Simple approaches tend to work best in real world applications
• Based on decades of experience and research
• Simpler models can be estimated more precisely
• Because simpler models can be more easily interpreted and understood, unusual behavior and outcomes can be more easily spotted.
• It is easier to communicate the basic behavior and design of simple approaches, so they are more likely to be used by decision-makers.
• Simple approaches lessen the chances of data mining problems.
• If a complex model is tailored to fit historical data very well, but does not capture the true nature of the data process, forecasts will perform poorly.
We focus on two types of “simple” forecasting methods.
1. Fundamental analysis: use of economic models and data on production, consumption, income, etc. to forecast prices. You will recognize this approach as balance sheet analysis in chapter 3.
2. Reduced form time-series econometric: use of statistical econometric models that features minimal inputs beyond a few recent prices to generate a forecasting model.
Not covered here, but a method used widely by day-traders and other market participants is technical analysis, which is the use of past price patterns to predict future price movement. There are scores of books on the topic of technical analysis, if interested.
### 2.3.1 Commodity Production Cycles
The production of agricultural commodities is bound by the biological traits of the life cycle. Forecasting prices requires an awareness of key seasons, and problems that can arise during each phase of the life cycle.
Agricultural Commodity Price Spikes in the 1970s and 1990s: Valuable Lessons for Today
This article was published by staff at the United States Department of Agriculture’s Economic Research Service. They look at historical corn prices and provide some perspective about what caused the price increases in the 1970’s and mid-2000’s.
Market Instability in a New Era of Corn, Soybean, and Wheat Prices
Scott Irwin and Darrel Good had an article in Choices magazine, that examined the price ‘eras’ we described in this chapter. They also discuss the causes of the price paradigm shifts. They argued in 2009 that the new ‘Era’ of crop prices were here to stay, and history has bore this out so far.
The New Era of Corn and Soybean Prices Is Still Alive and Kicking
Scott Irwin and Darrel Good revisit the price ‘Era’s’ discussion again in April of 2016.
## 2.6 Exercises
1. From the readings, describe causes of the rapid and persistent increase in prices in the early 1970’s.
2. From the readings, describe the causes of the rapid and persistent increase in prices in 2006/2007.
3. In your opinion, is there evidence that price trends will hold at their current levels?
|
2018-11-19 20:50:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24449923634529114, "perplexity": 2569.7071109082303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746110.52/warc/CC-MAIN-20181119192035-20181119214035-00555.warc.gz"}
|
https://www.neetprep.com/question/71790-One-mole-ideal-gas-requires--J-heat-raise-temperature--Kwhen-heated-constant-pressure-gas-heated-constant-volumeto-raise-temperature--K-heat-required--J--J--J--JGiven-gas-constantRJmolK/126-Physics--Kinetic-Theory-Gases/688-Kinetic-Theory-Gases
|
One mole of an ideal gas requires 207 J heat to raise the temperature by 10 K when heated at constant pressure. If the same gas is heated at constant volume to raise the temperature by the same 10 K, the heat required is
1. 198.7 J
2. 29 J
3. 215.3 J
4. 124 J
(Given the gas constant )
High Yielding Test Series + Question Bank - NEET 2020
Difficulty Level:
The expansion of an ideal gas of mass m at a constant pressure P is given by the straight line D. Then the expansion of the same ideal gas of mass 2m at a pressure P/ 2 is given by the straight line , where number on graphs indicate slope , is-
1. E
2. C
3. B
4. A
High Yielding Test Series + Question Bank - NEET 2020
Difficulty Level:
An experiment is carried on a fixed amount of gas at different temperatures and at high pressure such that it deviates from the ideal gas behaviour. The variation of $\frac{\mathrm{PV}}{\mathrm{RT}}$ with P is shown in the diagram. The correct variation will correspond to
1. Curve A
2. Curve B
3. Curve C
4. Curve D
High Yielding Test Series + Question Bank - NEET 2020
Difficulty Level:
The graph which represent the variation of mean kinetic energy of molecules with temperature t°C is
1.
2.
3.
4.
High Yielding Test Series + Question Bank - NEET 2020
Difficulty Level:
The adjoining figure shows graph of pressure and volume of a gas at two temperatures ${\mathrm{T}}_{1}$ and ${\mathrm{T}}_{2}$. Which of the following interferences is correct
1. ${\mathrm{T}}_{1}>{\mathrm{T}}_{2}$
2. ${\mathrm{T}}_{1}={\mathrm{T}}_{2}$
3. ${\mathrm{T}}_{1}<{\mathrm{T}}_{2}$
4. No interference can be drawn
High Yielding Test Series + Question Bank - NEET 2020
Difficulty Level:
The expansion of until mass of a perfect gas at constant pressure is shown in the diagram. Here
1. a = volume, b = $°\mathrm{C}$ temperature
2. a = volume, b = $\mathrm{K}$ temperature
3. a = $°\mathrm{C}$ temperature, b = volume
4. a = $\mathrm{K}$ temperature, b = volume
High Yielding Test Series + Question Bank - NEET 2020
Difficulty Level:
An ideal gas is initially at temperature T and volume V. Its volume is increased by $∆\mathrm{V}$ due to an increase in temperature $∆\mathrm{T}$, pressure remaining constant. The quantity $\mathrm{\delta }=∆\mathrm{V}/\left(\mathrm{V}∆\mathrm{T}\right)$ varies with temperature as
1.
2.
3.
4.
High Yielding Test Series + Question Bank - NEET 2020
Difficulty Level:
Pressure versus temperature graph of an ideal gas of equal number of moles of different volumes are plotted as shown in figure. Choose the correct alternative
1.
2.
3.
4.
High Yielding Test Series + Question Bank - NEET 2020
Difficulty Level:
Pressure versus temperature graph of an ideal gas is as shown in figure. Density of the gas at point A is ${\mathrm{\rho }}_{0}$ . Density at B will be
1. $\frac{3}{4}{\mathrm{\rho }}_{0}$
2. $\frac{3}{2}{\mathrm{\rho }}_{0}$
3. $\frac{4}{3}{\mathrm{\rho }}_{0}$
4.
High Yielding Test Series + Question Bank - NEET 2020
Difficulty Level:
The figure shows graphs of pressure versus density for an ideal gas at two temperatures ${\mathrm{T}}_{1}$ and ${\mathrm{T}}_{2}$
1. ${\mathrm{T}}_{1}>{\mathrm{T}}_{2}$
2. ${\mathrm{T}}_{1}={\mathrm{T}}_{2}$
3. ${\mathrm{T}}_{1}<{\mathrm{T}}_{2}$
4. Nothing can be predicted
|
2020-07-09 13:17:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 22, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8879279494285583, "perplexity": 1252.2386468310249}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655900335.76/warc/CC-MAIN-20200709131554-20200709161554-00164.warc.gz"}
|
https://socratic.org/questions/what-is-the-slope-of-the-line-passing-through-the-following-points-8-2-10-2
|
# What is the slope of the line passing through the following points: (8,2) , (10,-2)?
Mar 25, 2018
Slope $= - 2$
#### Explanation:
Considering that the equation for slope is
$\text{slope} = \frac{{y}_{2} - {y}_{1}}{{x}_{2} - {x}_{1}}$
let's subtract $- 2$ and $2$ first $\left({y}_{2} - {y}_{1}\right)$. So
$- 2 - 2 = - 4$
and then $10 - 8$ for $\left({x}_{2} - {x}_{1}\right)$ to get $2$
$\text{slope} = - \frac{4}{2} = - 2$
|
2022-09-30 18:43:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8746629357337952, "perplexity": 691.7066505265061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00016.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/dcdsb.2016029
|
Article Contents
Article Contents
# Nonlinear stability of stationary points in the problem of Robe
• In 1977 Robe considered a modification of the Restricted Three Body Problem, where one of the primaries is a shell filled with an incompressible liquid. The motion of the small body of negligible mass takes place inside this sphere and is therefore affected by the buoyancy force of the liquid. We investigate the existence and stability of the equilibrium points in the planar circular problem and discuss the range of the parameters for which the problem has a physical meaning.
Our main contribution is to establish the Lyapunov stability for the equilibrium point at the center of the shell. We achieve this by putting the Hamiltonian function of Robe's problem into its normal form and then use the theorems of Arnol'd, Markeev and Sokol'skii. Resonance cases and some exceptional cases require special treatment.
Mathematics Subject Classification: Primary: 70F07, 70H14; Secondary: 37N05.
Citation:
• [1] C. M. Giordani, A. R. Plastino and A. Plastino, Robe's restricted three body-problem with drag, Celest. Mech. & Dyn. Astr., 66 (1996), 229-242.doi: 10.1007/BF00054966. [2] P. P. Hallan and K. B. Mangang, Non linear stability of equilibrium point in the Robe's restricted circular three body problem, Indian J. pure. appl. Math., 38 (2007), 17-30. [3] P. P. Hallan and N. Rana, The existence and stability of equilibrium points in the Robe's restricted three-body problem, Celest. Mech. & Dyn. Astr., 79 (2001), 145-155.doi: 10.1023/A:1011173320720. [4] A. P. Markeev, Linear Hamiltonian Systems and Some Applications to the Problem of Stability of Motion of Satellites Relative to the Center of Mass, R&C Dynamics, Moscow, Izhevsk, 2009. [5] K. R. Meyer, G. R. Hall and D. Offin, Introduction to Hamiltonian Dynamical Systems and the N-Body Problem, Springer, 2nd edition, 2009. [6] K. R. Meyer, J. Palacián and P. Yanguas, Stability of a Hamiltonian system in a limiting case, J. Appl. Math. Mech., 41 (2012), 20-28. [7] K. R. Meyer and D. S. Schmidt, The stability of the Lagrange triangular point and a theorem of Arnol'd, Journal of Differential equations, 62 (1986), 222-236.doi: 10.1016/0022-0396(86)90098-7. [8] A. R. Plastino and A. Plastino, Robe's restricted three body-problem revisited, Celest. Mech. & Dyn. Astr., 61 (1995), 197-206.doi: 10.1007/BF00048515. [9] H. A. G. Robe, A new kind of three body problem, Celest. Mech. & Dyn. Astr., 16 (1977), 197-206. [10] J. Singh and O. Leke, Existence and stability of equilibrium points in the Robe's restricted three-body problem with variable masses, International Journal of Astronomy and Astrophysics, 3 (2013), 113-122.doi: 10.4236/ijaa.2013.32013. [11] A. G. Sokol'skii, On stability of an autonomous Hamiltonian system with two degrees of freedom under first-order resonance, J. Appl. Math. Mech., 41 (1977), 20-28. [12] L. R. Valeriano, Parametric stability in Robe's problem, Regular and Chaotic Dynamics, 21 (2016), 126-135.doi: 10.1134/S156035471601007X.
|
2022-12-01 17:13:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.5021809935569763, "perplexity": 1388.746258468229}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710829.5/warc/CC-MAIN-20221201153700-20221201183700-00475.warc.gz"}
|
http://astro.fmarion.edu/MASC2018/presentation-list
|
# 2018 MASC Presentation List
The following provides you with a summary of the presentations that will be given at the Meeting of Astronomers of South Carolina. Order in the table does not reflect order of presentations.
Last Name First Name Institution Title of Presentation Abstract
2018 Presentation List
Myers Jeannette Francis Marion University Celebrating 40 Years of Astronomy at Francis Marion University In January 1978, the Planetarium at Francis Marion College first began hosting free programs to the Pee Dee Region. In this presentation we will look back at some of the events from our history and plans we have for the next few years. We'll also discuss our plans for the 50th anniversary of Apollo 11.
Hartmann Dieter Clemson University Binary Neutron Star Mergers: Catching the electromagnetic afterglows of gravitational wave events The LIGO/VIRGO /Fermi discovery of nearby GW170817/GRB170817A and its global follow-up pushed the Multimessenger window wide open. I will discuss the new area of kilonova astrophysics revealed by this groundbreaking event and plans for the next round of ground- and space-based follow-up of gravitational wave events. The improved sensitivity of the coming installment of laser interferometers will push the diastase limit to which we can see such mergers and thus produce these opportunities at an increased rate, so the community is gearing up for exciting times.
Kirby Alexander University of South Carolina Constraining Extinction due to Dust in Distant Galaxies Extinction due to interstellar dust is a ubiquitous phenomenon that dims and reddens the light of background objects. As such, it is essential to apply extinction corrections to observations of distant objects in order to deduce their properties. Since the discovery of interstellar extinction in 1930, astronomers have developed a fairly detailed understanding of the interstellar dust in the Milky Way and other Local Group galaxies, especially the Magellanic Clouds. However, studies of extinction by dust in galaxies beyond the Local Group have been limited. In this work, we seek to generate better constraints on dust extinction in other galaxies in order to improve corrections for observations of objects that lie beyond them. As such, we are constructing spectral energy distributions (SEDs) for quasars/active galactic nuclei whose lines of sight go through foreground galaxies at lower redshifts. We will describe our compilation of archival optical, UV, and IR spectroscopic and photometric data from various observatories. Using the SEDs compiled from these data, and fitting the underlying continuum of the background quasar/AGN, we will estimate dust extinction curves for each foreground galaxy, and compare those with extinction curves in the Milky Way and the Magellanic Clouds.
Desai Abhishek Clemson University EBL with Pass 8 The extragalactic background light (EBL), from ultra-violet to infrared, that encodes the emission from all stars, galaxies and actively accreting black holes in the observable Universe is critically important to probe models of star formation and galaxy evolution, but remains at present poorly constrained. The Large Area Telescope (LAT), on board Fermi, produced an unprecedented measurement (relying on 750 blazars and the first 9 years of Pass 8 data) of the EBL optical depth at 12 different epochs from redshift 0 up to a redshift of 3. In this talk, I will present the measurement and how it constrains the EBL energy density and its evolution with cosmic time. I will also discuss how this paves the road to the first point-source-independent determinations of the star-formation history of the Universe.
Marchesi Stefano Clemson University A multi-observatory X-ray approach to characterize heavily obscured AGN According to the different models of Cosmic X-ray Background (CXB), the diffuse X-ray emission observed in the 1 to ~200-300 keV band, is mainly caused by accreting supermassive black holes, the so-called active Galactic Nuclei (AGN). Particularly, at the peak of the CXB (~30 keV) a significant fraction of emission (10-25%) is expected to be produced by a numerous population of heavily obscured, Compton thick (CT-) AGN, having intrinsic column density NH>=1E24 cm{-2}. Nonetheless, in the nearby Universe (z<=0.1) the observed fraction of CT-AGN with respect to the total population appears to be lower than the one expected on the basis of the majority of CXB model predictions (~20-30%), being between 5 and 10%. This discrepancy between data and models is one of the open challenges for X-ray astronomers, and needs to be solved to get a complete understanding of the AGN population. In this presentation, I will discuss a multi-observatory X-ray approach to find and characterize heavily obscured AGN. Candidate sources are first selected in the 100-month Swift-BAT catalog, the result of a ~7 years all-sky survey in the 15-150 keV band. These objects are then targeted with snapshot (5-10 ks) observations with Chandra and Swift-XRT, which allow us to constrain the intrinsic absorption value within a 20-30% uncertainty. Finally, deep (25-50 ks) observations with XMM-Newton and NuSTAR allow us to study the physics of these complex and elusive sources.
Kulkarni Varsha University of South Carolina Probing Structure in Cold Gas in Galaxies with Gravitationally Lensed Quasars Absorption lines in quasar spectra offer a powerful tool to study the gas and metals in and around galaxies. Gravitationally lensed quasars (GLQs) probe multiple sight lines through foreground galaxies, and can thus enable comparisons of the gas and metal content in different parts of the galaxies. We report spectroscopic observations of 4 GLQs, each consisting of a pair of images with a projected separation of a few kpc at the redshift of the foreground galaxy. We measure the H I absorption lines using HST STIS UV spectra, and metal absorption lines using optical spectra of the GLQs. Combining the H I and metal information, we estimate element abundances along the two sight lines for each field. Using these measurements, together with projected separations between the two sight lines derived from HST imaging data, we estimate the gradients in H I column density and metallicity through the foreground galaxies.
Marcotulli Lea Clemson University The Density of Blazars above 100 MeV and the Origin of the Extragalactic Gamma-ray Background Relying on the first 104 months of Fermi-LAT Pass 8 data, using detailed Monte Carlo simulations, we obtained the most sensitive measurement of the source count distribution of blazars above 100 MeV. The result shows, with high statistical significance, the presence of a break in the distribution at low fluxes. From this, we provide a precise measurement of the contribution of blazars to the extragalactic gamma-ray background (EGB). Furthermore, we confirm that they can not account for the total EGB, therefore, another source class is required to explain the remaining component. In this talk, we will present this new measurement and discuss alternatives for the origin of the missing EGB component.
Poudel Suraj University of South Carolina Metallicity measurements of elements in gas-rich absorbers at redshift z ~5 Element abundances in high-redshift galaxies offer key constraints on models of the chemical evolution of galaxies. The chemical composition of galaxies at z~5 are especially important since they constrain the star formation history in the first ~1 Gyr after the Big Bang and the initial mass function of early stars. Observations of damped Lyman-alpha (DLA) absorbers in quasar spectra enable robust measurements of the element abundances in distant gas-rich galaxies. In particular, abundances of volatile elements such as S, O and refractory elements such as Si, Fe allow determination of the dust-corrected metallicity and the depletion strength in the absorbing galaxies. Unfortunately measurements for volatile (nearly undepleted) elements are very sparse for DLAs at z > 4.5. We present abundance measurements of O, C, Si and Fe for three gas-rich galaxies at z ∼ 5 using observations from the Very Large Telescope (VLT) X-shooter spectrograph and the Keck ESI (Echellette Spectrograph and Imager). Our study has doubled the existing sample of measurements of undepleted elements at z > 4.5. After combining our measurements with those from the literature, we find that the NHI-weighted mean metallicity of z ∼ 5 absorbers is lower at < 0.5 level compared to the prediction based on z < 4.5 DLAs. Thus, we find no significant evidence of a sudden drop in metallicity at z > 4.7 as reported by prior studies. Some of the absorbers show evidence of depletion of elements on dust grains, e.g. low [Si/O] or [Fe/O]. These absorbers along with other z ∼ 5 absorbers from the literature show some peculiarities in the relative abundances, e.g. low [C/O] in several absorbers and high [Si/O] in one absorber. We also find that the metallicity vs. velocity dispersion relation of z ∼ 5 absorbers may be different from that of lower-redshift absorbers.
We acknowledge support from NASA grant NNX14AG74G and NASA/STScI support for HST programs GO-12536, 13801 to the Univ. of South Carolina.
Cashman Francie University of South Carolina Determining Chemical Gradients in Hot Gas at z~2.2 using Sight Lines to Gravitationally Lensed Quasars
Absorption line spectroscopy using quasars is a powerful method to study galaxies and intergalactic systems. Multiple systems along the line of sight to a quasar can be differentiated, since absorption lines are redshifted to longer wavelengths at higher redshifts. Use of gravitationally lensed quasars (GLQs) can extend this technique by probing multiple sight lines through these foreground galaxies, offering an in-depth look at the internal structure of the environment. GLQs with closely spaced sight lines (< 10 kpc) allow us to probe smaller structure within the galaxy’s ISM and thereby study variations in gas, metal, and dust in different regions. We report spectroscopic observations of 2 separate absorbing systems at z=2.2 and z=2.3 along the line of sight to a GLQ at z=2.5. Each observation consists of a pair of images with a projected separation of 6.9 and 7.0 kpc respectively at the redshifts of the foreground galaxies. The H I and metal absorption lines were measured using optical spectra taken with the MagE spectrograph on the 6.5 meter Magellan telescope in Chile and were used to estimate column densities along the lines of sight. We detect elements in high ionization stages, i.e., O VI, C IV, N V, Si IV, and Fe III, allowing us to probe different directions through the high temperature outer envelope of each galaxy.
Co-authors:
Varsha Kulkarni, Dept. of Physics & Astronomy, University of South Carolina, USA
Sebastian Lopez, Dept. de Astronomia, Universidad de Chile, Casilla 36-D, Santiago, Chile
Sara Ellison, University of Victoria, Dept. of Physics and
Astronomy, Victoria, BC, V8W 2Y2, Canada
Debopam Som, Aix Marseille Universit ́e, CNRS, Laboratoire d’Astrophysique de Marseille, UMR 7326, 13388, Marseille, France
Roberts-Pierel Justin University of South Carolina Extending Supernova Spectral Templates for Next Generation Space Telescope Observations Widely used empirical supernova (SN) Spectral Energy Distributions (SEDs) have not historically extended meaningfully into the ultraviolet (UV), or the infrared (IR). However, both are critical for current and future aspects of SN research including UV spectra as probes of poorly understood SN Ia physical properties, and expanding our view of the universe with high-redshift James Webb Space Telescope (JWST) IR observations. We therefore present a comprehensive set of SN SED templates that have been extended into the UV and IR, as well as an open-source software package written in Python that enables a user to generate their own extrapolated SEDs. We have taken a sampling of core-collapse (CC) to get a time-dependent distribution of UV and IR colors (U-B,r’-[JHK]), and then generated color curves are used to extrapolate SEDs into the UV and IR. Type Ia SNe with observations in the UV and IR are used to extrapolate the existing Type Ia SALT2-4 parameterized model. The SED extrapolation process is now easily duplicated using a user’s own data and parameters via our open-source Python package: SNSEDextend. This work develops the tools necessary to explore the JWST’s ability to discriminate between CC and Type Ia SNe, as well as provides a repository of SN SEDs that will be invaluable to future JWST and WFIRST SN studies.
Garrity April Francis Marion University Recoil Detection and Focal Plane Detectors for the SEparator for CApture Reactions (SECAR) The Separator for Capture Reactions (SECAR) will be installed at NSCL/FRIB to directly measure (p,$\gamma$) and ($\alpha$,$\gamma$) reactions that are important in extreme stellar environments. Time-of-flight detectors like those implemented in SECAR are necessary to distinguish between the heavy products of the desired reactions and the unreacted beam. The time resolution and position sensitivity of a micro-channel plate (MCP) detector for the SECAR focal plane instrumentation were tested. We will present the findings of these tests as well as an alternate high energy design of a stopping detector. The new stopping detector will be characterized in further in-beam studies, and both it and the MCP detectors will be installed at NSCL/FRIB by 2022.
Zhao Xiurui Clemson University NuSTAR and XMM-Newton view of NGC 1358 The obscuration of active galactic nuclei (AGN) is important for the co-evolution of the supermassive black hole and its host galaxy, and for cosmic X-ray background (CXB). Significant number of heavily obscured AGNs are thought needed to reproduce the peak (~30 keV) of CXB in nearby universe, while only part of them are found. Marchesi et al. (2018), utilizing Chandra and Swift-BAT spectra, analyze several Seyfert 2 galaxies from the Swift-BAT 100 month catalog. Following this work, we study one of their Compton-Thick AGN (with column density NH >= 1024 cm-2) candidate, NGC 1358, with unprecedented statistics by using NuSTAR (3-79 keV, 50 ks) and XMM-Newton (0.3-10 keV, 48 ks) pointed observations. In this presentation, I will report some physical properties of our target such as the photon index and column density of the absorption material by using phenomenological model and physical models.
|
2018-12-10 20:01:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49290767312049866, "perplexity": 2456.827923246154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823442.17/warc/CC-MAIN-20181210191406-20181210212906-00258.warc.gz"}
|
https://www.onooks.com/tag/where/
|
Categories
## Let R and S be reflexive relations on A. Suppose that R is also transitive. Prove S ⊆ R if and only if (S ◦ R) = R.
To prove this, I was going to assume S ⊆ R and prove (S ◦ R) = R, then do it the other way, assume (S ◦ R) = R and prove S ⊆ R. Then to do the first part I was going to do what my professor calls the “double subset strategy,” where, […]
Categories
## Find the source file causing a Mixed Content warning
Google Chrome has flagged a client’s WordPress site as Not Secure. So we are trying to remove a Mixed Content warning that we get using the Inspect feature of the browser. Mixed Content: The page at ‘https://www.CLIENT.com/‘ was loaded over HTTPS, but requested an insecure image ‘http://www.CLIENT.com/wp-content/uploads/2018/05/CompanyLogo.png‘. This content should also be served over HTTPS. […]
Categories
## On Pitt’s Inequality (Weighted Fourier Inequality)
One of Pitt’s Theorem (from “Theorems on Fourier Series” by H R Pitt, 1937) states that for an integrable periodic function $F$ over $[-\pi,\pi]$, $$\sum_{n=1}^{\infty} |a_n|^q n^{-q\lambda} \leq K(p,q,\lambda) \int_{-\pi}^{\pi}|F(\theta)|^p|\theta|^{p\alpha},$$ where, $a_n$‘s are the Fourier series coefficients, K is independent of $F$, $1<p\leq q <\infty$, ${1}/{p} + {1}/{p’}=1$, $0\leq \alpha < 1/p’$ and […]
Categories
## Prove that $\frac{[ABC]}{[XYZ]}=\frac{2R}{r}$
Prove that $$\frac{[ABC]}{[XYZ]}=\frac{2R}{r}\,,$$ where $[\,\_\,]$ represents area of triangle, $X,Y,Z$ are the points of contact of incircle with sides of triangle $ABC$, $R$ is circumradius, and $r$ is inradius. [Here is my textbook proof] (https://i.stack.imgur.com/C72Xw.jpg) Incase you are wondering what theorem 36 is. Look below Theorem 36: In two triangles $A_1B_1C_1$ and $A_2B_2C_2$ we have […]
Categories
## Computing Constant $C_2$ of Inequality $h_x([2]P) \geq 4h_x(P) − C_2$
Let $E/Q$ be an elliptic curve given by a Weierstrass equation $E : y^2 = x^3 + Ax + B$ with $A, B ∈ Z$ and, $h_x(P) = \log H(x(P))$, where, $x(P) = p/q, H(x(P))=\max\{|p|, |q|\}$. There is a constant $C_2$ that depends on $A$ and $B$ such that $h_x([2]P) \geq 4h_x(P) − C_2$ […]
Categories
## About the sum $S(p_n)=\sum_{1\le k\lt n}\,p_n\mod\;p_k$
For $\,p_n\gt2\,$ let’s define the sum $\,S(p_n)=\sum_{1\le k\lt n}\,p_n\;mod\;p_k$, where $\,p_k\,$ represents the $\,k$-th prime. The first terms of the sequence $\,S(p_n)\,$ (OEIS A033955 – sum of the remainders when the $\,n$-th prime is divided by primes up to the $\,(n-1)$-th prime) are: $S(3)=1\;\;\;(p_2=3)$ $S(5)=3\;\;\;(p_3=5)$ $S(7)=4\;\;\;(p_4=7)$ $S(11)=8\;\;\;(p_5=11)$ $S(13)=13\;\;\;(p_6=13)$ $S(17)=18\;\;\;(p_7=17)$ $S(19)=27\;\;\;(p_8=19)$ $S(23)=29\;\;\;(p_9=23)$ $S(29)=46\;\;\;(p_{10}=29)$ The graphical representation […]
Categories
## About the sum $S(p_n)=\sum_{1\le k\lt n}\,p_n\;mod\;p_k$
For $\,p_n\gt2\,$ let’s define the sum $\,S(p_n)=\sum_{1\le k\lt n}\,p_n\;mod\;p_k$, where $\,p_k\,$ represents the $\,k$-th prime. The first terms of the sequence $\,S(p_n)\,$ (OEIS A033955 – sum of the remainders when the $\,n$-th prime is divided by primes up to the $\,(n-1)$-th prime) are: $S(3)=1\;\;\;(p_2=3)$ $S(5)=3\;\;\;(p_3=5)$ $S(7)=4\;\;\;(p_4=7)$ $S(11)=8\;\;\;(p_5=11)$ $S(13)=13\;\;\;(p_6=13)$ $S(17)=18\;\;\;(p_7=17)$ $S(19)=27\;\;\;(p_8=19)$ $S(23)=29\;\;\;(p_9=23)$ $S(29)=46\;\;\;(p_{10}=29)$ The graphical representation […]
Categories
## About the sum $S(p_n)=\sum_{1\le k\lt n}\,p_n\;mod\;p_k$
For $\,p_n\gt2\,$ let’s define the sum $\,S(p_n)=\sum_{1\le k\lt n}\,p_n\;mod\;p_k$, where $\,p_k\,$ represents the $\,k$-th prime. The first terms of the sequence $\,S(p_n)\,$ (OEIS A033955 – sum of the remainders when the $\,n$-th prime is divided by primes up to the $\,(n-1)$-th prime) are: $S(3)=1\;\;\;(p_2=3)$ $S(5)=3\;\;\;(p_3=5)$ $S(7)=4\;\;\;(p_4=7)$ $S(11)=8\;\;\;(p_5=11)$ $S(13)=13\;\;\;(p_6=13)$ $S(17)=18\;\;\;(p_7=17)$ $S(19)=27\;\;\;(p_8=19)$ $S(23)=29\;\;\;(p_9=23)$ $S(29)=46\;\;\;(p_{10}=29)$ The graphical representation […]
Categories
## About the sum $S(p_n)=\sum_{1\le k\lt n}\,p_n\;mod\;p_k$
For $\,p_n\gt2\,$ let’s define the sum $\,S(p_n)=\sum_{1\le k\lt n}\,p_n\;mod\;p_k$, where $\,p_k\,$ represents the $\,k$-th prime. The first terms of the sequence $\,S(p_n)\,$ (OEIS A033955 – sum of the remainders when the $\,n$-th prime is divided by primes up to the $\,(n-1)$-th prime) are: $S(3)=1\;\;\;(p_2=3)$ $S(5)=3\;\;\;(p_3=5)$ $S(7)=4\;\;\;(p_4=7)$ $S(11)=8\;\;\;(p_5=11)$ $S(13)=13\;\;\;(p_6=13)$ $S(17)=18\;\;\;(p_7=17)$ $S(19)=27\;\;\;(p_8=19)$ $S(23)=29\;\;\;(p_9=23)$ $S(29)=46\;\;\;(p_{10}=29)$ The graphical representation […]
Categories
## A Tensor Calculation with Braids
I am trying to follow the derivation of the Jone’s Polynomial from a braid representation presented in chapter 2 of Ohtsuki’s Quantum Invariants. The representation of the braid $b$ with $n$ strands is a linear operator (which we treat as a matrix after choosing some basis) on $(\mathbb{C}^2)^{\otimes n}$ is $\psi_n(b)$. Let $\sigma_i$ denote the […]
|
2020-04-09 20:57:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9812730550765991, "perplexity": 262.6415769031368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371876625.96/warc/CC-MAIN-20200409185507-20200409220007-00139.warc.gz"}
|
https://discuss.codechef.com/questions/73274/time-complexity-calculation
|
×
Time Complexity Calculation
0 for(i=1;i<=n;i++){ for(j=i;j<=n;j+=i){ sum+=i; } } what is time complexity of above question ?? can we solve n<=10^7 constraints by this algo?? provide more resource to analyze time complexity much more complex....??*** asked 21 Jul '15, 23:21 1.1k●12●29 accept rate: 6%
1 Well, Time complexity seems to be n*ln(n+1) as the inner loop runs for n times for i=1, n/2 times for i=2 and so on till 1 time for i=n. Sum of 1+1/2+1/3....+1/n=ln(n+1) [Lower bound]{There are many proofs on internet for this. You can check this on Quora too}. Whether the constraints can be passed or not is something that largely depends on the time limit. Wait for other comments too in order to confirm. :P answered 21 Jul '15, 23:46 3★ho_oh 75●6 accept rate: 33% time limit : 1sec link for problem...https://www.codechef.com/problems/ALK1105 i know better soution for this problem.... i just want to know why first algo gives tle...O(N)?? (22 Jul '15, 00:04) Because first one has a time complexity of omega(nlog(n+1)) {omega meaning lower bounds}, whereas your second algo has complexity O(n) {Big Oh meaning upper bounds}. Think of n=10^7 [boundary case], your second algo has :at most: 10^7 operations whereas your first algo has :at least: 10^7 * ln(10^7 +1) which nearly equals 16* 10^7 {you can check for yourself}. Now, this value is 16 :times: greater than what your second solution gives. Hence, TLE. (22 Jul '15, 09:29) ho_oh3★ thanks... @ho_oh (22 Jul '15, 12:58)
0 you can solve n=10^7. but not everywhere.it may be possible in your pc..but codechef do not supports it... correct if i m wrong.. answered 11 Sep '15, 22:47 16 accept rate: 100% Actually now we have Giga hertz processor this is directly imply that we can execute around(10^9) instruction per second..but in real life we can execute around (10^8) easily on online judge too... (12 Sep '15, 02:32)
0 still need more explanation................. answered 22 Jul '15, 00:08 1.1k●12●29 accept rate: 6%
toggle preview community wiki:
Preview
By Email:
Markdown Basics
• *italic* or _italic_
• **bold** or __bold__
• image?
• numbered list: 1. Foo 2. Bar
• to add a line break simply add two spaces to where you would like the new line to be.
• basic HTML tags are also supported
• mathemetical formulas in Latex between \$ symbol
Question tags:
×245
×224
×115
question asked: 21 Jul '15, 23:21
question was seen: 2,922 times
last updated: 12 Sep '15, 02:32
|
2018-05-25 08:57:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7695686221122742, "perplexity": 13787.016304998891}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867055.20/warc/CC-MAIN-20180525082822-20180525102822-00433.warc.gz"}
|
https://helpingwithmath.com/tenths-as-decimals/
|
Home » Math Theory » Decimals » Tenths as Decimals
Tenths as Decimals
Introduction
The word decimal comes from the Latin word “Decem” meaning 10. . In algebra, a decimal number can be defined as a number whose complete part and the fractional part are separated by a decimal point. Before we learn what we mean by a tenth of decimal it is important to recall the place value system of decimals that defines the position of a tenth in a decimal number.
Place Value System of Decimals
We know that each place in the place value table has a value ten times the value of the next place on its right. In other words, the value of a place is one-tenth of the value of the next place on its left. We observe that if one digit moves one place left to right its value becomes one-tenth ($\frac{1}{10}$ ) of its previous value and when it moves two places left to right its value comes one-hundredth ( $\frac{1}{100}$ ) of its previous term and so on. Therefore, if we wish to move beyond ones place which is the case of decimals, we will have to extend the place value table by introducing the places of tenths ($\frac{1}{10}$ ), hundredths ($\frac{1}{100}$ ), thousandths ( $\frac{1}{1000}$ ) and so on.
Therefore, the place value table in case of a decimal number will be of the form –
For example, the decimal number 257.32 in the place value system will be written as –
A decimal or a decimal number may contain a whole number part and a decimal part. The following table shows the whole number part and the decimal part of some decimals –
Now, how do we read the decimals using the place value system? Let us find out.
Reading the Decimal Numbers using the Place Value System
In order to read decimals, the following steps are used –
1. Read the whole number part
2. Read the decimal point as point
3. Read the number to the right of the decimal point. For example, 14.35 will be read as Fourteen point three five. Alternatively, the number to the right of the decimal point can also be read by reading the number to the right of the decimal point and naming the place value of the last digit. For instance, the number 8.527 can also be read as eight and five hundred twenty seven thousandths.
What are Tenths in a Decimal?
Consider the following figure. It is divided into ten equal parts and one part is shaded. The shaded part represents one-tenth of the whole figure. It is written as $\frac{1}{10}$. $\frac{1}{10}$ is also written as 0.1 which is read as “ point one “ or “ decimal one “.
Thus the fraction $\frac{1}{10}$ is called one-tenth and is written as 0.1.
Also, 1 ones = 10 tenths.
Consider another figure. The below figure is divided into ten equal parts and three parts are shaded. The shaded parts represent three-tenths of the whole figure. It is written as $\frac{3}{10}$. $\frac{3}{10}$ is also written as 0.3 which is read as “ point three “ or “ decimal three “.
Thus the fraction $\frac{3}{10}$ is called three tenth and is written as 0.3.
Also, consider the below figure. The below figure is divided into ten equal parts and six parts are shaded. The shaded parts represent six-tenths of the whole figure. It is written as $\frac{6}{10}$. $\frac{6}{10}$ is also written as 0.6 which is read as “ point six “ or “ decimal six “.
Thus the fraction $\frac{6}{10}$ is called six-tenth and is written as 0.6.
Similarly, $\frac{2}{10}$ , $\frac{4}{10}$ , $\frac{5}{10}$ , $\frac{7}{10}$ , $\frac{8}{10}$ and $\frac{9}{10}$ are called 2-tenths, 4-tenths, 7-tenths, 8-tenths and 9-tenths respectively and are denoted by 0.2 , 0.3 , 0.4 , 0.5 , 0.7 , 0.8 and 0.9 respectively.
Thus we have,
$\frac{1}{10}$ = 0.1 and is called one-tenths or 1 tenths
$\frac{2}{10}$ = 0.2 and is called two-tenths or 2 tenths
$\frac{3}{10}$ = 0.3 and is called three-tenths or 3 tenths
$\frac{4}{10}$ = 0.4 and is called four-tenths or 4 tenths
$\frac{5}{10}$ = 0.5 and is called five-tenths or 5 tenths
$\frac{6}{10}$ = 0.6 and is called six-tenths or 6 tenths
$\frac{7}{10}$ = 0.7 and is called seven-tenths or 7 tenths
$\frac{8}{10}$ = 0.8 and is called eight-tenths or 8 tenths
$\frac{9}{10}$ = 0.9 and is called nine-tenths or 9 tenths
$\frac{10}{10}$ = 1 and is called ten-tenths or 10 tenths
Also, $\frac{11}{10}$ = 11 tenths = 10 tenths + 1 tenths = 1 + $\frac{1}{10}$ = 1 + 0.1 = 1.1
$\frac{12}{10}$ = 12 tenths = 10 tenths + 2 tenths = 1 + $\frac{2}{10}$ = 1 + 0.2 = 1.2
$\frac{13}{10}$ = 13 tenths = 10 tenths + 3 tenths = 1 + $\frac{3}{10}$ = 1 + 0.3 = 1.3
Similarly, we have
$\frac{20}{10}$ = 20 tenths = 10 tenths + 10 tenths = 1 + 1 = 2
$\frac{21}{10}$ = 21 tenths = 20 tenths + 1 tenths = 2 + $\frac{1}{10}$ = 2 + 0.1 = 2.1
$\frac{22}{10}$ = 22 tenths = 20 tenths + 2 tenths = 2 + $\frac{2}{10}$ = 2 + 0.2 = 2.2
Thus a fraction of the form $\frac{Number}{10}$ is written as decimal obtained by putting decimal point by leaving one right-most digit.
For example, $\frac{325}{10}$ = 32.5 while $\frac{5894}{10}$ = 589.4
Let us understand it through an example.
Example Write each of the following as decimals
1. Five ones and four tenths
2. Twenty and one tenths
Solution We have been given the following and we need to write them as decimals. Let us do them one by one
1. Five ones and four tenths
Note that the whole value of the given decimal is 5 and the decimal part is four tenths. Therefore, we will proceed in the same manner as we defined different tenths above.
We will get,
Five ones and four tenths = 5 ones + 4 tenths = 5 + $\frac{4}{10}$ = 5 + 0.4 = 5.4
Hence, Five ones and four tenths In decimal form will be 5.4.
1. Twenty and one tenths
Note that the whole value of the given decimal is 20 and the decimal part is one-tenths. Therefore, we will proceed in the same manner as we defined different tenths above.
We will get,
Twenty ones and one tenths = 20 ones + 1 tenths = 20 + $\frac{1}{10}$ = 20 + 0.1 = 20.1
Hence, Twenty ones and one tenths in decimal form will be 20.1.
Let us take another example.
Example Write each of the following as decimals
1. 20 + 7 + + $\frac{3}{10}$
2. 500 + 3 + $\frac{7}{10}$
Solution We have been given an expanded form of two numbers and we are required to find the corresponding decimal number. Let us do them one by one.
1. 20 + 7 + + $\frac{3}{10}$
We can see that there are two whole numbers and one fractional number.
Note that the whole values of the given decimal are 20 and 7 and the decimal part is three tenths. Therefore, we will proceed in the same manner as we defined different tenths above.
We will get,
20 + 7 + $\frac{3}{10}$ = 20 + 7 + 0.3 = 27.3
1. 500 + 3 + $\frac{7}{10}$
We can see that there are two whole numbers and one fractional number.
Note that the whole values of the given decimal are 500 and 3 and the decimal part is seven tenths. Therefore, we will proceed in the same manner as we defined different tenths above.
We will get,
500 + 3 + $\frac{7}{10}$ = 500 + 3 + 0.7 = 503.7
Let us now see how to plot the tenths of a decimal on a number line.
Representation of tenths of a decimal on a Number Line
Before we learn how to represent a tenth on a number let us recall what we understand by the term number line.
What is a number line?
A number line is a straight horizontal line with numbers placed at even intervals that provides a visual representation of numbers. Primary operations such as addition, subtraction, multiplication, and division can all be performed on a number line. The numbers increase as we move towards the right side of a number line while they decrease as we move left.
Representation on a Number Line
Above is a visual representation of a standard number line. As is clearly visible, as we move from left to right, there is an increase in the value of numbers while it decreases when we move from right to left.
We already know how to represent fractions on a number line. Let us now represent tenths of a decimal on a number line. We can understand this by an example.
Let us represent 0.4 on a number line. We can clearly see that there are 4 tenths in 0.4. Therefore in order to represent 0.4 on a number line we will divide the unit length between 0 and 1 into 10 equal parts and take 4 parts as shown below –
Now, we can know that 0.4 in fraction form is equal to 4/10. Hence we will mark 4/10 as 0.4 which is our desired mark on the number line.
The steps that we used above to represent a tenth on a number line can be summarised as –
1. We draw a number line between 0 and 1.
2. We then raw 10 lines dividing the total distance between 0 and 1 into 10 equal parts.
3. Now, one whole divided into 10 parts is equal to $\frac{1}{10}$.
4. $\frac{1}{10}$ in decimal form is equal to 0.1.
5. At each new line we are adding $\frac{1}{10}$ or 0.1.
6. So, between 0 and 1 we have, 0 . 1 , 0 . 2 , . 0 . 3 , 0 . 4 , 0 . 5 , 0 . 6 , 0 . 7 , 0 . 8 and 0 . 9. Similarly, between 1 and 2 we have, 1 . 1 , 1 . 2 , 1 . 3 , 1 . 4 , 1 . 5 , 1 . 6 , 1 . 7 , 1 . 8 and 1 . 9.
7. We can also say that the line representing $\frac{1}{2}$ or 0.5 is the half way mark between 0 and 1. Similarly, the line representing $1\frac{5}{10}$ or 1.5 is the half way mark between 1 and 2.
8. Ten tenths is equal to one whole.
Now let us go through some solved examples on tenth of a decimal.
Solved Examples
Example 1 Label the missing decimal numbers on the number line.
Solution We have been given four numbers marked as A , B , C and D on a number line and we need to find out which decimal numbers they represent. Let us mark them one by one.
We will start with completing the marking of the lines that have not been marked on the given number line. It can be clearly seen that there are 10 lines between two whole numbers on the number line. This means that the lines represent one tenth of the number in the decimal form. Therefore, the lines between 7 and 8 will be marked as 7 . 1 , 7 . 2 , 7 . 3 , 7 . 4 , 7 5 , 7 . 6 , 7 . 7 , 7 . 8 and d 7. 9. Similarly, We between the whole numbers 8 and 9 we have, 8 . 1 , 8 . 2 , 8 . 3 , 8 . 4 , 8 . 5 , 8 . 6 , 8 . 7 , 8 . 8 and 8 . 9. The number line so obtained will be –
Now, we shall check the position of the four points on this number line.
We can see that from the number line above, the point A lies on the decimal number 8.7. Hence A = 8 . 7
Now, let us check the position of point B.
We can see that from the number line above, the point B lies on the decimal number 8.2. Hence B = 8 . 2
Now, let us check the position of point C.
We can see that from the number line above, the point C lies on the decimal number 7 . 1. Hence C = 7 . 1
Now, let us check the position of point D.
We can see that from the number line above, the point D lies on the decimal number 7 . 8 . Hence D = 7 . 8
Therefore, we have,
A = 8 . 7
B = 8 . 2
C = 7 . 1
D = 7 . 8
Example 2 Between what two numbers is does the decimal number 5.4 lie on the number line?
Solution We have been given the decimal number 5.4 and we need to check between which two whole numbers will it lie.
On observing the number 5.4 we can see that the number represents a tenth of a decimal as it has one digit after the decimal point.
Also, we know that 5.4 = 5 + $\frac{4}{10}$
This means that 5.4 is equal to 5 whole parts plus 4 tenths. Et us plot it on the number line. we will have,
We can clealry see that 5.4 will lie between 5 and 6. The point on the number line will be –
Hence, we can say that the number 5.4 will be between the whole numbers 5 and 6.
Example 3 Write the following fraction in decimal form
18 $\frac{5}{10}$
Solution We have been given the fraction 18 510 and we need to write it in decimal form. we can see that the given fraction has one whole number 18 and five tenth. We also know that 510 = 0.5 and is called five-tenths or 5 tenths. Therefore, we have,
18 $\frac{5}{10}$ = 18 + $\frac{5}{10}$ = 18 + 0.5 = 18.5
Hence, 18 $\frac{5}{10}$ = 18.5
Key Facts and Summary
1. A decimal number can be defined as a number whose complete part and the fractional part are separated by a decimal point.
2. The Place Value System is the system in which the position of a digit in a number determines its value. The place value of a digit in a number is the value it holds to be at the place in the number.
3. In order to read decimals, we first
4. Read the whole number part
5. Read the decimal point as point
6. Read the number to the right of the decimal point. For example, 14.35 will be read as Fourteen point three five.
7. The fraction $\frac{1}{10}$ is called one-tenth and is written as 0.1.
8. 1 ones = 10 tenths
9. A number line is a straight horizontal line with numbers placed at even intervals that provides a visual representation of numbers. Primary operations such as addition, subtraction, multiplication, and division can all be performed on a number line.
|
2022-05-17 01:10:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7112103700637817, "perplexity": 402.6176432928394}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515466.5/warc/CC-MAIN-20220516235937-20220517025937-00096.warc.gz"}
|
https://teamrankstar.com/2018/06/22/taking-chances-probability-and-the-fall-of-argenport/
|
# Taking Chances: Probability and the Fall of Argenport
Hi, jez2718 here again with another article on using probability to inform deck building. In my last article I introduced the framework for calculating deck building probabilities, and looked at some basic questions you can inform via probability. In this article we’ll be building on that (so if you haven’t read the last article, I recommend reading it first) to look at evaluating a couple of strategies made relevant by spoiled Fall of Argenport cards.
# “Discard is consistency” and pre-boarding
To the consternation of many competitive players (including me) Scarlatch commented the other week that DWD intends to use the market as a stand-in for side boarding in official organised play events. Resheph and I are in the process of writing an article quantifying the relative impacts of sideboarding vs. markets in a best of three, but there are certain things that you simply cannot do with markets. Notably, the earliest you can play a market card is turn 4.
We’ve been seeing a bit of a dry spell for fast aggro the last few months, but anyone who remembers back to when Rally decks (or earlier, Jito decks) were prevalent on ladder will know that some decks can simply kill you by turn 4. So the market is not a reliable saviour against these sorts of decks. A possible solution to this is to pre-board against such matchups, i.e. running in your main deck powerful but narrow cards like Lightning Storm for a prevalent matchup.
The weakness of pre-boarding of course is it dilutes your deck and reduces consistency. Drawing a Lightning Storm vs Big Combrei or Unitless makes you feel silly. This is where discard effects can come in. I here use “discard” in a loose sense of anything that trades a card in your hand for another card or a powerful effect: so alongside looters such as Nocturnal Observer and similar cards like the recently spoiled Lumen Attendant, I also here refer to effects like Strategize and, importantly, Merchants. The motto “discard is consistency” highlights that these sorts of effects can boost the consistency of your deck by converting dead cards into gas. So we ask the question:
## How many discard effects do I need to get away with running situational cards?
Let us consider the specific case of Lightning Storm. Suppose that fast aggro decks are, say, 20% of the meta, and that if you don’t draw a Lightning Storm by turn 3 your deck struggles vs these decks. Suppose the other 80% of the time the card is basically dead.
To reliably draw the Lightning Storm by turn 3, you really need to be running 3 or 4 copies of the card (with probabilities of 32% and 40% on the play respectively). Let’s say you run 3, after all this is a sideboard card, so you don’t want to dilute your deck too much, and you might still want one in the market.
The question is, how many discard effects should I run so that I can be confident of not drawing fewer discards than Lightning Storms on turns say 2 through 8? The probability of this on turn N with D discard effects is given, in the notation of my previous article, by the formula:
$\sum_{i=1}^{3} \text{HG}(75,3,6+N,i,=)\text{HG}(72,D,6+N-i,i,<)$
Recall that the first term of this is the (hypergeometric) probability of drawing exactly i Lightning Storm, and the second term is the probability of drawing fewer than i discard effects given that we draw i Lightning Storm. Calculating this we find
Probability of drawing more Lightning Storms than discard effects (on the play) with 3 Lightning Storm in deck Number of discard effects Turn 2 3 4 5 6 7 8 1 26.5% 29.1% 31.5% 33.7% 35.9% 37.9% 39.9% 2 24.2% 26.2% 28.0% 29.7% 31.2% 32.6% 33.9% 3 22.0% 23.5% 24.9% 26.1% 27.1% 28.1% 28.9% 4 20.0% 21.1% 22.1% 22.9% 23.5% 24.1% 24.5% 5 18.2% 19.0% 19.6% 20.0% 20.4% 20.6% 20.7% 6 16.5% 17.0% 17.3% 17.5% 17.6% 17.6% 17.5% 7 15.0% 15.2% 15.3% 15.3% 15.2% 15.0% 14.8% 8 13.5% 13.6% 13.5% 13.4% 13.1% 12.8% 12.5% 9 12.3% 12.2% 11.9% 11.6% 11.3% 10.9% 10.5% 10 11.1% 10.8% 10.5% 10.1% 9.7% 9.2% 8.8% 11 10.0% 9.7% 9.2% 8.8% 8.3% 7.8% 7.3% 12 9.0% 8.6% 8.1% 7.6% 7.1% 6.6% 6.1%
Looking at these numbers, it seems just running the four Jennev Merchants still leads to a pretty good chance of having a dead Lightning Storm. To really use this strategy effectively, you probably want to be looking at more like 7, 8 or more discard effects. Still with say 4 Strategize, and with the fact that we’ve ignored the possibility of redrawing hands with Lightning Storm or shipping them with Crests, this strategy could well be a viable way to fight against fast aggro without a sideboard. Note the trade-off however. With 8 discard effects, 3 Lightning Storm and a 20% fast aggro meta: 6.5% of the time you draw a Lightning Storm on time and hose fast aggro, 10.5% of the time you draw a mostly dead card and the other 83% of the time nothing much happens. So if you’re employing this strategy, the card has to really have an impact when you want it, and the decks you want it for need to be sufficiently common.
# Standards in Aggro
Last weekend, DWD spoiled the remaining four Standard/Tactic cards, and especially the Fire, Justice and Shadow ones raise an interesting question.
## How many of these should you run in Aggro?
Obviously, the combat trick side of these cards is great. I think the Fire one is especially important, as by the time it transmutes Aggro will typically be trying to push for those final points of damage. And it helps that these don’t reveal their presence with pauses until they transmute. However, since an Aggro deck wants to always curve out, running more depleted power is a risk.
Now, questions like this are deceptively complicated. There are a lot of factors that go in to determining what your power should look like. So though we can use the techniques in my last article to find a few relevant probabilities, we must remember these tell only part of the story. Another approach, which we’ll touch on briefly, is to run simulations to save us very tricky calculations.
## Some important probabilities
For simplicity, let us suppose our Aggro deck runs 25 Power: 4 Seats, 4 Banners, 0-8 Standards and the rest Sigils. Often a Stonescar deck might supplement this with a couple of Vara’s Favor, but for the purposes of curving out a Favor is very unlike even a depleted Power. Recent Rakano decks have been running exactly 25 sources, as did Sunyveil’s Stonescar at Worlds, so this isn’t unreasonable.
### Probability of having undepleted power on curve on turns 1, 2, 3
This probability is the probability of drawing at least X power on turn X, at least one of which is undepleted Power. We can count Seats as depleted (as if you drew an undepleted seat, you already drew a undepleted Power). On turn 1 Banners are depleted (Infernus is a bad card!), and are only undepleted on turn 2 if you drew another undepleted source. We’ll assume they’re undepleted on turn 3 however. So if we run S standards, we have 17-S undepleted sources in deck on turns 1 and 2 and 21-S on turn 3. Thus on turn X = 1,2 we calculate:
$\sum_{i=X}^{6+X} \text{HG}(75,25,6+X,i,=)\text{HG}(25,17-S,i,1,\geq)$
and on turn 3
$\sum_{i=3}^{9} \text{HG}(75,25,9,i,=)\text{HG}(25,21-S,i,1,\geq)$
where the first term is the chance of drawing exactly i power, and the second the chance that at least one of that power is undepleted. We find:
Standards Turn 1 2 3 0 84.9% 78.6% 63.4% 1 82.8% 77.4% 63.3% 2 80.5% 76.0% 63.2% 3 78.0% 74.3% 63.0% 4 75.2% 72.3% 62.6% 5 72.1% 69.9% 62.2% 6 68.7% 67.3% 61.5% 7 64.9% 64.2% 60.7% 8 60.8% 60.6% 59.7%
### Probability of drawing 5 non-Standard Power and a Standard
This tells how often the standard will actually be worth it, since you don’t really want to be relying on your other standards to get your Tactics on line. The probability is relatively simple. On turn N:
$\sum_{i=5}^{5+N} \text{HG}(75,25-S,6+N,i,=)\text{HG}(50+S,S,6+N-i,1,\geq)$
Probability of drawing 5 non-Standard power and a Standard Standards Turn 5 6 7 8 1 2.6% 4.0% 5.8% 7.8% 2 4.3% 6.6% 9.5% 12.9% 3 5.2% 8.1% 11.6% 15.8% 4 5.5% 8.6% 12.5% 17.1% 5 5.4% 8.5% 12.4% 17.1% 6 5.0% 8.0% 11.7% 16.2% 7 4.5% 7.1% 10.6% 14.7% 8 3.8% 6.1% 9.1% 12.8%
### Probability of undepleted Power on curve on turns 2 AND 3
This is made harder by the way Seats work. It can be calculated exactly, but the calculations are a bit too messy for this article. Instead we can estimate more easily with a simple simulation: we shuffle a deck of 75 cards (50 non-power, 4 Banners, 4 Seats, S Standards and 17-S Sigils) thousands of times, and count how often we have undepleted Power on curve. We can even incorporate the redraw rule into our shuffle if we please.
Standards Probability of undepleted Power on curve turns 2 and 3 (estimate from 10,000 shuffles each) 0 53.4% 1 52.6% 2 51.8% 3 50.0% 4 49.1% 5 47.5% 6 45.0% 7 42.5% 8 40.2%
## Simulations in a goldfish bowl
A good metric for the quality of an Aggro list is the “average goldfish kill”, i.e. the average number of turns it takes for the deck to kill an opponent who does literally nothing. Finding the exact effect of running some number of Standards on this quantity would be a hell to calculate. But if you wanted, you could estimate this more easily by running simplified simulations. For example, one could consider a deck with an Aggro-like curve of vanilla X/X for X, 25 Power, some (0 to 8) of which are Standards that turn into “deal 4 damage” spells for 2, and then test a couple of thousand shuffles for their average goldfish kill for each amount of standards.
# Wrapping up
Set 4 is shaping up to be an interesting set, raising various questions for deck builders. Hopefully this article has shed some light on a couple of these questions. The full impact of Merchants in particular will be exciting, and keep your eyes out for an article by Resheph and myself on how markets compare to sideboards.
Finally, don’t forget that TGP hosts a “Casual Friday” tournament every Friday at 5 EDT!
Until next time,
|
2018-07-22 18:14:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6044966578483582, "perplexity": 1378.9809514608824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593438.33/warc/CC-MAIN-20180722174538-20180722194538-00469.warc.gz"}
|
https://www.physicsforums.com/threads/vector-problem.43252/
|
# Vector Problem
1. Sep 15, 2004
### Dorita
I was told to solve this non graphically. Can this problem be solved non graphically? If so, what are the appropriate steps involved. Really need to understand this stuff.
Vector A forms an angle θ with the positive part of the x axis. Find the components of A along x and y if:
a. |A| = 8 m, θ = 60º
b. |A| = 6 m, θ = 120 º
c. |A| = 1.2 m, θ = 225º
m denotes meters
Thank you very much for all the help. This is my first day on this forum and it's really amazing. I've learned so much just reading all the different threads.
Keep it up.
Dora
Last edited: Sep 15, 2004
2. Sep 15, 2004
### Chi Meson
There are different ways of solving for vector components; judging from the way the question is asked, the following might make the most sense.
Whenever you are dealing with the angle FROM the x-axis, the x-component of the vector will be the magnitude of the vector times the cosine of the angle; the y-component will be the magnitude times the sine of the angle.
If you later are given the angle from the y-axis, then the cosine function will give you the y component and the sine will give you the x component.
In general, the component along any axis will always be the magnitude times the cosine of the angle to that axis.
Last edited: Sep 15, 2004
3. Sep 15, 2004
### Dorita
I had to edit the question.
|A| = 8 m, θ = 60º not A = 8 m, θ = 60º
Sorry!
Dora
4. Sep 15, 2004
### Chi Meson
The answer is the same. The absolute value bars around the "A" is the same thing as saying "the magnitude of vector A." In this case the magnitude is 8 m regardless of its direction .
|
2016-10-23 20:41:58
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8737859129905701, "perplexity": 582.0995952475107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719416.57/warc/CC-MAIN-20161020183839-00534-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://crypto.stackexchange.com/questions/86507/probability-of-false-positive-key-matching-two-plaintext-ciphertext-pairs
|
# Probability of false positive key matching two plaintext/ciphertext pairs
Given a keyspace of $$2^{80}$$ and plaintext space of $$2^{64}$$. And two plaintext and ciphertext pairs $$(x_1, y_1)$$ , $$(x_2, y_2)$$. Now we have $$2^{80}/2^{64} = 2^{16}$$ keys that encrypt $$x_1$$ to $$y_1$$ and another $$2^{16}$$ keys that encrypt $$x_2$$ to $$y_2$$, with only one key that is supposed to be the target key (correct key).
What is the probability that once brute-force identifies a first key ($$k_1$$) this same key happen by mistake to also encrypt $$x_2$$ to $$y_2$$, that is this this key happen to be a False-positive (that is, this key will likely not encrypt $$x_3$$ correctly). What is the equation used and how is it derived?
• Hint: when the wrong key was found by accident, we can consider that the cipher implements a random permutation among these that map $x_1$ to $y_1$, and therefore maps $x_2$ to a random element other than $y_1$. The probability that you want follows (because the question looks like homework, our policy require to only give hints).
– fgrieu
Nov 27 '20 at 6:53
• @fgrieu I can't really see the hint here. this question is actually not a homework I just can't see how to calculate this possibility for any block cipher (either for ciphers having larger or smaller key space than plaintext) .
– KMG
Nov 27 '20 at 7:51
• your key space is higher than your plaintext (message space). now cipher function F may use F(K1,P1) = C1 and again once the plaintext is repeated F(K2,P1)=C2. Until you exhausted the key space. you can still decrypt the P1 correctly, with either K2 or K1. It is the Key space that is important. In short, every element e belongs to Key space K . uniquely determine a bijection from Message Space M to ciphertext C.
– SSA
Nov 27 '20 at 9:51
Under an ideal cipher model, every key implements a random permutation. A random wrong key that maps $$x_1$$ to $$y_1$$ thus maps $$x_2\ne x_1$$ to a random ciphertext $$y_2'$$ other than $$y_1$$. For a $$b$$-bit block cipher, there are $$2^b-1$$ such ciphertexts, thus the probability that $$y_2'=y_2$$ is $$1/(2^b-1)$$.
The probability that an incorrect key survives two tests is thus $$p=1/(2^b\,(2^b-1))$$.
A random $$k$$-bit key has probability $$q=2^{-k}$$ to be correct. It passes two tests with certainty if correct, with probability $$p$$ otherwise. Thus a random key has probability $$q+(1-q)\,p$$ to pass two tests [where the $$q$$ term is for the correct key, the $$(1-q)\,p$$ term is for incorrect keys, and obtained as the the probability that a key is incorrect, times the probability that it nevertheless passes the test with $$(x_1,y_1)$$ and $$(x_2,y_2)$$ ].
Thus a random key known to pass two tests has probability $$q/(q+p\,(1-q))$$ to be correct [where the numerator $$q$$ is the probability for a random key to be correct, and the denominator is the probability that a random key pass two tests]. That simplifies to $$1/(1+p\,(1/q-1))$$.
The desired probability of a false positive is the complement, that is \begin{align}1-1/(1+p\,(1/q-1))\,&=\,1/(1+1/(p\,(1/q-1)))\\&=\,1/(1+2^b\,(2^b-1)/(2^k-1))\end{align}
For $$b$$ and $$k$$ at least 7, that's $$1/(1+2^{2b-k})$$ within 1%. When further $$2b-k$$ is at least 7, that's $$2^{k-2b}$$ within 1%, here $$2^{-48}$$, that is less than one in 280 million million.
More generally, it can be shown that the probability of false positive after testing $$n$$ distinct plaintext/ciphertext pairs is $$1/(1+(2^b)!/((2^b-n+1)!(2^k-1)))$$. For common block ciphers like DES and wider, that's very close to $$1/(1+2^{n\,b-k})$$, and when $$n\,b-k$$ is at least 7, that's $$2^{k-n\,b}$$ within 1%.
• thanks a lot first. I seems to follow well the answer until the part q+(1-q)p so if you can explain a little bit more this part will be really helpful.
– KMG
Nov 27 '20 at 10:26
• @Khaled Gaber: that's more detailed now. Also I made a last paragraph covering the case of $n$ plaintext/ciphertext pairs, and made the approximations in the last two paragraphs quantitative.
– fgrieu
Nov 27 '20 at 11:18
• thanks a lot this answer couldn't be better explained, I got it now.
– KMG
Nov 27 '20 at 12:24
From probability: Let X be an experiment with possible different outcomes $$x_1 ,...,x_n$$ with respective probabilities $$P(x_1)=p_1,...P(x_n)=p_n$$ . Let A be the subset of sample space $${ x_1..,x_n}$$ with probability P(A)=p. Let K <= N integers with N >0 and K>=0, $$\begin{pmatrix}N \\k \\ \end{pmatrix} p^k (1-p) ^{ (N-k)} \tag{1}$$ that A occurs in Exactly k of N trials.
now, if we use birthday attack, we are looking for the probability that after n trials at least 2 outcomes will be the same is at least $$1- e^ {-1/2(n-1)n/N} \tag{2}$$. hence, for $$n >{\sqrt {2 ln 2}}{\sqrt N} \tag{3}$$. The probability is at least 1/2 that two outcomes will be the same.
For the proof it is better to compute the probability that no two outcomes are the same and subtract this result from 1 to obtain the desired result. we can consider the n trials in order and compute the probability of no two identical outcomes for n trials in terms of the result for n-1 trials.
For ex. after one trial probability is 1, as there is only one outcome. After two trials, there is only 1/N chance that the second trial had an outcome equal to the outcome of first one. which means , in our case,cipher function F has used same key K, so the probability is 1-(1/N) that outcomes of two trials will be different. so, P(n trials all different )= $${(1-1/N)(1-2/N)... (1-((n-1)/N)) }\tag{4}$$
Comparing with Taylor's Expansion for $$e^x, where,{e^x = 1 + x} \tag{5}$$ for first order approximation. Taking $${x \approx -a/N} \tag{6}$$ equation (5) becomes $${e^ \frac{-a}{N}}\approx 1-\frac{a}{N} \tag{7}$$ , now the equation (4) is.. $${e^ \frac{-1}{N} \cdot e^\frac{-2}{N} }\cdots{e^\frac{-(n-1)} {N}\tag{8}}$$ , We take sum of n natural numbers $${e^ \frac{-(n(n-1))/2}{N}}$$ For a larger n, we can take $$n(n-1)\approx n^2 \tag{9}$$, now P(Same) = 1 - P(different) which is $${1- e^\frac{-n^2}{2N}\tag{10}}$$
• How would your $n$ and $N$ relate to the 64 (or 128) bit block size and 80 bit key size of the question? I still do no see where the birthday bound matters in the question.
– fgrieu
Nov 29 '20 at 14:41
• N is a key space 2^t, and n is the number of trials. It will give us the probability after how many trials a cipher may need for two same keys.
– SSA
Nov 30 '20 at 6:26
|
2021-09-26 12:44:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 61, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8253107666969299, "perplexity": 492.55854391275363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057861.0/warc/CC-MAIN-20210926114012-20210926144012-00576.warc.gz"}
|
https://www.peertechz.com/articles/JCEES-2-111.php
|
ISSN: 2455-488X
##### Journal of Civil Engineering and Environmental Sciences
Research Article Open Access Peer-Reviewed
# Impacts of Meteorological Factors on Particulate Pollution: Design of Optimization Procedure
### Utkan Özdemir*
Department of Environmental Engineering, University of Kocaeli, Umuttepe, Turkey
*Corresponding author: Utkan Özdemir, Department of Environmental Engineering, University of Kocaeli, 41380 Umuttepe, Turkey, Tel: +90 2623033188; Fax: +90 2623033005; E-mail: [email protected]; [email protected]
Accepted: 01 December, 2016 | Received: 29 December, 2016 | Published: 30 December, 2016
Keywords: Meteorological conditions; Orthogonal array; PM10 pollution; Prediction; Taguchi
Cite this as
Özdemir U (2016) Impacts of Meteorological Factors on Particulate Pollution: Design of Optimization Procedure. J Civil Eng Environ Sci 2(1): 030-033. DOI: 10.17352/2455-488X.000011
In this study, Taguchi L8 orthogonal array design was applied to determine the most polluted meteorological conditions in Kocaeli. Meteorological factors were decided as temperature, relative humidity and rainfall in two different levels. Larger is better function was applied for calculation of signal-to-noise ratios. The impact ratios of meteorological factors were also determined by using Taguchi model. PM10 concentrations were predicted by the model. Results of the model showed that predicted and obtained concentrations were closer to each other. These calculations and results show the success of Taguchi model in this study.
### Introduction
Particulate matter (PM) is one of the major air pollutants in urbanization region [1]. Effects of PM pollution on human and environmental systems have been discussed by many of scientists. Particulates with aerodynamic diameters <10µm (PM10) and <2.5µm (PM2.5) causes lung cancer, asthma, morbidity and mortality [2,3]. So, air pollution control and monitoring are very important for saving ecological system.
Relationship between aerosol concentration and meteorological variables should be investigated for better control and monitoring applications. Aerosol concentrations are controlled by atmospheric mixing, chemical transformation, emission etc. [4]. Although this presence of concentration-meteorology system is real, the understanding of the connection between meteorological factors (relative humidity, temperature, wind, rainfall etc.) with particulate matter is not clear [5]. Because in lots of countries, researchers have limited number of studies about different aerosol fractions for a long time to characterization of relationship between meteorological factors and air pollution [4-6].
In last decades, some statistical and optimization tools such as multiple linear regression analyses [7], artificial neural networks [7,8], Box-Behnken designs [9], Taguchi orthogonal arrays [10-12] etc. are used for investigation of pollution in different environments. Taguchi bases on statistical design systems and successfully applied in lots of scientific disciplines. Compared to other statistical methods, Taguchi model is simple, effective and innovator for investigation of environmental risks [13,14]. Harmful effects of multiple factors on environment can be investigated by this model. On the other hand, the influence of individual factors is more important for success of this model [10].
In line with the above information, statistical and optimization model studies that can explain the connection between pollutant concentration and meteorological factors should be increased. Taguchi generally require less data and researchers can make successful comments with orthogonal array designs. So, this method will find a place in the field of the effects of meteorological factors on distribution of air pollutants (especially particulate matter). In this study, the impacts of three different meteorological factors (temperature, relative humidity, rainfall) on the average concentration of particulate matter (PM10) will be examined by Taguchi’s L8 orthogonal arrays. Air pollution levels of PM10 were also predicted by using the model. For this study, air monitoring data (average pollutant levels and meteorological factors) obtained from Ministry of Environment and Urban Planning in industrial areas of Kocaeli.
### Materials and Methods
##### Study area
Kocaeli is one of the important and crowded cities in Turkey. Lots of heavy industrial plants have been located here. On the other hand, it has quite an intense traffic network. So, air pollution is the major environmental problem in this city.
In this study, PM10 and meteorological data sets obtained from the station of Ministry of Environment and Urban Planning. The study area is located between latitude of 40° 46’ North and 29° 31’ East (Figure 1).
##### Taguchi procedure
Selection of the control factors is the most important step in Taguchi applications. Temperature, relative humidity and rainfall with two levels were selected as control factors in this model study (Table 1).
In Taguchi model studies, the variability of factors expressed by signal-to-noise ratios ($S N MathType@MTEF@5@5@+=feaaguart1ev2aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLnhiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr4rNCHbGeaGqk0Jf9crFfpeea0xh9v8qiW7rqqrFfpeea0xe9Lq=Jc9vqaqpepm0xbba9pwe9Q8fs0=yqaqpepae9pg0FirpepeKkFr0xfr=xfr=xb9adbaqaaeaaciGaaiaabeqaamaabaabaaGcbaqcfa4aaSGaaeaacaWGtbaabaGaamOtaaaaaaa@3A4E@$ ). One of the $S N MathType@MTEF@5@5@+=feaaguart1ev2aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLnhiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr4rNCHbGeaGqk0Jf9crFfpeea0xh9v8qiW7rqqrFfpeea0xe9Lq=Jc9vqaqpepm0xbba9pwe9Q8fs0=yqaqpepae9pg0FirpepeKkFr0xfr=xfr=xb9adbaqaaeaaciGaaiaabeqaamaabaabaaGcbaqcfa4aaSGaaeaacaWGtbaabaGaamOtaaaaaaa@3A4E@$ functions must be chosen as “smaller is better”, “nominal is better” and “larger is better” [15]. Larger is better characteristics was chosen for this work. Because this study was designed to investigate the meteorological factors which cause maximum PM10 concentration in Kocaeli. The $S N MathType@MTEF@5@5@+=feaaguart1ev2aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLnhiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr4rNCHbGeaGqk0Jf9crFfpeea0xh9v8qiW7rqqrFfpeea0xe9Lq=Jc9vqaqpepm0xbba9pwe9Q8fs0=yqaqpepae9pg0FirpepeKkFr0xfr=xfr=xb9adbaqaaeaaciGaaiaabeqaamaabaabaaGcbaqcfa4aaSGaaeaacaWGtbaabaGaamOtaaaaaaa@3A4E@$ ratio for the larger is better function was calculated as:
$S N =−10log( 1 n ∑ 1 y i 2 ) MathType@MTEF@5@5@+=feaaguart1ev2aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLnhiov2DGi1BTfMBaeXafv3ySLgzGmvETj2BSbqefm0B1jxALjhiov2Daebbfv3ySLgzGueE0jxyaibaiKc9yrVq0xXdbba91rFfpec8Eeeu0xXdbba9frFj0=OqFfea0dXdd9vqaq=JfrVkFHe9pgea0dXdar=Jb9hs0dXdbPYxe9vr0=vr0=vqpWqaaeaabiGaciaacaqabeaadiqacqaaaOqaamaaliaabaGaam4uaaqaaiaad6eaaaGaeyypa0JaeyOeI0IaaGymaiaaicdaciGGSbGaai4BaiaacEgadaqadaqaamaalaaabaGaaGymaaqaaiaad6gaaaWaaabqaeaadaWcaaqaaiaaigdaaeaacaWG5bWaa0baaSqaaiaadMgaaeaacaaIYaaaaaaaaeqabeqdcqGHris5aaGccaGLOaGaayzkaaaaaa@4D23@$
where yi is the observed PM10 concentrations and n is the number of repetitions.
L8 orthogonal array was chosen for this study. Table 2 shows L8 orthogonal array of meteorological factors onto PM10. All calculations and applications of this study were hosted in Minitab 16 software package and Microsoft Excel 2007.
### Results and Discussions
In this study, the general goal was to determine the impacts of some meteorological factors such as temperature, relative humidity and rainfall onto PM10 pollution in an industrial area of Kocaeli. Taguchi method was designed as L8 orthogonal array for this work.
In Taguchi applications, calculating of $S N MathType@MTEF@5@5@+=feaaguart1ev2aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLnhiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr4rNCHbGeaGqk0Jf9crFfpeea0xh9v8qiW7rqqrFfpeea0xe9Lq=Jc9vqaqpepm0xbba9pwe9Q8fs0=yqaqpepae9pg0FirpepeKkFr0xfr=xfr=xb9adbaqaaeaaciGaaiaabeqaamaabaabaaGcbaqcfa4aaSGaaeaacaWGtbaabaGaamOtaaaaaaa@3A4E@$ ratios are the most important stage for evaluate the meteorological and PM10 data sets clearly. $S N MathType@MTEF@5@5@+=feaaguart1ev2aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLnhiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr4rNCHbGeaGqk0Jf9crFfpeea0xh9v8qiW7rqqrFfpeea0xe9Lq=Jc9vqaqpepm0xbba9pwe9Q8fs0=yqaqpepae9pg0FirpepeKkFr0xfr=xfr=xb9adbaqaaeaaciGaaiaabeqaamaabaabaaGcbaqcfa4aaSGaaeaacaWGtbaabaGaamOtaaaaaaa@3A4E@$ ratios show the consistency between control factors (temperature, relative humidity and rainfall) and response data (PM10). Larger is better type of $S N MathType@MTEF@5@5@+=feaaguart1ev2aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLnhiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr4rNCHbGeaGqk0Jf9crFfpeea0xh9v8qiW7rqqrFfpeea0xe9Lq=Jc9vqaqpepm0xbba9pwe9Q8fs0=yqaqpepae9pg0FirpepeKkFr0xfr=xfr=xb9adbaqaaeaaciGaaiaabeqaamaabaabaaGcbaqcfa4aaSGaaeaacaWGtbaabaGaamOtaaaaaaa@3A4E@$ ratios were calculated according to Eq. (1) and the results were given in Table 3.
As seen from Table 3, obtained data sets providing the highest signal-to-noise ratio are as 1-2-2. This code (1-2-2) indicates that PM10 concentration became higher in winter seasons conditions (temperature <15 ºC, relative humidity > %65 and rainfall > 55 mm). PM10 concentrations in winter are higher in Kocaeli because of warming-induced. This shows models better reality capacities of Taguchi models. Minitab graph of were also given in Figure 2.
$S N MathType@MTEF@5@5@+=feaaguart1ev2aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLnhiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr4rNCHbGeaGqk0Jf9crFfpeea0xh9v8qiW7rqqrFfpeea0xe9Lq=Jc9vqaqpepm0xbba9pwe9Q8fs0=yqaqpepae9pg0FirpepeKkFr0xfr=xfr=xb9adbaqaaeaaciGaaiaabeqaamaabaabaaGcbaqcfa4aaSGaaeaacaWGtbaabaGaamOtaaaaaaa@3A4E@$
Taguchi model shows that temperature is the most important meteorological factors on PM10 pollution in Kocaeli. The impact ratio of temperature was calculated as 69.40%. Whereas impacts of temperature were higher, relative humidity was the less effective factor in this model study. Impact ratios of the meteorological factors were presented in Table 4.
Taguchi statistical optimization model has also a prediction capacity. PM10 concentrations in Kocaeli were also predicted by this model. Predicted pollutant concentrations versus obtained concentrations are given in Figure 3.
Figure 3 shows that higher correlation was obtained between predicted and real PM10 concentrations (R2= 0.95). So, Taguchi model was successful in prediction of pollutant concentration of Kocaeli.
### Conclusion
Taguchi’s L8 orthogonal array design was applied in this study for determine the meteorological conditions which have the most intense PM10 concentrations in Kocaeli. Signal-to-noise ratios showed that temperature was the most important factor in particulate pollution for this study. It has 69.40% impact ratio on PM10 concentration. This study demonstrates the seasons of area are very remarkable in particulate concentration. Because Taguchi data indicates that PM10 concentration became higher in winter seasons conditions (temperature <15 ºC, relative humidity > %65 and rainfall > 55 mm). On the other hand, higher correlation (R2=0.95) was determined the strong relationship between predicted and actual PM10 concentrations.
1. McKendry IG (2000) PM10 levels in the Lower Fraser Valley, British Columbia, Canada: an overview of spatiotemporal variations and meteorological controls. J Air Waste Ma 50: 443-452. Link: https://goo.gl/6jW0Ro
2. Brandt C, Kunde R, Dobmeier B, Schnelle-Kreis J, Orasche J, et al. (2011) Ambient PM10 concentrations from wood combustion-emission modeling and dispersion calculation for the city area of Augsburg, Germany. Atmos Environ 45: 3466-3474. Link: https://goo.gl/ZTy1xn
3. Akyüz M, Cabuk H (2009) Meteorological variations of PM2.5/PM10 concentrations and particle-associated polycyclic aromatic hydrocarbons in the atmospheric environment of Zonguldak, Turkey. J Hazard Mater 170: 13-21. Link: https://goo.gl/035UqZ
4. Choi YS, Ho CH, Cheni D, Noh YH, Song CK (2008) Spectral analysis of weekly variation in PM10 mass concentration and meteorological conditions over China. Atmos Environ 42; 655-666. Link: https://goo.gl/muZV4d
5. Pateraki St, Asimakopoulos DN, Flocas HA, Maggos Th, Vasilakos Ch (2012) The role of meteorology on different sized aerosol fractions (PM10, PM2.5, PM2.5–10). Sci Total Environ 419: 124-135. Link: https://goo.gl/s9J12K
6. Lee S, Ho CH, Choi YS (2011) High-PM10 concentration episodes in Seoul, Korea: background sources and related meteorological conditions. Atmos Environ 45: 7240-7247. Link: https://goo.gl/lOYkTD
7. Özdemir U, Özbay B, Veli S, Zor S (2011) Modeling adsorption of sodium dodecyl benzene sulfonate (SDBS) onto polyaniline (PANI) by using multi linear regression and artificial neural networks. Chem Eng J 178: 183-190. Link: https://goo.gl/vtbUKp
8. Yetilmezsoy, K., Demirel, S., (2008) Artificial neural network (ANN) approach for modeling of Pb(II) adsorption from aqueous solution by Antep pistachio (Pistacia Vera L.) shells. J Hazard Mater 153: 1288-1300. Link: https://goo.gl/exwJu2
9. Gengec E, Özdemir U, Özbay B, Özbay İ, Veli S (2013) Optimizing dye adsorption onto a waste-derived (modified charcoal ash) adsorbent using Box–Behnken and Central Composite Design procedures. Water Air Soil Poll 224: 1751. Link: https://goo.gl/VcXwCJ
10. Al G, Özdemir U, Aksoy Ö (2013) Cytotoxic effects of Reactive Blue 33 on Allium cepa determined using Taguchi's L8 orthogonal array. Ecotox Environ Safe 98: 36-40. Link: https://goo.gl/pm7Alv
11. Sadeghi SH, Moosavi V, Karimi A, Behnia N (2012) Soil erosion assessment and prioritization of affecting factors at plot scale using the Taguchi method. J Hydrol 448-449: 174-180. Link: https://goo.gl/pww4Ol
12. Mohan SV, Mouli PC (2008) Assessment of aerosol (PM10) and trace elemental interactions by Taguchi experimental design approach. Ecotox Environ Safe 69: 562-567. Link: https://goo.gl/w1Y0iS
13. Yusoff N, Ramasamy M, Yusup S (2011) Taguchi’s parametric design approach for the selection of optimization variables in a refrigerated gas plant. Chem Eng Res Des 89: 665-675. Link: https://goo.gl/IN8BDo
14. Zirehpour A, Rahimpour A, Jahanshahi M, Peyravi M (2014) Mixed matrix membrane application for olive oil wastewater treatment: process optimization based on Taguchi design method. J Environ Manage 132: 113-120. Link: https://goo.gl/kpv8sn
15. Zolgharnein J, Asanjarani N, Shariatmanesh T (2013) Taguchi L16 orthogonal array optimization for Cd (II) removal using Carpinus betulus tree leaves: Adsorption characterization. Int Biodeter Biodegr 85: 66-77. Link: https://goo.gl/QZGRkp
© 2016 Özdemir U. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Subscribe to our articles alerts and stay tuned.
|
2019-10-16 14:51:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6730527281761169, "perplexity": 10253.037095071868}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668994.39/warc/CC-MAIN-20191016135759-20191016163259-00220.warc.gz"}
|
https://radfordneal.wordpress.com/2020/04/23/the-puzzling-linearity-of-covid-19/
|
## The Puzzling Linearity of COVID-19
We all understand how the total number of cases of COVID-19 and the total number of deaths due to COVID-19 are expected to grow exponentially during the early phase of the pandemic — every infected individual is in contact with others, who are unlikely to themselves be infected, and on average infects more than one of them, leading to the number of cases growing by a fixed percentage every day. We also know that this can’t go on forever — at some point, many of the people in contact with an infected individual have already been infected, so they aren’t a source of new infections. Or alternatively, people start to take measures to avoid infection.
So we expect that on a logarithmic plot of the cumulative number of cases or deaths over time, the curve will initially be a straight line, but later start to level off, approaching a horizontal line when there are no more new cases or deaths (assuming the disease is ultimately eliminated). And that’s what we mostly see in the data, except that we haven’t achieved a horizontal line yet.
On a linear plot of cases or deaths over time, we expect an exponentially rising curve, which also levels off eventually, ultimately becoming a horizontal line when there are no more cases or deaths. But that’s not what we see in much of the data.
Instead, for many countries, the linear plots of total cases or total deaths go up exponentially at first, and then approach a straight line that is not horizontal. What’s going on?
Before trying to answer this question, I’ll first illustrate the issue with some plots taken from https://www.worldometers.info/coronavirus/.
Here’s the linear plot of cases in the UK:
We see how the initial exponential rise changes into a linear rise from about April 3. We can also look at the daily number of cases:
From around April 3, the initial exponential rise in new cases changes to an approximately constant number of new cases each day.
It’s the same for deaths, with a small time lag:
Browsing around worldometers shows that the plots for many (though not all) countries are similar. For instance, Canada:
And the United States:
It’s not too hard to come up with a reason for the number of cases to rise linearly. One need only assume that the country has a limited, and fixed, testing capacity. Once the number of probable cases reaches this capacity, the number of confirmed cases can’t rise any faster than they can be tested, so even if the true number of cases is still rising exponentially, the number of reported cases rises only linearly. Now, one might instead think that testing capacity is being increased, or that cases are being reported as COVID-19 based on symptoms alone, without a confirming test, so this isn’t a completely convincing reason for linearity. But considering that the relationship of number of reported cases to number of actual cases may be rather distant, it’s maybe not too interesting to think about this further.
A linear rise in the number of deaths seems harder to explain.
The analogue of limited testing capacity would be limited hospital capacity. But for that to produce a linear rise in reported deaths, we’d need to assume not only that hospital capacity has been exceeded (which seems not to be true in most places) but also that only the deaths from COVID-19 that occur in hospitals are reported.
It’s possible to imagine situations where the true number of deaths rises exponentially at first, but then linearly. For example, people could live along a river. Infections start among people at the river’s mouth, and expand exponentially amongst them, until most of them are infected. It also spreads up the river, but only by local contagion, so the number of deaths (and cases) grows linearly according to how far up-river it has spread. This scenario, however, seems nothing like what we would expect in almost all countries.
Most countries have taken various measures to slow the spread of COVID-19. We might expect that in some countries, these measures are insufficient, and that the growth in total deaths (and daily deaths) is still exponential, just with a longer doubling time. We might expect that in some other countries, the measures are quite effective, so that the number of new deaths is now declining exponentially, with the plot of total deaths levelling off to a horizontal line. To get a linear growth in number of deaths, the measures taken would need to be just effective enough, but no more, that they lead to a constant number of deaths per day, neither growing or shrinking. Considering that various disparate measures have been taken, it seems like an unlikely coincidence for their net effect to just happen to be a growth rate of zero.
What’s left? Well, it could be that the growth in total deaths is not really linear, but is instead now growing exponentially with a very long doubling time, or is instead levelling off, but very slowly.
Certainly there are some countries where total deaths seem to be levelling off, such as Spain:
Maybe if we only looked at these plots up until about April 7, we’d see the growth in total deaths as being nearly linear (after about March 26), and the number of daily deaths as being almost constant.
But the number of countries where growth in total deaths is almost linear seems more than I’d expect. Could these countries have very precisely calibrated the magnitude of their infection control measures to keep the number of new cases constant, thereby avoiding overwhelming their health care systems at minimal social and economic cost? This seems unlikely to me. I’m puzzled.
Entry filed under: COVID-19, Science, Statistics, Statistics - Nontechnical.
• 1. Aaron Galloway | 2020-04-24 at 8:19 am
Dear Dr. Neal,
I’m curious if the apparent linear rise of cases (actually confirmed cases as per testing) may be an artifact of both the lag time from the moment individuals are actually infected to where they may be exhibiting symptoms or are tested, given an incubation estimated to be about 5-14 days.
If so, and given the inconsistent timing of government interventions across the globe, we might expect to see a “bending” of the rise in cases into a more horizontal line in, say, around May.
• 2. Radford Neal | 2020-04-24 at 9:02 am
Yes, it could be that the curve is bending down from exponential growth to horizontal, but slowly enough that for a while it seems to be going up linearly. This might be more likely given the variable incubation time (and time to death when fatal), which would have the effect of smoothing out the impact of a sudden “lockdown”.
But if you look right now at the world totals for cases and deaths at https://www.worldometers.info/coronavirus/ it’s really hard to think that the strikingly linear growth since about March 30 can be explained so simply. Perhaps the staggered timing of interventions that you mention could explain this, if they just by coincidence have the combined effect of producing a linear curve (for a while), but this seems a bit unlikely.
• 3. Radford Neal | 2020-04-24 at 9:07 pm
There’s interesting discussion of this post at https://www.lesswrong.com/posts/QTXvG3MxrZqafT4ir/the-puzzling-linearity-of-covid-19
• 4. Ken | 2020-04-26 at 9:41 pm
One difficulty that some counties may be having is pushing their transmission rate much below one, so they tend to have fairly flat case rates, at the peak. The death rates will then be delayed and spread out looking even flatter. The UK and USA look like this.
Another interesting feature of the data is that often case rate curves will rise very rapidly early, as a country realises that they have an epidemic and will put resources into finding cases, so will find existing as well as new cases.
• 5. Ken | 2020-04-28 at 6:15 am
Saw today that Germany had their transmission rate down to 0.7 but then relaxed their restrictions and it is now 1.0. It does seem that it is quite difficult to get it below 1.0. In Australia we are seeing that people make their own decision on what they think is appropriate levels of restrictions, as the rate of new cases declines to low levels. The good news is that if we go to a transmission rate of 1 then it won’t be a problem, as we are at under 20 cases per day.
• 6. Radford Neal | 2020-04-28 at 9:34 am
Peter McCluskey makes an interesting comment on this post at lesswrong pointing out that after restrictions start, there will still be transmission within households for a while. Since most households have only two adults (and children will probably not be noted as having covid-19), this would for a while lead to R seeming to be 1. So we can hope that that’s why getting it below 1 seems to be difficult at the moment.
• 7. Yves Moreau | 2020-05-27 at 9:22 pm
I have also been puzzled that the effective reproduction number has seemed to be close to 1 after measures were taken. I have not assumed linearity (R=1), but simply close to it. In particular, one might have expected that the drastic measures taken should have brought R much lower. R is usually qualitatively described as C x P x D (number of contacts per day x probability that contact is infectious x number of days that a patient is infectious). Given the multiple interventions tackling each of these factors, it is surprising that R went from 2 to 4 pre-lockdown only to 0.7 to 1 post-lockdown. At least one factor that may explain this is that by confining people in their homes they will infect family members more effectively than pre-lockdown. The idea of the lockdown is that intrafamilial contamination should stop at the family, so that after lockdown we should only see a fixed multiplicative factor but no exponential growth. This may have happened in Wuhan because of the severity of the measures taken (e.g., only one person allowed to leave the home for shopping every other day). But in other countries, some people still went to work, some people still went out despite the lockdown. If extrafamilial contaminations go down but intrafamilial contaminations go up, there will be a “buffering effect” that counterbalances the decrease in R. Such an effect might partly explain why it has been hard to bring R closer to zero.
• 8. ecoquant | 2020-05-27 at 10:12 pm
@Yves Moreau,
Some of your puzzlement may be explained by the attendant simplified model of transmission. Reproduction number is essentially a Poisson lambda mean. In fact, disease transmission is characterized by that parameter, and a variance, variously called a concentration or a dispersion parameter, and the distribution is the Negative Binomial. So it can be overdispersed.
See:
J. O. Lloyd-Smith, S. J. Schreiber, P. E. Kopp & W. M. Getz, “Superspreading and the effect of individual variation on disease emergence“, Vol 438|17 November 2005|doi:10.1038/nature04153
• 9. Radford Neal | 2020-07-01 at 9:40 pm
A comment by “SoerenMind” on the lesswrong link to this post (https://www.lesswrong.com/posts/QTXvG3MxrZqafT4ir/the-puzzling-linearity-of-covid-19) points to the following highly relevant paper:
https://www.medrxiv.org/content/10.1101/2020.05.22.20110403v1
• 10. ecoquant | 2020-07-01 at 10:26 pm
I don’t understand the continuing emphasis upon counts of positive cases. As indicated, there are many problems with believing these to be actual measures of infection prevalence. Ideally, we’d like to do something like random survey sampling to estimate prevalence in population. Without the investment in that, there are proxies which, while much cruder, could indicate, such as the proportion infected of the number of tests conducted. Then, there are fused estimates which consider linear combinations of such proportions and proportions of antibody tests showing positives and deaths. There’s a suite of network sampling techniques like NSUM which, to my knowledge, have yet to be applied to ascertain covert populations of infected people.
It’s not like the literature hasn’t discussed these:
Russell Timothy W , Hellewell Joel , Jarvis Christopher I , van Zandvoort Kevin , Abbott Sam , Ratnayake Ruwan , CMMID COVID-19 working group , Flasche Stefan, Eggo Rosalind M , Edmunds W John , Kucharski Adam J . “Estimating the infection and case fatality ratio for coronavirus disease (COVID-19) using age-adjusted
data from the outbreak on the Diamond Princess cruise ship”, February 2020. Euro Surveill. 2020;25(12):pii=2000256. https://doi.org/10.2807/1560-7917. ES.2020.25.12.2000256.
T. Jombart, et al, “Inferring the number of COVID-19 cases from recently reported deaths” [version 1; peer review: 2 approved], Wellcome Open Research 2020, 5:78 Last updated: 26 MAY 2020.
T. W. Russell, et al, “Using a delay-adjusted case fatality
Given that infections are far from Poisson events, being overdispersed because of the superspreader phenomenon, it seems a good deal more investigation of those long tails would be warranted. After all, the nice thing about Poisson statistics is that they imply a certain stability and predictability in outcome. Forcing a Poisson model on top of an actually Negative Binomial model with a big variance means the $\lambda$ of the Poisson is going to be exaggerated. Sure, it looks like the Poisson is exaggerating. But, in fact, there’s bigger latent risk: Can’t know how the big tail events are going to behave.
Indeed, if there’s anything specific to be criticized about Imperial College is that they did not acknowledge this feature of epidemics in their analysis. The superspreader phenomenon has been known for a while, since 2000 at least. See
J. O. Lloyd-Smith, S. J. Schreiber, P. E. Kopp & W. M. Getz, “Superspreading and the effect of individual variation on disease emergence”, Nature,438(17), November 2005, doi:10.1038/nature04153.
And see its references.
April 2020
M T W T F S S
12345
6789101112
13141516171819
20212223242526
27282930
|
2020-07-13 10:07:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6063061952590942, "perplexity": 1387.6186841037877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657143365.88/warc/CC-MAIN-20200713100145-20200713130145-00444.warc.gz"}
|
http://theinfolist.com/php/HTMLGet.php?FindGo=Adiabatic
|
HOME TheInfoList
picture info Adiabatic In thermodynamics, an adiabatic process is one that occurs without transfer of heat or matter between a thermodynamic system and its surroundings. In an adiabatic process, energy is transferred to its surroundings only as work. The adiabatic process provides a rigorous conceptual basis for the theory used to expound the first law of thermodynamics, and as such it is a key concept in thermodynamics. Some chemical and physical processes occur so rapidly that they may be conveniently described by the term "adiabatic approximation", meaning that there is not enough time for the transfer of energy as heat to take place to or from the system. By way of example, the adiabatic flame temperature is an idealization that uses the "adiabatic approximation" so as to provide an upper limit calculation of temperatures produced by combustion of a fuel [...More Info...] [...Related Items...] picture info Chemical Potential In thermodynamics, chemical potential of a species, is a form of energy that can be absorbed or released during a chemical reaction or phase transition due to a change of the particle number of the given species. The chemical potential of a species in a mixture is defined as the rate of change of a free energy of a thermodynamic system with respect to the change in the number of atoms or molecules of the species that are added to the system. Thus, it is the partial derivative of the free energy with respect to the amount of the species, all other species' concentrations in the mixture remaining constant. The molar chemical potential is also known as partial molar free energy. When both temperature and pressure are held constant, chemical potential is the partial molar Gibbs free energy [...More Info...] [...Related Items...] picture info Real Gas Real gases are non-hypothetical gases whose molecules occupy space and have interactions; consequently, they adhere to gas laws. To understand the behaviour of real gases, the following must be taken into account: For most applications, such a detailed analysis is unnecessary, and the ideal gas approximation can be used with reasonable accuracy. On the other hand, real-gas models have to be used near the condensation point of gases, near critical points, at very high pressures, to explain the Joule–Thomson effect and in other less usual cases [...More Info...] [...Related Items...] picture info Carnot Heat Engine A Carnot heat engine is a theoretical engine that operates on the reversible Carnot cycle. The basic model for this engine was developed by Nicolas Léonard Sadi Carnot in 1824. The Carnot engine model was graphically expanded upon by Benoît Paul Émile Clapeyron in 1834 and mathematically explored by Rudolf Clausius in 1857 from which the concept of entropy emerged. Every thermodynamic system exists in a particular state. A thermodynamic cycle occurs when a system is taken through a series of different states, and finally returned to its initial state. In the process of going through this cycle, the system may perform work on its surroundings, thereby acting as a heat engine. A heat engine acts by transferring energy from a warm region to a cool region of space and, in the process, converting some of that energy to mechanical work. The cycle may also be reversed [...More Info...] [...Related Items...] picture info State Of Matter In physics, a state of matter is one of the distinct forms in which matter can exist. Four states of matter are observable in everyday life: solid, liquid, gas, and plasma. Many other states are known to exist, such as glass or liquid crystal, and some only exist under extreme conditions, such as Bose–Einstein condensates, neutron-degenerate matter, and quark-gluon plasma, which only occur, respectively, in situations of extreme cold, extreme density, and extremely high-energy. Some other states are believed to be possible but remain theoretical for now. For a complete list of all exotic states of matter, see the list of states of matter. Historically, the distinction is made based on qualitative differences in properties. Matter in the solid state maintains a fixed volume and shape, with component particles (atoms, molecules or ions) close together and fixed into place [...More Info...] [...Related Items...] picture info Chemical Thermodynamics Chemical thermodynamics is the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics. Chemical thermodynamics involves not only laboratory measurements of various thermodynamic properties, but also the application of mathematical methods to the study of chemical questions and the spontaneity of processes. The structure of chemical thermodynamics is based on the first two laws of thermodynamics. Starting from the first and second laws of thermodynamics, four equations called the "fundamental equations of Gibbs" can be derived. From these four, a multitude of equations, relating the thermodynamic properties of the thermodynamic system can be derived using relatively simple mathematics [...More Info...] [...Related Items...] picture info Statistical Mechanics Statistical mechanics is a branch of theoretical physics that uses probability theory to study the average behaviour of a mechanical system whose exact state is uncertain. Statistical mechanics is commonly used to explain the thermodynamic behaviour of large systems. This branch of statistical mechanics, which treats and extends classical thermodynamics, is known as statistical thermodynamics or equilibrium statistical mechanics. Microscopic mechanical laws do not contain concepts such as temperature, heat, or entropy; however, statistical mechanics shows how these concepts arise from the natural uncertainty about the state of a system when that system is prepared in practice [...More Info...] [...Related Items...] picture info Heat In thermodynamics, heat is energy in transfer to or from a thermodynamic system, by mechanisms other than thermodynamic work or transfer of matter. The mechanisms include conduction, through direct contact of immobile bodies, or through a wall or barrier that is impermeable to matter; or radiation between separated bodies; or friction due to isochoric mechanical or electrical or magnetic or gravitational work done by the surroundings on the system of interest, such as Joule heating due to an electric current driven through the system of interest by an external system, or through a magnetic stirrer. When there is a suitable path between two systems with different temperatures, heat transfer occurs necessarily, immediately, and spontaneously from the hotter to the colder system [...More Info...] [...Related Items...] picture info Control Volume In continuum mechanics and thermodynamics, a control volume is a mathematical abstraction employed in the process of creating mathematical models of physical processes. In an inertial frame of reference, it is a volume fixed in space or moving with constant flow velocity through which the continuum (gas, liquid or solid) flows. The surface enclosing the control volume is referred to as the control surface. At steady state, a control volume can be thought of as an arbitrary volume in which the mass of the continuum remains constant. As a continuum moves through the control volume, the mass entering the control volume is equal to the mass leaving the control volume. At steady state, and in the absence of work and heat transfer, the energy within the control volume remains constant [...More Info...] [...Related Items...] picture info Thermal Efficiency In thermodynamics, the thermal efficiency (${\displaystyle \eta _{th}\,}$) is a dimensionless performance measure of a device that uses thermal energy, such as an internal combustion engine, a steam turbine or a steam engine, a boiler, furnace, or a refrigerator for example. For a heat engine, thermal efficiency is the fraction of the energy added by heat (primary energy) that is converted to net work output (secondary energy) [...More Info...] [...Related Items...] picture info Conjugate Variables (thermodynamics) In thermodynamics, the internal energy of a system is expressed in terms of pairs of conjugate variables such as temperature and entropy or pressure and volume. In fact, all thermodynamic potentials are expressed in terms of conjugate pairs. The product of two quantities that are conjugate has units of energy or sometimes power. For a mechanical system, a small increment of energy is the product of a force times a small displacement. A similar situation exists in thermodynamics. An increment in the energy of a thermodynamic system can be expressed as the sum of the products of certain generalized "forces" that, when unbalanced, cause certain generalized "displacements", and the product of the two is the energy transferred as a result. These forces and their associated displacements are called conjugate variables. The thermodynamic force is always an intensive variable and the displacement is always an extensive variable, yielding an extensive energy transfer [...More Info...] [...Related Items...] picture info Intensive And Extensive Properties Physical properties of materials and systems can often be categorized as being either intensive or extensive quantities, according to how the property changes when the size (or extent) of the system changes. According to IUPAC, an intensive property is one whose magnitude is independent of the size of the system. An extensive property is one whose magnitude is additive for subsystems. An intensive property is a bulk property, meaning that it is a physical property of a system that does not depend on the system size or the amount of material in the system. Examples of intensive properties include temperature, T, refractive index, n, density, ρ, and hardness of an object, η (IUPAC symbols are used throughout this article). When a diamond is cut, the pieces maintain their intrinsic hardness (until the sample reduces to a few atoms thick), so hardness is independent of the size of the system, for larger samples.
|
2020-04-09 07:52:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7269647121429443, "perplexity": 466.30608294354647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371830894.88/warc/CC-MAIN-20200409055849-20200409090349-00345.warc.gz"}
|
https://web2.0calc.com/questions/what-must-be-added-to-9x-12x-3-to-make-it-a-whole-square
|
+0
# WHAT MUST BE ADDED TO 9X^-12X+3 TO MAKE IT A WHOLE SQUARE?
0
101
1
WHAT MUST BE ADDED TO 9X^-12X+3 TO MAKE IT A WHOLE SQUARE?
Sep 28, 2020
#1
+111981
+2
I will do a similar one for you.
$$4x^2+18x-1$$
$$4x^2+18x-1\\ =4(x^2+\frac{18}{4}x-\frac{1}{4})\\ =4(x^2+\frac{9}{2}x-\frac{1}{4})\\ =4(x^2+\frac{9}{2}x+\left[(\frac{9}{4})^2+\frac{1}{4}\right]-\frac{1}{4})\\ =2^2(x+\frac{9}{4})^2\\ =\left[2(x+\frac{9}{4})\right]^2\\~\\ \text{So to do this I had to add}\\ 4\left[(\frac{9}{4})^2+\frac{1}{4}\right]=4*\frac{81+2}{8}=\frac{81+2}{2}=41.5$$
I could have made a stupid error so if you think something is wrong then let me know.
yours can be done the same way.
LaTex:
4x^2+18x-1\\
=4(x^2+\frac{18}{4}x-\frac{1}{4})\\
=4(x^2+\frac{9}{2}x-\frac{1}{4})\\
=4(x^2+\frac{9}{2}x+\left[(\frac{9}{4})^2+\frac{1}{4}\right]-\frac{1}{4})\\
=2^2(x+\frac{9}{4})^2\\
=\left[2(x+\frac{9}{4})\right]^2\\~\\
|
2021-01-16 12:16:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9561015963554382, "perplexity": 1093.2114983517226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506640.22/warc/CC-MAIN-20210116104719-20210116134719-00709.warc.gz"}
|
https://eprint.iacr.org/2021/919
|
## Cryptology ePrint Archive: Report 2021/919
The supersingular isogeny path and endomorphism ring problems are equivalent
Benjamin Wesolowski
Abstract: We prove that the path-finding problem in $\ell$-isogeny graphs and the endomorphism ring problem for supersingular elliptic curves are equivalent under reductions of polynomial expected time, assuming the generalised Riemann hypothesis. The presumed hardness of these problems is foundational for isogeny-based cryptography. As an essential tool, we develop a rigorous algorithm for the quaternion analog of the path-finding problem, building upon the heuristic method of Kohel, Lauter, Petit and Tignol. This problem, and its (previously heuristic) resolution, are both a powerful cryptanalytic tool and a building-block for cryptosystems.
Category / Keywords: public-key cryptography / isogeny-based cryptography, cryptanalysis, endomorphism ring, isogeny path
Original Publication (with minor differences): FOCS 2021
Date: received 7 Jul 2021, last revised 10 Sep 2021
Contact author: benjamin wesolowski at math u-bordeaux fr
Available format(s): PDF | BibTeX Citation
Short URL: ia.cr/2021/919
[ Cryptology ePrint archive ]
|
2022-01-19 22:58:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8877095580101013, "perplexity": 12057.42551884844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301592.29/warc/CC-MAIN-20220119215632-20220120005632-00555.warc.gz"}
|
http://www.fact-archive.com/encyclopedia/Bayesian_statistics
|
## Online Encylopedia and Dictionary Research Site
Online Encyclopedia Search Online Encyclopedia Browse
# Bayesian inference
(Redirected from Bayesian statistics)
Bayesian inference is statistical inference in which probabilities are interpreted not as frequencies or proportions or the like, but rather as degrees of belief. The name comes from the frequent use of the Bayes' theorem in this discipline.
Bayes' theorem is named after the Reverend Thomas Bayes. However, it is not clear that Bayes would endorse the very broad interpretation of probability now called "Bayesian". This topic is treated at greater length in the article Thomas Bayes.
Contents
## Evidence and the scientific method
Bayesian statisticians claim that methods of Bayesian inference are a formalisation of the scientific method involving collecting evidence which points towards or away from a given hypothesis. There can never be certainty, but as evidence accumulates, the degree of belief in a hypothesis changes; with enough evidence it will often become very high (almost 1) or very low (near 0).
Bayes theorem provides a method for adjusting degrees of belief in the light of new information. Bayes' theorem is
$P(H_0|E) = \frac{P(E|H_0)\;P(H_0)}{P(E)}$
For our purposes, H0 can be taken to be a hypothesis which may have been developed ab initio or induced from some preceding set of observations, but before the new observation or evidence E.
The scaling factor P(E | H0) / P(E) gives a measure of the impact that the observation has on belief in the hypothesis. If it is unlikely that the observation will be made unless the particular hypothesis being considered is true, then this scaling factor will be large. Multiplying this scaling factor by the prior probability of the hypothesis being correct gives a measure of the posterior probability of the hypothesis being correct given the observation.
The keys to making the inference work is the assigning of the prior probabilities given to the hypothesis and possible alternatives, and the calculation of the conditional probabilities of the observation under different hypotheses.
Some Bayesian statisticians believe that if the prior probabilities can be given some objective value, then the theorem can be used to provide an objective measure of the probability of the hypothesis. But to others there is no clear way in which to assign objective probabilities. Indeed, doing so appears to require one to assign probabilities to all possible hypotheses.
Alternately, and more often, the probabilities can be taken as a measure of the subjective degree of belief on the part of the participant, and to restrict the potential hypotheses to a constrained set within a model. The theorem then provides a rational measure of the degree to which some observation should alter the subject's belief in the hypothesis. But in this case the resulting posterior probability remains subjective. So the theorem can be used to rationally justify belief in some hypothesis, but at the expense of rejecting objectivism.
It is unlikely that two individuals will start with the same subjective degree of belief. Supporters of Bayesian method argue that even with very different assignments of prior probabilities sufficient observations are likely to bring their posterior probabilities closer together. This assumes that they do not completely reject each other's initial hypotheses; and that they assign similar conditional probabilities. Thus Bayesian methods are useful only in situations in which there is already a high level of subjective agreement.
In many cases, the impact of observations as evidence can be summarised in a likelihood ratio, as expressed in the law of likelihood. This can be combined with the prior probability to reflect the original degree of belief and any earlier evidence already taken into account. For example, if we have the likelihood ratio
$\Lambda = \frac{L(H_0\mid E)}{L(\mbox{not } H_0|E)} = \frac{P(E \mid H_0)}{P(E \mid \mbox{not } H_0)}$
then we can rewrite Bayes' theorem as
$P(H_0|E) = \frac{\Lambda P(H_0)}{\Lambda P(H_0) + P(\mbox{not } H_0)} = \frac{P(H_0)}{P(H_0) +\left(1-P(H_0)\right)/\Lambda }.$
With two independent pieces of evidence E1 and E2, one possible approach is to move from the prior to the posterior probability on the first evidence and then use that posterior as a new prior and produce a second posterior with the second piece of evidence; an arithmetically equivalent alternative is to multiply the likelihood ratios. So
if $P(E_1, E_2 | H_0) = P(E_1 | H_0) \times P(E_2 | H_0)$
and $P(E_1, E_2 | \mbox{not }H_0) = P(E_1 | \mbox{not }H_0) \times P(E_2 | \mbox{not }H_0)$
then $P(H_0|E_1, E_2) = \frac{\Lambda_1 \Lambda_2 P(H_0)}{\Lambda_1 \Lambda_2 P(H_0) + P(\mbox{not } H_0)}$,
and this can be extended to more pieces of evidence.
Before a decision is made, the loss function also needs to be considered to reflect the consequences of making an erroneous decision.
## Simple examples of Bayesian inference
### From which bowl is the cookie?
To illustrate, suppose there are two bowls full of cookies. Bowl #1 has 10 chocolate chip and 30 plain cookies, while bowl #2 has 20 of each. Our friend Fred picks a bowl at random, and then picks a cookie at random. We may assume there is no reason to believe Fred treats one bowl differently from another, likewise for the cookies. The cookie turns out to be a plain one. How probable is it that Fred picked it out of bowl #1?
Intuitively, it seems clear that the answer should be more than a half, since there are more plain cookies in bowl #1. The precise answer is given by Bayes' theorem. Let H1 corresponds to bowl #1, and H2 to bowl #2. It is given that the bowls are identical from Fred's point of view, thus P(H1) = P(H2), and the two must add up to 1, so both are equal to 0.5. The datum D is the observation of a plain cookie. From the contents of the bowls, we know that P(D | H1) = 30/40 = 0.75 and P(D | H2) = 20/40 = 0.5. Bayes' formula then yields
$\begin{matrix} P(H_1 | D) &=& \frac{P(H_1) \cdot P(D | H_1)}{P(H_1) \cdot P(D | H_1) + P(H_2) \cdot P(D | H_2)} \\ \\ \ & =& \frac{0.5 \times 0.75}{0.5 \times 0.75 + 0.5 \times 0.5} \\ \\ \ & =& 0.6 \end{matrix}$
Before observing the cookie, the probability that Fred chose bowl #1 is the prior probability, P(H1), which is 0.5. After observing the cookie, we revise the probability to P(H1|D), which is 0.6.
### False positives in a medical test
False positives are a problem in any kind of test: no test is perfect, and sometimes the test will incorrectly report a positive result. For example, if a test for a particular disease is performed on a patient, then there is a chance (usually small) that the test will return a positive result even if the patient does not have the disease. The problem lies, however, not just in the chance of a false positive prior to testing, but determining the chance that a positive result is in fact a false positive. As we will demonstrate, using Bayes' theorem, if a condition is rare, then the majority of positive results may be false positives, even if the test for that condition is (otherwise) reasonably accurate.
Suppose that a test for a particular disease has a very high success rate:
• if a tested patient has the disease, the test accurately reports this, a 'positive', 99% of the time (or, with probability 0.99), and
• if a tested patient does not have the disease, the test accurately reports that, a 'negative', 95% of the time (i.e. with probability 0.95).
Suppose also, however, that only 0.1% of the population have that disease (i.e. with probability 0.001). We now have all the information required to use Bayes' theorem to calculate the probability that, given the test was positive, that it is a false positive.
Let A be the event that the patient has the disease, and B be the event that the test returns a positive result. Then, using the second alternative form of Bayes' theorem (above), the probability of a true positive is
$\begin{matrix}P(A|B) &= &\frac{0.99 \times 0.001}{0.99\times 0.001 + 0.05\times 0.999}\, ,\\ ~\\ &\approx &0.019\, .\end{matrix}$
and hence the probability of a false positive is about (1 − 0.019) = 0.981.
Despite the apparent high accuracy of the test, the incidence of the disease is so low (one in a thousand) that the vast majority of patients who test positive (98 in a hundred) do not have the disease. (Nonetheless, the proportion of patients who tested positive who do have the disease is 20 times the proportion before we knew the outcome of the test! Thus the test is not useless, and re-testing may improve the reliability of the result.) In particular, a test must be very reliable in reporting a negative result when the patient does not have the disease, if it is to avoid the problem of false positives. In mathematical terms, this would ensure that the second term in the denominator of the above calculation is small, relative to the first term. For example, if the test reported a negative result in patients without the disease with probability 0.999, then using this value in the calculation yields a probability of a false positive of roughly 0.5.
In this example, Bayes' theorem helps show that the accuracy of tests for rare conditions must be very high in order to produce reliable results from a single test, due to the possibility of false positives. (The probability of a 'false negative' could also be calculated using Bayes' theorem, to completely characterise the possible errors in the test results.)
### In the courtroom
Bayesian inference can be used to coherently assess additional evidence of guilt in a court setting.
• Let G be the event that the defendant is guilty.
• Let E be the event that the defendant's DNA matches DNA found at the crime scene.
• Let p(E | G) be the probability of seeing event E assuming that the defendant is guilty. (Usually this would be taken to be unity.)
• Let p(G | E) be the probability that the defendant is guilty assuming the DNA match event E
• Let p(G) be the probability that the defendant is guilty, based on the evidence other than the DNA match.
Bayesian inference tells us that if we can assign a probability p(G) to the defendant's guilt before we take the DNA evidence into account, then we can revise this probability to the conditional probability p(G | E), since
p(G | E) = p(G) p(E | G) / p(E)
Suppose, on the basis of other evidence, a juror decides that there is a 30% chance that the defendant is guilty. Suppose also that the forensic evidence is that the probability that a person chosen at random would have DNA that matched that at the crime scene was 1 in a million, or 10-6.
The event E can occur in two ways. Either the defendant is guilty (with prior probability 0.3) and thus his DNA is present with probability 1, or he is innocent (with prior probability 0.7) and he is unlucky enough to be one of the 1 in a million matching people.
Thus the juror could coherently revise his opinion to take into account the DNA evidence as follows:
p(G | E) = (0.3 × 1.0) /(0.3 × 1.0 + 0.7 × 10-6) = 0.99999766667.
In the United Kingdom, Bayes' theorem was explained by an expert witness to the jury in the case of Regina versus Dennis John Adams. The case went to Appeal and the Court of Appeal gave their opinion that the use of Bayes' theorem was inappropriate for jurors.
### Search theory
In May 1968 the US nuclear submarine USS_Scorpion_(SSN-589) failed to arrive as expected at her home port of Norfolk,_Virginia. The US Navy was convinced that the vessel had been lost off the Eastern seaboard but an extensive search failed to discover the wreck. The US Navy's deep water expert, John Craven, believed that it was elsewhere and he organised a search south west of the Azores based on a controversial approximate triangulation by hydrophones. He was allocated only a single ship, the USNS Mizar, and he took advice from a firm of consultant mathematicians in order to maximise his resources. A Bayesian search methodology was adopted. Experienced submarine commanders were interviewed to construct hypotheses about what could have caused the loss of the Scorpion. The sea area was divided up into grid squares and a probability assigned to each square, under each of the hypotheses, to give a number of probability grids, one for each hypothesis. These were then added together to produce an overall probability grid. The probability attached to each square was then the probability that the wreck was in that square. A second grid was constructed with probabilities that represented the probability of successfully finding the wreck if that square were to be searched and the wreck were to be actually there. This was a known function of water depth. The result of combining this grid with the previous grid is a grid which gives the probability of finding the wreck in each grid square of the sea if it were to be searched. This sea grid was systematically searched in a manner which started with the high probability regions first and worked down to the low probability regions last. Each time a grid square was searched and found to be empty its probability was reassessed using Bayes' theorem. This then forced the probabilities of all the other grid squares to be reassessed (upwards), also by Bayes' theorem. The use of this approach was a major computational challenge for the time but it was eventually successful and the Scorpion was found in October of that year. Suppose a grid square has a probability p of containing the wreck and that the probability of successfully detecting the wreck if it is there is q. If the square is searched and no wreck is found then, by Bayes, the revised probability of the wreck being in the square is given by
$p' = \frac{p(1-q)}{(1-p)+p(1-q)}$
## More mathematical examples
### Posterior distribution of the binomial parameter
In this example we consider the computation of the posterior distribution for the binomial parameter. This is the same problem considered by Bayes in Proposition 9 of his essay.
We are given m observed successes and n observed failures in a binomial experiment. The experiment may be tossing a coin, drawing a ball from an urn, or asking someone their opinion, among many other possibilities. What we know about the parameter (let's call it a) is stated as the prior distribution, p(a).
For a given value of a, the probability of m successes in m+n trials is
$p(m,n|a) = \begin{pmatrix} n+m \\ m \end{pmatrix} a^m (1-a)^n$
Since m and n are fixed, and a is unknown, this is a likelihood function for a. From the continuous form of the law of total probability we have
$p(a|m,n) = \frac{p(m,n|a)\,p(a)}{\int_0^1 p(m,n|a)\,p(a)\,da} = \frac{\begin{pmatrix} n+m \\ m \end{pmatrix} a^m (1-a)^n\,p(a)} {\int_0^1 \begin{pmatrix} n+m \\ m \end{pmatrix} a^m (1-a)^n\,p(a)\,da}$
For some special choices of the prior distribution p(a), the integral can be solved and the posterior takes a convenient form. In particular, if p(a) is a beta distribution with parameters m0 and n0, then the posterior is also a beta distribution with parameters m+m0 and n+n0.
A conjugate prior is a prior distribution, such as the beta distribution in the above example, which has the property that the posterior is the same type of distribution.
What is "Bayesian" about Proposition 9 is that Bayes presented it as a probability for the parameter a. That is, not only can one compute probabilities for experimental outcomes, but also for the parameter which governs them, and the same algebra is used to make inferences of either kind. Interestingly, Bayes actually states his question in a way that might make the idea of assigning a probability distribution to a parameter palatable to a frequentist. He supposes that a billiard ball is thrown at random onto a billiard table, and that the probabilities p and q are the probabilities that subsequent billiard balls will fall above or below the first ball. By making the binomial parameter a depend on a random event, he cleverly escapes a philosophical quagmire that was an issue he most likely was not even aware of.
### Computer applications
Bayesian inference has applications in artificial intelligence and expert systems. Bayesian inference techniques have been a fundamental part of computerized pattern recognition techniques since the late 1950s. There is also an ever growing connection between Bayesian methods and simulation Monte Carlo techniques since complex models cannot be processed in closed form by a Bayesian analysis, while the graphical model structure inherent to all statistical models, even the most complex ones, allows for efficient simulation algorithms like the Gibbs sampling and other Metropolis-Hastings algorithm schemes.
As a particular application of statistical classification, Bayesian inference has been used in recent years to develop algorithms for identifying unsolicited bulk e-mail (spam). Applications which make use of Bayesian inference for spam filtering include Bogofilter, SpamAssassin and Mozilla. Spam classification is treated in more detail in the article on naive Bayesian classification.
In some applications fuzzy logic is an alternative to Bayesian inference. Fuzzy logic and Bayesian inference, however, are mathematically and semantically not compatible: You cannot, in general, understand the degree of truth in fuzzy logic as probability and vice versa.
## References
Berger, J.O. (1985) Bayesian Statistics. Springer Verlag, New York.
O'Hagan, A. and Forster, J. (2003) Bayesian Inference. In Kendall's Advanced Theory of Statistics, Arnold, New York.
Robert, C.P. (2001) The Bayesian Choice. Springer Verlag, New York.
|
2020-02-22 01:00:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8315374255180359, "perplexity": 558.740338621627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145621.28/warc/CC-MAIN-20200221233354-20200222023354-00392.warc.gz"}
|
https://math.meta.stackexchange.com/questions/10683/question-lists-build-on-favourite-tag-list?noredirect=1
|
# Question lists build on favourite tag list [duplicate]
What I would like is to have lists of questions that only lists the questions that have one or more of my favourite tags attached, so that I can have a quick view of newest or featured questions in any of my favourite fields.
Copy/paste/edited from Jeff Atwood's answer over on meta.stackoverflow.com:
There is a default tag filter on http://stackexchange.com/ but you must be logged in:
1. Click on "Filtered Questions"
2. Click on "Favourite Tags" filter.
• Can robjohn's bookmarklet be used to render math on that page? I never tried because I actually prefer non-rendered previews, but someone may be interested. – user Aug 18 '13 at 15:34
• Thanks for that, but it is a bit complicated, can it not be in some way be integrated in the mathematics site? also you cannot select the featured questions this way (but that is minor), btw i don;t want the filter to be active all the time just when i start up. – Willemien Aug 18 '13 at 15:42
You can create and bookmark a URL of the form
https://math.stackexchange.com/questions/tagged/tag1+or+tag2+or+tag3+or+tag4
You can use the featured and other sub-tabs to customize the view further.
If you have only a few favorite tags, the above is equivalent to searching for
[tag1] or [tag2] or [tag3] or [tag4]
If you have many favorite tags, they may not fit into the search box (which takes only 240 characters), but you can use a text editor to create the URL as above.
|
2021-05-15 18:02:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2848718464374542, "perplexity": 2140.959394132964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990551.51/warc/CC-MAIN-20210515161657-20210515191657-00456.warc.gz"}
|
https://www.physicsforums.com/threads/basic-calculation-problem-with-commutators.327102/
|
# Basic calculation problem with commutators
1. Jul 26, 2009
### Unkraut
1. The problem statement, all variables and given/known data
A is a Hermitian operator which commutes with the Hamiltonian: $\left[A,H\right]=AH-HA=0$
To be shown: $\frac{d}{dt}A=0$
2. Relevant equations
Schrödinger equation: $i\hbar\frac{\partial}{\partial t}\psi=H\psi$ with the Hamilton operator H.
3. The attempt at a solution
I have seen this solution on many sites:
$\frac{d}{dt}<A>=\frac{d}{dt}<\psi|A|\psi>=<\psi|\frac{\partial A}{\partial t}|\psi>+<\frac{d\psi}{dt}|A|\psi>+<\psi|A|\frac{d\psi}{dt}>=<\frac{\partial A}{\partial t}>+\frac{1}{i\hbar}<\left[ A, H\right] >=0$
I have a problem with this: $<\frac{d\psi}{dt}|A|\psi>+<\psi|A|\frac{d\psi}{dt}>=\frac{1}{i\hbar}<\left[ A, H\right] >$
Okay, obviously we have from the Schrödinger equation:
$H=i\hbar\frac{\partial}{\partial t}$
and thus
$\frac{\partial}{\partial t}=\frac{1}{i\hbar}H$
and thus
$<\frac{d\psi}{dt}|A|\psi>+<\psi|A|\frac{d\psi}{dt}>=\frac{1}{i\hbar}(<H\psi|A|\psi>+<\psi|A|H\psi>)=\frac{1}{i\hbar}<\psi|HA+AH|\psi>$
But this is not the commutator but the anti-commutator. It is plus and not minus! What did I do wrong here?
1. The problem statement, all variables and given/known data
2. Relevant equations
3. The attempt at a solution
2. Jul 26, 2009
### Dick
Remember <a|ib>=i*<a|b> but <ia|b>=(-i)*<a|b>.
3. Jul 26, 2009
### Unkraut
Oh, of course! :yuck:
Thank you.
|
2017-10-22 18:04:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5734625458717346, "perplexity": 2241.2629904287614}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825399.73/warc/CC-MAIN-20171022165927-20171022185927-00874.warc.gz"}
|
https://www.sparrho.com/item/adaptive-radio-transceiver-with-filtering/1082f55/
|
Imported: 10 Mar '17 | Published: 27 Nov '08
Hooman Darabi, Ahmadreza Rofougaran, Shahla Khorram, Brima Ibrahim
USPTO - Utility Patents
Abstract
An exemplary embodiment of the present invention described and shown in the specification and drawings is a transceiver with a receiver, a transmitter, a local oscillator (LO) generator, a controller, and a self-testing unit. All of these components can be packaged for integration into a single IC including components such as filters and inductors. The controller for adaptive programming and calibration of the receiver, transmitter and LO generator. The self-testing unit generates is used to determine the gain, frequency characteristics, selectivity, noise floor, and distortion behavior of the receiver, transmitter and LO generator. It is emphasized that this abstract is provided to comply with the rules requiring an abstract which will allow a searcher or other reader to quickly ascertain the subject matter of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or the meaning of the claims.
Description
CROSS-REFERENCE TO RELATED APPLICATION
The present application is a continuation of co-pending patent application Ser. No. 09/634, 552, filed Aug. 8, 2000, priority of which is hereby claimed under 35 U.S.C. 120. The present application also claims priority under 35 U.S.C. 119(e) to provisional Application Nos. 60/160,806, filed Oct. 21, 1999; Application No. 60/163,487, filed Nov. 4, 1999; Application No. 60/163,398, filed Nov. 4, 1999; Application No. 60/164,442, filed Nov. 9, 1999; Application No. 60/164,194, filed Nov. 9, 1999; Application No. 60/164,314, filed Nov. 9, 1999; Application No. 60/165,234, filed Nov. 11, 1999; Application No. 60/165,239, filed Nov. 11, 1999; Application No. 60/165,356; filed Nov. 12, 1999; Application No. 60/165,355, filed Nov. 12, 1999; Application No. 60/172,348, filed Dec. 16, 1999; Application No. 60/201,335, filed May 2, 2000; Application No. 60/201,157, filed May 2, 2000; Application No. 60/201,179, filed May 2, 2000; Application No. 60/202,997, filed May 2, 2000; Application No. 60/201,330, filed May 2, 2000. All these applications are expressly incorporated herein by referenced as though fully set forth in full.
FIELD OF THE INVENTION
The present invention relates to telecommunication systems, and in particular, to radio transceiver systems and techniques.
BACKGROUND OF THE INVENTION
Transceivers are used in wireless communications to transmit and receive electromagnetic waves in free space. In general, a transceiver comprises three main components: a transmitter, a receiver, and an LO generator or frequency synthesizer. The function of the transmitter is to modulate, upconvert, and amplify signals for transmission into free space. The function of the receiver is to detect signals in the presence of noise and interference, and provide amplification, downconversion and demodulation of the detected the signal such that it can be displayed or used in a data processor. The LO generator provides a reference signal to both the transmitter for upconversion and the receiver for downconversion.
Transceivers have a wide variety of applications ranging from low data rate wireless applications (such as mouse and keyboard) to medium data rate Bluetooth and high data rate wireless LAN 802.11 standards. However, due to the high cost, size and power consumption of currently available transceivers, numerous applications are not being fully commercialized. A simplified architecture would make a transceiver more economically viable for wider applications and integration with other systems. The integration of the transceiver into a single integrated circuit (IC) would be an attractive approach. However, heretofore, the integration of the transceiver into a single IC has been difficult due to process variations and mismatches. Accordingly, there is a need for an innovative transceiver architecture that could be implemented on a single IC, or alternatively, with a minimum number of discrete off-chip components that compensate for process variations and mismatches.
SUMMARY OF THE INVENTION
In one aspect of the present invention, a filter circuit includes a plurality of cascaded filters, and a bypass circuit coupled across one of the cascaded filters.
In another aspect of the present invention, a filter circuit includes a plurality of cascaded filters, and bypass means for bypassing at least one of the cascaded filters.
In yet another aspect of the present invention, filter circuit includes a biquad filter, and a polyphase filter coupled to the biquad filter.
In still another aspect of the present invention, a complex differential filter includes first and second differential amplifiers each having a differential input and a differential output, a first input resistor coupled to a first one of the differential inputs of the first differential amplifier, a second input resistor coupled to a second one of the differential inputs of the first differential amplifier, a third input resistor coupled to a first one of the differential inputs of the second differential amplifier, a fourth input resistor coupled to a second one of the differential inputs of the second differential amplifier, a first input capacitor having one end coupled to the first input resistor and another end coupled to the third input resistor, a second input capacitor having one end coupled to the second input resistor and another end coupled to the fourth input resistor, a third input capacitor having one end coupled to the third input resistor and another end coupled to the second input resistor, and a fourth input capacitor having one end coupled to the fourth input resistor and another end coupled to the first input resistor.
In a further aspect of the present invention, a method of complex filtering to extract a signal in a frequency spectrum comprising a plurality of channels includes selecting one of the channels having the signal, rejecting an image of the signal in the selected channel, and applying gain to the signal, the applied gain being programmable.
It is understood that other embodiments of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein it is shown and described only embodiments of the invention by way of illustration of the best modes contemplated for carrying out the invention. As will be realized, the invention is capable of other and different embodiments and its several details are capable of modification in various other respects, all without departing from the spirit and scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.
Exemplary Embodiments of a Transceiver
In accordance with an exemplary embodiment of the present invention, a transceiver utilizes a combination of frequency planning, circuit design, layout and implementation, differential signal paths, dynamic calibration, and self-tuning to achieve robust performance over process variation and interference. This approach allows for the full integration of the transceiver onto a single IC for a low cost, low power, reliable and more compact solution. This can be achieved by (1) moving external bulky and expensive image reject filters, channel select filters, and baluns onto the RF chip; (2) reducing the number of off-chip passive elements such as capacitors, inductors, and resistors by moving them onto the chip; and (3) integrating all the remaining components onto the chip. As those skilled in the art will appreciate, the described exemplary embodiments of the transceiver do not require integration into a single IC and may be implemented in a variety of ways including discrete hardware components.
As shown in FIG. 1, a described exemplary embodiment of the transceiver includes an antenna 8, a switch 9, a receiver 10, a transmitter 12, a local oscillator (LO) generator (also called a synthesizer) 14, a controller 16, and a self-testing unit 18. All of these components can be packaged for integration into a single IC including components such as filters and inductors.
The transceiver can operate in either a transmit or receive mode. In the transmit mode, the transmitter 12 is coupled to the antenna 8 through the switch 9. The switch 9 provides sufficient isolation to prevent transmitter leakage from desensitizing or damaging the receiver 10. In the receive mode, the switch 9 directs signal transmissions from the antenna 8 to the receiver 10. The position of the switch 9 can be controlled by an external device (not shown) such as a computer or any other processing device known in the art.
The receiver 10 provides detection of desired signals in the presence of noise and interference. It should be able extract the desired signals and amplify it to a level where information contained in the received transmission can be processed. In the described exemplary embodiment, the receiver 10 is based on a heterodyne complex (I-Q) architecture with a programmable intermediate frequency (IF). The LO generator 14 provides a reference signal to the receiver 10 to downconvert the received transmission to the programmed IF.
A low IF heterodyne architecture is chosen over a direct conversion receiver because of the DC offset problem in direct conversion architectures. DC offset in direct conversion architectures arises from a number of sources including impedance mismatches, variations in threshold voltages due to process variations, and leakage from the LO generator to the receiver. With a low IF architecture, AC coupling between the IF stages can be used to remove the DC offset.
The transmitter 12 modulates incoming data onto a carrier frequency. The modulated carrier is upconverted by the reference signal from the LO generator 14 and amplified to a sufficient power level for radiation into free space through the antenna 8. The transmitter uses a direct conversion architecture. With this approach only one step of upconversion is required This leads to a reduction in both circuit complexity and power consumption.
The controller 16 performs two functions. The first function provides for adaptive programming of the receiver 10, transmitter 14 and LO generator 16. By way of example, the transceiver can be programmed to handle various communication standards for local area networks (LAN) and personal area networks (PAN) including HomeRF, IEEE 802.11, Bluetooth, or any other wireless standard known in the art. This entails programming the transceiver to handle different modulation schemes and data rates. The described exemplary embodiment of the transceiver can support modulation schemes such as Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK), offset quadrature phase shift keying (OQPSK), Multiple frequency modulations such as M level frequency shift keying (FSK), Continuous Phase Frequency Shift Keying modulation (CFSK), Minimum Shift Keying modulation (MSK), Gaussian filtered FSK modulation (GFSK), and Gaussian filtered Minimum Shift Keying (GMSK), Phase/Amplitude modulation (such as Quadrature Amplitude Modulation (QAM)), orthogonal frequency modulation (such as Orthogonal Frequency Division Multiplexing (OFDM)), direct sequence spread spectrum systems, and frequency hopped spread spectrum systems and numerous other modulation schemes known in the art. Dynamic programming of the transceiver can also be used to provide optimal operation in the presence of noise and interference. By way of example, the IF can be programmed to avoid interference from an external source.
The second function provides for adaptive calibration of the receiver 10, transmitter 14 and LO generator 16. The calibration functionality controls the parameters of the transceiver to account for process and temperature variations that impact performance. By way of example, resistors can be calibrated within exacting tolerances despite process variations in the chip fabrication process. These exacting tolerances can be maintained in the presence of temperature changes by adaptively fine tuning the calibration of the resistors.
The controller 16 can be controlled externally by a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a computer, or any other processing device known in the art. In the described exemplary embodiment, a control bus 17 provides two way communication between the controller 16 and the external processing device (not shown). This communication link can be used to externally program the transceiver parameters for different modulation schemes, data rates and IF operating frequencies. The output of the controller 16 is used to adjust the parameters of the transceiver to achieve optimal performance in the presence of process and temperature variations for the selected modulation scheme, data rate and IF.
The self-testing unit 18 generates test signals with different amplitudes and frequency ranges. The test signals are coupled to the receiver 10, transmitter 12 and LO generator 14 where they are processed and returned to the self-testing unit 18. The return signals are used to determine the gain, frequency characteristics, selectivity, noise floor, and distortion behavior of the receiver 10, transmitter 12 and LO generator 14. This is accomplished by measuring the strength of the signals output from the self-testing unit 18 against the returned signals over the tested frequency ranges. In an exemplary embodiment of the self-testing unit 18, these measurements can be made with different transceiver parameters by sweeping the output of the controller 16 through its entire calibrating digital range, or alternatively making measurements with the controller output set to a selected few points, by way of example, at the opposite ends of the digital range.
In the described exemplary embodiment, the self-testing unit 18 is in communication with the external processing device (not shown) via the control bus 17. During self-test, the external processing device provides programming data to both the controller 16 and the self-testing unit 18. The self-testing unit 18 utilizes the programming data used by the controller 16 to set the parameters of the transceiver to determine the gain, frequency characteristics, selectivity, noise floor, and distortion behavior of the receiver 10, transmitter 12 and LO generator 14.
FIG. 2 shows a block diagram of the transceiver in accordance with an embodiment of the invention. The described exemplary embodiment is integrated into a single IC. For ease of understanding, each component coupled to the controller is shown with a program designation or a calibration designation. These designations indicate whether the component is programmed by the controller or calibrated by the controller. In practice, in accordance with the described exemplary embodiment of the present invention, the components that are programmed receive the MSBs and the components that are calibrated receive the LSBs. The components requiring both programming and calibration receive the entire digital output from the controller. As those skilled in the art will appreciate, any number of methodologies may be used to deliver programming and calibration information to the individual components. By way of example, a single controller bus could be used having the programming and or calibration data with the appropriate component addresses.
The receiver 10 front end includes a low noise amplifier (LNA) 22 which provides high gain with good noise figure performance. Preferably, the gain of the LNA 22 can be set by the controller (not shown) through a select gain input to maximize the receivers dynamic range.
The desirability of dynamic gain control arises from the effect of blockers or interferers which can desensitize the LNA. Conventional filter designs at the input of the LNA 22 may serve to sufficiently attenuate undesired signals below a certain power level, however, for higher power blockers or interferers, the LNA 22 should be operated with low gain.
The output of the LNA 22 is downconverted to a low IF frequency by the combination of complex IF mixers 24 and a complex bandpass filter 26. More particularly, the output of the LNA 22 is coupled to the complex IF mixers 24 which generate a spectrum of frequencies based upon the sum and difference of the received signal and the RF clocks from the LO generator. The complex bandpass filter passes the complex IF signal while rejecting the image of the received signal. The image rejection capability of the complex IF mixers 24 in cooperation with the complex bandpass filter 26 eliminates the need for the costly and power consuming preselect filter typically required at the input of the LNA for conventional low IF architectures.
The output of the complex bandpass filter 26 is coupled to a programmable multiple gain stage amplifier 28. The amplifier 28 can be designed to be programmable to select between a limiter and an automatic gain control (AGC) feature, depending on the modulation scheme used in the transceiver. The limiting amplifier can be selected if the transceiver uses a constant envelope modulation such as FSK. AGC can be selected if the modulation is not a constant envelope, such as QAM. In addition, the bandwidth of the amplifier 28 can be changed by the controller to accommodate various data rates and modulation schemes.
The output of the amplifier 28 is coupled to a second set of complex IF mixers 30 where it is mixed with the IF clocks from the LO generator for the purpose of downconverting the complex IF signal to baseband. The complex IF mixers 30 not only reject the image of the complex IF signal, but also reduces some of the unwanted cross modulation spurious signals thereby relaxing the filtering requirements.
The complex baseband signal from the mixers 30 is coupled to a programmable passive polyphase filter within a programmable low pass filter 32. The programmable low pass filter 32 further filters out higher order cross modulation products. The polyphase filter can be centered at four times the IF frequency to notch out one of the major cross modulation products which results from the multiplication of the third harmonic of the IF signal with the IF clock. After the complex baseband signal is filtered, it either is passed through an analog-to-digital (A/D) converter 34 to be digitized or is passed to an analog demodulator 36. The analog demodulator 36 can be implemented to handle any number of different modulation schemes by way of example FSK. Embodiments of the present invention with an FSK demodulator uses the A/D converter 36 to sample baseband data with other modulation schemes for digital demodulation in a digital signal processor (not shown).
The LO generator 14 provides the infrastructure for frequency planning. The LO generator 14 includes an IF clock generator 44 and an RF clock generator 47. The IF clock generator includes an oscillator 38 operating at a ratio of the RF signal (fOCS). High stability and accuracy can be achieved in a number of ways including the use of a crystal oscillator.
The reference frequency output from the oscillator 38 is coupled to a divider 40. The divider 40 divides the reference signal fOSC by a number L to generate the IF clocks for downconverting the complex IF signal in the receiver to baseband. A clock generator 41 is positioned at the output of the divider 40 to generate a quadrature sinusoidal signal from the square wave output of the divider 40. Alternatively, the clock generator 41 can be located in the receiver. The divider 40 may be programmed by through the program input. This feature allows changes in the IF frequency to avoid interference from an external source.
The output of the divider 40 is coupled to the RF clock generator 47 where it is further divided by a number n by a second divider 42. The output of the second divider 42 provides a reference frequency to a phase lock loop (PLL) 43. The PLL includes a phase detector 45, a divide by M circuit 46 and a voltage controlled oscillator (VCO) 48. The output of the VCO 48 is fed back through the divide by M circuit 46 to the phase detector 45 where it is compared with the reference frequency. The phase detector 45 generates an error signal representative of the phase difference between the reference frequency and the output of the divide by M circuit 46. The error signal is fed back to the control input of the VCO 48 to adjust its output frequency fVCO until the VCO 48 locks to a frequency which is a multiple of the reference frequency. The VCO 48 may be programmed by setting M via the controller through the program input to the divide by M circuit 46. The programmability resolution of the VCO frequency fVCO is set by the reference frequency which also may be programmed by the controller through the program input of the divider 42.
In the described exemplary embodiment, the VCO frequency is sufficiently separated (in frequency) from the RF frequency generated by the transmitter 12 to prevent VCO pulling and injection lock of the VCO. Transmitter leakage can pull the VCO frequency toward the RF frequency and actually cause the VCO to lock to the RF signal if their frequencies are close to each other. The problem is exasperated if the gain and tuning range of the VCO is large. If the frequency of the RF clocks is fLO, then the VCO frequency can be defined as: fVCO=NfLO/(N+1). This methodology is implemented with a divide by N circuit 50 coupled to the output of the VCO 48 in the PLL 43. The output of the VCO 48 and the output of the divide by N circuit 50 are coupled to a complex mixer 52 where they are multiplied together to generate the RF clocks. A filter 53 can be positioned at the output of the complex mixer to remove the harmonics and any residual mixing images of the RF clocks. The divide by N circuit can be programmable via the controller through the select input. For example, if N=2, then fVCO=()fLO, and if N=3, then fVCO=()fLO.
A VCO frequency set at the frequency of the RF clocks works well in the described exemplary embodiment because the transmitter output is sufficiently separated (in frequency) from the VCO frequency. In addition, the frequency of the RF clocks is high enough so that its harmonics and any residual mixing images such as fVCO1(1/N)), 3fVCO1+(1/N), and 3fVCO1(1/N)) are sufficiently separated (in frequency) from the transmitter output to relax the filtering requirements of the RF clocks. The filtering requirements do not have to be sharp because the filter can better distinguish between the harmonics and the residual images when they are separated in frequency. Programming the divide by N circuit 50 also provides for the quadrature outputs of the divide by N circuit. Otherwise, with an odd number programmed, the outputs of the divide by N circuit 50 would not be quadrature. For an odd number, the divider 50 outputs will be differential, but will not be 90 degrees out of phase, i.e., will not be I-Q signals.
In the described exemplary embodiment, the RF clocks are generated in the in the LO generator 14. This can be accomplished in various fashions including, by way of example, either generating the RF clocks in the VCO or using a polyphase circuit to generate the RF clocks. Regardless of the manner in which the RF clocks are generated, the mixer 52 will produce a spectrum of frequencies including the sum and difference frequencies, specifically, fVCO(1+(1/N)) and its image fVCO(1(1/N)). To reject the image, the mixer 52 can be configured as a double quadrature mixer as depicted in FIG. 3. The double quadrature mixer includes one pair of mixers 55, 57 to generate the Q-clock and a second pair of mixers 59, 61 to generate the I-clock. The Q-clock mixers utilizes a first mixer 55 to mix the I output of the VCO 48 (see FIG. 2) with the Q output of the divider 40 and a second mixer 57 to mix the Q output of the VCO with the I output of the divider. The outputs of the first and second mixers are connected together to generate the Q-clock. Similarly, the I-clock mixers utilizes a first mixer 59 to mix the I output of the divider with the Q output of the VCO and a second mixer 61 to mix the Q output of the divider with the I output of the VCO. The outputs of the first and second mixers are connected together to generate the I-clock. This technique provides very accurate I-Q clocks by combination of quadrature VCO and filtering. Because of the quadrature mixing, the accuracy of the I-Q clocks is not affected by the VCO inaccuracy, provided that the divide by N circuit generates quadrature outputs. This happens for even divide ratios, such as N=2.
Optimized performance is achieved through frequency planning and implemented by programmable dividers in the LO generator to select different ratios. Based on FIG. 2, all the dependencies of the frequencies are shown by the following equation:
fLO=fRF(MfOSC/nL)(1+1/N)fOSC/L
where fRF is frequency of the transmitter output.
Turning back to FIG. 2, the transmitter 12 includes a complex buffer 54 for coupling incoming I-Q modulated baseband signals to a programmable low-pass filter 56. The low-pass filter 56 can be programmed by the controller through the select input. The output of the low-pass filter 56 is coupled to complex mixers 58. The complex mixers 58 mixes the I-Q modulated baseband signals with the RF clocks from the LO generator to directly upconvert the baseband signals to the transmitting frequency. The upconverted signal is then coupled to an amplifier 60 and eventually a power amplifier (PA) 62 for transmission into free space through the antenna. A bandpass filter (not shown) may be disposed after the PA 62 to filter out unwanted frequencies before transmission through the antenna.
In the described exemplary embodiment, the transmitter can be configured to minimize spurious transmissions. Spurious transmissions in a direct conversion transmitter are generated mainly because of the nonlinearly of the complex mixers and the DC offsets at the input to the complex mixers. Accordingly, the complex mixers can be designed to meet a specified IIP3 (Input Intercept Point for the 3rd Harmonic) for the maximum allowable spurs over the frequency spectrum of the communications standard. The DC offsets at the input to the complex mixers can be controlled by the physical size of the transistors.
In addition, the transmitter can be designed to minimize spurious transmission outside the frequency spectrum of the communications standard set by the FCC. There are two sources for these spurs: the LO generator and the transmitter. These spurs can be are suppressed by multiple filtering stages in the LO generator and transmitter. Specifically, in the LO generator, due to the complex mixing of the VCO signal with the output of the divide by N circuit, all the spurs are at least fVCO/N away from the RF clocks. By setting N to 2, by way of example, these unwanted spurs will be sufficiently separated (in frequency) from the transmitted signal and are easily removed by conventional filters in the LO generator and transmitter. Thus, the spurs will be mainly limited to the harmonics of the transmitted signal, which are also sufficiently separated (in frequency) from the transmitted signal, and therefore, can be rejected with conventional filtering techniques. For further reduction in spurs, a dielectric filter may be placed after the PA in the transmitter.
1.1 Differential Amplifier
In exemplary embodiments of the present invention, a differential amplifier can be used to provide good noise immunity in low noise applications. Although the differential amplifiers are described in the context of a low noise amplifier (LNA) for a transceiver, those skilled in the art will appreciate that the techniques described are likewise suitable for various applications requiring good noise immunity. Accordingly, the described exemplary embodiments of an LNA for a transceiver is by way of example only and not by way of limitation.
1.1.1 Single-to-Differential LNA
The described LNA can be integrated into a single chip transceiver or used in other low noise applications. In the case of transceiver chip integration, the LNA should be relatively insensitive to the substrate noise or coupling noise from other transceiver circuits. This can be achieved with a single-to-differential LNA. The single-ended input provides an interface with an off-chip single-ended antenna. The differential output provides good noise immunity due to its common mode rejection.
FIG. 4 shows a schematic of a single-to-differential amplifier having two identical cascode stages that are driven by the same single-ended input 64. The input 64 is coupled to a T-network having two series capacitors 82, 84 and a shunt inductor 72. The first stage includes a pair of transistors 74, 78 connected between the shunt inductor 72 and a DC power source via an inductor 68. The second stage includes a complimentary pair of transistors 76, 80 connected between ground and the DC power source via an inductor 70. The gate of the one of the transistors 80 in the second stage is connected to the output of the T-network at the capacitor 84. A bias current is applied to the gate of each transistor.
This configuration provides an input that is well matched with the antenna because the parallel connection of the T-network with the source of the transistor 78 transforms the 1/gm (transconductance) of the transistor to a resistance (preferably 50 ohms to match the antenna). By adjusting the values of T-network components, the matching circuit can be tuned for different frequencies and source impedances. The input capacitor 82 of the T-network further provides decoupling between the antenna and the amplifier.
For DC biasing purposes, the shunt inductor 72 provides a short circuit to ground allowing both stages of the amplifier to operate at the same DC drain current. The output capacitor 84 provides DC isolation between the gate bias applied to the transistor 80 of the second stage and the source 82 of the transistor 78 in the first stage.
In operation, a signal applied to the input of the amplifier is coupled to both the source 82 of the transistor 78 of the first stage and the gate 83 of the transistor 80 of the second stage. This causes the gain of each stage to vary inversely to one another. As a result, the signal voltage applied to the input of the amplifier is converted to a signal current with the signal current in the first stage being inverted from the signal current in the second stage. Moreover, the two stages will generate the same gain because the gin of the transistors should be the same, and therefore, the total gain of the amplifier is twice as much as conventional single-to-differential amplifiers.
1.1.2 Differential LNA
A differential LNA can also be used to provide good noise immunity in low noise applications, such as the described exemplary embodiment of the transceiver. In FIG. 4(a), an exemplary differential LNA is shown having a cascode differential pair with inductive degeneration. In the described exemplary embodiment, the differential LNA can be integrated into a single chip transceiver or used in other similar applications.
In the case of transceiver chip integration, an off chip coupler (not shown) can be used to split the single-ended output from the antenna into a differential output with each output being 180 out of phase. The LNA input can be matched to the coupler, i.e., a 50 ohm source, by LC circuits. A shunt capacitor 463 in combination with a series inductor 465 provides a matching circuit for one output of the coupler, and a shunt capacitor 467 in combination with a series inductor 469 provides a matching circuit for the other output of the coupler. At 2.4 GHz., each LC circuit may be replaced by a shunt capacitor and transmission line. In the described exemplary embodiment, the LC circuits are off-chip for improved noise figure performance. Alternatively, the LC circuits could be integrated on chip. However, due to the high loss of on chip inductors, the noise figure, as well as gain, could suffer.
The differential output of the coupler is connected to a differential input of the LNA via the LC matching circuits. The differential input includes a pair of input FET transistors 471,473 with inductive degeneration. This is achieved with an on chip source inductor 475 connected between the input transistor 471 and ground, and a second on chip source inductor 479 connected between the input transistor 473 and ground. The on chip inductive degeneration provides a predominantly resistive input impedance. In addition, the FET noise contribution at the operating frequency is reduced.
The outputs of the input transistors 471, 473 are coupled to a cascode stage implemented with a pair of transistors 481, 486, respectively. The cascode stage provides isolation between the LNA input and its output. This methodology improves stability, and reduces the effect of the output load on the LNA input matching circuits. The gates of the cascode transistors 481, 486 are biased at the supply voltage by a resistor 488. The resistor 488 reduces instability that might otherwise be caused by parasitic inductances at the gates of the cascoded transistors 481, 486. Since the described exemplary embodiment of the LNA uses a differential architecture, the resistor does not contribute noise to the LNA output.
The output of cascoded transistor 481 is coupled to the supply voltage through a first inductor 490. The output of the cascodedc transistor 486 is coupled to the supply voltage through a second inductor 492. The LNA is tuned to the operating frequency by the output inductors 490, 492. More particularly, these inductors 490, 492 resonate with the LNA output parasitic capacitance, and the input capacitance of the next state (not shown). Embodiments of the present invention integrated into a single integrated circuit do not require a matching network at the LNA output.
The gain of the LNA can be digitally controlled. This is achieved by introducing a switchable resistor in parallel with each of the output inductors. In the described exemplary embodiment, a series resistor 494 and switch 496 is connected in parallel with the output inductor 490, and a second series resistor 498 and switch 500 is connected in parallel with the output inductor 492. The switches can be FET transistors or any other similar switching devices known in the art. In the low gain mode, each resistor 494,498 is connected in parallel with its respective output inductor 490, 492, which in turn, reduces the quality factor of each output inductor, and as a consequence the LNA gain. In the high gain mode, the resistors 494, 498 are switched out of the LNA output circuit by their respective switches 496, 500.
1.2 A Complex Filter
In an exemplary embodiment of the present invention, a programmable/tunable complex filter is used to provide frequency planning, agility, and noise immunity. This is achieved with variable components to adjust the frequency characteristics of the complex filter. Although the complex filter is described in the context of a transceiver, those skilled in the art will appreciate that the techniques described are likewise suitable for various applications requiring frequency agility or good noise immunity. Accordingly, the described exemplary embodiment for a complex filter in a transceiver is by way of example only and not by way of limitation.
The described complex filter can be integrated into a single chip transceiver or used in other low noise applications. In the case of transceiver chip integration, the off-chip filters used for image rejection and channel selection can be eliminated. A low-IF receiver architecture enables the channel-select feature to be integrated into the on-chip filter. However, if the IF lies within the bandwidth of the received signal, e.g. less than 80 MHz in the Bluetooth standard, the on-chip filter should be a complex filter (which in combination with the complex mixers) can suppress the image signal. Thus, either a passive or an active complex filter with channel select capability should be used. Although a passive complex filter does not dissipate any power by itself, it is lossy, and loads the previous stage significantly. Thus, an active complex filter with channel select capability is preferred. The channel select feature of the active complex filter can achieve comparable performance to conventional band-pass channel-select filters in terms of noise figure, linearity, and power consumption
The described exemplary embodiment of the complex filter accommodates several functions in the receiver signal path: it selects the desired channel, rejects the image signal which lies inside the data band of the received signal due to its asymmetric frequency response, and serves as a programmable gain amplifier (PGA). Moreover, the complex filter center frequency and its bandwidth can be programmed and tuned. These capabilities facilitate a robust receiver in a wireless environment, where large interferers may saturate the receiver or degrade the signal-to-noise ratio at the demodulator input. The attenuation of the received signal at certain frequencies can also be enhanced by introducing zeros in the complex filter.
In the described exemplary embodiment, the bypass switches are operated in accordance with the output from the controller 16 (see FIG. 2). An 8'th order filter can be constructed by opening the bypass switches 91, 93, 95, 97 via the digital signal from the controller output. The complex filter can be reduced to a 6'th order filter by closing the bypass switch 97 to effectively remove the output stage biquad from the complex filter. Similarly, the complex filter can be reduced to a 4'th order filter by closing bypass switches 95, 97 effectively removing the third stage biquad and output stage biquad. A 2'nd order filter can be created by closing bypass switches 93, 95, 97 effectively removing all biquads with the exception of the input stage from the circuit.
1.2.1.1 The Poles of a Biquad Stage
FIG. 6 shows an exemplary embodiment of a biquad stage of the complex filter. The biquad stage includes two first order resistor-capacitor (RC) filters each being configured with a differential operational amplifier 94, 96, respectively. The first differential operational amplifier 94 includes two negative feedback loops, one between each differential output and its respective differential input. Each feedback loop includes a parallel RC circuit (98, 106), (108, 100), respectively. Similarly, the second differential operational amplifier 96 includes two negative feedback loops, one between each differential output and its respective differential input. Each feedback loop includes a parallel RC circuit (102-110), (112-104), respectively. This topology is highly linear, and therefore, should not degrade the overall IIIP3 of the receiver. The RC values determine the pole of the biquad stage.
The differential inputs of the biquad stage are coupled to their respective differential operational amplifiers through input resistors 114, 116, 118, 120. The input resistors in combination with their respective feedback resistors set the gain of the biquad stage.
Preferably, some or all of the resistors and capacitors values can programmable and can be changed dynamically by the controller This methodology provides a frequency agile biquad stage.
The two first order RC filters are cross coupled by resistors 86, 88, 90, 92. By cross-coupling between the two filters, a complex response can be achieved, that is, the frequency response at the negative and positive frequencies will be different. This is in contrast to a real-domain filter, which requires the response to be symmetric at both positive and the negative frequencies. This feature is useful because the negative frequency response corresponds to the image signal. Thus, the biquad stage selects the desired channel, whereas the image signal, which lies at the negative frequency is attenuated.
For the resistor values shown in FIG. 6, the biquad stage outputs are:
$V OI = A ( 1 + j RC ) V II + 2 QV IQ ( 1 + j RC ) 2 + 4 Q 2 and ( 1 ) V OQ = A - 2 QV II + ( 1 + j RC ) V IQ ( 1 + j RC ) 2 + 4 Q 2 ( 2 )$
FIG. 7 shows the frequency response for the complex biquad filter.
After the received signal is downconverted, the desired channel in the I path lags the one in the Q path, that is, VII=jVIQ, and therefore:
$H ( j ) = V o V I ( j ) = A 1 + j RC - j 2 Q ( 3 )$
This shows a passband gain of A 122 at a center frequency of 2 Q/RC 124, with a 3-dB bandwidth of 2RC 126. Thus, the quality factor of the second-order stage will be Q. For the image signal however, the signal at the I branch leads, and as a result:
$H ( j ) = A 1 + j RC + j2 Q ( 4 )$
which shows that the image located at 2 Q/RC is rejected by
$1 1 + ( 4 Q ) 2 .$
Therefore, the biquad stage has an asymmetric frequency response, that is, the desired signal may be assigned to positive frequencies, whereas the image is attributed to negative frequencies. In general, the frequency response of the biquad stage is obtained by applying the following complex-domain transformation to a normalized real-domain lowpass filter:
$j - j ( - 0 ) BW ( 5 )$
where 0 is the bandpass (BP) center frequency, and BW is the lowpass (LP) equivalent bandwidth, equal to half of the bandpass filter bandwidth. For instance, for a second-order biquad stage (as shown in FIG. 6), 0=2 Q/RC, and BW=1/RC. The biquad stage is designed by finding its LP equivalent frequency response using equation (5). Once the LP poles are known, the BP poles are calculated based on equation (5). Assume that the LP equivalent has n poles, and PiLP=i+ji is the ith pole. From equation (5), the BP pole will be:
Pi,BP=BWPi,LP+j0=iBW+j(0+iBW)(6)
The complex filter is realized by cascading n biquad stages. Therefore, similar to real-domain bandpass filters, an nth order complex filter uses 2n integrators. Based on equation (3), each biquad stage has a pole equal to 1/RC+j2 Q/RC. Thus:
$i BW = - 1 RC and ( 7 ) o + i BW = 2 Q RC ( 8 )$
Since the LP equivalent poles are located in the left-half plane, ai is always negative. The above equations set the value of Q and RC in each stage. The gain of each biquad stage can be adjusted based on the desired gain in the complex filter, and noise-linearity trade-off: increasing the gain of one biquad stage lowers the noise contributed by the following biquad stages, but it also degrades the linearity of the complex filter.
In addition to image rejection, the complex frequency transformation of the biquad stage (equation (5)) provides for its frequency response to be symmetric around its center frequency as shown in FIG. 7. This is in contrast to regular bandpass filters which use the following real-domain transformation:
$j - j ( 2 - 0 2 ) BW ( 9 )$
This symmetric response in the biquad stage ensures a uniform group delay across the data band.
1.2.1.2 The Zeros of a Biquad Stage
The described exemplary embodiment of the biquad stage can be modified to obtain a sharper rejection or notch at an undesired signal at a specific frequency. This can be achieved in the biquad stage by adding zeros. Assume that the input resistors at the biquad input (Ri 114 in FIG. 6) is replaced with an admittance Yi. For the received signal, the frequency response of the biquad stage will be equal to:
$H ( j ) = R Y i 1 + j RC - j 2 Q ( 10 )$
FIG. 8 shows Yi having resistor Rz 128 and capacitor Cz 130.
In order to have a zero located at j axis in the frequency response, Yi should contain a term such as 1/z. If Yi is simply made of a resistor Rz in parallel with a capacitor Cz, then the input admittance will be equal to:
$Y i = 1 R z + j C z ( 11 )$
which is not desirable, since the zero will be in the left-half plane, rather than the j axis.
FIG. 9 shows Yi with the capacitor Cz 132 connected to the Q input 134 and the resistor RZ connected to the I input 136. Now the current I will be equal to:
$I = V R z + j C z ( j V ) ( 12 )$
Therefore, the input admittance will be equal to:
$Y i = 1 V = 1 R z - C z ( 13 )$
which indicates that the filter will have a zero equal to 1/RzCz at the j axis.
FIG. 10 shows a single biquad stage modified to have a zero at the j axis. The biquad stage includes capacitors 138, 140, 142, 144. The combination of capacitors 138, 140, 142, 148 and resistors 116, 118 determines a complex zero with respect to the center frequency. The transfer function for the received signal will be:
$H ( j ) = A 1 - RC z A 1 + j RC - j 2 Q ( 14 )$
Equation (14) is analogous to equation (3), with the difference that now a zero at A/RCz is added to the biquad stage of the complex filter. By knowing the LP equivalent characteristics of the biquad stage, the poles are calculated based on equation (6). The value of Q and RC in each biquad stage is designed by using equation (7) and equation (8). If the normalized LP zeros are at z,LP, then the biquad stage should be realized with two biquad stages cascoded, and the frequency of zeros in the biquad stages will be (equation (5)):
z,1,2=0z,LPBW(15)
If the differential I and Q inputs connected to the zero capacitors are switched, the biquad stage will have zeros at negative frequencies (image response). This property may be exploited to notch the image signal.
1.2.1.3 Tunability and Programmability
In addition to channel selection and image rejection, the described exemplary embodiment of the complex filter can provide variable gain, bandwidth, and center frequency. In addition, an automatic tuning loop can be implemented to adjust the center frequency. These features result in a high quality receiver which can dynamically support different communication standards, modulation schemes and data rates.
By changing the gain of the biquad stages, the complex filter can perform as a PGA in the signal path of the receiver. This assures that the output swing of the complex filter remains constant when the receiver input signal changes. Moreover, adaptivity is achieved through dynamic programming of the bandwidth and center frequency. By way of example, when the receive environment is less noisy, the transmitter may switch to a higher data rate, and the bandwidth of the complex filter should increase proportionally. The center frequency, on the other hand, may be changed to increase the receiver immunity to blockers and other interferers.
The center frequency of each biquad stage is equal to 2 Q/RC. The quality factor, Q, is precisely set, since it is determined by the ratio of two resistors (Rf and Rc in FIG. 10), which can be accurately established when the resistors are implemented on-chip. However, the RC product varies by temperature and process variations, and therefore, may be compensated by automatic tuning methods.
Referring to FIG. 12(a), each capacitor can be implemented with a capacitor 148 connected in parallel with a number of switchable capacitors 150, 152, 154, 156. The capacitance, and thereby the center frequency of the complex filter, can be varied by selectively switching in or out the capacitors based on a four-bit binary code. Each bit is used to switch one of the parallel capacitors from the circuit. In the described exemplary embodiment, the capacitor 148 provides a capacitance of Cu/2. Capacitor 150 provides a capacitance of Cu/2. Capacitor 152 provides a capacitance of Cu/4. Capacitor 154 provides a capacitance of Cu/8. Capacitor 156 provides a capacitance of Cu/16. This provides 50% tuning range with 3% tuning accuracy. Due to discrete nature of the tuning scheme, there may be some error in the center frequency (1/(22n) for n-bit array). This inaccuracy can be tolerated with proper design.
Referring to FIG. 12(b), each resistor can be implemented with a series of switchable resistors 158, 160, 162, 164, 166. Resistor 166 provides a resistance of Ru. Resistor 164 provides a resistance of 2 Ru. Resistor 162 provides a resistance of 4 Ru. Resistor 160 provides a resistance of 8 Ru. Resistor 158 provides a resistance of 16 Ru. In the described exemplary embodiment, the resistance can be varied between Ru and 31Ru in incremental steps equal to Ru by selectively bypassing the resistor based on a five-bit binary code.
The center frequency of the complex filter can be adjusted by setting 1/RuCu equal to a reference frequency generated, by way of example, the crystal oscillator in the controller. The filter is automatically tuned by monotonic successive approximation as described in detail in Section 4.0 herein. Once the value of RuCu is set, the complex filter characteristics depends only on four-bit code for the capacitors and the four-bit code for the resistors. For example, assume that the value of the resistors in the biquad stage of FIG. 6 is as following: Ri=nARu, Rf=nQRu, and Rc=nQRu. Likewise, assume that C=nCCu, where nC is a constant, and that 1/RuCu=u. The value of u is set to a reference crystal by a successive approximation feedback loop. The filter frequency response for the received signal will be:
$H ( j ) = n F n A 1 + j n c n F R u C u - j n F n Q ( 16 )$
Therefore, the biquad stage gain (A), center frequency (0) and bandwidth (BW) will be equal to:
$A = n F n A ( 17 ) 0 = 1 n C n Q u ( 18 ) BW = 1 n C n F u ( 19 )$
The above equations show that the characteristics of the biquad stage is independently programmed by varying nA, nF, and nQ. For instance, by setting nF, the gain of the biquad stage changes from nF/31 to nF by changing nA from 1 to 31.
1.2.2 I-Q Monolithic Bandpass Filter
Alternatively, a low power I-Q monolithic bandpass filter can be used for the complex filter of the described exemplary embodiment of the present invention. The I-Q monolithic bandpass filter is useful for short-range communication applications. It also provides low power monolithic bandpass filtering for high data rates such as Bluetooth and HomeRF applications. The I-Q monolithic bandpass filter can be fully incorporated in monolithic channel select filters for 1-MHz data rates.
FIG. 13 is a block diagram of the I-Q monolithic bandpass filter in accordance with an embodiment of the present invention. The I-Q monolithic bandpass filter includes a cascode of selectively intertwined biquads 168 and polyphase circuits 170. The biquads can be the same as the biquads described in Section 1.2.1 herein, or any other biquads known in the art. Similarly, the polyphase circuits can also be any conventional polyphase circuits known in the art. The biquad circuits can be 2'nd order lowpass filters, which in conjunction with the polyphase circuits, exhibit a 1-MHz bandwidth bandpass filter with more than 45 dB rejection for all frequencies beyond 2 MHz away from the center of the band. The number of biquads determines the order of the I-Q monolithic bandpass filter. The polyphase filters are for wider bandwidth and image rejection. The number of polyphase filters determines the number of zeros in the frequency response of the I-Q monolithic bandpass filter.
In the described embodiment, an 8'th order Butterworth filter is implemented in conjunction with selective side band filtering of polyphase circuits to create a low IF I-Q monolithic bandpass filter. The described embodiment of the I-Q monolithic bandpass filter does not suffer excessive group delay despite large bandwidth. The input IP3 can be better than 5 dBm with a gain of more than 20 dB and the noise figure can be less than 40 dB. In fully integrated embodiments of the present invention, the I-Q monolithic bandpass filter can have on chip tuning capability to adjust for process, temperature and frequency variations.
1.3 Programmable Multiple Gain Amplifier
In one exemplary embodiment of the present invention, a programmable multiple gain amplifier is used in the receiver path between the complex filter and the complex IF mixer (see FIG. 2). The programmable multiple gain amplifier can be designed to be programmable to select between a limiter and an AGC feature. The programmable multiple gain amplifier, when operating as a limiter provides a maximum gain for frequency modulation applications. The programmable multiple gain amplifier, operating as an AGC, can be used for applications utilizing amplitude modulation.
FIG. 14 shows a block diagram of an exemplary embodiment of the programmable multiple gain amplifier with an RSSI output. The RSSI output provides an indication of the strength of the IF signal. The programmable multiple gain amplifier includes three types of amplifiers. The input buffer is shown as a type I amplifier 900 and the type III amplifier 904 serves as the output buffer. The core amplifier is shown as a direct-coupled cascade of seven differential amplifiers 930, 931, 932, 933, 934, 935, 936. The core amplifier includes seven bypass switches 930, 931, 932, 933, 934, 935, 936, one bypass switch connected across each differential amplifier. The bypass switches provide programmable gain under control of the controller (see FIG. 2).
When the programmable gain amplifier is operating as a limiter, all the bypass switches will be opened by the controller. Conversely, when the programmable gain amplifier is operating in the AGC mode, the output gain of the core amplifier will be varied by controlling the bypass switch positions to prevent saturation of the core amplifier by large signals. In the described exemplary embodiment, the RSSI signal is fed back to control the bypass switch positions through a digital AGC loop in the external processing device. The AGC loop provides information to the controller 16 via the control bus 17 regarding the optimum gain reduction (see FIG. 2). The controller translates the information from the external processing device into a digital signal for controlling the bypass switch positions of the core amplifier accordingly. The larger the RSSI signal, the greater the gain reduction of the core amplifier will be and the more bypass switches that will be closed by the controller.
In one embodiment of the programmable gain amplifier, the type I and type III amplifiers can be the same. FIG. 15 shows one possible construction of these amplifiers. In this configuration, transistors 952, 954 provide amplification of the differential input signal. The differential input signal is fed to the gates of transistor amplifiers 952, 954, and the amplified differential output signal is taken from the drains. The gain of the transistor amplifiers 952, 954 is set by load resistors 956, 958. Transistors 960, 962 provide a constant current source for the transistor amplifiers 952, 954. The load resistors 956, 958, connected between the drain of their respective transistor amplifiers 952, 954 and a common gate connection of transistors 960, 962, provides a bias current source to common mode feedback.
Turning back to FIG. 14, the type II core amplifier 902 includes a direct-coupled cascade of seven differential amplifiers 930, 931, 932, 933, 934, 935, 936, each with a voltage gain, by way of example, 12 dB. The voltage at the output of each differential amplifier 930, 931, 932, 933, 934, 935, 936 is coupled to a rectifier 937, 938, 939, 940, 941, 942, 943, 944, respectively. The outputs of the rectifiers are connected to ground through a common resistor 945. The summation of the currents from each of the rectifiers flowing through the common resistor provides a successive logarithmic approximation of the input IF voltage. With a 12 dB gain per each differential amplifier, a total cascaded gain of 84 dB is obtained. As those skilled in the art will appreciate, any number of differential amplifiers, each with the same or different gain, may be employed.
The input dynamic range of an RSSI is explained using the following derivation. Throughout this section, assume each rectifier has an ideal square law characteristic and its transfer function is:
y=2Vin2(20)
Now, assume that S is the maximum input range of one differential amplifier and rectifier combination, whichever is smaller. This is determined with the lowest of the two values Vi and VL that are the maximum input range of each differential amplifier, and the maximum input range of the rectifier, respectively.
S=min(VPVL)(21)
Therefore, the RSSI maximum input level is S, and the ideal RSSI minimum input level is S/An, where A is the gain of each differential amplifier and n is the number of the differential amplifiers. Thus, the ideal dynamic range is calculated as follows:
$Ideal Dynamic Range = 20 log S S A n = 20 log A n = 20 ( n ) log A ( 22 )$
However, in the case of a large amount of gain, the input level will be limited with the input noise and the dynamic range will also be limited to:
$Dynamic Range = 20 log S n n = total noise rms n = ( BW ) Noise Factor ( 23 )$
If each differential amplifier has the same input dynamic range VL and each full-wave rectifier has similar input dynamic range Vi, then the dynamic range of the logarithmic differential amplifier and the total RSSI circuitry are the same.
The logarithmic approximations are provided by piecewise linear summation of the rectified output of each differential amplifier. This is done by segmentation of the input voltage by the power of 1/A. Successively, each differential amplifier will reach the limiting point as the input signal grows by the power of A. Assuming each rectifier is modeled as shown in equation (20), the logarithmic approximation is modeled as following:
For an input being in the following range:
$S A n - m V i n S A n - m - 1 ( 24 )$
up to the last m stages of the differential amplifier are all being limited and the rest of the differential amplifiers are in the linear gain region. Therefore, the RSSI is shown to be:
This is further simplified to:
$RSSI = ( A ) 2 ( A ) 2 - 1 V i n 2 [ ( A ) 2 ( n - m - 1 ) - ] + m 2 S 2 ( 26 ) RSSI 1 ( A ) 2 - 1 V i n 2 ( A ) 2 ( n - m ) + m 2 S 2 ( 27 )$
The above equation is a first order approximation to the logarithmic function shown in equation (28) according to the first two terms of the Taylor expansion at a given operating point.
The following calculates the constant C from the maximum and minimum of the RSSI:
$Max RSSI - Min RSSI = C log A 2 n ( 29 ) RSSI = C log A 2 n ( 30 ) C = RSSI 2 n log A ( 31 ) ( Ideal ) RSSI = RSSI 2 n log A log V i n 2 ( 32 )$
To find the relation between the gain of a differential amplifier, the gain of a rectifier, and the maximum input range of the combined differential amplifier and the rectifier, the RSSI will be calculated for the two consecutive differential amplifier and rectifier combinations (see equations (33) and (34)) for both ideal RSSI equations (32) and approximated RSSI equation (27):
$V i n 1 = S ( A ) n - m ( 33 ) V i n 2 = S ( A ) n - m - 1 ( 34 ) ( Ideal ) RSSI 2 - RSSI 1 = log ( A ) 2 ( 35 ) ( Approximated ) RSSI 2 - RSSI 1 = 2 S 2 ( 36 )$
Therefore,
C log(A)2=2S62(37)
Using equations (18) and (12), the following expression is achieved:
$RSSI n = 2 S 2 ( 38 )$
Plugging equation (19) into (8) results in the following:
$RSSI = 1 ( A ) 2 - 1 ( A ) 2 ( n - m ) V i n 2 + m RSSI n ; S A n - m V i n S A n - m - 1 ( 39 )$
FIG. 16(a) shows a schematic diagram for an exemplary embodiment of the differential amplifier used in the type II core amplifier. The differential input signal is fed to the gates of transistor amplifiers 955, 957. The amplified differential output signal is provided at the drains of the transistor amplifiers 955, 957. The gain of the transistor amplifiers is set by load transistors 958, 860, each connected between the drain of one of the transistor amplifiers and a power source. More particularly, the gain of the differential amplifier is determined by the ratio of the square root of transistor amplifiers-to-load transistors.
$Gain ( A ) = w i n w i n = 200 6 5.8 ( 40 )$
The sources of the transistor amplifiers 955, 957 are connected in common and coupled to a constant current source transistor 952. In the described exemplary embodiment, the controller provides the bias to the gate of the transistor 952 to set the current.
An exemplary embodiment of the full-wave rectifier with two unbalanced source-coupled pairs cross-coupled is shown in FIG. 16(b). In this embodiment, the differential input signal is fed to an unbalanced pair of transistors. One of the differential input pairs is fed to the gates of the unbalanced transistor pair 968, 966 and the other differential input pair is fed to the gates of the other unbalanced transistor pair 964, 962. The drains of transistors 968, 962 are connected in common and provide one of the differential output pairs. The drains of transistors 964, 966 are connected in common and provide the other differential output pair. Transistors 968, 964 are connected in a common source configuration and coupled to a constant current source transistor 965. Transistors 962, 966 are also connected in a common source configuration with the common source connected to a current source transistor 967. The gates of the current sources 965, 967 are connected together. In the described exemplary embodiment, the controller provides the bias to the common gate connection to set the current.
Transistors 970 and 971 provide a current-mirror load to cross-coupled transistors 968, 962. Similarly, transistors 972, 973 provide a current-mirror load to cross-coupled transistors 962, 964. The current through the cross-coupled transistors 962, 964 is the sum of the current through the load transistor 972 and the current through the load transistor 971 which is mirrored from the load transistor 970. The current through the cross-coupled transistors 962, 962 is also mirrored to load transistor 973 for the RSSI output.
When the transistors 962, 964, 966, and 968 are operating in the saturation region, the following equations are shown for the differential output current DISQB1 where k is the ratio of the two unbalanced source-coupled transistors:
$if I SQM 1 = ( I D 1 + I D 4 ) - ( I D 2 + I D 3 ) = 2 ( I D C + I SQ ) = 2 k - 1 k + 1 I o - 4 k ( k - 1 ) N ( k + 1 ) 2 V I 2 ( 41 )$
The input dynamic range of the full rectifier is then:
$if I SQMI = O , V i = I o N k + 1 2 k ( 42 )$
The full-wave rectifier includes two unbalanced differential pairs with a unidirectional current output. One rectifier 976 taps each differential pair and sums their currents into a 10 kW resistor RL.
The square law portion of equation (41) multiplied by the resistance provides the 2S2 of equation (42):
$2 S 2 = 4 k ( k - 1 ) N ( k + 1 ) 2 V i 2 R L ( 43 )$
By plugging the Vi from equation (42) and replacing 2S2 from equation (38), the following relation is obtained:
$RSSI n = 2 k - 1 k + 1 I o R L ( 44 )$
For RSSI=1V, n=7 stages, RL=10000, and k=4, from the above equation Io is calculated to be 12 mA. Therefore, each rectifier will be biased with two 12 mA current sources (one 12 ma current source for the I signal and a second 12 ma current source for the Q channel). This results in an approximately logarithmic voltage, which indicates the received signal-strength (RSSI).
1.4 Complex IF Mixers
The IF down conversion to baseband signal can be implemented using four fully balanced quadrature mixers as shown in FIG. 17(a). This mixer configuration includes both quadrature inputs from the programmable multiple stage amplifier and quadrature IF clocks from the LO generator. This configuration produces single sideband, quadrature baseband signals, with minimum number of spurs at the output. These characteristics aid in relaxing the baseband filtering as well as simplifying the demodulator architecture. An IF mixer buffer 352 buffers the IF clock (Clk_I, Clk_Q as shown in FIG. 17(a)).
The outputs of the limiters are coupled to the quadrature clocks of the IF mixers (I_in for mixer 322, I_in for mixer 323, Q_in for mixer 324, Q_in for mixer 325) and the IF clocks are coupled to the data input of the IF mixers. This configuration minimizes spurs at the output of the IF mixers because the signal being mixed is the IF clocks which is a clean sine wave, and therefore, has minimal harmonics. The limiting action of the programmable multiple stage amplifier on the I and Q data will have essentially no effect on the spurs at the output of the IF mixers. FIG. 17b shows the IF mixer clock signal spectrum which contains only odd harmonics. The IF signals do not have even harmonics in embodiments of the present invention using a fully differential configuration. The bandwidth of the m'th(=2n+1) harmonic is directly proportional to mfs, whereas its amplitude is inversely proportional to mfs. FIG. 17c shows the sinusoidal input spectrum of the IF clocks. FIG. 17d shows the IF mixer output spectrum.
1.5 Clock Generator
A clock generator can be used to generate a quadrature sinusoidal signal with controlled amplitude. The clock generator can be located in the receiver, or alternatively the LO Generator, and provides a clean sinusoidal IF from the square wave output of the divider in the LO Generator for downconverting the IF signal in the receiver path to baseband. FIG. 18 shows a block diagram and signal spectrum of a clock generator. A sinusoidal signal is generated from a square-wave using cascaded polyphase. FIG. 18 shows a clock generator block diagram. The clock generator outputs clk_I and clk_Q for the IF mixer buffer (see FIG. 17). The clock generator includes a polyphase filter at 3fs 360, a polyphase filter at 5fs 362, and a low pass filter 364. FIG. 19a shows the input clock signal spectrum. FIG. 19b shows the spectrum at 3fs 366 and at 5fs 368 polyphase. FIG. 19c shows the sinusoidal signal generation after the low pass filter 364.
In fully integrated embodiments of the present invention, the controller can provide self calibration to generate precise signal levels with negligible dependency on the process variations. The two polyphase filters 360, 362 with RC calibration can be used to remove the first two odd harmonics of the signal. The remaining harmonics can be filtered with an on chip tunable low pass filter. The output of the clock generator block is a quadrature sinusoidal signal with controlled signal level. This spectrally clean signal is used at the input of complex IF mixers to downconvert the IF signal to baseband.
1.6 Programmable Low Pass Filter
The first major spurs out of downconversion process is at 4 times the IF frequency. A self calibrated 4fs polyphase filter can be used after the complex IF mixers to reduce the spurious and improve the linearity of the demodulator.
The polyphase filter can be implemented with two back to back polyphase to reject both positive phase and the negative phase. Built-in programmability can also be included for operating with other frequencies. This capability enables the demodulator to be highly flexible. It can support wide range of incoming IF frequencies and with different modulation schemes.
Following the polyphase filter, a quadrature lowpass filter can be used to remove unwanted spurs. The lowpass filter can be programmable and designed to minimize group delay distortion without sacrificing high frequency filtering characteristics.
In fully integrated embodiments of the present invention, the controller can provide on chip RC calibration to minimize any process variation. The programmability of the polyphase filter and the low pass filter adds a new degree of flexibility to the system; it can be used to accommodate different data bandwidths.
FIG. 20 shows a baseband spectrum filtering before the discriminator. FIG. 20(a) shows the signal spectrum at polyphase input, i.e., the frequency spectrum of the polyphase filter. FIG. 20(b) shows the signal spectrum at polyphase output, i.e. the frequency spectrum of the low pass filter. FIG. 20(c) shows the signal spectrum at the low pass filter output.
1.7 High Data Rate Frequency Demodulator
The demodulator may take on various forms to accommodate different modulation schemes. One embodiment of the demodulation used in connection with the present invention includes a low power, monolithic demodulator for high data rates in frequency modulated systems. This demodulator can provide data recovery for well over 1-MHz data rates.
The demodulator can be FSK or GMSK demodulator. FSK is digital frequency modulation. GMSK is a specific type of FSK. GMSK stands for Gaussian filtered FSK modulation, which means that GMSK has gaussian filtering at the output of frequency modulation. GMSK has more stringent requirements than FSK. The data rate is higher for GMSK and the modulation index is low for GMSK relative to FSK.
The described embodiment of the demodulator is a low power, fully integrated FSK/GMSK demodulator for high data rates and low modulation index. The FSK operates with the programmable gain stage amplifier as a limiter, and therefore, does not require oversampling clocks or complex AGC blocks.
FIG. 21 is a block diagram of an exemplary high data rate frequency demodulator in accordance with the present invention. The demodulator performs a balanced quadrature demodulation. Differentiators 329, 330 convert the baseband signal to a signal having an amplitude proportional to the baseband signal frequency. One differentiator 329 converts the I signal and the other differentiator 330 converts the Q signal. The I signal output of the differentiator 329 is coupled to a multiplier 331 where it is multiplied by the Q signal input into the demodulator. The Q signal output of the differentiator 330 is coupled to a multiplier 332 where it is multiplied by the I signal input into the demodulator. The multipliers 331, 332 produce a single ended DC signal. The DC signals are summed together by summation circuit 333. A peak detector/slicer 334 digitizes the DC signal from the summation circuit, thereby producing discrete zeros and ones.
The frequency discrimination can be performed using a differentiator as shown in FIG. 22. A differential input signal is coupled to the input of an amplifier 340 through capacitors 341, 342. A feedback resistor 343, 344 is coupled between each differential output. Its operation is based on generating an output signal level linearly proportional to the incoming signal frequency. In other words, the higher the incoming frequency, the larger signal amplitude output by the differentiator. Therefore, it is desirable to have a spur free signal at the input of this stage. High frequency spurs can degrade the performance of the differentiator. By using the polyphase filter in conjunction with the lowpass filter (see FIG. 2) before the demodulator, a nearly ideal baseband signal is input to the differentiator. The capacitors 341, 342 in the signal path with the resistive feedback operation of the amplifier is proportional to the time derivative of the input. For a sinusoidal input, V(in)=A sin(t), the output will be V(out): d/dt(V(in))=to.A.cos(t). Thus, the magnitude of the output increases linearly with increasing frequency.
The controller provides RC calibration to keep the differentiation gain process invariant. In order to reduce the effect of any high frequency coupling to the differentiator input, the differentiator gain is flattened out for frequencies beyond the band of interest. In addition to frequency discrimination, the differentiation process adds a 90 degrees phase shift to the incoming signal. This phase shift is inherent to differentiation process. Since the output is in quadrature phase with the input (except for differing amplitude), cross multiplication of the input and output results in frequency information.
FIG. 23 shows an exemplary analog multiplier 331, 332 with zero higher harmonics in accordance with the present invention. Buffers one 334 and two 335 are added to a Gilbert cell to linearize the voltage levels. Buffers one 334 and two 335 convert the two inputs into two voltage levels for true analog multiplication using a Gilbert cell. The Gilbert cell is comprised of transistors 336, 338, resistors 340, 342 and cross-coupled pairs of transistors 344, 346 and transistors 348, 350.
By cross multiplying the input and the output signals to the differentiator, the amplitude information is generated. Since the signals are at baseband, it can be difficult to filter out any spurs resulting from the multiplication process. Linearized buffers can be used to minimize spurs by providing a near ideal analog multiplier. On chip calibration can also be used to control the multiplication gain and to minimize process variation dependency. In order to accommodate high data rates such as 1 MHz and beyond, all the stages should have low phase delays. In addition, matching all the delays in quadrature signals can be advantageous.
The output of the multiplier is a single ended DC signal which is a linear function of the frequency. This analog output can represent multilevel FSK with arbitrary modulation index. The minimum modulation index is only limited by wireless communication fundamentals.
An exemplary peak detector/slicer for frequency data detection is shown in FIG. 24. The differential input signal is coupled to a peak detector 346 which detects the high peak. The differential input signal is also coupled to a second peak 347 detector which detects the low valley of the signal. The outputs of the peak detectors are coupled to a resistor divider network 348, 349 to obtain the average of the output signal. The average signal output from the resistor divider network is used as the calibrated zero frequency to obviate frequency offset problems due to the frequency translation process from IF to baseband.
A differential amplifier 345 is used to digitize the frequency information by comparing the differential input signal with the calibrated zero frequency. The output of the amplifier is a logic 1 if the baseband frequency is greater than the calibrated zero frequency and a logic 0 if the baseband frequency is less than the calibrated zero frequency. The output is amplified through several inverters 350 which in turn generate digital rail to rail output.
2.0. Transmitter
2.1 Differential Power Amplifier
In an exemplary embodiment of the invention, the PA is a differential PA as shown in FIG. 25. The symmetry of the differential PA in conjunction with other features supports implementation in a variety of technologies including CMOS. The described embodiment of the differential PA can be a fully integrated class A PA. A balun 610 is used to connect the PA to an antenna or a duplexer. The balun converts the differential signal to a single-ended output.
The described embodiments of the differential PA is a two stage device. The two stages minimize backward leakage of the output signal to the input stage. As those skilled in the art will appreciate, any number of stages can be implemented depending on the particular application and operating environment. Equal distribution of gain between the two stages helps prevents oscillation by avoiding excess accumulation of gain in one stage. A cascode architecture may be incorporated into the PA to provide good stability and insulation.
The input stage or pre-amplifier of the power amplifier includes an input differential pair comprising amplifying transistors 612, 614. Transistor 616 is a current source that biases the input differential pair. The presence of a current source provides many positive aspects including common mode rejection. The current is controlled by the voltage applied to the gate of transistor 616. The gate voltage should be chosen to prevent the transistor 616 from operating in the triode region. Triode operation of transistor 616 has a number of drawbacks. Primarily, since transistor 616 is supposed to act as a current source, its operation in the triode region can cause distortion in the current flowing into the transistor 612 and the transistor 614, and consequently gives rise to nonlinearity in the signal. Secondly, the triode behavior of transistor 616 will depend on temperature and process variations. Therefore, the circuit operation will vary over different process and temperature corners.
Cascode transistors 618, 620 provide stability by isolating the output from the input. As a result, no change in the input impedance occurs over frequency. The gates of the cascode transistors 618, 620 devices are biased through a bond wire. A resistor 622 in series with the gates of the cascode transistors prevents the inductance associated with the bonding from resonating with the input capacitive of the transistors, thereby improving stability. The resistor 622 in combination with the gates of transistors 618, 620 also improves common mode rejection and makes the transistor input act like a virtual ground at RF. Resistor 623 isolates the power supply from the PA and provides common mode rejection by increasing the symmetry of the differential PA. Inductors 624, 626 tune out the capacitance at the drains of the transistors 618, 620. At the tuning frequency, the impedance seen at the drains of the transistors 618, 620 is high, which provides the high gain at the tuning frequency.
The differential output of the input stage is provided at the drains of the cascode transistors 618, 620 to AC coupling capacitors 628, 630. Capacitor 628 couples the drain of transistor 618 with the gate of transistor 632. Capacitor 630 couples the drain of transistor 620 with the gate of transistor 634. The transistors 632, 634 provide amplification for the second stage of the PA. Resistors 636, 638 are biasing resistors for biasing the transistors 632, 634.
In the output stage of the PA, the current level is higher and the size of the current source should be increased to maintain the same bias situation. However, large tail devices can lower the common mode rejection. Accordingly, instead of a current source, an inductor 640 can be used to improve the headroom. The inductor 640 is a good substitute for a current source. The inductor 640 is almost a short circuit at low frequencies and provides up to 1 Kohm of impedance at RF. By way of example, a 15 nH inductor with proper shielding (to increase the Q) and a self-resonance frequency close to 4.5 GHz can be used for optimum high frequency impedance and sufficient self-resonance.
Inductors 622, 624 tune out the capacitance at the drains of transistors 632, 634. Capacitors 642, 644 are AC coupling capacitors. Inductors 646 and capacitor 648 match the output impedance of the PA to the antenna, by way of example, 50. Similarly, inductors 650 and capacitor 652 match the output impedance of the PA to the antenna. Balun 610 is a differential to single-ended voltage converter. Resistance 654 is representative of the load resistance.
Capacitances associated with bias resistors may also be addressed. Consider a typical distributed model for a polysilicon (poly for short) resistor. Around 4fF to substrate can be associated with every kilo-ohm of resistance in a poly resistor. This means that, for example in a 20 Kohm resistor, around 80 fF of distributed capacitance to the substrate exists. This can contribute to power loss because part of the power will be drained into the substrate. One way of biasing the input stage and the output stage is through a resistive voltage divider as shown in FIG. 26(a). The biasing of the input stage is shown for the transistor 616 in FIG. 25, however, those skilled in the art will readily appreciate that the same biasing circuit can be used for the transistor 614 (FIG. 25). One drawback from this approach, however, is that the gate of the transistor will see the capacitance from the two resistors 658, 660 of the voltage divider. Capacitor 662 is a coupling capacitor, which couples the previous stage to the voltage divider. Switch 664 is for powering down the stage of the power amplifier that is connected to the voltage divider. The switch 664 is on in normal operation and is off in power down mode.
FIG. 26(b) is similar to FIG. 26(a), except that FIG. 26(b) includes resistor 666. DC-wise the FIG. 26(a) and FIG. 26(b) circuits are the same. However, in AC, not only is the resistance seen from the gates of transistors 634, 632 towards the resistive bias network bigger, but the capacitance is smaller because the capacitance is caused by resistor 666 and not resistors 660, 658. Since there is less capacitance, there is less loss of the signal. From FIG. 25, transistors 618, 620 in the input stage and transistors 632, 634 in the output stage can be biased by the resistive voltage divider shown in FIG. 26(b).
FIG. 27 shows an exemplary bias circuit for the current source transistor 616 of FIG. 25. To fix the bias current of the circuit over temperature and process variation, a diode-connected switch transistor 672 may be used with a well-regulated current 670. The voltage generated across the diode-connected transistor 672 is applied to the gate of the current source transistor 616. Because of the mirroring effect of this connection and since all transistors move in the same direction over temperature and process corners, the mirrored current will be almost constant. The reference current is obtained by calibration of a resistor by the controller. The calibrated resistor can be isolated from the rest of the PA to prevent high frequency coupling through the resistor to other transceiver circuits. As those skilled in the art will appreciate, the exemplary bias circuit is not limited to the current source transistor of the PA and may be applied to other transistors requiring accurate biasing currents.
FIG. 28 shows an exemplary power control circuit. The power control circuit can provide current scaling. The power control circuit changes power digitally by controlling the bias of the current source transistor 616 of the first differential pair 612, 614 in the PA (FIG. 25). The power control circuit can be used in any application requiring different power levels. The power control is done by applying different voltage levels to the gate of the current source in the first stage (input stage or preamplifier) of the PA. A combination of current adjustment in both stages (input stage and output stage) of the PA can also be done. Different voltage levels are generated corresponding to different power levels. In one embodiment of the invention, the power control circuit has four stages as shown in FIG. 28. Alternatively, the power control circuit can have any number of stages corresponding to the number of power levels needed in an application.
The power control circuit includes transistor pairs in parallel. Transistors 674, 676, 678, 680 are switch transistors and are coupled to diode-connected transistors 682, 684, 686, 688, respectively. The switch transistors 674, 676, 678, 680 are coupled to a current source 670. Each diode-connected transistor 682, 684, 686, 688 can be switched into the parallel combination of by turning its respective switching transistor on. Conversely, any diode-connected transistor can be removed from the parallel combination by turning its respective switch transistor off. The current from the current source 670 is injected into a parallel combination of switch transistors 674, 676, 678, 680. The power level can be incremented or decremented by switching one or more switch transistors into the parallel combination. By way of example, a decrease in the power level can be realized by switching a switch transistor into the parallel combination. This is equivalent to less voltage drop across the parallel combination, which in turn corresponds to a lower power level. A variety of stages are comprehended in alternative embodiments of the invention depending on the number of power levels needed for a given application. A thermometer code from the controller can be applied to the power control circuit according to which the power level is adjusted.
As described above, the output of the PA can be independently matched to a 50 ohm load. The matching circuit (inductors 646, 650 and capacitors 648, 652) is connected to the balun. Any non-ideality of the balun, bond wire impedance, pin/PCB capacitance, and other parasitics can be absorbed by the matching circuits. High-Q inductors can be used where possible. The loss in efficiency may also be tolerable with low power applications.
2.2 Single-Ended Differential Power Amplifier
In another embodiment of the present invention, the balun can be eliminated by a single-ended to differential PA. FIG. 29 shows the output stage of a single-ended to differential PA. The output stage includes resistors 690, 692, inductors 694, 696, 698, and transistors 700, 702. Coupling capacitor 704 couples the output stage to an LC circuit, the LC circuit including inductor 706 and capacitor 708. Coupling capacitor 710 couples the second stage to a CL circuit, the CL circuit comprising capacitor 712 and inductor 714. The transistors 700, 702 provide amplification of the differential signal applied to the output stage of the PA. The output of the amplifying transistors 700, 702 produces two signals 180 degrees out of phase. The LC circuit is used to match the first output to a 100 ohm load 718 and to shift the phase of the signal by 90 degrees. The CL circuit is deployed to match the second output to a 100 ohm load 720, and to shift the phase of the signal in the opposite direction by 90 degrees. Since the two outputs were out of phase by 180 degrees at the beginning and each underwent an additional 90 degrees of shift (in opposite directions) the two signals appearing across the two 100 ohm loads will be in phase. In an ideal situation, they will also be of similar amplitudes. This means that the two nodes can be connected together to realize a single-ended signal matched for a 50 ohm load 716.
Unlike the differential PA, the differential to single-ended configuration does not enjoy the symmetry of a fully differential path. Accordingly, with respect to embodiments of the present invention integrated into a single IC, the effect of bond wires should be considered. Because of stability and matching issues, a separate ground (bond wire) for the matching circuit should be used. The bond wires should be small and the matching should be tweaked to cancel their effect.
The bias current to the amplifying transistors 700, 702 for embodiments of the present invention integrated into a single IC can be set in a number of ways, including by way of example, the bias circuit shown in FIG. 27. The voltage generated across the diode-connected transistor 672 is applied to the gate of the amplifying transistor 700. A similar bias circuit can be used for biasing the amplifying transistor 702.
Alternatively, the bias circuit of the amplifying transistors 700, 702 for single IC embodiments can be set with a power control circuit as shown in FIG. 28. The current source is connected directly the amplifying transistor 700. By incrementally switching the diode-connected transistors 682, 684, 686, 688 into the parallel combination, the voltage applied to the gate of the amplifying transistor 700 is incrementally pulled down toward ground. Conversely, by incrementally switching the diode-connected transistors 682, 684, 686, 688 out of the parallel combination, the voltage applied to the gate of the amplifying transistor 700 is incrementally pulled up toward the source voltage (not shown). A similar power control circuit can be used with the amplifying transistor 702.
2.3. Digitally Programmable CMOS PA with On-Chip Matching
In another embodiment of the present invention, a PA is integrated into a single IC with digitally programmable circuitry and on-chip matching to an external antenna, antenna switch, or similar device. FIG. 30 shows an exemplary PA with digital power control. This circuit comprises two stages. The input stage provides initial amplification and acts as a buffer to isolate the output stage from the VCO. The output stage is comprised of a switchable differential pair to steer the current towards the load. The output stage also provides the necessary drive for the antenna. The power level of the output stage can be set by individually turning on and off current sources connected to each differential pair.
Transistors 722, 724 provide initial amplification. Transistor 726 is the current source that biases the transistors 722, 724. Inductors 728, 730 tune out the capacitance at the drains of the transistors 722, 724. At the tuning frequency, the impedance seen at the drains is high, which provides high gain at the tuning frequency.
Capacitors 732, 736 are AC coupling capacitors. Capacitor 732 couples the drain of transistor 724 with the gate of transistor 734. Capacitor 736 couples the drain of transistor 722 with the gate of transistor 738. Resistors 740, 742 are biasing resistors for biasing the gates of the transistors 734, 738. Transistors 734, 738 are amplifying transistors in the output stage of the PA. Transistor pairs 744, 746, transistor pairs 748, 750, and transistor pairs 752, 754 each provide additional gain for the signal. Each pair can be switched in or out depending on whether a high or low gain is needed. For maximum gain each transistor pair in the output stage of the PA will be switched on. The gain can be incrementally decreased by switching out individual transistor pairs. The PA may have more or less transistor pairs depending on the maximum gain and resolution of incremental changes in the gain that is desired.
Transistor 756 has two purposes. First, it is a current source that biases transistors 734, 738. Second, it provides a means for switching transistors 734, 738 in and out of the circuit to alter the gain of the output stage amplifier. Each transistors 758, 760, 762 serves the same purpose for its respective transistor pair. A digital control, word from the controller can be applied to the gates of the transistors 756, 758, 760, 762 to digitally set the power level. This approach provides the flexibility to apply ramp up and ramp down periods to the PA, in addition to the possibility of digitally controlling the power level. The drains of the transistors 756, 758, 760, 762 are connected to a circuit that serves a twofold purpose: 1) it converts the differential output to single ended output, and 2) it matches the stage to external 50 ohm antenna to provide maximum transferable gain.
Inductors 764, 766 tune out the capacitance at the drains of transistors 752, 754. Capacitor 768 couples the PA to the load 770. Inductor 772 is a matching and phase-shift element, which advances the phase of the signal by 90. Capacitor 794 is a matching and phase-shift element, which retards the phase of the signal by 90. Capacitor 796 is the pad capacitance. The bonding wire 798 bonds the PA to the load resistance 770 (e.g., the antenna).
3.0 Local Oscillator
In embodiments of the present invention utilizing a low-IF or direct conversion architecture, techniques are implemented to deal with the potential disturbance of the local oscillator by the PA. Since the LO generator has a frequency which coincides with the RF signal at the transmitter output, the large modulated signal at the PA output may pull the VCO frequency. The potential for this disturbance can be reduced by setting the VCO frequency far from the PA output frequency. To this end, an exemplary embodiment of the LO generator produces RF clocks whose frequency is close to the PA output frequency, as required in a low-IF or direct-conversion architectures, with a VCO operating at a frequency far from that of the RF clocks. One way of doing so is to use two VCO 864, 866, with frequencies of f1 and f2 respectively, and mix 868 their output to generate a clock at a higher frequency of f1+f2 as shown in FIG. 31(a). With this approach, the VCO frequency will be away from the PA output frequency with an offset equal to f1 (or f2). A bandpass filter 876 after the mixer can be used to reject the undesired signal at f1-f2. The maximum offset can be achieved when f1 is close to f2.
An alternative embodiment for generating RF clocks far away in frequency from the VCO is to generate f2 by dividing the VCO output by N as shown in FIG. 31(b). The output of the VCO 864 (at f1) is coupled to a divider 872. The output of te divider 872 (at f2) is mixed with the VCO at mixer 868 to produce an RF clock frequency equal to: fLO=f1(1+1/N), where f1 is the VCO frequency. A bandpass filter 874 at the mixer output can be used to reject the lower sideband located at f1f1/N.
In another embodiment of the present invention, a single sideband mixing scheme is used for the LO generator. FIG. 32 shows a single sideband mixing scheme. This approach generates I and Q signals at the VCO 864 output. The output of the VCO 864 is coupled to a quadrature frequency divider 876 should be able to deliver quadrature outputs. Quadrature outputs will be realized if the divide ratio (N) is equal to two to the power of an integer (N=2n). The I signal output of the divider 876 is mixed with the I signal output of the VCO 864 by a mixer 878. Similarly, the Q signal output of the divider 876 is mixed with the Q signal output of the VCO 864 by a mixer 880.
Although a single sideband structure uses two mixers, this should not double the mixer power consumption, since the gain of the single sideband mixer will be twice as much. By utilizing a Gilbert cell (i.e., a current commutating mixer) for each mixer 878, 880, the addition or subtraction required in a single sideband mixer can be done by connecting the two mixers 878, 880 outputs and sharing a common load (e.g., an LC circuit). The current from the mixers is added or subtracted, depending on the polarity of the inputs, and then converted to a voltage by an LC load (not shown) resonating at the desired frequency.
FIG. 33 shows an LO generator architecture in accordance with an embodiment of the present invention. This architecture is similar to the architecture shown in FIG. 32, except that the LO generator architecture in FIG. 33 generates I-Q data. In a low-IF system, a quadrature LO is desirable for image rejection. In the described embodiment, the I and Q outputs of the VCO can be applied to a pair of single sideband mixer to generate quadrature LO signals. A quadrature VCO 48 produces I and Q signals at its output. Buffers are included to provide isolation between the VCO output and the LO generator output. The buffer 884 buffers the I output of the VCO 48. The buffer 886 buffers the Q output of the VCO 48. The buffer 888 combines the I and Q outputs of the buffers 884, 886. The signal from the buffer 888 is coupled to a frequency divider 890 where it is divided by N and separated into I and Q signals. The I-Q outputs of the divider 890 are buffered by buffer 892 and buffer 894. The I output of the divider 890 is coupled to a buffer 892 and the Q signal output of the divider 890 is coupled to a buffer 894. A first mixer 896 mixes the I signal output of the buffer 892 with the I signal output of the buffer 884. A second mixer 897 mixes the Q signal output from the buffer 894 with Q signal output from the buffer 886. A third mixer 898 mixes the Q signal output of the buffer 894 with the I signal output of the buffer 884. A fourth mixer 899 mixes the I signal output from the buffer 892 with the Q signal output from the buffer 886. The outputs of the first and second mixers 896, 897 are combined and coupled to buffer 900. The outputs of the third and fourth mixers 898, 899 are combined and coupled to buffer 902. LC circuits (not shown) can be positioned at the output of each buffer 900, 902 to provide a second-order filter which rejects the spurs and harmonics produced due to the mixing action in the LO generator.
Embodiments of the present invention which are integrated into a single IC may employ buffers configured as differential pairs with a current source to set the bias. With this configuration, if the amplitude of the buffer input is large enough, the signal amplitude at the output will be rather independent of the process parameters. This reduces the sensitivity of the design to temperature or process variation.
The lower sideband signal is ideally rejected with the described embodiment of the LO generator because of the quadrature mixing. However, in practice, because of the phase and amplitude inaccuracy at the VCO and divider outputs, a finite rejection is obtained. In single IC fully integrated embodiments of the present invention, the rejection is mainly limited to the matching between the devices on chip, and is typically about 30-40 dB. Since the lower sideband signal is 2f1/N away in frequency from the desired signal, by proper choice of N, it can be further attenuated with on-chip filtering.
Because of the hard switching action of the buffers, the mixers will effectively be switched by a square-wave signal. Thus, the divider output will be upconverted by the main harmonic of VCO (f1), as well as its odd harmonics (nf1), with a conversion gain of 1/n. In addition, at the input of the mixer, because of the nonlinearity of the mixers, and the buffers preceding the mixers, all the odd harmonics of the input signals to the mixers will exist. Even harmonics, both at the LO and the input of the mixers can be neglected if a fully balanced configuration is used. Therefore, all the harmonics of VCO (nf1) will mix with all the harmonics of input (mf2), where f2 is equal to f1/N. Because of the quadrature mixing, at each upconversion only one sideband appears at the mixer output. Upper or lower sideband rejection depends on the phase of the input and LO at each harmonic. For instance, for the main harmonics mixed with each other, the lower sideband is rejected, whereas when the main harmonic of the VCO mixes with the third harmonic of the divider output signal, the upper sideband is rejected. Table 1 gives a summary of the cross-modulation products up to the 5th harmonic of the VCO and input. In each product, only one sideband is considered, since the other one is attenuated due to quadrature mixing, and is negligible.
All the spurs are at least 2f1/N away from the main signal located at f1(1+1/N). The VCO frequency will be f1/N away from the PA output. Thus, by choosing a smaller N better filtering can be obtained. In addition, the VCO frequency will be further away from the PA output frequency. The value of N, and the quality factor (Q) of the resonators (not shown) positioned at the output of each component determine how much each spur will be attenuated. The resonator quality factor is usually set by the inductor Q, and that depends merely on the IC technology. Higher Q provides better filtering and lower power consumption.
TABLE 1 Cross-Modulation Products at the LO Generator Output 1st: f1/N 3rd: 3f1/N 5th: 5f1/N 1st: f1 f1 (1 + 1/N) f1 (1 3/N) f1 (1 + 5/N) 3rd: 3f1 f1 (3 1/N) f1 (3 + 3/N) f1 (3 5/N) 5th: 5f1 f1 (5 + 1/N) f1 (5 3/N) f1 (5 + 5/N)
The maximum filtering is obtained by choosing N=1. Moreover, in this case, the frequency divider is eliminated. This lowers the power consumption and reduces the system complexity of the LO generator. However, the choice of N=1 may not be practical for certain embodiments of the present invention employing a low-IF receiver architecture with quadrature LO signals. The problem arises from the fact that the third harmonic of the VCO (at 3f1) mixed with the divider output (at f1) also produces a signal at 2f1 which has the same frequency as the main component of the RF clock output from the LO generator. With the configuration shown in FIG. 33, the following relations hold for the main harmonics:
Cos(1t)Cos(1t)Sin(1t)Sin(1t)Cos(21t)(45)
and
Cos(1t)Sin(1t)+Sin(1t)Cos(1t)Sin(21t)(46)
which show that at the output of the mixers, quadrature signals at twice the VCO frequency exist. For the VCO third harmonic mixed with the divider output, however, the following relations hold:
Cos(1t)Cos(31t)Sin(1t)Sin(31t)Cos(21t)(47)
and
Cos(1t)Sin(31t)Sin(1t)Cos(31t)Sin(21t)(48)
The factor appears in the above equations because the third harmonic of a square-wave has an amplitude which is one third of the main harmonic. Comparing equation (46) with equation (48), the two products are added in equation (46), while they are subtracted in equation (47). The reason is that for the main harmonic of the VCO, quadrature outputs have phases of 0 and 90, whereas for the third harmonic, the phases are 0 and 270. The same holds true for equation (45) and equation (47). The two cosines in equation (45) and equation (47), when added, give a cosine at 21 with an amplitude of , yet the two sinewaves in equation (46) and equation (48) when added, give a component at 21 with an amplitude of 4/3. Therefore, a significant amplitude imbalance exists at the I and Q outputs of the mixers. When these signals pass through the nonlinear buffer at the mixers output, the amplitude imbalance will be reduced. However, because of the AM to PM conversion, some phase inaccuracy will be introduced. The accuracy can be improved with a quadrature generator, such as a polyphase filter, after the mixers. A polyphase filter, however, is lossy, especially at high frequency, and it can load its previous stage considerably. This increases the LO generator power consumption significantly, and renders the choice of N=1 unattractive for embodiments of the present invention employing a low-IF receiver architecture with quadrature LO signals.
For N=2, the LO generator output will have a frequency of 1.5f1 and the closest spurs will be located f1 away from the output. These spurs can be rejected by positioning LC filters (not shown) at the output of each circuit in the LO generator. A second-order LC filter tuned to f0, with a quality factor Q, rejects a signal at a frequency of f as given in the following equation:
$H ( f ) = f Qf 0 [ 1 - ( f f 0 ) ] 2 + ( f Qf 0 ) 2 ( 49 )$
The following discussion changes based on the Q value. Considering a Q of about 5 for the inductor, with f0=1.5f1, the spur located at 2.5f1 is rejected by about 15 dB by each LC circuit. This spur is produced at the LO generator output due to the mixing of the VCO third harmonic (at 3f1) with the divider output (at 0.5f1). This signal is attenuated by 10 dB since the third harmonic of a square-wave is one third of the main harmonic, 15 dB at the LC resonator at the mixers output tuned to 1.5f1, and another 15 dB at the output of the buffers (900, 902 in FIG. 33). This gives a total rejection of 40 dB. When applied to the mixers in the transmitter, this LO generator output will upconvert the baseband data to 2.5f1. With LC filters (not shown) positioned at the upconversion mixers and PA output in the transmitter, another 15+15=30 dB rejection is obtained (FIG. 33).
The spur located at 0.5f1 is produced because of the third harmonic of the divider output (at 1.5f1) is mixed with the VCO output (at f1). Because of the hard switching action at the divider output, the third harmonic is about 10 dB lower than the main harmonic at 0.5f1. The buffer at the divider output tuned to 0.5f1 (892, 8943 in FIG. 33), rejects this signal by about 22 dB (equation (24)). This spur can be further attenuated by LC circuits at the mixer and its buffer output by (2)(22)=44 dB. The total rejection is 76 dB.
FIG. 33(a) shows a signal passing through a limiting buffer 910 (such as the buffers implemented in the LO generator). When a large signal at a frequency of f accompanied with a small interferer at a frequency of f902 away pass through a limiting buffer, at the limiter output the interferer produces two tones f914, 916 away from the main signal, each with 6 dB lower amplitude. Therefore, the spur at 2.5f1 will actually be 10+15+15+6=46 dB attenuated when it passes through the buffer, instead of the 40 dB calculated above. It will also produce an image at 0.5f1 which is 10+15+22+6=53 dB lower than the main signal. This will dominate the spur at 0.5f1 because of the third harmonic of the divider mixed with the VCO signal, which is more than 75 dB lower than the main signal.
Since the buffer is nonlinear, another major spur at the LO generator output is the third harmonic of the main signal located at 31.5f1. This signal will be 10+22=32 dB lower than the main harmonic. The 22 dB rejection results from an LC circuit (not shown) tuned to 1155f1 (equation (49)) in the buffer. This undesired signal will not degrade the LO generator performance, since even if a perfect sinewave is applied to upconversion (or downconversion) mixers, due to hard switching action of the buffer, the mixer is actually switched by a square-wave whose third harmonic is only 10 dB lower. Thus, if a nonlinear PA is used in the transmitter, even with a perfect input to the PA, the third harmonic at the transmitter output will be 10+22+10=42 dB lower. The first 10 dB is because the third harmonic of a square-wave is one third of the main one, the 22 dB is due to the LC filter at the PA output, and the last 10 dB is because the data is spread in the frequency domain by three times. Any DC offset at the mixer input in the transmitter is upconverted by the LO, and produces a spur at f1. This spur can be attenuated by 13 dB for each LC circuit used (equation (49)). In addition, the signal at the mixer input in the transmitter is considerably larger (about 10-20 times) than the DC offset. Thus the spur at f1 will be about 13+13+26=52 dB lower than the main signal. All other spurs given in Table 1 are more than 55 dB lower at the LO generator output. The dominant spur is the one at 2.5f1 which is about 46 dB lower than the main signal.
Choosing N2 may not provide much benefit for single IC embodiments of the present invention with the possible exception that the on-chip filtering requirements may be relaxed. When using an odd number for N, further disadvantages may be realized because the divider output will not be in quadrature thereby preventing single sideband mixing. In addition, for N2 the divider becomes more complex and the power consumption increases. Nevertheless, in certain applications, N=4 may be selected over N=2 so that the divider quadrature accuracy will not depend on the duty cycle of the input signal.
When choosing N equal to 2, such as N=2, quadrature signals are readily available at the divider output despite quadrature phase inaccuracies at the output of the VCO. Assume that the VCO outputs have phase of 0 and 90+q, where q is ideally 0, and that the divider produces perfect quadrature outputs. At the LO generator outputs the following signals exist:
VoutI=Cos(2t)Cos(1t+)Sin(2t)Sin(1t)(50)
and
VoutQ=Cos(2t)Sin(1t)+Sin(2t)Cos(1t+0)(51)
where 1 is the VCO radian frequency, and 2 is the divider radian frequency, equal to 0.51. By simplifying equation (25) and equation (26), the signals at the output of mixers will be:
$V out_I = - Sin ( 2 ) Sin ( ( 1 - 2 ) t + 2 ) + Cos ( 2 ) Cos ( ( 1 + 2 ) t + 2 ) and ( 52 ) V out_Q = - Sin ( 2 ) Cos ( ( 1 - 2 ) t + 2 ) + Cos ( 2 ) Sin ( ( 1 + 2 ) t + 2 ) ( 53 )$
The above equations show that regardless of the value of 0, the outputs are always in quadrature. However, other effects should be evaluated. First, a spur at 12=0.5 1 is produced at the output. This spur can be attenuated by 222=44 dB by the LC filters at the mixer and its buffer outputs. Thus, for 60 dB rejection, the single sideband mixers need to provide an additional 16 dB of rejection (about 0.158). Based on equation (53), tan(/2)=0.158, or =18, phase accuracy of better than 18 can generally be achieved. Second, phase error at the VCO output lowers the mixer gain (term Cos(/2) in equation (52) or (53)). For a phase error of 18, the gain reduction is, however, only 0.1 dB, which is negligible. For =90 (a single-phase VCO), both sidebands are equally upconverted at the mixer output. However, the LC filters reject the lower sideband by about 44 dB. The mixer gain will also be 3 dB lower. This will slightly increase the power consumption of the LO generator. If =180 (the VCO I and Q outputs are switched), the lower sideband is selected, and the desired sideband is completely rejected.
Similarly, the LO generator will not be sensitive to the phase imbalance of the divider outputs if the VCO is ideal. However, if there is some phase inaccuracy at both the divider and VCO outputs, the LO generator outputs will no longer be in quadrature. In fact, if the VCO output has a phase error of q, and the divider output has a phase error of q2, the LO generator outputs will be:
$V out_I = - Sin ( 1 - 2 2 ) Sin ( ( 1 - 2 ) t + 1 - 2 2 ) + Cos ( 1 + 2 2 ) Cos ( ( 1 + 2 ) t + 1 + 2 2 ) and ( 54 ) V out_Q = - Sin ( 1 + 2 2 ) Cos ( ( 1 - 2 ) t + 1 - 2 2 ) + Cos ( 1 - 2 2 ) Sin ( ( 1 + 2 ) t + 1 + 2 2 ) ( 55 )$
This shows that the outputs still have phases of 0 and 90, but their amplitudes are not equal. The amplitude imbalance is equal to:
$A A = 2 Cos ( 1 + 2 2 ) - Cos ( 1 - 2 2 ) Cos ( 1 + 2 2 ) + Cos ( 1 - 2 2 ) = 2 tan ( 1 2 ) tan ( 2 2 ) ( 56 )$
If 1, and 2 are small and have an equal standard deviation, that is, the phase errors in the VCO and divider are the same in nature, then the output amplitude standard deviation will be:
$A ( ) 2 2 ( 57 )$
where A is the standard deviation of the output amplitude, and 7 is the phase standard deviation in radians. Equation (57) denotes that the phase inaccuracy in the VCO and divider has a second order effect on the LO generator. For instance, if 1 and 2 are on the same order and about 10, the amplitude imbalance of the output signals will be only about 1.5%. In this case, the lower sideband will be about 15 dB rejected by the mixers, which will lead to a total attenuation of about 22+22+15=59 dB. This shows that the LO generator is robust to phase errors at the VCO or divider outputs, since typically phase errors of less than 5 can be obtained on chip.
Phase errors in the divider can originate from the mismatch at its output. Moreover, for N=2, if the input of the divider does not have a 50% duty cycle, the outputs will not be in quadrature. Again, the deviation from a 50% duty cycle in the divider input signal may be caused due to mismatch. Typically, with a careful layout, this mismatch is minimized to a few percent. The latter problem can also be alleviated by improving the common-mode rejection of the buffer preceding the divider (888 in FIG. 33). One possible way of doing so is to add a small resistor at the common tail of the inductors in the buffer. For a differential output, this resistor does not load the resonator at the buffer output, since the inductors common tail is at AC ground. A common-mode signal at the output is suppressed however, since this resistor degrades the LC circuit quality factor. The value of the resistor should be chosen appropriately so as not to produce a headroom problem in the buffer.
Embodiments of the present invention that are fully integrated onto a single IC can be implemented with a wide tuning range VCO with constant gain. In a typical IC process, the capacitance can vary by 20%. This translates to a 10% variation in the center frequency of the oscillator. A wide tuning range can be used to compensate for variation. Variations in temperature and supply voltage can also shift the center frequency. To generate a wide tuning range, two identical oscillators can be coupled together as shown in FIG. 34. This approach forces the oscillation to be dependent on the amount of coupling between the two oscillators.
In the described exemplary embodiment of the VCO shown in FIG. 34, the tuning curve is divided into segments with each segment digitally selected. This approach ensures a sufficient amount of coupling between the two oscillators for injection lock. In addition, good phase noise performance is also obtained. The narrow frequency segment prevents the gain of the VCO from saturating. The segmentation lowers the VCO gain by the number of segments, and finally by scaling the individual segments, a piecewise linear version of the tuning curve is made resulting in a constant gain VCO.
FIG. 34 shows a block diagram of the wide tuning range VCO comprising two coupled oscillators where the amount of coupling transconductance is variable. The wide tuning range VCO comprises two resonators 800, 802 and four transconductance cells, gm cells 804, 806, 808, 810. The transconductance cells are driver that converts voltage to current. The transconductance cells used to couple the oscillators together have a variable gain. The first VCO 800 provides the I signal and the second VCO provides the Q signal. The output of the first VCO 800 and the output of the second VCO 802 are coupled to transconductance cells 806, 807, respectively, combined, and fed back to the first VCO 800. The transconductance cell 807 used for feeding back the output of the second VCO to the first VCO is a programmable variable gain cell. Similarly, the output of the second VCO 802 and the output of the first VCO 800 are coupled to transconductance cells 805, 804, respectively, combined, and fed back to the second VCO 802. The transconductance cell 804 used for feeding back the output of the first VCO to the second VCO is a programmable variable gain cell. The gain of the programmable variable gain transconductance cells 804, 807 can be digitally controlled from the controller
FIG. 35 shows a schematic block diagram of the wide-tuning range VCO described in connection with FIG. 34. The wide-tuning range VCO includes individual current sources 810, 812, 814, 816, cross-coupled transistors 818, 820 with resonating inductors 826, 828, and cross-coupled transistors 822, 824 with resonating inductors 830, 832. Two differential pairs couple the two sets of oscillators. Differential pair 834, 836 are coupled to the drains of transistors 824, 822, respectively. Differential pair 838, 840 are coupled to the drains of transistors 818, 820. Tank #1 comprises inductors 826 and 828. Tank #2 comprises inductors 830 and 832.
Transistors 818 and 820 form a cross-coupled pair that injects a current into tank #1 in which the current through the transistor 818 is exactly 180 degrees out of phase with the current in the transistor 820. Likewise, transistors 822 and 824 form a cross-coupled pair that injects a current into tank #2 in which the current through the transistor 822 is exactly 180 degrees out of phase with the current in the transistor 824. The first set of coupling devices 834, 836 injects a current into tank #1 that is 90 degrees out of phase with current injected respectively by the transistors 818, 820. The second set of coupling devices 838, 840 injects a current into tank #2 that is 90 degrees out of phase with the current injected respectively by the transistors 822, 824. The tank impedances causes a frequency dependent phase shift. By varying the amplitude of the coupled signals, the frequency of oscillation changes until the phase shift through the tanks results in a steady-state solution. Varying the bias of the current source controls the gm of the coupling devices. Current sources 812, 816 provide control of VCO tuning. Current sources 810, 814 provide segmentation of the VCO tuning range.
FIG. 36(a) shows the typical tuning curve of the wide tuning range VCO before and after segmentation. The horizontal axis is voltage. The vertical axis is frequency. FIG. 36(b) shows how segmentation is used to divide the tuning range and linearize the tuning curve. The linear tuning curves correspond to different VCO segments. The slope of the linear tuning curves is a result control of VCO tuning. The horizontal axis is voltage. The vertical axis is frequency.
FIG. 37(a) shows how the VCO of FIG. 34 can be connected to the divider before being upconverted to the RF clock frequency in the LO generator. The I output signal of the VCO is coupled to buffer 884 and the Q output signal of the VCO is coupled to buffer 886. Buffer 888 combines the I-Q data from the buffer 884 and the buffer 886 to obtain a larger signal. The large signal is coupled to a divider 50 where it is divided in frequency by N to get quadrature signals.
In another embodiment of the present invention, a polyphase filter 892 follows a single-phase VCO as shown in FIG. 37(b). This approach uses a single phase VCO 48 with a polyphase filter 892 to get quadrature signals. The output of the VCO 48 is coupled to a buffer 888. The buffer provides sufficient drive for the polyphase filter 892.
A multiple stage polyphase filter can be used to obtain better phase accuracy at a certain frequency range. Embodiments of the present invention that are fully integrated into a single IC, the required frequency range is mainly set by the process variation on the chip and the system bandwidth.
Any amplitude imbalance in the signals at the VCO and divider output will only cause a second order mismatch in the amplitude of the LO generator signals, and the output phase will remain 0 and 90. If the standard deviation of the amplitude imbalance at the VCO and divider are the same and equal to a, then the standard deviation of the LO generator output amplitude imbalance (A) will be:
$A = ( ) 2 2 ( 58 )$
The reason phase inaccuracy is more emphasized here is that because of the limiting stages in the LO generator and the hard switching at the mixers LO input, most of the errors will be in phase, rather than amplitude.
Although the phase or amplitude inaccuracy at the mixers input or LO has only a second order effect on the LO generator, any mismatch at the mixers outputs or the following stages will directly cause phase and amplitude imbalance in the LO generator outputs. This mismatch will typically be a few percent, and will not adversely impact the transceiver performance, since in a low-IF or direct conversion architectures the required image rejection is usually relaxed.
4.0 Controller
The controller performs adaptive programming and calibration of the receiver, transmitter and LO generator (see FIG. 2). An exemplary embodiment of the controller in accordance with one aspect of the present invention is shown in FIG. 38. A control bus 17 provides two way communication between the controller and the external processing device (not shown). This communication link can be used to externally program the transceiver parameters for different modulation schemes, data rates and IF operating frequencies. In the described exemplary embodiment, the external processing device transmits data across the control bus 17 to a bank of addressable registers 900-908 in the controller. Each addressable register 900-908 is configured to latch data for programming one of the components in the transmitter, receiver LO generator. By way of example, the power amplifier register 900 is used to program the gain of the power amplifier 62 in the transmitter (see FIG. 2). The LO register 902 is used to program the IF frequency in the LO generator. The demodulator register 903 is used to program the demodulator for FSK demodulation, or alternatively in the described exemplary embodiment, program the A/D converter to handle different modulation schemes. The AGC register 905 programs the gain of the programmable multiple stage amplifier when in the AGC mode. The filter registers 901, 904, 906 program the frequency and bandwidth of their respective filters.
The transmission of data between the external processing device and the controller can take on various forms including, by way of example, a serial data stream parsed into a number of data packets. Each data packet includes programming data for one of the transceiver components accompanied by a register address. Each register 900-908 in the controller is assigned a different address and is configured to latch the programming data in the each data packet where the register address in that data packet matches its assigned address.
The controller also may include various calibration circuits. In the described exemplary embodiment, the controller is equipped with an RC calibration circuit 907 and a bandgap calibration circuit 908. The RC calibration circuit 907 can compensate an integrated circuit transceiver for process, temperature, and power supply variations. The bandgap calibration circuit can be used by the receiver, transmitter, and LO generator to set amplifier gains and voltage swings.
The programming data from the addressable registers 900-908 and the calibration data from the RC calibration circuit 907 and the bandgap calibration circuit 908 are coupled to an output register 909. The output register 909 formats the programmability and calibration data into a data packets. Each data packet includes a header or preamble which addresses the appropriate transceiver component. The data packets are then transmitted serially over a controller bus 910 to their final destination. By way of example, the output register 909 packages the programming data from the power amplifier register 900 with the header or preamble for the power amplifier and outputs the packaged data as the first data packet to the controller bus 910.
The second data packet generated by the output register 909 is for the programmable low pass filter in the transmitter. The second data packet includes two data segments each with its own header or preamble. The first segment consists of both programmability and calibration data. Because the programmability feature requires a large dynamic range as far as programming the programmable low pass filter to handle different frequency bands, and the calibration feature is more of a fine tuning function of the programmable low pass filter once tuned requiring a much smaller dynamic range, a single digital word containing both programming and calibration information can be used with the most significant bits (MSB) having the programming information and the least significant bits (LSB) having the calibration information. To this end, the output register 909 combines the output of the low pass filter register 901 with the output of the RC calibration circuit 907 with the low pass filter register output constituting the MSBs and the RC calibration circuit output constituting the LSBs. A header or preamble is attached to the combined outputs identifying the data packet for RC calibration of the programmable low pass filter in the transmitter. Similarly, the second segment of the second data packet is generated by combining the low pass filter register output (as the MSBs) with the bandgap calibration circuit output (as the LSBs) and attaching a header or preamble identifying the data packets for bandgap calibration of the programmable low pas filter.
The third data packet generated and transmitted by the output register 909 can program the dividers in the LO generator to produce different IF frequencies. The third data packet can be a single segment of data with a header or preamble identifying the LO generator for programming each divider. Alternatively, the third data packet can include any number of data segments with, in one embodiment, different programming data for each divider in the LO generator. Each data segment would include a header or preamble identifying a specific divider in the LO generator.
The fourth data packet generated and transmitted by the output register 909 could include the programming data output from the demodulator register 904 with the appropriate header or preamble.
The output of the complex bandpass filter register 904 can be combined with the output from the RC calibration circuit 907 to form the first segment of the fifth data packet. The output of the complex bandpass filter register 904 can also be combined with the output of the bandgap calibration circuit 908 to form the second segment of the fifth data packet. Each segment can have its own header or preamble indicating the type of calibration data for the complex bandpass filter.
The sixth data packet generated and transmitted by the output register 909 can be the output data from the AGC register 905 accompanied by a header or preamble identifying the data packet for the programmable multiple stage amplifier in the receiver.
The output of the polyphase filter register 906 can be combined with the output from the RC calibration circuit 907 to form the first segment of the seventh data packet. The output of the polyphase filter register 906 can also be combined with the output of the bandgap calibration circuit 908 to form the second segment of the seventh data packet. Each segment can have its own header or preamble indicating the type of calibration data for the polyphase filter.
Finally, the output register 907 can configure additional data packets from the output of the RC calibration circuit 907 and, in separate data packets, the output of the bandgap calibration circuit 908 with appropriate headers or preambles.
As those skilled in the art will appreciate, other data transmission schemes can be used. By way of example, the separate output registers for each transceiver component could be used. In this embodiment, each output register would be directly connected to one or more transceiver components.
4.1 RC Calibration Circuit
RC calibration circuits can provide increased accuracy for improved performance. Embodiments of the present invention that are integrated into a single IC can utilize RC calibration to compensate for process, temperature, and power supply variation. For example, variations in the absolute value of the RC circuit in a complex filter can limit the amount of rejection that the filter can provide. In the described exemplary embodiments of the present invention, an RC calibration circuit in the controller can provide dynamic calibration of every RC circuit by providing a control word to the transmitter, receiver and LO generator.
FIG. 39 shows an exemplary RC calibration circuit in accordance with an embodiment of the present invention. The calibration circuit uses the reference clock from the LO generator to generate a 4-bit control word using a compare-and-increment loop until an optimum value is obtained. The 4-bit control provides an efficient technique for calibrating the RC circuits of the transceiver with a maximum deviation from its optimal value of only 5%.
Transistors 172, 174, 176, 178, 180, 182 form a cascode current source with a reference current IREF 184. With the gates of the transistors 172 and 178 tied to their respective sources, a fixed reference current IREF 184 can be established. By tying the gates of the transistors 174, 180 to the gates of the transistors 172, 178, respectively, the current through resistor RC 186 can be mirrored to IREF 184. Similarly, by tying the gates of the transistors 176, 182 to the gates of the transistors 174, 180, respectively, the current through resistor RC 186 can be mirrored to a tunable capacitor CC 188. The calibration circuit tunes the absolute value of the RC to a desired frequency by using this cascode-current source to provide identical currents to the on-chip reference resistor RC 186 and to the tunable capacitor CC 188 generating the voltages VRES 190 and VCAP 192, respectively. Embodiments of the present invention that are integrated into a single IC can use an off-chip reference resistor Rc to obtain greater calibration accuracy. The current through the tunable capacitor is controlled by a logic control block 195 via switch S2 193. During the charging phase, switch S2 193 is closed and switch S1 is open to charge the tunable capacitor CC 188 to VCAP. The voltage held on the tunable capacitor 188 VCAP is then compared, using a latched comparator 198, to a voltage generated across the reference resistor 186. The value of the tunable capacitor Cc 188 is incremented in successive steps by the logic control block 195 until the voltage held by the tunable capacitor CC matches the voltage across the reference resistor 186, at which point the 4-bit control word for optimal calibration of the RC circuits for the transmitter, receiver, and LO generator is obtained. More particularly, once the voltage VCAP reaches the voltage VRES, the output of the comparator output 198 switches. The switched comparator output is detected by the control logic 195. The control logic 195 opens switch S2 193 and closes switch S1 194 causing the tunable capacitor 188 CC to discharge. The resultant 4-bit control word is latched by the control logic 195 and coupled to the transceiver, receiver, and LO generator.
Cp 200 compensates for the parasitic capacitance loading of the capacitive branch. By choosing Cc 188 to be much larger than Cp 200, the voltage error at node VCAP 192 caused by charging the parasitic capacitance becomes negligible.
The clock signals used by the calibration circuit are generated by first dividing the reference clock down in frequency, and then converting the result into different phases for the charging, comparison, increment, and discharging phases of calibration. Embodiments of the present invention that are integrated onto a single IC can obtain an accurate RC value because capacitor scaling and matching on the same integrated circuit can be well-controlled with proper layout technique. The described RC calibration circuit provides an RC-tuning range of approximately +40%, which is sufficient to cover the range of process variation typical in semiconductor fabrication.
4.2 RC Calibration Circuit Using Polyphase Filtering
An RC calibration circuit using polyphase filtering is an alternative method for calibrating RC circuits in the transmitter, receiver, and LO generator. The RC calibration using polyphase filtering circuit includes an auto-calibration algorithm in which the capacitors or the RC circuits in the transceiver, receiver and LO generator can be calibrated with a control word generated by comparing the signal attenuation across two tunable polyphase filters. The calibrated RC value obtained as a result of this algorithm is accurate to within 5% of its optimal value.
FIG. 40 shows an exemplary embodiment of the RC calibration circuit using polyphase filtering. The RC calibration circuit uses the reference clock from the LO generator to adjust the RC value in two polyphase filters 280, 282 in successive steps until an optimum value has been selected. In this process, the two polyphase filters 280, 282 provide signal rejection that is dependent upon the value of =(RC)1 to which they are tuned by control logic 286. Initially, the first filter (Polyphase A) 280 is tuned to a frequency less than the frequency of the reference clock (reference frequency), and the second filter Polyphase B) 282 is tuned to a frequency greater than the reference frequency by control logic 286. The signals at the outputs of the polyphase filters are detected with a received-signal-strength-indicator (RSSI) block 284, 285 in each path. A filter is coupled to RSSI block 284 and the polyphase B filter is coupled to RSSI block 285.
With an input dynamic range of 50 dB, the RSSI circuit is designed to detect the levels of rejection provided by the polyphase filtering. The outputs of RSSI block 284 and RSSI block 285 are coupled to a comparator 280 where the level of signal rejection of each polyphase filter is compared by comparator 280. The outputs of the RSSI blocks are also coupled to the control logic 286. The control logic 286 determines from the RSSI outputs which polyphase filter has a lower amount of signal suppression. Then, the control logic 286 adjusts the frequency tuning of that filter in an incremental step via the control logic 286. This is done by either increasing the tuned frequency of the first filter (polyphase A) filter 280, or by decreasing the tuned frequency of the second filter (polyphase B) 282 by changing the appropriate 4-bit control word. This process continues in successive steps until the 4-bit control word in each branch are identical, at which point, the RC values of the two polyphase filters are equal. The 4-bit control word provides a maximum deviation of only +5%.
In the described exemplary embodiment, the frequency of the input signal XIN is derived from the reference frequency and is chosen to be, by way of example, 2 MHz. This input signal XIN is obtained by initially dividing the reference clock down in frequency, followed by a conversion into quadrature phases at the control logic 286. By dividing the reference clock by a factor greater than two with digital flip-flops (not shown), the input signal at XIN is known to be differential with well-defined quadrature phases.
Two branches of polyphase filtering are used in this algorithm. Two 4-bit control words are used to control the value of the capacitances in each polyphase filter. The initial control words set the capacitance in the first filter (Polyphase A) to its maximum value and the capacitance in the second filter (Polyphase B) to its minimum value. This provides an initial condition in which the filters have maximum signal suppression set at frequencies (low and high) that are approximately 40% of the frequency of the input signal XIN for the case of nominal process variation. For a sinusoidal input XIN, the calibration circuit depicted in FIG. 40 would require only a single-stage polyphase filter in each branch. The single-stage filters would attenuate the sinusoid input signal, generating outputs at XA and XB with the dominant one still at the same frequency as the input signal. However, the reference clock from the LO generator is a digital rail-to-rail clock. Because the input is not a pure sinusoid, multiple-stage filters may provide greater calibration accuracy. In the case of a single-stage filter with a digital clock, the filter would suppress the fundamental frequency component at coin to a significant degree but the harmonics would pass through relatively unaffected. The RSSI block would then detect and limit the third harmonic component of the input signal at 3in, as it becomes the dominant frequency component after the fundamental is suppressed. This could result in an inaccurate calibration code.
A three-stage polyphase filter can be used in each branch to suppress the fundamental frequency component of XIN as well as the 3rd and 5th harmonics. The first stage of the polyphase filter can provide rejection of the fundamental frequency component. The second stage can provide rejection of the 3rd harmonic. The third stage can provide rejection of the 5th harmonic. At the same time, the higher harmonics of the input signal XIN can be suppressed with an RC lowpass filter in a buffer (not shown) preceding the polyphase filters. As a result, the dominant frequency component of the signals XA and XB remains at the input frequency in, which is then properly detected by the RSSI blocks.
A calibration clock used for the control logic runs at a frequency of 250 kHz. The reference clock can be divided down inside the controller, or alternatively in the control logic. This clock frequency has been selected to allow the RSSI outputs to settle after the capacitance value in one of the polyphase filters has been incremented or decremented. For a clock frequency of 250 kHz and a 4-bit control word generating 24 possible capacitance values, the calibration is completed within (250 kHz)1(241)=60 s. During the calibration process the calibration circuitry draws 4 mA from a 3-V supply, and the RC calibration circuitry can be powered down when the optimal RC value has been selected to reduce power consumption.
4.3 The Capacitor Array
In the transmitter, receiver and LO generator, metal-insulator-metal (MIM) capacitors can be used as the calibration component for the RC circuits. As those skilled in the art will appreciate, other capacitor technologies may be used. The MIM capacitors are generally characterized by a low bottom-plate parasitic capacitance to substrate of 1%.
A parallel capacitor array can be used in calibrating each RC circuit as shown in FIG. 41. The parallel array is much smaller in area than a series array for the same capacitor value.
Complementary MOS switches or other switches known in the art, can be used in the capacitor array. The capacitor array can include any number of capacitors. In the exemplary embodiment, the capacitor array capacitors 290, 292, 294, 296, 298 are connected in parallel. Switches 300, 302, 304, 306 are used to switch the capacitors 292, 294, 296, 298, respectively, in and out of the capacitor array. In the described embodiment, capacitor 290 is 2.4 pF, capacitor 292 is 2.4 pF, capacitor 294 is 1.2 pF, capacitor 296 is 0.6 pF, capacitor 298 is 0.3 pF. The switch positions are nominally selected to produce an equivalent capacitance equal to 4.8 pF. A code of 0111 means that capacitors 294, 296, 298 are switched out of the capacitor array and capacitors 290, 292 are in parallel.
The switches can be binary-weighted in size and the switch sizes can be chosen according to tradeoffs regarding parasitic capacitances and frequency limitations based on the on-resistance of the CMOS switches. The capacitive error resulting from the parasitic capacitance in each capacitive array does not result in frequency error between the three polyphase stages of the RC calibration circuit in the controller. This is because by using same capacitor array in each filter, and by scaling the resistance accordingly in each case. Scaling resistances, relative to those in the fundamental polyphase filter, by factors of and in the 3rd and 5th harmonic filters respectively, are achieved with a high degree of accuracy with proper layout. Similarly, RC tuning in all other blocks utilizing the calibrated code is optimized when an identical capacitive array is used, scaling only the resistance value in tuning to the desired frequency. The capacitors in the capacitive arrays are laid out in 100 fF increments to improve the matching and parasitic fringing effects.
4.4 Bandgap Calibration Circuit for Accurate Bandgap Reference Current
In accordance with an exemplary embodiment of the present invention, a bandgap reference current is generated by a bandgap calibration circuit. The bandgap reference current is used by the receiver, transmitter, and LO generator to set amplifier gains and voltage swings. The bandgap calibration circuit generates an accurate voltage and resistance. An accurate bandgap reference current results from dividing the accurate voltage by the an accurate resistance.
Bandgap calibration circuits can provide increased accuracy for improved performance. Embodiments of the present invention that are integrated onto a single IC can utilize bandgap calibration circuits to compensate for process, temperature, and power supply variations. For example, variations in the absolute value of the resistance in a bandgap reference may result in deviations from optimal performance in sensitive circuitry that rely on accurate biasing conditions. In the described exemplary embodiment of the transceiver, a bandgap calibration circuit in the controller 16 provides an effective technique for self-calibration of resistance values in the transmitter, receiver and LO generator. The calibrated resistance values obtained as a result of the algorithm employed in the bandgap calibration circuit generate a bias current that varies by only +2% over typical process, temperature, and supply variation.
Embodiments of the present invention which are integrated into a single IC can use the described bandgap calibration circuit to provide accurate on-chip resistors by comparing the on-chip resistances to an off-chip reference resistor with a low tolerance of 1%. Using this method, trimming of on-chip resistance values with a total tolerance of 2% can be achieved.
FIG. 42 shows an exemplary embodiment of the bandgap calibration circuit. The bandgap calibration circuit uses the reference clock provided from the LO generator and a reference resistor RREF 236 to adjust a tunable resistance value RPOLY 238 in a compare-and-increment loop until an optimum value is obtained. In embodiments of the present invention which are integrated into a single IC, the reference resistor RREF 236 can be off-chip to provide improved calibration accuracy. A 4-bit control word is output to accurately calibrate the resistors in the transmitter, receiver and LO generator within 2%. Transistors 227, 226, 228, 230, 232, 234 form a cascode current with a reference current IREF. The transistors 224, 230 each have their gates tied to their respective sources to set up the reference current IREF. By tying the gates of the transistors 224, 230, respectively to the gates of the transistors 226, 232, the reference current IREF is mirrored to the reference resistor RREF236. Similarly, by tying the gates of the transistors 228, 234, respectively to the gates of the transistors, the reference current IREF is also mirrored to the tunable resistor RPOLY 238. The voltage generated across the tunable resistor RPOLY 238 is compared, using a latched comparator 240, to the voltage generated across the reference resistor RREF 236. The value of the tunable resistor RPOLY 236 is incremented in successive steps, preferably, every 0.5 s, through the utilization of control logic 242 that is clocked, by way of example, at 2 MHz. This process continues until the voltage VPOLY across the tunable resistor RPOLY 238 matches the voltage VREF across the off-chip reference resistor RREF 236 causing the output of the comparator to change states and disable the control logic 242. Once the control logic is disabled, the 4-bit control word can be used to accurately calibrate the resistors in the transmitter, receiver and LO generator.
The clock signals used by the calibration circuit are generated by first dividing the reference clock input into the controller from the LO generator down in frequency, and then converting the result into different phases for the comparison and increment phases of calibration. This bandgap calibration circuit provides accurate resistance values for use in various on-chip circuit implementations because resistor scaling and matching on the same integrated circuit can be well controlled with proper layout techniques. The bandgap calibration circuit provides a resistor tuning range of approximately +30%, which is sufficient to cover the range of process variation typical in semiconductor fabrication. With a 4-bit control word generating 24 possible resistance values, the calibration is completed within (2 MHz)1(241)=7.5 ms. The calibration circuit can be powered down when the optimal resistance value has been obtained.
The bandgap calibration circuit can be used for numerous applications. By way of example, FIG. 43 shows a bandgap calibration circuit 244 used in an application for calibrating a bandgap reference current that is independent of temperature. The 4-bit control word from the bandgap calibration circuit is coupled, by way of illustration, to the receiver. The 4-bit control word is used to calibrate resistances in a proportional-to-absolute-temperature (PTAT) bias circuit 246, and also in a VBE (negative temperature coefficient) bias circuit 248. The outputs of these blocks are two bias voltages, VP 250 and VN, 252 that generate currents exhibiting a positive temperature coefficient, and a negative temperature coefficient, respectively. When these currents are summed together using the cascode current mirror formed by transistors 254, 256, 258, 260, the result is a current IOUT displays a (ideally) zero temperature coefficient.
4.5 Resistor Array
In the transmitter, receiver and LO generator non-silicided polysilicon resistors can be used. As those skilled in the art will appreciate, other resistor technologies can also be used. Non-silicided polysilicon resistors have a high sheet resistance of 200-/square along with desirable matching properties. A switching resistor array as shown in FIG. 44 can be used to calibrate a resistor. The array includes serial connected resistors 208, 210, 212, 214, 216, which, by way of example, have resistances of 2200, 1100, 550, 275, and 137, respectively. The resistors 210, 212, 214, 216 include a bypass switch for switching the resistors in and out of the array. The switch positions are nominally selected to produce an equivalent of 3025. This resistance value has been chosen as a convenience to match the value used in generating an accurate bandgap reference current. A 4-bit calibration code 206 is used to control the total resistance in this array. As seen in FIG. 44, the resistances are binary-weighted in value and the accurate scaling of each incremental resistance results by placing the largest resistor (2200) 208 in series to generate each value. In the described embodiment, the incremental resistances shown in FIG. 44 are chosen so that the total resistance in the array covers a range 30% above and below its nominal value, with a maximum resistance error of +2% determined by the incremental resistance switched by the LSB. The range of resistance covered by the array is sufficient to cover typical process variations in a semiconductor process. A series resistive array may be desirable opposed to a parallel resistive array because of the smaller area occupied on the wafer.
CMOS switches are one of several different types of switch technology that can be used. The sizing of the switches entails a tradeoff between the on-resistance of each switch and the frequency limitations that result from the parasitic capacitances associated with each switch. For calibration resistors in the bandgap reference circuits, large switches are used to minimize the effect of the on-resistance of each switch, as frequency limitations are not a concern for this application.
5.0 Floating MOSFET Capacitors
Embodiments of the present invention that are integrated into a single IC can be implemented with a variety of technologies including, by way of example, CMOS technology. Heretofore, CMOS capacitors between two nodes with similar voltages (i.e., floating capacitors) have been problematic. In the described exemplary embodiment of the present invention, a MOS capacitor is used between two nodes having similar voltages for signals with no DC information. The capacitor is made of two MOS capacitors in series with a large resistor in between to ground for biasing.
FIG. 45 is a block diagram of the Floating MOS capacitor in accordance with an embodiment of the present invention. As shown in FIG. 45, the capacitor comprises two similar devices 858, 860 in series. Each MOS transistor has its source and drain connected together. The connected drain-source terminal of the MOS transistor 858 constitutes the input of the CMOS capacitor and the connected drain-source terminal of the MOS transistor 860 constitutes the output of the CMOS capacitor. The gates of each MOS transistor are connected through a common resistor 862 to a bias source (not shown).
6.0 Duplexing
In an alternative embodiment of the present invention, an integrated matching circuit can be used to connect the LNA in the receiver to the PA in the transmitter. As the level of integration in radio communication circuits tend to grow, more functions are embodied on the same chip and off-chip components are used less than ever. Presence of external components not only augments the manufacturing costs, but also increases the pin count on the main chip. The antenna switch is an example of such components. This switch is used to connect the receiver to antenna in reception mode and the transmitter to antenna in transmission mode. In the described exemplary embodiment of the present invention, the antenna switch can be eliminated, and the input of the receiver can be tied to the output of the transmitter. This approach has various applications including, but not limited to, single chip integration.
Since the antenna is usually single-ended, differential applications generally require a mechanism to convert the antenna signal from single-ended to differential for connection to the differential low noise amplifier (LNA) or the differential PA. The circuit implementation for a single-ended to differential LNA is shown in FIGS. 46 and 47. LC circuit, 646, 648 and the CL circuit 652, 650 matches the PA to the antenna when the PA is on and the LNA is off (as shown in FIG. 46), and matches the LNA to the antenna when the LNA is on and the PA is off (as shown in FIG. 47). Since the LNA is off and it only introduces a capacitive loading to the PA. The matching circuit can be designed to compensate for this additional capacitance.
In operation, during the transmit mode, a differential voltage across the drains of the PA transistors 634, 632 is generated. The two drains assert 180-degree out of phase voltages and they are combined through the LC and CL matching circuits to yield a single-ended voltage at the output. The LC circuit shifts the phase of the output signal from the transistor 634 by 90 degrees. The CL circuit shifts the phase of the signal output from the transistor 632 by 90 degrees in the opposite direction. Consequently, both signals are in-phase when combined at the output of the matching circuits.
Although a preferred embodiment of the present invention has been described, it should not be construed to limit the scope of the appended claims. For example, the present invention can be into a single integrated circuit, can be constructed from discrete components, or can include one or more integrated circuits supported by discrete components. Those skilled in the art will understand that various modifications may be made to the described embodiments. Moreover, to those skilled in the various arts, the invention itself herein will suggest solutions to other tasks and adaptations for other applications. It is therefore desired that the present embodiments be considered in all respects as illustrative and not restrictive, reference being made to the appended claims rather than the foregoing description to indicate the scope of the invention.
Claims
1. A filter circuit, comprising:
a plurality of cascaded filters; and
a bypass circuit coupled across one of the cascaded filters.
a plurality of cascaded filters; and
a bypass circuit coupled across one of the cascaded filters.
2-81. (canceled)
|
2021-02-25 11:01:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 33, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2691667377948761, "perplexity": 1443.555343548406}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350942.3/warc/CC-MAIN-20210225095141-20210225125141-00608.warc.gz"}
|
https://www.biostars.org/p/9476310/
|
Combining 2 sets of samples.
0
0
Entering edit mode
4 months ago
Lumos • 0
I have a study data containing normal and Diabetic samples. However, the phenotype I am considering for my analysis is BMI. I have checked the distribution of BMI scores between these 2 sets for significant difference, I have done the same for the allele frequencies and counts. The difference is non significant in both cases. My question is, can I consider both these sets (normal & diabetic) as a single cohort for studying BMI?
Gwas • 120 views
|
2021-10-28 12:56:35
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8102085590362549, "perplexity": 1634.2542572360874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588284.71/warc/CC-MAIN-20211028100619-20211028130619-00031.warc.gz"}
|
https://math.stackexchange.com/questions/2071298/what-are-some-easy-papers-in-mathematics-understood-by-undergraduates
|
# What are some easy papers in mathematics understood by undergraduates?
Being a junior(3rd.year) undergraduate in mathematics, I would like to learn how serious papers of mathematics do the proofs rigorously as opposed to textbooks and proffessors in lectures and I wish to see this by myself. For this reason , I just want to read at least one paper which I can understand the topic and proofs.
Though the aim of this reading process is to learn the extent of rigorousness in papers , I want also the subject to interest me so that I would learn something. That is why I might choose the topics somewhat related to real analysis since I consider doing Phd on real& functional analysis or probability & stochastic analysis.Hence, subjects could be Real Analysis, Game Theory, Probability or Stochastic Processes.
The thing is that I should be able to understand the paper to avoid throwing the paper away. I am at the level of Apostol's Mathematical Analysis(Point set topology ,metric spaces, differentiation &integration, continuity, uniform convergence, series and sequences of functions etc.) and I took a course on Lebesgue Integration which started Lebesgue Theory by sequence of step functions. I have not taken or read any thing on Stochastic Calculus but I know Applied Probability and Statistics at the level ,say undergraduate engineering student.
With all these in mind, which spesific paper do you suggest me to read?
1) The paper that I am looking for can be any theorem which is usually a subject of an undergraduate real analysis courses or from a textbook on real analysis. To give some examples, Fubini's Theorem, Beppo-Levi's Theorem in Lebesgue Integration or Banach's Fixed Point Theorem, Arzela-Ascoli Theorem, Baire Theorem etc. on Analysis can be candidates .
• This is such a broad topic which clearly has the tendency on getting "down-votes." (I hope not) So what's your field of interest, Mathematical Analysis ? – user399481 Dec 25 '16 at 8:30
• the paper I am looking for can be on any theorem which is a subject of undergraduate mathematical analysis courses. For example Fubini's Theorem on $\mathbb{R}^n$ – Quantes Dec 25 '16 at 8:33
• Right. Couple of ways on top of my head. Check "Google Scholar" option and search on the topic of your interest. Also, go to arxiv.org, choose Mathematics and do the same. – user399481 Dec 25 '16 at 8:35
• If I do that It can go very broad set of papers which I might choose one that I migh not able to understand even a single line of the paper at all. That is why I am looking an answer here. – Quantes Dec 25 '16 at 8:39
• It's not an easy process considering the level you're at. Either you asked some papers from an instructor who knows your level of understanding. Or you do it by yourself. It's up to you. Initially it'll be hard. But you'll get use to it. – user399481 Dec 25 '16 at 8:42
Suggestions:
Source: Mathematics Magazine, vol. 40, 1967, pp. 179-186.
Source: The American Mathematical Monthly, Vol. 78, No. 9 (Nov., 1971), pp. 970-979.
• Thank you! I will certainly look deeper into your suggestion @Pedro and I am open to other suggestions as well. – Quantes Dec 25 '16 at 13:02
• arzela has many generalisations – Max Dec 25 '16 at 14:14
Another Suggestion:
A simple proof that $\pi$ is irrational
It is a one page proof. It is not a proof of a major theorem, and questionably fits into your field though.
• It is a one page but, in my opinion, in order to make it readable for an undergraduate student we have to expand it considerably, as done in this series (or in this text in portuguese). – Pedro Mar 29 '17 at 1:07
|
2021-07-31 09:49:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5452393889427185, "perplexity": 315.72386717968254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154085.58/warc/CC-MAIN-20210731074335-20210731104335-00578.warc.gz"}
|
https://enwiki.academic.ru/dic.nsf/enwiki/1456100/11828
|
# Turbulence modeling
Turbulence modeling
Turbulence modeling is the area of physical modeling where a simpler mathematical model than the full time dependent Navier-Stokes Equations is used to predict the effects of turbulence.There are various mathematical models used in flow modelling to understand turbulence.
Joseph Boussinesq was the first practitioner of this, introducing the concept of eddy viscosity. In this model, the additional turbulent stresses are given by augmenting the molecular viscosity with an eddy viscosity. This can be a simple constant eddy viscosity (which works well for some free shear flows such as axisymmetric jets, 2-D jets, and mixing layers). Later, Ludwig Prandtl introduced the additional concept of the mixing length, along with the idea of a boundary layer. For wall-bounded turbulent flows, the eddy viscosity must vary with distance from the wall, hence the addition of the concept of a 'mixing length'. In the simplest wall-bounded flow model, the eddy viscosity is given by the equation:
: $u_t = kappa left|frac\left\{partial u\right\}\left\{partial y\right\} ight|l^2$
:where:
:$kappa$ is the von Karman constant (0.41);
:$frac\left\{partial u\right\}\left\{partial y\right\}$ is the partial derivative of the streamwise velocity (u) with respect to the wall normal direction (y);
:$l$ is the distance from the wall.
This simple model is the basis for the "Law of the Wall", which is a surprisingly accurate model for wall-bounded, attached (not separated) flow fields with small pressure gradients.
More general have evolved over time, with most modern turbulence models given by field equations similar to the Navier-Stokes equations.
Among many others, Joseph Smagorinsky (1964) proposed a useful formula for the eddy viscosity in numerical models, based on the local derivatives of the velocity field and the local grid size:
:$u_t = Delta x Delta y sqrt\left\{left\left(frac\left\{partial u\right\}\left\{partial x\right\} ight\right)^2 + left\left(frac\left\{partial v\right\}\left\{partial y\right\} ight\right)^2 + frac\left\{1\right\}\left\{2\right\}left\left(frac\left\{partial u\right\}\left\{partial y\right\} + frac\left\{partial v\right\}\left\{partial x\right\} ight\right)^2\right\}$
References
* [http://en.wikipedia.org/w/index.php?title=Special:Booksources&isbn=0521298199| Townsend, A.A. (1980) "The Structure of Turbulent Shear Flow" 2nd Edition (Cambridge Monographs on Mechanics)]
* [http://en.wikipedia.org/w/index.php?title=Special:Booksources&isbn=0080166210| Bradshaw, P. (1971) "An introduction to turbulence and its measurement" (Pergamon Press)]
* [http://en.wikipedia.org/w/index.php?title=Special:Booksources&isbn=0963605100| Wilcox C. D., (1998), "Turbulence Modeling for CFD" 2nd Ed., (DWC Industries, La Cañada) ]
* [http://www.ebi-edu.com/Entreprises/Recherche/documents/Absi%20R%20J%20WW%20ASCE%202006.pdf Absi, R. (2006), "Discussion of One-Dimensional Wave Bottom Boundary Layer Model Comparison: Specific Eddy Viscosity and Turbulence Closure Models" Journal of Waterway, Port, Coastal and Ocean Engineering, American Society of Civil Engineers (ASCE), Vol. 132, No. 2, pp. 139-141.]
* [http://www.jstage.jst.go.jp/article/jscejb/62/4/62_437/_article Absi, R. (2006), "A Roughness and time dependent mixing length equation", Journal of Hydraulic, Coastal and Environmental Engineering, (Doboku Gakkai Ronbunshuu B), Japan Society of Civil Engineers (JSCE), Vol. 62, No. 4, pp.437-446.]
Wikimedia Foundation. 2010.
### Look at other dictionaries:
• Turbulence — In fluid dynamics, turbulence or turbulent flow is a fluid regime characterized by chaotic, stochastic property changes. This includes low momentum diffusion, high momentum convection, and rapid variation of pressure and velocity in space and… … Wikipedia
• Atmospheric dispersion modeling — Industrial air pollution source Atmospheric dispersion modeling is the mathematical simulation of how air pollutants disperse in the ambient atmosphere. It is performed with computer programs that solve the mathematical equations and algorithms… … Wikipedia
• Roadway air dispersion modeling — is the study of air pollutant transport from a roadway or other linear emitter. Computer models are required to conduct this analysis, because of the complex variables involved, including vehicle emissions, vehicle speed, meteorology, and terrain … Wikipedia
• Computational fluid dynamics — Computational physics Numerical analysis … Wikipedia
• Fluid dynamics — Continuum mechanics … Wikipedia
• Charles Speziale — Charles G. Speziale Born June 16, 1948(1948 06 16) Newark, New Jersey, United States … Wikipedia
• Modélisation des turbulences — La modélisation de la turbulence (et non des turbulences ) est une branche de la dynamique des fluides consistant, pour la simulation numérique des écoulements turbulents, à représenter l influence de la turbulence sur l écoulement moyen… … Wikipédia en Français
• Joseph Valentin Boussinesq — (born March 13 1842 in Saint André de Sangonis (Hérault département ), died February 19 1929 in Paris) was a French mathematician and physicist who made significant contributions to the theory of hydrodynamics, vibration, light, and heat.John… … Wikipedia
• Von Kármán constant — A unitless constant describing the logarithmic velocity profile of a turbulent fluid near a no slip boundary. The equation for such boundary layer flow profiles is:u=frac{u {star{k}lnfrac{z}{z 0}where u is the velocity at height z above the… … Wikipedia
• Boussinesq approximation — may refer to: * Boussinesq approximation (buoyancy) for buoyancy driven flows for small density differences in the fluid * Boussinesq approximation (water waves) for long waves propagating on the surface of a fluid layer under the action of… … Wikipedia
### Share the article and excerpts
Do a right-click on the link above
|
2020-01-23 22:41:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7619097828865051, "perplexity": 4547.360501594126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250614086.44/warc/CC-MAIN-20200123221108-20200124010108-00446.warc.gz"}
|
https://www.physicsforums.com/threads/nonlinear-systems.318250/
|
# Nonlinear Systems
1. Jun 5, 2009
### Stratosphere
1. The problem statement, all variables and given/known data
solve the system of $$3x^{2}+2y^{2}=35$$ and
$$4x^{2}-3y^{2}=24$$
2. Relevant equations
3. The attempt at a solution
I re arranged for y^2 and got $$1\frac{1}{3}x^{2}-16=y^{2}$$ I keep getting x to equal \pm 2.473 this is clearly wrong, the answers should be (–3, –2), (–3, 2), (3, –2), and (3, 2). What am I doing wrong?
2. Jun 5, 2009
### rock.freak667
Instead of messing with fractions, why not just multiply the first equation by 3, the second equation by 2 and then just add them?
3. Jun 5, 2009
### Stratosphere
Thanks for the help but I still have a question, after I combined the two equations how come it worked and it didn’t work when I rearranged them. Did I mess up?
4. Jun 5, 2009
### HallsofIvy
Staff Emeritus
Looks like a basic arithmetic error. Because of the "$1\frac{1}{3}$", which would better be left 4/3, it looks like you solved the second equation for y2: $3y^2= 4x^2- 24$ so $y^2= (4/3)x^2- 8$. 24/3= 8, not 16.
|
2017-12-17 10:40:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6278059482574463, "perplexity": 940.5156309393645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948595342.71/warc/CC-MAIN-20171217093816-20171217115816-00583.warc.gz"}
|
https://socratic.org/questions/how-do-you-solve-log-x-1-log-x-1-2-log-x-2#214300
|
# How do you solve log(x-1)+log(x+1)=2 log(x+2)?
Jan 19, 2016
There are no solutions.
#### Explanation:
Use the logarithm rules to simplify either side:
• Left hand side: $\log a + \log b = \log \left(a b\right)$
• Right hand side: $b \log a = \log \left({a}^{b}\right)$
This gives
$\log \left[\left(x - 1\right) \left(x + 1\right)\right] = \log \left[{\left(x + 2\right)}^{2}\right]$
This can be simplified using the following rule:
• If $\log a = \log b$, then $a = b$
Giving us:
$\left(x - 1\right) \left(x + 1\right) = {\left(x + 2\right)}^{2}$
Distribute both of these.
${x}^{2} - 1 = {x}^{2} + 4 x + 4$
Solve. The ${x}^{2}$ terms will cancel, so there will only be one solution.
$4 x = - 5$
$x = - \frac{5}{4}$
However, this solution is invalid. Imagine if $x$ actually were $- \frac{5}{4}$. Plug it into the original equation. The terms $\log \left(x - 1\right)$ and $\log \left(x + 1\right)$ would be $\log \left(- \frac{9}{4}\right)$ and $\log \left(- \frac{1}{4}\right)$, and the logarithm function $\log a$ is only defined when $a > 0$.
|
2021-11-27 09:24:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 18, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8889131546020508, "perplexity": 496.16576825546235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358153.33/warc/CC-MAIN-20211127073536-20211127103536-00394.warc.gz"}
|
https://www.gamedev.net/forums/topic/390267-solved-why-some-items-wont-show/
|
# OpenGL [solved] Why some items wont show.
## Recommended Posts
Hi. I have a problem where I am trying to draw a unicycle. I modelled a wheel as a scaled sphere at first, and am now trying to add a tyre. (In the code below the wheel is scaled extra small so I may better see the tyre) Due to assignment constraints I have to use a tree method of connecting the items, and am currently having the wheel -> tyre (cyclinder) -> one rim (disc) -> other disc. Now I know that if there is a problem with the tree then the objects wont display, however my problem is that only the top and bottom items in the tree are displayed (IE the scaled down wheel/sphere and one rim) I have enclosed a Small Compilable Example below, although with openGL it appears there is no such thing as a small example. (Also hopefully the [ code ] [ /code ] tags work as there isn't a preview button.) So my question is does anything strike you as a good reason as to why some members of the tree are not displaying. I have attempted to play around with the rotate and translate functions but haven't come up with anything yet. The C code below can also be found here: http://myboxoftricks.homedns.org/sce.c If anyone has any ideas I would appreciate it. Kind regards, Mitch. EDIT: I've removed the code as i reckon I might be scaring people off, obviously people are looking but not replying. I cant imagine that no-one has used a tree structure before, so is there anything I can do that would make my question easier to ask? [Edited by - spudtheimpaler on April 29, 2006 4:20:48 PM]
##### Share on other sites
Just bumping the thread to indicate I've changed it.
##### Share on other sites
By tree method I assume you mean an object hierarchy of some sort. Are you using glPushMatrix() and glPopMatrix()? Proper use of these functions is usually the first step towards applying hierarchical transforms.
Another potential problem is order of operations. Generally you want to rotate and then translate, which translates into calling glTranslate*() before glRotate*() in your code. This is a common thing to get wrong, and usually results in objects not appearing where they're supposed to.
Maybe you could post just the bits of code where you set up the relevant transforms. The best tags to use for large blocks of code are probably [ source ][ /source ] (without the spaces).
##### Share on other sites
Quote:
Original post by jykBy tree method I assume you mean an object hierarchy of some sort. Are you using glPushMatrix() and glPopMatrix()? Proper use of these functions is usually the first step towards applying hierarchical transforms.
I am indeed using those methods to form an object hierarchy (Sorry, I've been thrown into the deep end, so unfamiliar with many terms, functions etc).
Quote:
Original post by jykAnother potential problem is order of operations. Generally you want to rotate and then translate, which translates into calling glTranslate*() before glRotate*() in your code. This is a common thing to get wrong, and usually results in objects not appearing where they're supposed to.
I was putting those the wrong way around, however it hasn't fixed the problem. I wasn't even aware it was an issue, however, so thank you for pointing it out.
Quote:
Original post by jykMaybe you could post just the bits of code where you set up the relevant transforms. The best tags to use for large blocks of code are probably [ source ][ /source ] (without the spaces).
ok. Here we have the functions for each item on the hierachy:
//////////////////////// My Wheel Function //////////////////////////void wheel(){ glPushMatrix(); glMaterialfv(GL_FRONT, GL_SPECULAR, wheel_specular); glMaterialfv(GL_FRONT, GL_AMBIENT, wheel_ambient); glMaterialfv(GL_FRONT, GL_DIFFUSE, wheel_diffuse); glTranslatef(0,-4,0); glScalef(0.1, 0.1, 0.1*WHEEL_WIDTH);//Change scale from 1,1,wheel_width gluSphere(wheel_quad, WHEEL_RADIUS, 30, 30); glPopMatrix();}//////////////////////// My Tyre Function ///////////////////////////void tyre(){ glPushMatrix(); glMaterialfv(GL_FRONT, GL_SPECULAR, tyre_specular); glMaterialfv(GL_FRONT, GL_AMBIENT, tyre_ambient); glMaterialfv(GL_FRONT, GL_DIFFUSE, tyre_diffuse); glTranslatef(0,-4,-0.2); glRotatef(0.0,0, 1.0, 0);// glScalef(1,1,1); gluCylinder(tyre_quad, WHEEL_RADIUS, WHEEL_RADIUS, TYRE_WIDTH, 30, 30); glPopMatrix();}void tyre_disc_left(){ glPushMatrix(); glMaterialfv(GL_FRONT, GL_SPECULAR, tyre_specular); glMaterialfv(GL_FRONT, GL_AMBIENT, tyre_ambient); glMaterialfv(GL_FRONT, GL_DIFFUSE, tyre_diffuse); glTranslatef(0,-4.0,-0.2); glRotatef(0.0,0, 1.0, 0);// glScalef(1,1,0.1); gluDisk(tyre_disc_left_quad, WHEEL_RADIUS/TYRE_WIDTH_DIVIDER, WHEEL_RADIUS, 30, 30); glPopMatrix();}void tyre_disc_right(){ glPushMatrix(); glMaterialfv(GL_FRONT, GL_SPECULAR, tyre_specular); glMaterialfv(GL_FRONT, GL_AMBIENT, tyre_ambient); glMaterialfv(GL_FRONT, GL_DIFFUSE, tyre_diffuse); glTranslatef(0,-4,0.2); glRotatef(00.0,0, 1.0, 0);// glScalef(1,1,0.1); gluDisk(tyre_disc_right_quad, WHEEL_RADIUS/TYRE_WIDTH_DIVIDER, WHEEL_RADIUS, 30, 30); glPopMatrix();}
with a display function that looks like this
void display(void){ glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); glLoadIdentity(); glColor3f(1.0, 1.0, 1.0);// traverse(&torso_node); traverse(&wheel_node); glutSwapBuffers();}
and the traverse function (unaltered from that given to us by the lecturer)
void traverse(treenode* root){ if(root==NULL) return; glPushMatrix(); glMultMatrixf(root->m); root->f(); if(root->child!=NULL) traverse(root->child); glPopMatrix(); if(root->sibling!=NULL) traverse(root->sibling);}
and the hierarchy set up as per:
//////////////////////// My own quadrics //////////////////////////// wheel_quad = gluNewQuadric(); gluQuadricDrawStyle(wheel_quad, GLU_FILL); tyre_quad = gluNewQuadric(); gluQuadricDrawStyle(tyre_quad, GLU_FILL); tyre_disc_left_quad = gluNewQuadric(); gluQuadricDrawStyle(tyre_disc_left_quad, GLU_FILL); tyre_disc_right_quad = gluNewQuadric(); gluQuadricDrawStyle(tyre_disc_right_quad, GLU_FILL);[snip...]//////////////////////// My own Tree //////////////////////////////////////////////////////// Start with the wheel /////////////////////// glLoadIdentity(); glTranslatef(0,-4.0,0); glRotatef(theta[0], 0.0, 1.0, 0.0); glGetFloatv(GL_MODELVIEW_MATRIX,wheel_node.m); wheel_node.f = wheel; wheel_node.sibling = NULL; wheel_node.child = &tyre_node;//&head_node; glLoadIdentity(); //////////////////////// Next the tyre ////////////////////////////// glLoadIdentity(); glTranslatef(0,0,-0.2); glRotatef(theta[1], 0, 1, 0); glGetFloatv(GL_MODELVIEW_MATRIX,tyre_node.m); tyre_node.f = tyre; tyre_node.sibling = NULL; tyre_node.child = &tyre_disc_left_node; glLoadIdentity();//////////////////////// Add the Tyre Rims ////////////////////////// glLoadIdentity(); glTranslatef(0, 0,0); glRotatef(theta[2], 0, 1, 0);//change this? glGetFloatv(GL_MODELVIEW_MATRIX,tyre_disc_left_node.m); tyre_node.f = tyre_disc_left; tyre_node.sibling = NULL; tyre_node.child = &tyre_disc_right_node; glLoadIdentity(); glLoadIdentity(); glTranslatef(0,0,0.4); glRotatef(theta[3], 0, 1, 0);//change this? glGetFloatv(GL_MODELVIEW_MATRIX,tyre_disc_right_node.m); tyre_node.f = tyre_disc_right; tyre_node.sibling = NULL; tyre_node.child = NULL; glLoadIdentity();
As mentioned the compilable example is as [link removed by spudtheimpaler: posted this a while ago, need space on the server]
Hopefully the above will help to clarify :) This has brought me to a complete halt and I know it is simply my annoyance that is blinding me to the obvious answer.
Many thanks :)
Mitch
[Edited by - spudtheimpaler on May 21, 2006 12:37:29 PM]
##### Share on other sites
Solved.
glLoadIdentity(); glTranslatef(0, 0,0); glRotatef(theta[2], 0, 1, 0);//change this? glGetFloatv(GL_MODELVIEW_MATRIX,tyre_disc_left_node.m); tyre_node.f = tyre_disc_left; tyre_node.sibling = NULL; //** tyre_node.child = &tyre_disc_right_node; glLoadIdentity();
//** tyre_node.sibling etc should be tyre__disc_left_node.blah
Thanks for your help though :)
Mitch.
## Create an account
Register a new account
• ## Partner Spotlight
• ### Forum Statistics
• Total Topics
627674
• Total Posts
2978558
• ### Similar Content
• Both functions are available since 3.0, and I'm currently using glMapBuffer(), which works fine.
But, I was wondering if anyone has experienced advantage in using glMapBufferRange(), which allows to specify the range of the mapped buffer. Could this be only a safety measure or does it improve performance?
Note: I'm not asking about glBufferSubData()/glBufferData. Those two are irrelevant in this case.
• By xhcao
Before using void glBindImageTexture( GLuint unit, GLuint texture, GLint level, GLboolean layered, GLint layer, GLenum access, GLenum format), does need to make sure that texture is completeness.
• By cebugdev
hi guys,
are there any books, link online or any other resources that discusses on how to build special effects such as magic, lightning, etc. in OpenGL? i mean, yeah most of them are using particles but im looking for resources specifically on how to manipulate the particles to look like an effect that can be use for games,. i did fire particle before, and I want to learn how to do the other 'magic' as well.
Like are there one book or link(cant find in google) that atleast featured how to make different particle effects in OpenGL (or DirectX)? If there is no one stop shop for it, maybe ill just look for some tips on how to make a particle engine that is flexible enough to enable me to design different effects/magic
let me know if you guys have recommendations.
• By dud3
How do we rotate the camera around x axis 360 degrees, without having the strange effect as in my video below?
Mine behaves exactly the same way spherical coordinates would, I'm using euler angles.
Tried googling, but couldn't find a proper answer, guessing I don't know what exactly to google for, googled 'rotate 360 around x axis', got no proper answers.
References:
Code: https://pastebin.com/Hcshj3FQ
The video shows the difference between blender and my rotation:
• By Defend
I've had a Google around for this but haven't yet found some solid advice. There is a lot of "it depends", but I'm not sure on what.
My question is what's a good rule of thumb to follow when it comes to creating/using VBOs & VAOs? As in, when should I use multiple or when should I not? My understanding so far is that if I need a new VBO, then I need a new VAO. So when it comes to rendering multiple objects I can either:
* make lots of VAO/VBO pairs and flip through them to render different objects, or
* make one big VBO and jump around its memory to render different objects.
I also understand that if I need to render objects with different vertex attributes, then a new VAO is necessary in this case.
If that "it depends" really is quite variable, what's best for a beginner with OpenGL, assuming that better approaches can be learnt later with better understanding?
• 11
• 11
• 10
• 12
• 22
|
2017-10-19 14:55:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35752877593040466, "perplexity": 3444.6157812695983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823309.55/warc/CC-MAIN-20171019141046-20171019161046-00367.warc.gz"}
|
https://ask.libreoffice.org/en/question/213468/indent-on-first-line-of-each-page/
|
# indent on first line of each page
This happens when I re-save an .odt to .docx to upload to amazon/kdp.
The first line of many pages (ie, at the top of a new page) is indented about 3 spaces.
In other places, there's an break in a sentence such that there's a gap (several empty lines) at the bottom of one page, and then one word from the sentence is on the next page.
Any idea what's causing this and more importantly is there a way to fix it without going through page by page hitting back space to correct each instance?
Thank you.
edit retag close merge delete
From your last sentence, you suggest that stray "real" spaces have been added by the conversions. Do you confirm this is the case and not a formatting artifact (no real spaces but an error in text formatting)?
In the top-of-the page case, have you a "non-breaking space" as first character of the line followed by one or several ordinary spaces? More generally, are you sure all your spaces (apart from very specific needs) are simple spaces, typed with the space bar, and not special spaces obtained from the space bar + modifiers (Shift, Alt or Ctrl)? Mixing ordinary and special spaces causes "funny" but predictable results with justification.
These effects are present in both versions of the text (.odt and .doc(x)) but may be revealed by the difference in page layout.
( 2019-10-18 08:07:48 +0100 )edit
Sort by » oldest newest most voted
Thank you ajittoz. These are formatting issues that arise when converting from .odt to .docx. Sometimes it's far worse, causing a page break every paragraph or so such that most of a page ends up blank with only one paragraph at the top.
What I found last night is that the problem is page styles. During the conversion, my page styles all became converted 1, converted 2....converted 267. I found two solutions.
1. I went back in and edited the DEFAULT page style in the organizer tab to be followed by DEFAULT. I still had to go through and fix the first page of each chapter, which I did by inserting the cursor in front of the word chapter, backspace so that the chapter heading went to the previous page and became default text, and then once again turned it to HEADING 1.
[HEADING 1 was already defined to insert a page break before the paragraph, and I re-set the page break to DEFAULT.]
If there was an easier way to do that, I'd still like to know, but it got rid of those unwanted spaces and awkward empty spaces on the pages.
1. I gave up on KDP and used draft2digital. Although I originally intended to say that tongue in cheek, the fact is, the same document that continued to have issues at one worked beautifully and absolutely perfectly at the other, on the first try.
more
The problem with page styles comes from the fact that Word has no notion of page style and every occurrence must be converted to some equivalent set of other primitives. Hence, the "converted-n".
Easier way …
It depends on the degree of "styleness" of your document and the presence (frequency) of direct formatting. Usually mixing styles and direct formatting leads into this sort of trouble.
KDP? draft2digital? What these applications? How do they relate to LibreOffice?
( 2019-10-18 18:15:08 +0100 )edit
|
2019-11-12 23:14:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5380131006240845, "perplexity": 1786.467621029756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665809.73/warc/CC-MAIN-20191112230002-20191113014002-00459.warc.gz"}
|
http://hitchhikersgui.de/War_of_attrition_(game)
|
# War of attrition (game)
In game theory, the war of attrition is a dynamic timing game in which players choose a time to stop, and fundamentally trade off the strategic gains from outlasting other players and the real costs expended with the passage of time. Its precise opposite is the pre-emption game, in which players elect a time to stop, and fundamentally trade off the strategic costs from outlasting other players and the real gains occasioned by the passage of time. The model was originally formulated by John Maynard Smith;[1] a mixed evolutionarily stable strategy (ESS) was determined by Bishop & Cannings.[2] An example is an all-pay auction, in which the prize goes to the player with the highest bid and each player pays the loser's low bid (making it an all-pay sealed-bid second-price auction).
## Examining the game
To see how a war of attrition works, consider the all pay auction: Assume that each player makes a bid on an item, and the one who bids the highest wins a resource of value V. Each player pays his bid. In other words, if a player bids b, then his payoff is -b if he loses, and V-b if he wins. Finally, assume that if both players bid the same amount b, then they split the value of V, each gaining V/2-b. Finally, think of the bid b as time, and this becomes the war of attrition, since a higher bid is costly, but the higher bid wins the prize.
The premise that the players may bid any number is important to analysis of the game. The bid may even exceed the value of the resource that is contested over. This at first appears to be irrational, being seemingly foolish to pay more for a resource than its value; however, remember that each bidder only pays the low bid. Therefore, it would seem to be in each player's best interest to bid the maximum possible amount rather than an amount equal to or less than the value of the resource.
There is a catch, however; if both players bid higher than V, the high bidder does not so much win as lose less. The player who bid the lesser value b loses b and the one who bid more loses b -V (where, in this scenario, b>V). This situation is commonly referred to as a Pyrrhic victory. For a tie such that b>V/2, they both lose b-V/2. Luce and Raiffa referred to the latter situation as a "ruinous situation";[1] both players suffer, and there is no winner.
The conclusion one can draw from this pseudo-matrix is that there is no value to bid which is beneficial in all cases, so there is no dominant strategy. Also, there is no Nash Equilibrium in pure strategies in this game indicated as follow:
• If there is a lower bidder and a higher bidder, the rational strategy for the lower bidder is to bid zero knowing that it will lose. The higher bidder will bid a value slightly higher and approaches zero in order to maximize its payoff, in which case the lower bidder has the incentive to outbid the higher bidder to win.
• If the two players equally bid, the equalized value of the bid cannot exceed V/2 or the expected payoff for both players will be negative. For any equalized bid less than V/2, either player will have the incentive to bid higher.
With the two cases mentioned above, it can be proved that there is no Nash Equilibrium in pure strategies for the game since either player has the incentive to change its strategy in any reasonable situation.
## Dynamic formulation and evolutionarily stable strategy
Another popular formulation of the war of attrition is as follows: two players are involved in a dispute. The value of the object to each player is ${\displaystyle v_{i}>0}$. Time is modeled as a continuous variable which starts at zero and runs indefinitely. Each player chooses when to concede the object to the other player. In the case of a tie, each player receives ${\displaystyle v_{i}/2}$ utility. Time is valuable, each player uses one unit of utility per period of time. This formulation is slightly more complex since it allows each player to assign a different value to the object. Its equilibria are not as obvious as the other formulation. The evolutionarily stable strategy is a mixed ESS, in which the probability of persisting for a length of time t is:
${\displaystyle p(t)={\frac {1}{V}}e^{(-t/V)}}$
The evolutionarily stable strategy below represents the most probable value of a. The value p(t) for a contest with a resource of value V over time t, is the probability that t = a. This strategy does not guarantee the win; rather it is the optimal balance of risk and reward. The outcome of any particular game cannot be predicted as the random factor of the opponent's bid is too unpredictable.
That no pure persistence time is an ESS can be demonstrated simply by considering a putative ESS bid of x, which will be beaten by a bid of x+${\displaystyle \delta }$.
It has also been shown that even if the individuals can only play pure strategies, the time average of the strategy value of all individuals converges precisely to the calculated ESS. In such a setting, one can observe a cyclic behavior of the competing individuals.[3]
## The ESS in popular culture
The evolutionarily stable strategy when playing this game is a probability density of random persistence times which cannot be predicted by the opponent in any particular contest. This result has led to the prediction that threat displays ought not to evolve, and the conclusion that the optimal military strategy is to behave in a completely unpredictable, and therefore insane, manner. Neither of these conclusions appear to be truly quantifiably reasonable applications of the model to realistic conditions.
## References
1. ^ Maynard Smith, J. (1974) Theory of games and the evolution of animal conflicts. Journal of Theoretical Biology 47: 209-221.
2. ^ Bishop, D.T. & Cannings, C. (1978) A generalized war of attrition. Journal of Theoretical Biology 70: 85-124.
3. ^ K. Chatterjee, J.G. Reiter, M.A. Nowak: "Evolutionary dynamics of biological auctions". Theoretical Population Biology 81 (2012), 69 - 80
## Sources
• Bishop, D.T., Cannings, C. & Maynard Smith, J. (1978) The war of attrition with random rewards. Journal of Theoretical Biology 74:377-389.
• Maynard Smith, J. & Parker, G. A. (1976). The logic of asymmetric contests. Animal Behaviour. 24:159-175.
• Luce,R.D. & Raiffa, H. (1957) "Games and Decisions: Introduction and Critical Survey"(originally published as "A Study of the Behavioral Models Project, Bureau of Applied Social Research") John Wiley & Sons Inc., New York
• Rapaport,Anatol (1966) "Two Person Game Theory" University of Michigan Press, Ann Arbor
|
2019-06-19 06:10:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6087123155593872, "perplexity": 1176.8764386414039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998913.66/warc/CC-MAIN-20190619043625-20190619065625-00475.warc.gz"}
|
https://math.stackexchange.com/questions/3063272/is-this-way-of-finding-lim-limits-x-to-inftyx-lnx21-valid
|
# Is this way of finding $\lim\limits_{x\to +\infty}(x-\ln(x^2+1))$ valid?
I needed to find: $$\lim\limits_{x\to +\infty}(x-\ln(x^2+1))$$
So here are the steps I took:
Step 1: Replace $$x$$ with $$\ln(e^x)$$: $$\lim\limits_{x\to +\infty}\left(\ln(e^x)-\ln(x^2+1)\right)$$ $$\lim\limits_{x\to +\infty}\ln\left(\frac{e^x}{x^2+1}\right)$$
Step 2: Bring the limit inside of the natural log function since it is continuous on the required interval.
$$\ln\left(\lim_{x\to +\infty}\frac{e^x}{x^2+1}\right)$$
Step 3: Apply L'Hospital's rule twice and evaluate:
$$\ln\left(\lim_{x\to +\infty}e^x\right)$$
$$\ln(+\infty) = +\infty$$
My question is whether step 2 is valid here because $$\lim\limits_{x\to \infty}\frac{e^x}{x^2 + 1}$$ doesn't exist (its $$+\infty$$), and in order to move the limit operator inside the function the limit $$\lim\limits_{x\to \infty}\frac{e^x}{x^2 + 1}$$ must exist according to this theorem in a book about Calculus (ISBN 978-0-470-64769-1):
If it's not valid, what would be a valid way to find the limit?
• Your step is valid. The book deals with the theorem where the limit does exist. Similar result holds if the limit does not exist. See the theorem mentioned at the end of this answer: math.stackexchange.com/a/1073047/72031 – Paramanand Singh Jan 6 at 2:24
Let $$M>0$$ be given. Since $$\ln(u)\to \infty$$ as $$u\to\infty$$, there exists $$K$$ such that for all $$x$$ with $$\frac{e^x}{x^2+1}>K$$ it follows that $$\ln\left(\frac{e^x}{x^2+1}\right)>M$$. Since $$\frac{e^x}{x^2+1}\to \infty$$ as $$x\to \infty$$, there exists $$N$$ such that $$x>N\implies \frac{e^x}{x^2+1}>K\implies \ln\left(\frac{e^x}{x^2+1}\right)>M.$$
By definition of a limit it follows that $$\lim_{x\to\infty}\frac{e^x}{x^2+1}=\infty.$$
Note we can mimic the same argument to conclude that if $$f(x)\to \infty$$ as $$x\to \infty$$ and $$g(x)\to \infty$$ as $$x\to \infty$$, then $$g(f(x))\to \infty$$ as $$x\to \infty$$.
In step 2 all you need is the fact that $$\ln (y) \to \infty$$ as $$y\to \infty$$. Since $$\frac {e^{x}} {x^{2}+1} \to \infty$$ it follows that $$\ln (\frac {e^{x}} {x^{2}+1}) \to \infty$$. To prove that $$\ln (y) \to \infty$$ as $$y\to \infty$$ assume that this is false. Since $$\ln \, x$$is an increasing function, if it doesn't not tend to infinity, it would be bounded for $$x>1$$, say $$\ln\, x for all $$x>1$$. You get a contradiction from this if you take $$x=e^{C}$$.
• @user3071028 Everything you have done is right if you use that fact that $\ln \, y\to \infty$ as $y \to \infty$. – Kavi Rama Murthy Jan 6 at 0:35
|
2019-06-27 08:48:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 40, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9790053963661194, "perplexity": 126.41455360270332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001014.85/warc/CC-MAIN-20190627075525-20190627101525-00281.warc.gz"}
|
http://mathhelpforum.com/advanced-applied-math/228001-fastest-roll-off-filter.html
|
# Thread: Fastest roll-off of a filter
1. ## Fastest roll-off of a filter
I was wondering: can a filter be optimised to have maximum roll off. Filters are usually specified as:
$|H(\omega)|^2 = \frac{1}{1 + \alpha(\omega)}$ where $\alpha$ is a polynomial function. I was wondering if it was possible to maximise the steepness of the roll-off by specifying the passband and stopband ripples.
What is usually done is:
$|H(\omega)|^2 = H(\omega)H(-\omega) = \frac{1}{1 + \alpha(\omega)}$
This gives you the polls and zeros. Then you'll need to do some optimisation of some kind.
2. ## Re: Fastest roll-off of a filter
Optimum "L" filter - Wikipedia, the free encyclopedia particular the two references in the bibliography.
Maximum roll off usually means undesired characteristics somewhere else, such as too much passband ripple or phase distortion.
|
2016-10-27 06:07:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7882863283157349, "perplexity": 2249.514149301584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721141.89/warc/CC-MAIN-20161020183841-00221-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://openstudy.com/updates/50f8ce8ae4b027eb5d9a56dc
|
## A community for students. Sign up today!
Here's the question you clicked on:
## Atkinsoha 2 years ago Use fundamental identities to simplify the expression secØ (sinØ/tanØ). I don't remember learning this. please show work, thanks!
• This Question is Closed
1. wio
There aren't any inverse functions here?
2. wio
oops, nevermind.
3. satellite73
$\sec(x)=\frac{1}{\sin(x)}$ and$\tan(x)=\frac{\sin(x)}{\cos(x)}$replace and then do algebra
4. Atkinsoha
It's a multiple choice, sorry I thought the rest of the question posted. It says, determine which of the following is NOT equivalent. cos^2Ø-sin^2Ø csc^2Ø -cot^2Ø 1 sec^2Ø -tan^2Ø sin^2Ø +cos^2Ø
#### Ask your own question
Ask a Question
Find more explanations on OpenStudy
|
2015-02-27 07:45:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6438310146331787, "perplexity": 10776.551121875085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936460577.67/warc/CC-MAIN-20150226074100-00161-ip-10-28-5-156.ec2.internal.warc.gz"}
|
http://cafe.elharo.com/programming/spot-the-bug/
|
# Spot the Bug
A future exam question: Identify the elementary programming error in the following actual output from a real web store.
Bonus credit: describe both the quick emergency fix for the problem, and the longterm fix for the problem.
Greetings from CellularFactory.com.
We thought you'd like to know that we shipped your items, and that this completes
requests
You can track the status of this order, and all your orders, online by visiting our
page at http://www.CellularFactory.com/help/shipping.jsp
The following items have been shipped to you by CellularFactory.com:
---------------------------------------------------------------------
Qty Item Price Shipped Subtotal
---------------------------------------------------------------------
1 Travel Charger 5.89 2008-02-09 5.89
---------------------------------------------------------------------
Shipped via USPS (estimated arrival date: about 4-6 days after)
---------------------------------------------------------------------
Item Subtotal
5.89
Shipping & Handling:
3.99
Total:
9.879999999999999
--------------------------------------------------------------------
This shipment was sent to:
I teach this bug to my Intro to Java students, but I was a little shocked to see it in production. This is why I get nervous ordering from tiddlywink sites like Cellular Factory. If they can’t get the easy things right, what’s the chance they’re implementing proper information security procedures to protect my financial and personal information? Approximately zero, I’d venture.
### 25 Responses to “Spot the Bug”
1. gatzke Says:
round() ?
2. Peter C O Johansson Says:
Okay, I may be utterly wrong. I hope not. Fundamental error: Using floats to represent currency, manifested as inaccuracy and being printed with a hueg number of decimals at the end. Quick fix: %.2f. Long-term fix: Start using an integral type instead.
3. jbs Says:
Don’t use round().
The real lesson here is that approximations of numbers (e.g., floating point representations) are inappropriate for exact and enumerable quantities.
Approximations are fine if you are talking about, say, elastic-strain tension but can be disastrous when talking about, say, monies. The conventional wisdom is to use a fixed point arithmetic library when the exact count of something matters.
Some languages like Haskell and C# have arbitrary fixed-point arithmetic built-in; others like C require a third-party library (e.g., GMP) or you to roll your own.
4. Garrett N Says:
Use COBOL. Remember BCD?
PIC 99,999,999.99 USAGE IS COMP-3.
5. John Cowan Says:
What’s more, the table of items is horribly out of alignment with its header. I second the recommendation to use Cobol.
6. kalpesh patel Says:
How about using java BigDecimal to represent all the currency.
7. Alan Little Says:
Garrett: you want to hard-code your separators, thus offending/confusing all your customers in Europe where dots are used to mark thousands and the decimal separator is a comma? Or in India where lakhs and crores are delimited every four figures instead of every three? Why? (Or maybe I underestimate COBOL and it has locale settings to handle such things?)
The blatantly misaligned formatting struck me as just as obvious and amateurish too. I’d be willing to bet considerable sums of money that they can’t handle non-US addresses properly either, and parsing spaces out of credit card numbers is beyond them: these being almost universal characteristics of incompetent ecommerce sites.
8. josh Says:
BigDecimal? Wouldn’t using long to store the number of pennies be a lot easier?
9. brett Says:
I think long would be fine as long as you’re not doing financial calculations that involve percentages. Those fractional pennies add up.
10. Martijn Says:
Sometimes you need to store (or calculate with) smaller amounts than even pennies (yes, really.. take weird amounts of tax for example), so it’s best to go for a few centipennies or even millipennies, and round properly/move the comma on output(!) only.
11. ronjohn Says:
this might not be a programming error per say, but it would certainly be advisable to count the basket contents and ship your ‘item’ instead of ‘items’ if there is only one thing in it.
12. John Vance Says:
Simple. The program should not have printed repeated 9s. It should have printed a single 9 followed by a tilde indicating infinite repeat – like so:
9.879~
The long term solution would be to invest in math education, so that the average user could understand such notation. See Djikstra’s comments about end users.
Except they weren’t charged 9.879~, they would have been charged 9.88.
14. John Vance Says:
(9.879~) == (9.88)
Not approximately – exactly.
If you want to nail me for something, nail me for misspelling Dijkstra’s name.
15. Zack Says:
Yes. They were charged 9.88.
Either, the problem is that they used a broken language (java?) with faulty float addition:
MzScheme:
(+ 3.89 5.99)
9.88
Python:
>> 3.89 + 5.99
9.8800000000008
Or, which seems more likely, they stored the prices in integral cents, converted to floating point, divided by 100.0 to format the output:
(389 + 599) / 100.0
which will give you the wacky number they posted in many languages. They should have formatted correctly, yes, but lets hope they also know what is going on with their calculations, as well.
John,
>(9.879~) == (9.88)
>Not approximately – exactly.
Ok fair point. I’d already forgotten http://en.wikipedia.org/wiki/0.999… after reading it a few months ago.
I’ll have to fall back on arguing that 9.88 is a better representation for the users in this case.
17. Garren Says:
Yeah, Java. Just in case the url “…shipping.jsp” didn’t give it away.
It’s not “faulty float addition” nor is it a “broken language” (debatable). It’s just good ol’sloppy coding.
A more interesting game might be, “Fix the bug”.
class phew {
public static void main( String[] args ) {
double price = 5.89, tax = 3.99;
System.out.println( price + ” + ” + tax + ” = ” +
(price + tax) );
// ( java.lang.Math.round( 100 * (price + tax) ) / 100.00 )
}
}
5.89 + 3.99 = 9.879999999999999
18. Ramkumar.E.V. Says:
Hi,
I see this from an user perspective.If you see realistic currencies,you can maximum pay to someone for 2 decimal points,you can pay for figures like 12.3456 and all .. you can max pay for 12.34
So the best deal would be to use double for calculations since it takes care of decimal issues by default or use explicit type conversion to float or use BigDecimal whatever but keep it rounded to 2 decimal points at the end when the results are shown to the user.
Cheers
Ram
And the shipping URL page should have the order number string pre-appended (like “shipping.jsp?order=#####”). Another less semantic load
20. duncan Says:
The upstream commenters who point out that some code will have to deal with amounts less than one cent are of course correct, but if you know you will only be dealing with US currency and that you will not have to deal with fractions of a penny, handling currency in pennies makes a lot of sense as a defensive measure. It’s not always appropriate, but where it is I don’t see a problem with it.
21. Pat Farrell Says:
There may be more, but the criminal sin is using floating point for money.
Any language used by folks who might use money to do anything, needs a Money or Currency intrinsic type so that rookie programmers don’t shoot themselves in the feet using floating point.
I’m not asking for Euro to Pounds Sterling to Dollars, just a Money datatype.
22. MHMD Says:
Hi,
My perspective about analyzing the above code in java would be,
BUG:
Total:9.879999999999999
Expected:
Total:9.88
Problem description:
In Java floating points are expressed as fractions in binary formats , so using double as data type for these would have caused the problem.
Solution:
Quick Fix:
1.Use float instead of double that would give the exact answer need for this case.
2.In case if the double is needed in anyways then use NumberFormat class to format the output
Original Solution:
1.Use BigDecimal and NumberFormat class to format the output.
23. John Cowan Says:
Alan Little: Cobol does indeed allow you to say DECIMAL POINT IS COMMA, and you can insert punctuation editing characters wherever you like — you aren’t restricted to putting one every three places. In any case, Indian numbers aren’t every four places: it’s one thousand = 1,000; one lakh (10^5) = 1,00,000, and one crore (10^7) = 1,00,00,000. It’s Archimede’s Sand Reckoner that worked in multiples of one myriad (10^4).
Zack: If MzScheme indeed prints that, it is violating the Scheme standard, which requires that all inexact numbers be printed using the nearest exact numeral. Python is doing the right thing.
|
2013-06-20 03:25:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33871182799339294, "perplexity": 4637.949737780106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710196013/warc/CC-MAIN-20130516131636-00033-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://sisto.blog/post/2021-03-17-linux-users-files/
|
Some of this was inspired by this post. First let’s talk navigating linux. I’m assuming you are familiar with the ls command. Let’s look at it a little further by exploring it’s options and setting and alias. It’s common for ll to be aliased as ls -lh which is more useful than the ls command alone. I typically like to add -a as well to show files preceded with a dot (.). Which essentially means show hidden files. The alias for that would look like alias ll="ls -hal". In addition to the ls command you can use tree to show more directory information. Here is a good article on understanding permissions.
To add the user steve and set password use the commands below.
sudo useradd steve
sudo passwd steve
This will create a new user according to /etc/default/useradd file.
Entries are added to /etc/passwd, /etc/shadow, /etc/group and /etc/gshadow.
It is common for a home directory to be created when adding a new user. For the sake of exercise I want to create the directory and set the permissions myself. Let’s start by switching to the new user su steve. This will change us to the newly created user. Now try to create /home/steve. You will be unable to do so as the steve user. Instead you will have to use sudo. This will set the user and group owner as root. After creating the directory run the commands below to change the owner and group.
sudo chown steve /home/steve/
sudo chown :users /home/steve
Now let’s download some files to play around with to continue learning linux. Use the commands below to download the Linux Pocket Guide and unzip it.
wget http://linuxpocketguide.com/LPG-stuff.tar.gz
I played around with using different file names to see if it would change the digital hash. I tested this using the sha1sum command after downloading it on different machines with different names. Each returned the same signature db2ed9e750930beb4ed0850f143cdcb5b39312c4
sha1sum LPG-stuff.tar.gz
Now use the command below to unzip and extract the contents. Hence the extension .tar (tarball) .gz (gzip)
tar -xf LPG-stuff.tar.gz
Copy the the contents to other directories to really familiarize yourself with the cp, scp, and rsync. Know how and when to use each one. I recommend using rsync to copy files to and from a remote server. Understand the difference between a hardlink and a softlink. Hardlink and softlink files that you just downloaded. Use the ls -i command to show the inode number and see how it is the same for softlink data. You can also use readlink -f to show the root file of a linked file.
Need to include find
find / -name "passwd" find files containing passwd
find / -name "*.txt" find files with extension .txt
|
2021-04-19 02:01:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40493810176849365, "perplexity": 2719.403925759822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038863420.65/warc/CC-MAIN-20210419015157-20210419045157-00251.warc.gz"}
|
https://www.openscience.fr/Entropy-Thermodynamics-Energy-Environment-Economy?lang=en
|
Physics > Home > Journal
# Entropy: Thermodynamics – Energy – Environment – Economy
## Entropie : thermodynamique – énergie – environnement – économie
Entropie - ISSN 2634-1476 - © ISTE Ltd
### Objectifs de la revue
In 1965, the first edition of the journal Entropie announced that thermodynamics was the basis for many industrial applications, but also for advanced techniques (aerospace, particle and universe physics, metrology). It is a science of energy and entropy, a branch that studies the properties of materials and fluids, conversion processes.
But since then, it has also become clear that thermodynamics and energy have a major role in the living world and its evolution. This aspect is therefore an integral part of the themes of this journal, as well as the relationship with the environment and the economy: are we not talking about thermo-economics, climate change with the temperature drift, a thermodynamic notion if ever there was one?
In summary, the "new edition" of Entropie confirms the previous major fundamental and applied themes, but also opens up to various everyday applications in our societies, and offers new sections on the living world, on the economy (thermo-economics) and the environment through a systemic approach.
Le premier éditorial de la revue Entropie annonçait, en 1965, que la thermodynamique est à la base de nombreuses applications industrielles, mais aussi de techniques de pointe (aérospatial, physique des particules et de l’univers, métrologie). Elle est une science de l’énergie et de l’entropie, branche qui étudie les propriétés des matériaux et des fluides, les processus de conversion.
Mais depuis lors, il est aussi apparu que la thermodynamique et l’énergie avait un rôle majeur dans le monde du vivant et de son évolution. Cet aspect fait donc partie intégrante des thèmes de la revue, de même que la relation à l’environnement et l’économie : ne parle-t-on pas de thermo économie, de changement climatique avec la dérive en température, notion thermodynamique s’il en est.
En résumé, la « nouvelle édition » d’Entropie confirme les thèmes majeurs antérieurs fondamentaux et appliqués, mais y ajoute une ouverture sur des applications diffuses de tous les jours dans nos sociétés, et de nouvelles rubriques du côté du monde du vivant, puis de l’économie (thermo-économie) et de l’environnement par une approche systémique.
Volume 20- 1
Issue 1
Issue 2
Issue 3
Issue 4
Volume 21- 2
Issue 1
Special issue
Issue 3
Volume 22- 3
Issue 1
### Recent articles
Air-to-Water Cascade Heat Pump Thermal Performance Modelling for Continental Climate Regions
At low ambient temperatures, the heating capacity and coefficient of performance of a single stage vapour compression heat pump cycle is significantly getting reduced. A two-stage cascade heat pump cycle operating with two (...)
Covid19 and Process Engineering
For about two years, we have been suffering from the effect of the SARS-COV-2 pandemic [Covid-19]. Distraught, politicians in their speeches have gone from the theme of an ordinary flu, without major effects, to more (...)
Entropy analysis in spray cooling for dosing water injection
Spraying water in air improves air-cooling capacity, which then relies on the evaporation of water. Even for small drop sizes, literature reports that the evaporation remains limited inside the spray and below saturation (...)
Augmented Curzon-Ahlborn modelling of Carnot engine
This paper reconsiders the modelling of Curzon-Ahlborn dedicated to Carnot engine. A modified model is proposed, taking account of the period (duration) of the cycle. It allows a two steps optimization, by following a model (...)
The IdEP-IdLA model and the biochemistry of aggregative processes: the Arianna’s conjecture
In previous works an exhaustive description has been obtained of the aggregative processes to which the ideal system described by the so called IdEP-IdLA model can be subject both in closed conditions (at equilibrium) and (...)
Information-entropy equivalence, Maxwell’s demon and the information paradox
Several experiments have been carried out to confirm the Landauer’s principle, according to which the erasure of one bit of information requires a minimum energy dissipation of $K$B $T$ Log 2. They are based on the (...)
Editorial Board
Editor in Chief
Michel FEIDT
Université de Lorraine
[email protected]
Vice Editor in Chief
Philippe GUIBERT
Sorbonne Université
[email protected]
Co-Editors
Ali FELLAH
Université de Gabès
Tunisie
[email protected]
Francois LANZETTA
Université de Franche-Comté
[email protected]
George DARIE
Université Politehnica de Bucarest
Roumanie
[email protected]
Lazlo KISS
Université du Québec à Chicoutimi
[email protected]
Alberto CORONAS
Université Rovira i Virgili
Espagne
[email protected]
Gianpaolo MANFRIDA
Université de Florence
Italie
[email protected]
Phillipe MATHIEU
Université de Liège
Belgique
[email protected]
Vincent GERBAUD
Université de Toulouse
[email protected]
Submit a paper
|
2022-05-17 02:08:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24735336005687714, "perplexity": 12395.454178978332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515466.5/warc/CC-MAIN-20220516235937-20220517025937-00266.warc.gz"}
|
https://gasstationwithoutpumps.wordpress.com/tag/bai/
|
# Gas station without pumps
## 2015 December 31
### Twelfth weight progress report (one year)
Filed under: Uncategorized — gasstationwithoutpumps @ 15:57
Tags: , , , , , , ,
This post continues the series of weight progress reports from the previous one. This report marks one year from my New Year’s resolution at the beginning of 2015. At the beginning of the year, I said I wanted to drop my weight by 10–15 pounds, by which I meant a target weight of 160–165 lbs 0r a BMI of 22.5–23 kg/m2. During the course of the year, I re-defined my goal to a target weight of 155–160 lbs or a BMI of 21.6–22.4 kg/m2. I reached that target range in April, and pretty much stayed there until the end of September, when I had a sudden spike in weight that took me about a month to correct. November and December were not good for my weight, which has drifted up to hover around 161 lbs.
2015 weight record, showing successful weight loss followed by almost successful maintenance.
My weight has drifted outside my target range several times, and the holidays have been particularly bad for keeping it in bounds. I’ve adjusted my target weight to gradually relax the upper limit, to allow it to increase at 0.6 lbs a year, which would allow me to drift up t0 178 lbs over 30 years. But I’m currently over even my relaxed limit, so I’ll have to go back to my strict raw-fruits-and-vegetables-for-lunch diet, which I have not been keeping to very well lately.
My Body Adiposity Index now estimates my body fat at 23.4%, while the estimate from BMI is 24.8%. According to some calibration studies on people in Louisiana, neither estimate is particularly accurate—by that study, the correct value should be around 18±5%.
My exercise for December is way down also (only 2.48 miles/day of bicycling, down from 4.28 miles/day in November) and my total mileage for 2015 is only 1479.4 miles. I was going to do some cycling with my son over break, but he got some mild gastrointestinal bug, and I’ve been a bit under the weather also, so we never got around to doing the bike rides we had planned. Even the short ride we attempted to UCSC to film a short video on oscilloscope usage got cancelled when he threw up halfway up the hill (the first symptom of the gastrointestinal bug, other than fatigue).
## 2011 March 9
### BAI: A Better Index of Body Adiposity
Filed under: Uncategorized — gasstationwithoutpumps @ 09:09
Tags: , ,
In the journal Obesity, there is an article, A Better Index of Body Adiposity, suggesting a new way to estimate how fat people are from simple measurements. The authors hope to replace the body mass index with an equally simple index that does not need so many corrections for race and sex.
The new measure is indeed simple $\mbox{BAI} = p/h^{1.5} - 18$, where $p$ is the hip circumference in cm and $h$ is the height in meters. The 18 is there to make the index be approximately the %fat for the individual, which was directly measured in the calibration sample of 1700 people using dual-energy X-ray absorptiometry (DXA).
I tried applying this measure to myself (104 cm hips, 1.8m tall), and got and estimate of 25% body fat. Hips are measured “at the level of the maximum extension of the buttocks posteriorly in a horizontal plane.” Based on the calibration sample, this puts me in the range where BAI does a good job of estimating %fat. The BAI measure tends to overestimate %fat at both the low end and the high end of the measurements, but for most of the calibration sample it was within about ±5.
My BMI is about 25.1–25.8. Using the formula
Adult body fat % = (1.20 x BMI) + (0.23 x Age) – (10.8 x gender) – 5.4
from Wikipedia’s Body fat percentage, I get an estimate of 26.8–27.6% body fat. I think I like the BAI measure better, though I have no idea which is more accurate. Both measures put me at the low end of the overweight range, which matches my self image.
Of course, there have not been any studies yet to show whether BAI has a better diagnostic value than BMI, but it doesn’t seem to have different scales for males and females, the way that BMI does, so it is likely to be more useful.
|
2020-02-29 01:38:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5206645727157593, "perplexity": 2399.6847521006935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148163.71/warc/CC-MAIN-20200228231614-20200229021614-00413.warc.gz"}
|
http://clay6.com/qa/51918/in-a-crystalline-solid-having-formula-ab-2o-4-oxide-ions-are-arranged-in-cu
|
Browse Questions
In a crystalline solid,having formula $AB_2O_4$,oxide ions are arranged in cubic close packed lattice while cations A are present in tetrahedral voids and cations B are present in octahedral voids.What percentage of the tetrahedral voids is occupied by A and B?
$\begin{array}{1 1}13.5\%,60\%\\12.5\%,50\%\\14.5\%,70\%\\17.5\%,80\%\end{array}$
$\therefore$ For four oxide ions there would be eight tetrahedral and four octahedral voids.
Percentage of tetrahedral voids occupied by A=$\large\frac{1}{8}$$\times 100 \Rightarrow 12.5\% Percentage of tetrahedral voids occupied by B=\large\frac{2}{4}$$\times 100$
$\Rightarrow 50\%$
|
2016-12-09 23:08:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.828583836555481, "perplexity": 14023.047956787494}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542851.96/warc/CC-MAIN-20161202170902-00271-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://nebusresearch.wordpress.com/tag/jumpstart/
|
## Reading the Comics, November 2, 2019: Eugene the Jeep Edition
I knew by Thursday this would be a brief week. The number of mathematically-themed comic strips has been tiny. I’m not upset, as the days turned surprisingly full on me once again. At some point I would have to stop being surprised that every week is busier than I expect, right?
Anyway, the week gives me plenty of chances to look back to 1936, which is great fun for people who didn’t have to live through 1936.
Elzie Segar’s Thimble Theatre rerun for the 28th of October is part of the story introducing Eugene the Jeep. The Jeep has astounding powers which, here, are finally explained as being due to it being a fourth-dimensional creature. Or at least able to move into the fourth dimension. This is amazing for how it shows off the fourth dimension being something you could hang a comic strip plot on, back in the day. (Also back in the day, humor strips with ongoing plots that might run for months were very common. The only syndicated strips like it today are Gasoline Alley, Alley Oop, the current storyline in Safe Havens where they’ve just gone and terraformed Mars, and Popeye, rerunning old daily stories.) The Jeep has many astounding powers, including that he can’t be kept inside — or outside — anywhere against his will, and he’s able to forecast the future.
Could there be a fourth-dimensional animal? I dunno, I’m not a dimensional biologist. It seems like we need a rich chemistry for life to exist. Lots of compounds, many of them long and complicated ones. Can those exist in four dimensions? I don’t know the quantum mechanics of chemical formation well enough to say. I think there’s obvious problems. Electrical attraction and repulsion would fall off much more rapidly with distance than they do in three-dimensional space. This seems like it argues chemical bonds would be weaker things, which generically makes for weaker chemical compounds. So probably a simpler chemistry. On the other hand, what’s interesting in organic chemistry is shapes of molecules, and four dimensions of space offer plenty of room for neat shapes to form. So maybe that compensates for the chemical bonds. I don’t know.
But if we take the premise as given, that there is a four-dimensional animal? With some minor extra assumptions then yeah, the Jeep’s powers fit well enough. Not being able to be enclosed follows almost naturally. You, a three-dimensional being, can’t be held against your will by someone tracing a line on the floor around you. The Jeep — if the fourth dimension is as easy to move through as the third — has the same ability.
Forecasting the future, though? We have a long history of treating time as “the” fourth dimension. There’s ways that this makes good organizational sense. But we do have to treat time as somehow different from space, even to make, for example, general relativity work out. If the Jeep can see and move through time? Well, yeah, then if he wants he can check on something for you, at least if it’s something whose outcome he can witness. If it’s not, though? Well, maybe the flow of events from the fourth dimension is more obvious than it is from a mere three, in the way that maybe you can spot something coming down the creek easily, from above, in a way that people on the water can’t tell.
Olive Oyl and Popeye use the Jeep to tease one another, asking for definite answers about whether the other is cute or not. This seems outside the realm of things that the fourth dimension could explain. In the 1960s cartoons he even picks up the power to electrically shock offenders; I don’t remember if this was in the comic strips at all.
Elzie Segar’s Thimble Theatre rerun for the 29th of October has Wimpy doing his best to explain the fourth dimension. I think there’s a warning here for mathematician popularizers here. He gets off to a fair start and then it all turns into a muddle. Explaining the fourth dimension in terms of the three dimensions we’re familiar with seems like a good start. Appealing to our intuition to understand something we have to reason about has a long and usually successful history. But then Wimpy goes into a lot of talk about the mystery of things, and it feels like it’s all an appeal to the strangeness of the fourth dimension. I don’t blame Popeye for not feeling it’s cleared anything up. Segar would come back, in this storyline, to several other attempted explanations of the Jeep’s powers, although they do come back around to, y’know, it’s a magical animal. They’re all over the place in the Popeye comic universe.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 28th of October is a riff on predictability and encryption. Good encryption schemes rely on randomness. Concealing the content of a message means matching it to an alternate message. Each of the alternate messages should be equally likely to be transmitted. This way, someone who hasn’t got the key would not be able to tell what’s being sent. The catch is that computers do not truly do randomness. They mostly rely on quasirandom schemes that could, in principle, be detected and spoiled. There are ways to get randomness, mostly involving putting in something from the real world. Sensors that detect tiny fluctuations in temperature, for example, or radio detectors. I recall one company going for style and using a wall of lava lamps, so that the rise and fall of lumps were in some way encoded into unpredictable numbers.
Robb Armstrong’s JumpStart for the 2nd of November is a riff on the Birthday “Paradox”, the thing where you’re surprised to find someone shares a birthday with you. (I have one small circle of friends featuring two people who share my birthday, neatly enough.) Paradox is in quotes because it defies only intuition, not logic. The logic is clear that you need only a couple dozen people before some pair will probably share a birthday. Marcie goes overboard in trying to guess how many people at her workplace would share their birthday on top of that. Birthdays are nearly uniformly spread across all days of the year. There are slight variations; September birthdays are a little more likely than, say, April ones; the 13th of any month is a less likely birthday than the 12th or the 24th are. But this is a minor correction, aptly ignored when you’re doing a rough calculation. With 615 birthdays spread out over the year you’d expect the average day to be the birthday of about 1.7 people. (To be not silly about this, a ten-day span should see about 17 birthdays.) However, there are going to be “clumps”, days where three or even four people have birthdays. There will be gaps, days nobody has a birthday, or even streaks of days where nobody has a birthday. If there weren’t a fair number of days with a lot of birthdays, and days with none, we’d have to suspect birthdays weren’t random here.
There were also a handful of comic strips just mentioning mathematics, that I can’t make anything in depth about. Here’s two.
T Shepherd’s Snow Sez for the 1st of November nominally talks about how counting can be a good way to meditate. It can also become a compulsion, with hazards, though.
Terri Libenson’s The Pajama Diaries for the 2nd of November uses mathematics as the sort of indisputably safe topic that someone can discuss in place of something awkward.
And that is all I have to say for last week’s comics. Tuesday I should publish the next Fall 2019 A to Z essay. I also figure to open the end of the alphabet up to nominations this week. My next planned Reading the Comic post should be Sunday. Thanks for reading.
## Reading the Comics, August 16, 2019: The Comments Drive Me Crazy Edition
Last week was another light week of work from Comic Strip Master Command. One could fairly argue that nothing is worth my attention. Except … one comic strip got onto the calendar. And that, my friends, is demanding I pay attention. Because the comic strip got multiple things wrong. And then the comments on GoComics got it more wrong. Got things wrong to the point that I could not be sure people weren’t trolling each other. I know how nerds work. They do this. It’s not pretty. So since I have the responsibility to correct strangers online I’ll focus a bit on that.
Robb Armstrong’s JumpStart for the 13th starts off all right. The early Roman calendar had ten months, December the tenth of them. This was a calendar that didn’t try to cover the whole year. It just started in spring and ran into early winter and that was it. This may seem baffling to us moderns, but it is, I promise you, the least confusing aspect of the Roman calendar. This may seem less strange if you think of the Roman calendar as like a sports team’s calendar, or a playhouse’s schedule of shows, or a timeline for a particular complicated event. There are just some fallow months that don’t need mention.
Things go wrong with Rob’s claim that December will have five Saturdays, five Sundays, and five Mondays. December 2019 will have no such thing. It has four Saturdays. There are five Sundays, Mondays, and Tuesdays. From Crunchy’s response it sounds like Joe’s run across some Internet Dubious Science Folklore. You know, where you see a claim that (like) Saturn will be larger in the sky than anytime since the glaciers receded or something. And as you’d expect, it’s gotten a bit out of date. December 2018 had five Saturdays, Sundays, and Mondays. So did December 2012. And December 2007.
And as this shows, that’s not a rare thing. Any month with 31 days will have five of some three days in the week. August 2019, for example, has five Thursdays, Fridays, and Saturdays. October 2019 will have five Tuesdays, Wednesdays, and Thursdays. This we can show by the pigeonhole principle. And there are seven months each with 31 days in every year.
It’s not every year that has some month with five Saturdays, Sundays, and Mondays in it. 2024 will not, for example. But a lot of years do. I’m not sure why December gets singled out for attention here. From the setup about December having long ago been the tenth month, I guess it’s some attempt to link the fives of the weekend days to the ten of the month number. But we get this kind of December about every five or six years.
This 823 years stuff, now that’s just gibberish. The Gregorian calendar has its wonders and mysteries yes. None of them have anything to do with 823 years. Here, people in the comments got really bad at explaining what was going on.
So. There are fourteen different … let me call them year plans, available to the Gregorian calendar. January can start on a Sunday when it is a leap year. Or January can start on a Sunday when it is not a leap year. January can start on a Monday when it is a leap year. January can start on a Monday when it is not a leap year. And so on. So there are fourteen possible arrangements of the twelve months of the year, what days of the week the twentieth of January and the thirtieth of December can occur on. The incautious might think this means there’s a period of fourteen years in the calendar. This comes from misapplying the pigeonhole principle.
Here’s the trouble. January 2019 started on a Tuesday. This implies that January 2020 starts on a Wednesday. January 2025 also starts on a Wednesday. But January 2024 starts on a Monday. You start to see the pattern. If this is not a leap year, the next year starts one day of the week later than this one. If this is a leap year, the next year starts two days of the week later. This is all a slightly annoying pattern, but it means that, typically, it takes 28 years to get back where you started. January 2019 started on Tuesday; January 2020 on Wednesday, and January 2021 on Friday. the same will hold for January 2047 and 2048 and 2049. There are other successive years that will start on Tuesday and Wednesday and Friday before that.
Except.
The important difference between the Julian and the Gregorian calendars is century years. 1900. 2000. 2100. These are all leap years by the Julian calendar reckoning. Most of them are not, by the Gregorian. Only century years divisible by 400 are. 2000 was a leap year; 2400 will be. 1900 was not; 2100 will not be, by the Gregorian scheme.
These exceptions to the leap-year-every-four-years pattern mess things up. The 28-year-period does not work if it stretches across a non-leap-year century year. By the way, if you have a friend who’s a programmer who has to deal with calendars? That friend hates being a programmer who has to deal with calendars.
There is still a period. It’s just a longer period. Happily the Gregorian calendar has a period of 400 years. The whole sequence of year patterns from 2000 through 2019 will reappear, 2400 through 2419. 2800 through 2819. 3200 through 3219.
(Whether they were also the year patterns for 1600 through 1619 depends on where you are. Countries which adopted the Gregorian calendar promptly? Yes. Countries which held out against it, such as Turkey or the United Kingdom? No. Other places? Other, possibly quite complicated, stories. If you ask your computer for the 1619 calendar it may well look nothing like 2019’s, and that’s because it is showing the Julian rather than Gregorian calendar.)
Except.
This is all in reference to the days of the week. The date of Easter, and all of the movable holidays tied to Easter, is on a completely different cycle. Easter is set by … oh, dear. Well, it’s supposed to be a simple enough idea: the Sunday after the first spring full moon. It uses a notional moon that’s less difficult to predict than the real one. It’s still a bit of a mess. The date of Easter is periodic again, yes. But the period is crazy long. It would take 5,700,000 years to complete its cycle on the Gregorian calendar. It never will. Never try to predict Easter. It won’t go well. Don’t believe anything amazing you read about Easter online.
Michael Jantze’s The Norm (Classics) for the 15th is much less trouble. It uses some mathematics to represent things being easy and things being hard. Easy’s represented with arithmetic. Hard is represented with the calculations of quantum mechanics. Which, oddly, look very much like arithmetic. $\phi = BA$ even has fewer symbols than $1 + 1 = 2$ has. But the symbols mean different abstract things. In a quantum mechanics context, ‘A’ and ‘B’ represent — well, possibly matrices. More likely operators. Operators work a lot like functions and I’m going to skip discussing the ways they don’t. Multiplying operators together — B times A, here — works by using the range of one function as the domain of the other. Like, imagine ‘B’ means ‘take the square of’ and ‘A’ means ‘take the sine of’. Then ‘BA’ would mean ‘take the square of the sine of’ (something). The fun part is the ‘AB’ would mean ‘take the sine of the square of’ (something). Which is fun because most of the time, those won’t have the same value. We accept that, mathematically. It turns out to work well for some quantum mechanics properties, even though it doesn’t work like regular arithmetic. So $\phi = BA$ holds complexity, or at least strangeness, in its few symbols.
Henry Scarpelli and Craig Boldman’s Archie for the 16th is a joke about doing arithmetic on your fingers and toes. That’s enough for me.
There were some more comic strips which just mentioned mathematics in passing.
Brian Boychuk and Ron Boychuk’s The Chuckle Brothers rerun for the 11th has a blackboard of mathematics used to represent deep thinking. Also, it I think, the colorist didn’t realize that they were standing in front of a blackboard. You can see mathematicians doing work in several colors, either to convey information in shorthand or because they had several colors of chalk. Not this way, though.
Mark Leiknes’s Cow and Boy rerun for the 16th mentions “being good at math” as something to respect cows for. The comic’s just this past week started over from its beginning. If you’re interested in deeply weird and long-since cancelled comics this is as good a chance to jump on as you can get.
And Stephen ‘s Herb and Jamaal rerun for the 16th has a kid worried about a mathematics test.
That’s the mathematically-themed comic strips for last week. All my Reading the Comics essays should be at this link. I’ve traditionally run at least one essay a week on Sunday. But recently that’s moved to Tuesday for no truly compelling reason. That seems like it’s working for me, though. I may stick with it. If you do have an opinion about Sunday versus Tuesday please let me know.
Don’t let me know on Twitter. I continue to have this problem where Twitter won’t load on Safari. I don’t know why. I’m this close to trying it out on a different web browser.
And, again, I’m planning a fresh A To Z sequence. It’s never to early to think of mathematics topics that I might explain. I should probably have already started writing some. But you’ll know the official announcement when it comes. It’ll have art and everything.
|
2021-10-15 20:10:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40035051107406616, "perplexity": 1380.875917614067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583083.92/warc/CC-MAIN-20211015192439-20211015222439-00701.warc.gz"}
|
https://www.jobilize.com/trigonometry/test/key-concepts-double-angle-half-angle-and-reduction-by-openstax?qcr=www.quizover.com
|
# 9.3 Double-angle, half-angle, and reduction formulas (Page 4/8)
Page 4 / 8
Given that $\text{\hspace{0.17em}}\mathrm{sin}\text{\hspace{0.17em}}\alpha =-\frac{4}{5}\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\alpha \text{\hspace{0.17em}}$ lies in quadrant IV, find the exact value of $\text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}\left(\frac{\alpha }{2}\right).$
$-\frac{2}{\sqrt{5}}$
## Finding the measurement of a half angle
Now, we will return to the problem posed at the beginning of the section. A bicycle ramp is constructed for high-level competition with an angle of $\text{\hspace{0.17em}}\theta \text{\hspace{0.17em}}$ formed by the ramp and the ground. Another ramp is to be constructed half as steep for novice competition. If $\text{\hspace{0.17em}}\mathrm{tan}\text{\hspace{0.17em}}\theta =\frac{5}{3}\text{\hspace{0.17em}}$ for higher-level competition, what is the measurement of the angle for novice competition?
Since the angle for novice competition measures half the steepness of the angle for the high level competition, and $\text{\hspace{0.17em}}\mathrm{tan}\text{\hspace{0.17em}}\theta =\frac{5}{3}\text{\hspace{0.17em}}$ for high competition, we can find $\text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}\theta \text{\hspace{0.17em}}$ from the right triangle and the Pythagorean theorem so that we can use the half-angle identities. See [link] .
$\begin{array}{ccc}\hfill {3}^{2}+{5}^{2}& =& 34\hfill \\ \hfill c& =& \sqrt{34}\hfill \end{array}$
We see that $\text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}\theta =\frac{3}{\sqrt{34}}=\frac{3\sqrt{34}}{34}.\text{\hspace{0.17em}}$ We can use the half-angle formula for tangent: $\text{\hspace{0.17em}}\mathrm{tan}\text{\hspace{0.17em}}\frac{\theta }{2}=\sqrt{\frac{1-\mathrm{cos}\text{\hspace{0.17em}}\theta }{1+\mathrm{cos}\text{\hspace{0.17em}}\theta }}.\text{\hspace{0.17em}}$ Since $\text{\hspace{0.17em}}\mathrm{tan}\text{\hspace{0.17em}}\theta \text{\hspace{0.17em}}$ is in the first quadrant, so is $\text{\hspace{0.17em}}\mathrm{tan}\text{\hspace{0.17em}}\frac{\theta }{2}.\text{\hspace{0.17em}}$
$\begin{array}{ccc}\hfill \mathrm{tan}\text{\hspace{0.17em}}\frac{\theta }{2}& =& \sqrt{\frac{1-\frac{3\sqrt{34}}{34}}{1+\frac{3\sqrt{34}}{34}}}\hfill \\ & =& \sqrt{\frac{\frac{34-3\sqrt{34}}{34}}{\frac{34+3\sqrt{34}}{34}}}\hfill \\ & =& \sqrt{\frac{34-3\sqrt{34}}{34+3\sqrt{34}}}\hfill \\ & \approx & 0.57\hfill \end{array}$
We can take the inverse tangent to find the angle: $\text{\hspace{0.17em}}{\mathrm{tan}}^{-1}\left(0.57\right)\approx 29.7°.\text{\hspace{0.17em}}$ So the angle of the ramp for novice competition is $\text{\hspace{0.17em}}\approx 29.7°.$
Access these online resources for additional instruction and practice with double-angle, half-angle, and reduction formulas.
## Key equations
Double-angle formulas $\begin{array}{ccc}\hfill \mathrm{sin}\left(2\theta \right)& =& 2\mathrm{sin}\text{\hspace{0.17em}}\theta \text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}\theta \hfill \\ \hfill \mathrm{cos}\left(2\theta \right)& =& {\mathrm{cos}}^{2}\theta -{\mathrm{sin}}^{2}\theta \hfill \\ & =& 1-2{\mathrm{sin}}^{2}\theta \hfill \\ & =& 2{\mathrm{cos}}^{2}\theta -1\hfill \\ \hfill \mathrm{tan}\left(2\theta \right)& =& \frac{2\mathrm{tan}\text{\hspace{0.17em}}\theta }{1-{\mathrm{tan}}^{2}\theta }\hfill \end{array}$ Reduction formulas $\begin{array}{ccc}\hfill {\mathrm{sin}}^{2}\theta & =& \frac{1-\mathrm{cos}\left(2\theta \right)}{2}\hfill \\ \hfill {\mathrm{cos}}^{2}\theta & =& \frac{1+\mathrm{cos}\left(2\theta \right)}{2}\hfill \\ \hfill {\mathrm{tan}}^{2}\theta & =& \frac{1-\mathrm{cos}\left(2\theta \right)}{1+\mathrm{cos}\left(2\theta \right)}\hfill \end{array}$ Half-angle formulas $\begin{array}{ccc}\hfill \mathrm{sin}\text{\hspace{0.17em}}\frac{\alpha }{2}& =& ±\sqrt{\frac{1-\mathrm{cos}\text{\hspace{0.17em}}\alpha }{2}}\hfill \\ \hfill \mathrm{cos}\text{\hspace{0.17em}}\frac{\alpha }{2}& =& ±\sqrt{\frac{1+\mathrm{cos}\text{\hspace{0.17em}}\alpha }{2}}\hfill \\ \hfill \mathrm{tan}\text{\hspace{0.17em}}\frac{\alpha }{2}& =& ±\sqrt{\frac{1-\mathrm{cos}\text{\hspace{0.17em}}\alpha }{1+\mathrm{cos}\text{\hspace{0.17em}}\alpha }}\hfill \\ & =& \frac{\mathrm{sin}\text{\hspace{0.17em}}\alpha }{1+\mathrm{cos}\text{\hspace{0.17em}}\alpha }\hfill \\ & =& \frac{1-\mathrm{cos}\text{\hspace{0.17em}}\alpha }{\mathrm{sin}\text{\hspace{0.17em}}\alpha }\hfill \end{array}$
## Key concepts
• Double-angle identities are derived from the sum formulas of the fundamental trigonometric functions: sine, cosine, and tangent. See [link] , [link] , [link] , and [link] .
• Reduction formulas are especially useful in calculus, as they allow us to reduce the power of the trigonometric term. See [link] and [link] .
• Half-angle formulas allow us to find the value of trigonometric functions involving half-angles, whether the original angle is known or not. See [link] , [link] , and [link] .
## Verbal
Explain how to determine the reduction identities from the double-angle identity $\text{\hspace{0.17em}}\mathrm{cos}\left(2x\right)={\mathrm{cos}}^{2}x-{\mathrm{sin}}^{2}x.$
Use the Pythagorean identities and isolate the squared term.
Explain how to determine the double-angle formula for $\text{\hspace{0.17em}}\mathrm{tan}\left(2x\right)\text{\hspace{0.17em}}$ using the double-angle formulas for $\text{\hspace{0.17em}}\mathrm{cos}\left(2x\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\mathrm{sin}\left(2x\right).$
We can determine the half-angle formula for $\text{\hspace{0.17em}}\mathrm{tan}\left(\frac{x}{2}\right)=\frac{\sqrt{1-\mathrm{cos}\text{\hspace{0.17em}}x}}{\sqrt{1+\mathrm{cos}\text{\hspace{0.17em}}x}}\text{\hspace{0.17em}}$ by dividing the formula for $\text{\hspace{0.17em}}\mathrm{sin}\left(\frac{x}{2}\right)\text{\hspace{0.17em}}$ by $\text{\hspace{0.17em}}\mathrm{cos}\left(\frac{x}{2}\right).\text{\hspace{0.17em}}$ Explain how to determine two formulas for $\text{\hspace{0.17em}}\mathrm{tan}\left(\frac{x}{2}\right)\text{\hspace{0.17em}}$ that do not involve any square roots.
$\text{\hspace{0.17em}}\frac{1-\mathrm{cos}\text{\hspace{0.17em}}x}{\mathrm{sin}\text{\hspace{0.17em}}x},\frac{\mathrm{sin}\text{\hspace{0.17em}}x}{1+\mathrm{cos}\text{\hspace{0.17em}}x},$ multiplying the top and bottom by $\text{\hspace{0.17em}}\sqrt{1-\mathrm{cos}\text{\hspace{0.17em}}x}\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\sqrt{1+\mathrm{cos}\text{\hspace{0.17em}}x},$ respectively.
For the half-angle formula given in the previous exercise for $\text{\hspace{0.17em}}\mathrm{tan}\left(\frac{x}{2}\right),$ explain why dividing by 0 is not a concern. (Hint: examine the values of $\text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}x\text{\hspace{0.17em}}$ necessary for the denominator to be 0.)
## Algebraic
For the following exercises, find the exact values of a) $\text{\hspace{0.17em}}\mathrm{sin}\left(2x\right),$ b) $\text{\hspace{0.17em}}\mathrm{cos}\left(2x\right),$ and c) $\text{\hspace{0.17em}}\mathrm{tan}\left(2x\right)\text{\hspace{0.17em}}$ without solving for $\text{\hspace{0.17em}}x.$
If $\text{\hspace{0.17em}}\mathrm{sin}\text{\hspace{0.17em}}x=\frac{1}{8},$ and $\text{\hspace{0.17em}}x\text{\hspace{0.17em}}$ is in quadrant I.
a) $\text{\hspace{0.17em}}\frac{3\sqrt{7}}{32}\text{\hspace{0.17em}}$ b) $\text{\hspace{0.17em}}\frac{31}{32}\text{\hspace{0.17em}}$ c) $\text{\hspace{0.17em}}\frac{3\sqrt{7}}{31}$
If $\text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}x=\frac{2}{3},$ and $\text{\hspace{0.17em}}x\text{\hspace{0.17em}}$ is in quadrant I.
If $\text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}x=-\frac{1}{2},$ and $\text{\hspace{0.17em}}x\text{\hspace{0.17em}}$ is in quadrant III.
a) $\text{\hspace{0.17em}}\frac{\sqrt{3}}{2}\text{\hspace{0.17em}}$ b) $\text{\hspace{0.17em}}-\frac{1}{2}\text{\hspace{0.17em}}$ c) $\text{\hspace{0.17em}}-\sqrt{3}\text{\hspace{0.17em}}$
simplify cot x / csc x
what is the period of cos?
Patrick
simplify: cot x/csc x
Catherine
sorry i didnt realize you were actually asking someone else to put their question on here. i thought this was where i was supposed to.
Catherine
some to dereve formula for bulky density
kurash
Solve Given that: cotx/cosx =cosx/sinx/cosx =1/sinx =cosecx Ans.
Vijay
if tan alpha + beta is equal to sin x + Y then prove that X square + Y square - 2 I got hyperbole 2 Beta + 1 is equal to zero
sin^4+sin^2=1, prove that tan^2-tan^4+1=0
what is the formula used for this question? "Jamal wants to save \$54,000 for a down payment on a home. How much will he need to invest in an account with 8.2% APR, compounding daily, in order to reach his goal in 5 years?"
i don't need help solving it I just need a memory jogger please.
Kuz
A = P(1 + r/n) ^rt
Dale
how to solve an expression when equal to zero
its a very simple
Kavita
gave your expression then i solve
Kavita
Hy guys, I have a problem when it comes on solving equations and expressions, can you help me 😭😭
Thuli
Tomorrow its an revision on factorising and Simplifying...
Thuli
ok sent the quiz
kurash
send
Kavita
Hi
Masum
What is the value of log-1
Masum
the value of log1=0
Kavita
Log(-1)
Masum
What is the value of i^i
Masum
log -1 is 1.36
kurash
No
Masum
no I m right
Kavita
No sister.
Masum
no I m right
Kavita
tan20°×tan30°×tan45°×tan50°×tan60°×tan70°
jaldi batao
Joju
Find the value of x between 0degree and 360 degree which satisfy the equation 3sinx =tanx
what is sine?
what is the standard form of 1
1×10^0
Akugry
Evalute exponential functions
30
Shani
The sides of a triangle are three consecutive natural number numbers and it's largest angle is twice the smallest one. determine the sides of a triangle
Will be with you shortly
Inkoom
3, 4, 5 principle from geo? sounds like a 90 and 2 45's to me that my answer
Neese
Gaurav
prove that [a+b, b+c, c+a]= 2[a b c]
can't prove
Akugry
i can prove [a+b+b+c+c+a]=2[a+b+c]
this is simple
Akugry
hi
Stormzy
x exposant 4 + 4 x exposant 3 + 8 exposant 2 + 4 x + 1 = 0
x exposent4+4x exposent3+8x exposent2+4x+1=0
HERVE
How can I solve for a domain and a codomains in a given function?
ranges
EDWIN
Thank you I mean range sir.
Oliver
|
2019-10-23 11:17:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 48, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8425331711769104, "perplexity": 1074.7511565003165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833089.90/warc/CC-MAIN-20191023094558-20191023122058-00507.warc.gz"}
|
https://archive.lib.msu.edu/crcmath/math/math/e/e045.htm
|
## Eilenberg-Steenrod Axioms
A family of Functors from the Category of pairs of Topological Spaces and continuous maps, to the Category of Abelian Groups and group homomorphisms satisfies the Eilenberg-Steenrod axioms if the following conditions hold.
1. Long Exact Sequence of a Pair Axiom. For every pair , there is a natural long exact sequence
(1)
where the Map is induced by the Inclusion Map and is induced by the Inclusion Map . The Map is called the Boundary Map.
2. Homotopy Axiom. If is homotopic to , then their Induced Maps and are the same.
3. Excision Axiom. If is a Space with Subspaces and such that the Closure of is contained in the interior of , then the Inclusion Map induces an isomorphism .
4. Dimension Axiom. Let be a single point space. unless , in which case where are some Groups. The are called the Coefficients of the Homology theory .
These are the axioms for a generalized homology theory. For a cohomology theory, instead of requiring that be a Functor, it is required to be a co-functor (meaning the Induced Map points in the opposite direction). With that modification, the axioms are essentially the same (except that all the induced maps point backwards).
See also Aleksandrov-Cech Cohomology
© 1996-9 Eric W. Weisstein
1999-05-25
|
2021-12-02 16:39:11
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9744133353233337, "perplexity": 323.3218118573904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362230.18/warc/CC-MAIN-20211202145130-20211202175130-00313.warc.gz"}
|
https://blog.isec.pl/ntru-public-key-cryptosystem-explained/
|
# NTRU public key cryptosystem explained
Nov 08, 2019
Author: Mateusz Piotr Siwiec
## Introduction
Most of modern cryptographic algorithms and protocols rely on computational hardness of certain mathematical problems such as factorization of products of two large prime numbers (RSA) or discrete logarithm over certain groups (Diffie-Hellman key exchange, ElGamal encryption system). These problems are believed to have no efficient (polynomial time) solutions, so any cryptographic protocol based on them should be at least as hard to break. Since we can assume that any potential adversary has bounded computational power we can expect those protocols to be secure. This is, however, not necessarily always the case. If the adversary has computing power of a quantum computer (which in a couple of years might not seem to be so abstract), such algorithmic problems (or at least some of them) turn out to be easily solvable. For example the RSA modulus $N = pq$ can be factored using Shor's algorithm in polynomial time. Therefore RSA and many other popular protocols such as the Diffie-Hellman key exchange, ElGamal encryption scheme, DES or ECC (Elliptic-curve cryptography) would be broken. Those will have to be replaced with something that will be secure assuming greater computing power of a potential adversary.
Luckily, there are several algorithmic problems that are believed to be hard to solve on both classical and quantum computers. There are numerous different groups of protocols such as hash-based or code-based cryptography, multivariate-quadratic-equations cryptography or lattice-based cryptography. NTRU (NTRUEncrypt and NTRUSign) is a public key cryptographic system based on hardness of a certain mathematical problem involving special points in a lattice.
## NTRUEncrypt
### Lattices
An $n$-dimensional lattice can be visualised as a regular "grid" of points in a $n$-dimensional space. It is a set of vectors (points)
$$$$L(v_1, v_2, \dots, v_n) = \biggl\{ \sum_{i=1}^n a_iv_i \mid a_i \in \mathbb{Z} \text{ for } 1 \leq i \leq n \biggr\},$$$$
where $v_1, v_2, \dots, v_n \in \mathbb{Z}^n$ are linearly independent vectors of integer coordinates. For example, in a $2$-dimensional space a simple lattice is a set of all points of integer coordinates (with $v_1 = [0,1]$ and $v_2 = [1,0]$). Equivalently, a lattice is a set of points with integer coordinates in certain basis.
We can also use a more coherent way of describing a lattice by simply putting all the vectors $v_1, v_2, \dots, v_n$ as columns of a matrix $B \in \mathbb{Z}^{n \times n}$ and writing that
$$$$L(B) = L([v_1, v_2, \dots, v_n]) = L(v_1, v_2, \dots, v_n) = \lbrace Bx \mid x \in \mathbb{Z}^n\rbrace.$$$$
We can now define a $q$-ary lattice which is going to be a lattice whose points' coordinates are all taken mod $q$.
Let's take any matrix $A \in \mathbb{Z}_q^{n\times m}$ (a matrix of integers with $n$ rows and $m$ columns). Now we can define an $m$-dimensional $q$-ary lattice
$$$$L_q(A) = \lbrace y \in \mathbb{Z}^m \mid y = A^Ts \text{ mod } q,\, s \in \mathbb{Z}^n\rbrace$$$$
This definition is enough to formulate two most important computationally difficult problems which are the core of lattice-based cryptography protocols:
• SVP – Shortest Vector Problem: Given the basis vectors $v_1, v_2, \dots, v_n$ of an $n$-dimensional lattice $L$, find the shortest non-zero vector in $L$.
• CVP – Closest Vector Problem: Given the basis vectors $v_1, v_2, \dots, v_n$ of an $n$-dimensional lattice $L$ and the vector $v$, find a vector in $L$, that is closest to $v$.
• SIVP – Shortest Independent Vector Problem: Given the basis vectors $v_1, v_2, \dots, v_n$ of an $n$-dimensional lattice $L$, find a new base $v'_1, v'_2, \dots, v'_n$ of the lattice $L$, which minimizes the length of the longest basis vector.
where by the length of a vector we most often mean the Euclidean norm, defined simply by
$$\newcommand{\norm}[1]{\left\lVert#1\right\rVert} \norm{x}_n = \norm{[x_1, x_2, \dots, x_n]}_n = \sqrt{x_1^2 + x_2^2 + \dots + x_n^2}.$$
The basic idea behind lattice-based cryptography is that we can represent any lattice using many different bases, some of which are easy to work with'', whereas others require complex computations to achieve even a polynomial approximation of the solution to the SVP/CVP. The most popular way of finding an approximate solution to the SVP is the lattice reduction method: LLL (Lensta-Lenstra-Lovasz), which gives quite good results, but the solution (a short vector) found using LLL in polynomial time is only guaranteed to be shorter than $\left(\frac{2}{\sqrt{3}}\right)^Ns$, where $s$ is the length of the shortest vector in the lattice, which is often not enough.
In general, the CVP problem is known to be NP-complete and the SVP is thought to be NP-hard (it is still yet to be proven), and there are no efficient ways of finding even a good approximation of their solutions.
### GGH
Two most famous examples of lattice-based public-key encryption protocols are GGH (Goldreich-Goldwaser-Halevi) and NTRUEncrypt. In 2008 some important GGH vulnerabilities were discovered by P. Nguyen, that showed information leakage from the ciphertext and the possibility of reducing the CVP problem (which was the basis of the security of the protocol) to its special case – which was a core design flaw and the protocol is no longer considered to be secure. The algorithm behind GGH, however, shows really well how the hardness'' of the CVP can be used to design a public-key cryptosystem.
Let's take two lattice bases ($N$ linearly independent vectors in $\mathbb{Z}^N$): $B$ and $H$. We can generate $B$ and $H$ in such a way that $L(B) = L(H)$ -- the two bases both generate the lattice $L = L(B) = L(H)$ and $B$ is easy to work with (is \textit{almost} orthogonal, and its vectors are as short as possible), whereas $H$ is not – it is difficult to solve the SVP and CVP in $L$ given only this basis. We set $B$ to be the private key and $H$ to be the public key. The encryption is simply taking any vector $v \in L$ and adding to it another vector $m$ (the message). The ciphertext is then $c = v + m$. The decryption is simply finding a point $v'$ in $L$ which is closest to $c$ (we expect it to be the same $v$ as chosen during encryption) and subtracting it from $c$. It can be done only when given a basis $B$, with which it is easy to solve the CVP. If $v = v'$ we are able to decrypt the ciphertext and get the vector $m' = c - v'$. In order to have $m = m'$ we must choose the vector $m$ to be relatively short so that the closest vector $v' = v$. This description of the protocol is very simplified but shows the general idea behind it.
### NTRUEncrypt
Just like in any other public-key cryptosystem, in order to allow a secure communication we require both private and public key, and two functions:
$$\texttt{Encrypt}(\texttt{message},\texttt{public_key})$$
and
$$\texttt{Decrypt}(\texttt{ciphertext},\texttt{private_key}).$$
We further assume that there are some global values known to everybody:
• $N$ – an integer that determines the dimension'',
• $p$, $q$ – two co-prime integer moduli ($gcd(p,q) = 1$),
• $d_f$, $d_g$, $d_r$, $d_m$ – integer bounds on private key, public key, and the message space.
The encryption and decryption in NTRU is described in terms of polynomials (in $\mathbb{Z}\lbrack X\rbrack /(X^N-1)$) of integer coefficients of degree $\leq n-1$, but it is just a more coherent way of describing the protocol instead of using vectors in lattices. The main idea remains unchanged.
We need three operations on polynomials: convolutional product, finding an inverse of a polynomial and reduction of a polynomial modulo an integer.
Convolutional product of two polynomials $f$ and $g$ is defined simply by
$$f\star g = \sum_{i+j=k \text{ mod } N} f_i g_j,$$
where
$$f = \sum_{i=0}^{N-1} f_ix^i\text{, and }g = \sum_{i=0}^{N-1} g_ix^i.$$
It means that after a normal'' multiplication of two polynomials we must also reduce all the powers of $X$ modulo $N$.
The inverse $f^{-1}$ of a polynomial $f$ is a polynomial $f^{-1}$ that
$$f \star f^{-1} = f^{-1}\star f = 1$$
Reducing a polynomial modulo $q$ simply means taking all is coefficients mod $q$.
We will also use the notation $\#_af$ which means the number of coefficients in $f$ equal to $a$. For example if $N = 5$
$$f(x) = 2x^4 + 2x^3 + x + 3 \\ \#_0f = 1 \\ \#_1f = 1 \\ \#_2f = 2 \\ \#_3f = 1$$
### Private key
The private key consists of two polynomials $f$ and $g$, such that:
$$\#_{1}f = d_f\text{, }\#_{-1}f = d_f-1\text{, and }\#_{0}f = N-2d_f+1$$
and:
$$\#_{1}g = \#_{-1}g = d_g\text{, and }\#_{0}g=N-2d+g.$$
We also require that $f$ in invertible modulo $q$ which means there exists a $f_p^{-1}$ and $f_q^{-1}$, such that
$$f \star f_p^{-1} = 1 \text{ mod } p,$$
$$f \star f_q^{-1} = 1 \text{ mod } q.$$
The pair $(f,g)$ is the private key. For efficiency reasons $f_q^{-1}$ is often calculated during key generation and later stored as a part of the private key.
### Public key
The public key is simply
$$h = f_q^{-1}\star g \text{ mod } q.$$
### Encryption
Encryption in NTRU consists of two steps. First, the message has to be encoded as a trinary message $m$ which must also satisfy:
$$\#_1m=\#_{-1}m=d_m \text{, and } \#_0m=N-2d_m$$
and a random polynomial $r$ must be generated, such that:
$$\#_1r=\#_{-1}r=d_r\text{, and }\#_0r=N-2d_r.$$
Then the ciphertext $c$ is defined by
$$c = m + pr\star h \text{ mod } q,$$ where the coefficients of $c$ are reduced modulo $q$ in such a way that they are in $(-\frac{q}{2}, \frac{q}{2}]$, so that $c$ is also centered around $0$.
### Decryption
The decryption is just $\star$-multiplying $c$ by $f$:
$$c\star f = f\star m + f\star pr \star h \text{ mod }q\\ = f\star m + pr \star f \star f_q^{-1}\star g \text{ mod } q\\ = f \star m + pr \star g \text{ mod } q$$
From which we can calculate $f\star m$ by reducing mod $p$
$$f\star m = (f \star m + pr \star g \text{ mod } q) \text { mod } p,$$ which – after multiplying by $f_p^{-1}$ – gives the plaintext $m$.
The above algorithm is called NTRUEncrypt (NTRU Encryption Algorithm) which with the NTRUSign (NTRU Signature Algorithm) form the NTRU public key cryptosystem, and was first described in NTRU: A newhigh speed public key cryptosystem (1996). There are multiple resources available online regarding the exact values of the parameters. Typically $p=3$, $q=2^{11}=2048$ and the dimension $N$ is set to be a prime number, for example $443$, $587$ or $743$ depending on the required level of security. There are multiple publications discussing the choice of NTRU parameters and the security estimates available.
### Final remarks
There are multiple other properties of the NTRUEncrypt apart from being secure against a quantum computer attacks, such as being more efficient and faster then RSA. A lot of useful resources on the topic of post-quantum cryptography and the NTRU cryptosystem (and a lot more) are available in the resources listed below.
This article is a simplification and is by no means a reference for any more serious work. The purpose of this text is to explain the basic concepts and ideas behind the NTRUEncrypt algorithm. For a more detailed description please refer to the original papers and official standards and specifications. More detailed resources for further reading can be found in the "Useful links" section below.
### Bibliography
[1] Daniele Micciancio, Oded Regev. 2009. Lattice-based Cryptography.
In: Daniel J. Bernstein, Johannes Buchmann, Erik Dahmen (eds) Post-
Quantum Cryptography
. Springer, Berlin, Heidelberg.
[2] Daniel J. Bernstein. 2009. Introduction to post-quantum cryptography.
In: Daniel J. Bernstein, Johannes Buchmann, Erik Dahmen (eds) Post-
Quantum Cryptography
. Springer, Berlin, Heidelberg.
[3] Jill Pipher. 2002. Lectures on the NTRU encryption algorithm and digital
signature scheme
.
[4] Jeffrey Hoffstein, Daniel Lieman, Jill Pipher, Joseph H. Silverman. 1999.
NTRU: A Public Key Cryptosystem.
[5] Jeffrey Hoffstein, Jill Pipher, Joseph H. Silverman. 1996. NTRU: A new
high speed public key cryptosystem
Enigma image protected by CC BY-SA 2.0 license. Author: Michele M. F.
#### Mateusz Siwiec
Great! You've successfully subscribed.
Great! Next, complete checkout for full access.
Welcome back! You've successfully signed in.
|
2020-10-22 17:12:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.893382728099823, "perplexity": 562.7588667739213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880014.26/warc/CC-MAIN-20201022170349-20201022200349-00351.warc.gz"}
|
http://math.stackexchange.com/questions/413521/what-can-we-say-about-the-order-of-a-group-given-the-order-of-two-elements/413529
|
# What can we say about the order of a group given the order of two elements?
If I know that a group of finite order has two elements $a$ and $b$ s.t. their orders are $6$ and $10$, respectively. What statements can be made regarding the order of the group?
I know by Lagrange's that the elements should divide the order of the group, so I've taken the $\operatorname{lcm}$. I think the order of our group should be a multiple of $30$. But I'm thinking there's more I can say.
-
Is your hypothesis that $a^6=b^{10}=e$, or that the order of $a$ is 6, and the order of $b$ is $10$? – Martin Argerami Jun 7 '13 at 2:41
On the assumption that the orders of the elements are $6$ and $10$ can you see how to construct a group of order $30n$ for $n\in \mathbb N$ – Mark Bennet Jun 7 '13 at 2:45
@MartinArgerami Aren't they saying the same thing? Sorry, I was trying to be more formal about it. Thanks for pointing it out – AlanH Jun 7 '13 at 2:46
@AlanH For $a$ to have order $6$, we must have that $6$ is the smallest number $n>0$ for which $a^n=1$. Consider the identity, for example - surely $a=1$ satisfies $a^n=1$ no matter which $n$ we choose! – Alexander Gruber Jun 7 '13 at 2:50
You can't say anything about the order of the group except that it must be divisible by $6$ and $10$. In fact, nothing can be said about the order of $ab$, either! (see the commentary here.)
could we say it has an element of order 2, 3, and 5? $x^6 = (x^2)^3 = (x^3)^2$, and similarly for $x^{10}$. I know that doesn't say much about $G$, but I'm just trying to figure out what more can be said. – AlanH Jun 7 '13 at 3:03
@AlanH If you know that $x$ has order $6$, then $x^2$ surely has order $3$ and $x^3$ surely has order $2$. Can you can generalize that to $x^{n}$? Furthermore, can you generalize that to prove that a group in which every nontrivial element has the same order must be a $p$-group (a group of prime power order)? – Alexander Gruber Jun 7 '13 at 4:31
To generalize, do I just take all divisors of $n$? If every nontrivial element has the same order, then it must be a $p$-group because the only divisors of $p$ are $1$ and $p$ itself (this means the group $G$ in my problem is certainly not a $p$-group). Is this correct? – AlanH Jun 8 '13 at 2:14
|
2015-07-02 10:10:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8458728790283203, "perplexity": 138.3674494359192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095494.6/warc/CC-MAIN-20150627031815-00203-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://zenodo.org/record/4836253/export/dcite4
|
Conference paper Open Access
# Towards feedback control of the cell-cycle across a population of yeast cells
Perrino, Giansimone; Fiore, Davide; Napolitano, Sara; di Bernardo, Mario; di Bernardo, Diego
### DataCite XML Export
<?xml version='1.0' encoding='utf-8'?>
<identifier identifierType="URL">https://zenodo.org/record/4836253</identifier>
<creators>
<creator>
<creatorName>Perrino, Giansimone</creatorName>
<givenName>Giansimone</givenName>
<familyName>Perrino</familyName>
<affiliation>Telethon Institute of Genetics and Medicine, Pozzuoli, Italy</affiliation>
</creator>
<creator>
<creatorName>Fiore, Davide</creatorName>
<givenName>Davide</givenName>
<familyName>Fiore</familyName>
<affiliation>Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy</affiliation>
</creator>
<creator>
<creatorName>Napolitano, Sara</creatorName>
<givenName>Sara</givenName>
<familyName>Napolitano</familyName>
<affiliation>Telethon Institute of Genetics and Medicine, Pozzuoli, Italy</affiliation>
</creator>
<creator>
<creatorName>di Bernardo, Mario</creatorName>
<givenName>Mario</givenName>
<familyName>di Bernardo</familyName>
<affiliation>Department of Engineering Mathematics, University of Bristol, Bristol, U.K.</affiliation>
</creator>
<creator>
<creatorName>di Bernardo, Diego</creatorName>
<givenName>Diego</givenName>
<familyName>di Bernardo</familyName>
<affiliation>Telethon Institute of Genetics and Medicine, Pozzuoli, Italy</affiliation>
</creator>
</creators>
<titles>
<title>Towards feedback control of the cell-cycle across a population of yeast cells</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2019</publicationYear>
<dates>
<date dateType="Issued">2019-08-15</date>
</dates>
<resourceType resourceTypeGeneral="Text">Conference paper</resourceType>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/4836253</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.23919/ECC.2019.8796301</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/cosy-bio</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract"><p>Abstract</p>
<p>Cells are defined by their unique ability to self-replicate through cell division. This periodic process is known as the cell-cycle and it happens with a defined period in each cell. The budding yeast divides asymmetrically with a mother cell generating multiple daughter cells. Within the cell population each cell divides with the same period but asynchronously. Here, we investigate the problem of synchronising the cell-cycle across a population of yeast cells through a microfluidics-based feedback control platform. We propose a theoretical and experimental approach for cell-cycle control by considering a yeast strain that can be forced to start the cell-cycle by changing growth medium. The duration of the cell-cycle is strictly linked to the cell volume growth, hence a hard constraint in the controller design is to prevent excessive volume growth. We experimentally characterised the yeast strain and derived a simplified phase-oscillator model of the cell-cycle. We then designed and implemented three impulsive control strategies to achieve maximal synchronisation across the population and assessed their control performance by numerical simulations. The first two controllers are based on event-triggered strategies, while the third uses a model predictive control (MPC) algorithm to select the sequence of control impulses while satisfying built-in constraints on volume growth. We compared the three strategies by computing two cost functions: one quantifying the level of synchronisation across the cell population and the other volume growth during the process. We demonstrated that the proposed control approaches can effectively achieve an acceptable trade-off between two conflicting control objectives: (i) obtaining maximal synchronisation of the cell cycle across the population while (ii) minimizing volume growth. The results can be used to implement effective strategies to unfold the biological mechanisms controlling cell cycle and volume growth in yeast cells.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p></description>
<description descriptionType="Other">This is a preprint of the conference paper published in "2019 18th European Control Conference (ECC)"</description>
</descriptions>
<fundingReferences>
<fundingReference>
<funderName>European Commission</funderName>
<funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
<awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/766840/">766840</awardNumber>
<awardTitle>Control Engineering of Biological Systems for Reliable Synthetic Biology Applications</awardTitle>
</fundingReference>
</fundingReferences>
</resource>
8
4
views
|
2021-07-23 18:45:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20427416265010834, "perplexity": 2624.447850163289}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150000.59/warc/CC-MAIN-20210723175111-20210723205111-00658.warc.gz"}
|
http://www.machinedlearnings.com/2010/11/on-unimportance-of-zeroes.html
|
## Saturday, November 13, 2010
### On the Unimportance of Zeroes
On most ad networks, most presentations of an advertisement do not result in any user interaction (e.g., are not clicked on). Similarly, in online matchmaking, most introductions that are made do not result in any demonstrated interest. Thus any system which dutifully logs every action and every outcome will contain historical data mostly consisting of rewards of value zero. In ad serving the ratio of zero to non-zero can easily be 100:1 or more, so throwing away zeroes is the difference between a data set that fits comfortably on a laptop versus one that requires map-reduce to process; or alternatively, the difference between a \$100 EC2 bill and a \$10,000 EC2 bill.
Intuitively, if zero examples are common and non-zero examples rare, the non-zero examples contain much more information per sample than the zero examples. This suggests that subsampling the zero reward examples, e.g. to synthesize a data set of roughly 1:1 ratio, might not be that costly in terms of generalization. Here's a brief discussion of some situations where this can be done.
#### Policy Value Estimation
Suppose I'm trying to estimate the expected reward associated with a novel deterministic policy on the basis of historical data generated according to a known policy. There is a offline policy estimator that can be used to evaluate a static policy when the examples are drawn IID. Assume a distribution $D = D_x \times D_{r|x}$, where $x$ is the feature vector associated with an instance and $r: A \to [0, 1]$ are the rewards associated with each action. I have a proposed policy $\pi: X \to A$ that I would like to estimate the performance of under $D$, $E_{(x, r) \sim D} \left[ r \left( \pi (x) \right) \right]$. Further assume a historical policy that is using a known conditional distribution over actions given an instance $p (a | x)$. The historical policy defines a distribution $S$ over historical data defined by
1. Draw $(x, r)$ from $D$.
2. Draw $a$ from $p (a | x)$.
3. Output instance $\left( x, a, r (a), p (a | x) \right)$.
It is easy to show that \begin{aligned} E_{(x, a, r (a), p) \sim S} \left[ r (\pi (x)) \frac{1_{\pi (x) = a}}{p (\pi (x) | x)} \right] &= E_{(x, r) \sim D} \left[ r \left( \pi (x) \right) \right], \end{aligned} which justifies using the empirical policy estimator given a historical data set $H$, $\frac{1}{|H|} \sum_{(x, a, r (a), p) \in H} r (\pi (x)) \frac{1_{\pi (x) = a}}{p (\pi (x) | x)}.$ Here's what's interesting about the empirical policy estimator: any instance where the observed historical reward was zero can be discarded without changing the sum. The number of zero examples needs to be known in order to get the normalization constant right, but any other detail about zero examples is completely unnecessary to compute the policy estimator. That means a data set need only be a constant factor larger than the space required to store all the non-zero examples.
Sometimes I'm subjected to a system with a known logging policy that subsamples zero examples very early and the total zero reward example count is not preserved. That defines a new distribution $\tilde S$ via
1. Draw $(x, r)$ from $D$.
2. Draw $a$ from $p (a | x)$.
3. Observe $r (a)$.
4. If $r (a) = 0$, reject with probability $(1 - l)$.
5. Output instance $\left( x, a, r (a), p (a | x), l \right)$.
In this case $S$ and $\tilde S$ are related by $E_{(x, a, r(a), p, l) \sim S} \left[ f \right] = \frac{E_{(x, a, r (a), p) \sim \tilde S} \left[ \left(l^{-1} 1_{r (\pi (x)) = 0} + 1_{r (\pi (x)) \neq 0} \right) f \right]}{E_{(x, a, r (a), p) \sim \tilde S} \left[ \left(l^{-1} 1_{r (\pi (x)) = 0} + 1_{r (\pi (x)) \neq 0} \right) \right]},$ which suggests using the modified empirical policy estimator given a historical data set $\tilde H$, $\frac{1}{\eta (\tilde H)} \sum_{(x, a, r (a), p, l) \in \tilde H} r (\pi (x)) \frac{1_{\pi (x) = a}}{p (\pi (x) | x)},$ where $\eta (\tilde H)$ is the effective historical data set size, $\eta (\tilde H) = \sum_{(x, a, r (a), p, l) \in \tilde H} \left( l^{-1} 1_{r (a) = 0} + 1_{r (a) \neq 0} \right),$ i.e., a zero reward example increases the effective set size by $1/l$. Note the numerator is unaffected because zero reward examples do not contribute.
Of course, the expected value of the ratio is not the ratio of expected values, so this latter estimator is presumably biased, but hopefully not horribly so (I should understand this better).
#### AUC
Suppose I'm trying to estimate the AUC of a ranking model. I'll assume that the rewards are binary valued, with conditional feature instance distributions $D_{x|0}$ and $D_{x|1}$. To keep things simple I'll assume my model induces a linear ordering via a scoring function, $\phi: X \to \mathbb{R}$. In this case the AUC is given by $\mbox{AUC} (\phi) = E_{(x_+, x_-) \sim D_{x|1} \times D_{x|0}} \left[ 1_{\phi (x_+) > \phi (x_-)} + \frac{1}{2} 1_{\phi (x_+) = \phi (x_-)} \right].$ This is the probability of correct pairwise comparison'' form of the AUC, which is equivalent to the area under the curve'' formulation. Now I can replace $D_{x|0}$ with a new distribution $\tilde D_{x|0}$ defined by
1. Draw $x$ from $D_{x|0}$.
2. Reject $x$ with constant probability $p$.
3. Otherwise, emit $x$.
It is hopefully clear that expectations with respect to $D_{x|0}$ and $\tilde D_{x|0}$ are identical. Therefore $\mbox{AUC} (\phi) = E_{(x_+, x_-) \sim D_{x|1} \times \tilde D_{x|0}} \left[ 1_{\phi (x_+) > \phi (x_-)} + \frac{1}{2} 1_{\phi (x_+) = \phi (x_-)} \right],$ i.e., using a historical data set where the negative examples have been obliviously subsampled does not introduce bias.
Of course, I could repeat this argument for the positives, leading to the absurd conclusion that a good estimator of AUC can be constructed with merely one positive and one negative example. Well, such an AUC estimator would be unbiased (averaged over the ensemble of possible singleton pairs from history), but it would not be good. To understand that, it helps to look at the deviation bound relating the empirical AUC to the actual AUC, as explored by Agarwal et. al in this paper. The money shot is Theorem 3 which states, $P_{T \sim D^N} \left\{ \left| \mbox{AUC}_e (\phi, T) - \mbox{AUC} (\phi) \right| \geq \sqrt{\frac{\ln (\frac{2}{\delta})}{2 \rho (T) (1 - \rho (T)) N}} \right\} \leq \delta,$ where $\mbox{AUC}_e$ is the empirical AUC on the training data set $T$, and $\rho (T)$ measures the amount of imbalance in the labels of the training set, $\rho (T) = \frac{1}{N} \sum_{(x, y) \in T} 1_{y = 1}.$ There are two terms here that are driving the deviation bound. The first is the number of examples used when evaluating the empirical AUC estimator: more is better. The second, however, measures how imbalanced the examples used are. If there are many more positives than negative examples in the data set, or vice versa, the deviation bound degrades. This formalizes the intuition that examples with rare outcomes carry more information than common outcomes.
Here's an interesting question: for a fixed total number of examples $N$ what is the ratio of positive and negative examples that minimizes the deviation bound? The answer is $\rho (T) = 1/2$, i.e. a 1:1 ratio of positive and negative examples, which suggests that if evaluating $\phi$ is expensive, or if you only have a certain amount of hard drive space, or pulling the data across the network is a bottleneck, etc., that subsampling to achieve parity is a good strategy for evaluating AUC loss.
The discussion so far has been in terms of evaluation, but during training some strategies effectively boil down to optimizing for empirical AUC (possibly with other terms to improve generalization). Training is usually more expensive than evaluation, so the idea of having a fixed data budget is extremely plausible here. The deviation bound above naturally suggests training on balanced data sets. This was empirically explored in a paper by Weiss and Provost, where over several datasets using C4.5 as the learning algorithm, they find when the area under the ROC curve is used to evaluate classifier performance, a balanced distribution is shown to perform well.'' In addition they also present a more complicated technique called budget-sensitive'' progressive sampling to further improve classifier performance.
When data budget is not an issue, oversampling the minority class to make a balanced data set is also a possibility, and might improve generalization. This and other ideas are discussed in a paper by Batista et. al.
#### Regression
Suppose I'm trying to maintain a regressor $\phi: X \to \mathbb{R}$ which purports to estimate the conditional expected reward of a context (for simplicity, assume there is no action here; merely a sequence of contexts with associated scalar rewards). In this case I have a data drawn IID according to $D = D_x \times D_{r|x}$ and I'm trying to minimize squared loss $E_{(x, r) \sim D} \left[ (r - \phi (x))^2 \right].$ I'm in an online setting and I'm afraid my regressor is going to get overwhelmed by the data volume, so I'm considering subsampling the zero reward examples. I'm effectively defining a new distribution $\tilde D$ defined by
1. Draw $(x, r)$ from $D$.
2. If $r= 0$, reject with probability $(1 - l)$.
3. Output instance $\left( x, r, l \right)$.
The two distributions are related via $E_{(x, r) \sim D} \left[ f \right] = \frac{E_{(x, r) \sim \tilde D} \left[ (l^{-1} 1_{r=0} + 1_{r \neq 0}) f \right]}{E_{(x, r) \sim \tilde D} \left[ (l^{-1} 1_{r=0} + 1_{r \neq 0}) \right]}.$ If the regressor is actually an importance-weighted regression algorithm (e.g., GD), then using importance weight $w (l, r) = (l^{-1} 1_{r = 0} + 1_{r \neq 0})$ on the subsampled data leads to $E_{(x, r) \sim D} \left[ (r - \phi (x))^2 \right] = \frac{E_{(x, r) \sim \tilde D} \left[ w (l, r) (r - \phi (x))^2 \right]}{E_{(x, r) \sim \tilde D} \left[ w (l, r) \right]},$ i.e., squared loss in the original distribution is proportional to importance-weighted squared loss in the subsampled distribution. In practice, if the subsampling is too aggressive the importance weight for zero reward examples will be too large and performance will be poor, but this is a sensible way to knock a factor of 10 off the data volume. (To really scale up requires employing massively parallel learning strategies, so I'm excited about the workshop on learning on clusters at NIPS 2010 this year.)
In an offline setting I've discovered that calibration often improves my estimators (perhaps in an online setting as well? I haven't tried that, but the procedure I'm about to describe could be implemented online as well.) By calibration I mean ensuring that the output of the estimator is close to the conditional expected value of the reward. Lately I've been doing this by taking a calibration sample $\{ (x_i, r_i) \} \sim D^*$, processing it with the uncalibrated estimator to get a raw estimates $\{ (x_i, \phi (x_i), r_i) \}$, and aggregating it into $J$ bins such that equal numbers of samples fall into each range $b_{j-1} \leq \phi (x_i) < b_j$, with $b_0$ and $b_J$ being the smallest and largest possible output of the uncalibrated estimator. I then define control points via \begin{aligned} \hat \phi_j &= E_{(x, r) \sim D} \left[ \phi (x) | b_{j-1} \leq \phi (x) < b_j \right] \approx \frac{\sum_{\{ x_i, \phi (x_i), r_i \}} \phi (x_i) 1_{b_{j-1} \leq \phi (x_i) < b_j}}{\sum_{\{ x_i, \phi (x_i), r_i \}} 1_{b_{j-1} \leq \phi (x_i) < b_j}}, \\ \hat r_j &= E_{(x, r) \sim D} \left[ r | b_{j-1} \leq \phi (x) < b_j \right] \approx \frac{\sum_{\{ x_i, \phi (x_i), r_i \}} r_i 1_{b_{j-1} \leq \phi (x_i) < b_j}}{\sum_{\{ x_i, \phi (x_i), r_i \}} 1_{b_{j-1} \leq \phi (x_i) < b_j}}. \end{aligned} The set $\{ \hat \phi_j, \hat r_j \}$, augmented with points $\{ \min \phi, \min r \}$ and $\{ \max \phi, \max r \}$ representing the smallest and largest possible outputs of the uncalibrated estimator along with the smallest and largest possible estimates of the reward, defines a linear spline $\psi: [ \min \phi, \max \phi] \to [ \min r, \max r ]$ which can be used to post-process the output of the uncalibrated estimator in order to improve the calibration.
Now suppose it turns out the calibration sample is not drawn from $D^*$, but instead is drawn from zero-reward subsampled $\tilde D^*$ instead. Similar to the empirical value estimator above, the adjustment involves treating any example with $r_i = 0$ as equivalent to $1 / l$ examples with $r_i \neq 0$ in the computation of the control points, \begin{aligned} \hat \phi_j &\approx \frac{\sum_{\{ x_i, \phi (x_i), r_i \}} \left(l^{-1} 1_{r_i = 0} + 1_{r_i \neq 0}\right) \phi (x_i) 1_{b_{j-1} \leq \phi (x_i) \leq b_j}}{\sum_{\{ x_i, \phi (x_i), r_i \}} \left(l^{-1} 1_{r_i = 0} + 1_{r_i \neq 0}\right) 1_{b_{j-1} \leq \phi (x_i) \leq b_j}}, \\ \hat r_j &\approx \frac{\sum_{\{ x_i, \phi (x_i), r_i \}} \left(l^{-1} 1_{r_i = 0} + 1_{r_i \neq 0}\right) r_i 1_{b_{j-1} \leq \phi (x_i) \leq b_j}}{\sum_{\{ x_i, \phi (x_i), r_i \}} \left(l^{-1} 1_{r_i = 0} + 1_{r_i \neq 0}\right) 1_{b_{j-1} \leq \phi (x_i) \leq b_j}}, \end{aligned} and otherwise proceeding as above. But here's what's cool: if you are going to calibrate the estimator anyway, it doesn't seem to matter if the training data is zero-reward subsampled and importance-weighting is not used. The estimator ends up biased, but the calibration corrects it, and in practice this calibration procedure is less sensitive to larger rates of zero-reward subsampling than importance-weighted regression. For example, $l = 1/100$ would be dangerous to attempt via the importance-weighting trick, but in practice works great with the calibration procedure above.
|
2019-12-14 16:36:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9662929773330688, "perplexity": 891.6139545286017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541281438.51/warc/CC-MAIN-20191214150439-20191214174439-00488.warc.gz"}
|
https://alezia.ca/forum/5fdnsm/article.php?5720d7=cost-index-a320
|
# cost index a320
Posted by 3 days ago. B77W = CI 38 A332 - CI 26 EasyJet U2/EZY/EZS: A319/A320/A320SL = CI 12 Wizzair W6/WZZ: A320/A320SL = … According to the calculations, each passenger on board an Airbus A320, which has a capacity of 154, costs the airline $68.50 (£47.06) for the 260-mile journey. Variant: Skills with Different Abilities confuses me, I accidentally added a character, and then forgot to write them in for the rest of the series. How can I calculate the fuel required for a specific route and aircraft? -leasing Could someone explain the real use of the Cost Index? Tema: Re: Cost Index A320 Jue Ene 24, 2013 11:58 pm: El cost index no es lo que cuesta el combustible. A319/A320/A321. 737's: 15-30; 747's: 39(short/mid haul- Europe)757's: 20-40; 767's: 30-45(30-40 for Europe, 40-45 long haul) 777's: 71(long haul, ex KLAX) Emirates. USER_MINI_PROFILE . I've set a CI of 15 in PFPX Link to post. A319/320/321 = CI 15 over FL100 and 0 under FL100 B744 = CI 50 TAM Linheas Areàs JJ/TAM: A319 = CI 19. At the minimum cost index (0) only fuel counts. -other costs Place it in. What do I do to get my nine-year old boy off books with pictures and onto books with text content? It's always a complex compromise on what is best, and policies vary between airlines and how they use their aircraft. It depends on where you do they type rating.If you do it from countries not known for aviation for eg. Der Cost Index (Abkürzung: CI, deutsch: Kostenindex) ist ein Begriff aus der Luftfahrt und wird vor allem in der Flugplanung und -management verwendet.. Hintergrund. Did China's Chang'e 5 land before November 30th 2020? The Airbus A321 is a powerful airplane that is a member of the A320 family. If so, how do they cope with it? I know cost index is basicly how efficient the plane flies. Delta07. For Spirit, this is the A320 family, similar to European low-cost carrier easyJet. To be most effective, and to maximize the benefit available by using the cost index system, the value should be adjusted for each segment depending on conditions at dispatch, by those encountered enroute, or by the operational requirement for arrival within xx minutes of schedule (for whatever the operational driver is). - Anybody no the Cost Index for the Wizz A320 fleet? The time related costs may be difficult to calculate. What is the physical effect of sifting dry ingredients for a cake? A318. Original Professional FLight Planner X is required. Aviation Stack Exchange is a question and answer site for aircraft pilots, mechanics, and enthusiasts. Eduard. What happens to en route flights when a major earthquake etc. Airbus A330-200 Cost Index: 25; FlyGlobespan. Join Date: Apr 2007. This website uses cookies for the loadsheet display. I don't think you can 'do' a calculation though in that sense, since it's a value which often comes from computer formulas. What is it for and how is it calculated? The cost index is a number used in the Flight Management System (FMS) to optimize the aircraft's speed. This will result in the aircraft flying at Maximum Range Cruise. No personal information is stored. Were there often intra-USSR wars? By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy. MathJax reference. Boeing 737-700 Cost Index: 14; Boeing 737-800 Cost Index: 13; FlyNiki . then I did the Airline2sim flight : PFPX gave me an initial alt of FL200, the TOD was like 5-10 Nm after TOC. 3 1. Cost Index Database 2017 - Free download as PDF File (.pdf), Text File (.txt) or read online for free. This a cost index. Climb at: Cost Index 0 Cruise at: Cost Index 20. Boeing 737-300/400 and 500 Series. View Comments. I’m gonna answer this in Indian rupees. A319 and A320 = Cost Index 22 B737-300 and B737-500 = Cost Index 30 B747-400 = Cost Index 85 B757-200 = Cost Index 75 B767-300 = Cost Index 60 B777-200 = Cost Index 80 Virgin Atlantic A340-300 CI = 30 A340-600 CI = 40 B747-400 CI = 73 - 93 B747 :150 Westjet B737NG :20 … Can be entered on INIT page A, defaulted from database, or inserted manually in this field. cost index 9999 345/.847 .847 .819/334 opt 268, maX 268, recmD 260 cost index 70 312/.794 .794 .80/313 opt 327, maX 363, recmD 310 COST inDEx iMPACT Figure 4 Fleet Current Cost inDeX oPtimum Cost inDeX time imPACt minutes AnnuAl Cost sAvings ($000’s) 737-400 30 12 +1 uS$754 –$771 737-700 45 12 +3 uS$1,790 –$1,971 mD-80 40 22 +2 uS$319 –$431 cost inDex usage in practice, neither … Click on the INIT key ; apt departure/apt arrival in FROM/TO; validate ALIGN IRS; flight number in FLT NBR; 10 in COST INDEX; Cruising flight level in CRZ FL; Page F-PLN. Flight Simulator 2020. This generally means flying the airplane slower and higher in order to conserve fuel, but that is offset by the higher maintenance costs due to the airplane being in the air longer and inspections becoming due sooner, so is true only to a certain point. Low cost indexes will result in lower climb speed, (both indicated and mach), lower cruise speed, a generally higher cruise altitude, a later descent and a slower descent mach/speed. The time related costs (airplane operated costs affected by flight time), include but: -flight crew Back to Subforum. then I did the Airline2sim flight : PFPX gave me an initial alt of FL200, the TOD was like 5-10 Nm after TOC. Fullscreen. Asking for help, clarification, or responding to other answers. Low Cost Index: Slow flights, high crew times, low fuel use, etc. It depends on the airliner. Simply provide your SimBrief username during the A320-X install/update process to enable the uplink feature. Cost Index : 15 Nouvelair Airbus A320 Cost Index : 38 PIA Boeing 777-200ER/200LR Cost Index : 180 Boeing 777-300ER Cost Index : 180 Qatar Airways Airbus A319CJ Départ de Doha - Cost Index 9 Retour Doha - Cost Index 8 Airbus A320 Cost Index : 10 Airbus A321 Cost Index : 11 Airbus A330-200/200F Cost Index : 15 Airbus A330-300 Cost Index : 15 Airbus A 340-600 We sell A320 flight simulators in a very high quality for commercial and home use. Why do Arabic names still have their meanings? Do PhD students sometimes abandon their original research idea? Closed Thread Subscribe . Re: Cost index. All ways and means to achieve this goal have to be Dieses Tutorial beschäftigt sich mit der Programmierung des Flight Management Computers (auch Flight Management System genannt) eines A320 Zum Inhalt: - Begriffserklärung MCDU (Multifunctional Control and Display Unit) - Eingaben auf der INIT-Page - Cost Index - Flight Level - Zero Fuel Weight - Block Fuel - F-Plan - V-Speeds - Flaps Location: LPPT. Play. Multiple engine out schedules and all engine schedules, Mach numbers and cost index data. Did airlines fly their aircraft slower in response to oil prices in the 1970s? Why don't we get more user-friendly weather reports than METARs? PREAMBLE Today's tough competitive environment forces airlines to consider operational costs in every facet of their business. But BA use CI90 for the 777/744. A higher cost index emphasizes savings on time related costs, while a lower cost index saves on fuel. -maintenance (time related) -airframe material /labor Most ground based flight planning systems use some sort of cost reduction logic, but unless they have a proprietary cost index engine embeded in them, the true cost savings available are probably being missed. The cost of a A320 type rating would cost you 13–30 lakhs. This a short video to explain the Cost Index and how to use it in flight simulation. This will result in the aircraft flying at Maximum Cruise Speed (Vmo / Mmo with a buffer). Airbus A320 Cost Index: 12; Airbus A321 Cost Index: 23; EL AL. Tenemos algún cost index oficial de la compañía. Posts: 431 A320 Cost Index. A cost index of 100 would imply that 100 kilos of fuel is as expensive as 1 minute of flight time. To subscribe to this RSS feed, copy and paste this URL into your RSS reader. passengers about to miss their flight connection) or low fuel price (rare these days). Why did the scene cut away without showing Ocean's reply? Speeds slower than the optimal speed will result in less fuel burn, but also in more flying time. Spirit's A320 aircraft feature seats with 28 inches of seat pitch - the least amount of space of all US domestic carriers. What is the application of rev in real life? The A320-X is now compatible with Prepar3D v5/v5.1! Airbus Fleet. My fifty eighth livery! How to load a flight plan into the A320 FMS. So the lower the Cost Index, the lower the fuel burn in relation to the other operational costs of the trip. Seating from 100 to 240 passengers and flying throughout the world, with the widest single-aisle cabin, an A320 takes off or lands every 1.6 seconds. Eduard 2 Posted August 20, 2016. Top. Fail Passive B777. Find the farthest point in hypercube to an exterior point. Displays cost index in blue. How much leverage do commerial pilots have on cruise speed? Airbus sets the standard for customer support and services, with regular updates on its activities – plus related industry news – provided by two of the company's specialised publications: FAST Magazine and … Low Cost Index: Slow flights, high crew times, low fuel use, etc. A320: THE MOST SUCCESSFUL AIRCRAFT FAMILY Offers unbeatable fuel efficiency The A320 Family is the world's bestselling single-aisle aircraft family and is the preferred choice with traditional airlines and passengers, as well as with the fast-growing low-cost carrier market for which it is now the aircraft of choice. I don't think you can 'do' a calculation though in that sense, since it's a value which often comes from computer formulas. I spent a lot of time to make this one look right, so please leave some feedback to help me improve my work Thanks to @jettojig for the paintkit and @cessnarox for the custom normal maps! I appreciate the help as always...what would I do without the tech/ops forum Fletcher. A319 = CI 20. Climb at: Cost Index 0. This document from Airbus explains the cost index in more detail. No personal information is stored. It will result in a low speed. These are all narrow-body planes that feature twin engines. 21st Feb 2012, 08:29 #2 enemymine . A mi modesto entender es el coeficiente de combustible que quieras que te consuma el avión, a mayor tasa 80, 90, 100 etc más te consumirá. Thanks for contributing an answer to Aviation Stack Exchange! In fact, the airline was the launch customer of the Airbus A320neo. How many spin states do Cu+ and Cu2+ have and why? You will have to get the time related cost form your account Unit/department, The fuel costs are expressed in units of currency per quantity of fuel :cents/pound, while the time related cost is USD/min. At the minimum cost index (0) only fuel counts. I have Airbus A320-232 V2527-A5.per which I use - I found it somewhere on the aerosoft forums. B744 = CI 53 or 0 (Zero can be used on flights from East Coast USA to UK Flights) B763 = CI 40. Airbus A320 Cost Index: 12 Airbus A321 Cost Index: 23 EL AL 737's: 15-30 747's: 39(short/mid haul- Europe)757's: 20-40 767's: 30-45(30-40 for Europe, 40-45 long haul) 777's: 71(long haul, ex KLAX) Emirates Airbus A330-200 Cost Index: 25 FlyGlobespan Boeing 737-700 Cost Index: 14 Boeing 737-800 Cost Index: 13 FlyNiki 35 Hamburg International Airbus A319: 40 Boeing 737-700: 30 … Location: Banana Republic. The cost of the extra flying time outweighs the fuel savings at speeds below the optimum speed. Posts: 94; Joined: Wed Feb 25, 2004 2:29 am; USER_STATUS: OFF_LINE; RE: JetBlue A320… PFPX Airbus A320 Family Performance Profiles Pack Version 1.0 File Description: Professional Flight Planner X (PFPX) performance profiles for the complete Airbus A320 (A318/A319/A320/A321) family with all available engines for real-world operators. Speeds faster than the optimal speed will result in more fuel burn, but also in less flying time. British Airways BA/BAW: A318 = CI 15. c:\Users\Shared\Shared documents\PFPX Data\AircraftTypes View Entire Discussion (4 Comments) More posts from the flightsim community. I currently use the aircraft performance file from airlinerperformance.net, mainly because I hear they are pretty close to being spot on. While somebody else might be able to explain it better, Cost Index is the relationship between fuel use and flight time. FS2020 Tutorials; Loading of the flight plan into the FMS INIT page 1. The A320 is one aircraft in four sizes (A318, A319, A320 and A321), representing the most successful and versatile jetliner family ever. In all cases, management must continually revise and update the actual costs, both fuel and time related, used in their calculations. Tema: Cost Index A320 Miér Ene 23, 2013 9:43 am: Chicos una preguntilla. Ok, I'm back with yet another...do any of you guys/gals who work for jetBlue happen to know what Cost Index-CI they use for the A320 and what they use for the E190. If not, why not? Cost Index is the ratio of time related cost to Fuel related cost. A321 - CI 20. According to this article, Cost Index of a flight is: The ratio of fuel costs to all other costs. Anybody no the Cost Index for the Wizz A320 fleet? High Cost Index: Fast flight, low crew times, high fuel use, etc. Here is a new updated Cost Index List, for alot of airlines! For the low-cost model there's an article called The true cost of flying revealed, which uses a 154-passenger A320 for its figures.. A320 Cost Index. A320 = CI 20. Tech Log. How long does flight planning take / when is it done / how is it done? Take-Off Performance Calculator developed for the FSLABS A320. E-Flyer Rank 8 Posts: 985 Joined: Sat Mar 15, 2008 5:43 am. 0:00 . We have redesigned this document in cooperation with TOGA Projects. It's always a complex compromise on what is best, and policies vary between airlines and how they use their aircraft. v … Use MathJax to format equations. Since entering service in 1988, the A320 has carried more than 13.5 billion. NO URLS in signature. 1.7k. At the maximum cost index only time counts. It only takes a minute to sign up. But for this example, we’ll use a Boeing 747. for exemple for LFRB-LFKJ ( 1h40 flight) it gave me a initla FL of 390 which is the Rec max of the A320, the OPT alt calculated by the FMGC was FL360. Le Cost Index est un élément de comparaison pour un trajet, un avion et une compagnie donnés. Flight Planning Made Easy Welcome to SimBrief.com, a virtual flight planning service designed for Flight Simulation hobbyists looking to take their flights to the next level! Amber boxes display until V-speeds are entered. The lower the CI, the more "importance" the machinery places on saving fuel. BA use a cost index of 40 on all 767 flights. If you continue to use this site we will assume that you are happy with it. Cost index is the cost of time over the cost of fuel. Are there ideal opamps that exist in the real world? With the A319 and A321 there was no problem. so I set FL160 and it was better. APPENDIX 6 - A319 / A320 / A321 Cost Index for LRC 103. Age: 55. [Announcement]We are very happy to announce that we have partnered up with more awesome developers and members of the flight simulator community to bring you more exclusive and cinematic content.Following the great amount of happy customers we are glad to welcome FlightSimExpo and Airline2Sim. Allows entry of V-speeds. ... See MoreSee Less, Share on FacebookShare on TwitterShare on Linked InShare by Email. Ref:Lvl-D 763 Manual----- ADS ----- Daniel Gustin. How to professionally oppose a potential hire that management asked for an opinion on based on prior work experience? What is the accuracy of distance and position calculation? This a cost index. Settings. The Cost Index is for use and calculation by the "Bean Counters" to provide an overall lowest trip cost. Thank you all for your continued support # Can we achieve the 3,000 likes in 2020? Take-Off calculation can be used with other flight simulator A320 products as well. Can "vorhin" be used instead of "von vorhin" in this sentence? Do airlines in Middle East use higher cost indexes? Spirit's A320 offers two rows of "Big Front" seats which are larger seats at the front of the cabin. Multiple engine out schedules and all engine schedules, Mach numbers and cost index data. I added an A320 IAE on PFPX, after uploading the performance profile I noticed I could not enter a cost index value, it … The A321 is the first derivative of the original A320 and is loaded with great features that make it a comfortable plane for travel. With the A319 and A321 there … Airbus A320 Cost Index: 12 Airbus A321 Cost Index: 23 EL AL 737's: 15-30 747's: 39(short/mid haul- Europe)757's: 20-40 767's: 30-45(30-40 for Europe, 40-45 long haul) 777's: 71(long haul, ex KLAX) Emirates Airbus A330-200 Cost Index: 25 FlyGlobespan Boeing 737-700 Cost Index: 14 Boeing 737-800 Cost Index: 13 FlyNiki 35 Hamburg International Airbus A319: 40 Boeing 737-700: 30 … Why do most Christians eat pork when Deuteronomy says not to? Weight and Balance calculations are valid for the FSLABS airbus only. What should I do when I am demotivated by unprofessionalism that has affected me personally at the workplace? It gives the ratio between the unit cost of time and the unit cost of fuel. How is altitude correlated with true airspeed? Convert negadecimal to decimal (and back). Thanks for choosing #avlads !Also if it was a little bit silent from us over the last month we have worked on a lot of very cool projects to be released very soon! B763 = CI 34. Tech Log - Wizz Air Cost index? When is the descent clearance requested from ATC? Stormy approach into Naples, Italy. I'm not sure if it's just in Boeing aircraft, but I've heard that Cost Index has something to do with flight planning but have no idea what it is. Climb at: Cost Index 0 Cruise at: Cost Index 15. The saving of less flight time do not outweigh the fuel burn at speeds above the optimum speed. NOTE: Minimum settings allow the aircraft to load and operate but you’ll need to make compromises in graphics and scenery settings to get adequate performance. I added the A320 file from your website to my PFPX AircraftTypes folder, but still I can not select another cost index. I added the A320 file from your website to my PFPX AircraftTypes folder, but still I can not select another cost index. For a B787 in cruise, what is the altitude, speed, and angle of attack? 0:00. Stay tuned for cinematography at it's best here at AviationLads Smash that like button and don't miss new content! I won’t include ground cost since you specifically asked for the cost per flight hour. Take-Off calculation can be used with other flight simulator A320 products as well. for exemple for LFRB-LFKJ ( 1h40 flight) it gave me a initla FL of 390 which is the Rec max of the A320, the OPT alt calculated by the FMGC was FL360. But I would like to do it in the most realistic way as possible. A low cost index means that the cost of time is low or that fuel is expensive. This cost index database will help you to find the best cost index to use for your virtual flights depending ... A319/A320 CI = 20 A321 CI = 25 Aer Lingus A319 (retired) CI = 5 A320 CI = 4 A321 CI = 6 A332 CI = 13 A333 →CI = 16 CI = B757 CI = 11 Aeroflot A32X CI=10-20 It will result in a low speed. – However, lowest cost per seat hour (and in turn per ASM) provided by new technology mid-sized B757, followed by A320 – B747-400 costs suffer from high wage rates paid to senior pilots who fly international services on this aircraft type – Comparisons of same-sized B727 and A320 show newer A320 … Airbus A320 Cost Index: 13 Airbus A321 Cost Index: 17 Boeing 737-300 Cost Index: 10 Boeing 737-800 Cost Index: 9 Boeing 757-200 Cos Index: 10 Boeing 767-300 Long Haul Cost Index: 19 Short Haul Cost Index: 11 Ukraine International Airlines Boeing 737CL & NG: 11-14 United Airlines Flight with duration < 4 Hours: A319 and A320 = Cost Index 27 FOR SIMULATION USE ONLY. High cost index means high cost of time (e.g. -cabin crew 35; Hamburg International. Cost Index (CI) is nothing new ... For example, one airline generally uses a CI of around 9 in the A320 which results in a climb speed of about 290, a cruise speed of about .76 and a descent speed of about 260. Join Date: Jan 2004. Is there a way to notate the repeat of a larger section that itself has repeats in it?
Categorised in:
This post was written by
Notice: Thème sans comments.php est obsolète depuis la version 3.0.0, aucune alternative n’est disponible. Veuillez inclure un modèle de comments.php à votre thème. in /home/aleziaca/domains/alezia.ca/public_html/wp-includes/functions.php on line 4809
|
2021-04-12 16:15:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22773434221744537, "perplexity": 7656.885261575611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038067870.12/warc/CC-MAIN-20210412144351-20210412174351-00193.warc.gz"}
|
https://staging4.aicorespot.io/an-intro-to-the-jacobean/
|
>Business >An intro to the Jacobean
### An intro to the Jacobean
In the literature, word Jacobian is often interchangeably leveraged to reference to both the Jacobian matrix or its determinant.
Both the matrix and the determinant possess useful and critical applications, within machine learning, the Jacobean matrix goes about aggregating the partial derivatives that are required within backpropagation; the determinant is good in the procedure of altering between variables.
In this guide, you will find out about the Jacobian.
After going through this guide, you will be aware of:
• That the Jacobian matrix gathers all first-order partial derivatives of a multivariate function that can be leveraged for backpropagation.
• The Jacobian determinant is good in altering amongst variables, where it function as a scaling factor between one coordinate space and another.
Tutorial Summarization
This guide is subdivided into three portions, which are:
• Partial derivatives within machine learning
• The Jacobian Matrix
• Other uses of the Jacobian
Partial Derivatives within Machine Learning
We have so far observed gradients and partial derivatives as being critical for an optimization algorithm to update, say, the model weights of a neural network to attain an optimum set of weights. The leveraging of partial derivatives allows every weight to be updated autonomously of the others, by quantifying the gradient of the error curve with regard to one weight in turn.
Several of the functions that we typically work with within machine learning are multivariate, vector-valued functions, which implies that they map several real inputs, n to multiple real outputs, m:
For instance, take up a neural network that categorizes grayscale imagery into various classes. The function being implemented by such a classifier would the map the n pixel values of every single-channel input image, to m output probabilities of belonging to every one of the differing classes.
When undertaking training of a neural network, the backpropagation algorithm is accountable for sharing back the error quantified at the output layer, amongst the neurons consisting of the differing hidden layers of the neural network, till it attains the input.
The basic principle of the backpropagation algorithm in modifying the weights within a network is that every weight within a network should be updated proportionally to the sensitivity of the overall error of the network to modifications in that weight.
This sensitivity of the cumulative error of the network to alterations in any one specific weight is quantified in terms of the rate of change, which, in turn, is quantified by taking the partial derivative of the error with regards to the same weight.
For the sake of simplicity, let’s assume that one of the hidden layers of some specific network is made up of only a singular neuron, k. We can indicate this in terms of a singular computational graph.
Again, for the sake of simplicity, let’s assume that a weight, wk is applied to an input of this neuron to generate an output, zk, according to the function that this neuron is implementing (which includes the nonlinearity). Then, the weight of this neuron can be linked to the error at the output of the network as follows (the upcoming formula is referred to as the chain rule of calculus, but more on this subject later.
Here, the derivative dzk / dwfirst connects the weight, wk to the output, zk while the derivative, derror / dzk, subsequently connects the output, zk, to the network error.
It is more typically the scenario that we’d have several connected neurons making up the network, every one attributed a different weight. As we posses more interest in such a scenario, then we can do a generalization beyond the scalar scenario to take up several inputs and several outputs:
This sum of terms can be indicated more concisely as follows:
Or, equivalently, in vector notation leveraging the del operator ∇, to indicate the gradient of the error with regard to either the weights wk, or the outputs, zk
The back-propagation algorithm is made up of executing such a Jacobian-gradient product for every operation within the graph.
This implies that the backpropagation algorithm can go about relating the sensitivity of the network error to alterations in the weights, via a multiplication by the Jacobian matrix (∂zk / ∂wk)T.
Therefore, what does this Jacobian matrix consist of?
The Jacobian Matrix gathers all first-order partial derivatives of a multivariate function
Particularly, take up first a function that maps u real inputs, to a singular real output:
Then, for an input vector, x of length u, the Jacobian vector of size, u x 1 can be defined as follows:
Now, take up another function that maps u real inputs, to v real outputs:
Then, for the same input vector, x of length u, the Jacobian is now a u x v matrix J ∈ ℝu×v that is defined as follows
Reframing the Jacobian matrix into the machine learning issue taken up prior, while retention of the same number of u real inputs and v real outputs, we identify that this matrix would consist of the following partial derivatives:
Other uses of the Jacobian
A critical strategy when operating with integrals consists of the change of variables (also called as, integration by substitution or u-substitution, where an integral is simplified into another integral that is simpler to compute.
In the single variable case, replacing some variable x, with another variable u, can transform the original function into an easier one for which it is simpler to identify an antiderivative. In the two variable scenario, an extra reason might be that we would also desire to transform the region of terms over which we are integrating into a differing shape.
In the single variable scenario, there’s usually only one reason to wish to modify the variable: to make the function “nice” so we can identify an antiderivative. In the two variable scenario, there is a second possible reason, the two-dimensional region over which we require to integrate is somehow unpleasant, and we wish the region in terms of u and v to be nicer – to be a rectangle, for instance.
When executing a substitution amongst two (or potentially more) variables, the procedure begins with a definition of the variables between which the substitution is going to take place. For instance, x = f(u,v) and y = g(u,v). This is then followed by a conversion of the integral limits dependent on how the functions f, and g will transform the u-v plane into the x-y plane. Lastly, the absolute value of the Jacobian determinant is computed and included, to function as a scaling factor amongst one coordinate space and another.
|
2023-03-27 19:45:37
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8788005709648132, "perplexity": 605.3840509995018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948684.19/warc/CC-MAIN-20230327185741-20230327215741-00262.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-10-section-10-7-probability-exercise-set-page-1120/72
|
## Precalculus (6th Edition) Blitzer
$0.1$
Step 1. There are 900 three-digit numbers (100 to 999). Step 2. To form a three-digit number that reads the same forward and backward, the choice of first (thus the third) digit is $9$ (1-9) and the choice of the middle number is $10$ (0-9). Thus there are $90$ possibilities. Step 3. The probability of finding such a number is $\frac{90}{900}=0.1$
|
2020-05-26 21:47:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6480005383491516, "perplexity": 318.4428992352346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391309.4/warc/CC-MAIN-20200526191453-20200526221453-00041.warc.gz"}
|
http://focm2014.dm.uba.ar/viewAbstract.php?code=560
|
FoCM 2014 conference
Workshop B5 - Information Based Complexity
December 16, 17:00 ~ 17:30 - Room B23
Preasymptotic estimates for approximation of multivariate Sobolev functions
Universität Leipzig, Germany - [email protected]
The talk is concerned with optimal linear approximation of functions in isotropic periodic Sobolev spaces $H^s(\mathbb{T}^d)$ of fractional smoothness $s>0$ on the $d$-dimensional torus, where the error is measured in the $L_2$-norm. The asymptotic rate -- up to multiplicative constants -- of the approximation numbers is well known. For any fixed dimension $d\in\mathbb{N}$ and smoothness $s>0$ one has $$(\star)\qquad a_n(I_d: H^s(\mathbb{T}^d)\to L_2(\mathbb{T}^d))\sim n^{-s/d}\qquad\text{as}\quad n\to \infty\,.$$ In the language of IBC, the n-th approximation number $a_n(I_d)$ is nothing but the worst-case error of linear algorithms that use at most $n$ arbitrary linear informations. Clearly, for numerical issues and questions of tractability one needs precise information on the constants that are hidden in $(\star)$, in particular their dependence on $d$.
For any fixed smoothness $s>0$, the exact asymptotic behavior of the constants as $d\to\infty$ will be given in the talk. Moreover, I will present sharp two-sided estimates in the preasymptotic range, that means for `small' $n$. Hereby an interesting connection to entropy numbers in finite-dimensional $\ell_p$-spaces turns out to be very useful.
Joint work with Sebastian Mayer (Bonn), Winfried Sickel (Jena) and Tino Ullrich (Bonn).
|
2017-06-23 06:49:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7121765613555908, "perplexity": 995.1995162650305}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320023.23/warc/CC-MAIN-20170623063716-20170623083716-00277.warc.gz"}
|
https://cs7545.wordpress.com/homework-2/
|
Homework #2
Due Nov 3rd in class.
1. Consider a classifier in $\mathbb{R}^2$ which assigns value $1$ to a point if and only if it is inside a certain axis-aligned rectangle. Such a classifier is precisely defined as
$h_{(a_1,b_1,a_2,b_2)} (x_1,x_2) = \left\{ \begin{array}{ll} 1 & \text{if } a_1 \leq x_1 \leq b_1 \text{ and } a_2 \leq x_2\leq b_2\\ 0 & \text{ otherwise } \end{array}\right.$
Consider the class of all axis-aligned rectangles in the plane
$\mathcal{H}_{rec}^{2} = \{h_{(a_1, b_1, a_2, b_2)}\text{ : }a_1 \leq b_1 \text{ and } a_2 \leq b_2\}$
• Consider the algorithm that returns the smallest (in areas) axis-aligned rectangle enclosing all positive examples in the training set. Show that this algorithm $(\epsilon, \delta)$-PAC learns $\mathcal{H}_{rec}^2$.
• Let $\mathcal{H}_{rec}^d$ be the class of axis-aligned rectangles in $\mathbb{R}^d$. Prove that VCdim$(\mathcal{H}_{rec}^d) = 2d$.
2. Let $\mathcal{H}$ be a set of classifiers with VC-dimension $d$. Let $\mathcal{F}_t$ be the set of classifiers obtained by taking a weighted majority vote of $t$ classifiers from $\mathcal{H}$. Prove that the VC-dimension of $\mathcal{F}_t$ is at most $O(td \log td)$.
3. Let $\mathcal{H}_1,\ldots, \mathcal{H}_r$ be hypothesis classes over domain set $\mathcal{X}$. Let $d=\max_{i} VCdim(\mathcal{H}_i)$. Prove that
• VCdim$(\cup_{i=1}^{r}\mathcal{H}_i)\leq 4d\log(2d)+2log(r)$
• For $r=2$, VCdim$(\mathcal{H}_1 \cup \mathcal{H}_2) \leq 2d+1$
For simplicity, you can assume that $d\geq 3$.
4. Show that any decision list on $n$ Boolean variables is equivalent to a halfspace. For a decision list of length $k$, give a bound on the margin of the halfspace, and thereby bound the number of mistakes made by Perceptron and by Winnow in the worst case.
5. Give an algorithm to PAC-learn the parity of a set of $d$ linear threshold functions in ${\mathbb{R}}^n$; the time and sample complexity of the algorithm should be polynomial for any fixed $d$.
6. Let $S = \{ (x_1,y_1),\ldots, (x_n,y_n)\}$ be a labeled sample of $n$ points in $\mathbb{R}^n$ with
$x_i = (\underbrace{(-1)^i,\ldots, (-1)^i, (-1)^{i+1}}_\text{{\it i} first components}, 0, \ldots, 0) \text{ and } y_i = (-1)^{i+1}$
Show that the Perceptron algorithm makes $\Omega (2^n)$ updates before finding a separating hyperplane, regardless of the order in which it receives the points.
|
2018-02-25 01:23:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 33, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.899475634098053, "perplexity": 359.45361312402787}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816083.98/warc/CC-MAIN-20180225011315-20180225031315-00616.warc.gz"}
|
https://legends2k.github.io/2d-fov/design.html
|
# Field of View and Line of Sight in 2D
[email protected]
Consider a 2D world with polygonal buildings; the edges of the polygon are the building walls. Say a viewer is present in this world either indoor or outdoor. Given the observer's vision parameters — viewing direction, vision distance or the reach of sight and the angle of vision — we have to find the region visible to the observer i.e. the field of view (FoV) is to be determined. With no obstacles it would be a sector, made of two edges (radii) and a connecting arc; see figure 1. Additionally, given a point in the world, we should be able to quickly tell if this point is visible to the observer i.e. line of sight (LOS) queries on a given point should be serviced. Both these operations should be performed in a way efficient enough to use it in real-time rendering scenarios.
The observer's position is shown as the red dot, the arrow points to the viewing direction, r denotes vision distance and $$\theta$$ is half the angle of view.
# 1 Input
• Set of polygons; includes the world boundary
• FoV Parameters
• Position of viewer
• Viewing direction, $$\hat{v}$$
• Angle of vision, $$2\theta < 180^\circ$$
• Vision distance, $$r$$
## 1.1 Polygons and Edges
When describing a world in 2D, buildings naturally map to polygons and are thus taken as input by the algorithm. However, technically vision is blocked by walls i.e. polygon edges; moreover, dealing with polygon edges gives greater granularity and better control as we will mostly be dealing with geometric primitives when performing intersection testing. Hence, for the most part, the algorithm considers edges directly without worrying about the higher-level abstraction, polygon.
# 2 Basic Algorithm
An interesting initial idea that occurs is clipping the vision sector with the polygon paths. However, this is a vision problem, and trying to solve it with path clipping will not work; we will just end up with an incorrect result. This is understandable as clipping just cuts away the intersection of two regions, while for vision we need to cut not just the intersection, but everything behind it; also this is to be done radially. Figure 2 shows the result of clipping (left) along with the expected result for this configuration (right).
In the physical world, we see things when unhindered light rays reach the eye. Intuitively, vision can be determined by doing the converse i.e. casting rays from the eye into the world. For a light source, the rays emanating from it would go in all directions. When implementing it, rays are shot at some regular interval radially e.g. a ray every 5° would mean shooting 72 rays for full circle coverage. The ray is assumed to go on until it is blocked by an edge. Shooting rays in all 360° and finding the reach of a light source in 2D is a solved problem [1]. The accuracy of this method depends on the interval at which rays are shot; smaller interval gives denser ray distribution.
Shooting rays is a fancy way of saying testing for ray – line segment intersection. For a given ray, if there are m segments, m tests are done to find the edge closest to the ray that blocks it. So for any ray casting algorithm, to shoot n rays, on a set of m edges, n × m tests are to be performed. However cheap the ray – line segment (edge) intersection testing may be, for a huge world with many lights this may become prohibitively expensive.
Let us call the edge that blocks a ray as the blocking edge. Figure 3 highlights them in red. The point where the ray hits the blocking edge would be the hit point. Once all hit points are found, they should be connected circularly i.e. all hit points are sorted by angle, and connected by lines to get the region lit by the light source. The result is essentially an irregular, closed path. Note that when connecting hit points in this fashion, some corners or parts of the polygons are chopped off, leading to an incorrect or less accurate light field. This can be improved by decreasing the interval at which rays are shot; lesser interval means more rays, greater coverage, better output at the cost of performance.
We have discussed, with the help of this light reach problem, the basic approach we will take to solve our problem: ray casting. Note that the light rays originating from the source travel in all directions and with no limit in distance until it hits an obstacle. Having no limits makes it a simpler problem compared to the one in hand where both angle and distance of vision are bounded. We will discuss why it is so later in §3.
## 2.1 An Optimisation
Instead of shooting rays in all directions (at some interval), a common optimisation is to shoot rays only in the directions where edge endpoints are present. This should drastically reduce the number of intersection tests required; the aforementioned n term should become smaller. An additional advantage of this method is greater accuracy; we are no longer at the mercy of the interval chosen for the field of view to be correct.
We will call these edge endpoints, which gives the angle to shoot rays at, as angle points. This term might seem redundant and pointless as a substitute to endpoints, but it will have its use down the line. Angle points are points at which rays of sight are shot. In figure 4, angle points are shown in red, blue and black; rays are shot at all of them. Those that lay closest to their ray got converted to hit points, shown in red. Those to which their ray never reached are in black; these rays hit a more closer lying blocking edge and created hit points; these are shown in blue.
## 2.2 Vision beyond corners
Though shooting rays only to edge endpoints would do, there is a small wrinkle to iron out. In figure 4, note that the ray shot to a polygon's corner hits one of the edges that form the corner and does not proceed further. However, vision should extend beyond the corner in some cases. A clever way to correct this problem is demonstrated with an interactive article by [2].
The idea is make two auxiliary rays by tilting the primary ray — the ray to the edge's endpoint — by an iota angle both clockwise and counter-clockwise. If vision has to extend beyond a corner, one of these auxiliary rays would get past the corner thereby deducing visibility beyond. The downside is for every angle point, we now have to shoot three rays, not one, from the observer; this triples our expenses. It would be good if we could minimize the number of angle points we need to process.
Auxiliary rays of a primary are shown in orange; the angle of tilt (between an orange and black ray) is exaggerated here for clarity, but it would me much smaller, say, half a degree.
An easy optimisation would be to find if both edges forming the corner are exposed or one of them is hidden by the other. If both are visible then no auxiliary rays are needed since vision cannot extend beyond; in figure 4, the apex of the triangle is an exposed corner; auxiliary rays are redundant here as vision cannot be extending beyond. Auxiliary rays are needed only if one of the two edges is hidden to the viewer; both cannot be hidden since we are looking at every corner of a polygon in isolation — two edges and the point they meet (figure 6, black arrows and blue dot). Even when one of the edges is hidden, sending two auxiliary rays is redundant; one can be avoided as only one will go unblocked beyond the corner. In figure 5, the auxiliary ray formed by rotating the primary clockwise is redundant.
Separating axis theorem can be applied to find if one of the edges of the corner block another. Projecting both edge vectors on to the vector perpendicular to the primary ray would give results with differing signs if one of the edges block the other. Also depending on the sign of projection of the first edge, we can know which of the two auxiliary rays is needed. In figure 6, the three possible situations are shown at the top and the projection results are show at the bottom; the black dot denotes the viewer position from where the primary ray (red) is shot. The edge vectors (black) are projected on to the primary perpendicular vector (green). When no auxiliary rays are needed, both projections give negative values, since both edge vectors are in the negative half-space of the perpendicular. In the remaining two cases, where auxiliaries (orange) are needed, the signs of the projection are different for the vectors; when the sign is negative for the longer edge vector, a clockwise rotated auxiliary ray is enough and when it is positive, a counter-clockwise rotated auxiliary ray is will do.
# 3 Blocking Edges and Angle Points
For the problem considered here, vision is bounded in angle and distance; this leads us to interesting situations.
• Not all endpoints become angle points; only the ones within the view sector count.
• This should at least halve the number of rays shot; n should become even lesser now.
• To reap this benefit, effort needs to be spent in filtering angle points from endpoints. The technique used to prune should be quick so that filtering itself does not take too much time.
• Likewise, most edges will not be potentially blocking; just the ones which are contained or cut by the sector matter.
• This will reduce the number of intersection tests needed; the m term should become lesser.
• For this too the onus of quickly rejecting edges that do not count is on us.
A prime difference or deviation from the basic algorithm explained in §2 is that there can be angle points that do not come from the set of edge endpoints. There may be more angle points needed to be found. Consider the configurations shown in figure 7 and 8. The edge endpoints are outside the vision sector, and thus are not useful as angle points. All angle points needed to correctly determine the field of view (shown in black, blue and red) do not come from edge endpoints. Every one of these are necessary; failing to shoot a ray at any of them would lead to an incorrect determination of the visible region. How are they different and how do we find them?
Since vision is bounded by the viewing angle, two angle points are essential and are to be taken implicitly irrespective of the presence of an edge. They come from the far endpoints of the sector's edges. Rays are shot at them and hit points are determined. Figure 7 shows one of these implicit angle points in black; the angle point due to the sector's right edge endpoint. The ray shot to it is blocked by an edge, leading to the blue hit point. Another ray shot to the angle point due to the sector's left edge endpoint is unhindered and thus the angle point itself becomes the hit point, shown in blue. So the blue ones are easy to determine; they need no tests. Their position is fixed, relative to the vision sector's position and orientation.
Consider the red angle point (turned into a hit point since the ray is unhindered) at the intersection of the sector's arc and the edge. This one needs another intersection testing: arc – line segment intersection test. These angle points are needed when an edge intersects the sector's arc.
Ray is shown with an arrow head while the edge has endpoints.
With these considerations, we list the cases where angle points occur:
• Any edge endpoint contained by the vision sector
• Any point on the sector's arc where an edge intersects it
Likewise, potentially blocking edges are the ones fully or partially contained by the sector.
The problem now becomes that of compiling the set of angle points (finding new ones and filtering unnecessary endpoints) and the set of potentially blocking edges (pruning surely non-blocking edges). This should be done fast, rejecting invalid elements as early as possible, so that cycles are not wasted in processing something whose result will ultimately be discarded. The aim is to reduce the terms n (number of rays shot = count of angle point) and m (number of line segments to test the ray against = count of potential blocking edges), giving us a fast algorithm to determine the FoV.
## 3.1 Culling
[3] suggests that an ideal culling algorithm (in reality, costly and riddled with floating-point precision problems) would only allow the exact visible set, while a good one would reject most invisible cases accepting the definitely and possibly visible ones. It will be conservative about rejecting potentially visible ones. In other words, unless it cannot be absolutely sure that an edge would not hamper visibility, it does not reject it.
### 3.1.1 Broad Phase — Early, Trivial Rejection
A naïve idea to reject irrelevant endpoints and edges would be to only test if the edge's endpoints are within the sector; this would work for angle points but is insufficient for blocking edges, as it would fail in configurations where the edge endpoints are out but the edge blocks visibility; see figure 7 and 8. Before going to the more granular entity, the endpoints, if we can reject the edge, the endpoints testing can be skipped altogether.
#### 3.1.1.1 Edge – Bounding Circle Intersection
If a line segment's closest point to the circle centre is within the circle then it is either contained by it or they intersect. This idea is detailed here [4]. Using this with the edge and the sector's bounding circle, we can reject all edges that are disjoint from the bounding circle. Post this stage, all edges that have nothing to do with this circle are not considered. Since this is not a test to find the actual points of intersection but to just know whether the edge is disjoint, the result is boolean and is fairly fast. All we need to implement this test is a couple of vector subtractions (making the vector) and dot products: one to find the closest point (by projection) and another to compute the squared distance to the circle's centre from there.
Results of this test is shown in figure 9. The vision sector's bounding circle is drawn with dashes. The blue dot on every edge denotes its closest point to the circle's centre. The green edges are accepted; the magenta ones are accepted too but are false positives. The red ones are rejected. An edge that is tangential to the bounding circle is rejected as well since it will not occlude vision.
To reject false positives aggressively, the idea of testing if the closest point is within the sector, as opposed to the circle, is charming. However, this is incorrect as it will also reject positives; in figure 9, along with the magenta ones, the green edge having its closest point outside the sector would also be rejected.
#### 3.1.1.2 Point with respect to Sector
Given a point and a sector, it is easy to quickly determine if the point is
1. behind the viewer
2. in front of the viewer and
1. beyond the viewing distance
2. within viewing distance and
1. within the sector
2. outside the sector but inside its bounding semicircle
with just dot and cross products. Since cross product is not defined in 2D, we promote our vectors to 3D by letting z be 0. Though cross product results in a vector, as z = 0, elements pertaining to x and y would be 0, and only the z quantity will be non-zero (if the vectors are not parallel); thus we get a scalar result from this cross product; refer [5] for different definitions of cross product in 2D.
Let the vector from the circle centre to the point be $$\vec u$$ and the sector's edge vectors be $$\vec{e_1}, \vec{e_2}$$. We already have the viewing direction, $$\hat v$$.
1. If $$\vec u \cdot \hat v \le 0$$, the point is behind the viewer.
2. else if $$\vec u \cdot \vec u = \|\vec u\|^2 > r^2$$, the point is beyond viewing distance.
3. else if $$sign(\vec{e_1} \times \vec u) = sign(\vec u \times \vec{e_2})$$ then the point is within the sector.
4. else it is in the bounding semicircle but outside the sector.
We have used two optimisations that need explanation. We compare squares of the lengths (2) instead of just the lengths since finding the length would mean using sqrt; we avoid this and do an optimisation that is usual in computer graphics applications [7]. To check if the point is within the sector (3), we could have found the actual angle it subtends with the sector's first edge and see if it is within the allowed angle of view ($$2\theta$$). However, we need the trigonometry function acos for this, which may be a costlier than doing a couple of cross products which involves only arithmetic operations.
If an edge survived through the previous test, we put its endpoints through this test to classify where they stand. Figure 10 shows how the situation is now; the grey points are behind (1), red ones are beyond (2), green endpoints are within the sector (3) and the magenta one is in the bounding semicircle but outside the sector (4).
Angle points and blocking edges can be determined based on the results
1. If both ends are behind (1), it has no way of obscuring vision; prune — stop further processing.
2. If both ends are within the sector (3), mark both as angle points and the edge as blocking; stop further processing.
3. If one of them is within the sector (3) and the other is not — (1), (2), (4) — mark the one within as angle point and the edge as blocking. This edge may cut the sector's arc and give a new angle point.
4. If both are not within the sector — (2), (4) — endpoints are not angle points, but it may cut the sector's arc, giving new angle points. Edge may be blocking if it cuts the sector's arc or edges.
For cases 3 and 4, we need further tests to find angle points, that are not endpoints, the edge contains and whether the edge is blocking. This leads us to the narrow phase of culling.
## 3.2 Narrow Phase
In this phase we weed out more false positives with costlier tests to improve performance when shooting rays. We may perform an edge – sector arc and/or an edge – edge intersection test.
However, before performing the slightly costly segment – arc intersection test (it has a square root and more than three branch operations), we can use the results of the previous test to see if the test is really needed. The test is needed only when the edge has a possibility of intersecting the arc. If both the endpoints are within the sector's bounding circle, the edge has no way of intersecting the circle; the edge has a possibility of intersecting the sector's arc only if either of its endpoints are in front of the viewer and outside the bounding circle (case 2.a in §3.1.1.2).
Table 1. Edge endpoint possibilities and states
# End A End B State
1 Within Within (2.b.i) Angle points: A, B • Blocking
2 Within Behind (1) Angle point: A • Blocking
3 Within Semicircle (2.b.ii) Angle point: A • Blocking
4 Within Outside (2.a) Angle point: A • Blocking • May intersect arc
5 Behind Behind Prune
6 Behind Outside May intersect arc • May be blocking
7 Behind Semicircle May be blocking
8 Semicircle Semicircle May be blocking
9 Semicircle Outside May intersect arc • May be blocking
10 Outside Outside May intersect arc • May be blocking
Items 1, 2, 3 and 5 are already dealt with in the broad phase and needs nothing further. Item 4 needs an edge – arc intersection test just to know if there is an additional angle point; already in the broad phase we marked one of its endpoints as angle point and the edge as blocking. Items 6 to 10 are of the most interest in the narrow phase. Items 6, 9 and 10 need edge – arc intersection testing to know if any angle points are present due to the intersection, and if so, the edge should be marked blocking. If these items fail in the test, they join items 7 and 8; all of these cannot be having any angle points, but should still be put through edge – edge intersection test to know if they are blocking.
### 3.2.1 Edge – Sector Arc Intersection
For items 4, 6, 9 and 10 in table 1 we check if the edge cuts the sector's arc with a line segment – arc intersection test [6]. If it does, mark the intersection point(s) as angle point(s) and the edge as blocking. No further processing is needed. If it does not, the edge cannot be having any angle point but pass the edge to the next test to know if it is blocking.
The test is essentially a line – circle intersection test and hence would result in two points. The points that lie on both the line segment and the arc are deemed points of intersection. A minor optimisation that can be done here is if both points of intersection are behind the viewer then the line segment cannot be blocking or having any angle point, hence the edge can be discarded without any further processing.
In figure 11, the grey edge with an endpoint outside and another behind is tested but both intersection points are behind (magenta) and is thus rejected. The green edge with both points in the semicircle is never tested. The red one with one of its endpoints outside is tested but the only intersection point found (blue) is not on the arc; this edge cannot be having any angle points but is sent to the next stage (edge – edge test) to test if it is blocking. All other edges (black) with one or both its endpoints outside the bounding circle and having valid (red) intersection point(s) are marked blocking, with the intersection points marked as angle points; each such edge corresponds to one of items 4, 6, 9 and 10 in table 1.
### 3.2.2 Edge – Sector Edge Intersection
For items 7 and 8 which never entered the previous test and for items (6, 9 and 10) which entered but no intersection points were found, there can be no angle points the edge bears: both its endpoints are not within the sector and no new angle point is lying on the arc. However, such edges cannot be discarded as they may still be blocking (see figure 8) or unblocking (see figure 10; the edge with magenta endpoint). We have two options: be on the safer side and mark them blocking or test them against the sector's edges to be sure. By doing the former, for every false positive edge — an edge disjoint from the vision sector — we pay with n line segment – line segment tests during the ray shooting phase, as every ray shot should be tested against every potentially blocking edge. Instead of paying n times for a false positive, paying n + 2 (at worst) for a positive is a better proposition, so we test the edge against both the sector edges; if it intersects any, mark it blocking.
For items 6 to 10, figure 12 contains two edges each: one positive (green), one negative (red). From the figure it is apparent that this test has greater chances of rejecting edges irrelevant to vision.
# 4 Shooting Rays
We have all the angle points and blocking edges. Before casting rays, the angle points are to be sorted by angle so that the final field of view figure appears correctly when the hit points are connected by edges and arcs. Additionally, if two or more angle points and the viewer position are collinear, then multiple rays will be shot in the same direction; duplicate angle points need to be discarded to avoid redundant rays.
For the sorting, the technique explained in §3.1.1.2 of using the cross product to find if a point is within the viewing sector angle can be used. Duplicate angle points can be dealt with easily again with cross product. For every angle point, before casting the ray, it is crossed with the previous ray, if the result is zero then the vectors are linearly-dependant (parallel) and thus can be ignored.
Primary rays are shot at the sorted angle points. Before shooting a ray at an angle point, if we know it is an edge endpoint, and not an intersection point, then auxiliary rays are to be shot as needed; refer §2.2 for details on this. Auxiliary rays are to be shot only for angle points that are edge endpoints and not for the ones from intersection points, as the latter cannot be polygon corners, so vision should not be extending beyond.
For every ray shot, we find the intersection point of the ray and the edge closest to it. This is the hit point for the ray. If this point lies beyond the vision distance, r, then we take the hit point as the point along the ray that is r units away from the viewer i.e. we drag the hit point back until the distance to it from the viewer becomes r. For every hit point found, we check if it is at the arc boundary, if this and the previous point are at the arc boundary, then we connect them both with an arc else with a line. However, there is a small problem in doing this blindly.
In figure 13, the hit points (numbering from right to left) 1 and 2 are on the arc, they should be connected by an arc. Same is the case with hit points 3 and 4. However, hit point 2 and 3 should be connected by a line since there is an edge cutting the arc. When hit point 3 is found and confirmed that this and the previous hit point are on the arc, check, before connecting them with an arc, whether the line connecting them is parallel to the closest edge on which hit point 3 lies; connect them with a line if it is parallel.
# 5 Line of Sight
A line of sight query is to answer the question of whether a given point, X, within the world, is visible to a viewer, with the viewer's parameters defined (§1). Doing this is easy once the blocking edges and angle points are found. First we classify X according to §3.1.1.2. If it's not within the sector, we return false. If it is within, we check if the line segment formed by connecting X to the viewer is intersecting any of the blocking edges. If it is not intersecting, we return true, since the line of sight or the ray from the viewer to the subject is unblocked and is thus visible.
# 6 References
1. 2D Visibility by Amit Patel
2. Sight & Light by Nicky Case
3. §14.2 Culling Techniques, Real-Time Rendering, Third Edition by Thomas Akenine-Möller, Eric Haines and Naty Hoffman
4. Line segment to circle collision/intersection detection by David Stygstra
5. §A.2.1, 3-D Computer Graphics by Samuel R. Buss
6. Intersection of Linear and Circular Components in 2D by David Eberly
7. §2.2.5, Essential Mathematics for Games and Interactive Applications, Second Edition by James Van Verth and Lars M. Bishop
8. 3D Math Primer for Graphics and Game Development, Second Edition by Fletcher Dunn and Ian Parberry
9. Mathematics for 3D Game Programming and Computer Graphics, Third Edition by Eric Lengyel
|
2021-12-09 14:28:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5573616623878479, "perplexity": 835.2615499550012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964364169.99/warc/CC-MAIN-20211209122503-20211209152503-00475.warc.gz"}
|
https://support.bioconductor.org/p/112748/
|
Question: One-sided p-value from DESeq2
0
13 months ago by
t.kuilman140
Netherlands
t.kuilman140 wrote:
For downstream analysis using a modified version of Robust Rank Aggregation (alpha-RRA; Li, et al. Genome Biology 2014), I would like to obtain one-tailed p-values from a standard DESeq2 analysis using a Wald test (rather than the two-tailed p-values that are represented in a 'DESeqResults' object). Can this be done from a 'DESeqDataSet' or a 'DESeqResults' object using DESeq2, and if so what steps should I take to do that?
Thanks, Thomas
modified 13 months ago by Michael Love25k • written 13 months ago by t.kuilman140
2
13 months ago by
Michael Love25k
United States
Michael Love25k wrote:
See the vignette section on threshold tests. You would set lfcThreshold to 0 and specify the alternative as less than or greater than.
Fantastic, that's right what I need. Thanks for your quick reply; I should have spotted the altHypothesis argument of the results() function but apparently read past that.
|
2019-10-16 03:24:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34073352813720703, "perplexity": 3061.3875911428554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986661296.12/warc/CC-MAIN-20191016014439-20191016041939-00540.warc.gz"}
|
https://en.wikipedia.org/wiki/Jean_Charles_Athanase_Peltier
|
# Jean Charles Athanase Peltier
Jean Charles Athanase Peltier
Born 22 February 1785
Ham
Died 27 October 1845 (aged 60)
Occupation Physicist
Jean Charles Athanase Peltier[1] (/ˈpɛltj/;[2] French: [pɛl.tje]; 22 February 1785 – 27 October 1845) was a French physicist. He was originally a watch dealer, but at 30 years old took up experiments and observations in the physics.
Peltier was the author of numerous papers in different departments of physics, but his name is specially associated with the thermal effects at junctions in a voltaic circuit.[3] He introduced the Peltier effect. Peltier also introduced the concept of electrostatic induction (1840), based on the modification of the distribution of electric charge in a material under the influence of a second object closest to it and its own electrical charge. This effect has been very important in the recent development of non-polluting cooling mechanisms.
## Biography
Peltier initially trained as a watchmaker and was up to his 30s working as a watch dealer. Peltier worked with Abraham Louis Breguet in Paris. Later, he worked with various experiments on electrodynamics and noticed that in an electronic element when current flows through, a temperature difference or temperature difference is generated at a current flow. In 1836 he published his work and in 1838 his findings were confirmed by Emil Lenz. Furthermore, Peltier dealt with topics from the atmospheric electricity and meteorology. In 1840, he published a work on the causes and formation of hurricanes.
Peltier's papers, which are numerous, are devoted in great part to atmospheric electricity, waterspouts, cyanometry and polarization of sky-light, the temperature of water in the spheroidal state, and the boiling-point at great elevations. There are also a few devoted to curious points of natural history. But his name will always be associated with the thermal effects at junctions in a voltaic circuit, a discovery of importance quite comparable with those of Seebeck and Cumming.[4]
Peltier discovered the calorific effect of electric current passing through the junction of two different metals. This is now called the Peltier effect[5] (or Peltier–Seebeck effect). By switching the direction of current, either heating or cooling may be achieved. Junctions always come in pairs, as the two different metals are joined at two points. Thus heat will be moved from one junction to the other.
### Peltier effect
Main article: Peltier effect
The Peltier effect is the presence of heating or cooling at an electrified junction of two different conductors (1834).[6] His great experimental discovery was the heating or cooling of the junctions in a heterogeneous circuit of metals according to the direction in which an electric current is made to pass round the circuit. This reversible effect is proportional directly to the strength of the current, not to its square, as is the irreversible generation of heat due to resistance in all parts of the circuit. It is found that, if a current pass from an external source through a circuit of two metals, it cools one junction and heats the other. It cools the junction if it be in the same direction as the thermoelectric current which would be caused by directly heating that junction. In other words, the passage of a current from an external source produces in the junctions of the circuit a distribution of temperature which leads to the weakening of the current by the superposition of a thermo-electric current running in the opposite direction.[4]
When electromotive current is made to flow through an electronic junction between two conductors (A and B), heat is removed[7] at the junction. To make a typical pump, multiple junctions are created between two plates. One side heats and the other side cools. A dissipation device is attached to the hot side to maintain cooling effect on the cold side.[8] Typically, the use of the Peltier effect as a heat pump device involves multiple junctions in series, through which a current is driven. Some of the junctions lose heat due to the Peltier effect, while others gain heat. Thermoelectric pumps exploit this phenomenon, as do thermoelectric cooling Peltier modules found in refrigerators.[9]
The Peltier effect generated at the junction per unit time, ${\displaystyle {\dot {Q}}}$, is equal to
${\displaystyle {\dot {Q}}=\left(\Pi _{\mathrm {A} }-\Pi _{\mathrm {B} }\right)I,}$
where,
${\displaystyle \Pi _{A}}$ (${\displaystyle \Pi _{B}}$) is the Peltier coefficient[10][11] of conductor A (conductor B), and
${\displaystyle I}$ is the electric current (from A to B).
Note: Total heat generated at the junction is not determined by the Peltier effect alone, being influenced by Joule heating and thermal gradient effects.
The Peltier coefficients[10][11] represent how much heat is carried per unit charge. With charge current continuous across a junction, the associated heat flow will develop a discontinuity if ${\displaystyle \Pi _{A}}$ and ${\displaystyle \Pi _{B}}$ are different.
The Peltier effect can be considered as the back-action counterpart to the Seebeck effect (analogous to the back-emf in magnetic induction[12]): if a simple thermoelectric circuit is closed then the Seebeck effect will drive a current, which in turn (via the Peltier effect) will always transfer heat from the hot to the cold junction.
The true importance of this "Peltier effect" in the explanation of thermoelectric currents was first clearly pointed out by James Prescott Joule; and Sir William Thomson[13] further extended the subject by showing, both theoretically and experimentally, that there is something closely analogous to the Peltier effect when the heterogeneity is due, not to difference of quality of matter, but to difference of temperature in contiguous portions of the same material. Shortly after Peltier's discovery was published, Lenz used the effect to freeze small quantities of water by the cold developed in a bismuth-antimony junction when a voltaic current was passed through the metals in the order named.[4]
Voltaic Electricity
Magnetic alterations, magnetic saturation, southern magnetic axis, tensions, coercion, contact, induce (induced current), magnetic event, metal changes, neighboring electric current, electrical polarity, electrical phenomenon, biasing (grid bias, AC bias), positive charge and electrical polarity (polarity (mutual inductance), polarity (physics)), repulsion
Conduction
Electrical conductor, electrical conduction, fast ion conductor, conduction (heat)
Metrology
Condensation (condensation cloud, condensation reaction), tion through vapor (action through vapor), evaporation, fog
People
Antoine César Becquerel
Instruments
Leyden jar, Influence machine (electrostatic influence),
Materials
Atoms and atomic spheres (kissing number problem), state of matter (chemical state), particles (neutral particle), glazed zinc (Zinc oxide), maghemite, awaruite, oxygen, liquids, ponderable matter, pole figure, chemical polarity, molecular substance, copper-antimony (copper, antimony, alloys list), germanium
Power
Power (physics), electric power, motive power, power in an alternating current electrics, transmitter output, effective radiated power, power spectral density signal
Other
Reaction, chemical heat, cohesion, combination, complete, concordance (concordance correlation coefficient), vitreous body, crystal electricity, electric charge, field of view, zone (crystallography), affinity laws (electron affinity, chemical affinity), equilibrium and dynamics (diffusiophoresis), St. Elmo's fire, waves, luminescence (luminance, luminosity), aethereal movements, phys and portion of the aether (quantity of aether rays/aethereal spheres), aethereal glut, nervous system (sense), order of phenomena (critical phenomena, strongly correlated material), will, statistical bias (biased sample, estimator bias), projection spread, quantity of electricity, sphere to another sphere (celestial spheres, esoteric plane), meridian arc (meridian (astronomy), meridian (geography)), resulting segments (gnomonic projection)
Listed by date
Other
## References and notes
General
Citations
1. ^ Catalogue of the Wheeler gift of books, Volume 2. By American Institute of Electrical Engineers. Library, Latimer Clark, Schuyler Skaats Wheeler, Andrew Carnegie, William Dixon Weaver, Engineering Societies Library, Joseph Plass
2. ^
3. ^ A Handy Book of Reference on All Subjects and for All Readers, Volume 6. Edited by Ainsworth Rand Spofford, Charles Annandale. Gebbie publishing Company, limited, 1900. p341 (ed., also Gebbie, 1902 version, p341
4. ^ a b c The New Werner Twentieth Century Edition of the Encyclopaedia Britannica: A Standard Work of Reference in Art, Literature, Science, History, Geography, Commerce, Biography, Discovery and Invention, Volume 18. Werner Company, 1907. p491
5. ^ Contemporarily, known as the thermoelectric effect.
6. ^ Peltier (1834) "Nouvelles expériences sur la caloricité des courants électrique" (New experiments on the heat effects of electric currents), Annales de Chimie et de Physique, 56 : 371-386.
7. ^ or generated
8. ^ This is usually a heatsink and fan assembly.
9. ^ The Peltier effect, where current is forced through a junction of two different metals, also forms the basis of the small 12/24 volt vehicular HVAC systems. It forms the basis of the relatively costly, but stable, junction heated soldering irons. It is used for spot cooling of certain integrated circuits.
10. ^ a b Yu. A. Skripnik, A. I. Khimicheva. Methods and devices for measuring the Peltier coefficient of an inhomogeneous electric circuit. Measurement Techniques July 1997, Volume 40, Issue 7, pp 673-677
11. ^ a b
12. ^ The magnetic field B is sometimes called magnetic induction.
13. ^ Mathematical and physical papers, by Sir William Thomson. Collected from different scientific periodicals from May, 1841, to the present time. Kelvin, William Thomson, Baron, 1824-1907., Larmor, Joseph, 1857-, Joule, James Prescott, 1818-1889. vol. viii. p. 90
14. ^
15. ^ Tr. Observations on a new species of floscularia
16. ^ Tr. Notice of the main facts and new instruments (laboratory equipment) added to the science of electricity.
17. ^ Tr. Notice key facts added to the science of Electricity
18. ^ Tr. Observations on multipliers and on thermo-electric batteries
19. ^ Tr. Memory training tables reports that between the strength of an electric current and the deflection of needles multipliciateurs: follow-up research on the causes of disruption of thermocouples and how to ensure in their job measuring average temperatures
20. ^ Tr. Memory on the various species of mist
21. ^ Tr. Meteorology: Observations and experimental research on the causes that contribute to the formation of tornadoes.
22. ^ Tr. General considerations on the ether, followed by instructions on shooting stars
23. ^ Tr. Essay on the coordination of the above causes, produce and accompany electrical phenomena
24. ^ Tr. Observations in the Alps on the boiling temperature of water.
25. ^ Tr. Letter to the cause of differences between the results of the experiments of MM. Bravais and Peltier on the temperature of boiling water and the results of experiments cabinet.
26. ^ institut. 22 avril 1844. (Comptes-rendus, vol. 18, p. 768.)
27. ^ Tr. Research on the cause of variations in atmospheric pressure.
28. ^ Tr. The cyanométrie and air polarimetry: or user of the additions and changes made to the cyano-polariscope of M. Arago, to make cyano-polarimeter in the observation of all points of the sky.
29. ^ Tr. Notice of galvanism
30. ^ Tr. Notice on the fluid forces (hydrometeorology), and lightning
31. ^ Tr. Notice on the life and scientific work
32. ^ Tr. Notice of the main facts and new instruments added to the science of Electricity by Mr. Peltier
33. ^ Tr. Memoirs of electricity vapor on atmospheric electricity and waterspouts
34. ^ Power Meteorology: Part One
|
2016-08-31 23:01:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5777094960212708, "perplexity": 4004.988848955162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982939756.54/warc/CC-MAIN-20160823200859-00111-ip-10-153-172-175.ec2.internal.warc.gz"}
|
http://pdglive.lbl.gov/DataBlock.action?node=S008FA&home=sumtabM
|
# $\boldsymbol F_{\boldsymbol A}$, AXIAL-VECTOR FORM FACTOR INSPIRE search
VALUE EVTS DOCUMENT ID TECN COMMENT
$0.0119$ $\pm0.0001$ 65k 1, 2
2009
PIBE ${{\mathit e}^{+}}{{\mathit \nu}}{{\mathit \gamma}}$ at rest
• • • We do not use the following data for averages, fits, limits, etc. • • •
$0.0115$ $\pm0.0004$ 41k 1, 3
2004
PIBE ${{\mathit \pi}^{+}}$ $\rightarrow$ ${{\mathit e}^{+}}{{\mathit \nu}}{{\mathit \gamma}}$ at rest
$0.0106$ $\pm0.0060$ 4, 1
1990 B
SPEC 17 GeV ${{\mathit \pi}^{-}}$ $\rightarrow$ ${{\mathit e}^{-}}{{\overline{\mathit \nu}}_{{e}}}{{\mathit \gamma}}$
$0.021$ ${}^{+0.011}_{-0.013}$ 98
1989
SPEC ${{\mathit \pi}^{+}}$ $\rightarrow$ ${{\mathit e}^{+}}{{\mathit \nu}_{{e}}}{{\mathit e}^{+}}{{\mathit e}^{-}}$
$0.0135$ $\pm0.0016$ 4, 1
1986
SPEC ${{\mathit \pi}^{+}}$ $\rightarrow$ ${{\mathit e}^{+}}{{\mathit \nu}}{{\mathit \gamma}}$
$0.006$ $\pm0.003$ 4, 1
1986
SPEC ${{\mathit \pi}^{+}}$ $\rightarrow$ ${{\mathit e}^{+}}{{\mathit \nu}}{{\mathit \gamma}}$
$0.011$ $\pm0.003$ 5, 4, 1
1978
SPEC ${{\mathit \pi}^{+}}$ $\rightarrow$ ${{\mathit e}^{+}}{{\mathit \nu}}{{\mathit \gamma}}$
1 These values come from fixing the vector form factor at the CVC prediction, ${{\mathit F}_{{V}}}$ = $0.0259$ $\pm0.0005$.
2 When $\mathit F_{V}$ is released, the BYCHKOV 2009 $\mathit F_{A}$ is $0.0117$ $\pm0.0017$, and $\mathit F_{A}$ and $\mathit F_{V}$ results are highly (anti-)correlated: $\mathit F_{A}$ + 1.0286 $\mathit F_{V}$ = $0.03853$ $\pm0.00014$.
3 The sign of ${{\mathit \gamma}}$ = ${{\mathit F}_{{A}}}$ /${{\mathit F}_{{V}}}$ is determined to be positive.
4 Only the absolute value of $\mathit F_{\mathit A}$ is determined.
5 The result of STETZ 1978 has a two-fold ambiguity. We take the solution compatible with later determinations.
References:
BYCHKOV 2009
PRL 103 051802 New Precise Measurement of the Pion Weak Form Factors in ${{\mathit \pi}^{+}}$ $\rightarrow$ ${{\mathit e}^{+}}{{\mathit \nu}}{{\mathit \gamma}}$ Decay
FRLEZ 2004
PRL 93 181804 Precise Measurement of the Pion Axial Form Factor in the ${{\mathit \pi}^{+}}$ $\rightarrow$ ${{\mathit e}^{+}}{{\mathit \nu}}{{\mathit \gamma}}$ Decay
BOLOTOV 1990B
PL B243 308 The Experimental Study of the ${{\mathit \pi}^{-}}$ $\rightarrow$ ${{\mathit e}^{-}}{{\overline{\mathit \nu}}_{{e}}}{{\mathit \gamma}}$ Decay in Flight
EGLI 1989
PL B222 533 Measurement of the Decay ${{\mathit \pi}^{+}}$ $\rightarrow$ ${{\mathit e}^{+}}{{\mathit \nu}_{{e}}}{{\mathit e}^{+}}{{\mathit e}^{-}}$ and Search for a Light Higgs Boson
BAY 1986
PL B174 445 Measurement of the Pion Axial Formfactor from Radiative Decay
PIILONEN 1986
PRL 57 1402 Unique Determination of the Formfactor Ratio in Radiative Pion Decay
STETZ 1978
NP B138 285 Determination of the Axial-Vector Formfactor in the Radiative Decay of the Pion
|
2019-12-08 16:01:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8322098255157471, "perplexity": 3012.2199705106636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540511946.30/warc/CC-MAIN-20191208150734-20191208174734-00373.warc.gz"}
|
http://tex.stackexchange.com/questions/87659/reference-formatting/87688
|
# reference formatting
I need my references to be formatted the same as in this image:
My .bib files look like this:
@INCOLLECTION{Chandler2009,
author = {Chandler, R.J.},
title = {{Shorebirds of the Northern Hemisphere}},
publisher = {Christopher Helm},
year = {2009},
annote = {Whimbrel and Curlew account},
file = {:Users/rossahmed/Library/Application Support/Mendeley Desktop/Downloaded/Chandler - 2009 - Shorebirds of the Northern Hemisphere.pdf:pdf}
}
@BOOK{EngelmoerM&Roselaar1998,
title = {{Geographical Variation in Waders}},
publisher = {Springer},
year = {1998},
author = {Engelmoer, M. \& Roselaar, C.S.},
pages = {214--223},
annote = {Curlew and Whimbrel section},
}
@BOOK{PraterT1977,
title = {{Guide to the Identification and Ageing of Holarctic Waders}},
publisher = {BTO},
year = {1977},
author = {{Prater, T.}, Marchant.J. \& Vuorien, J.},
annote = {Curlew and Whimbrel section},
file = {:Users/rossahmed/Library/Application Support/Mendeley Desktop/Downloaded/Prater, T - 1977 - Guide to the Identification and Ageing of Holarctic Waders.pdf:pdf}
}
Unfortunately, as I'm a beginner, the following code is as close as ive got to producing the desired results:
\documentclass{article}
\usepackage[style=authoryear,natbib=true]{biblatex}
\begin{document}
This is my text \cite{EngelmoerM&Roselaar1998}
This is more text \cite{Chandler2009}
This is even more text \cite{PraterT1977}
\printbibliography
\end{document}
How can I produce desired reference formatting?
-
Tou haven't chosen a bibliography style. Have a look at this question and the answers therein. – Vivi Dec 19 '12 at 18:17
@Vivi Yes he has, style=authoryear. – Torbjørn T. Dec 19 '12 at 20:48
@TorbjørnT. I had no idea you could pass that as an argument there! See, even wrong comments are useful (in this case useful to me, not to him). Cheers for pointing it out. – Vivi Dec 19 '12 at 20:49
@Vivi biblatex works a little bit different to the "old" bibtex way. See e.g. the questions mentioned here. – Torbjørn T. Dec 19 '12 at 21:00
@TorbjørnT. I use bibtex, and I haven't yet switched. I will definitely look into it during the Christmas/New Years break! Thanks again :) – Vivi Dec 19 '12 at 21:06
What you'll need to do is to generate a .bbx file and possibly also a .cbx file, say my_style.bbx. The first controls the bibliography list and the latter controls the citation style in the text. The .bbx file should start with \RequireBibliographyStyle{authoryear} (or any other style you'd like to base on) and in the .cbx file you should place \RequireCitationStyle{authoryear}.
|
2013-12-13 16:29:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8329373598098755, "perplexity": 7647.903531758829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164960531/warc/CC-MAIN-20131204134920-00066-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://www.projecteuclid.org/euclid.rmi/1204128300
|
## Revista Matemática Iberoamericana
### On uniqueness of automorphisms groups of Riemann surfaces
#### Abstract
Let $\gamma, r, s$, $\geq 1$ be non-negative integers. If $p$ is a prime sufficiently large relative to the values $\gamma$, $r$ and $s$, then a group $H$ of conformal automorphisms of a closed Riemann surface $S$ of order $p^{s}$ so that $S/H$ has signature $(\gamma,r)$ is the unique such subgroup in $\mathrm{Aut}(S)$. Explicit sharp lower bounds for $p$ in the case $(\gamma,r,s) \in \{(1,2,1),(0,4,1)\}$ are provided. Some consequences are also derived.
#### Article information
Source
Rev. Mat. Iberoamericana, Volume 23, Number 3 (2007), 793-810.
Dates
First available in Project Euclid: 27 February 2008
Permanent link to this document
https://projecteuclid.org/euclid.rmi/1204128300
Mathematical Reviews number (MathSciNet)
MR2414492
Zentralblatt MATH identifier
1144.30017
#### Citation
Leyton A. , Maximiliano; Hidalgo, Rubén A. On uniqueness of automorphisms groups of Riemann surfaces. Rev. Mat. Iberoamericana 23 (2007), no. 3, 793--810. https://projecteuclid.org/euclid.rmi/1204128300
#### References
• Edge, W. L.: Bring's curve. J. London Math. Soc. (2) 18 (1978), no. 3, 539-545.
• Farkas, H, and Kra, I.: Riemann surfaces. Second edition. Graduate Texts in Mathematics 71. Springer-Verlag, New-York, 1992.
• González-Díez, G.: Loci of curves which are prime Galois coverings of $P^1$. Proc. London Math. Soc. (3) 62 (1991), 469-489.
• González-Díez, G. and Harvey, W. J.: Moduli of Riemann surfaces with symmetry. In Discrete groups and geometry, 75-93. London Math. Society Lecture Note Ser. 173. Cambridge University Press, Cambridge, 1992.
• González-Díez, G., Hidalgo, R. A. and Leyton, M.: Generalized Fermat's curves. Preprint.
• Hidalgo, R. A.: Dihedral groups are of Schottky type. Proyecciones 18 (1999), 23-48.
• Hidalgo, R. A.: On Schottky groups with automorphisms. Ann. Acad. Sci. Fenn. Ser. A I Math. 19 (1994), 259-289.
• Hidalgo, R. A.: Homology coverings of Riemann surfaces. Tôhoku Math. J. (2) 45 (1993), 499-503.
• Hidalgo, R. A.: A commutator rigidity for function groups and Torelli's theorem. Proyecciones 22 (2003), 117-125.
• Hidalgo, R. A.: Noded function groups. In Complex geometry of groups (Olmué, 1998), 209-222. Contemp. Math. 240. Amer. Math. Soc., Providence, RI, 1999.
• Hidalgo, R. A.: Kleinian groups with common commutator subgroup. Complex Variables Theory Appl. 28 (1995), no. 2, 121-133.
• Keen, L.: Canonical polygons for finitely generated Fuchsian groups. Acta Math. 115 (1965), 1-16.
• Kuribayashi, A. and Kimura, H.: On automorphism groups of compact Riemann surfaces of genus $5$. Proc. Japan Acad. Ser. A Math. Sci. 63 (1987), no. 4, 126-130.
• Kuribayashi, I. and Kuribayashi, A.: Automorphism groups of compact Riemann surfaces of genera three and four. J. Pure Appl. Algebra 65 (1990), no. 3, 277-292.
• Leyton, M.: Cubrimientos abelianos maximales. Masterss Thesis presented to the Department of Mathematics. Universidad Técnica Federico Santa María, 2004.
• Macbeath, A. M.: On a curve of genus $7$. Proc. London Math. Soc. (3) 15 (1965), 527-542.
• Maclachlan, C.: Abelian groups of automorphisms of compact Riemann surfaces. Proc. London Math. Soc. (3) 15 (1965), 699-712.
• Maskit, B.: Kleinian Groups. Grundlehren der Mathematischen Wissenschaften 287. Springer-Verlag, Berlin, 1988.
• Maskit, B.: The homology covering of a Riemann surface. Tôhoku Math. J. (2) 38 (1986), 561-562.
• Nag, S.: The complex analytic theory of Teichmüller spaces. Canadian Mathematical Society Series of Monographs and Advanced Texts. A Wiley-Interscience Publication. John Wiley & Sons, New York, 1988.
• Rauch, H. E. and Lewittes, J.: The Riemann surface of Klein with 168 automorphisms. In Problems in analysis (papers dedicated to Salomon Bochner, 1969), 297-308. Princeton University Press, Princeton, New Jersey, 1970.
• Riera, G. and Rodríguez, R.: The period matrix of Bring's curve. Pacific J. Math. 154 (1992), no. 1, 179-200.
• Ries, J. F. X.: Subvarieties of moduli space determined by finite group actions acting on surfaces. Trans. Amer. Math. Soc. 335 (1993), 385-406.
• Singerman, D.: Finitely maximal Fuchsian groups. J. London Math. Soc. (2) 6 (1972), 29-38.
• Vermeulen, A. M.: Weierstrass points of weight two on curves of genus three. Dissertation, University of Amsterdam, Amsterdam, 1983. With a Dutch summary. Universiteit van Amsterdam, Amsterdam, 1983.
|
2019-10-15 15:23:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5257582068443298, "perplexity": 1737.3828549413015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986659097.10/warc/CC-MAIN-20191015131723-20191015155223-00553.warc.gz"}
|
https://socratic.org/questions/5951fa9c7c01493126e6cb54#445012
|
# Question #6cb54
Jun 27, 2017
333,600 J
#### Explanation:
Use the heat of fusion formula: $q = m \cdot {H}_{f}$
The heat of fusion of water is 333.6 J/g, which means that it takes 333.6 joules of energy to melt one gram of ice.
Since we have 1000 grams of ice, plug 1000 into the formula and multiply by 333.6.
$q = \left(1000\right) \left(333.6\right)$
$q = 333600$ J
|
2022-01-22 09:11:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7955061197280884, "perplexity": 3171.281456984529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303779.65/warc/CC-MAIN-20220122073422-20220122103422-00612.warc.gz"}
|
https://math.stackexchange.com/questions/610250/a-question-on-probability-hunter-and-rabbit/611482#611482
|
# A Question on Probability - Hunter and Rabbit
Suppose there are m different hunters and n different rabbits. Each hunter selects a rabbit uniformly at random independently as a target. Suppose all the hunters shoot at their chosen targets at the same time and every hunter hits his target.
(i) Consider a particular Rabbit $$1$$, what is the probability that Rabbit $$1$$ survives?
(ii) Suppose $$m=7$$, $$n=5$$. What is the probability that no rabbit survives?
Attempt for (i):
Consider 1st hunter, No. of rabbits he can choose is $$n-1$$, since Rabbit $$1$$ survives. Consider 2nd hunter, No. of rabbits he can choose is $$n-1$$, since Rabbit $$1$$ survives. .... So, for $$m$$ hunters, number of ways they choose rabbits such that they won't choose Rabbit 1 $$= (n-1)^m$$ And number of ways ways they choose rabbits = $$n^m$$
$$P(\text{Rabbit 1 survives)} = \frac{ (n-1)^m }{n^m} = \left[ \frac{(n-1)}{n} \right]^m$$
For (2), you can try Inclusion-Exclusion, using the fact (generalizing (i)) that the probability that a particular set of $k$ rabbits survives is $((n-k)/n)^m$.
EDIT: Here's the Inclusion-Exclusion calculation:
\eqalign{ P(0\text{ survive}) &= 1 - P(\ge 1\text{ survive}) \cr &= 1 - {5 \choose 1} (4/5)^7 + {5 \choose 2} (3/5)^7 - {5 \choose 3} (2/5)^7 + {5 \choose 4} (1/5)^7\cr &= \frac{672}{3125}}
(which is the same as $16800/78125$). In general with $m$ hunters and $n$ rabbits the probability that none survive is $$1 + \sum_{k=1}^{n-1} (-1)^k {n \choose k} \left(\dfrac{n-k}{n}\right)^m$$
• Is P(1 rabbit survives) = [(5-1)/5]^7 * C(5,1) ? Dec 17, 2013 at 11:43
• This works out for all rabbits surviving, since $k=n$, the probability is $0$, which is correct (every hunter has to pick a rabbit, so the best case for the rabbits is one very unlucky rabbit getting shot by all hunters). This also works out for $k=1$, since that becomes case (i), if that is correct. But this puts the probability of no rabbit surviving at $1$, since $k=0$ and so $$\left[ \frac{(n-k)}{n} \right]^m = \left[ \frac{n}{n} \right]^m = 1$$
– SQB
Dec 17, 2013 at 11:45
• When getting ((n-1)/n)^m, we specify the rabbit Rabbit 1. So, I thought if we need to find P(1 rabbit survives), P(1 rabbit survives) = ((n-1)/n)^m * C(n,1) Dec 17, 2013 at 12:28
• Inclusion-exclusion again. Probability that at least $k$ survive = ${n \choose k} \cdot$ probability of a given $k$-tuple surviving $- {n \choose {k+1}} \cdot$ probability of a given $k+1$-tuple surviving $+ \ldots$. Dec 17, 2013 at 13:31
• For 7 hunters and 5 rabbits, there are 78125 ways for the hunters to choose a rabbit ($5^7$). Of those, 16800 have all rabbits killed.
– SQB
Dec 17, 2013 at 14:51
For part 2 , i followed this approach. I think it is correct or else please write your comment.
for all rabbits to die, each should be hit by atleast one hunter.
so selecting n hunters among m maintaining the order - mpn
and the remaining m-n hunters can shoot any one. so n(m-n)
so total number of chances -> mpn * n(m-n)
so , the probability is ((mpn * n(m-n)) / nm)
I found out how to arrive at the numbers I got from my simulation, so I'll try my hand at answering.
First, my simulation. As I think we've killed enough rabbits by now, I'll try to make the world a better place by giving icecream to children.
We've got 7 children: Alice, Bob, Carol, Dave, Eve, Frank, and Gabrielle.
The icecream parlor has only 5 flavours. You can come up with any flavours you like, but we'll just number them 1 through 5.
The kids get one scoop each. These kids like to share, and they'd like to try each flavour. So if they can make sure that they have picked each flavour at least once among the seven of them, they all can taste every flavour by sharing.
The question now becomes, what is the probability of the 7 children having picked all 5 flavours between them (if they don't know what the others picked, of course).
Now here's my simulation of that in SQL (Oracle 11g).
CREATE OR REPLACE TYPE nums AS TABLE OF NUMBER;
/
WITH
icecream AS (
SELECT LEVEL AS flavour
FROM dual
CONNECT BY LEVEL <= :v_nr_of_flavours
),
children AS (
SELECT
a.flavour AS alice,
b.flavour AS bob,
c.flavour AS carol,
d.flavour AS dave,
e.flavour AS eve,
f.flavour AS frank,
g.flavour AS gabrielle,
CARDINALITY(
nums(
a.flavour,
b.flavour,
c.flavour,
d.flavour,
e.flavour,
f.flavour,
g.flavour
)
MULTISET UNION DISTINCT
nums()
) AS nr_of_flavours_picked
FROM icecream g
CROSS JOIN icecream f
CROSS JOIN icecream e
CROSS JOIN icecream d
CROSS JOIN icecream c
CROSS JOIN icecream b
CROSS JOIN icecream a
)
SELECT
COUNT(*) AS nr_of_combinations,
nr_of_flavours_picked,
CASE
WHEN GROUPING(nr_of_flavours_picked) = 1
THEN NULL
ELSE DECODE(nr_of_flavours_picked, :v_nr_of_flavours, 1, 0)
END AS all_flavours_picked
FROM children
GROUP BY ROLLUP (nr_of_flavours_picked);
This gives us a value of 78125 total possibilities of which 16800 have all rabbits killed all flavours picked.
But where do those numbers come from?
The number of total possibilities is easy, that's ($5^7$). But the other number is a bit more involved.
As it turns out, there are two ways to have 7 children pick all 5 flavours. Either two flavours are picked twice, or one flavour is picked thrice.
That last case is the easiest, as that is just $7 \cdot 6 \cdot 5 \cdot 4$ (the first four kids pick a flavour that hasn't been picked yet, the last three kids pick the one flavour left). Of course there are ${5 \choose 1} = 5$ flavours that can be picked thrice, so we get $7 \cdot 6 \cdot 5 \cdot 4 \cdot 5$ possibilities.
The first case is a little bit harder, but not much. Here we have $7 \cdot 6 \cdot 5 \cdot 6$ (the first three kids pick a flavour that hasn't been picked yet, after which there are 6 ways to distribute the remaining two pairs of flavours among the remaining four kids). Here we have ${5 \choose 2} = 10$ ways of deciding which two flavours get picked twice, so the total number here is $7 \cdot 6 \cdot 5 \cdot 6 \cdot 10$.
The total of these two cases is $7 \cdot 6 \cdot 5 \cdot 4 \cdot {5 \choose 1} + 7 \cdot 6 \cdot 5 \cdot 6 \cdot {5 \choose 2} = 7 \cdot 6 \cdot 5 \cdot (4 \cdot 5 + 6 \cdot 10) = 210 \cdot 80 = 16800$ which is indeed the number we got from our simulation.
So the probability of the children having picked all flavours is $\frac{16800}{78125}$.
Here are the results for 7 children with different numbers of flavours. $$\begin{array}{rrr} \begin{array}{c}\text{Nr. of flavours}\end{array} & \begin{array}{c}\text{Nr. of combinations} \\ \text{with all flavours chosen}\end{array} & \begin{array}{c}\text{Nr. of possible combinations}\end{array} \\ \hline 1 & 1 & 1 \\ 2 & 126 & 128 \\ 3 & 1806 & 2187 \\ 4 & 8400 & 16384 \\ 5 & 16800 & 78125 \\ 6 & 15120 & 279936 \\ 7 & 5040 & 823543 \\ \end{array}$$ The general question remains to find a formula for different $m$ (hunters or children) and $n$ (rabbits or icecream flavours). I can explain all the numbers in the table above, but so far I haven't been able to formulate the general formula.
|
2022-05-16 16:22:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8989985585212708, "perplexity": 1646.6137039891514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510138.6/warc/CC-MAIN-20220516140911-20220516170911-00248.warc.gz"}
|
http://mathematica.stackexchange.com/questions/48222/what-summations-can-mathematica-do
|
# What summations can Mathematica do? [closed]
I was very pleased to discover that Mathematica could do this summation and produce a symbolic result.
s1 =
1/nn Sum[Cos[2 π a (n - 1)] E^(-I (n - 1) (s - 1)/nn), {n, nn}] // Simplify
The answer is long but very useful for me. I also discovered that Mathematica could do some variants of this sum. However, Mathematica cannot do this summation.
s2 =
1/nn Sum[
Cos[2 π ((a - ϵ) (n - 1) + ϵ/(nn - 1) (n - 1)^2)] E^(-I (n - 1) (s - 1)/nn),
{n, nn}]
This leads to the big question what summations can Mathematica do? Help suggest that "Sum can do essentially all sums that are given in standard books of tables". Is my sum s2 in some way pathological that it cannot be done? My second question is thus can my second sum be coerced into a solution?
These sums come from Fourier analysis, where it is useful to have a symbolic expression for results that are often calculated numerically using the discrete Fourier transform (Fourier in Mathematica). In order to investigate my sum, I looked at the continuous Fourier transform in which time is equal to n -1 and frequency is (s - 1)/nn. The continuous Fourier transform is given by the integral
1/t2 Integrate[Cos[2 π ( (a - ϵ) t + ϵ/t2 t^2)] E^(-I 2 π f t), {t, 0, t2}]
Mathematica can do this integral which comes out in terms of error functions. However, I can't see how to use this to help get the symbolic solution to the sum s2.
In summary what summations can Mathematica do? Is there any hope for getting an closed-form solution for my second sum?
-
## closed as too broad by Michael E2, RunnyKine, m_goldberg, rasher, JensMay 21 '14 at 1:36
There are either too many possible answers, or good answers would be too long for this format. Please add details to narrow the answer set or to isolate an issue that can be answered in a few paragraphs. If this question can be reworded to fit the rules in the help center, please edit the question.
I'm afraid that in its current form this question might be deemed too general and it is at risk of being closed ... It can do some surprising sums such as Sum[1/Prime[k]^2, {k, Infinity}]. No one but the programmers working on Sum would be able to give a complete list of what it can do and what it cannot. It would be better to focus the question on your second sum instead of on what sorts of sums Mma can do in general. – Szabolcs May 20 '14 at 22:22
+1 for second question. – Fred Kline May 20 '14 at 23:11
ϵ/(nn - 1) (n - 1)^2 this could have zero divide – Fred Kline May 20 '14 at 23:49
@Szabolcs well, it can only sort-of do that one--it returns an answer in terms of a special function representing exactly this, and it cannot do Sum[1/Prime[k + 1]^2, {k, Infinity}], for example. (NSum chokes on this one as well, apparently.) Perhaps an apt example of how what it can and can't do might be extremely specific. – Oleksandr R. May 21 '14 at 0:59
@Szabolcs Thanks for your comment. This is very interesting. I had (probably naively) thought that if Mathematica could do the integration it should be able to do a similar sum. I even thought there were theorems that indicated what could be done. So I have learnt something. What do you suggest: should I start a new question on my specific problem or edit this one to make my specific problem the subject? – Hugh May 21 '14 at 8:48
|
2015-03-06 04:01:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47354254126548767, "perplexity": 920.2812380716892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936465487.60/warc/CC-MAIN-20150226074105-00194-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://stacks.math.columbia.edu/tag/05D9
|
Lemma 29.34.20. Let $f : X \to S$ be a morphism of schemes. Let $\sigma : S \to X$ be a section of $f$. Let $s \in S$ be a point such that $f$ is smooth at $x = \sigma (s)$. Then there exist affine open neighbourhoods $\mathop{\mathrm{Spec}}(A) = U \subset S$ of $s$ and $\mathop{\mathrm{Spec}}(B) = V \subset X$ of $x$ such that
1. $f(V) \subset U$ and $\sigma (U) \subset V$,
2. with $I = \mathop{\mathrm{Ker}}(\sigma ^\# : B \to A)$ the module $I/I^2$ is a free $A$-module, and
3. $B^\wedge \cong A[[x_1, \ldots , x_ d]]$ as $A$-algebras where $B^\wedge$ denotes the completion of $B$ with respect to $I$.
Proof. Pick an affine open $U \subset S$ containing $s$ Pick an affine open $V \subset f^{-1}(U)$ containing $x$. Pick an affine open $U' \subset \sigma ^{-1}(V)$ containing $s$. Note that $V' = f^{-1}(U') \cap V$ is affine as it is equal to the fibre product $V' = U' \times _ U V$. Then $U'$ and $V'$ satisfy (1). Write $U' = \mathop{\mathrm{Spec}}(A')$ and $V' = \mathop{\mathrm{Spec}}(B')$. By Algebra, Lemma 10.139.4 the module $I'/(I')^2$ is finite locally free as a $A'$-module. Hence after replacing $U'$ by a smaller affine open $U'' \subset U'$ and $V'$ by $V'' = V' \cap f^{-1}(U'')$ we obtain the situation where $I''/(I'')^2$ is free, i.e., (2) holds. In this case (3) holds also by Algebra, Lemma 10.139.4. $\square$
There are also:
• 2 comment(s) on Section 29.34: Smooth morphisms
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
|
2021-06-24 06:19:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9895680546760559, "perplexity": 120.10563817842676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488551052.94/warc/CC-MAIN-20210624045834-20210624075834-00001.warc.gz"}
|
https://blender.stackexchange.com/questions/3346/how-to-iterate-over-material-index-using-python
|
# How to iterate over material index using Python
I want to iterate over the material index of material slot and change the value of each material index. How can this be done in Python?
I've tried using the Python shown below, but it doesn't work correctly.
import bpy
object = bpy.context.object
material = object.active_material
for num in range(0, 5):
object.active_material_index = num
else:
The fastest way to loop over the object's materials is through its material_slots attribute. Here's an equivalent code for toggling all object materials shadeless property:
import bpy
for m in bpy.context.object.material_slots:
The only issue I see in your own code is that you're assigning the material to a variable before changing active material index, which renders the latter operation moot. Changing the index before assigning the resulting material to a variable, will yield the result you want.
• this work best, using material_slots seems solve the issue – aditia Oct 16 '13 at 3:36
• doesn't look like it works for you. maybe further testing is needed. – Adhi Oct 17 '13 at 14:11
The error with the script is its not re-assigning material.
Corrected script:
import bpy
object = bpy.context.object
for num in range(0, 5):
object.active_material_index = num
material = object.active_material # <-- changed line
|
2020-01-19 08:23:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21100234985351562, "perplexity": 2669.3339148879845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594333.5/warc/CC-MAIN-20200119064802-20200119092802-00059.warc.gz"}
|
https://thuses.com/best/year/
|
## A proof of a general slice-Bennequin inequality
In this blog post, I’ll provide a slick proof of a form of the slice-Bennequin inequality (as outlined by Kronheimer in a mathoverflow answer.) The main ingredient is the adjunction inequality for surfaces embedded in closed 4-manifolds. To obtain the slice-Bennequin inequality (which is a statement about surfaces embedded in 4-manifolds with boundary) we use […]
## What Topological Spaces are π₀?
This was a fun question I thought about once. My answer is at the end, in case you’d like to try solving the problem yourself. The question is likely more interesting than my solution. A well known theorem says that every group occurs as for some topological space . It’s not a hard construction; is […]
## Isometries of a product of Riemannian manifolds
Theorem. Let and be two compact Riemannian manifolds with irreducible holonomy groups. Let . Then This result seems to be a folklore, probably well known to the specialists, although it is hard to find it in the literature. The only discussion which I managed to find on Mathoverflow contains an answer by Igor Rivin (due […]
## Demystification of the Willmore integrand
The Willmore energy for a surface in Euclidean 3-space is defined as , where is the mean curvature of and its area form. It’s known to be invariant under the conformal transformations (whereas the mean curvature itself is not). White, and later Bryant noticed that the 2-form , where stands for the Gaussian curvature, is […]
## Simple Proof of Tokuyama’s Formula
Tokuyama’s Formula is a combinatorial result that expresses the product of the deformed Weyl denominator and the Schur polynomial as a sum over strict Gelfand-Tsetlin patterns. This result implies Gelfand’s parametrization of the Schur polynomial, Weyl’s Character Formula, and Stanley’s formula on Hall-Littlewood polynomials — all for ; also, the formula is related to alternating […]
## The Picard number of a Kummer K3 surface
Let be a separably closed field of characteristic not , and an abelian surface. Then it is a basic fact (e.g. see Example 1.3 (iii) of Huybrechts’ “K3 Surfaces”) that one can make a K3 surface out of . The construction is as follows. Consider the involution given by The fixed locus of this involution […]
## The torsion component of the Picard scheme
This post is a continuation of Sean Cotner’s most recent post [see An example of a non-reduced Picard scheme]. Since writing that post, Bogdan Zavyalov shared some notes of his proving the following strengthened version of the results described there. Main Theorem. Let be a noetherian local ring and let be a finite flat commutative […]
## The étale cohomology of curves over finite fields
When I was a graduate student, Zev Rosengarten (a former student of Brian Conrad) and I used to eat dinner at Stanford’s Arrillaga dining hall a lot. We’d talk about math for hours, but one thing that will forever be ingrained in my mind is how Zev was able to do all these complicated spectral […]
## An explicit construction of indecomposable vector bundles over an elliptic curve
In the celebrated paper “Vector bundles over an elliptic curve,” M. Atiyah classifies indecomposable vector bundles, namely he provides a bijection between indecomposable bundles of arbitrary rank and degree (denoted by ) and (where ). The latter is described explicitly: there is a distinguished element such that for any other one has with a line […]
## A shortcut in Kapovich’s proof of Haupt’s theorem
The Teichmüller space of genus curves carries the Hodge bundle , the total space of which maps into the first cohomology space via the period map (i. e., a holomorphic 1-form maps into its cohomology class). Haupt’s (or Haupt–Kapovich) theorem describes the image in terms of the integral structure on and the intersection pairing . It […]
|
2021-10-18 01:38:01
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9283384680747986, "perplexity": 546.389310820898}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585186.33/warc/CC-MAIN-20211018000838-20211018030838-00019.warc.gz"}
|
http://physicsgoeasy.blogspot.com/2010/08/system-of-varying-massrocket.html
|
## Pages
### System of varying mass(Rocket)
NOTE:- For full notes on Impulse and linear momentum and System of particles visit
physics study material for IITJEE/AIEEE/PMT and board examinations
• While studying classical mechanics we have always considered the particle under consideration to have constant mas
• Some times it is required to deal with the particles or system of particles in which mass is varying and motion of the rocket is one such examples
• In a rocket fuel is burned and the exhaust gas is expelled out from the rear of the rocket
• The force exerted by the exhaust gas on the rocket is equal and opposite to the force exerted by the rocket to expell it
• This force exerted by the exhaust gas on the rocket propels the rocket forwards
• The more gass is ejected from the rocket ,the mass of the rcoket decreases
• To analyze this process let us consider a rocket being fired in upwards direction and we neglect the resistance offered by the air to the motion of the rocket and variation in the value of the acceleration due to gravity with height
• Figure above shows a rocket of mass m at a time t after its take off moving with velocity v.Thus at time t momentum of the rocket is equal to mv.THus
pi=mv
• Now after a short interval of time dt,gas of total mass dm is ejected from the rocket
• If vg represents the downward speed of the gas relative to the rocket then velocity of the gas relative to earth is
vge=v-vg
And its momentum equal to
dmvge=dm(v-vg)
• At time t+dt,the rocket and unburned fuel has mass m-dm and its moves with the speed v+dv.Thus momentum of thee rocket is
=(m-dm)(v+dv)
• Total momentum of the system at time t+dt is
pf=dm(v-vg)+(m-dm)(v+dv)
Here system constitute the ejected gas and rocket at the time t+dt
• From Impulse momentum relation we know that change in momentum of the system is equal to the product of the resultant external force acting on the system and the time interval during which the force acts
• Here external force on the rocket is weight -mg of the rocket ( the upward direction is taken as positive)
• Now
Impulse=change in momentum
Fextdt=pf-pi
or
-mgdt=dm(v-vg)+(m-dm)(v+dv) - mv
or
-mgdt=mdv-vgdm-dmdv
term dmdv can be dropped as this product is neglibigle in comparison of other two terms
Thus we have
• In equation (14) dv/dt represent the acceleration of the rocket ,so mdv/dt =resulant force on the rocket
Therefore
Resultant Force on rocket=Upthrust on the rocket - weight of the rocket
where upthrust on rocket=vg (dm/dt)
• The upthrust on rocket is proportional to both the relative velocity (vg) of the ejected gas and the mass of the gas ejected per unit time (dm/dt)
• Again from equation (14)
As rocket goes higher and higher ,value of the acceleration due to gravity g decreases continously .The values of vg and dm/dt parctically remains constant while fuel is being consumed but remaining mass m decreases continously .This result in continous increase in acceleration of rocket untill all the fuel is burned up
• Now we will find the relation between the velocity at any time t and remaining mass.Again from equation (15) we have
dv=vg (dm/m) -gdt
• Here dm is a +ve quantity representing mass ejected in time dt.So change in mass of the rocket in time dt is -dm.So while calculating total mass change in rocket,we must change the sign of the term containing dm
dv=-vg (dm/m) -gdt --(16)
• Initially at time t=0 if the mass and velocity of the rocket are m0 and v0 respectively.After time t if m and v are mass and velocity of the rocket then integrating equation (16) within these limits
On evaluating this integral we get
v-v0=-vg(ln m- ln m0)-g(t-0)
or v=v0+vgln(m0/m) -gt (17)
• equation(17) gives the change in velocity of the rocket in terms of exhaust speed and ration of initial amd final masses at any time t
• The speed acquired by the rocket when the whole of the fuel is burned out is called burn-out speed of the rocket
|
2017-11-22 13:05:35
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.901375949382782, "perplexity": 1018.1983203386671}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806586.6/warc/CC-MAIN-20171122122605-20171122142605-00029.warc.gz"}
|
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=3108
|
## WeBWorK Main Forum
### solutions in instructor hardcopy (webwork 2.7)
by Gavin LaRose -
Number of replies: 6
Hi all,
I believe the following is the default behavior in WeBWorK 2.7 (at least, this is what I'm seeing): if an instructor generates a hardcopy of a set for which the problems have solutions, s/he sees the solutions regardless of:
• the selection of the "show solutions" checkbox in the generate hardcopy selector page; and
• the setting of "always_show_solution" in %permissionLevels in default.conf/localOverrides.conf.
Can someone else confirm or refute this behavior? It looks like a bug to me, but it may also be that I have something messed up at my end.
Thanks,
Gavin
### Re: solutions in instructor hardcopy (webwork 2.7)
by Michael Gage -
see
http://bugs.webwork.maa.org/show_bug.cgi?id=2707
for more discussion. Not sure if it is a bug or a feature. :-)
You might ask Geoff what changes he made if any. I couldn't find a change in the recent pull requests.
### Re: solutions in instructor hardcopy (webwork 2.7)
by Gavin LaRose -
Hi Mike,
Thanks. From the discussion there it does sound as if the code was patched to make the "show solutions" selection be honored, but that doesn't appear to be the case at the moment. I'll follow up with Geoff.
I'd agree with Danny that this is a bug, not a feature.
Gavin
### Re: solutions in instructor hardcopy (webwork 2.7)
by Danny Glin -
This happened to me too. I'm hoping that this is not the intended behaviour. I had a student who was having login issues, and I wanted to print him a copy of his assignment to start working on, but I couldn't due to not being able to suppress solutions.
Danny
### Re: solutions in instructor hardcopy (webwork 2.7)
by Gavin LaRose -
Hi all,
Geoff indicates that the behavior that I expected should be the default if one has a fully updated installation of PG. The update would have been made on or around 31 August 2013, so that would require an update to PG after that.
(Thanks, Geoff!)
Gavin
### Re: solutions in instructor hardcopy (webwork 2.7)
by Danny Glin -
I changed always_show_solution to "nobody" on my install, and solutions don't always show up.
I'm not sure I see the need for this setting at all. Does anyone see a problem with the following scenario:
• When a pg problem is loaded in a page, do the usual checks to see if solutions can be displayed (high enough permissions, after answer date, etc.). If so, include the solution in the page, hidden with javascript (which is what happens for students in 2.7 already). Then it is one click and no page loads to view the solution.
• For hard copies, leave the configuration as it was. When generating a hardcopy, let the user choose if solutions are displayed.
Danny
|
2023-02-04 15:45:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42147374153137207, "perplexity": 4145.072863387429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500140.36/warc/CC-MAIN-20230204142302-20230204172302-00505.warc.gz"}
|
https://sea-man.org/testy/radar-observation
|
.
Site categories
1/ Help
:
# Crew Evaluation Test online for seamans about Radar Observation and Plotting
Welcome to the website where you can pass online the Computer Based Test (CBT) also known as Crew Evaluation System (CES) on the subject «Radar Observation and Plotting». Practice like this will help you as a marine specialist improve your knowledge with the help of online studying and appraisal practice. CES/CBT based on practical information and marine specialists experience.
CES & CBT tests developed for evaluating seaman basic knowledge by Seagull Company (rebranded as «OTG»), is an evaluating online-tool, used for revealing any professional preparation needed in specific fields of knowledge, defined by STCW.
CES tests have proven themselves as good tools for the selection and recruitment process, as well as advancing the level of knowledge of the current officers and crew. Ocean Technologies Group use various subjects for question creation, which includes:
• Crowd and Crisis Management;
• Ballast water management;
• Handling and Stowage;
• Vessel operation management and safety;
• Marine engineering;
• Maintenance and repair, etc.
Current test contains Seagull CES questions on the subject «Radar Observation and Plotting». Those questions can be used for competence verification specialist capable of preventing accidental situations related with transporting safety, or also for self-examination.
«Radar Observation and Plotting» subject includes theoretical and practical information about radar navigation, observation and plotting. Knowledge of this information directly shows employee’s competence who holds a relevant post on a vessel, provides to set up and work with radar and radiolocating equipment, understand designation specifics to use the radar data for navigating considering plotting error to know ship position relatively to other ships or any different objects requiring attention, for avoiding collisions.
On this site Crew Evaluation System Test on the subject «Radar Observation and Plotting» contains 53 questions you need to answer with no possibility to go back to previous question. Therefore, we recommend carefully reading each question and making decision with no hurry. In case you have some difficulty answering, you have also possibility to request a hint.
Choose mode in which you want take CES test about:
Start test
Can the frequency of radar pulses transmitted per second affect the radar’s maximum range?
No.
Yes.
Only at short range.
Only during rain, snow etc.
Next question
Does the Collision Regulations give any preference to ships equipped with radar?
No.
Yes.
Only in good visibility.
Only during reduced visibility.
Next question
Does the ships trim affect radar minimum range?
No.
Yes, it may introduce obstructions and blind sectors.
Only at long distance.
Only at short distance.
Next question
How can the Beam Width Distortion be minimized?
Adjust the setting of the Brilliance control.
Reduce the setting of the Gain control.
Next question
How can the horizontal extent of a “Blind Sector” be accurately determined?
By visually looking for on-board obstructions.
By turning the ship through 360° and determining when a strong echo disappears and reappears on the radar screen.
By determining the direction of indirect echoes.
By looking at the sectors on the radar display with no sea clutter.
Next question
How does radar antenna height influence on sea clutter?
High antenna height results in less sea clutter being displayed.
High antenna height results in more sea clutter displayed on the screen.
Antenna height has no effect on the amount of sea clutter displayed.
Low antenna height results in more sea clutter being displayed.
Next question
How is the amplifier protected from the transmitted pulse?
By timing the pulse transmissions to precise times.
By having separate transmission and receiver scanners.
Not necessary.
Next question
IMO resolution 477 applies to all ships built after:
1972.
1982.
1984.
1992.
Next question
Is it a requirement to have the HM-suppress button spring loaded?
No requirement.
Yes, it is a IMO requirement.
HM-switch must not be spring loaded.
Next question
Marine radar bearing accuracy is generally:
Not very good.
Good, but not as good as the range rings.
Excellent.
Next question
Marine radar range accuracy is generally:
Excellent, but better with the VRM than the fixed rings.
Good, but the range rings are better than the VRM.
Poor.
Variable.
Next question
The action taken to avoid collision in reduced visibility should be:
Always by a large alteration of course to starboard.
Always by a reduction of speed.
Sometimes by no alterations as own ship may be the “Stand-on” vessel.
Positive, in ample time and avoiding a series of small alterations.
Next question
The actions required by the Collision Regulations can be ignored when:
In a traffic separation Line.
In clear visibility.
When at anchor.
Never.
Next question
The radar must be able to operate in relative wind speeds up to:
50 knots.
75 knots.
100 knots.
No requirement.
Next question
The reflection of radar pulses is similar to that of?
Sound waves.
Ice.
Snow.
Light waves.
Next question
We have normal transmission of radar waves when the radar horizon is:
The same as the visible horizon when at the same vertical height.
10 % longer than the visible horizon.
25 % longer than the visible horizon.
50 % longer than the visible horizon.
Next question
What is known as “Side Lobes” with reference to marine radar?
The unwanted affects on magnetic compass, when being close to radar equipment.
Unwanted lobes of energy transmitted outside the main radar transmission beam.
The separate transmissions eventually being returned to the scanner after bouncing back and forth between the sides of own ship and another vessel.
The horizontal extensions of the radar scanner.
Next question
What is sometimes known as a Coded Racon?
A racon, which automatically activate itself.
A racon only working at night.
A racon, which displays a Morse code on the radar screen.
A racon, which has the transmitter sending a signal continuously.
Next question
What is the correct setting for the “Gain” control?
No noise (speckled effect) visible on the screen.
A noise (speckled effect) visible across the whole screen.
No sea clutter remains visible on the screen.
A little noise (speckled effect) remains visible on the screen.
Next question
What is the correct type of radar display to be selected when at sea?
Always use a compass stabilized display either True Motion or Relative Motion to suit experience and environment.
Always use ships Head-up unstabilised display.
Always a North-up Stabilized Relative Motion display.
Always a North-up Stabilized True Motion display.
Next question
What is the effect of Sub-refraction?
Distance to the radar horizon is reduced.
Distance to the radar horizon is increased.
Distance to the radar horizon not effected.
Next question
What is the effect of error in own ships speed, when completing a plot?
Error in calculated CPA and TCPA.
Error in the targets course and speed.
Error in calculated target aspect and CPA.
Error in CPA, TCPA and target course.
Next question
What is the effect of gyro error?
All bearings will be ignored.
All ranges will be wrong.
Bearings and ranges will be wrong.
None of the three alternatives.
Next question
What is the fundamental principle of a marine radar?
Detection of other objects outside own ship.
Provide early warning of ships.
Determine the course of other ships.
Provide an early warning of ships on collision course.
Next question
What is the main content of Rule 19 in the Collision Regulations?
The conduct of vessels in sight of one another.
The necessity of keeping a proper lookout.
The conduct of vessels in restricted visibility.
The conduct of vessels in any visibility.
Next question
What is the main disadvantage of true plotting?
It provides immediate collision risk information.
Collision risk (CPA) can only be found by completing the plot.
The chart scale is not always suitable to be used for true plotting.
Can be done on ordinary piece of paper.
Next question
What is the main function of the antenna?
To provide information about echo range bearing.
Next question
What is the main function of the receiver?
Amplify the incoming signals.
Amplify the outgoing signals.
Display signals of interest.
Next question
What is the main purpose of plotting?
Obtain CPA and TCPA only.
Obtain target course and speed only.
Obtain information about whether danger of collision exists, CPA, TCPA, target course and speed.
Obtain target-calculated aspect only.
Next question
What is the main purpose of the parallel index lines?
To assist the navigator in maintaining the ships heading.
To assist the navigator in maintaining the ships steered course.
To assist the navigator in maintaining the desired course over the ground.
To assist the navigator in finding the ships set and drift.
Next question
What is the maximum radar “warm-up” time?
1 min.
2 min.
3 min.
4 min.
Next question
What is the meaning of “Radar Bearing Discrimination”?
The radar ability to display as separate spots on the screen, close targets on the same range.
The radar ability to display as separate spots on the screen, close targets on the same bearing.
The radar ability to pick up large targets.
The radar ability to pick up small targets.
Next question
What is the meaning of “Radar’s Range Discrimination”?
The radar ability to display as separate spots on the screen, close targets on the same range.
The radar ability to display as separate spots on the screen, close targets on the same bearing.
The radar ability to provide accurate bearings of small echoes.
The radar ability to provide accurate ranges of small echoes.
Next question
What is the minimum display diameter required on ships bigger than 1 600 tons but less than 10 000 tons?
250 mm.
340 mm.
360 mm.
380 mm.
Next question
What is the minimum number of range scales required?
2.
3.
5.
6.
Next question
What is the minimum radar display diameter for ships of 10 000 tons and upwards?
9 inch.
12 inch.
16 inch.
20 inch.
Next question
What is the purpose of a radar reflector?
Making small targets more visible to the eye.
Making large echoes smaller.
Making small objects more radar visible.
Next question
What is the purpose of determining the Aspect of a target vessel?
Provide the relative bearing of the target.
Provide the same information as could be seen visually out of the wheelhouse.
Determine the probable reflective properties of the target.
Avoid a collision risk developing.
Next question
What is the purpose of the Anti-Sea Clutter controls?
Reduces the amount of echoes from the sea waves.
Reduces the strength of the transmitted signal.
Reduces the height of the sea waves.
Reduces the affects of the echo returns from the sea waves.
Next question
What is the purpose of the Gain control?
To adjust the amount of amplification of the echoes.
Adjust the strength of the transmitted signal.
Adjust the brightness of the screen.
Next question
What is the purpose of the Heading Line (HL) suppression control?
To switch off the HL permanently in order to see under the line on the screen.
To switch off the HL temporarily in order to see under the line on the screen.
To switch on the HL.
To adjust the position of the HL in azimuth.
Next question
What us the purpose of the VRM control?
To measure distance of a target on the screen accurately from own ship.
To measure bearing of a target on the screen accurately.
To measure range and bearing of a target on the screen accurately.
To measure the length of all the target trails on the screen.
Next question
What is the quickest method of deciding whether collision danger exists or not?
By taking several bearings of the target.
By taking several range measurements to the target.
By true plotting.
By relative plotting.
Next question
What is the relation between the scanner’s horizontal width (size) and horizontal beam width?
No relation.
Small scanner horizontal width, narrow horizontal beam width.
Large scanner horizontal width, narrow horizontal beam width.
Large scanner horizontal width, wide horizontal beam width.
Next question
What is the required accuracy of the heading marker?
+/- 0,5°.
+/- 1,0°.
+/- 1,5°.
No requirement.
Next question
What radar range should be selected to suit all conditions?
The 12 miles range.
The 12 miles range with frequent changes to higher ranges to determine distant targets.
The range most suitable to the proximity of targets and land, but with frequent checks of higher and lower ranges.
The 6 mile range with frequent observations of the 12 mile range.
Next question
When completing a full Relative plot (North up), what input information is required?
A series of target ranges and bearings and own ships course and speed over the ground.
A series of target ranges and bearings and own ships course and speed through the water.
A series Ranges and bearings only.
The CPA and TCPA.
Next question
When does a radar display the identity signal of a racon?
When the wavelength of the racon is similar to the radar in use.
Always when using all marine radars.
Always when using a S-band radar.
No radar can activate a racon.
Next question
Which course should be fed in to the radar with a stabilised radar picture?
Magnetic Compass course steered.
Master Gyro course steered.
True course steered by own ship.
True course made good over the ground.
Next question
Which of the following provides the complete answer, when related to the effect of using incorrect bearings in plotting?
Calculation of CPA is effected.
Calculation of TCPA is effected.
Calculation of target course and speed are effected.
CPA, TCPA, Course and Speed of the target are effected.
Next question
Which of the radar controls can be left at the setting determined when the set was first switched on?
Tuning control.
None.
Sea Clutter.
Pulse length.
Next question
Which radar condition can be associated with a temperature inversion?
Ducting.
Sub-refraction.
Super refraction.
Next question
Why should radar still be used during the day in clear visibility?
To avoid looking out the wheelhouse windows.
To improve confidence and competence in the use of that specific radar.
To remove the necessity of keeping a good lookout.
Because radar is the only method to determine any risk of collision from other ships.
Show result
* In some questions may be more, than one right answer.
Did you find mistake? Highlight and press CTRL+Enter
Апрель, 18, 2022 1830 0
Favorite articles
• Список избранных статей пуст.
Here will store all articles, what you marked as "Favorite". Articles store in cookies, so don't remove it.
$${}$$
|
2022-10-05 02:27:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3391881585121155, "perplexity": 6570.1943916029295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337531.3/warc/CC-MAIN-20221005011205-20221005041205-00080.warc.gz"}
|
http://www.oalib.com/relative/3814108
|
Home OALib Journal OALib PrePrints Submit Ranking News My Lib FAQ About Us Follow Us+
Title Keywords Abstract Author All
Search Results: 1 - 10 of 100 matches for " "
Asian Journal of Information Technology , 2012, Abstract: This paper deals with sorting a list of items. Sorting within a linear time is always desirable. We have many sorting algorithms. But the complexities of almost all of them are not linear. Here we have proposed a sorting algorithm named K-Index-Sort whose time complexity is O(n). We have used a temporary character array that will hold a track character against every input number. This is an interesting thing of that method as the input list is being sorted with the help of a character array. For every input k, a track symbol (like ‘-‘, ‘, ‘#, any symbol) is placed to that character array to an index of k. After collecting the index numbers sequentially from that where the track symbol is residing we will get sorted list. Distinct integer numbers are a prerequisite of that algorithm. This algorithm will perform better performance for k=O(n) where k is the maximum number and n is the number of input items. 中国图象图形学报 , 2011, Abstract: The index map after vector quantization has a strong statistical correlation.That means the neighboring indices are the same or the offset between them is very small.Codebook sorting can,according to some criteria,enhance the correlation among neighboring indices.Based on the squared Euclidean distance between code words,a new codebook sorting method is proposed.Compared with the conventional mean-ordered codebook,the distance-ordered codebook has a much higher correlations between neighboring indices and the offset become even smaller.As a result,distance-ordered codebook can also significantly improve the compression efficiency of the AICS (adaptive index coding scheme) algorithm. Jens Gerlach Computer Science , 2013, Abstract: In a totally ordered set the notion of sorting a finite sequence is defined through a suitable permutation of the sequence's indices. In this paper we prove a simple formula that explicitly describes how the elements of a sequence are related to those of its sorted counterpart. As this formula relies only on the minimum and maximum functions we use it to define the notion of sorting for lattices. A major difference of sorting in lattices is that it does not guarantee that sequence elements are only rearranged. However, we can show that other fundamental properties that are associated with sorting are preserved. Journal of Computer and Communications (JCC) , 2017, DOI: 10.4236/jcc.2017.512006 Abstract: A kind of heap sorting method based on array sorting was proposed. Some advantages and disadvantages of it were discussed. It was compared with the traditional method of direct application. In the method, the ordered keywords in the array are put into the heap one by one after building an empty heap. This method needs relatively less space and is fit for ordered sequence. 计算机科学 , 2005, Abstract: A common query against large protein and gene sequence data sets is to locate targets that are similar to an input query sequence.The current set popular search tools,such as BLAST,employ heuristics to improve the speed of such searches.However,such heuristics can sometimes miss targets,which in many cases is undesirable.The alterna- tive to BLAST is to use an accurate algorithm,Such as Smith-Waterman(S-W) algorithm.However,these accurate al- gorithms are computationaUy very expensive.Recently,a new technique,OASIS,has been proposed to improve the ef- ficiency and accuracy by employing dynamical programming during traversing suffix tree and its speed is comparable to BLAST.But its main drawback is too much memory consuming.We propose an efficient and accurate algorithm for lo- cally aligning genome sequences.We construct a block sorting index structure for the large sequence.The index struc- ture is less than the suffix tree index and can be fit for large data size.Experimental results show that our algorithm has better performance than OASIS. Chinese Science Bulletin , 2001, DOI: 10.1007/BF03183395 Abstract: Chemical analysis of acid-insoluble fractions in loess and paleosols shows that concentrations of Fe and Mg were under control of wind sorting and post-depositional weathering-pedogenesis. The former caused Fe and Mg concentrated in the finer grain-size fractions, displaying synchronous variations, while the latter made Fe and Mg separated, leading to Fe retained in the weathered section and Mg leached out. Therefore, Fe/Mg ratios in the acid insoluble fraction of loess and paleosols can eliminate the effect of wind sorting and serve as an excellent proxy record on intensity of weathering-pedogenesis. Based on calculation, leaching percentage of Mg in the paleosol S1 from the Luochuan, Xifeng and Huanxian sections is 15%, 11% and 2%, respectively, and on average 9% for the paleosols S2–S14 from the Luochuan section, with the highest value amounting to 22% in S5-1, suggesting the strongest weathering-pedogenesis. Computer Science , 2011, Abstract: Previous compact representations of permutations have focused on adding a small index on top of the plain data<\pi(1), \pi(2),...\pi(n)>$, in order to efficiently support the application of the inverse or the iterated permutation. In this paper we initiate the study of techniques that exploit the compressibility of the data itself, while retaining efficient computation of$\pi(i)$and its inverse. In particular, we focus on exploiting {\em runs}, which are subsets (contiguous or not) of the domain where the permutation is monotonic. Several variants of those types of runs arise in real applications such as inverted indexes and suffix arrays. Furthermore, our improved results on compressed data structures for permutations also yield better adaptive sorting algorithms. Mathematics , 2005, Abstract: In 1966, Claude Berge proposed the following sorting problem. Given a string of$n$alternating white and black pegs on a one-dimensional board consisting of an unlimited number of empty holes, rearrange the pegs into a string consisting of$\lceil\frac{n}{2}\rceil$white pegs followed immediately by$\lfloor\frac{n}{2}\rfloor$black pegs (or vice versa) using only moves which take 2 adjacent pegs to 2 vacant adjacent holes. Avis and Deza proved that the alternating string can be sorted in$\lceil\frac{n}{2}\rceil$such {\em Berge 2-moves} for$n\geq 5$. Extending Berge's original problem, we consider the same sorting problem using {\em Berge$k$-moves}, i.e., moves which take$k$adjacent pegs to$k$vacant adjacent holes. We prove that the alternating string can be sorted in$\lceil\frac{n}{2}\rceil$Berge 3-moves for$n\not\equiv 0\pmod{4}$and in$\lceil\frac{n}{2}\rceil+1$Berge 3-moves for$n\equiv 0\pmod{4}$, for$n\geq 5$. In general, we conjecture that, for any$k$and large enough$n$, the alternating string can be sorted in$\lceil\frac{n}{2}\rceil$Berge$k$-moves. This estimate is tight as$\lceil\frac{n}{2}\rceil$is a lower bound for the minimum number of required Berge$k$-moves for$k\geq 2$and$n\geq 5\$.
|
2020-01-21 21:11:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48598700761795044, "perplexity": 2154.7870280513616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250605075.24/warc/CC-MAIN-20200121192553-20200121221553-00415.warc.gz"}
|
https://socratic.org/questions/how-do-you-find-the-real-solutions-of-the-polynomial-x-4-2x-3-12x-2-40x-32
|
# How do you find the real solutions of the polynomial x^4+2x^3=12x^2+40x+32?
Mar 26, 2017
Solutions to ${x}^{4} + 2 {x}^{3} = 12 {x}^{2} + 40 x + 32$ are $x = - 2$ and $x = 4$
#### Explanation:
${x}^{4} + 2 {x}^{3} = 12 {x}^{2} + 40 x + 32$
or $f \left(x\right) = {x}^{4} + 2 {x}^{3} - 12 {x}^{2} - 40 x - 32 = 0$
This will have roots as $\pm 1$, $\pm 2$, $\pm 4$, $\pm 8$, $\pm 16$ or $\pm 32$, although some of them may repeat.
Note that $x = - 2$ is a solution as $f \left(- 2\right) = 0$ and $\left(x + 2\right)$ is a factor of $f \left(x\right)$
Similarly $x = 4$ is a solution as $f \left(4\right) = 0$ and $\left(x - 4\right)$ is a factor of $f \left(x\right)$
As $\left(x + 2\right) \left(x - 4\right) = {x}^{2} - 2 x - 8$, dividing ${x}^{4} + 2 {x}^{3} - 12 {x}^{2} - 40 x - 32$ by ${x}^{2} - 2 x - 8$ we get
${x}^{2} \left({x}^{2} - 2 x - 8\right) + 4 x \left({x}^{2} - 2 x - 8\right) + 4 \left({x}^{2} - 2 x - 8\right)$ i.e.
our equation is $\left({x}^{2} + 4 x + 4\right) \left({x}^{2} + 4 x + 4\right) = 0$
or $\left(x + 2\right) \left(x - 4\right) {\left(x + 2\right)}^{2} = {\left(x + 2\right)}^{3} \left(x - 4\right) = 0$
Hence solutions to ${x}^{4} + 2 {x}^{3} = 12 {x}^{2} + 40 x + 32$ are $x = - 2$ and $x = 4$
|
2019-04-21 20:35:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 28, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9469167590141296, "perplexity": 163.1007053734949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578532882.36/warc/CC-MAIN-20190421195929-20190421221929-00523.warc.gz"}
|
https://scoop.eduncle.com/a-long-solenoid-is-embedded-in-a-conducting-medium-and-is-insulated-from-the-medium-if-the-current-through
|
CSIR NET Follow
December 28, 2019 11:42 am 15 pts
Magnetic field outside a solenoid is zero. How the question is even valid?
• 0 Likes
• Shares
• Paltu Sen
Great!! you have pointed out the right thing...In question they should mention the current through which portion of the conducting medium. Here we are actually getting a volume cu...
• Sanchari Pal
Why Electric field is proportional to emf? as integration( E. dl) =Emf. then E*2pi*r =Emf. and now this EMf is proportional to the current. means only right hand side should be pr...
• Paltu Sen
You just imagine a circular disc of radius r, perpendicular to the axis of the cylinder and find the magnetic flux through this disc. For better understanding you can study Griffit...
• Paltu Sen
1.The induced current is proportional to induced emf which is proportional to induced circumferential electric field here. 2. Since the value of B is nonzero inside the solenoid we...
• Sanchari Pal
first of all the 1)induced current is asked not the Electric fied 2)2ndly the value of B = (neu)o nI is valid inside the solenoid not outside it. then? how is this response is vali...
|
2021-05-14 04:43:16
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8192609548568726, "perplexity": 1881.0222702942585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991737.39/warc/CC-MAIN-20210514025740-20210514055740-00241.warc.gz"}
|
https://www.intmath.com/blog/videos/friday-math-video-the-surprising-math-of-cities-and-corporations-6364
|
Search IntMath
Close
450+ Math Lessons written by Math Professors and Teachers
5 Million+ Students Helped Each Year
1200+ Articles Written by Math Educators and Enthusiasts
Simplifying and Teaching Math for Over 23 Years
# Math movie: The surprising math of cities and corporations
By Murray Bourne, 29 Jul 2011
Physicist Geoffrey West takes us through an examination of how metabolic rates and body masses occur along a log-log graph to the idea that pace of life slows with increasing size.
It all leads to a "scientific theory of cities" based on math. of course.
That log-log graph occurs around the 7-minute mark. The idea is that mice and birds have a low metabolic rate (less than 0.5 Watt) and low mass (less than 100 g), and at the other end of the scale, elephants have a much higher metabolic rate (more than 1000 Watts) and a higher mass (more than 1000 kg). Most animals fit quite closely to the line between those 2 extremes, on the log-log scale.
Be the first to comment below.
### Comment Preview
HTML: You can use simple tags like <b>, <a href="...">, etc.
To enter math, you can can either:
1. Use simple calculator-like input in the following format (surround your math in backticks, or qq on tablet or phone):
a^2 = sqrt(b^2 + c^2)
(See more on ASCIIMath syntax); or
2. Use simple LaTeX in the following format. Surround your math with $$ and $$.
$$\int g dx = \sqrt{\frac{a}{b}}$$
(This is standard simple LaTeX.)
NOTE: You can mix both types of math entry in your comment.
From Math Blogs
|
2023-03-23 21:18:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8649676442146301, "perplexity": 6270.136042107662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00704.warc.gz"}
|
https://meta.stackoverflow.com/questions/261592/how-much-research-effort-is-expected-of-stack-overflow-users
|
# How much research effort is expected of Stack Overflow users?
I'm well aware that some research effort is expected of Stack Overflow users before they post any new questions, but I'm not sure just how much research effort is considered adequate.
I asked a question because I had found no search engine results that offered a clear answer, even after searching for almost an hour. Nonetheless, one Stack Overflow user was apparently dissatisfied with the amount of effort that I had put into this question, and they replied to my question with a critical (and rude) comment. Should I take their advice and refrain from asking for help even when I am not able to answer my own question with a reasonable amount of effort?
• This misunderstanding seems common enough. Even if I actually researched for days I could imagine someone may come and mistook my question as an opportunity to accuse "you didn't do your homework". The research time also seems topic-dependent. That's why Tell us what you found and why it didn’t meet your needs. seems a good idea to me. Aaron Kurtzhal emphasized it from the faq. Let the community decide whether your amount of research in the particular topic was sweet, and give you helpful, constructive feedback if it needs improvement. – n611x007 Aug 22 '14 at 17:08
• Related: 1) Introducing Top Question Writers 2) quora.com/..../. In case folks are curious about how fellow Q&A sites at the other side of the internet is doing it. – Pacerier Feb 24 '16 at 16:36
• One thing that I discovered when asking questions here, half the time the process of making my question the best question I can ask leads me to answering my own question. The kind of putting things in order that is required in a good question makes the answer obvious. – user3458 Jun 22 '16 at 15:20
• Funny how some questions are downright shamelessly asked when there is already (at least) one out there with an answer! And this "new" questions are voted up substantially. OTOH you can miss the fact that your matter has already been raised due to existing misformulated questions. This case should not be imputed on the new asker. Care must be taken both in discouraging carelessly asked questions, and encouraging editing these questions. Oh, and the knowledge in the answers should belong to the whole community not the individual asking user. So being overly specific is not always beneficial. – mireazma Mar 1 at 17:00
• Should I take their advice and refrain from asking for help... You should ignore that advice if you spent an hour looking and could not find anything before asking. The Nick Burns' of our community need to go back to lurking on expertsexchange or at least stop outwardly behaving like they do. If the question was "bad" to them, they can ignore it easier than they can comment on it negatively. – StingyJack Apr 8 at 3:10
A lot. Asking a question on Stack Overflow should be the last step in your process for finding an answer—if the information that you need already exists, then you should be able to find it before asking.
You want to
• Troubleshoot.
• Find books.
It is important to emphasize that we want to help you, but you also need to help yourself. The more effort you put into your question, the more benefit that you and future readers will get out of the answer(s). Understand that our time is not free, although we do not charge for it. Answering low quality, poorly researched, and/or duplicated questions becomes tiresome and does not contribute meaningfully to our goal of building a knowledge base, so please do your part to avoid this.
That said, if the critical comment you're receiving is indeed rude—you should flag it. But you should also assume good faith, try to understand the frustration that motivated it, and strive to do better in the future.
Searching and researching is a skill, and mastery is achieved only through practice. The abilities you gain on the road to asking questions here will serve you well long into the future.
In my opinion, there are four steps that one must take before asking a question on Stack Overflow:
Step 1: If applicable, research any core documentation + tutorials associated with your problem.
Step 3: If no results return from step 2, do enough extra research to formulate a specific, well-written, on-topic, and objective question.
Stack Overflow's mission is to be an objective Q&A site "for professional and enthusiast programmers". Period. It was not created to be a crutch for the lazy, nor was it created to be a "playground" for the experts. Stack Overflow has evolved to become, not just a programming Q&A site; but THE programming Q&A site.
It shouldn't matter if every other site on the Internet has the answer you're looking for; if there exists a specific, well-written, on-topic, and objective question that has not been asked & answered on Stack Overflow, it should be. Do not be intimidated into withholding questions simply because you don't hold a computer science degree in the subject, or are concerned about the precious minutes it would take away from someone else's busy schedule.
Yes, it is important for askers not to waste the time of those who volunteer to help them; but the whole reason the site was created was so that askers can save theirs.
• You should add a step 5: If step 3 provided the answer in enough detail. Answer your own question. So now the next person can searching will find the question and answer on SO. – Martin York Jul 15 '15 at 23:38
• "...simply because you don't hold a computer science degree in the subject..." That's not the expectation. The expectation is that they not use StackOverflow as a substitute for the most basic research efforts. The OP claims to have spent an hour researching. Do you really believe that? The OP's question would be quickly answered at your step 1. – user2437417 May 13 '18 at 16:26
• "It shouldn't matter if every other site on the Internet has the answer you're looking for..." Yes, it should. Because any author could self answer the question if so if they've done their research. That necessarily means that asking the question here would be using SO as "a crutch for the lazy," which you say is inappropriate. That's a contradiction. – jpmc26 Jun 26 '19 at 19:02
• there is always a gap between what is meant to be, and what it really is. – Greco Jonathan Aug 26 '19 at 8:36
• "The OP claims to have spent an hour researching. Do you really believe that? The OP's question would be quickly answered at your step 1." Of course it's true. Carrying out step 1 in sufficient detail to answer this question would take a full workday minimum, probably a full work week. That is a completely unfair amount of work to demand of a querent before they are permitted to ask the question. – Jacob Kopczynski Jun 11 '20 at 18:08
• "It shouldn't matter if every other site on the Internet has the answer you're looking for;" I would say that it should, for the simple reason that if that's the case, there's no reason that even preliminary research efforts would come up empty-handed. – Karl Knechtel Mar 30 at 9:04
The problem with this is that some people are better at searching the internet than others. For some questions, a very slight change in the approach to the search engine can make a very large difference in the quality of the results. So we do get situations where someone has, in fact, made a nontrivial effort, and still ends up asking a question to which an expert can find the answer within three clicks.
On the other hand, there are some warning signs that should indicate to you that you're missing something simple, and you need to rethink how you are searching. Here we have a non-esoteric programming language. Here we have, indeed, what looks like a very simple question about this programming language. It should really bother you that you can't find an answer to your question.
If you can't think of anything else to search for in a case like this, searching for a tutorial can't hurt. And in this case, searching includes visiting the best web site(s) on the topic.
However, if you've tried A, and you've tried B, and you've looked for a tutorial, and you've taken a walk around the block, and you still have come up empty-handed, then ask a question here. You might get the occasional snide comment, which you should flag, but you will be justified in posting your question.
But if this keeps happening to you over and over, you really need to rethink how much effort you are actually putting into trying to find the answer on your own.
• I think some people don't realize that Googling well is a real skill, one that not everybody has. – Mark Ransom Jan 10 '20 at 23:55
• @MarkRansom That doesn't mean that people who can Google should be doing the work for those who can't, any more than we should be writing code for people who can't. If you don't have the skill, learn. – Ian Kemp Mar 4 '20 at 9:53
• @IanKemp some skills aren't easy to learn, either you have it or you don't. Good Googling requires knowing which search terms are going to be most relevant, how to properly combine them, and how to exclude results that don't relate. If for example you don't realize that "golang" is often used as a synonym for the "go" language you're going to be pretty stuck. – Mark Ransom Mar 4 '20 at 15:06
• "That doesn't mean that people who can Google should be doing the work for those who can't" No, that's exactly what it means. – Jacob Kopczynski Jun 11 '20 at 18:10
• The issue is that people with expertise are often not aware of how much their expertise helps them search for the right resource. If you don't know or can't guess what something is called, searching by descriptive phrases can work but often does not. I think SO should have a section devoted specifically to requesting resources or terms, it would really help avoid polluting the programing forum with jargon questions. – Stonecraft Jun 15 '20 at 13:44
• This is why I have developed the policy, when telling people that their question is easy to answer by searching, of showing them the exact query I used and providing a link to the results, as well as the solution page. – Karl Knechtel Mar 24 at 10:42
• If someone asks for help and you are able but unwilling to do so, it might be better to keep scrolling than to downvote and/or post a snarky comment. Everyone here is supposed to be a volunteer. – Tim Randall Apr 7 at 13:27
As moderators, we typically try not to make controversial statements; if something is accepted by the community, then we go with it; but this question is a shining example of where conventional wisdom is toxic to a sustainable community.
Don't misunderstand me; I believe some respect of others is required to ask a good question; but I don't think it's appropriate to go as far as the top rated answer suggests.
Effort is misused as a word; so much so that we should probably banish it from our vocabulary.
Stack Overflow was created as a repository of useful programming information; that means that if it's of use to others, it should be here, regardless of how many times the OP commits Self-flagellation.
It's not about how much effort you put it, it's about how much you respect other people's time. A common characteristic of bad questions is that they don't respect other people's time, because they:
• Have little to no punctuation
• terrible spelling
• don't provide the essential information we need to solve the problem
• don't tell us what the problem is
• expect us to write their program for them
If that's how your question looks, don't be surprised if it's closed.
If someone is knowledgeable about your programming language was able to find the answer rather quickly, but you weren't, that's ok. That means the problem isn't you, the problem is the answer isn't easily discernible as such. And now you've contributed to the community by making it easier for someone else to find the answer.
• I would say the answer to the conflict of "easy to answer" and "need to do research" is that simple questions should usually be self answered. In other words, if they're not already on the site and someone wants to contribute, they should find the answer themselves and then post it along with the question. – jpmc26 Jun 30 '19 at 2:18
• Excellent answer. – hepcat72 Jun 2 '20 at 21:54
• I agree with @jpmc26 about self-answers. I often ask questions that I later discover were foolish, but they did not seem foolish after many hours of banging my head against the wall. If I find the answer, the least I can do is put it there in case someone else goes through the same though process that I did. – Stonecraft Jun 15 '20 at 13:48
• I do like this answer. I feel like users are sometimes criticized for not doing enough research work to be able to answer their own question. And there's always the possibility that an expert can simply make a connection that a non-expert cannot. – Tim Randall Apr 7 at 13:40
• This answer embodies how I want SO to be as both a resource and a community. I can only hope we get more open-minded individuals like you to respond to questions. I feel like over the years I've dealt with a lot more snobby folks on here; it has left a bad stain on the site's reputation, IMHO. – void.pointer Apr 10 at 19:58
From the Help Center article How do I ask a good question?, emphasis mine:
Sharing your research helps everyone. Tell us what you found and why it didn’t meet your needs. This demonstrates that you’ve taken the time to try to help yourself, it saves us from reiterating obvious answers, and above all, it helps you get a more specific and relevant answer!
Doing research is only half of what you need. Your question did not explain what you found and why that wasn't helpful to you.
• Searched using the wrong terms? If you post the search terms you used, someone can help you with better search terms.
• Spent a half hour on a website that had your answer, but you didn't see it? Someone familiar with the website can help.
• This is saying why it's important to do research, but that's not the question, the question is how much research should be done. This post doesn't answer that question. – Servy Oct 7 '13 at 16:55
• No, but it addresses the implicit "why did I get a snarky response?" question, so it is useful even if slightly off-topic ... – Ben Bolker Oct 7 '13 at 20:42
• I think that the power behind this answer is not obvious, and I think that it is very on topic. I can't count the number of times I have started writing a question, and by the time I was done researching and drafting the question, I had found a satisfactory answer. I think that there is a lot of power in taking a perceived problem in your head and framing it in a way that you can explain it to someone else. – nispio Oct 18 '13 at 19:18
• While I've only recently began posting regularly on SO, I will typically leave a comment asking if a user tried searching a specific phrase with a link to the Google search. It's not meant to be snarky, instead, it's meant to be helpful. When I started programming I used to ask my mentor twice a day, "What do I need to Google to figure this out?!?!?!" because I was searching the wrong terms. – silencedmessage Aug 22 '14 at 3:55
I know this is an old question, but I would like to add one point that I do not see addressed in the other answers:
It is not enough to do the research. You must also SHOW US that you have done the research.
Stating "I googled for hours and didn't find anything" is not satisfactory mostly because "finding nothing" is completely impossible. Typing a single search phrase in Google will get you millions of hits, which is far from "didn't find anything".
To start, tell us what search terms you used. Then we can help you with better search terms, including the exact terminology which is unfamiliar to you.
Maybe you found a page that was related to your search term but you were unable to see how you can adapt it to your situation. Provide a link and we can help you decipher it.
Ultimately, you need to demonstrate the effort you have put in.
• I broadly agree. I would note, however, that too many details of how you attempted to solve the problem can result in an overly long difficult to read questions. I've had people say questions aren't clear precisely because I've tried to include as many details as possible. I hypothesis there is also an effect where people don't want to answer questions which include a lot of attempts to solve a problem. Also the more your write, the less well proof-read it will be. Perhaps it's better sometimes to just ask a simple question, and deal with the obvious comments rather than to pre-empt them. – Att Righ Nov 27 '17 at 9:02
• @AttRigh Yes, getting the right amount of information in a question is a delicate balance. This balance is similar to the tension between "minimal" and "complete" in an MCVE. One of my favorite quotes from Albert Einstein applies: "Keep it as simple as possible but no simpler." – Code-Apprentice Nov 27 '17 at 15:35
• "finding nothing" is completely impossible is probably right. But I remember a time where Google wasn't smart enough to handle "R" (programming language) correctly and showed results of topic "word" instead. The results algorithm is a black box and especially this search engine uses all gettable data from a user to detect what user wants to search. This leads to many false postives in recall due to historical data if user wants to find documents of ambigous words (e. g. latex). This could be very frustrating. Search terms could also lead to different results for different users, then. – colidyre Sep 9 '18 at 19:11
• @colidyre Good point. Ideally an OP would write something like "I searched google for 'R' and got many irrelevant results such as...". Of course, we live in a far from ideal world. – Code-Apprentice Sep 10 '18 at 0:34
• I disagree with this. When I research I try a lot of different web-sites and a lot of different search terms. If I were to post all that it would be noise. You should not have to read my thesis on a subject. You should show that you have made some effort though by demonstrating some key related knowledge. – Bruce Adams Nov 29 '18 at 19:53
• @BruceAdams nothing in my answer says to write a thesis dissertation. I am only saying that stating "I searched the internet and found nothing" is insufficient. There is a lot of ground between these two extremes. – Code-Apprentice Nov 29 '18 at 20:29
• I absolutely agree its just the telling people the search terms you used that I took objection to. That's some appropriate but rarely. – Bruce Adams Nov 29 '18 at 23:18
• Sometimes I have done more than 50 searches with various combinations of keywords until I find a specific combination that gives one web page that has the clue I need to figure it out. So if I give up after 49 searches and post a question on SO am I really supposed to list all 49 searches with explanation of what they returned and why it wasn't what I wanted? This is the red tape for why I do that 50th search and never post a question. But it doesn't make SO the place to go to get answers - but it should be because the info is so hidden that if it were on SO people would be able to find it. – Jerry Jeremiah Sep 11 '20 at 1:05
• @JerryJeremiah "So if I give up after 49 searches and post a question on SO am I really supposed to list all 49 searches with explanation of what they returned and why it wasn't what I wanted?" Posting a summary that shows you have done some research is sufficient. Which of those 49 attempts seemed the most promising? Focus on those and we can point out what you missed or point you in the right direciton. – Code-Apprentice Sep 12 '20 at 7:01
• I heavily disagree with "Stating "I googled for hours and didn't find anything" is not satisfactory mostly because "finding nothing" is completely impossible. Typing a single search phrase in Google will get you millions of hits, which is far from "didn't find anything"." ---> so you expect someone to look through 1 million results? – 10 Rep Apr 16 at 23:52
• @10Rep "so you expect someone to look through 1 million results?" No. My point is that the statement "I googled for hours and didn't find anything" is demonstrably false. There are many more options than "didn't find anything" and "look through all million results". When you ask a question, you should demonstrate that you did the work by explaining what you found and why it doesn't solve your problem. – Code-Apprentice 2 days ago
Some people neglected the necessary learning process when approaching new things. I think it is important to point out that, when talking about research your question, research doesn't just mean search. Yes I agree it is a correct attitude to ask when you don't know. But it does not imply that one should ask whenever they come across something they don't know.
SO is not a site meant for those who skip the first 4 chapters of a tutorial or a book. There are plenty of tutorials, documentation and blog articles about virtually any topic one can think of. Is it really a problem that requires explanation or is the person just lazy to read? This is often a basis for my up-vote or down-vote.
The only type of new questions I can think of, is either an application that requires non-conventional decisions to be made, or a question about new technology (say HTTP v3 is released yesterday).
• on the other hand, writing tomes about things have become the intimidating justification of "value". On the top of that, we all may know there are industries built on making patents and some specifications deliberately long, obfuscated and hard to understand, an immoral way of protecting interests. Being succinct and concise while remaining understandable is very hard but it gives even more value to a work. In a quote attributed to Pascal, I didn't have time to write a short letter, so I wrote a long one instead. – n611x007 Aug 22 '14 at 10:46
• "There are plenty of tutorials, documentation and blog articles about virtually any topic one can think of" I agree, but note that stack overflow does quite a good job of "indexing" them. – Att Righ Nov 27 '17 at 8:54
As a newbie here - and to programming in general - I do respect the need to do as much as you can before posting a question.
However, depending on your experience, you may reach a point of "I'm well and truly stuck" before an expert would. I sometimes tutor in math - and what's obvious to me sometimes isn't to someone else. That's not often a lack of effort on their part - it's that the resources they've used aren't speaking to them in a way they understand. Sometimes you have to take time to walk someone through things in more than one way.
It's a matter of opinion, I guess, but I like being able to help someone who has tried something, but just can't quite get it or is missing something. It's part of learning - asking questions (other than "do this for me") is good.
If this were strictly a professional site, then I could see wanting to turf the amateur questions. But coming as an amateur, I like the idea of my questions being addressed, in order to help me learn.
• I think the point isn't that beginner questions are a problem. It's that question without effort of the person asking are. I'll upvote a question that has good research and a nice little example, even if it's just due to misreading the manual. Heck, I've asked such questions myself... – Robert Aug 25 '18 at 16:22
How much research should you do? Well, we can never known how much research you have actually done, because we have not been observing you, and we can not evaluate the truth of a claim by you that you have "searched for ages".
And the truth is, we don't really care how much research you do. Because this site is not about you specifically and not about you doing work. The site is for professional and enthusiast programmers and intends to build a useful repository of high quality questions and answers by having experts answer questions for free. If someone criticises your question because "you have not done enough research", what they really mean is that the you have not done the research expected of a professional or enthusiast and that would result in a useful high-quality question. So instead, consider how a lack of research can be incompatible with that purpose.
|
2021-04-21 13:34:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.362258642911911, "perplexity": 674.2957271105539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039544239.84/warc/CC-MAIN-20210421130234-20210421160234-00501.warc.gz"}
|
https://aviation.stackexchange.com/questions/46633/how-does-gross-weight-affect-the-minimum-controllable-airspeed-vmc
|
# How does gross weight affect the Minimum Controllable Airspeed (Vmc)?
I am about to get my MEL added to my CFI, and I cannot get my head around the multiple explanations for how gross weight affects Vmc.
The accepted answer is that The higher the gross weight, the lower is the Vmc. I have been proffered two contradictory explanations, from both internet articles, manufacturer documents, and other CFIs, both of which seem to me (I have an MSME masters degree in Aero) to be flawed.
1. That because the aircraft is slightly banked into the direction of the good engine, the horizontal component of Lift (HCL) opposes the yaw from the rudder, thus allowing the aircraft to slow to a lower airspeed before full rudder authority is required to counteract the yaw from the operating engine. This is wrong, because if you're in a bank, the HCL (by definition, it is HORIZONTAL!), is no longer lined up with the lateral axis of the aircraft. It is misaligned by exactly the bank angle, so, although there is a component of the HCL opposing the yaw force from the rudder, there is an exactly equal and compensating component of the Vertical Component of Lift (VCL) that will be augmenting the Yaw force. This argument is bogus.
I mean if you think about it simpler, the Lift vector is always, ALWAYS, perpendicular to the wings, so that any effect of Lift along the lateral axis (parallel with yaw forces), must necessarily be zero.
The fact that is not aligned with the earth horizon is irrelevant.
1. The other rationale I have been presented with is that as the gross weight increases, the resistance to motion (Engineers would call this rotational inertia) increases, and makes the aircraft more stable. This is true, but it is an argument about dynamic stability, not about static stability. In other words, this affects the aircrafts resistance to changes in yaw/sideslip angle. Vmc is about static stability, i.e., at what airspeed does the aircraft, in a static, (unchanging) zero sideslip angle, require full rudder defection to hold the aircraft (STATICALLY) at that zero sideslip angle against all the yaw forces being produced by all the factors resulting from Asymmetric thrust?
So this argument or rationale also seems to be incorrect to me.
Where is the flaw in this reasoning?
By the way, the only effect of Gross weight on Vmc that I can think of is the obvious one, The higher the gross weight, the greater the Angle of Attack required to hold 1 G (Level flight), and obviously, both P-Factor and adverse yaw increases with AOA. So the higher the Gross weight, the greater the P-Factor, and the greater the yaw-induced aerodynamic effects of the asymmetric thrust. So, if this logic is correct, Higher Gross weight means Higher Vmc, not Lower.
I think your objection to #2 is exactly correct, and your #1 is where the misunderstanding may lie.
The point of VMCA is essentially to maintain a constant heading, and when you have no more rudder authority to make that happen, you're there -- and any slower, with slightly less rudder authority, your heading starts to change in the direction of the dead engine. With no bank, it's all rudder; the lift from the wings doesn't tend to move the nose in any direction. With bank, there is now a component of the lift from the wings that does tend to affect heading (which is the dynamic of banking when you turn -- the heading change comes from the lift created by the wings, not from the lift generated by the rudder).
If you are banking into the good engine, that component of lift tends to change your heading toward the good engine and away from the failed engine. Since the asymmetric thrust is working to change your heading toward the failed engine while you hold rudder to oppose that change, you now have the component of lift from the wings that is assisting your rudder.
If you consider a case of symmetric thrust, you can hold a heading with right wing down and left rudder or vice versa; the lift from the wings is working to change your heading to the right, and your rudder is counteracting that drift. At some point, with enough bank, you run out of rudder authority & the airplane will turn, although pretty badly uncoordinated. With an engine out, you CAN fly straight ahead holding bank into the dead engine, you just need a lot of rudder authority (i.e. lots of speed above VMCA) to hold your heading -- the rudder is fighting both the asymmetric thrust and the component of the wings' lift. And doing that, you'll run out of rudder authority a lot sooner (i.e. at a higher airspeed). If you switch to bank into the good engine, the component of lift from the wings is now working with you, and that's the usual case for computing and demonstrating VMCA.
I suspect that the confusion on your point #1 comes by conflating horizontal (earth reference) with horizontal (aircraft reference). Hopefully the explanation above separates things out?
And, to bring things back to gross weight, the higher the weight, the greater the lift from the wings, so a Sin(5 degrees) (I think that's right -- LONG time since I took Trig!) component of "more lift" is a greater force to resist heading change than that same component of "less lift" with a lighter aircraft. The other forces involved in the balance, the asymmetric thrust and the rudder force, are independent of aircraft weight.
Best wishes for your MEL checkride!
• But the vertical component of Lift is acting in the opposite direction. i.e., Say the vertical component of Lift is LCos(5 deg). Then the component of that which is acting along the aircraft lateral axis (parallel to the Yaw force) is LCos(5deg)sin(5deg), in the SAME direction as the Yaw force, so it is augmenting the effect of yaw from the Rudder. The exacerbating effect of the HCL is Lsin(5deg), and the component of that which is aligned with the Yaw force is Lsin(5deg) cos(5deg), in exactly the opposite direction. They cancel each other out. – Charles Bretana Dec 15 '17 at 19:13
• By the way, as I understand it, the reason we are in a bank is because although we need the rudder to attain zero sideslip (to minimize drag and maximize excess thrust), this generates a sideways yaw force, which, uncorrected, will cause the aircraft to turn. We put the aircraft in a bank to counteract this turning force and keep a constant heading. – Charles Bretana Dec 15 '17 at 19:37
• @CharlesBretana Let's say your left engine is dead. Aircraft wants to yaw left. Rudder is displaced right, pushes tail left, and nose right. You're banking right (into the good engine). The lift from the wings is: all vertical (aircraft reference), mostly vertical(earth reference), some right (earth reference). Which tends to track the nose to the right -- assisting the rudder. The rudder is "pushing (the tail) left" while the wings are "pushing (the nose) right" but they're both pushing the heading in the same direction (right). – Ralph J Dec 15 '17 at 20:37
• Also, NO component of lift from the wings ever generates ANY yaw. Those two components are always perpendicular. The lift from the wings can track the nose, but it doesn't generate yaw. When you talk about a component of lift acting parallel to the yaw force, that's not quite accurate. Your "wing-aligned component of VCL" isn't right -- the LIFT vector has ZERO wing-aligned component. It has a horizontal component in the earth-frame-of-reference, but not in the aircraft frame of reference. Your red & purple arrows will cancel each other out; the blue HCL arrow remains & tracks the nose. – Ralph J Dec 15 '17 at 20:48
• Right - the lift vector CANNOT generate Yaw. That's my whole point here. No matter how much bank you have, ALL Lift is perpendicular to the Yaw vector. What I am describing is just Math to address the argument that the Horizontal component of Lift (HCL) somehow augments and exacerbates the yaw from the rudder, which is the essence of this argument. It can't, because if you want to actually look at it this way, then you also have to look at the effect of the Vertical component of Lift, which (because of the argument you make), must exactly oppose and compensate, for the effect of the HCL. – Charles Bretana Dec 15 '17 at 22:45
The flaw in the reasoning under 1. is that you should not show horizontal and vertical components of lift, but of gravity. You're considering the aircraft reference frame, and should look at gravity alignment with the aircraft axes. This is proportional to sin $\Phi$ and to mass.
What you were doing was breaking the lift vector into earth axis components, then re-breaking the resulting components into aircraft axis components where they already were.
Well, let’s consider the problem in detail.
Assumptions:
the aircraft is a light twin under 6,000lbs GTOW powered by reciprocating engines.
Engines and propellers are of a tractor configuration, mounted on pylons forward of the wing leading edges. Engines are not geared to counter rotate and turn clockwise from the pilot’s perspective; left engine is the critical engine.
The center of lift is aft of the center of gravity for the entire approved CG Range.
Factors affecting Vmc is the position of the CG as this affects the length of the moment arm for the rudder to counteract the vertical moment caused by the asymmetric thrust loading as a consequence of a critical engine failure. Aft most CG is the worst case.
Asymmetric thrust moment will be a sum of the moments caused by the position of the line of thrust (engine crankshaft) relative to the lateral position of the aircraft CG (airplane centerline) and the thrust aysymmetry caused by P Factor over the propeller disc. Worst case is right at takeoff power at critical AOA.
The horizontal component of lift HCL is used to counteract the lateral drift resulting from the yawing force of the rudder. Regulations and good operating practices limit the amount of bank used to generate this HCL to 5° in order not to diminish the climbing characteristics at low speed with a OEI condition.
Since the airplane will reach the critical AOA at a higher indicated airspeed when at GTOW than at a lower gross weight, it will stall at faster airspeeds than a lightly loaded aircraft would. But since the force and the corresponding counter moment that the rudder can generate are greater for a higher airspeed than a lower one, the aircraft may stall at or below the minimum control speed in this case. At a lower loading the aircraft will stall at a slower airspeed, but the rudder will have a diminished effectiveness here. This may result in a case where Vmc may be reached at or above the stall speed for the aircraft. The wings as well are generating greater lift for an AOA at higher speeds, so the HCL can more effectively counter the rudder loads at higher airspeeds than at lower ones without exceeding the 5° bank limit for this.
In truth, there is no one set Minimum Control Speed; it varies based upon a number of factors such as aircraft type and configuration, density altitude, gross weight, etc. The value of Red Line is based upon certain configurations during flight testing. In practice, minimum control speed will be identified when you encounter any of these three conditions with OEI and the propeller windmilling:
• Loss of directional (yaw) control.
• Maximum rudder deflection in the direction of the operating engine.
• Buffet just prior to aerodynamic stall.
Recognizing this condition and being able to articulate that to a student pilot next to you is what an examiner is going to want you to demonstrate on your checkride and is probably going to be the limit of the knowledge they expect about Vmc.
• Gravity has no aerodynamic effect on an aircraft. Vmc is an aerodynamic effect. Gravity has no aerodynamic effect on an aircraft. Only aerodynamic effects can affect Vmc. – Charles Bretana Dec 17 '17 at 3:02
• Secondly, the rudder is not the only aerodynamic surface creating lateral lift. The fuselage can create lateral lift. That's why you need a right bank to counteract the left turn induced by Left sideslip when landing in a crab in a right crosswind. Left rudder does create right lift, but it puts the fuselage into a left sideslip (to line up with the runway), and the left sideslip creates a larger lift to the left. With left engine out, right rudder only brings the fuselage back to zero sideslip, - the only lateral lift is from the rudder, to the left. So the right bank counteracts that. – Charles Bretana Dec 17 '17 at 3:10
• And my point in breaking the lift into components twice is just to show that this technique is indeed, pointless. The Lift is always perpendicular to the lateral axis of the aircraft, so Breaking it up into Horizontal and vertical components (relative to the horizon), is meaningless, since these are not aligned with the aircraft. And breaking those up into aircraft aligned components, just reverses the process and shows that there is zero effect along the lateral axis from any amount of lift, no matter the gross weight. – Charles Bretana Dec 17 '17 at 3:16
• Charles, you're getting tangled up here with your information and it's causing you to miss the real facts. The HCL is necessary in this problem as it is the force causing the aircraft to 1) turn or 2) counter the lateral lift force from the vertical fin and rudder during to prevent lateral drift of the aircraft. You use right rudder in the event of a left engine failure to counter the torque generated by the asymmetrical thrust (do a sum of moments about the CG with an OEI and this becomes clear). – Carlo Felicione Dec 17 '17 at 9:49
• Carlo,I'm sorry, but I don't believe I am. The HCL, as defined, is the component of Lift parallel with the Horizon, due to the bank, and yes, it, alone, would cause the aircraft to turn to the right. I fully understand that. But The aircraft is NOT turning right, so something else must be causing it to remain on a constant heading. That something is the leftwards Lift generated by the right rudder. That's why we establish the right bank - to stop the left turn that the rudder would induce. But what does this have to do with Vmc??!! – Charles Bretana Dec 17 '17 at 16:22
It's fun to listen to engineers talk about flying airplanes. I know this is a little late for the answer but here we go. It's not the lift, It's the weight. A plane in flight balances at the center of lift, not the center of gravity. As the CG is normally forward of the CL the bank causes the nose to yaw toward Gravity. The more weight vectored toward the live engine with bank the less the rudder has to do. The bank angle is limited because of other considerations. 90* of bank would vector all the weight to oppose yaw. This is the same reason that excess thrust, not more lift, is the force that causes the constant speed climb. Anything that causes less lift to have to be generated by the vertical tail will cause the speed needed to produce that lift to be lower. Changing the weight vector or increasing the vertical tail moment by moving the CG forward will decrease the total lift required to counter the yaw.
• Just as a note, answering older questions is fine on Stack Exchange, as long as you're able to add something that hasn't already been said in other answers and you make sure to actually answer the question that is being asked rather than simply providing commentary on existing answers. – a CVn Feb 7 '18 at 20:12
• @user28842, Aircraft balance aerodynamically about the Center of Lift, they "rotate", inertially, about the CG. Second, I believe that when a body is airborne, the force of gravity is a fictitious force, only necessary in calculations because we do our calculations in an accelerated frame of reference. The only forces that need be considered for this issue are the aerodynamic forces, and the asymmetric thrust of the one good engine. – Charles Bretana Feb 8 '18 at 1:59
• @CharlesBretana “Gravity is a fictitious force” in flight? Some force needs to balance Lift, since the aircraft is often maintaining an altitude rather than accellerating upward at (Lift/Mass) ft/sec/sec for the entire flight... – Ralph J Feb 8 '18 at 4:05
• @RalphJ, Actually no, its is just like Centrifugal "force", in that it is a fiction, a mathematical construct only required to make the calculations correct when using an accelerated frame of reference. Imagine doing the same calculations in outer space, with no gravity, but inside of a giant box that was accelerating under one "G" of thrust. You would be of course have to include one "G" of gravity to get the answer to be correct relative to the frame of reference of the box. – Charles Bretana Feb 8 '18 at 12:09
• This is same idea as Einstein's famous elevator experiment, thegreatcoursesdaily.com/einsteins-experimental-elevator where he compared being in outer space, in free fall, and being in a free-falling elevator... They are not similar, they are the same, because they are both in a zero-G or non-accelerating frame of reference, – Charles Bretana Feb 8 '18 at 12:11
|
2019-07-19 06:50:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6456406712532043, "perplexity": 1283.652138946627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526064.11/warc/CC-MAIN-20190719053856-20190719075856-00285.warc.gz"}
|
https://newbedev.com/why-do-we-not-observe-a-greater-casimir-force-than-we-do
|
# Why do we not observe a greater Casimir force than we do?
The answer by G.Smith is correct and concise. For whatever it's worth, I'll give a longer answer.
Sometimes authors use terms like zero point energy or vacuum energy for marketing purposes, because it sounds exotic. But sometimes authors use those terms for a different reason: they're describing a shortcut for doing what would otherwise be a more difficult calculation.
Calculations of the Casimir effect typically use a shortcut in which material plates (which would be made of some complicated arrangement of electrons and nuclei) are replaced with idealized boundary conditions on space itself. In that shortcut, the "force" between the boundaries of space is defined in terms of $$dE/dx$$, where $$E$$ is the energy of the ground state (with the given boundary conditions) and $$x$$ is the distance between the boundaries. This is a standard shortcut for calculating the force between two nearly-static objects: calculate the lowest-energy configuration as a function of the distance between them, and then take the derivative of that lowest energy with respect to the distance. When we idealize the material plates as boundaries of space, the lowest-energy configuration is called the vacuum, hence the vacuum energy language.
The important point is that this is only a shortcut for the calculation that we wish we could do, namely one that explicitly includes the molecules that make up material plates, with all of the complicated time-dependent interactions between those molecules. The only known long-range interactions are the electromagnetic interaction and the gravitational interaction, and gravity is extremely weak, so that leaves electromagnetism.
What about all of the other quantum fields in the standard model(s)? Why don't they also contribute to the Casimir effect? Well, they would if we really were dealing with the force between two movable boundaries of space itself, because then the same boundary conditions would apply to all of the fields. But again, the boundaries-of-space thing is just an idealization of plates made of matter, so the only relevant fields are the ones that mediate macroscopic interactions between matter.
Okay, but isn't the usual formula for the Casimir effect independent of the strength of the interaction? Not really. That's another artifact of the idealization. The paper https://arxiv.org/abs/hep-th/0503158 says it like this:
The Casimir force (per unit area) between parallel plates... the standard result [which I called the shortcut], which appears to be independent of [the fine structure constant] $$\alpha$$, corresponds to the $$\alpha\to\infty$$ limit. ... The Casimir force is simply the (relativistic, retarded) van der Waals force between the metal plates.
For perspective, the electromagnetic Casimir effect typically refers to an attractive interaction between closely-spaced plates, and van der Waals force typically refers to an attractive interaction between neutral molecules, but they're basically the same thing: interactions between objects, mediated by the (quantum) electromagnetic field. The related post Van der Waals and Casimir forces emphasizes the same point.
Metal plates impose a boundary condition on the electromagnetic field, because metal is made of charged particles which interact with an electromagnetic field. But those metal plates do not impose a boundary condition on the Higgs field, which extends through conductors.
|
2023-02-01 10:09:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8043219447135925, "perplexity": 350.10525222976685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499919.70/warc/CC-MAIN-20230201081311-20230201111311-00625.warc.gz"}
|
https://www.technicalfeeder.com/2022/04/rainbow-colored-keyboard-that-makes-typing-fun/
|
# Rainbow colored keyboard that makes typing fun
The keyboard is one of the devices that we use every day for both private and working time. It is the second-longest used device after monitors. After I bought my keyboard, I walked around my desk to check which type of keyboard is used by other people but many colleagues use a normal keyboard that is accompanied by a PC. Many people do not feel that keyboards are important.
A professional should have professional devices for the work. Musicians have expensive instruments. No musician plays with starter kits on a stage. Some might use starter kits for performance but not for their normal play.
## Switch type
There are some switch types. Each type has its own character and it’s up to you which one to choose. It is best to type the keys but if you don’t have the opportunity, you should at least know the differences.
### Blue
I think I’ve used this switch type 10 years ago. This is the most clicking switch and heavy. It is fun to type the keys but it chooses the place to use because the sound is the loudest in the switch types. I think the people around you claim to you if you use this keyboard in the office.
If you use this type in your own room or place where it is not silent, no one claims.
### Red
This is the lightest switch type. It is not so clicking because it’s a light key. The sound is relatively quiet. Of course, it’s not silent but it is acceptable. I think no one claims to you if you use this in the office.
I think the hands and fingers won’t be tired with the switches.
### Brown
I’ve never used this type but this is between the Blue and Red types. If you want to have clicking and the key weight, this might be the best fit for you.
### HyperX switch type
You can check the details of HyperX switch type on the official site. If you want to know more, check the following resource.
Was sind HyperX Switches? - Nur die beste Taste für dich | HyperX
HyperX Switches gibt es in vielen unterschiedlichen Versionen. Wähle einfach die Farbe, die dir am besten gefällt. Erfahre den Unterschied zwischen den roten, b...
## Alloy Origin Core Review
Package looks as follows. 2 Years warranty.
The backside
There are three switch types for this model. I chose the Red switch because I don’t want to have a loud keyboard.
Open!!
The color is all black on the surface.
The switches can be seen from the side.
### Lighting Rainbow color
If your work is boring, you need to somehow make it fun. I recommend this keyboard to you.
As you can see, the keyboard has a lighting feature! The color is rainbow and its color flows from left to right side. Look at the second picture.
The color layout is different from the one above.
If this keyboard is on your work desk in the office, it stands out from others. Your colleagues definitely speak to you. This feature makes typing fun.
I think the looks are important to choosing a keyboard. If we have attractive devices, we can probably work in a good mood because using those devices is fun.
### USB type-c connection to the device
I first thought it was better to have a Bluetooth connection because a wire is not needed and the desk is tidier. However, I reconsider it. It might not send some key types or have some latency in some cases. I don’t want to have such cases even if it rarely happens.
If the connection is USB, we don’t have to worry about it. See the picture below.
The hole is for the USB type-c. It can be disconnected and thus, it is portable. If the cable is not detachable, we have to buy another device if the cable is broken but we can easily replace the USB cable if it’s broken.
The USB cable is in the package and the length is about 170cm. I’m 170cm and the cable length was about left hand to right hand.
The USB is type-c for one side but another side is USB type-A.
The height is adjustable depending on your preference. The possible angles are 3/7/11 degrees.
I prefer the highest setting because the distance to the upper keys becomes a little bit shorter than the lower height settings.
### Relatively quiet sound
The typing sound is relatively quiet in mechanical keyboards. I use the keyboard in the early morning while my wife is sleeping but she hasn’t said it’s loud. Of course, the door is closed.
The frequencies are not so high. Therefore, the sound is not keen to the ears.
### Light key-touch and the key layout
The key touch is light. The keys hit smoothly and it makes typing fun. I feel less finger fatigue even after long hours of use.
Look at the picture. The key height is well considered.
To type numbers and Function keys, fingers need to be stretched because the keys are a bit distant. To reduce the burden, the height of the keys at the top of the keyboard is slightly high. It looks really slight difference but the maker considers these small things as well!! It is a great thing, isn’t it?
My laptop keyboard is a Japanese key layout but this is a US layout. I’ve used the US keyboard since I came to Germany and I feel it’s easier to write code with the US keyboard because often used keys are in good positions and unnecessary keys don’t exist on the US keyboard. Especially, I like the positions of double quotations and quotations.
US keyboard
Japanese keyboard
We have to press shift + 2 or shift + 7. The marks are often used for string, so the US keyboard is easier to type.
## Overall
I’m very satisfied with this keyboard. The looks and the typing experience are both great. If you are looking for a mechanical keyboard to improve your typing experience, I recommend this keyboard.
HyperX Alloy Origins Core Tenkeyless Red Switches
|
2022-05-21 15:38:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21780066192150116, "perplexity": 1810.8639216422282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539131.21/warc/CC-MAIN-20220521143241-20220521173241-00583.warc.gz"}
|
https://zbmath.org/?q=an:0846.17016
|
# zbMATH — the first resource for mathematics
Yang-Baxter operators and noncommutative de Rham complexes. (English. Russian original) Zbl 0846.17016
Russ. Acad. Sci., Izv., Math. 44, No. 2, 315-338 (1995); translation from Izv. Ross. Akad. Nauk, Ser. Mat. 58, No. 2, 108-131 (1994).
In the first part the author introduces several notions following Manin’s approach to quantum groups and spaces. A quantum matrix semigroup (over a field $${\mathbf k}$$) is a bialgebra $$M={\mathbf k}\langle Z\rangle/ (I_M)$$ generated by entries of an $$n\times n$$ matrix $$Z$$, with the usual comultiplication and counit. It is called a weak $$R$$-matrix semigroup if $$I_M= RZ\circ Z- Z\circ ZR$$, with $$R:{\mathbf k}^{n^2}\to {\mathbf k}^{n^2}$$ and $$(Z\circ Z)^{kl}_{ij}= z^k_i z^l_j$$. This is a strong $$R$$-matrix semigroup if $$R$$ verifies the Yang-Baxter equation. One relates to $$M$$ a matrix algebra $$R_{\text{alg}} (M)$$ formed by matrices $$S$$ such that $$SZ\circ Z- Z\circ ZS=0$$ in $$M$$. A quantum space $$A$$ is a (left) comodule over $$M$$ and to an $$s$$-tuple $$A_1, \dots, A_s$$ one relates a universal quantum semigroup $$M(A_1, \dots, A_s)$$. Mutual relations between all these notions are studied in detail.
These results are applied in the second part of the paper where noncommutative de Rham complexes are defined axiomatically. For a couple of $$n^2 \times n^2$$ matrices $${\mathcal A}$$ and $${\mathcal B}$$ and for two $$n$$-vectors of generators $$x$$ and $$\xi$$, with $$\xi_i= dx_i$$, one sets $$A= {\mathbf k}\langle x\rangle/ (x\circ x- {\mathcal A}x\circ x)$$, $$B= {\mathbf k}\langle \xi\rangle/( \xi\circ \xi- {\mathcal B}\xi\circ \xi)$$. De Rham complex $$\Lambda (A, B)$$ contains $$A$$ and $$B$$ as subalgebras and $$M(A, B)$$ coacts on $$\Lambda (A, B)$$. The conditions on existence and the classification are described. The final part is devoted to differential calculus on quantum semigroups and, first of all, to working out some basic examples.
##### MSC:
17B37 Quantum groups (quantized enveloping algebras) and related deformations 58A12 de Rham theory in global analysis 16W25 Derivations, actions of Lie algebras 81R50 Quantum groups and related algebraic methods applied to problems in quantum theory
Full Text:
|
2021-12-03 03:49:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.546612560749054, "perplexity": 390.3263229630562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362589.37/warc/CC-MAIN-20211203030522-20211203060522-00486.warc.gz"}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.