The thread former is a tool for the non-cutting production of internal threads. It is used in much the same way as a tap. The difference is that a tap, as the name suggests, cuts the thread, so chips are produced during the process, whereas the thread former creates the thread by deforming the material without cutting.
Advantages:
• No chips
• Longer tool life than a tap
• Higher speed, therefore higher processing speed
• Smooth material surface after processing
• The material fibre flow is not interrupted
• High precision possible.
Disadvantages:
• Higher demands on the drill hole tolerances
• Higher heat generation than during cutting
• Not usable with many materials
• Hardly usable as a hand tool
• A lubricant (release agent) must frequently be used.
Core hole chart for thread former
| Metric thread size | Thread former core hole diameter (mm) | Pitch, coarse thread (mm) |
| --- | --- | --- |
| M1 | 0,88 | 0,25 |
| M1,1 | 0,98 | 0,25 |
| M1,2 | 1,08 | 0,25 |
| M1,4 | 1,25 | 0,30 |
| M1,6 | 1,45 | 0,35 |
| M1,8 | 1,65 | 0,35 |
| M2 | 1,80 | 0,40 |
| M2,2 | 2,00 | 0,45 |
| M2,5 | 2,30 | 0,45 |
| M3 | 2,80 | 0,50 |
| M3,5 | 3,25 | 0,60 |
| M4 | 3,70 | 0,70 |
| M5 | 4,65 | 0,80 |
| M6 | 5,55 | 1,00 |
| M7 | 6,55 | 1,00 |
| M8 | 7,40 | 1,25 |
| M9 | 8,40 | 1,25 |
| M10 | 9,25 | 1,50 |
| M12 | 11,20 | 1,75 |
| M14 | 13,10 | 2,00 |
| M16 | 15,10 | 2,00 |
| M18 | 16,90 | 2,50 |
| M20 | 18,90 | 2,50 |
* All information without guarantee.
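For use in scripts, the chart can also be wrapped in a small lookup helper. This is only a convenience sketch built from the values above (a subset of sizes, written with decimal points instead of commas); the names are arbitrary, and the rule-of-thumb comparison in the demo loop (nominal diameter minus roughly half the pitch, which the chart values approximately follow) is no substitute for the chart or the tool manufacturer's data.

```python
# Core hole diameters for thread forming, taken from the chart above (values in mm).
# Only a subset of sizes is listed here; extend as needed.
FORMING_CORE_HOLE = {
    "M3": (2.80, 0.50),
    "M4": (3.70, 0.70),
    "M5": (4.65, 0.80),
    "M6": (5.55, 1.00),
    "M8": (7.40, 1.25),
    "M10": (9.25, 1.50),
    "M12": (11.20, 1.75),
}

def core_hole(size: str) -> float:
    """Return the chart's core hole diameter (mm) for a metric thread former."""
    diameter, _pitch = FORMING_CORE_HOLE[size]
    return diameter

if __name__ == "__main__":
    for size, (hole, pitch) in FORMING_CORE_HOLE.items():
        nominal = float(size[1:])
        # Rough rule of thumb for forming: nominal diameter minus about half the pitch.
        approx = nominal - 0.5 * pitch
        print(f"{size}: chart {hole} mm, rule of thumb ~{approx:.2f} mm")
```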
# Mapping torus of an inverse
Let $f\colon X\to X$ be a homeomorphism and $M_f$ be the mapping torus of $f$. That is, $M_f=X\times [0,1]/(x,0)\sim (f(x),1)$. Is $M_f$ homeomorphic to $M_{f^{-1}}$? (Since $f$ is a homeomorphism, $f^{-1}$ is also a homeomorphism.)
Yes, they are homeomorphic. The homeomorphism is induced by the map $X \times [0,1] \to X \times [0,1]$ defined by $(x,t) \mapsto (x,1-t)$.
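For completeness, here is the routine check that this map respects the identifications. Write $\Phi\colon X\times[0,1]\to X\times[0,1]$, $\Phi(x,t)=(x,1-t)$. The only identification in $M_f$ is $(x,0)\sim(f(x),1)$, and
$$\Phi(x,0)=(x,1),\qquad \Phi(f(x),1)=(f(x),0).$$
In $M_{f^{-1}}$ we have $(f(x),0)\sim\bigl(f^{-1}(f(x)),1\bigr)=(x,1)$, so $\Phi$ sends identified points to identified points and therefore descends to a continuous map $M_f\to M_{f^{-1}}$. Applying the same construction to $f^{-1}$ (and noting $(f^{-1})^{-1}=f$) gives a continuous inverse, so the induced map is a homeomorphism.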
# If 4x/7y = 1, then what is the value of 3x + 5y ?
Math Expert
Joined: 02 Sep 2009
Posts: 54434
If 4x/7y = 1, then what is the value of 3x + 5y ? [#permalink]
31 Oct 2017, 23:02
If $$\frac{4x}{7y} = 1$$, then what is the value of 3x + 5y ?
(1) x + 4y = 23
(2) y^3 = 64
Current Student
Joined: 18 Aug 2016
Posts: 619
Concentration: Strategy, Technology
GMAT 1: 630 Q47 V29
GMAT 2: 740 Q51 V38
Re: If 4x/7y = 1, then what is the value of 3x + 5y ? [#permalink]
01 Nov 2017, 00:28
Bunuel wrote:
If 4x/7y = 1, then what is the value of 3x + 5y ?
(1) x + 4y = 23
(2) y^3 = 64
x = 7y/4
(1) 7y/4 + 4y = 23
23y = 23*4
y=4
then x = 7
3x +5y = 41..sufficient
(2) y = 4
then x = 7
3x +5y = 41..sufficient
D
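For anyone who wants to double-check with code, both statements can be verified in a few lines of sympy (purely a sanity check, not required for the DS reasoning):

```python
from sympy import symbols, solve

x, y = symbols("x y")
stem = 4 * x / (7 * y) - 1              # 4x/7y = 1, i.e. 4x = 7y

# Statement (1): x + 4y = 23
sol1 = solve([stem, x + 4 * y - 23], [x, y], dict=True)
print([3 * s[x] + 5 * s[y] for s in sol1])          # -> [41]

# Statement (2): y**3 = 64, whose only real root is y = 4
sol2 = solve([stem, y**3 - 64], [x, y], dict=True)
real = [s for s in sol2 if s[y].is_real]
print([3 * s[x] + 5 * s[y] for s in real])          # -> [41]
```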
Senior Manager
Joined: 07 Jul 2012
Posts: 376
Location: India
Concentration: Finance, Accounting
GPA: 3.5
Re: If 4x/7y = 1, then what is the value of 3x + 5y ? [#permalink]
01 Nov 2017, 19:35
$$\frac{4x}{7y}$$=1; x=$$\frac{7y}{4}$$
Put this value in (1)
$$\frac{7y}{4}$$+4y=23
$$\frac{7y+16y}{4}$$=23
y=4, then x=7
3x+5y= 3(7)+5(4)= 41 [Suff]
(2) $$y^3$$=64; y=4, then x=7
3x+5y= 3(7)+5(4)= 41 [Suff]
Manager
Joined: 25 Jul 2017
Posts: 59
Re: If 4x/7y = 1, then what is the value of 3x + 5y ? [#permalink]
01 Nov 2017, 20:43
Since 4x/7y=1 => 4x-7y = 0 (*)
1/ From 1 and from (*), we can find y = 4, x=7 => 3x+5y = 41 => Suff
2/ From 2 => y = 4 => x=7 =>3x+5y = 41 => Suff
Manager
Joined: 11 Feb 2013
Posts: 69
GMAT 1: 490 Q44 V15
GMAT 2: 690 Q47 V38
GPA: 3.05
WE: Analyst (Commercial Banking)
If 4x/7y = 1, then what is the value of 3x + 5y ? [#permalink]
27 Mar 2019, 10:39
In this DS question, there is no need to calculate values.
It's required to keep in mind that the values of two unknown variables can easily be found from two different equations, on the condition that the TWO EQUATIONS ARE NOT THE SAME EQUATION.
How to find out whether two equations are the same or different:
(1) X+2y =7 & (2)7x +14y=49
Here (since 7x is 7 times x), if I multiply both sides of EQUATION 1 by 7, equation 1 becomes exactly the same thing written in equation 2.
(1) X+2y =7 & (2)7x +8y=41
Here (since 7x is 7 times x), if I multiply both sides of EQUATION 1 by 7, equation 1 will BE DIFFERENT FROM equation 2.
Manager
Joined: 11 Feb 2013
Posts: 69
GMAT 1: 490 Q44 V15
GMAT 2: 690 Q47 V38
GPA: 3.05
WE: Analyst (Commercial Banking)
Re: If 4x/7y = 1, then what is the value of 3x + 5y ? [#permalink]
27 Mar 2019, 10:54
STATEMENT 1: SUFFICIENT
Prompt says:
4x=7y
Or, 4x-7y=0....(1)
Statement 1: x+4y=23....(2)
Here, (1) TWO EQUATIONS & TWO VARIABLES
(2) THE TWO EQUATIONS ARE NOT THE SAME EQUATION.
So, I can easily find out the individual values of x and y.
If I have the individual values of x and y, I can easily figure out any expression involving x and y.
Thus, I can easily find out the value of 3x +5y.
STATEMENT 2:
y = 4
[NB: AN ODD POWER DOESN'T CHANGE THE POSITIVE/NEGATIVE NATURE OF THE INPUT (BASE) AND OUTPUT (RESULT)]
Now the prompt (4x=7y) becomes (4x=7*4), "ONE EQUATION WITH ONE UNKNOWN VARIABLE", from which the value of the unknown variable can be found without calculation.
Now, I have the individual values of x and y. So, I can easily figure out any expression involving x and y.
Thus, I can easily find out the value of 3x +5y.
So, THERE IS NO NEED FOR CALCULATION.
with fewer than 100 propositional letters – whose shortest proof in a conventional natural deduction system will be astronomically long. –––, 1970, “The Ultra-Intuitionistic Criticism and the Antitraditional Program for the Foundations of Mathematics,” in A. Kino, J. Myhill, & R. Vesley (eds. If we regard such expressions as tokens rather than types, then it makes sense to consider the task of concretely ‘counting up to’ a number by And in cases where a logic is standardly characterized proof theoretically rather than semantically – i.e. the principle that for all definite properties $$P(x)$$ of natural numbers, we may infer $$\forall x P(x)$$ from $$P(0)$$ and $$\forall x(P(x) \rightarrow P(x+1))$$ – to the predicate $$F(x)$$, we may conclude that $$\forall x F(x)$$ from (i) and (ii). Allender, E., and McCartin, C., 2000, “Basic Complexity,” in R. Downey & D. Hirschfeldt (eds. Constantinos Daskalakis applies the theory of computational complexity to game theory, with consequences in a range of disciplines. CET is now widely accepted within theoretical computer science for reasons which broadly parallel those which are traditionally given in favor of Church’s Thesis. [31] In particular, the amount of work (defined as the sum of the number of steps required by each processor) performed by a machine satisfying this definition will be polynomial in the size of its input. The attempt to develop models of decision which take resource limitations into account is sometimes presented as a counterpoint to the normative models of rational choice which are often employed in economics and political science. On the one hand, Parikh considers what he refers to as the almost consistent theory $$\mathsf{PA}^F$$. Several algorithms have been discovered which can be implemented on such devices which run faster than the best known classical algorithms for the same problem. [29] Supposing that $$\phi$$ is of the form $$\exists x_1 \forall x_2 \ldots Q_n \psi$$, a play of the verification game for $$\phi$$ is defined as follows: A winning strategy for Verifier in the verification game for $$\phi$$ is a sequence of moves and countermoves for all possible replies and counter-replies of Falsifier which is guaranteed to result in a win for Verifier. One common reaction to these observations is to acknowledge that as plausible as the axioms of traditional epistemic logics may appear, they formalize not our everyday understanding of knowledge but rather an idealized or ‘implicit’ notion more closely resembling ‘knowability in principle’ (e.g. one.[14]. [26], A complexity class which is likely to be properly contained in $$\textbf{EXP}$$ but which still contains many apparently infeasible problems which arise in computational practice is $$\textbf{PSPACE}$$. Haken’s proof made use of Cook and Reckhow’s (1979) observation that we may formulate the Pigeon Hole Principle (PHP) – i.e. To see why this is so, observe that (S1) makes clear that strict finitists propose to identify natural numbers with numerals such as the familiar sequence $$0,0',0'',\ldots$$ of unary numerals. 
(Simon 1957; Simon 1972; Rubinstein 1998; Gigerenzer and Selten 2002; Kahneman 2003), 2.1 Church’s Thesis and effective computability, 2.2 The Cobham-Edmond’s Thesis and feasible computability, 3.1 Deterministic and non-deterministic models of computation, 3.2 Complexity classes and the hierarchy theorems, 3.3 Reductions and $$\textbf{NP}$$-completeness, 3.4.1 $$\textbf{NP}$$ and $$\textbf{coNP}$$, 3.4.2 The Polynomial Hierarchy, polynomial space, and exponential time, 3.4.3 Parallel, probabilistic, and quantum complexity. Such characterizations may in turn be understood as describing the properties which a physical system would have to obey in order for it to be concretely implementable. As we have just seen, such assignments are based on the time or space complexity of the most efficient algorithms by which membership in a problem can be decided. One example was a technique known as dynamic programming. and the emergence of Dynamical Systems Theory and… [2] Non-deterministic machines are sometimes described as making undetermined ‘choices’ among different possible successor configurations at various points during their computation. Computability and Complexity Theory (Texts in Computer Science) given two natural numbers $$n$$ and $$m$$, are they relatively prime? algorithms which are guaranteed to find a solution which is within a certain constant factor of optimality. Given a formula $$\phi \in \text{Form}_{\mathcal{L}}$$, is it the case that for all $$\mathcal{A} \in \mathfrak{A}$$, $$\mathcal{A} \models_{\mathcal{L}} \phi$$? To avoid pathologies which would arise were we to define complexity classes for ‘unnatural’ time or space bounds (e.g. Many classical results and important open questions in complexity theory concern the inclusion relationships which hold among these classes. Second, they arise in contexts in which we are interested in solving not only isolated instances of the problem in question, but rather in developing methods which allow it to be efficiently solved on a mass scale – i.e. In other words, the satisfiability problem can model any other problem in NP. ), Gödel, Kurt, 1956, “Letter to von Neumann (1956),” Reprinted in. ), Artemov, S., and Kuznets, R., 2014, “Logical Omniscience as infeasibility,”, Baker, T., Gill, J., and Solovay, S., 1975, “Relativizations of the $$\textbf{P} = \textbf{NP}?$$ question,”. The corresponding positive hypothesis that possession of a polynomial time decision algorithm should be regarded as sufficient grounds for regarding a problem as feasibly decidable was first put forth by Cobham (1965) and Edmonds (1965a). Such a requirement can in turn be understood as reflecting the fact that classical physics does not allow for the possibility of action at a distance. The Turing machine will take this problem, modeled as a language, and feed the input to the problem. Table 1. A PRAM machine $$P$$ consists of a sequence of programs $$\Pi_1,\ldots,\Pi_{q(n)}$$, each comprised by a finite sequence of instructions for performing operations on registers as before. These relationships are similar to those which obtain between the analogously defined $$\Sigma^0_n$$- and $$\Pi^0_n$$-sets in the Arithmetic Hierarchy studied in computability theory (see, e.g., Rogers 1987). 
To this end, we first show that the problem $$\sc{TQBF}$$ may be redefined in terms of a game between a player called Verifier– who can be thought of as attempting to demonstrate that a QBF-formula $$\phi$$ is true – and another player called Falsifier– who can be thought of as attempting to demonstrate that $$\phi$$ is not true. Another notion of complexity is studied in descriptive complexity theory (e.g., Immerman 1999). There are non-classical models of computation which are hypothesized to yield a different classification of problems with respect to the appropriate definition of ‘polynomial time computability’. The completeness of $$X$$ for $$\textbf{C}$$ may thus be understood as demonstrating that $$X$$ is representative of the most difficult problems in $$\textbf{C}$$. 1995) is that the most defensible choices of logics of knowledge lie between the modal systems $$\textsf{S4}$$ and $$\textsf{S5}$$. $$\mathsf{PA}$$) without knowing all of its theorems (e.g. $$\sc{SAT}\$$ Given a formula $$\phi$$ of propositional logic, does there exist a satisfying assignment for $$\phi$$? A similar reference on $$\textbf{P}$$-completeness is (Greenlaw, Hoover, and Ruzzo 1995). [19] The other examples just cited are taken from a list of 21 problems (most of which had previously been identified in other contexts) which were shown by Karp (1972) to be $$\textbf{NP}$$-complete. For instance, in order to employ the Cobham-Edmond’s Thesis to judge whether a problem $$X$$ is feasibly decidable, we consider the order of growth of the time complexity $$t(n)$$ of the most efficient algorithm for deciding $$X$$. Impagliazzo 1995, Arora and Barak 2009, and Fortnow 2013), Bernays (1935), van Dantzig (1955), and Wang (1958). Recall that the theory $$\text{I}\Delta_0$$ introduced in Recall that a complexity class is a set of languages all of which can be decided within a given time or space complexity bound $$t(n)$$ or $$s(n)$$ with respect to a fixed model of computation. For example, larger instances may require more time to solve. $$\textsf{BASIC}$$ contains 32 axioms, which include those of $$\mathsf{Q}$$, as well as others which axiomatize the intended interpretations of $$\lvert \dots\rvert$$, $$\lfloor \frac{x}{2} \rfloor$$ and $$\#$$ (see, e.g., Hájek and Pudlák 1998, 321–22). long. By the completeness theorem for first-order logic, there thus exists a model $$\mathcal{M}$$ for the language of bounded of arithmetic such that $$\mathcal{M} \models \mathsf{S}^1_2 + \exists y \neg \exists z \varepsilon(2,y,z)$$. Such a problem corresponds to a set X in which we wish to decide membership. $$\mathfrak{M}$$ is assumed to be a reasonable model in the sense that it accurately reflects the computational costs of carrying out the sorts of informally specified algorithms which are encountered in mathematical practice. 
Troelstra, A., and Schwichtenberg, H., 2000, Turing, A., 1937, “On computable numbers, with an application to the, Tversky, A., and Kahneman, D., 1974, “Judgment Under Uncertainty: Heuristics and Biases,”, Urquhart, A., 1984, “The Undecidability of Entailment and Relevant Implication,”, Van Bendegem, J., 2012, “A Defense of Strict Finitism,”, Van Dantzig, D., 1955, “Is $$10^{10^{10}}$$ a finite number?”, Van Den Dries, L., and Levitz, H., 1984, “On Skolem’s exponential functions below $$2^{2^x}$$,”, Vardi, M., 1982, “The Complexity of Relational Query Languages,” in, Vardi, M., and Stockmeyer, L., 1985, “Improved Upper and Lower Bounds for Modal Logics of Programs,” in, Visser, Albert, 2009, “The predicative Frege hierarchy,”. On this basis CT is also sometimes understood as making a prediction about which functions are physically computable – i.e. $$X \subseteq \{0,1\}^*$$. After a number of preliminary results in the 19th and 20th centuries, the problem $$\sc{PRIMES}$$ was shown in 2004 to possess a so-called polynomial time decision algorithm – i.e. Fagin, R., 1974, “Generalized First-Order Spectra and Polynomial-Time Recognizable Sets,” in R. Karp (ed. Note, however, that since there are $$2^n$$ functions in $$S_{\phi}$$, this yields only an exponential time decision algorithm. Note, however, that in order to show that $$\phi \in \sc{VALID}$$ requires that we show that $$\phi$$ is true with respect to all valuations. Since $$\leq_P$$ is transitive, composing polynomial time reductions together provides another means of showing that various problems are $$\textbf{NP}$$-complete.
## complexity theory computer science
## »Day 6: Chronal Coordinates«
The sixth day lets us build a »Voronoi diagram« under the Manhattan distance, where the seeds are given.
# Part 1
To crack the problem, we need to figure out which cells in the diagram will turn out to be infinite. For this, we use two ingredients:
• The function closest calculates the seed closest to a given point, returning 0 in case of a tie and the index of the closest seed (starting at 1) otherwise.
• Manually hardcoding the size of a “sufficiently large” bounding box of all seeds, we observe that the infinite cells correspond precisely to those that claim at least one point on the border of this bounding box.
Finally, we need to calculate the size of the largest finite cell.
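A minimal Python sketch of this approach (the function name closest and the hard-coded box size are illustrative, not necessarily the exact code of the original solution):

```python
from collections import Counter

def closest(x, y, seeds):
    """1-based index of the unique closest seed under the Manhattan distance, or 0 on a tie."""
    dists = [abs(x - sx) + abs(y - sy) for sx, sy in seeds]
    best = min(dists)
    return dists.index(best) + 1 if dists.count(best) == 1 else 0

def largest_finite_cell(seeds, size=400):
    # Cells owning a point on the border of a sufficiently large box are exactly the infinite ones.
    infinite = {closest(x, y, seeds) for x in range(size) for y in (0, size - 1)} | \
               {closest(x, y, seeds) for y in range(size) for x in (0, size - 1)}
    counts = Counter(closest(x, y, seeds) for x in range(size) for y in range(size))
    return max(v for k, v in counts.items() if k != 0 and k not in infinite)
```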
# Part 2
The second part asks us to calculate the number of grid coordinates with a total sum of distances from all of the seeds less than 10000. All we need to do here is to find the size of another, for this problem “sufficiently large”, bounding box. Next, a simple brute-force iteration suffices.
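The corresponding brute-force loop might look like this (again with an illustrative hard-coded box size):

```python
def safe_region_size(seeds, size=400, limit=10000):
    # Count grid points whose summed Manhattan distance to all seeds stays below the limit.
    return sum(
        1
        for x in range(size)
        for y in range(size)
        if sum(abs(x - sx) + abs(y - sy) for sx, sy in seeds) < limit
    )
```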
## »Day 7: The sum of its parts«
In the seventh day, we are given a list of instructions/tasks, together with constraints saying that some tasks must be performed before others. We will be interested in “optimal” ways to perform the tasks.
# Preprocessing
To simplify some cumbersome operations with the graph, we will use some functions from the networkx library.
# Part 1
Given a graph of dependencies, the first part asks us to find the lexicographically smallest topological ordering of this graph. This is something that networkx already has a function for.
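With networkx this is essentially a one-liner; a sketch, assuming the dependency edges have already been read into a DiGraph (the example edges below are the ones from the puzzle statement):

```python
import networkx as nx

# Edges point from a prerequisite task to the task that depends on it.
G = nx.DiGraph([("C", "A"), ("C", "F"), ("A", "B"), ("A", "D"),
                ("B", "E"), ("D", "E"), ("F", "E")])

order = "".join(nx.lexicographical_topological_sort(G))
print(order)  # -> CABDFE
```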
# Part 2
In the second part, we are given additional information: for each task, we now know the amount of time it takes. Our workforce has now been increased to 5 workers and we are interested in knowing the smallest possible time in which our workers can complete all of the tasks. We opt for a greedy approach. At each iteration we either:
• see that some worker is free, in which case we assign to him the least-time consuming task from all tasks which are available at the moment.
• see that all workers are busy, in which case we can fast-forward the time until some of the workers finishes his task. We then mark the task as done and free the worker.
We also note that the first part might have been solved by setting nworkers = 1 and printing the tasks as they get assigned to the single worker.
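A compact sketch of that greedy loop (the 60-seconds-plus-letter-index task durations are an assumption taken from the puzzle, and G is the dependency DiGraph from above):

```python
def schedule(G, nworkers=5, base=60):
    duration = lambda task: base + ord(task) - ord("A") + 1
    done, in_progress, time = set(), {}, 0          # in_progress maps task -> finish time
    while len(done) < len(G):
        # Tasks whose prerequisites are all done and that nobody has started yet.
        ready = sorted(t for t in G
                       if t not in done and t not in in_progress
                       and all(p in done for p in G.predecessors(t)))
        # Hand out tasks while there are free workers.
        while ready and len(in_progress) < nworkers:
            task = ready.pop(0)
            in_progress[task] = time + duration(task)
        # Fast-forward to the next worker that finishes and mark its task as done.
        task, finish = min(in_progress.items(), key=lambda kv: kv[1])
        time = finish
        done.add(task)
        del in_progress[task]
    return time
```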
## »Day 8: Memory Maneuver«
In the eighth day, we are given a tree flattened in a specific recursive way. Given this flattened version, we are supposed to perform some calculations involving the leaves of the tree.
# Preprocessing
In the recursive functions below, we will be accessing the elements of the list from the back, so we reverse the list immediately during parsing.
# Part 1
In the first part, we need to sum up the values attached at each leaf. This is quite easy to do just by mimicking the flattening process recursively. Note that we, very conveniently, use the pop() function of Python lists, which allows us to consecutively, and recursively, iterate over the whole list.
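A sketch of that recursion, assuming the node layout of the 2018 puzzle (a header with the child count and metadata count, followed by the children and then the metadata entries); data is the flattened input, already reversed so that pop() yields the numbers front-to-back:

```python
def metadata_sum(data):
    n_children = data.pop()
    n_metadata = data.pop()
    total = sum(metadata_sum(data) for _ in range(n_children))
    total += sum(data.pop() for _ in range(n_metadata))
    return total

# Usage sketch:
# numbers = list(map(int, open("input.txt").read().split()))[::-1]
# print(metadata_sum(numbers))
```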
# Part 2
In the second part, rather than summing up the values attached at the leaves, we will need to sum up scores calculated at each node. The calculation of the score gets more involved, but can still be done just by mimicking the flattening process recursively.
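Sketched under the same assumption about the concrete scoring rule of the 2018 puzzle (a childless node scores the sum of its metadata; otherwise the metadata are 1-based indices into the children, with out-of-range indices ignored):

```python
def node_score(data):
    n_children = data.pop()
    n_metadata = data.pop()
    children = [node_score(data) for _ in range(n_children)]
    metadata = [data.pop() for _ in range(n_metadata)]
    if not children:
        return sum(metadata)
    return sum(children[i - 1] for i in metadata if 1 <= i <= len(children))
```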
## »Day 9: Marble Mania«
In the ninth day, we are given a circular list of marbles, into which we will be inserting new marbles in a specific way, placed by players taking turns.
# Part 1
The first part asks us to simulate the game. Since any new marble will be placed within small distance 7 from the last one, we can simulate the whole game via rotation of the circular array. This can be done very efficiently using Python’s deque, which offers precisely the operation of rotation (both clockwise and anticlockwise). We then blindly implement the rules of the game as given by the problem statement.
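A deque-based sketch of the game loop (the every-23rd-marble scoring rule follows the puzzle statement; the function name and signature are illustrative):

```python
from collections import deque

def marble_game(players, last_marble):
    circle = deque([0])                  # the current marble is kept at the right end
    scores = [0] * players
    for marble in range(1, last_marble + 1):
        if marble % 23 == 0:
            circle.rotate(7)             # move seven marbles anticlockwise
            scores[marble % players] += marble + circle.pop()
            circle.rotate(-1)
        else:
            circle.rotate(-1)
            circle.append(marble)
    return max(scores)
```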
# Part 2
In the second part, we are required to do nothing other than play the game with a larger number of marbles. Since the implementation with deque is sufficiently fast, there is nothing else to do.
## »Day 10: The Stars Align«
In the tenth day, we are given a collection of particles in a grid, together with their corresponding velocities. We are told that the particles will align to form a sentence at a particular nonnegative time.
# Preprocessing
Apart from parsing of the input, we prepare for the first time a utility function that will be needed: the »ternary search«, which can minimize a convex function in logarithmic time.
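A sketch of an integer ternary search (assuming the function is unimodal, i.e. first decreasing and then increasing, on the searched interval):

```python
def ternary_search(f, lo, hi):
    """Return an integer argument minimizing the unimodal function f on [lo, hi]."""
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if f(m1) < f(m2):
            hi = m2      # the minimum cannot lie to the right of m2
        else:
            lo = m1      # the minimum cannot lie to the left of m1
    return min(range(lo, hi + 1), key=f)
```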
# Part 1
In the first part, we are supposed to find the sentence formed when the particles align. For this, we will use the following heuristic (which is not guaranteed to work in general; if we however assume the given velocities to be somewhat random, this heuristic will do well): we will try to minimize the diameter of the set of particles, where the diameter measures the spatial extent of the particle set, e.g. the width plus the height of its bounding box.
Since this function is first decreasing and, after reaching its minimum, increasing, we may use ternary search to find the minimum. After calculating the optimal time for the diameter to be minimal, we simply print the positions of the particles.
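Putting the pieces together, reusing the ternary_search helper sketched above and with particles given as (position, velocity) pairs; the bounding-box extent below is one plausible diameter, not necessarily the exact measure used originally:

```python
def diameter(particles, t):
    xs = [x + vx * t for (x, y), (vx, vy) in particles]
    ys = [y + vy * t for (x, y), (vx, vy) in particles]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def message_time(particles, horizon=100_000):
    return ternary_search(lambda t: diameter(particles, t), 0, horizon)
```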
# Part 2
The second part only asks us for the time when the particles align. This has already been calculated.
# A multiscale mini-model
The main strength of Morpheus is that it enables you to create multi-scale models concisely and simulate them efficiently.
In this post, I’ll show how to couple different model formalisms using symbols.
It is assumed you have a basic understanding of how models are described in Morpheus. If not, please go through the “Getting started” post first.
# A word on the word “multi-scale”
First, let’s clarify what we mean by the somewhat hyped term “multi-scale”.
Generally, the term refers to mathematical and computational models that simultaneously describe processes at multiple time and spatial scales. In contrast to models based on the quasi-steady state assumption, which discard interactions between scales, multi-scale models describe systems where processes at different scales can influence each other. Therefore, these models should not only simultaneously describe multiple scales, but also allow them to interact.
One fact that complicates this is that processes at different scales are often best formalized in different modeling formalisms. Therefore, multi-scale modeling often also involves coupling different modeling formalisms that may include spatial/nonspatial models, discrete/continuous models, stochastic/deterministic models.
## TL;DR
In short, multi-scale models are, for the purpose of this post, characterized by three features:
1. they simultaneously describe multiple time or spatial scales
2. they allow interaction between the scales
3. they typically involve coupling between model formalisms
# Multi-scale models and middle-out
Morpheus deals with a particular type of multi-scale models for multicellular systems that consists of:
• intracellular processes such as genetic regulatory networks, often modeled as ordinary differential equations,
• cellular processes such as motility or cell division, modeled in terms of cellular Potts model, and
• inter/extra-cellular processes such as production and diffusion of cytokines, modeled with reaction-diffusion systems.
Morpheus enables you to first model each of these systems separately as single-scale models and later flexibly combine these sub-models into multi-scale models. This allows you to include certain sub-models in a pragmatic fashion in which you start from a certain level of abstraction and work your way up and down by including crucial processes at different scales. Morpheus is designed to support this so-called middle-out modeling strategy.
# A multi-scale mini-model
Let’s go through an example. We’ll construct a model in which an
1. intracellular cell cycle network (ODE) regulates
2. the division of motile cells which (CPM)
3. release a diffusive cytokine (PDE) which, in turn,
4. controls the cell cycle (ODE).
Thus, there are 3 sub-models (ODE, CPM and PDE) that interact in a cyclic fashion:
• $ODE \rightarrow CPM$,
• $CPM \rightarrow PDE$,
• $PDE \rightarrow ODE$.
## Step 1: Intracellular model (ODE)
First, we define a ODE model of cell cycle that we take directly from [Ferrell et al. 2011].
This system describes three interacting species as variables $CDK1$, $Plk1$ and $APC$:
\begin{aligned} \frac{d\,CDK1}{dt}&=\alpha_1 - \beta_1 \cdot CDK1 \frac{APC^n}{ K^n + APC^n} \\ \frac{d\,Plk1}{dt}&=\alpha_2\cdot(1-Plk1) \frac{CDK1^n}{ K^n + CDK1^n } - \beta_2 \cdot Plk1 \\ \frac{d\,APC}{dt}&=\alpha_3\cdot(1-APC) \frac{Plk1^n}{ K^n + Plk1^n } - \beta_3 \cdot APC \end{aligned}
In Morpheus, we can describe the same system of equations, together with the parameter values as below:
Note that the System is now defined within a CellType. This makes sure that the ODE system is calculated separately for each member of the population of this cell type. Also note that the variables are defined as Properties. These are variables that are bound to a particular cell, such that each cell can have a different value for them.
If you run this model for a single cell with the parameters above, you will see oscillatory behavior:
See Examples/ODE/CellCycle.xml to try yourself.
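To get a feel for the oscillations outside of Morpheus, the same ODE system can also be integrated directly, for example with scipy. The parameter values below are illustrative choices in the spirit of Ferrell et al. 2011, not necessarily those used in the Morpheus example file:

```python
from scipy.integrate import solve_ivp

# Illustrative parameters (not necessarily those of Examples/ODE/CellCycle.xml).
a1, b1, a2, b2, a3, b3, K, n = 0.1, 3.0, 3.0, 1.0, 3.0, 1.0, 0.5, 8

def hill(x):
    return x**n / (K**n + x**n)

def cell_cycle(t, y):
    cdk1, plk1, apc = y
    return [a1 - b1 * cdk1 * hill(apc),
            a2 * (1 - plk1) * hill(cdk1) - b2 * plk1,
            a3 * (1 - apc) * hill(plk1) - b3 * apc]

sol = solve_ivp(cell_cycle, (0, 100), [0.0, 0.0, 0.0], max_step=0.1)
# Plotting sol.y against sol.t should reproduce the oscillatory behavior described above.
```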
## Step 2: Cell-based model (CPM)
Now, we would like to couple the above model to control the “physical” cell division of a spatial cell model. This can be achieved easily by adding a few elements (highlighted in green in the figure below).
First, we add a VolumeConstraint and SurfaceConstraint that control the area and shape of a cell. And, obviously, we need to add a CellDivision plugin that controls when and how a cell divides. Here, we specify that the cell should divide when CDK1 > 0.5 in the Condition of cell division.
Note that the multi-scale coupling is automatically established by the symbolic reference (outlined in red) between the condition for cell division and the variable in the intracellular ODE system.
One additional thing to specify is what happens when cell division occurs, using the Triggers. Here, we specify that the target volume Vt of the two daughter cells is half the target volume of the mother cell, i.e. to model cleavage without cell growth.
This generates a simulation like this:
See Examples/Multiscale/CellCycle.xml to try yourself.
## Step 3: Intercellular model (PDE)
The remaining two steps are (1) to let cells release a diffusive chemokine and (2) to make this chemokine modulate the cell cycle.
First, let’s define a cytokine $g$ that diffuses with coefficient $D_g$, is produced by cells proportionally to their $APC$ concentration and decays linearly with a certain rate (0.05):
$$\frac{\partial g}{\partial t} = D_g{\nabla}^2g + APC - 0.05 \cdot g$$
In Morpheus, we define a Field g in the Global section and specify the diffusion coefficient $D_g$ in Diffusion/rate. The reaction step is defined in the System, where we provide a DiffEqn with the production and decay terms in the expression APC - 0.05*g:
Note that we explicitly set APC to be zero everywhere (Constant APC=0.0) but this value is overwritten wherever there are cells (i.e. in the CellType scope). This establishes the coupling between CPM and PDE: the movement and shape of CPM cells affect the production term in the PDE.
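Purely for illustration, a single explicit update step of such a reaction–diffusion field in plain numpy looks like this (Morpheus handles this internally; the grid, time step, boundary handling and the stand-in APC field below are all placeholders):

```python
import numpy as np

def step_cytokine(g, apc, D_g=1.0, decay=0.05, dt=0.01, dx=1.0):
    """One forward-Euler step of dg/dt = D_g * laplacian(g) + APC - decay * g."""
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g) / dx**2   # periodic boundaries
    return g + dt * (D_g * lap + apc - decay * g)

g = np.zeros((100, 100))
apc = np.zeros_like(g)
apc[45:55, 45:55] = 1.0      # stand-in for the APC produced inside cells
for _ in range(1000):
    g = step_cytokine(g, apc)
```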
### Modulating cell cycle
The last step is to let the local concentration of the cytokine modulate the cell cycle.
We first need to compute the local cytokine concentration $g_l$ for each cell (plugins highlighted in green below). Since there are different possibilities (i.e. we could take the sum, the mean or the maximum concentration ‘under’ the cell), we need to use a plugin called a Mapper in which we can define the statistic we want to compute. Here, we specify the Mapper to take Field g as an input and assign the average value to the cell Property g_l – the local cytokine concentration.
Finally, we let the local concentration of $g$ affect the cell cycle dynamics. Here, we (quite artificially) assume that this concentration acts as an additional production term for $CDK1$ and add it to the DiffEqn for CDK1 (see red outline). This implements the coupling between PDE and ODE: the extracellular cytokine concentration affects the intracellular cell cycle.
Here’s a video of a simulation of the full multi-scale model we constructed:
See Examples/Multiscale/CellCycle_PDE.xml to try yourself.
# Relative time-scales
At this point, you may be asking yourself “All nice and well, but how can I control the relative time-scales between the various models?”. And I’d respond: “Great question!”.
Morpheus has a number of ways to control the relative time-scales of the various sub-models. We can either control the ODE/PDE dynamics in System or control the cellular dynamics in CPM.
## Controlling ODE dynamics
Within the System element, there is an optional attribute called time-scaling. When set, all equations within the system are multiplied by this value. Therefore, it can be used to slow down or speed up the dynamics that the ODE system models. Note, however, that this may imply that you need to set a smaller time-step to guarantee numerical stability.
For example, to speed up the cell cycle dynamics, we can set time-scaling=2.0 (compare with figure 2)
## Controlling CPM dynamics
Another way to control the relative timescale is by increasing or decreasing the dynamics of the CPM model. That is, we can control how long one CPM step takes in units of simulation time.
In CPM/MCSDuration, we can set the duration of one Monte Carlo step (MCS), which is defined as one full CPM lattice update. For instance, CPM/MCSDuration value="0.01" means that Morpheus computes 100 MCS per unit of simulation time.
# Conclusion
In this post, we have constructed a small multi-scale model in which an ODE model, a CPM model and a PDE model are mutually coupled to each other. The main aim was to show how one can couple such models with relative ease in Morpheus and how you can control such couplings.
I hope this will enable you to create your own models!
# American Institute of Mathematical Sciences
June 2016, 5(2): 225-234. doi: 10.3934/eect.2016002
## Blowup and ill-posedness results for a Dirac equation without gauge invariance
1 Dipartimento di Matematica, Unversità di Roma "La Sapienza", Piazzale A. More 2, 00185 Roma, Italy 2 Department of Mathematics, Institute of Engineering, Academic Assembly, Shinshu University, 4-17-1 Wakasato, Nagano City 380-8553
Received January 2016 Revised April 2016 Published June 2016
We consider the Cauchy problem for a nonlinear Dirac equation on $\mathbb{R}^{n}$, $n\ge1$, with a power type, non-gauge-invariant nonlinearity $\sim|u|^{p}$. We prove several ill-posedness and blowup results for both large and small $H^{s}$ data. In particular we prove that: for (essentially arbitrary) large data in $H^{\frac n2+}(\mathbb{R} ^n)$ the solution blows up in a finite time; for suitable large $H^{s}(\mathbb{R} ^n)$ data and $s< \frac{n}{2}-\frac{1}{p-1}$ no weak solution exists; when $1< p <1+\frac1n$ (or $1< p <1+\frac2n$ in $n=1,2,3$), there exist arbitrarily small initial data for which the solution blows up in a finite time.
Citation: Piero D'Ancona, Mamoru Okamoto. Blowup and ill-posedness results for a Dirac equation without gauge invariance. Evolution Equations & Control Theory, 2016, 5 (2) : 225-234. doi: 10.3934/eect.2016002
# Microtubules as Sub-Cellular Memristors
## Abstract
Memristors represent the fourth electrical circuit element, complementing resistors, capacitors and inductors. Hallmarks of memristive behavior include pinched and frequency-dependent I–V hysteresis loops and, most importantly, a functional dependence of the magnetic flux passing through an ideal memristor on its electrical charge. Microtubules (MTs), cylindrical protein polymers composed of tubulin dimers, are key components of the cytoskeleton. They have been shown to increase a solution’s ionic conductance and to re-orient in the presence of electric fields. It has been hypothesized that MTs also possess intrinsic capacitive and inductive properties, leading to transistor-like behavior. Here, we show a theoretical basis and experimental support for the assertion that MTs under specific circumstances behave consistently with the definition of a memristor. Their biophysical properties lead to pinched hysteretic current–voltage dependence as well as a classic dependence of magnetic flux on electric charge. Based on the information about the structure of MTs we provide an estimate of their memristance. We discuss its significance for biology, especially neuroscience, and its potential for nanotechnology applications.
## Memristors
The term memristor is the contraction of memory and resistor and it was first proposed in 1971 as the fourth element of the electric circuits1. A memristor is defined as a two-terminal passive circuit element that provides a functional relation between electric charge and magnetic flux1,2. The first physical realization of a memristor was achieved in 20082,3 and it has held a promise of nanoelectronics beyond Moore’s law4, although this realization has been both difficult and controversial5. One of the possible breakthrough applications of memristors is neuromorphic computing6. Memristance refers to a property of the memristor that is analogous to resistance but it also depends on the history of applied voltage or injected current, unlike in other electrical circuit elements. When the electrical charge flows in one direction, the resistance of some memristors increases while it decreases when the charge flows in the opposite direction or vice versa. If the applied voltage is turned off, the memristor retains the last resistance value that it exhibited. This history dependence of memristance is expressed via a self-crossing or pinched I–V loop, which is frequency dependent3,6, and whose lobe area tends to zero as the frequency tends to infinity.
A memristor is said to be charge-controlled if the relation between flux φ and charge q is: φ = φ (q). Conversely, it is said to be flux-controlled if q = q(φ). The voltage v of a charge-controlled memristor obeys a linear relationship with the current i(t) representing a charge-dependent Ohm’s law such that:
$$v(t)=M(q)\,i(t)$$
(1)
where memristance is defined as:
$$M(q)=d\varphi(q)/dq$$
(2)
and it has the units of resistance, namely ohms. For a flux-controlled memristor: i(t) = G(φ) v(t), where the proportionality coefficient G(φ) = dq/dφ is called memductance (a contraction of memory conductance) and is the inverse of memristance.
The ideal memristor1 has been generalized to any electrical circuit device exhibiting a pinched hysteresis loop7. Such generalized memristors have been identified in numerous naturally occurring systems, e.g. potassium and calcium ion channels in the Hodgkin–Huxley nerve membrane circuit model8,9, the aplysia habituation neuron in Kandel’s research on memory10,11, silk proteins12, organic memristors13 and conducting structures in plants14,15,16. The connection between memristors and neuronal synapses17 can potentially shed light on the enigma of memory generation, erasure and retention in the human brain. In this context, a molecular model of memory encoding has been based on phosphorylation of neuronal MTs by calcium calmodulin kinase enzyme (CaMKII)18. This provides indirect indication that MTs may function as nano-scale sub-cellular memristors with an enormous potential for storage of large amounts of biologically-relevant information. Their involvement in many biological functions, especially in cell morphology, mitosis, intracellular transport and neuronal migration makes them important biological structures whose memristive properties would provide an extraordinary range of possibilities in the context of cell biology and neuroscience and also offer a great potential for hybrid nano-biotechnological advances using a combination of protein-based and synthetic components.
## Microtubules
The cytoskeleton of eukaryotic cells contains three main types of protein filaments, namely: MTs, actin filaments (AFs) and intermediate filaments. In addition to providing the necessary mechanical rigidity for cell morphology and localized force generation capabilities due to their polymerization dynamics, these protein polymers participate in a multitude of key biological functions including cell division, cell motility and intracellular transport. Additionally, MTs and other cytoskeletal filaments in neurons have been hypothesized to store molecular bits of information that can build up memory at a sub-cellular level. They have also been proposed to transmit electrical signals in neuronal cells16,17,18,19,20. Conducting properties of AFs and MTs have been experimentally and computationally investigated yielding important insights into their remarkable behavior, which is discussed below. Importantly in this connection, both AFs and especially MTs possess highly electrically-charged surfaces, which enable them to conduct electrical signals via ionic cable-like transmission process21,22. It is important to note that MTs are very abundant in neurons where they form parallel bundles interconnected by MAPs (MT-associated proteins) resembling parallel processing computational architecture. It is, therefore, unsurprising to find that the key components of this intricate subcellular architecture, namely MTs, are endowed with special electrical conduction properties.
MTs have been experimentally demonstrated to respond to externally-applied electric fields in vitro exhibiting alignment and drift effects along field lines23,24,25. However, electric conductivity determination for biopolymers has been very challenging because of both the structural non-uniformity and instability of these polymers and the need to maintain the samples in liquid solution. Moreover, it is also important to note that these biological polymers are very sensitive to environmental factors, e.g. ambient temperature, pH, and buffer composition, especially its ionic strength. A number of experiments have been performed attempting to circumvent these difficulties involving either the intrinsic26,27,28 or ionic29,30 MT conductivities. Direct measurements of ionic electrical conductivity along the MT axis placed in buffer solution in micro-channels established an upper limit on MT conductivity of 90 S/m29 and using an electro-orientational methodology30 resulted in an MT conductivity estimate of approximately 150 mS/m for individual microtubules and a much lower value of approximately 90 mS/m for MTs in the presence of subtilisin (which cleaves C-terminal tails from tubulin reducing 40% of the net charge and hence MT conductivity). This suggests that positive counter-ions partially condense around the negatively-charged MT surface, including the negatively charged C-termini. The mobile counterions attracted to, but not condensed onto the solvent-exposed surface of MTs, appear to be the main contributors to the observed high conductance of MTs with a value approximately 15-fold greater than the solution’s conductivity (9.7 mS/m) in which they were placed in the conducted experiments. More recently, Sahu et al.31 tried to measure electrical conductivity due to counter-ions flowing along the outer surface of MTs. They reported the results from their four-probe measurements of both DC and AC conductive properties. The values of DC conductivity of MTs, found using a 200 nm gap, were reported to be in a very broad range from 10⁻¹ to 10² S/m. In fact, they found MTs at particular frequency values to become almost 1000-fold more conductive than their DC estimates, reportedly showing surprisingly high values for MT conductivities between 10³ and up to 10⁵ S/m31. These effects were interpreted as being due to MT’s ballistic conductivity property. It was also claimed by these authors but not proven that the high conductivity at specific frequency ranges arises from the water content inside the MT lumen32. Moreover, Santelices et al.32 published precise measurements of the AC conductance of MTs in electrolytic solutions and compared them to analogous solutions containing unpolymerized tubulin, at various protein concentrations using a nanofabricated microelectrode-geometry system. Their results show that MTs at a 212 nM tubulin concentration in BRB4 buffer raised the solution’s conductance by 23% at 100 kHz. This effect scaled directly with the tubulin concentration in solution. However, a peak in the conductance spectrum positioned at around f = 110 kHz was observed to be concentration-independent while its amplitude decreased linearly with tubulin concentration. On the other hand, free tubulin was observed to have an opposite effect by decreasing the solution’s conductance by 5% at 100 kHz under identical conditions.
Interpreting these measurements in terms of the number of MTs and approximating their electrical behavior as resistors networks acting in parallel surrounded by a lower conductance solution, it can be estimated that the single MT conductance is approximately 20 S/m. This can be compared to an approximate value of 10 mS/m estimated for the buffer and it indicates that MTs, under certain experimental conditions, exhibit unusually high electric conductivities, being roughly 1000-fold greater than those of the buffer solution. Further experimentation using a parallel-plate capacitor and physiologically-relevant concentrations of tubulin showed that MTs increased solution capacitance at cellular concentrations unlike free tubulin (A. Kalra et al., arXiv preprint arXiv:1905.02865, 2019). These data interpolated to a single 20 μm-long MT indicates the value of its capacitance as C = 3 pF which is comparable to the earlier computational predictions33,34,35,36,37. Some discrepancy between these results may be due to a significant reduction in the value of the dielectric constant of the solution near the protein surface, a fact not included in these previous computational estimates33,34,35,36,37.
Based on the above overview of the reported effects of MTs on the conducting properties of solutions containing MTs, it can be concluded that MTs act as conducting cables for charge transport, showing increased conductivity compared to the solution itself as well as possessing electric capacitance that is due to counter-ion condensation and the formation of a charge-separation double layer involving negative MT surface charges and positive counter-ions. It was further hypothesized previously that MTs may also have intrinsic inductance due to the possibility of solenoidal flow of ionic charges, which put together with other observed properties and a non-linear capacitance due to highly limited number of counter-ions they can attract, leads to a transistor-like behavior with the observed injected current amplification33,38. Below, we argue that in addition to the already demonstrated unusual electrical conduction properties described above, MTs also behave as nano-scale sub-cellular memristive devices. In Fig. 1 we show how the effects of ionic charge propagation along an MT affect the conformations of the negatively-charged C-termini and also how these ionic flows may involve penetration into the MT lumen. These effects will be discussed below in the paper in connection with memristive behavior or MTs.
## Microtubules as Memristors
It is worth noting that approximately 50% of the net negative charge of a tubulin dimer resides in C-terminal protrusions that arise from the protein surface exposed to the bulk solution30. These C-terminal ‘tails’ have a large percentage of Asp and Glu amino acids, and have been computationally simulated35 showing that they likely exist in two major conformational states: (a) a flexible conformational state pointing away from the MT surface towards bulk solution, which indicates a major role of thermal fluctuations, or (b) a more stable state in which the C-termini bind electrostatically to the MT surface at tubulin areas with a local positive electrostatic charge. The two conformations are separated by a potential barrier that can be overcome by a local electric potential fluctuation. Each tubulin dimer has two C-termini that may thus extend outward toward the bulk solution or bend and bind to the MT surface. The state of the C-termini was modeled to modulate the flow of the solution’s ions radially across the MT cylinder in and out of the lumen, through so-called nanopores, which are present between the two adjacent MT protofilaments and spaced every 4 nm. A schematic of ionic flow along the outer surface of an MT has been shown in Fig. 1A–C. Counter-ion flows around the surface of an MT and along its axis are, therefore, dependent on such factors as local pH values, and on the local ionic concentration in the MT vicinity, as well as on the state of the MT’s C-termini, making it a dynamic and possibly nonlinear system. Therefore, the effective radius of the MT structure depends on the state of the C-termini. A lowered concentration of counter-ions will cause a collapse of C-termini on the surface of an MT where patches of positive charges were found to electrostatically attract their negative charges35. It is important to note that this effect is similar to the situation arising with current flows along memristors, which affect the state of the memristor7.
It has been described elsewhere39 that a memristor is an electrical analogue of a flexible pipe that changes its diameter with the amount and direction of fluid that flows through it. If the fluid flows through this pipe in one direction, it expands (becoming less resistive to fluid flow). When the fluid flows in the opposite direction, the pipe shrinks (becoming more resistive to fluid flow). Furthermore, the memristor “remembers” its diameter when the fluid last went through. When the fluid flow is turned off, the pipe diameter “freezes” until such time when the fluid flow is turned back on. The ability to indefinitely store resistance values means that a memristor can be used as a nonvolatile memory. This is in fact what the C-termini of an MT represent due to their conformational changes and the ability to either expand or contract radially as a result of the increased or decreased presence of counter-ion concentrations in their vicinity. Since the counter-ions act as electric charge carriers in the ionic currents facilitated by MTs functioning as nonlinear cables, the variable-diameter pipe analogy appears to be suitable for the description of MT conductivity.
Computational support for this conjecture comes from Freedman et al.40, who report simulations of the ionic currents through microtubule nanopores and the lumen in the presence of coupled C-termini dynamics. In this model, Freedman et al.40 use the Grand Canonical Monte Carlo /Brownian Dynamics (GCMC/BD) methodology to study ionic conductance along the lumen, which is affected by fluxes through the nanopores when an external potential is applied. Figure 1D schematically illustrates such ionic movement through nanopores into and out of the MT lumen.
These simulations revealed specific nanopore conductances and selectivity for an ionic species type. At positive voltages, protein charges increase total conductance by a factor of 7 and cation conductance by a factor of 15. At positive voltages, C-termini increase the total conductance by 12% and cation conductance by 11%, but little effect was found on anions (which are gated at the entrance). While the simulations of Freedman et al.40 did not explicitly show the existence of a pinched hysteresis loop in the ionic conductivity of MTs, this can be derived and analyzed from the data presented in this paper.
In Fig. 2 we show the key findings regarding the current-voltage (I–V) characteristics for the two types of nanopores present in MTs and for anions and cations based on further analysis of the results of the computer simulations. Below we describe how the curves in Fig. 2 have been obtained mathematically.
The existence of a pinched hysteresis loop results from the I–V characteristics and it requires an inversion operation, which is straightforward to demonstrate. Namely, if I = G(φ)V is a solution describing the I–V characteristic for a microtubule, then (−I) = G(φ)(−V) is an odd-symmetric solution of that dependence as well. This relies on the definition of memductance given above as G(φ) = dq(φ)/dφ. If we reverse the signs of the charge q and the flux φ, then the memductance G(φ) does not change sign. However, both the current I = dq/dt and the voltage V = dφ/dt do change their signs. Hence, the flux-dependent Ohm’s law I = G(φ)V does not change when the signs of I and V are reversed. Consequently, it follows that we can find the dependence of I versus V for the negative values from that obtained by Freedman et al.40 by inverting the signs of I and V. Next, we follow the ideas stated in Chua et al.7. The problem is to find such a G(φ) for which the pinched hysteresis loop can be reproduced when V is described by a sinusoidal function.
It is important to note that the data represented by the red and blue asterisks in Fig. 2 intersect at the origin. These intersections have different slopes for the two curves representing the data obtained from the simulations of Freedman et al.40. We assume that these data result in the formation of pinched hysteresis loops7. With this in mind, we judiciously choose the following dependence of q versus φ
$${q}(\varphi )={{G}}_{0}\{\varphi +{\alpha }\varphi \,\tanh \,[(\varphi -{\varphi }_{0})/{\varphi }^{\ast }]\}$$
(3)
Here, the four parameters, G0, α, φ*, and φ0, are adjustable. In particular, the parameters G0 and G1 = (1 + α)G0 are the memductances at φ = φ0 and at φ ≫ φ0, respectively. Differentiating the above equation with respect to time t, we obtain29
$${I}={dq}/{dt}=[{dq}(\varphi )/{d}\varphi ]({d}\varphi /{dt})={G}(\varphi ){v}$$
(4)
where
$${G}(\varphi )={{G}}_{0}\{1+\,{\rm{sech}} \,{[(\varphi -{\varphi }_{0})/{\varphi }^{\ast }]}^{2}\}$$
(5)
is the memductance at φ, and has the unit of siemens (S). Applying a sinusoidal voltage source given by the following relationship
$${v}({t})={A}\,\sin ({\omega }{t})\,{\rm{for}}\,{t} > 0\,{\rm{and}}\,{v}({t})=0\,{\rm{for}}\,{t} < 0$$
(6)
the flux φ(t) is obtained by integrating the voltage over time:

$$\varphi ({t})={\int }_{0}^{{t}}{A}\,\sin ({\omega }{t}{^{\prime} }){d}{t}{^{\prime} }=({A}/{\omega })[1-\cos ({\omega }{t})]$$
(7)
For the sake of simplicity, we set the arbitrary parameters A and ω as A = 1 and ω = 1. The parameters G0, α, φ0 and φ* were adjusted in such a manner that the dependence of I versus V provides the best fit to the data shown in Fig. 2.
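As an illustration of this construction, the following minimal Python sketch (our own, not the fitting code used to produce Fig. 2, and with placeholder parameter values rather than the fitted ones) evaluates Eqs. (3), (6) and (7) for a sinusoidal drive and traces the resulting I–V curve:

import numpy as np
import matplotlib.pyplot as plt

# Illustrative (not fitted) parameters of the q(phi) model in Eq. (3)
G0, alpha, phi0, phi_star = 1.0, 0.5, 0.2, 0.5
A, omega = 1.0, 1.0                                   # drive amplitude and frequency, Eq. (6)

t = np.linspace(0.0, 4.0 * np.pi / omega, 4000)
v = A * np.sin(omega * t)                             # applied voltage, Eq. (6)
phi = (A / omega) * (1.0 - np.cos(omega * t))         # flux, Eq. (7)

q = G0 * (phi + alpha * phi * np.tanh((phi - phi0) / phi_star))   # charge, Eq. (3)
i = np.gradient(q, t)                                 # memristor current I = dq/dt

plt.plot(v, i)                                        # traces a pinched hysteresis loop
plt.xlabel('V (arb. units)')
plt.ylabel('I (arb. units)')
plt.show()

Because φ(t) returns to the same value whenever v(t) passes through zero, the current also vanishes there, which is what pins the loop at the origin while the up- and down-sweeps follow different branches.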
The non-zero parameter φ0 in the pinched hysteresis loops shown in Fig. 2 determines concavity of the upper right-side curve covering the red asterisks and the lower left-side one covering the blue asterisks. This parameter determines the asymmetry of the dependence of q versus φ with respect to the origin. It is clearly seen to have a negative value q0 at zero flux, φ = 0. This is in accordance with Freedman et al.40 and the previously described electrostatic characteristics of MTs, which have a large uncompensated negative charge to which q0 corresponds, thereby supporting our fitting procedure.
Note that the value of M of a memristor depends on the charge (i.e., the time integral of current), and not on the current itself. In fact, if a DC current, i(t), is applied across a memristor, the memristance will not have a constant value, but will vary with time. This is because the memristance M(q) = df(q)/dq is not a function of i, but rather it is a function of the time integral of i(t), namely, the charge q, where f(q) is the slope of the flux versus charge, a characteristic curve defining the memristor. After the current drops to zero at t = T, the memristance retains its last value M(T). In other words, the memristor remembers the latest value of M(t) until the current drops to zero. This property is directly responsible for the memory of the device. It is important to understand that what is remembered is the value of memristance, and not the value of the voltage, or current. Hence memristance represents the “memory” property of the device.
The question then arises regarding an estimate of the memristance of a single MT based on our understanding of its electrical conductivity properties. Using earlier analyses of MT conductivity as an effective RLC network forming an ionic conductivity cable36,37,40, the following estimates have been made for a single ring of an MT, which is 8 nm long. Its capacitance was found to be C = 6.6 × 10−4 pF, its resistance perpendicular to the cylinder axis R2 = 1.2 MΩ and its inductance L = 30 pH, assuming the presence of solenoidal ionic currents winding tightly around and along its axis. Recall that memristance is the partial derivative of the magnetic flux with respect to electric charge. Magnetic flux, φ, is proportional to the magnetic induction and the cross-sectional area, A. Inductance is given by the formula L = μN2A/l, where N is the number of virtual coils wrapped around the surface of an MT and N = d/rh, with d denoting the length of a dimer (8 nm) and rh the radius of hydration of an ion (0.36 nm). For the tightest possible winding around the cylinder Nmax = 20, while for the least tight winding only 1 virtual ionic wire wraps around the entire MT, so Nmin = 10−3. Using the formula for magnetic induction B = μNI/l and substituting for L from above, we readily find that φ = LI/N. As determined in earlier MT conductivity experiments, typical current values along an MT are on the order of 1 pA34. We now hypothesize, based on the previous arguments, that memristance is due to the effects of C-termini undergoing conformational changes resulting from the ionic current flows. The two conformational states of the C-termini of MTs, namely outstretched and folded, may, based on the dimensions of the peptide chains involved, differ by as much as 4 nm (but no more than that due to the size of these peptide structures), which would affect the effective radius of an MT just like the variable-diameter pipe used as a metaphor above. Hence, a relative change in the inductance of an MT is estimated to be ΔL/L = 2ΔR/R = 0.6. Therefore, for a single ring of an MT, an associated change in inductance, ΔL, is expected to be in the range of 20 pH. This is expected to result from a change in the amount of net charge on the C-termini of approximately 5 to 6 e, hence ΔQ = 10−18 C. With these values and the fact that N = d/rh = 20, we obtain the memristance of a single MT ring as M0 = ΔL·I/(ΔQ·N), ranging between 10−3 Ω and 20 Ω. Since an average MT is typically 10 µm long, M = nM0, where n is the number of rings, n = l/d = 1250, and hence the total memristance for such an MT, M, is expected to range between 1 Ω and 20 kΩ.
This is a very small value for a single MT, whose conductance is expected to be on the order of 20 S/m32; hence its resistance is expected to be in the range of 1 GΩ, completely overshadowing the memristive contribution. Conversely, a very large memductance means that this mode of ionic conduction around an MT, in a solenoidal fashion involving the dynamics of C-termini, represents a high-conductivity “cable” with a special memristive property. This does not mean that the memristive properties of MTs are insignificant, but that they are related to a special mode of ionic conduction involving solenoidal currents affected by C-termini conformational states. We therefore hypothesize that these solenoidal ionic currents require special initial conditions to be generated, i.e. low intensities of the electric fields generating them in order to maintain tight contact of ionic flows around MTs, and an orientation of these electric fields at an angle to the MT axis that is close to, but not exactly, perpendicular in order to lead to tight winding of the resulting solenoidal flows. Otherwise, it is expected that ionic flows involving MTs would either proceed linearly along the MT axis, occur perpendicularly to it and cross the MT walls through its nanopores, or, finally, scatter and diffuse off the MTs representing obstacles to ionic flows.
It should be mentioned that the memristive behavior of MTs could have remarkable applications in biology, especially as a memory-storing device. This will be discussed in more detail in the Conclusions section, but it suffices to say that the direction of the solenoidal currents could be controlled by C-termini phosphorylation or post-translational modifications, both of which are known effects in cell biology and could explain numerous hitherto unexplained phenomena such as calmodulin kinase phosphorylation of MTs and its relation to actual memory storage in the human brain.
There are currently no data available in the experimental biophysics literature to verify the above numbers for memristance of MTs directly but work is underway to create a single MT trapping device, e.g. a microfluidic chamber or a nanochannel, to test these predictions. Below, we briefly discuss some preliminary experimental data we collected on ensembles of microtubules where pinched hysteresis loops were observed. While still preliminary, these observations are consistent with the theoretical predictions made in this paper.
## Experimental Measurements
We have previously reported impedance measurements for MTs and tubulin solutions under various conditions in Santelices et al.32, where details of the methodology used can be found. Such experiments, performing current–voltage measurements at both AC and DC voltages, can potentially validate the memristive properties of MTs. Here, we report for the first time the observed hysteretic behavior of the buffer solution with ensembles of MTs.
Specifically, by reversing the applied voltage we were able to observe the pinched hysteresis loop characteristic of memristors, which is shown in Fig. 3. A combined effect is shown for MTs and the buffer in which they are solubilized, as well as the net effect of MTs by themselves, where we have subtracted the contribution of the buffer.
Finally, we performed preliminary current-voltage measurements in a set-up shown in Fig. 4A using MTs at physiologically relevant concentrations of tubulin (22 μM)41,42,43 and in the presence of the BRB80T buffer (80 mM ionic strength). When we imaged MTs in such solutions using an epifluorescence microscope, we noticed that MTs formed unaligned and complex meshworks, resulting in a non-trivial bioelectric network (Fig. 4B). The high number of unaligned MTs results in a complex pattern of behavior that will be investigated in detail elsewhere. Nonetheless, current-voltage traces obtained through these measurements displayed complex hysteretic behavior (Fig. 4C).
## Conclusions
In this paper we have provided theoretical and experimental evidence in support of the hypothesis that MTs are subcellular memristors. MTs have highly negative linear charge densities, which are screened by counterions surrounding MTs from both outer and inner surfaces. The same counterions anchored by negative charges within the Bjerrum regions stay in fixed states until injected currents or potential gradients push them away from the MT vicinity. Ionic motion is guided by MT geometry but there are numerous intricacies due to the presence of nanopores on the MT surface, which enable ionic motion in and out of the lumen. Moreover, the highly-charged C-termini decorate the MT surface in a periodic pattern and can fluctuate between at least two conformational states: outstretched and bound to the MT surface. Transitions between these two states are susceptible to local electrostatic potentials and hence interact with ionic flows. We believe that these conformational transitions are responsible for memristive properties of MTs. In Fig. 5 we show an illustrative comparison between a TiO2 memristor where oxygen vacancies play the role of memory carriers and an MT memristor where ionic species are memory carriers.
Overall, our findings in the above experiments and simulations seem to indicate that MTs can propagate, and amplify, electric signals via ionic flows along the MT surface, through the lumen and across their nanopores. Simulations suggest that these flows are sensitive to the dynamics of the C-terminal region, and consequently are tubulin isotype-dependent, since various tubulin isotypes are characterized, among other properties, by C-termini differences. So far there seems to be no direct experimental verification of the role of the MT cytoskeleton in electrical signal conduction in neurons. However, there is significant indirect experimental evidence in support of MTs being involved in human cognition and hence potentially in neuronal signaling. In Fig. 6 we show the distribution of MTs within the axons of neurons, which is intended to visualize how signals carried by ionic flows along MTs can be incorporated into the functions of neurons by interactions with MAPs, which can then be coupled to axoplasmic transport and can affect ion channels, for example.
The neuronal cytoskeleton has been purported to play a crucial role in learning processes and memory formation, which has been documented and reviewed21,22,38. Most eukaryotic cells exhibit MT dynamic instability, with periods of growth interspersed with catastrophes and rescue events. MTs in neurons, however, are less dynamic and more stable due to their interconnections with MAPs. Nevertheless, reorganization of the MTs and MAPs in the neuronal cytoskeleton is known to occur during learning; this reorganization has been seen to correlate with an increase in MT numbers and has also been shown to be impaired by the MT-depolymerizing agent colchicine. This appears to indicate that learning involves dynamic MTs21,44. Work on the molecular basis of memory has implicated CaMKII (calcium/calmodulin-dependent protein kinase II) as crucial to LTP (long-term potentiation), contributing to learning and memory formation45,46. CaMKII also phosphorylates both α- and β-tubulin directly in the C-terminal region of the protein47. An atomic-resolution model of MT phosphorylation by CaMKII18 demonstrates an intricate and potentially massive molecular code of information encryption in the structure of neuronal MTs, especially in dendrites, and this can be directly linked to current flows, which we argue possess memristive properties along MTs. Enzymatic reactions of this type may trigger MT matrix reorganization, which is required for memory formation and learning. A mechanistic understanding of memory encoding at the subcellular level now emerges, which is not only dynamic but inherently linked to subtle conductive properties of MTs, especially their memristive ionic conduction characteristics as argued in this paper48. There are several clear advantages offered by MTs as subcellular memristors. Tubulin is one of the most abundant proteins in neurons, and MTs are exceptionally well-conserved, spatio-temporally ubiquitous proteins. This suggests a widespread nature of MT memristors within the cell. Additionally, post-translational modifications (PTMs) on the C-termini tails of tubulin that vary depending on the local and global MT environment may lead to complex attenuations in memristive action, depending on the positioning of MTs within the cell and on the cell type. The presence of MAPs provides further advantages and complexity to MT networks, creating connections among adjacent MTs (see Fig. 6) and establishing contacts between MTs and various macromolecules. While it is well known that proteins degrade and denature over time, which would affect the endurance of MT-based memristors, some of this may be mitigated by stabilizing MTs via MAPs and pharmacological agents such as taxol. On the other hand, the limited durability of MT-based devices offers new avenues such as the ability to construct evolvable bio-electronic devices, or biodegradable or self-destructing ones.
Our discovery that MTs are biological memristors could, for example, help resolve the heretofore unknown origin of the impressive memory capabilities exhibited by amoebae, which have no neurons, let alone a brain, but are loaded with MTs. In general, MTs may offer numerous advantages over silicon-based technology. Microtubules are biological, biodegradable materials and hence offer an environmental advantage over semiconducting materials. They are very abundant in all eukaryotic organisms and highly conserved through evolution, indicating their importance to living systems. Due to the diversity of C-termini sequences, which are cell-type and species-specific, there is a huge potential for designing an array of MT-based memristors with functional differences. This can be further amplified by post-translational modifications. Finally, MTs can form bioelectric circuits through their natural connections to MAPs; hence an enormous spectrum of circuit geometries can be created, even by self-organization processes.
## Methods
We performed I-V measurements on samples of MTs in buffer solution using a semiconductor characterization system (Keithley 4200-SCS) with a probe station. For this purpose, we created two-terminal and four-terminal electrical devices, made of Pt wires attached to a glass substrate in a flow cell, to test electrical changes due to MTs in physiological-like solution.
The electrical devices were constructed on a 10 cm square wafer. Each device, called EDA, has five wires, with contact pads large enough to be attached with probe tips to the Keithley 4200 semiconductor characterization system. In the region where the wires converge, the five wires extend for 500 µm, so that MTs can cross all five wires. In one device, the wires were fabricated to be 4 µm wide, with a 6 µm space between wires (10 µm apart center-to-center). By visual inspection of fluorescent images, there appear to be 2, 3, 10, and 7 MTs making solid connections between wires 1 and 2, 2 and 3, 3 and 4, and 4 and 5, respectively.
MT polymerization was performed by first reconstituting tubulin powder (Cytoskeleton Inc, tl590m) according to the protocol provided by the supplier. The solution was subsequently snap-frozen in experimental sized aliquots. For each experiment, MTs were polymerized by incubating a tubulin aliquot (with a 45.45 μM concentration) at 37 °C for 30 minutes. BRB80 (80 mM PIPES pH 6.9, 2 mM MgCl2 and 0.5 mM EGTA (Cytoskeleton, Inc. BST01) supplemented with paclitaxel was added to this solution to attain the required tubulin and ionic concentration. To attain a final concentration of MTs at 22 μM tubulin, equal volumes of this solution and tubulin solution were mixed.
The flow cell was flushed with BRB4 buffer solution. Next, three frequency sweeps for conductance measurements were performed. The flow cell was subsequently flushed with the MT-containing buffer solution BRB4-MT1x, after which three more frequency sweeps were implemented. An identical protocol was used for the following compositions of MTs and tubulin: BRB4-MT2x, BRB4-MT5x, BRB4-T1x, and BRB4-T5x, respectively. The buffer solution BRB4 was generated by diluting BRB80 20-fold with Milli-Q water. An HM Digital COM-100 EC/TDS/temperature meter was then inserted into the BRB4 buffer solution and the temperature was determined. After an incubation of one minute, the conductivity of each solution was measured and recorded.
To investigate the conductance of MTs and tubulin, we used diluted low ionic solution to lower the amount of ionic contribution to the overall conductivity. As usual, BRB4 ionic strength buffer was prepared by adding 5 μL of BRB80 buffer (80 mM PIPES pH 6.9, 2 mM MgCl2 and 0.5 mM EGTA (obtained from Cytoskeleton, Inc. BST01) to 95 μL Milli-Q water. BRB4-MT1x solutions (42.4 nM tubulin concentration) for testing were prepared by adding 5 μL of MT solution (850 nM tubulin concentration in BRB80) to 95 μL Milli-Q water. BRB4-MT2x solutions (84.8 nM tubulin) were prepared by adding 5 μL of MT2x solution (4 mM tubulin concentration in BRB80) to 95 μL Milli-Q water. BRB4-MT5x (212 nM tubulin) was prepared by adding 5 μL of MT5x solution (10 mM tubulin concentration in BRB80) to 95 μL Milli-Q water.
The range of applied voltages was ±1 V with 0.2 V step, from −1 V to 1 V. Linear regression was applied on the −0.6 to 0 V points to report a conductance. The slope calculated from linear regression is the sample’s conductance, and the inverse is its resistance. Multiple voltage sweeps were performed in each experimental situation. Signatone tungsten probe tips were used. Four-point collinear probe measurements were performed on the first four wires from the left of the EDA device, with a 1 nA applied current (the current range recommended by Keithley to give a voltage drop of about 10 mV) between wires 1 and 4. Wires 2 and 3 both measure the voltage respective to ground wire 4. Mean voltages are reported with an SD error. The current source range to determine the input impedance of wires 2 and 3 was set to be in the 1 nA range. The Keithley settings used were as follows: sampling mode, normal speed, interval 0.25, hold time 1. Ten measurements were performed with ~0.3 s between measurements, per execution.
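As an illustration of this conductance-extraction step, the short sketch below (our own, using fabricated example values rather than measured data; all variable names are placeholders) performs the linear fit over the −0.6 to 0 V points with NumPy:

import numpy as np

# Hypothetical I-V sweep (volts, amps); in the experiment these come from the Keithley 4200-SCS
v = np.arange(-1.0, 1.2, 0.2)
i = 2.0e-8 * v + np.random.normal(0.0, 1.0e-10, v.size)   # fabricated, roughly ohmic response

mask = (v >= -0.6) & (v <= 0.0)            # restrict the fit to the -0.6 V ... 0 V points
slope, intercept = np.polyfit(v[mask], i[mask], 1)

conductance = slope                        # siemens
resistance = 1.0 / slope                   # ohms
print("G = %.3e S, R = %.3e Ohm" % (conductance, resistance))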
For the experiments at physiologically relevant ionic strengths, a parallel-plate capacitor geometry composed of FTO (Fluorine-doped Tin Oxide) contacts was used, as displayed in Fig. 4A and described elsewhere (A. Kalra et al., unpublished, 2019). The distance between the plates was 70 μm and the total volume of the solution was 2.5 μL. These experiments were performed using a Zahner Zennium Impedance Analyzer in a two-probe configuration. Under these conditions there was no appreciable difference between the MT solutions and the controls. Experiments investigating the DC response of MTs at such physiologically relevant tubulin and ionic concentrations are presently under way.
## Data availability
The authors will provide the experimental data reported in this paper upon request.
## References
1. Chua, L. Memristor-The missing circuit element. IEEE Trans. Circuit Theory 18, 507–519 (1971).
2. Tour, J. M. & He, T. Electronics: the fourth element. Nature 453, 42–43 (2008).
3. Strukov, D. B., Snider, G. S., Stewart, D. R. & Williams, R. S. The missing memristor found. Nature 453, 80–83 (2008).
4. Waldrop, M. M. The chips are down for Moore’s law. Nature 530, 144–147 (2016).
5. Vongehr, S. & Meng, X. The Missing Memristor has Not been Found. Sci. Rep. 5, 11657 (2015).
6. Adhikari, S. P., Sah, M. P., Kim, H. & Chua, L. O. Three Fingerprints of Memristor. IEEE Trans. Circuits Syst. Regul. Pap. 60, 3008–3021 (2013).
7. Chua, L. If it’s pinched it’s a memristor. Semicond. Sci. Technol. 29, 104001 (2014).
8. Hodgkin, A. L. & Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500–544 (1952).
9. Chua, L., Sbitnev, V. & Kim, H. Hodgkin–Huxley axon is made of memristors. Int. J. Bifurc. Chaos 22, 1230011 (2012).
10. Kandel, E. R. In Search of Memory: The Emergence of a New Science of Mind. (W. W. Norton & Company, 2007).
11. Chua, L. Memristor, Hodgkin-Huxley, and edge of chaos. Nanotechnology 24, 383001 (2013).
12. Mukherjee, C., Hota, M. K., Naskar, D., Kundu, S. C. & Maiti, C. K. Resistive switching in natural silk fibroin protein-based bio-memristors. Phys. Status Solidi A 210, 1797–1805 (2013).
13. Chen, Y.-C. et al. Nonvolatile bio-memristor fabricated with egg albumen film. Sci. Rep. 5, 10022 (2015).
14. Volkov, A. G. et al. Memristors in plants. Plant Signal. Behav. 9, e28152 (2014).
15. Volkov, A. G. et al. Memristors in the electrical network of Aloe vera L. Plant Signal. Behav. 9, e29056 (2014).
16. Volkov, A. G. et al. Memristors in the Venus flytrap. Plant Signal. Behav. 9, e29204 (2014).
17. Jo, S. H. et al. Nanoscale memristor device as synapse in neuromorphic systems. Nano Lett. 10, 1297–1301 (2010).
18. Craddock, T. J. A., Tuszynski, J. A. & Hameroff, S. Cytoskeletal signaling: is memory encoded in microtubule lattices by CaMKII phosphorylation? PLoS Comput. Biol. 8, e1002421 (2012).
19. Cronly-Dillon, J., Carden, D. & Birks, C. The possible involvement of brain microtubules in memory fixation. J. Exp. Biol. 61, 443–454 (1974).
20. Lynch, G., Rex, C. S., Chen, L. Y. & Gall, C. M. The substrates of memory: defects, treatments, and enhancement. Eur. J. Pharmacol. 585, 2–13 (2008).
21. Priel, A., Tuszynski, J. A. & Woolf, N. J. Neural cytoskeleton capabilities for learning and memory. J. Biol. Phys. 36, 3–21 (2010).
22. Tuszyński, J. A., Hameroff, S., Satarić, M. V., Trpisova, B. & Nip, M. L. A. Ferroelectric behavior in microtubule dipole lattices: implications for information processing, signaling and assembly/disassembly. J. Theor. Biol. 174, 371–380 (1995).
23. Vassilev, P. M., Dronzine, R. T., Vassileva, M. P. & Georgiev, G. A. Parallel arrays of microtubules formed in electric and magnetic fields. Biosci. Rep. 2, 1025–1029 (1982).
24. Kirson, E. D. et al. Disruption of cancer cell replication by alternating electric fields. Cancer Res. 64, 3288–3295 (2004).
25. Stracke, R., Böhm, K. J., Wollweber, L., Tuszynski, J. A. & Unger, E. Analysis of the migration behaviour of single microtubules in electric fields. Biochem. Biophys. Res. Commun. 293, 602–609 (2002).
26. Fritzsche, W., Böhm, K., Unger, E. & Köhler, J. M. Making electrical contact to single molecules. Nanotechnology 9, 177 (1998).
27. Fritzsche, W., Böhm, K. J., Unger, E. & Köhler, J. M. Metallic nanowires created by biopolymer masking. Appl. Phys. Lett. 75, 2854–2856 (1999).
28. Whittier, J. E. & Goddard, G. R. Microtubule Structural Dynamics Measured with Impedance Spectroscopy. FASEB J. 20, A492–A492 (2006).
29. Umnov, M. et al. Experimental evaluation of electrical conductivity of microtubules. J. Mater. Sci. 42, 373–378 (2007).
30. Minoura, I. & Muto, E. Dielectric measurement of individual microtubules using the electroorientation method. Biophys. J. 90, 3739–3748 (2006).
31. Sahu, S. et al. Atomic water channel controlling remarkable properties of a single brain microtubule: correlating single protein to its supramolecular assembly. Biosens. Bioelectron. 47, 141–148 (2013).
32. Santelices, I. B. et al. Response to Alternating Electric Fields of Tubulin Dimers and Microtubule Ensembles in Electrolytic Solutions. Sci. Rep. 7, 9594 (2017).
33. Priel, A., Ramos, A. J., Tuszynski, J. A. & Cantiello, H. F. Effect of calcium on electrical energy transfer by microtubules. J. Biol. Phys. 34, 475–485 (2008).
34. Friesen, D. E., Craddock, T. J. A., Kalra, A. P. & Tuszynski, J. A. Biological wires, communication systems, and implications for disease. Biosystems 127, 14–27 (2015).
35. Priel, A., Tuszynski, J. A. & Woolf, N. J. Transitions in microtubule C-termini conformations as a possible dendritic signaling phenomenon. Eur. Biophys. J. EBJ 35, 40–52 (2005).
36. Priel, A., Tuszynski, J. A. & Cantiello, H. F. The Dendritic Cytoskeleton as a Computational Device: An Hypothesis. In The Emerging Physics of Consciousness (ed. Tuszynski, J. A.) 293–325 (Springer, 2006).
37. Priel, A., Ramos, A. J., Tuszynski, J. A. & Cantiello, H. F. A biopolymer transistor: electrical amplification by microtubules. Biophys. J. 90, 4639–4643 (2006).
38. Nogales, E., Whittaker, M., Milligan, R. A. & Downing, K. H. High-resolution model of the microtubule. Cell 96, 79–88 (1999).
39. Mullins, J. Memristor minds. New Scientist 203(2715), 42–45 (2009).
40. Freedman, H. et al. Model of ionic currents through microtubule nanopores and the lumen. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 81, 051912 (2010).
41. Hiller, G. & Weber, K. Radioimmunoassay for tubulin: a quantitative comparison of the tubulin content of different established tissue culture cells and tissues. Cell 14, 795–804 (1978).
42. Shelden, E. & Wadsworth, P. Observation and quantification of individual microtubule behavior in vivo: microtubule dynamics are cell-type specific. J. Cell Biol. 120, 935–945 (1993).
43. Van de Water, L. & Olmsted, J. B. The quantitation of tubulin in neuroblastoma cells by radioimmunoassay. J. Biol. Chem. 255, 10744–10751 (1980).
44. Woolf, N. J. & Priel, A. Nanoneuroscience: Structural and Functional Roles of the Neuronal Cytoskeleton in Health and Disease. (Springer Science & Business Media, 2009).
45. Lisman, J., Schulman, H. & Cline, H. The molecular basis of CaMKII function in synaptic and behavioural memory. Nat. Rev. Neurosci. 3, 175–190 (2002).
46. Colbran, R. J. & Brown, A. M. Calcium/calmodulin-dependent protein kinase II and synaptic plasticity. Curr. Opin. Neurobiol. 14, 318–327 (2004).
47. Wandosell, F., Serrano, L., Hernández, M. A. & Avila, J. Phosphorylation of tubulin by a calmodulin-dependent protein kinase. J. Biol. Chem. 261, 10332–10339 (1986).
48. Chua, L. Five non-volatile memristor enigmas solved. Appl. Phys. A 124, 563 (2018).
## Acknowledgements
J.A.T. acknowledges funding support from Natural Sciences and Engineering Research Council of Canada and the Allard Foundation. L.O.C. acknowledges partial support from the USA Air Force Office of Scientific Research under grant number FA 9550-18-1-0016.
## Author information
### Contributions
L.O.C. conceived of the main idea. V.I.S. and H.K. performed the main calculations, D.F. and J.A.T. designed the experiments, I.S., A.P.K., S.D.P. and D.F. performed the main experiments under the supervision of J.A.P. and K.S. HF provided parameter values. All the authors contributed to writing the manuscript.
### Corresponding author
Correspondence to Jack A. Tuszynski.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Tuszynski, J.A., Friesen, D., Freedman, H. et al. Microtubules as Sub-Cellular Memristors. Sci Rep 10, 2108 (2020). https://doi.org/10.1038/s41598-020-58820-y
# Interactive plotting basics in matplotlib
The main goals of this post are:
• Provide bullet points about Matplotlib’s architecture and point to documentation for more in-depth exploration.
• Build two examples plotting multidimensional data with basic interactive capabilities.
# 1. Matplotlib architecture bullets
In order to enable Matplotlib’s interactive capabilities, it doesn’t hurt to understand how it is structured. The current Matplotlib architecture is composed of three layers: the scripting layer, the artist layer and the backend layer, which interact in the following way (a short snippet for inspecting the backend follows the list below):
• The data is either created or loaded in the scripting layer. This layer supports the programmatic interaction and provides users with the ability to manipulate figures with a syntax that is fairly intuitive.
• The data is transformed into various objects in the artist layer; it is adjusted as scripted. This layer is responsible for the abstraction of each visual component that you see in a figure.
• These objects are then rendered by the backend. This last layer enables the users to create, render, and update the figure objects. Figures can be displayed and interacted with via common user interface events such as the keyboard and mouse inputs.
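As a quick, minimal illustration of the backend layer (a sketch of my own, not required for the examples below), you can query the active backend and, if desired, select a different interactive one before importing pyplot:

import matplotlib
print(matplotlib.get_backend())   # e.g. 'TkAgg', 'Qt5Agg' or 'MacOSX'
# matplotlib.use('TkAgg')         # optionally force a backend; call this before importing pyplot
import matplotlib.pyplot as plt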
Matplotlib has extensive documentation; however, the best way to learn about backends is to explore the source code. Some of the documents that I also found helpful, aside from the wonderful Stack Overflow, are:
Raman, Kirthi. Mastering Python Data Visualization. Packt Publishing Limited, 2015.
Root, Benjamin V. “Interactive Applications Using Matplotlib.” (2015).
McGreggor, Duncan M. “Mastering matplotlib.” (2015).
# 2. Example: Fun click
Here’s a simple example that connects a 5D scatter plot to a pick and a mouse click event. The goal is to show information about the point being clicked. In this case, the row index of the data matrix and the corresponding values are printed in the Python terminal, as shown in the following figure.

This could be useful when plotting a multidimensional Pareto Front. If you see a solution of interest in your scatter plot, you can directly access its exact values and its index with a simple mouse click. This index may be useful to track the corresponding decision vector in your decision matrix. Also, note that even though I am only plotting a 5D scatter plot, I used a 6D data matrix; hence, I can also see what the sixth value is on the terminal. The following code was used to generate the previous figure. Parts 2.1 through 2.5 were adapted from the Visualization strategies for multidimensional data post; refer to that post for a detailed explanation of these fragments.
### 2.1. Required Libraries
import pylab as plt
import mpl_toolkits.mplot3d.axes3d as p3
import numpy as np
import seaborn # not required, only changes the default plot style
### 2.2. Loading or generating the data
#data = np.loadtxt('your_data.txt')
data = X = np.random.random((50,6)) # setting a random 50 x 6 matrix in place of your data
scale = 1000 # scaling factor for the size objective (assumed value; used by ax.scatter and the legend below)
### 2.3. Accessing Matplotlib’s object-oriented plotting
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
im = ax.scatter(data[:,0], data[:,1], data[:,2], c=data[:,3], s=data[:,4]*scale, alpha=1, cmap=plt.cm.spectral, picker=True) # on newer Matplotlib versions use plt.cm.nipy_spectral
### 2.4. Setting the main axis labels
ax.set_xlabel('OBJECTIVE 1')
ax.set_ylabel('OBJECTIVE 2')
ax.set_zlabel('OBJECTIVE 3')
### 2.5. Setting colorbar and its label vertically
cbar= fig.colorbar(im)
cbar.ax.set_ylabel('OBJECTIVE 4')
### 2.6. Setting size legend
objs=data[:,4]
max_size=np.amax(objs)*scale
min_size=np.amin(objs)*scale
handles, labels = ax.get_legend_handles_labels()
display = (0,1,2)
size_max = plt.Line2D((0,1),(0,0), color='k', marker='o', markersize=max_size, linestyle='')
size_min = plt.Line2D((0,1),(0,0), color='k', marker='o', markersize=min_size, linestyle='')
legend1= ax.legend([handle for i,handle in enumerate(handles) if i in display]+[size_max,size_min],
[label for i,label in enumerate(labels) if i in display]+["%.2f"%(np.amax(objs)), "%.2f"%(np.amin(objs))], labelspacing=1.5, title='OBJECTIVE 5', loc=1, frameon=True, numpoints=1, markerscale=1)
### 2.7. Setting the picker function
This part defines and connects the events. The mpl_connect() method connects the event to the figure. It accepts two arguments, the name of the event and the callable object (such as a function).
def onpick(event):
    ind = event.ind[0]
    print('index: %d\nobjective 1: %0.2f\nobjective 2: %0.2f\nobjective 3: %0.2f\nobjective 4: %0.2f\nobjective 5: %0.2f\nobjective 6: %0.2f' % (ind, data[ind,0], data[ind,1], data[ind,2], data[ind,3], data[ind,4], data[ind,5]))

fig.canvas.mpl_connect('pick_event', onpick)
plt.show()
For a list of events that Matplotlib supports, please refer to: Matplotlib event options. To download the previous code go to the following link: https://github.com/JazminZatarain/Basic-interactive-plotting/blob/master/fun_click.py
# 3. Example: Fun Annotation
Transitioning to a slightly more sophisticated interaction, we use annotations to link the event to a figure. That is, the information is shown directly in the figure as opposed to the terminal as in the previous example. In the following snippet, the row index of the data matrix is shown directly in the figure canvas by simply clicking on the point of interest.
The following code was adapted from Matplotlib’s documentation and this stackoverflow post to execute the previous figure. Sections 3.1. and 3.2. define our scatter plot with its corresponding labels as we saw in the previous example.
### 3.1. Required Libraries
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import proj3d
### 3.2. Visualizing data in 3d plot with popover next to mouse position
def visualize3DData(X, scale, cmap):
    fig = plt.figure(figsize=(16,10))
    ax = fig.add_subplot(111, projection='3d')
    im = ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=X[:, 3], s=X[:, 4]*scale, cmap=cmap, alpha=1, picker=True)
    ax.set_xlabel('OBJECTIVE 1')
    ax.set_ylabel('OBJECTIVE 2')
    ax.set_zlabel('OBJECTIVE 3')
    cbar = fig.colorbar(im)
    cbar.ax.set_ylabel('OBJECTIVE 4')
    objs = X[:,4]
    max_size = np.amax(objs)*scale
    min_size = np.amin(objs)*scale
    handles, labels = ax.get_legend_handles_labels()
    display = (0,1,2)
    size_max = plt.Line2D((0,1),(0,0), color='k', marker='o', markersize=max_size, linestyle='')
    size_min = plt.Line2D((0,1),(0,0), color='k', marker='o', markersize=min_size, linestyle='')
    legend1 = ax.legend([handle for i,handle in enumerate(handles) if i in display]+[size_max,size_min],
                        [label for i,label in enumerate(labels) if i in display]+["%.2f"%(np.amax(objs)), "%.2f"%(np.amin(objs))], labelspacing=1.5, title='OBJECTIVE 5', loc=1, frameon=True, numpoints=1, markerscale=1)
### 3.3. Return distance between mouse position and given data point
    def distance(point, event):
        assert point.shape == (3,), "distance: point.shape is wrong: %s, must be (3,)" % point.shape
### 3.4. Project 3d data space to 2d data space
        x2, y2, _ = proj3d.proj_transform(point[0], point[1], point[2], plt.gca().get_proj())
        # Convert 2d data space to 2d screen space
        x3, y3 = ax.transData.transform((x2, y2))
        return np.sqrt((x3 - event.x)**2 + (y3 - event.y)**2)
### 3.5. Calculate which data point is closest to the mouse position
    def calcClosestDatapoint(X, event):
        distances = [distance(X[i, 0:3], event) for i in range(X.shape[0])]
        return np.argmin(distances)
    def annotatePlot(X, index):
        # If we have previously displayed another label, remove it first
        if hasattr(annotatePlot, 'label'):
            annotatePlot.label.remove()
        # Get data point from array of points X, at position index
        x2, y2, _ = proj3d.proj_transform(X[index, 0], X[index, 1], X[index, 2], ax.get_proj())
### 3.6. Specify the information to be plotted in the annotation label
        annotatePlot.label = plt.annotate("index: %d" % index,
            xy=(x2, y2), xytext=(-20, 20), textcoords='offset points', ha='right', va='bottom',
            bbox=dict(boxstyle='round,pad=0.5', fc='yellow', alpha=0.5),
            arrowprops=dict(arrowstyle='->', connectionstyle='arc3,rad=0'))
        fig.canvas.draw()
### 3.7. Defining the event
This last part defines the event that is triggered when the mouse is moved. It also shows the text annotation over the data point closest to the mouse.
    def onMouseMotion(event):
        closestIndex = calcClosestDatapoint(X, event)
        annotatePlot(X, closestIndex)

    fig.canvas.mpl_connect('motion_notify_event', onMouseMotion)  # connect the mouse motion event
    plt.show()
### 3.8. You can forget the previous code and simply insert your data here
if __name__ == '__main__':
    import seaborn # not required
    X = np.random.random((50,6)) # this is the randomly generated data for this example
    scale = 1000 # scale of the size objective
    cmap = plt.cm.spectral # colormap for objective 4 (assumed; use plt.cm.nipy_spectral on newer Matplotlib)
    visualize3DData(X, scale, cmap)
# Visualizing multidimensional data: a brief historical overview
The results of a MOEA search are presented as a set of multidimensional data points. In order to form useful conclusions from our results, we must have the ability to comprehend the multidimensional differences between results and effectively analyze and communicate them to decision makers.
Navigating through multiple dimensions is an inherently complex task for the human mind. We perceive the world in three dimensions, and thinking in higher dimensional space can be heavily taxing. The difficulty of comprehending multidimensional data is compounded when one must display the data on a two dimensional surface such as a sheet of paper or a computer screen. The challenge of “flattening” data has persisted for centuries, and has plagued not only those who were concerned with gleaning scientific insights from data, but also artists and those seeking to accurately portray the physical world as perceived by the human eye.
For much of human history, even the greatest artists were unable to accurately express the three dimensional world in a two dimensional plane. Nicolo da Bologna’s 14th century work, The Marriage, fails to convey any sense of three dimensional space, giving the viewer the impression that the figures painted have been pressed against a pane of glass.
Nicolo da Bologna’s The Marriage (1350s) is unable to convey any sense of depth to the viewer.
During the Italian Renaissance, artists rediscovered the mathematics of perspective, allowing them to break free of the constraints of their two dimensional canvas and convey realistic images that gave the illusion of a third dimension. Raphael’s The school of Athens masterfully uses perspective to imbue his painting with a sense of depth. Through clever exploitation of Euclidean geometry and the mechanics of the human eye, Raphael is able to use the same medium (paint on a two dimensional surface) to convey a much richer representation of his subjects than his Bolognese predecessor.
Raphael’s The School of Athens (1509-1511) is an example of a masterful use of perspective. The painting vividly depicts a three dimensional space.
In the twentieth century, artists began attempting to convey more than three dimensions in two dimensional paintings. Cubists such as Picasso attempted to portray multiple viewpoints of the same image simultaneously, and futurists such as Umberto Boccioni attempted to depict motion and “Dynamism” in their paintings to convey time as a fourth dimension.
Pablo Picasso’s Portrait of Dora Maar (1938), depicts a woman’s face from multiple viewpoints simultaneously
Umberto Boccioni’s Dynamism of a Cyclist (1913) attempts to portray a fourth dimension, time, through a sense of motion and energy in the painting. Can you tell this is supposed to be a cyclist, or did I go too far out there for a water programming blog?
Regardless of your views on the validity of modern art, as engineers and scientists we have to admit that in this area we share similar goals and challenges with these artists: to effectively convey multidimensional data in a two dimensional space. Unlike artists, whose objectives are to convey emotions, beauty or abstract ideas through their work, we in the STEM fields seek to gain specific insights from multidimensional data that will guide our actions or investigations.
A notable historical example of the effective use of clever visualization was English physician John Snow’s map of the London Cholera epidemic of 1854. Snow combined data of cholera mortality with patient home addresses to map the locations of cholera deaths within the city.
John Snow’s map of the 1854 London Cholera Epidemic. Each black bar is proportional to the number of cholera deaths at a given residence. Pumps are depicted using black circles. One can clearly see that the cholera deaths are clustered around the pump on Broad Street (which I’ve circled in red).
The results of his analysis led Snow to conclude that a contaminated well was the likely source of the outbreak, a pioneering feat in the field of public health. Snow’s effective visualization not only provided him with insights into the nature of the problem, but also allowed him to effectively communicate his results to a general public who had previously been resistant to the idea of water borne disease.
In his insightful book Visual Explanations: Images and Quantities, Evidence and Narrative, Edward Tufte points to three strengths within John Snow’s use of data visualization in his analysis of the epidemic. First, Snow provided the appropriate context for his data. Rather than simply plotting a time series of cholera deaths, Snow placed those deaths within a new context, geographic location, which allowed him to make the connection to the contaminated pump. Second, Snow made quantitative comparisons within his data. As Tufte points out, a fundamental question when dealing with statistical analysis is “Compared with what?” It’s not sufficient to simply provide data about those who were struck with the disease; one must also explain why certain populations were not affected. By complementing his data collection with extensive interviews of the local population, Snow was able to show that there were indeed people who escaped disease within the area of interest, but these people all got their water from other sources, which strengthened his argument that the pump was the source of the epidemic. Finally, Tufte insists that one must always consider alternative explanations to the one that seems apparent from the data visualization before drawing final conclusions. It is easy to make a slick but misleading visualization, and in order to maintain credibility as an analyst, one must always keep an open mind to alternative explanations. Snow took the utmost care in crafting and verifying his conclusion, and as a result his work stands as a shining example of the use of visualization to explore multidimensional data.
While Snow’s methodology is impressive, and Tufte’s observations about his work helpful, we cannot directly apply his methodology to future evaluations of multidimensional data, because his map is only useful when evaluating data from the epidemic of 1854. There is a need for general tools that can be applied to multidimensional data to provide insights through visualizations. Enter the field of visual analytics. As defined by Daniel Keim, “Visual analytics combines automated-analysis techniques with interactive visualization for an effective understanding, reasoning and decision making on the basis of very large and complex data sets”. The field of visual analytics combines the disciplines of data analysis, data management, geo-spatial and temporal processes, spatial decision support, human-computer interaction and statistics. The goal of the field is to create flexible tools for visual analysis and data mining. Noted visualization expert Alfred Inselberg proposed six criteria that successful visualization tools should have:
1. Low representational complexity.
2. Works for any number of dimensions.
3. Every variable is treated uniformly.
4. The displayed object can be recognized under progressive transformations (i.e. rotation, translation, scaling, perspective).
5. The display easily/intuitively conveys information on the properties of the N-dimensional object it represents.
6. The methodology is based on rigorous mathematical and algorithmic results.
Using the above criteria, Inselberg developed the Parallel Coordinate plot. Parallel Coordinate plots transform multidimensional relationships into two dimensional patterns which are well suited for visual data mining.
An example of a five dimensional data set plotted on a Parallel Coordinate plot. Each line represents a data point, while each axis represents the point’s value in each dimension.
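If you would like to generate such a plot yourself, here is a minimal sketch (my own, not part of the original figure) using pandas; the column names and the 'solution' grouping column are arbitrary placeholders:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Random 5-objective data; parallel_coordinates requires a class/label column
data = pd.DataFrame(np.random.random((30, 5)),
                    columns=['obj1', 'obj2', 'obj3', 'obj4', 'obj5'])
data['solution'] = 'all'   # a single group here; this could hold cluster labels instead

parallel_coordinates(data, 'solution', color=['steelblue'], alpha=0.5)
plt.ylabel('Objective value')
plt.show()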
As water resources analysts dealing with multiobjective problems, it is critical that we have the ability to comprehend and communicate the complexities of multidimensional data. By learning from historical data visualization examples and making use of cutting edge visual analytics, we can make this task much more manageable. Parallel coordinate plots are just one example of the many visualization tools that have been created in recent decades by the ever evolving field of visual analytics. As computing power continues its rapid advancement, it is important that we as analysts continue to ask ourselves whether we can improve our ability to visualize and gain insights from complex multidimensional data sets.
# Synthetic Weather Generation: Part IV
Conditioning Synthetic Weather Generation on Climate Change Projections
This is the fourth blog post in a five part series on synthetic weather generators. You can read about common single-site parametric and non-parametric weather generators in Parts I and II, respectively, as well as multi-site generators of both types in Part III. Here I discuss how parametric and non-parametric weather generators can be modified to simulate weather that is consistent with climate change projections.
As you are all well aware, water managers are interested in finding robust water management plans that will consistently meet performance criteria in the face of hydrologic variability and change. One method for such analyses is to re-evaluate different water management plans across a range of potential future climate scenarios. Weather data that is consistent with these scenarios can be synthetically generated from a stochastic weather generator whose parameters or resampled data values have been conditioned on the climate scenarios of interest, and methods for doing so are discussed below.
Parametric Weather Generators
Most climate projections from GCMs are given in terms of monthly values, such as the monthly mean precipitation and temperature, as well as the variances of these monthly means. Wilks (1992) describes a method for adjusting parametric weather generators of the Richardson type in order to reproduce these projected means and variances. This method, described here, has been used for agricultural (Mearns et al., 1996; Riha et al., 1996) and hydrologic (Woodbury and Shoemaker, 2012) climate change assessments.
Recall from Part I that the first step of the Richardson generator is to simulate the daily precipitation states with a first order Markov chain, and then the precipitation amounts on wet days by independently drawing from a Gamma distribution. Thus, the total monthly precipitation is a function of the transition probabilities describing the Markov chain and the parameters of the Gamma distribution describing precipitation amounts. The transition probabilities of the Markov chain, p01 and p11, represent the probabilities of transitioning from a dry day to a wet day, or a wet day to another wet day, respectively. An alternative representation of this model shown by Katz (1983) is with two different parameters, π and d, where π = p01/(1 + p01 − p11) and d = p11 − p01. Here π represents the unconditional probability of a wet day, and d represents the first-order autocorrelation of the Markov chain.
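As a small illustration (my own sketch, not code from Katz, 1983 or Wilks, 1992), this reparameterization and its inverse can be written as:

def markov_to_katz(p01, p11):
    """Convert Markov chain transition probabilities to (pi, d)."""
    pi = p01 / (1.0 + p01 - p11)   # unconditional probability of a wet day
    d = p11 - p01                  # first-order autocorrelation of the chain
    return pi, d

def katz_to_markov(pi, d):
    """Invert the transformation back to (p01, p11)."""
    p01 = pi * (1.0 - d)
    p11 = d + p01
    return p01, p11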
Letting SN be the sum of N daily precipitation amounts in a month (or equivalently, the monthly mean precipitation), Katz (1983) shows that if the precipitation amounts come from a Gamma distribution with shape parameter α and scale parameter β, the mean, μ, and variance, σ2, of SN are described by the following equations:
(1) μ(SN) = Nπαβ
(2) σ2(SN) = Nπαβ2[1 + α(1-π)(1+d)/(1-d)].
For climate change impact studies, one must find a set of parameters θ=(π,d,α,β) that satisfies the two equations above for the projected monthly mean precipitation amounts and variances of those means. Since there are only 2 equations but 4 unknowns, 2 additional constraints are required to fully specify the parameters (Wilks, 1992). For example, one might assume that the frequency and persistence of precipitation do not change, but that the mean and variance of the amounts distribution do. In that case, π and d would be unchanged from their estimates derived from the historical record, while α and β would be re-estimated to satisfy Equations (1) and (2). Other constraints can be chosen based on the intentions of the impacts assessment, or varied as part of a sensitivity analysis.
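To make this concrete, here is a minimal sketch (my own illustration of the constraint choice described above, not code from Wilks, 1992) that holds π and d at their historical values and re-estimates α and β so that Equations (1) and (2) match a projected monthly mean and variance; the example numbers are made up:

def adjust_gamma_parameters(mu_target, var_target, pi, d, N=30):
    """Solve Eqs. (1)-(2) for the Gamma shape (alpha) and scale (beta),
    holding pi and d fixed at their historical estimates."""
    r = (1.0 - pi) * (1.0 + d) / (1.0 - d)
    # From Eq. (1): alpha*beta = mu/(N*pi); substituting into Eq. (2) gives a closed form.
    # A positive solution requires var_target * N * pi > mu_target**2 * r.
    alpha = mu_target**2 / (var_target * N * pi - mu_target**2 * r)
    beta = mu_target / (N * pi * alpha)
    return alpha, beta

# Example with made-up projected monthly values (mm)
alpha_new, beta_new = adjust_gamma_parameters(mu_target=110.0, var_target=1500.0,
                                              pi=0.4, d=0.3, N=30)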
To modify the temperature-specific parameters, recall that in the Richardson generator, the daily mean and standard deviation of the non-precipitation variables are modeled separately on wet and dry days by annual Fourier harmonics. Standardized residuals of daily minimum and maximum temperature are calculated for each day by subtracting the daily mean and dividing by the daily standard deviation given by these harmonics. The standardized residuals are then modeled using a first-order vector auto-regression, or VAR(1) model.
For generating weather conditional on climate change projections, Wilks (1992) assumes that the daily temperature auto and cross correlation structure remains the same under the future climate so that the VAR(1) model parameters are unchanged. However, the harmonics describing the mean and standard deviation of daily minimum and maximum temperature must be modified to capture projected temperature changes. GCM projections of temperature changes do not usually distinguish between wet and dry days, but it is reasonable to assume the changes are the same on both days (Wilks, 1992). However, it is not reasonable to assume that changes in minimum and maximum temperatures are the same, as observations indicate that minimum temperatures are increasing by more than maximum temperatures (Easterling et al., 1997; Vose et al., 2005).
Approximating the mean temperature, T, on any day t as the average of that day’s mean maximum temperature, µmax(t), and mean minimum temperature, µmin(t), the projected change in that day’s mean temperature, ΔT(t), can be modeled by Equation 3:
(3) $\Delta \overline{T}\left(t\right) = \frac{1}{2}\left[\Delta\mu_{min}\left(t\right) + \Delta\mu_{max}\left(t\right)\right] = \frac{1}{2} \left(CX_0 + CX_1\cos\left[\frac{2\pi\left(t-\phi\right)}{365}\right] + CN_0 + CN_1\cos\left[\frac{2\pi\left(t-\phi\right)}{365}\right]\right)$
where CX0 and CN0 represent the annual average changes in maximum and minimum temperatures, respectively, and CX1 and CN1 the corresponding amplitudes of the annual harmonics. The phase angle, φ, represents the day of the year with the greatest temperature change between the current and projected climate, which is generally assumed to be the same for the maximum and minimum temperature. Since GCMs predict that warming will be greater in the winter than the summer, a reasonable value of φ is 21 for January 21st, the middle of winter (Wilks, 1992).
In order to use Equation 3 to estimate harmonics of mean minimum and maximum temperature under the projected climate, one must estimate the values of CX0, CN0, CX1 and CN1. Wilks (1992) suggests a system of four equations that can be used to estimate these parameters:
(4) ΔT = 0.5*(CX0 + CN0)
(5) Δ[T(JJA) – T(DJF)] = -0.895(CX1 + CN1)
(6) ΔDR(DJF) = CX0 − CN0 + 0.895(CX1 − CN1)
(7) ΔDR(JJA) = CX0 − CN0 − 0.895(CX1 − CN1)
where the left hand sides of Equations (4)-(7) represent the annual average temperature change, the change in the temperature range between summer (JJA) and winter (DJF), the change in average diurnal temperature range (DR = µmax – µmin) in winter, and the change in average diurnal temperature range in summer, respectively. The constant ±0.895 is simply the average value of the cosine term in Equation (3) evaluated at φ = 21 for the winter (+) and summer (−) seasons. The values for the left hand side of these equations can be taken from GCM projections, either as transient functions of time or as step changes.
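Because Equations (4)-(7) are linear in CX0, CN0, CX1 and CN1, they can be solved as a single linear system. Below is a minimal sketch (my own illustration); the four left-hand-side changes are assumed to have already been extracted from the GCM projections:

import numpy as np

def solve_temperature_change_harmonics(dT_annual, dT_JJA_minus_DJF, dDR_DJF, dDR_JJA):
    """Solve Equations (4)-(7) for the vector [CX0, CN0, CX1, CN1]."""
    A = np.array([[0.5,  0.5,  0.0,    0.0],     # Eq. (4)
                  [0.0,  0.0, -0.895, -0.895],   # Eq. (5)
                  [1.0, -1.0,  0.895, -0.895],   # Eq. (6)
                  [1.0, -1.0, -0.895,  0.895]])  # Eq. (7)
    b = np.array([dT_annual, dT_JJA_minus_DJF, dDR_DJF, dDR_JJA])
    return np.linalg.solve(A, b)

The four constants returned then define the change in the daily mean temperature harmonics through Equation 3.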
Equations (4)-(7) can be used to estimate the mean minimum and maximum temperature harmonics for the modified weather generator, but the variance in these means may also change. Unlike changes in mean daily minimum and maximum temperature, it is fair to assume that changes in the standard deviation of these means are the same as each other and the GCM projections for changes in the standard deviation of daily mean temperature for both wet and dry days. Thus, harmonics modeling the standard deviation of daily minimum and maximum temperature on wet and dry days can simply be scaled by some factor σd’/ σd, where σd is the standard deviation of the daily mean temperature under the current climate, and σd’ is the standard deviation of the daily mean temperature under the climate change projection (Wilks, 1992). Like the daily mean temperature changes, this ratio can be specified as a transient function of time or a step change.
It should be noted that several unanticipated changes can occur from the modifications described above. For instance, if one modifies the probability of daily precipitation occurrence, this will change both the mean daily temperature (since temperature is a function of whether or not it rains) and its variance and autocorrelation (Katz, 1996). See Katz (1996) for additional examples and suggested modifications to overcome these potential problems.
Non-parametric Weather Generators
As described in Part II, most non-parametric and semi-parametric weather generators simulate weather data by resampling historical data. One drawback to this approach is that it does not simulate any data outside of the observed record; it simply re-orders them. Modifications to the simple resampling approach have been used in some stationary studies (Prairie et al., 2006; Leander and Buishand, 2009) as mentioned in Part II, and can be made for climate change studies as well. Steinschneider and Brown (2013) investigate several methods on their semi-parametric weather generator. Since their generator does have some parameters (specifically, transition probabilities for a spatially averaged Markov chain model of precipitation amounts), these can be modified using the methods described by Wilks (1992). For the non-parametric part of the generator, Steinschneider and Brown (2013) modify the resampled data itself using a few different techniques.
The first two methods they explore are common in climate change assessments: applying scaling factors to precipitation data and delta shifts to temperature data. Using the scaling factor method, resampled data for variable i, xi, are simply multiplied by a scaling factor, a, to produce simulated weather under climate change, axi. Using delta shifts, resampled data, xi, are increased (or decreased) by a specified amount, δ, to produce simulated weather under climate change, xi + δ.
Another more sophisticated method is the quantile mapping approach. This procedure is generally applied to parametric CDFs, but can also be applied to empirical CDFs, as was done by Steinschneider and Brown (2013). Under the quantile mapping approach, historical data of the random variable, X, are assumed to come from some distribution, FX, under the current climate. The CDF of X under climate change can be specified by a different target distribution, FX*. Simulated weather variables xi under current climate conditions can then be mapped to values under the projected climate conditions, xi*, by equating their values to those of the same quantiles in the target distribution, i.e. xi* = F*-1(F(xi)).
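Below is a minimal sketch of the empirical quantile mapping step using numpy; it is my own illustration of the general idea, not the exact implementation of Steinschneider and Brown (2013). Here hist_sample defines FX, target_sample defines FX*, and x holds the resampled values to be mapped:

import numpy as np

def empirical_quantile_map(x, hist_sample, target_sample):
    """Map x through the empirical CDF of hist_sample and then through the
    inverse empirical CDF of target_sample, i.e. x* = F*^-1(F(x))."""
    hist_sorted = np.sort(hist_sample)
    target_sorted = np.sort(target_sample)
    # Non-exceedance probabilities of x under the historical empirical CDF
    probs = np.searchsorted(hist_sorted, x, side='right') / float(len(hist_sorted))
    # Invert the target empirical CDF by interpolating between its order statistics;
    # probabilities outside the observed range are clamped to the extreme target values
    quantile_levels = np.arange(1, len(target_sorted) + 1) / float(len(target_sorted))
    return np.interp(probs, quantile_levels, target_sorted)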
While simple, these methods are effective approaches for top-down or bottom-up robustness analyses. Unfortunately, what one often finds from such analyses is that there is a tradeoff between meeting robustness criteria in one objective, and sacrificing performance in another, termed regret. Fortunately, this tradeoff can occasionally be avoided if there is an informative climate signal that can be used to inform water management policies. In particular, skillful seasonal climate forecasts can be used to condition water management plans for the upcoming season. In order to evaluate these conditioned plans, one can generate synthetic weather consistent with such forecasts by again modifying the parameters or resampling schemes of a stochastic weather generator. Methods that can be used to modify weather generators consistent with seasonal climate forecasts will be discussed in my final blog post on synthetic weather generators.
Works Cited
Easterling, D. R., Horton, B., Jones, P. D., Peterson, T. C., Karl, T. R., Parker, D. E., et al. Maximum and minimum temperature trends for the globe. Science, 277(5324), 364-367.
Katz, R. W. (1983). Statistical procedures for making inferences about precipitation changes simulated by an atmospheric general circulation model. Journal of the Atmospheric Sciences, 40(9), 2193-2201.
Katz, R. W. (1996). Use of conditional stochastic models to generate climate change scenarios. Climatic Change, 32(3), 237-255.
Leander, R., & Buishand, T. A. (2009). A daily weather generator based on a two-stage resampling algorithm. Journal of Hydrology, 374, 185-195.
Mearns, L. O., Rosenzweig, C., & Goldberg, R. (1996). The effect of changes in daily and interannual climatic variability on CERES-Wheat: a sensitivity study. Climatic Change, 32, 257-292.
Prairie, J. R., Rajagopalan, B., Fulp, T. J., & Zagona, E. A. (2006). Modified K-NN model for stochastic streamflow simulation. Journal of Hydrologic Engineering11(4), 371-378.
Richardson, C. W. (1981). Stochastic simulation of daily precipitation, temperature and solar radiation. Water Resources Research, 17, 182-190.
Riha, S. J., Wilks, D. S., & Simoens, P. (1996). Impact of temperature and precipitation variability on crop model predictions. Climatic Change, 32, 293-311.
Steinschneider, S., & Brown, C. (2013). A semiparametric multivariate, multisite weather generator with low-frequency variability for use in climate risk assessments. Water Resources Research, 49, 7205-7220.
Vose, R. S., Easterling, D. R., & Gleason, B. (2005). Maximum and minimum temperature trends for the globe: An update through 2004. Geophysical research letters, 32(23).
Wilks, D. S. (1992). Adapting stochastic weather generation algorithms for climate change studies. Climatic Change, 22(1), 67-84.
Woodbury, J., & Shoemaker, C. A. (2012). Stochastic assessment of long-term impacts of phosphorus management options on sustainability with and without climate change. Journal of Water Resources Planning and Management, 139(5), 512-519.
# Debugging in Python (using PyCharm) – Part 3
This post is part 3 of a multi-part series of posts intended to provide discussion of some basic debugging tools that I have found to be helpful in developing a pure Python simulation model using a Python Integrated Development Environment (IDE) called PyCharm.
Before I begin this post, the following are links to previous blog posts I have written on this topic:
In this post I will focus on PyCharm’s “Coverage” features, which are very useful for debugging by allowing you to see what parts of your program (e.g., modules, classes, methods) are/are not being accessed for a given implementation (run) of the model. If instead you are interested in seeing how much time is being spent running particular sections of code, or want to glimpse into the values of variables during execution, see the previous posts I linked above on profiling and breakpoints.
To see what parts of my code are being accessed, I have found it helpful to create and run what are called “unit tests”. You can find more on unit testing here, or just by googling it. (Please note that I am not a computer scientist, so I am not intending to provide a comprehensive summary of all possible approaches you could take to do this. I am just going to describe something that has worked well for me). To summarize, unit testing refers to evaluating sections (units) of source code to determine whether those units are performing as they should. I have been using unit testing to execute a run of my model (called “PySedSim”) to see what sections of my code are and are not being accessed.
I integrated information from the following sources to prepare this post:
Step 1. Open the script (or class, or method) you want to assess, and click on the function or method you want to assess.
In my case, I am assessing the top-level python file “PySedSim.py”, which is the file in my program that calls all of the classes to run a simulation (e.g., Reservoirs and River Channels). Within this file, I have clicked on the PySedSim function. Note that these files are already part of a PyCharm project I have created, and Python interpreters have already been established. You need to do that first.
Step 2. With your cursor still on the function/method of interest, click “ctrl + shift + T”.
A window should appear as it does below. Click to “Create New Test”.
Step 3. Create a new test. Specify the location of the script you are testing, and keep the suggested test file and class names, or modify them. Then click to add a check mark to the box next to the Test Method, and click “OK”.
Step 4. Modify the new script that has been created. (In my case, this file is called “test_pySedSim.py”, and appears initially as it does below).
I then modified this file so that it reflects testing I want to conduct on the PySedSim method in the PySedSim.py file.
In my case, it appears like this.
from unittest import TestCase
from PySedSim import PySedSim

class TestPySedSim(TestCase):
    def test_PySedSim(self):
        PySedSim()
Note that there is a ton of functionality that is now possible in this test file. I suggest reviewing this website again carefully for ideas. You can raise errors, and use the self.fail() function, to indicate whether or not your program is producing acceptable results. For example, if the program produces a negative result when it should produce a positive result, you can indicate to PyCharm that this represents a fail, and the test has not been passed. This offers you a lot of flexibility in testing various methods in your program.
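For instance, here is a minimal sketch of such a check. It assumes, purely for illustration, that PySedSim() returns a dictionary of output time series with a 'reservoir_storage' key; the real model's interface may differ.

from unittest import TestCase
from PySedSim import PySedSim

class TestPySedSimOutputs(TestCase):
    def test_storage_is_never_negative(self):
        results = PySedSim()  # hypothetical: assumes the simulation returns its outputs
        storage = results['reservoir_storage']  # hypothetical output key
        if min(storage) < 0:
            self.fail("Simulated reservoir storage should never be negative")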
In my case, all I am wanting to do is run the model and see which sections were accessed, not to specifically evaluate results it has produced, so in my case PyCharm should execute the model and indicate it has “passed” the unit test (once I create and run the unit test).
Step 5. In the menu at the top of the screen that I show clicked on in the image below, click on “Edit configurations”.
From here, click on the “+” button, and go to Python tests –> Unittests.
Step 6. In the “Run/Debug Configurations” window, give your test a name in the “Name” box, and in the “script” box locate the script you created in Step 4, and indicate its path. Specify any method parameters that need to be specified to run the method. I did not specify any environment preferences, as the interpreter was already filled in. Click OK when you are done.
Step 7. Your test should now appear in the same configuration menu you clicked on earlier in Step 5. So, click the button at the top to “Run with Coverage”. (In my case, run water_programming_blog_post with coverage)
Note that it is likely going to take some time for the test to run (more than it would take for a normal execution of your code).
Step 8. Review the results.
A coverage window should appear to the right of your screen, indicating what portions (%) of the various functions and methods contained in this program were actually entered.
To generate a more detailed report, you can click on the button with the green arrow inside the coverage window, which will offer you options for how to generate a report. I selected the option to generate an html report. If you then select the “index” html file that appears in the directory you’re working in, you can click to see the coverage for each method.
For example, here is an image of a particular class (reservoir.py), showing in green those sections of code that were entered, and in red the sections that were not. I used this to discover that particular portions of some methods were not being accessed when they should have been. The script files themselves also now have red and green text that appears next to the code that was not or was entered, respectively. See image above for an example of this.
PyCharm also indicates whether or not the unittest has passed. (Though I did not actually test for specific outputs from the program, I could have done tests on model outputs as I described earlier, and any test failure would be indicated here). |
# Is a medical diagnosis considered a theory or a hypothesis
###### Question:
Is a medical diagnosis considered a theory or a hypothesis?
## Notes of Andrea Bonfiglioli’s lecture
Maximal principles and Harnack inequalities for PDO’s in divergence form
1. Motivation
CR geometry (sub-Laplacians), stochastic PDE’s.
2. Introduction
2.1. Standing assumptions
1. Total nondegeneracy.
2. Smooth hypoellipticity.
Sometimes, we require that ${L-\epsilon}$ be hypoelliptic as well, or even the existence of a global, positive fundamental solution (unfortunately, this is known only for special classes, like homogeneous operators on nilpotent groups, Nagel-Stein 1990).
2.2. Earlier work
Theorem 1 (Bony 1969) Maximum principle and Harnack inequality for a class of degenerate elliptic operators (sums of squares of Hörmander vectorfields).
Bony uses a Hopf-type lemma and maximum propagation to get the maximum principle, which is then used to get the Harnack inequality.
Huge literature in the 1980's: Fabes, Jerison, Serapioni, Franchi, Lanconelli, Chanillo, Wheeden, Sanchez-Calle. All assume hypo-ellipticity.
Nowadays, the framework has been enlarged : doubling metric spaces satisfying Poincaré inequality.
2.3. Examples
Sub-Laplacians.
Fedii 1971 : sum of squares of non Hörmander vectorfields (a constant basis, whose vectors are multiplied with flat functions). This can be hypo-elliptic but not sub-elliptic (Fefferman-Phong 1981).
3. Results
3.1. Hopf Lemma
Let ${F}$ be the set where ${u}$ achieves its maximum. Let ${y\in F}$ and ${\nu}$ be orthogonal to ${F}$ (meaning that the interior of some ball centered at ${y+\epsilon \nu}$ and passing through ${y}$ is disjoint from ${F}$). Then…
3.2. From Hopf lemma to maximum principle
Theorem 2 Non total degeneracy and hypoellipticity imply strong maximum principle.
\proof
Principal vectorfields ${X}$ have to be tangent to ${F}$. This implies ${F}$ has to be invariant under ${X}$. How can one build them ? Use columns of the matrix defining the operator. Note that Hörmander’s condition need not hold for these vectorfields.
Amano 1979 observed that non total degeneracy and hypoellipticity imply connectivity of ${{\mathbb R}^n}$ with respect to such vectorfields plus a drift vectorfields. Thus maximum principle follows.
3.3. Harnack inequality
Theorem 3 Non total degeneracy and hypoellipticity of ${L-\epsilon}$ imply a strong Harnack inequality where, however, the constant depends on the shape of the domain and of the considered subdomain.
\proof
Follows Bony’s approach. Solve the Dirichlet for ${L-\epsilon}$ (based on maximal principle). Prove existence of the Green kernel of ${L-\epsilon}$. Get a weak Harnack inequality. Use potential theory to get Harnack from weak Harnack.
By maximum principle, the Green kernel ${k_\epsilon}$ of ${L-\epsilon}$ is positive. Then for ${u\geq 0}$ such that ${Lu=0}$, Bony proves that
$\displaystyle \begin{array}{rcl} u(x)\geq \epsilon\int u(y)k_\epsilon(x,y)\,d\nu(y). \end{array}$
Since ${k_\epsilon>0}$, this allows one to locally bound ${u(x)}$ from below by the ${L^1_{loc}}$-norm of ${u}$. On the space of ${L}$-harmonic functions, the ${L^1_{loc}}$ and ${C^\infty}$ topologies coincide. This way, we get the weak Harnack inequality
$\displaystyle \begin{array}{rcl} \sup_K u \leq C(x_0)u(x_0). \end{array}$
3.4. Role of potential theory
Theorem 4 (Mokobodzki-Brelot 1964) Very abstract setting. Assume weak Harnack inequality holds and that Dirichlet problem on small open sets has a solution, then strong Harnack inequality holds.
4. More on potential theory
How can one characterize ${L}$-subharmonic functions ?
Use balls defined by Green’s function (${\Gamma}$-balls) to define inradius of a domain. Then a representation formula follows, based on the divergence theorem, with kernel expressible in terms of Green’s function. A mean value formula holds for ${L}$-harmonic functions on ${\Gamma}$-balls, with a correction term. The corresponding inequality characterizes sub-harmonicity. So does monotonicity of mean values on ${\Gamma}$-balls.
## Pierre Pansu’s slides on Differential Forms and the Hölder Equivalence Problem
Here is the completed set of slides
CIRMsep14_beamer
If you want to know more about the construction of horizontal submanifolds and how Gromov uses it to bound Hausdorff dimensions from below, see Pansu’s Trento notes (2005).
## Notes of Anton Thalmaiers’s lecture nr 4
1. Probabilistic content of Hörmander’s condition
1.1. Statement
Theorem 1 Suppose that the Lie algebra generated by ${A_1,\ldots,A_r}$ and the brackets ${[A_0,A_i]}$ fills ${T_xM}$. Then the bilinear form ${C_t(x)}$ on ${T_xM}$ is non-degenerate.
1.2. Proof
Let
$\displaystyle \begin{array}{rcl} G_s=span\{{X_s^{-1}}_* A_i\textrm{ at }x\,;\,i=1,\ldots,r\}\subset T_x M,\quad U^+_t=span\bigcup_{s\leq t}G_s. \end{array}$
By Blumenthal’s 0/1-law, ${U^+_t}$ is not random. We prove by contradiction that ${U_0^+=T_x M}$ (this will suffice to prove the theorem). Introduce
$\displaystyle \begin{array}{rcl} \sigma=\inf\{t>0\,;\,U_0^+\not=U_t^+\} \end{array}$
Let ${\xi\in T_x^*M}$ be orthogonal to ${U_0^+}$ (and thus to ${U_t^+}$ for ${t<\sigma}$). In particular, ${\xi}$ is orthogonal to all ${{X_s^{-1}}_* A_i}$, ${s<\sigma}$. Now, for all vectorfields ${V}$, ${{X_s^{-1}}_* V}$ satisfies (first line is Stratonovich, the second is Ito)
$\displaystyle \begin{array}{rcl} d({X_s^{-1}}_* V)&=&({X_s^{-1}}_* [A_0,V])_X \,dt+\sum ({X_s^{-1}}_* [A_i,V])_X \cdot dB_s^i\\ &=&({X_s^{-1}}_* [A_0,V])_X \,dt+\sum ({X_s^{-1}}_* [A_i,V])_X \,dB_s^i+\sum_j ({X_s^{-1}}_*[A_j, [A_j,V]])_X\,ds\\ \end{array}$
thus for all ${t<\sigma}$,
$\displaystyle \begin{array}{rcl} \langle\xi,({X_s^{-1}}_*A_i)_X)\rangle&=&\langle\xi,A_i(X)\rangle\\ &&+\int_{0}^{t}\langle\xi,({X_s^{-1}}_* [A_0,A_i])_X \,ds\rangle\\ &&+\int_{0}^{t}\sum_j\langle\xi,({X_s^{-1}}_* [A_j,A_i])_X \rangle dB_s^i+\int_{0}^{t}\sum_j\langle\xi,({X_s^{-1}}_*[A_j, [A_j,A_i]])_X\rangle\,ds \end{array}$
By uniqueness of the solution of an SDE, this implies that ${\langle\xi,({X_s^{-1}}_* [A_j,A_i])_X\rangle=0}$ for all ${i,j\geq 1}$ and ${s<\sigma}$. Replacing ${A_i}$ with ${[A_j,A_i]}$ shows that
$\displaystyle \begin{array}{rcl} \langle\xi,({X_s^{-1}}_* [A_j,[A_j,A_i]])_X\rangle=0, \end{array}$
and
$\displaystyle \begin{array}{rcl} \langle\xi,({X_s^{-1}}_* [A_0,A_i])_X\rangle=0 \end{array}$
Iterating the procedure shows orthogonality of ${\xi}$ with all iterated brackets, and thus ${\xi=0}$.
2. Probabilistic proof of hypoellipticity
Theorem 2 Assume that the ${A_i}$ and their derivatives satisfy suitable growth conditions. Assume that the bilinear form ${C_t(x)}$ is non-degenerate and
$\displaystyle \begin{array}{rcl} |C_t(x)|^{-1}\in L^p \end{array}$
for all ${p\geq 1}$. Then ${P_t(x,dy)=p_t(x,y)\,dy}$ with a smooth density ${p_t(x,y)}$.
The proof we are about to give is due to a large extent to Bismut, although many details are skipped in Bismut’s original paper. We use more elementary tools. We shall rely on the following standard fact.
2.1. Girsanov’s theorem
Let ${B}$ a Brownian motion on Euclidean space. Add an absolutely continuous process, i.e. ${d\hat{B}_t=dB_t+u_t\,dt}$ such that
$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}(\exp(\frac{1}{2}\int_{0}^{t}|u(s)|^2\,ds))<\infty. \end{array}$
${\hat{B_t}}$ is not a martingale any more, but this can be recovered by changing the probability measure.
Theorem 3 (Girsanov) ${\hat{B}_t}$ is a Brownian motion with respect to the measure ${\hat{P}}$ whose density with respect to ${P}$ is
$\displaystyle \begin{array}{rcl} G_t:=\frac{d\hat{P}}{dP}_{|\mathcal{F}_t}=\exp(-\int_{0}^{t}u_s\,dB_s-\frac{1}{2}\int_{0}^{t}|u(s)|^2\,ds). \end{array}$
In other words, if ${F}$ is a functional on the space of Brownian motions, then
$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}_{P}(F(B_.))=\mathop{\mathbb E}_{\hat{P}}(F(\hat{B}_.)). \end{array}$
2.2. A criterion for a measure to have a smooth density
We want to prove that ${P_t(x,dy)=p_t(x,\cdot)\,dvol}$ for ${t>0}$. We use the following criterion.
Lemma 4 Let ${\mu}$ be a probability measure on some manifold, viewed as a distribution. Assume that for all ${\alpha\in{\mathbb N}}$ and all test functions ${f}$,
$\displaystyle \begin{array}{rcl} |\langle f,D^\alpha \mu\rangle|\leq C_\alpha\,\|f\|_\infty. \end{array}$
Then ${\mu}$ has a smooth density.
2.3. Proof of Theorem 2
Fix ${x}$. Identify ${T_xM}$ with ${{\mathbb R}^n}$. We apply Girsanov's theorem to ${u_s=a_s\cdot\lambda}$ where ${a_s}$ takes values in ${T_xM\otimes{\mathbb R}^r}$ and ${\lambda\in T_x^*M}$. The modified flow is denoted by ${X^\lambda_t(x)}$. Let ${g}$ be a function to be specified later. Up to introducing the density ${G_t^\lambda}$, nothing changes, and
$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}(f(X^\lambda_t(x))g(B^\lambda_\cdot )G_t^\lambda) \end{array}$
does not depend on ${\lambda}$. Let us differentiate with respect to ${\lambda}$ at ${\lambda=0}$.
$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}((D_i f)(X_t(x))(\frac{\partial}{\partial \lambda_k}_{|\lambda=0}X_t^\lambda(x))^i g(B_.))=-\mathop{\mathbb E}(f(X_t(x))\frac{\partial}{\partial \lambda_k}_{|\lambda=0}(g(B^\lambda_.)G_t^\lambda))) \end{array}$
Remember that SDE can be formally differentiated with respect to a parameter. Notation: ${\frac{\partial}{\partial \lambda_k}_{|\lambda=0}X_t^\lambda(x))^i =(\partial X_t(x))_{ik}}$. Get
$\displaystyle \begin{array}{rcl} \partial X_t(x)={(X_t)}_*\int_{0}^{t}(X_s^{-1}A)_X u_s\,ds. \end{array}$
This suggests choosing
$\displaystyle \begin{array}{rcl} a_s=(({X_s^{-1}}_* A)_X)^*:T_x^*M\rightarrow{\mathbb R}^r. \end{array}$
With this choice,
$\displaystyle \begin{array}{rcl} \partial X_t(x)={(X_t)}_*C_t(x). \end{array}$
By assumption, ${C_t(x)}$ is invertible, so we take
$\displaystyle \begin{array}{rcl} g(B^*_.)=(C_t(x)^{-1}({X_t^{-1}}_*)^{-1})_{kj}\gamma(B^\lambda_.), \end{array}$
where ${\gamma}$ is to be specified later. This yields
$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}((D_j f)(X_t(x))\gamma(B_.))=-\mathop{\mathbb E}(f(X_t(x))H_j(\gamma)), \end{array}$
for some rather complicated expression ${H_j(\gamma)}$. Iteration gives
$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}((D_i D_j D_k f)(X_t(x)))=-\mathop{\mathbb E}(f(X_t(x))H_k(H_j(H_i(1)))), \end{array}$
from which we get the estimate
$\displaystyle \begin{array}{rcl} |\mathop{\mathbb E}((D_i D_j D_k f)(X_t(x)))|\leq\|f\|_{\infty}\|\cdots H_k(H_j(H_i(1)))\|_{L^1} \end{array}$
The right hand side involves only polynomial expressions, except ${C_t(x)^{-1}}$ and its derivatives with respect to ${\lambda}$. These have to be computed and estimated too. Then the Lemma applies, it shows that the distribution of ${X_t(x)}$ has a smooth density.
3. Subjects I could not cover
There was no time to treat
1. the short time asymptotics of the heat kernel,
2. bounds on the lifetime of Brownian motion (differentiating ${d(x,X_t(x))}$, leads to the Laplacian of the distance function, and to Ricci curvature).
3. Bismut’s interpolation between the geodesic flow and an hypoelliptic diffusion.
There is more that probability theory can do for sub-Riemannian geometry and hypoelliptic PDE’s.
## Notes of Anton Thalmaier’s lecture nr 3
1. Stochastic flows of diffeomorphisms
We continue our study of SDE ${dX=A_0(X)dt+\sum A_i(X)\cdot dB^i}$. Up to now, the starting point ${x}$ was fixed. Now we exploit the dependance on ${x}$.
1.1. Random continuous paths of diffeomorphisms
Let us introduce the random set ${M_t(\omega)=\{x\in M\,;\,\zeta(x)(\omega)>t\}}$ of starting points whose trajectory is still alive at time ${t}$. Then
• ${M_t(\omega)}$ is open (in fact, the lifetime ${\zeta(\cdot)(\omega)}$ is lower semi-continuous in ${x}$).
• ${X_t(\cdot)(\omega):M_t(\omega)\rightarrow R_t(\omega)}$ is a diffeomorphism onto an open subset of ${M}$.
• ${s\mapsto X_s(\cdot)(\omega)}$ is continuous: ${[0,t]\rightarrow C^{\infty}(M_t(\omega),M)}$.
Furthermore, under mild growth conditions on vectorfields and their derivatives (for instance, if ${M}$ is compact), ${X_t(\cdot)(\omega)\in Diffeo(M)}$ for all ${t}$.
Consider the tangent flow ${U={X_t}_*}$ on ${TM}$. It solves the formally differentiated SDE
$\displaystyle \begin{array}{rcl} dU=\sum (DA_i)_X U\cdot dZ^i. \end{array}$
1.2. Crucial observation
Let us transport a vectorfield ${V}$ under our stochastic flow. We get a random vectorfield ${{X_t}^{-1}_{*}V}$. This means that, for a test function ${f}$,
$\displaystyle \begin{array}{rcl} ({X_t}^{-1}_{*}V)(f)=(V(f\circ X_t^{-1}))\circ X_t. \end{array}$
Maillavin’s covariance matrix is defined as follows. For ${t>0}$,
$\displaystyle \begin{array}{rcl} C_t(x)=\sum_{i=1}^r\int_{0}^{t}({X_s}^{-1}_{*}A_i)_X\otimes({X_s}^{-1}_{*}A_i)_X\, ds. \end{array}$
This is a random smooth section of ${TM\otimes TM}$ over ${M_t}$. We shall see later that the condition we need to make this nondegenerate is Hörmander’s condition.
On may view
$\displaystyle \begin{array}{rcl} ({X_s}^{-1}_{*}A)_X:{\mathbb R}^r \rightarrow T_X M \end{array}$
as a linear map from ${{\mathbb R}^r}$ to ${T_X M}$. Its adjoint is a linear map from ${T_X^* M}$ to ${{\mathbb R}^r}$. Then ${C_t(x)}$ may be viewed as an endomorphism of ${T_x M}$.
Lemma 1 The SDE satisfied by ${{X_t}^{-1}_{*}V}$ is
$\displaystyle \begin{array}{rcl} d({X_t}^{-1}_{*}V)=\sum_{i=0}^{r}({X_t}^{-1}_{*}[A_i,V])_X \cdot dZ^i. \end{array}$
In particular, if ${V}$ commutes with vectorfields ${A_i}$, ${{X_t}^{-1}_{*}V=V}$.
2. Stochastic flows and hypoellipticity
We assume that all constant coefficient combinations of the ${A_i}$ are complete. The flow defines two canonical measures,
• The distribution of ${X_t(x)}$, ${P_t(x,dy)=P\{X_t(x)\in dy\}}$,
• Green’s measure ${G_\lambda(x,dy)=\int_{0}^{\infty}P_t(x,dy)\,dt}$.
Let us study the following Dirichlet boundary problem
$\displaystyle \begin{array}{rcl} -Lu+ku&=&f \textrm{ on }D,\\ u_{\partial D}&=&\phi. \end{array}$
The solution takes the following form (Feynman-Kac formula).
$\displaystyle \begin{array}{rcl} u(x)=\mathop{\mathbb E}(\phi(X_{\tau_D})\exp(-\int_{0}^{\tau_D}k(X_s)\,ds)+\int_{0}^{\tau_D}f(X_s)\exp(-\int_{0}^{s}k(X_r)\,dr)\,ds). \end{array}$
2.1. Hörmander’s condition
Question: When do ${P_t(x,dy)}$ and ${G_\lambda(x,dy)}$ have a density ?
Let
• ${\mathcal{L}}$ denote the Lie algebra generated by the vectorfield ${A_i}$,
• ${\mathcal{B}}$ the Lie algebra generated by ${A_1,\ldots,A_r}$ only,
• ${\mathcal{J}}$ by ${A_1,\ldots,A_r}$ and brackets ${[A_0,A_i]}$,
• ${\hat{\mathcal{L}}}$ by ${A_0+\partial_t}$ and ${A_1,\ldots,A_r}$ on ${M\times{\mathbb R}}$.
Hörmander’s theorem states
• hypoellipticity of ${L}$ under ${\mathcal{L}(x)=T_xM}$,
• hypoellipticity of ${L+\partial_t}$ under ${\hat{\mathcal{L}}(x)=T_{x,t}M\times{\mathbb R}}$.
It follows that
• Under ${\mathcal{L}(x)=T_xM}$, ${G_\lambda(x,dy)=g_\lambda(x,y)\,dy}$,
• Under ${\hat{\mathcal{L}}(x)=T_{x,t}M\times{\mathbb R}}$, ${P_t(x,dy)=p_t(x,y)\,dy}$,
where the densities are smooth.
2.2. A probabilistic proof of hypoellipticity ?
In 1970, in his Kyoto lectures, Paul Malliavin proposed a toolbox to prove this, called Malliavin calculus. This calculus deals with infinite dimensional path spaces.
Instead, I will describe a more direct route. The existence of smooth densities ${g_\lambda}$ and ${p_t}$ in turn implies hypoellipticity, so it suffices to prove this.
Theorem 2 Assume that the ${A_i}$ and their derivatives satisfy suitable growth conditions. Assume that the bilinear form ${C_t(x)}$ is non-degenerate and
$\displaystyle \begin{array}{rcl} |C_t(x)|^{-1}\in L^p \end{array}$
for all ${p\geq 1}$. Then ${P_t(x,dy)=p_t(x,y)\,dy}$ with a smooth density ${p_t(x,y)}$.
## Notes of Nicola Garofalo’s lecture nr 4
1. The isoperimetric problem
I want to show how PDE results can be used to solve geometric problems.
1.1. The isoperimetric inequality
I will prove the isoperimetric inequality in Carnot groups,
$\displaystyle \begin{array}{rcl} |E|^{\frac{Q-1}{Q}}\leq\mathrm{const.}\,|\partial E|. \end{array}$
It has lots of applications, see the conference in Paris at the end of september.
1.2. Doubling metric spaces
A metric space ${S}$ is doubling if it admits a Borel measure ${\nu}$ such that for all balls, ${\nu(B(x,2r))\leq C_1\,\nu(B(x,r))}$. One can define a dimension by ${Q=\log_2(C_1)}$.
Exercise: Prove that this implies ${\nu(B(x,tr))\geq \frac{1}{C_1}\,t\,\nu(B(x,r))}$ for all ${t>1}$.
1.3. Weak ${L^p}$ spaces
The weak (Marcinkiewicz) ${L^p}$ space, denoted by ${L^{p,\infty}}$, is the set of functions ${f}$ such that
$\displaystyle \begin{array}{rcl} \sup_{t>0}t\,|\{x\,;\,|f(x)|>t\}|^{1/p}<\infty. \end{array}$
It contains ${L^p}$ (Cavalieri’s principle) strictly. For instance, ${f(x)=\frac{1}{|x|^2}}$ belongs to ${L^{n/2,\infty}({\mathbb R}^n)}$ but not to ${L^{n/2}({\mathbb R}^n)}$. The standard operators of analysis often fail to send ${L^p}$ to ${L^q}$, but send ${L^p}$ to weak ${L^q}$. The loss is not so serious since Marcinkiewicz’ interpolation theorem tells us that interpolating ${L^p}$ and weak ${L^p}$ spaces gives ${L^p}$ spaces.
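For instance, here is the short computation behind the example above (with ${\omega_n}$ the volume of the unit ball in ${{\mathbb R}^n}$): for ${f(x)=|x|^{-2}}$,
$\displaystyle \begin{array}{rcl} |\{x\,;\,|x|^{-2}>t\}|=|B(0,t^{-1/2})|=\omega_n\,t^{-n/2},\qquad\textrm{hence}\quad t\,|\{x\,;\,|f(x)|>t\}|^{2/n}=\omega_n^{2/n}, \end{array}$
which is bounded in ${t}$, so ${f\in L^{n/2,\infty}({\mathbb R}^n)}$; on the other hand ${\int_{{\mathbb R}^n}|x|^{-n}\,dx}$ diverges both at the origin and at infinity, so ${f\notin L^{n/2}({\mathbb R}^n)}$.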
1.4. Fractional integration
The Riesz fractional integration operator ${I_\alpha}$ is
$\displaystyle \begin{array}{rcl} I_\alpha f(x)=\int_{B}f(y)\frac{d(x,y)^\alpha}{\nu((B(x,d(x,y))))}\,dy. \end{array}$
Theorem 1 If ${0<\alpha<Q}$, then ${I_\alpha}$ is bounded ${L^1(B)\rightarrow L^{q,\infty}(B)}$, provided ${q=\frac{Q}{Q-\alpha}}$. Moreover, its norm is at most
$\displaystyle \begin{array}{rcl} C_2\frac{R}{|B|^{1/Q}}. \end{array}$
In fact, the theorem holds for doubling metric spaces.
Theorem 2 (Nagel-Stein-Wainger 1984) Carnot manifolds are locally doubling.
1.5. Fundamental solutions, again
In this section, we deal with a bracket-generating family of vectorfields ${X_j}$, the corresponding sub-Laplacian ${L=\sum X_j^*X_j}$, and the correponding gradient
$\displaystyle \begin{array}{rcl} |\nabla u|=(\sum |X_i u|^2)^{1/2}. \end{array}$
Everything is local.
Theorem 3 (NSW, Sanchez-Calle 1984) There exists a fundamental solution ${\Gamma}$ of ${L}$, it satisfies
$\displaystyle \begin{array}{rcl} 0\leq \Gamma(x,y)\leq C\,\frac{d(x,y)^2}{|B(x,d(x,y))|}. \end{array}$
Furthermore,
$\displaystyle \begin{array}{rcl} |\nabla\Gamma(x,y)|\leq C\,\frac{d(x,y)}{|B(x,d(x,y))|}. \end{array}$
An integration by parts gives
Corollary 4 (Citti-Garofalo-Lanconelli) For compactly supported functions ${u}$,
$\displaystyle \begin{array}{rcl} |u(x)|\leq C\,I_1(|\nabla u|)(x). \end{array}$
Indeed,
$\displaystyle \begin{array}{rcl} |u(x)|\leq \int|\nabla u(y)||\nabla\Gamma(x,y)|dy\leq C\,\int|\nabla u(y)|\frac{d(x,y)}{|B(x,d(x,y))|}\,dy. \end{array}$
Corollary 5 For compactly supported functions ${u}$,
$\displaystyle \begin{array}{rcl} \|I_1(|\nabla u|)\|_{L^{q,\infty}}\leq C\,\|\nabla u\|_{L^1}. \end{array}$
This easily follows from previous results. Combining the last two corollaries yields
Theorem 6 For ${q=\frac{Q}{Q-1}}$, for compactly supported functions ${u}$,
$\displaystyle \begin{array}{rcl} \|u\|_{L^{q,\infty}}\leq C\,\frac{R}{|B|^{1/Q}}\|\nabla u\|_{L^1}. \end{array}$
1.6. From weak to strong Sobolev inequality
Fleming and Rishel observed in 1971 that, thanks to the coarea formula, the weak Sobolev inequality implies the strong one. This works only for ${p=1}$, the geometric case, which is equivalent to the isoperimetric inequality.
1.7. Perimeter
To give a precise statement of the isoperimetric inequality, we need to define perimeter. The following definition, in case ${X_j=\partial_j}$, is due to de Giorgi.
The norm of a vectorfield ${\xi}$ is ${(\sum a_i^2)^{1/2}}$ if ${\xi=\sum a_i X_i}$, and ${+\infty}$ if ${\xi\notin\mathrm{span}(X_1,\ldots,X_m)}$.
The total variation of an ${L^1}$ function ${u}$ is
$\displaystyle \begin{array}{rcl} Var(u,\Omega):=\sup\{\int_{\Omega}u\,div(\xi)\,;\,\xi\textrm{ vector field },\|\xi\|_{L^{\infty}}\leq 1\}. \end{array}$
The space of functions of bounded variation ${BV(\Omega)}$ has norm
$\displaystyle \begin{array}{rcl} \|u\|_{BV(\Omega)}:=\|u\|_1 + Var(u,\Omega). \end{array}$
Note that ${W^{1,1}}$ (${L^1}$ functions with ${X_i u\in L^1}$) is strictly contained in ${BV}$. It does not contain indicators ${1_E}$ of sets ${E}$, for instance, although they are often in ${BV}$.
Definition 7
$\displaystyle P(E,\Omega)=Var(1_E;\Omega).$
In ${{\mathbb R}^n}$, for smooth sets, one gets back the surface measure.
1.8. Proof of the isoperimetric inequality
Theorem 8
$\displaystyle \begin{array}{rcl} |E|^{\frac{Q-1}{Q}}\leq C\,|B|^{-1/Q}P(E,B). \end{array}$
\proof
Let ${E}$ be a smooth domain. The idea is to apply the weak Sobolev inequality to the indicator ${u=1_E}$. ${P(E,\Omega)}$ plays the role of ${\|\nabla u\|_{L^1}}$ on the right hand side. On the left hand side,
$\displaystyle \begin{array}{rcl} |\{x\,;\,|u(x)|>t\}|=|E| \textrm{ iff }0\leq t<1, \end{array}$
hence
$\displaystyle \begin{array}{rcl} \|u\|_{L^{q,\infty}}=|E|^{1/q}. \end{array}$
To justify replacement of perimeter with ${\|\nabla u\|_{L^1}}$, approximate ${1_E}$ with smooth functions ${u}$ and apply the coarea formula as in next subsection.
1.9. Proof of the strong Sobolev inequality
Theorem 9
$\displaystyle \begin{array}{rcl} \|u\|_{L^{\frac{Q}{Q-1}}(B)}\leq C\,|B|^{-1/Q}\|\nabla u\|_{L^1(B)}. \end{array}$
\proof
Assume ${u}$ is smooth and compactly supported. By Sard’s theorem, for a.e. ${t}$, ${E_t=\{u>t\}}$ is a smooth manifold. In general, Federer’s coarea formula states that, for ${g}$ a Lipschitz function,
$\displaystyle \begin{array}{rcl} \int_{{\mathbb R}^n}f|D g|=\int_{{\mathbb R}}(\int_{\{g=t\}}f\,d\mathcal{H}^{n-1})\,dt \end{array}$
We apply it to ${g=u}$ and ${f=\frac{|\nabla u|}{|Du|}\geq 1}$.
$\displaystyle \begin{array}{rcl} \int_{B}|\nabla u|\geq\int_{{\mathbb R}}(\int_{\partial E_t}\,d\mathcal{H}^{n-1})\,dt=\int_{{\mathbb R}}P(E_t,B)\,dt \end{array}$
Finally, express the ${L^{\frac{Q}{Q-1}}}$-norm of ${u}$ as an integral,
$\displaystyle \begin{array}{rcl} (\int_{B}|u|^\frac{Q}{Q-1})^{\frac{Q-1}{Q}}&=&(\frac{Q}{Q-1}\int_{0}^{\infty}t^{\frac{1}{Q-1}}|E_t|\,dt)^{\frac{Q-1}{Q}}\\ &\leq& \int_{0}^{\infty}|E_t|^{\frac{Q-1}{Q}}\,dt, \end{array}$
and by the isoperimetric inequality (Theorem 8) the right hand side is at most ${C\,|B|^{-1/Q}\int_{0}^{\infty}P(E_t,B)\,dt\leq C\,|B|^{-1/Q}\int_{B}|\nabla u|}$, which concludes the proof. We have used the easy fact that, for every nonincreasing function ${V(t)\geq 0}$ and ${a>1}$,
$\displaystyle \begin{array}{rcl} F(x)=(\int_{0}^{x}V(t)^{1/a}\,dt)^a-a\int_{0}^{x}t^{a-1}V(t)\,dt \end{array}$
is a nondecreasing function of ${x}$ (differentiate !) and thus nonnegative.
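For completeness, here is the differentiation hinted at above; the monotonicity of ${V}$ enters only in the last inequality:
$\displaystyle \begin{array}{rcl} F'(x)=a\,(\int_{0}^{x}V^{1/a})^{a-1}V(x)^{1/a}-a\,x^{a-1}V(x)\geq a\,(x\,V(x)^{1/a})^{a-1}V(x)^{1/a}-a\,x^{a-1}V(x)=0, \end{array}$
since ${V}$ nonincreasing gives ${\int_{0}^{x}V^{1/a}\geq x\,V(x)^{1/a}}$.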
## Notes of Ludovic Rifford’s lecture nr 4
Open problems
1. The Sard conjecture
2. Regularity of geodesics
3. Small balls
1. The Sard conjecture
1.1. Statement
Theorem 1 (Morse 1939 for ${p=1}$, Sard 1942) If ${f:{\mathbb R}^d\rightarrow{\mathbb R}^p}$ is of class ${C^k}$,
$\displaystyle \begin{array}{rcl} k\geq\max\{1,d-p+1\}\quad \Rightarrow\quad \mathcal{L}^p(\textrm{critical values})=0, \end{array}$
and this is sharp (Whitney).
Does this theorem generalize to the endpoint map of a smooth control system ?
Conjecture. The set of all positions at time ${t}$ of singular paths starting at ${x}$ has measure zero.
Remark. There are examples of smooth (even polynomial) functions on ${L^2}$ which do not satisfy Sard’s theorem. The only infinitesimal version of Sard’s theorem is Smale’s for Fredholm maps.
The conjecture is open for Carnot groups (which may be harder).
1.2. Positive cases
Fat distributions have no singular curves but constants.
For rank two distributions in dimension 3, singular curves are contained in the Martinet surface, which is known to be countably 2-rectifiable. Conjecturally, the singular values of the endpoint map have Hausdorff dimension ${\leq 1}$. Generically, the horizontal curves on the Martinet surface form a foliation whose singularities are either saddles or foci. At foci, the length of the leaves is infinite, so one can ignore them.
1.3. The minimizing Sard conjecture
Let ${S}$ denote the set of points joined to ${x}$ by a minimizing geodesic which is singular. Let ${S_s\subset S}$ denote the set of points joined to ${x}$ by a minimizing geodesic which is singular and not the projection of a normal extremal.
The following partial result turns out to be rather easy.
Proposition 2 (Rifford-Trélat, Agrachev) ${S}$ has empty interior.
Lemma 3 Assume that there is a function ${\phi:M\rightarrow{\mathbb R}}$ such that
1. ${\phi}$ is differentiable at ${y}$,
2. ${\phi(y)=d(x,y)^2}$ and ${d(x,y)^2>\phi(z)}$ for all neigboring ${z\not=y}$.
Then there is a unique minimizing geodesic between ${x}$ and ${y}$, which is the projection of a normal extremal ${\psi}$ such that ${\psi(1)=(y,D_y\phi)}$.
\proof
Let ${v}$ be the control of some minimizing geodesic. For ${u\in L^2}$ close to ${v}$,
$\displaystyle \begin{array}{rcl} \|u\|_{L^2}^2 =C(u)\geq e(x,E^x(u)), \end{array}$
with equality at ${u=v}$. By assumption, ${e(x,E^x(u))\geq\phi(E^x(u))}$, with equality at ${u=v}$. Therefore ${v}$ minimizes ${C(u)-\phi(E^x(u))}$ in a neighborhood of ${v}$, and it is locally unique. So there is ${p\in T^*_y M}$ such that ${p\cdot D_vE^x=D_vC}$, ${v}$ is normal, q.e.d.
\proof
of Proposition. Any continuous function has a smooth (even constant) support function at a dense set of points, q.e.d.
Question. Can one improve this to full measure ?
2. Regularity of minimizers
Projections of normal extremals are smooth.
Question. Are abnormal minimizing geodesics of class ${C^1}$ ?
2.1. Partial results
Theorem 4 (Monti-Leonardi) Consider an equiregular (${Lie^k}$ all have constant dimension) distribution. Assume that ${[Lie^k,Lie^\ell]\subset Lie^{k+\ell+1}}$. Then curves with a corner cannot be minimizing.
Theorem 5 (Sussmann) If the data are real analytic, singular controls are real analytic on an open dense subset of their interval of definition.
This comes from sub-analytic geometry.
3. Small balls
Question. Are small spheres homeomorphic to spheres ?
It is true in Carnot groups.
Yuri Baryshnikov claims that the answer is yes in the contact case, but the proof does not seem to be correct.
In the absence of abnormal geodesics, almost every sphere at ${x}$ is a Lipschitz submanifold.
## Notes of Nicola Garofalo’s lecture nr 3
1. Fundamental solutions
Exercise (related to the Hopf-Rinow theorem): compute the sub-Riemannian metric associated to the vectorfield ${X=(1+x^2)\partial_x}$. Observe that balls are non compact, i.e. the metric is not complete.
2. Existence
Theorem 1 (Folland) On a Carnot group, all sub-Laplacians ${\Delta_H}$ have a unique fundamental solution, i.e. a smooth function ${\Gamma}$ on ${G\setminus\{e\}}$ such that
1. ${\Delta_H \Gamma=\delta}$, Dirac distribution at the origin,
2. ${\lim_{|g|\rightarrow\infty} \Gamma(g)=0}$.
It is homogeneous of degree ${2-Q}$ under dilations.
\proof of homogeneity. Consider ${v=\Gamma\circ\delta_\lambda-\lambda^{2-Q}\,\Gamma}$. Then ${\Delta_H v=0}$. By hypoellipticity, ${v}$ is smooth and classically harmonic.
By Bony's maximum principle, since ${v}$ tends to 0 at infinity, ${v=0}$. Alternatively, use Liouville's theorem.
2.1. The case of groups of Heisenberg type
Charles Fefferman, studying several complex variables, suggested the form that the fundamental solution should take in the Heisenberg group. This was implemented by Folland and Kaplan.
Theorem 2 (Folland 1972, Kaplan 1981) Let ${G}$ be of Heisenberg type. The function
$\displaystyle \begin{array}{rcl} \Gamma(g)=\frac{C}{(|z|^4+16|t|^2)^{\frac{Q-2}{4}}} \end{array}$
is a fundamental solution of ${-\Delta_H}$. Here, ${C}$ is a suitable constant,
$\displaystyle \begin{array}{rcl} C^{-1}=m(Q-2)\int_{G}\frac{1}{((|z|^2+1)^2+16|t|^2)^{\frac{Q+2}{4}}}. \end{array}$
2.2. A Lemma
Lemma 3 If ${G}$ is of Heisenberg type,
1. ${\Delta_H(|t|^2)=\frac{k}{2}|z|^2}$,
2. ${|\nabla_H(|t|^2)|^2=|z|^2|t|^2}$,
3. ${\langle\nabla_H(|z|^2),\nabla_H(|t|^2)\rangle=0}$.
For this, use Baker-Campbell-Hausdorff to compute
$\displaystyle \begin{array}{rcl} z_j(g\exp(se_i))&=&z_j(g)+s\delta_{ij},\\ t_\ell(g\exp(se_i))&=&t_\ell(g)+\frac{s}{2}\langle[z,e_i],\epsilon_\ell\rangle. \end{array}$
Differentiating with respect to ${s}$ at ${s=0}$, this gives
$\displaystyle \begin{array}{rcl} X_i(z_j)(g)&=&\delta_{ij},\\ X_i(t_\ell)(g)&=&\frac{1}{2}\langle[z,e_i],\epsilon_\ell\rangle=\frac{1}{2}\langle J(\epsilon_\ell)z,e_i\rangle. \end{array}$
This leads rather easily to all 3 formulae.
2.3. Proof of the Folland-Kaplan Theorem
We see that ${\Gamma(g)=C\,\rho^{2-Q}}$ where ${\rho(g)=(|z|^4+16|t|^2)^{1/4}}$ is a gauge. Let us regularize it,
$\displaystyle \begin{array}{rcl} \rho_\epsilon(g)=((|z|^2+\epsilon^2)^2+16|t|^2)^{1/4}. \end{array}$
Then
$\displaystyle \begin{array}{rcl} |\nabla_H \rho_\epsilon|^2&=&\frac{|z|^2}{\rho_\epsilon^2},\\ \Delta_H\rho_\epsilon&=&\frac{Q-1}{\rho_\epsilon}|\nabla_H \rho_\epsilon|^2+\frac{m\epsilon^2}{\rho_\epsilon^3}. \end{array}$
Given an arbitrary function ${h:{\mathbb R}\rightarrow{\mathbb R}}$, differentiate ${v=h\circ\rho_\epsilon}$. Then apply it to ${h(t)=t^{2-Q}}$ and observe that this kills a term, yielding
$\displaystyle \begin{array}{rcl} \Delta_H v&=&\frac{m\epsilon^2}{\rho_\epsilon^3}h'(\rho_\epsilon)\\ &=&m(2-Q)\epsilon^2\rho_\epsilon^{-2-Q}\\ &=&-m(Q-2)\epsilon^2 v^{\frac{Q+2}{Q-2}}. \end{array}$
This equation is known as the CR Yamabe equation. This is the conformally invariant form of the sub-Laplacian. It indicates that ${v}$ is critical for the sub-Riemannian Sobolev inequality.
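For the record, the differentiation invoked above is the chain rule for the sub-Laplacian: for any smooth ${h}$,
$\displaystyle \begin{array}{rcl} \Delta_H(h\circ\rho_\epsilon)=h''(\rho_\epsilon)\,|\nabla_H\rho_\epsilon|^2+h'(\rho_\epsilon)\,\Delta_H\rho_\epsilon, \end{array}$
and for ${h(t)=t^{2-Q}}$ the two contributions involving ${|\nabla_H\rho_\epsilon|^2}$ cancel, leaving only the ${\frac{m\epsilon^2}{\rho_\epsilon^3}h'(\rho_\epsilon)}$ term.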
Observe that
$\displaystyle \begin{array}{rcl} \rho_\epsilon=\epsilon\delta_{\epsilon^{-1}}\circ\rho_1. \end{array}$
Thus
$\displaystyle \begin{array}{rcl} \Delta_H v&=&\epsilon^{-Q}\delta_{\epsilon^{-1}}\circ\Delta_H(\rho_1^{2-Q})\\ &=&-m(Q-2)\epsilon^{-Q}\delta_{\epsilon^{-1}}\circ v_1^{\frac{Q+2}{Q-2}}. \end{array}$
It turns out that ${v_1^{\frac{Q+2}{Q-2}}\in L^1(G)}$. So up to a multiplicative constant, ${\epsilon^{-Q}\delta_{\epsilon^{-1}}\circ v_1^{\frac{Q+2}{Q-2}}}$ converges to the Dirac distribution as ${\epsilon\rightarrow 0}$. Indeed, given a test function ${\phi}$,
$\displaystyle \begin{array}{rcl} \langle \rho^{2-Q},\Delta_H\phi\rangle&=&\langle v,\Delta_H\phi\rangle\\ &=&\lim_{\epsilon\rightarrow 0} \langle v,\Delta_H\phi\rangle\\ &=&\lim_{\epsilon\rightarrow 0} \langle\Delta_H v,\phi\rangle\\ &=&-m(Q-2)\lim_{\epsilon\rightarrow 0} \epsilon^{-Q}\langle\delta_{\epsilon^{-1}}\circ v_1^{\frac{Q+2}{Q-2}},\phi\rangle\\ &=&-m(Q-2)\lim_{\epsilon\rightarrow 0} \epsilon^{-Q}\langle v_1^{\frac{Q+2}{Q-2}},\phi\circ \delta_{\epsilon^{-1}}\rangle\\ &=&-m(Q-2)\phi(e)\int_G v_1^{\frac{Q+2}{Q-2}}. \end{array}$
2.4. The CR Yamabe problem
The problem: let ${M}$ be a compact strictly pseudoconvex CR manifold, find a choice of the contact form ${\theta}$, for which the Tanaka-Webster scalar curvature is constant.
This is a sub-Riemannian analogue of a problem posed in 1959 by Yamabe, and which has been solved (Yamabe, Trudinger, Aubin, Schoen).
Theorem 4 (Jerison-Lee 1990) The CR Yamabe problem is solvable when dim${(M)\geq 5}$ and ${M}$ is not locally CR equivalent to the round CR sphere.
After a decade, Gamara and Yaccoub, two students of Abbas Bahri, solved the problem when ${M}$ is CR equivalent to the CR round sphere. The 3-dimensional case was later completed by Gamara.
These cases non treated by Jerison and Lee are analogues of the Riemannian cases where the positive mass conjecture in general relativity plays a role. There have been recent progress along similar lines in CR geometry recently. Attend the relevant workshop this fall!
2.5. The sub-Riemannian Sobolev embedding theorem
Observe that
$\displaystyle \begin{array}{rcl} \int_{G}|\nabla_H v|^2=-\int_{G}v\Delta_H v=\int_{G}v^{\frac{2Q}{Q-2}}. \end{array}$
This is an equality case in a Sobolev type inequality. The Euclidean Sobolev inequality reads
$\displaystyle \begin{array}{rcl} (\int_{{\mathbb R}^n}|u|^{q})^{1/q}\leq S(\int_{{\mathbb R}^n}|\nabla u|^p)^{1/p}. \end{array}$
The numerology ${\frac{1}{p}-\frac{1}{q}=\frac{1}{n}}$ is forced by dilation invariance.
Theorem 5 (Folland-Stein 1975) In a Carnot group, let ${1<p<Q}$. There exists a constant ${S_q(G)}$ such that, for all smooth compactly supported functions ${u}$,
$\displaystyle \begin{array}{rcl} (\int_{G}|u|^{q})^{1/q}\leq S_q(G)(\int_{G}|\nabla u|^p)^{1/p}, \end{array}$
provided ${\frac{1}{p}-\frac{1}{q}=\frac{1}{Q}}$. |
# Linux hacks for the command-line
## Intermittently needed cheat codes
Things I forget how to do in Linux (Linux in particular, not unix-style command lines in general).
Some of these commands are supposed to be run sudo root, and any of them might turn your computer bad, and set it to obsessively stalking you and sewing shellfish into your curtains, or some other even worse consequence. I take no responsibility for that.
Kung Fury
## Open a file in the GUI from the command line
xdg-open filename.ext
Of course! Why did I not realise that open is spelled xdg-open? For the curious, xdg stands for “Expect Delays Googling”, because that is how you work out these unintuitive and unhelpful command names.
## SysRq
Linux Magic System Request Key Commands are commands you can run in the kernel via a key combination. They seem useful, and also like a massive security risk. I am not clear whether they are enabled by default on typical machines.
## What is stealing my keypress?
Annoying keyboard shortcut problems with your X application?
Using xlsclients -la, I found a list of X apps, including those running in the background. I started killing them; some of the process terminations made my Gnome session break down, but I eventually found that shutting down the skypeforlinux process made CTRL+ALT+SHIFT+D work for me.
## Must secure boot be disabled?
Apparently not. But it is onerous beyond plausible usefulness, without Microsoft signing kernel modules for you, unless you are working for some secret agency; in which case, call IT support.
## Rebuild all DKMS kernel modules
Missing some modules for some kernel version? Here is how to rebuild for all kernel versions:
ls /var/lib/initramfs-tools | \
sudo xargs -n1 /usr/lib/dkms/dkms_autoinstaller start
## Pinning the kernel version because of driver/module problems
I needed this a couple of times although AFAICT I should not have. I think compatible versions of something I do not want to know about didn’t match with my manually selected versions of some other thing I don’t want to need to care about. Sidestepping the issue by pinning grub default to boot the good kernel seemed to work.
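For the record, here is a minimal sketch of the GRUB route on a Debian/Ubuntu-style box. The kernel menu entry string below is only an example; list the entries on your own machine first and paste the one you actually want.

grep "menuentry '" /boot/grub/grub.cfg | cut -d"'" -f2
# then, with GRUB_DEFAULT=saved set in /etc/default/grub:
sudo grub-set-default "Advanced options for Ubuntu>Ubuntu, with Linux 5.15.0-86-generic"
sudo update-grub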
## Filesystem hacks
See Linux FS hacks.
## Linux audio
I am so sorry. Read Linux audio.
## Debian/Ubuntu package file ownership
Two options:
dpkg -S file
dlocate /path/to/file
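For example (the output line is roughly what to expect; the exact path dpkg reports can differ between releases):

dpkg -S /bin/ls
# coreutils: /bin/ls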
## Archive for March, 2011
### Building an English-to-Japanese name converter
Update: I made a Japanese Name Converter web site!
The Japanese Name Converter was the first Android app I ever wrote. So for me, it was kind of a “hello world” app, but in retrospect it was a doozy of a “hello world.”
The motivation for the app was pretty simple: what was something I could build to run on an Android phone that 1) lots of people would be interested in and 2) required some of my unique NLP expertise? Well, people love their own names, and if they’re geeks like me, they probably think Japanese is cool. So is there some way, I wondered, of writing a program that could automatically transliterate any English name into Japanese characters?
The problem is not trivial. Japanese phonemics and phonotactics are both very restrictive, and as a result any loanword gets thoroughly mangled as it passes through the gauntlet of Japanese sound rules. Some examples are below:
beer = biiru (/bi:ru/)
heart = haato (/ha:to/)
hamburger = hanbaagaa (/hanba:ga:/)
strike (i.e. in baseball) = sutoraiku (/sutoraiku/)
volleyball = bareebooru (/bare:bo:ru/)
helicopter = herikoputaa (/herikoputa:/)
English names go through the same process:
Nolan = nooran (/no:ran/)
Michael = maikeru (/maikeru/)
Stan = sutan (/sutan/)
(Note for IPA purists: the Japanese /r/ is technically an alveolar flap, and therefore would be represented phonetically as [ɾ]. The /u/ is an unrounded [ɯ].)
Whole lotta changes going on here. To just pick out some of the highlights, notice that:
1. “l” becomes “r” – Japanese, like most non-Indo-European languages, makes no distinction between the two.
2. Japanese phonotactics only allow one coda – “n.” So no syllables can end on any consonant other than “n,” and no consonant clusters are allowed except for those starting with “n.” All English consonant clusters have to be epenthesized with vowels, usually “u” but sometimes “i.”
3. English syllabic “r” (aka the rhotacized schwa, sometimes written [ɚ]) becomes a double vowel /a:/. Yep, they use the British, r-less pronunciation. Guess they didn’t concede everything to us Americans just because we occupied ’em.
All this is just what I’d have to do to convert the English names into romanized Japanese (roomaji). I still haven’t even mentioned having to convert this all into katakana, i.e. the syllabic alphabet Japanese uses for foreign words! Clearly I had my work cut out for me.
### Initial ideas
The first solution that popped into my head was to use Transformation-Based Learning (aka the Brill tagger). My idea was that you could treat each individual letter in the English input as the observation and the corresponding sequence in the Japanese output as the class label, and then build up rules to transform them based on the context. It seemed reasonable enough. Plus, I would benefit from the fact that the output labels come from the same set as the input labels (if I used English letters, anyway). So for instance, “nolan” and “nooran” could be aligned as:
n:n
o:oo
l:r
a:a
n:n
Three of the above pairs are already correct before I even do anything. Off to a good start!
Plus, once the TBL is built, executing it would be dead simple. All of the rules just need to be applied in order, amounting to a series of string replacements. Even the limited phone hardware could handle it, unlike what I would be getting with a Markov model. Sweet! Now what?
Well, the first thing I needed was training data. After some searching, I eventually found a calligraphy web site that listed about 4,000 English-Japanese name pairs, presumably so that people could get tattoos they’d regret later. After a little wget action and some data massaging, I had my training data.
By the way, let’s take a moment to give a big hand to those unsung heroes of machine learning – the people who take the time to build up huge, painstaking corpora like these. Without them, nothing in machine learning would be possible.
### First Attempt
My first attempt started out well. I began by writing a training algorithm that would generate rules (such as “convert X to Y when preceded by Z”) or (“convert A to B when followed by C”) from each of the training pairs. Each rule was structured as follows:
Antecedent: a single character in the English string
Consequence: any substring in the Japanese string (with some limit on max substring length)
Condition(s): none and/or following letter and/or preceding letter and/or is a vowel etc.
Then I calculated the gain (in terms of total Levenshtein, or edit distance improvement across the training data) for each rule. Finally, ala Brill, it was just a matter of taking the best rule at each iteration, applying it to all the strings, and continuing until some breaking point. The finished model would just be the list of rules, applied in order.
Unfortunately, this ended up failing because the rules kept mangling the input data to the point where the model was unable to recover, since I was overwriting the string with each rule. So, for instance, the first rule the model learned was “l” -> “r”. Great! That makes perfect sense, since Japanese has no “l.” However, this caused problems later on, because the model now had no way of distinguishing syllable-final “l” from “r,” which makes a huge difference in the transliteration. Ending English “er” usually becomes “aa” in Japanese (e.g. “spencer” -> “supensaa”), but ending “el” becomes “eru” (e.g. “mabel” -> “meeberu”). Since the model had overwritten all l’s with r’s, it couldn’t tell the difference. So I scrapped that idea.
### Second Attempt
My Brill-based converter was lightweight, but maybe I needed to step things up a bit? I wondered if the right approach here would be to use something like a sequential classifier or HMM. Ignoring the question of whether or not that could even run on a phone (which was unlikely), I tried to run an experiment to see if it was even a feasible solution.
The first problem I ran into here was that of alignment. With the Brill-based model, I could simply generate rules where the antecedent was any character in the English input and the consequence was any substring of the Japanese input. Here, though, you’d need the output to be aligned with the input, since the HMM (or whatever) has to emit a particular class label at each observation. So, for instance, rather than just let the Brill algorithm discover on its own that “o” –> “oo” was a good rule for transliterating “nolan” to “nooran” (because it improved edit distance), I’d need to write the alignment algorithm myself before inputting it to the sequential learner.
I realized that what I was trying to do was similar to parallel corpus alignment (as in machine translation), except that in my case I was aligning letters rather than words. I tried to brush up on the machine translation literature, but it mostly went over my head. (Hey, we never covered it in my program.) So I tried a few different approaches.
I started by thinking of it like an HMM, in which case I’m trying to predict the the output Japanese sequence (j) given the input English sequence (e), where I could model the relationship like so:
$P(j|e) = \frac{P(e|j) P(j)}{P(e)}$ (by Bayes’ Law)
And, since we’re just trying to maximize P(j|e), we can simplify this to:
$argmax(P(j|e))\hspace{3 mm}\alpha\hspace{3 mm}argmax(P(e|j) P(j))$
Or, in English (because I hate looking at formulas too): The probability of a Japanese string given an English string is proportional to the probability of the English string given the Japanese string multiplied by the probability of the Japanese string.
But I’m not building a full HMM – I’m just trying to figure out the partitioning of the sequence, i.e. the $P(e|j)$ part. So I modeled that as:
$P(e|j) = P(e_0|j_0) P(e_1|j_1) ... P(e_n|j_n)$
Or, in English: The probability of the English string given the Japanese string equals the product of all the probabilities of each English character given the probability of its corresponding Japanese substring.
Makes sense so far, right? All I’m doing is assuming that I can multiply the probabilities of the individual substrings together to get the total probability. This is pretty much the exact same thing you do with Naive Bayes, where you assume that all the words in a document are conditionally independent and just multiply their probabilities together.
And since I didn’t know $j_0$ through $j_n$ (i.e. the Japanese substring partitionings, e.g n|oo|r|a|n), my task boiled down to just generating every possible partitioning, calculating the probability for each one, and then taking the max.
But how to model $P(e_n|j_n)$, i.e. the probability of an English letter given a Japanese substring? Co-occurrence counts seemed like the most intuitive choice here – just answering the question “how likely am I to see this English character, given the Japanese substring I’m aligning it with?” Then I could just take the product of all of those probabilities. So, for instance, in the case of “nolan” -> “nooran”, the ideal partitioning would be n|oo|r|a|n, and to figure that out I would calculate count(n,n)/count(n) * count(o,oo)/count(o) * count(l,r)/count(l) * count(a,a)/count(a) * count(n,n)/count(n), which should be the highest-scoring partitioning for that pair.
But since this formula had a tendency to favor longer Japanese substrings (because they are rarer), I leveled the playing field a bit by also multiplying the conditional probabilities of all the substrings of those substrings. (Edit: only after reading this do I realize my error was in putting count(e) in the denominator, rather than count(j). D’oh.) There! Now I finally had my beautiful converter, right?
Well, the pairings of substrings were fine – my co-occurrence heuristic seemed to find reasonable inputs and outputs. The final model, though, failed horribly. I used Minorthird to build up a Maximum Entropy Markov Model (MEMM) trained on the input 4,000 name pairs (with Minorthird’s default Feature Extractor), and the model performed even worse than the Brill one! The output just looked like random garbage, and didn’t seem to correspond to any of the letters in the input. The main problem appeared to be that there were just too many class labels, since an English letter in the input could correspond to many Japanese letters in the output.
For instance, the most extreme case I found is the name “Alex,” which transliterates to “arekkusu.” The letter “x” here corresponds to no less than five letters in the output – “kkusu.” Now imagine how many class labels there must have been, if “kkusu” was one of them. Yeah, it was ridiculous. Classification tends to get dicey when you have more than ten labels. I’d argue that even three is pushing it, since the sweet spot is really two (binary classification).
Also, it was at this point that I realized that trying to do MEMM decoding on the underpowered hardware of a phone was pretty absurd as it is. Was I really going to bundle the entire Minorthird JAR with my app and just hope it would work without throwing an OutOfMemoryError?
### Third Attempt
So for my third attempt, I went back to the drawing board with the Brill tagger. But this time, I had an insight. Wasn’t my whole problem before that the training algorithm was destroying the string at each step? Why not simply add a condition to the rule that referenced the original character in the English string? For instance, even if the first rule converts all l’s to r’s, the model could still “see” the original “l,” and thus later on down the road it could discover useful rules like ‘convert “er” to “eru” when the original string was “el”, but convert “er” to “aa” when the original string was “er”‘. I immediately noticed a huge difference in the performance after adding this condition to the generated rules.
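To make the insight concrete, here is a minimal sketch in Java (not the app's actual code; the rule set is a toy one, and the real learned rules also carry context conditions on neighboring letters). Each position keeps its original English letter next to its current output, so a later rule can still tell an original "l" from an original "r" even after an earlier rule has rewritten the output.

import java.util.ArrayList;
import java.util.List;

public class TblSketch {
    // Each position keeps the original English letter (never overwritten)
    // alongside its current transliteration output.
    static class Cell {
        final char original;
        String output;
        Cell(char c) { original = c; output = String.valueOf(c); }
    }

    // Toy rule: rewrite the output at every position whose ORIGINAL letter matches origCond.
    static void apply(List<Cell> word, char origCond, String to) {
        for (Cell c : word) {
            if (c.original == origCond) c.output = to;
        }
    }

    public static void main(String[] args) {
        List<Cell> word = new ArrayList<>();
        for (char c : "nolan".toCharArray()) word.add(new Cell(c));

        apply(word, 'l', "r");   // Japanese has no /l/
        apply(word, 'o', "oo");  // toy vowel-lengthening rule, just for this example

        StringBuilder out = new StringBuilder();
        for (Cell c : word) out.append(c.output);
        System.out.println(out); // nooran
    }
}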
That was basically the model that led me all the way to my final, finished product. There were a few snafus – like how the training algorithm takes up an ungodly amount of memory, so I had to optimize since I was running it on my laptop with only 2GB of memory. I also only used a few rule templates and I even cut the training data from 4,000 to little over 1,000 entries, based on which names were more popular in US census data. But ultimately, I think the final model was pretty good. Below are my test results, using a test set of 47 first and last names that were not in the training data (and which I mostly borrowed from people I know).
holly -> horii (gold: hoorii)
anderson -> andaason
damon -> damon (gold: deemon)
clinton -> kurinton
lambert -> ranbaato
king -> kingu
lawson -> rooson
bellow -> beroo
butler -> butoraa (gold: batoraa)
vorwaller -> boowaraa
parker -> paakaa
thompson -> somupson (gold: tompuson)
potter -> pottaa
hermann -> haaman
stacia -> suteishia
maevis -> maebisu (gold: meebisu)
gerald -> jerarudo
hartleben -> haatoreben
hanson -> hannson (gold: hanson)
brubeck -> buruubekku
ferrel -> fereru
poolman -> puoruman (gold: puuruman)
bart -> baato
smith -> sumisu
larson -> raason
perkowitz -> paakooitsu (gold: paakowitsu)
boyd -> boido
nancy -> nanshii
meliha -> meria (gold: meriha)
berzins -> baazinsu (gold: baazinzu)
manning -> maningu
sanders -> sandaasu (gold: sandaazu)
durup -> duruppu (gold: durupu)
thea -> sia
walker -> waokaa (gold: wookaa)
johnson -> jonson
beal -> beru (gold: biiru)
lovitz -> robitsu
melville -> merubiru
pittman -> pitman (gold: pittoman)
west -> wesuto
eaton -> iaton (gold: iiton)
pound -> pondo
eustice -> iasutisu (gold: yuusutisu)
pope -> popu (gold: poopu)
Baseline (i.e. just using the English strings without applying the model at all):
Accuracy: 0.00
Total edit distance: 145
Model score:
Accuracy: 0.5833333333333334
Total edit distance: 28
(I print out “gold” and the correct answer only for the incorrect ones.)
The accuracy’s not very impressive, but as I kept tweaking the features, what I was really aiming for was low edit distance, and 28 was the lowest I was able to achieve on the test set. So this means that, even when it makes mistakes, the mistakes are usually very small, so the results are still reasonable. “Meinaado,” for instance, isn’t even a mistake – it’s just two ways of writing the same long vowel (“mei” vs. “mee”).
Anyway, many of the mistakes can be corrected by just using postprocessing heuristics (e.g. final “nn” doesn’t make any sense in Japanese, and “tm” is not a valid consontant cluster). I decided I was satisfied enough with this model to leave it as it is for now – especially given I had already spent weeks on this whole process.
This is the model that I ultimately included with the Japanese Name Converter app. The app processes any name that is not found in the built-in dictionary of 4,000 names, spits out the resulting roomaji, applies some postprocessing heuristics to obey the phonotactics of Japanese (like in the “nn” example above), converts the roomaji to katakana, and displays the result on the screen.
Of course, because it only fires when a name is outside the set of 4,000 relatively common names, the average user may actually never see the output from my TBL model. However, I like having it in the app because I think it adds something unique. I looked around at other “your name in Japanese” apps and websites, but none of them are capable of transliterating any old arbitrary string. They always give an error when the name doesn’t happen to be in their database. At least with my app, you’ll always get some transliteration, even if it’s not a perfect one.
The Japanese Name Converter is currently my third most popular Android app, after Pokédroid and Chord Reader, which I think is pretty impressive given that I never updated it. The source code is available at Github. |
# Interesting Class Declaration
## Recommended Posts
Hi,
I've hit the latest snag in my book and seeing as I don't have a C++ tutor to hand I thought I'd come on here and ask a few questions. Here's the code:
template <typename T>
class PtrVector {
public:
    explicit PtrVector(size_t capacity)
        : buf_(new T *[capacity]), cap_(capacity), size_(0) {}
private:
    T **buf_;      // ptr to array of ptr to T
    size_t cap_;   // capacity
    size_t size_;  // size
};
The bit I really don't get at the moment is this line:
explicit PtrVector(size_t capacity) : buf_(new T *[capacity]), cap_(capacity), size_(0) {}
This *looks* to me like a constructor which takes one argument with variable name 'capacity'. I am interested in the fact that the variable type size_t is not declared until later though inside the class definition. Is this legal? Can you declare a variable type as an argument to a constructor when that variable type is only defined inside the class?
I do have more questions but I'd like to work through this one first. Thanks.
##### Share on other sites
It might be that your definition of size_t is identical to the typedef given to you by the standard, and that somehow the definition of the size_t alias creeped into your #include-hierarchy (in fact, it must [should (*)] be like that, as you are using size_t inside the class declaration already)
(*) -> "should" instead of "must", because I hear that (at least some version of) MSVC has problems implementing two phase lookup properly.
##### Share on other sites
I see, thanks for the reply. So I am not incorrect to wonder where it came from then. Your post is quite technical you obviously know alot about this stuff, but from I gleaned from it it appears that size_t might already exist or be assumed to exist prior to the class definition.
My next questions also concern the same line:
explicit PtrVector(size_t capacity) : buf_(new T *[capacity]), cap_(capacity), size_(0) {}
1)Now buf_, cap_ and size_ are all private variables within the class. So am I correct to assume that this constructor is just initialising these variables. I am rather confused by how there is no typical a = b syntax going on. I don't really get why we have private variable names and then just () brackets straight after them. For example what does cap_(capacity) do? cap_ is a private data member not a function so this syntax is strange to me. Given that buf_ is defined as:
T **buf_;
I can see that buf_(new T *[capacity],... is declaring a new array of pointers to variable type T of size capacity. That bit's ok. But not the rest.
2)What does the keyword explicit do? I get the feeling this is something I've already learned but forgotten about classes and constructors :o)
Just out of interest when he comes to implement stuff he does this, just one line for the whole class:
PtrVector<Shape> pic2(MAX);
Where 'Shape' I believe is an existing class or variable of some kind. MAX is obviously a pre-defined constant. I've never seen a class instantiated as an object using that syntax before except when using the std::vector type. Up to now I've only seen it as:
myClass newclass
So this is new to me. I'm guessing that the keyword 'explicit' has altered the way you create this PtrVector class from a simple beginner's style to a more advanced style which requires the <> brackets.
That's enough for now if I post too much I'll lose track of myself. Any input would be greatly appreciated ;o)
##### Share on other sites
1. Yes you are correct. That syntax is known as an initializer list.
2. The explicit keyword tells the compiler that the PtrVector constructor cannot be called like this:
PtrVector<int> my_vector;
my_vector = 5; // not allowed thanks to the 'explicit' keyword
Also, the PtrVector class is defined as a 'template' class. That's where the ClassName<Type> syntax is coming from. 'T' becomes the type that you insert between the < and > throughout the class's definition.
So essentially, when you write this:
PtrVector<int> my_vector(5);
You're creating an instance of PtrVector which can store 5 integers. So this:
PtrVector<Shape> my_vector(5);
Creates an instance of PtrVector which can manage 5 'Shape' objects - and yes, 'Shape' must be a predefined class or struct.
##### Share on other sites
Quote:
PtrVector<int> my_vector;
my_vector = 5; // not allowed thanks to the 'explicit' keyword
Almost, but a little misleading. It is operator=() that handles this case. The implicit constructor call to create the parameter to operator=() is what causes this to fail. A more straightforward example of how explicit works is like so:
PtrVector<int> my_vector = 5;
@OP
Implicit constructors have their uses, for example it is very convenient to initialise a std::string instance from a string literal. Outside such "basic" types, it is generally a good idea not to call constructors implicitly.
That said, the "explicit" keyword isn't often used to enforce this, probably because most complex types have constructors with more than a single argument. But its a good idea to be aware of "explicit" in case you get this error later on.
Quote:
I am rather confused by how there is no typical a = b syntax going on. I don't really get why we have private variable names and then just () brackets straight after them.
In C++, initialiser lists are preferred to assignment in the constructor body. One reason is that it is the only way to initialise const members, reference members and members with no default constructor. It is also the only way to specify parameters to any base class constructors. If you do not write an initialiser list, all members will be default initialised before the constructor body runs.
Consider a complex type, like std::string, as a member. There are a few ways to implement std::string, but let us assume that it always allocates a character array to hold its data (this is one of the simplest implementations). Look at the following code:
class Example
{
public:
    Example()
    {
        text = "Hello, World";
    }
private:
    std::string text;
};
With such a std::string implementation, this will involve a call to the string default constructor, which involves an allocation. Then, a temporary string will be created for "Hello, World", which involves another allocation. The strings assignment operator will be invoked, which will allocate again, copy the string data, then deallocate the first allocation. Finally, the temporary will be destroyed, another deallocation.
Compare with:
class Example
{
public:
    Example() : text("Hello, World") { }
private:
    std::string text;
};
This constructor call involves a single call, which will only allocate once. There are no deallocations in the constructor.
For complex types, you should really use the initialiser list. Another "feature" of initialiser lists is that the compiler knows that it can only initialise members. This means that you can use the same names for the parameters as the member variables. Some people dislike this, and I understand, but it is convenient:
template <typename T>
class PtrVector {
public:
    explicit PtrVector(size_t capacity)
        : buffer(new T *[capacity]), capacity(capacity), size(0) {}
private:
    T **buffer;
    size_t capacity;
    size_t size;
};
The code will work, the capacity member will be set to the input value and the buffer will be allocated with the parameters "capacity", not the (as yet unset) member value.
##### Share on other sites
Wow they're good replies! Many thanks to you both. I've just got back from the gym doing some ab exercises, I'll chew through those replies later and no doubt will return with questions for the next exciting installment of my proper education :oD
##### Share on other sites
Ok I've had a good read of all this stuff and I've condensed it down to the following points/questions.
1)An initialiser list is a good confusion free, well structured and efficient way of initialising data members and avoiding potentially unwanted default initialisation and excessive hidden allocation operations.
2)The template part:
template <typename T> (I notice the absence of ';' here)
is responsible for causing the constructor to require the variable type inside the <> brackets:
PtrVector<variable type> classInstanceName;
3)The explicit part seems I think (and this is a question not an observation) to be the part which requires the = operator to be used with a number/value to set the constructor argument to, thus producing this syntax:
PtrVector<int> my_vector = 5;
I'd appreciate, if possible, some feedback on this thanks ;o)
##### Share on other sites
Quote:
Original post by adder_noir: Ok I've had a good read of all this stuff and I've condensed it down to the following points/questions. 1) An initialiser list is a good confusion free, well structured and efficient way of initialising data members and avoiding potentially unwanted default initialisation and excessive hidden allocation operations.
Indeed.
Quote:
2) The template part: template <typename T> (I notice the absence of ';' here) is responsible for causing the constructor to require the variable type inside the <> brackets: PtrVector<variable type> classInstanceName;
Not exactly. A template is like a recipe for creating a class. The recipes must have the parameters specified in their definitions. When you actually create the class, you replace the formal template parameters with actual arguments. So, when you write a line like
PtrVector<Shape> pic2(MAX);
you're declaring a variable of type class PtrVector<Shape> and the definition of that class is implicitly generated from the template you provided.
So, it's not exactly true that the template is responsible for causing the constructor to require the angle brackets so much as the syntax of a template definition requires it.
Quote:
3) The explicit part seems I think (and this is a question not an observation) to be the part which requires the = operator to be used with a number/value to set the constructor argument to, thus producing this syntax: PtrVector<int> my_vector = 5;
Yeah, that's sorta true in a trivial way. The explicit keyword was added to the language to explicitly disable implicit type conversions. Without it, a lot of programmers might write something like this.
#include <iostream>
#include <string>

class Example1 {
public:
    Example1(int i) { std::cerr << "example 1i: " << i << "\n"; }
    Example1(const std::string& s) { std::cerr << "example 1s: " << s << "\n"; }
};

int main(int, char*[]) {
    Example1 x("hello");
}
and then spend a lot of time in the WTF mode of debugging. Can you figure out why?
##### Share on other sites
I'm going to run that code through my compiler and see if I can figure it out. It looks to me like overloading the constructor function, which is for some reason rejected by the compiler or wrecks something at runtime.
Very good reply thanks, I'll check back when I've had a chance to try it out in code blocks.
##### Share on other sites
Interesting. This actually ran in my compiler:
#include <iostream>
#include <string>

class Example1
{
public:
    Example1(int i) { std::cout << "example 1i: " << i << "\n"; }
    Example1(const std::string& s) { std::cout << "example 1s: " << s << "\n"; }
};

int main()
{
    Example1 x("hello");
    return 1;
}
I have no idea why though. It also chose the correct constructor when I inputted an integer as the parameter for x's constructor. Seems I have much more to understand, or maybe I have a bad compiler. I'm betting this wouldn't have run in MS Visual. Seeing how some compilers can let bad stuff past is perhaps all the more reason to learn code properly as I am now!
Any ideas why it ran? I would have thought for all the world my compiler would have rejected an overloaded constructor function but it didn't. Frightens me to be honest, makes me realise how careful you have to be.
# What are eddy currents? – definition, causes, applications, and properties
Take a metal disk and try to spin it: it rotates easily. Now try to spin it between the poles of a magnet. It no longer feels the same; the disk is noticeably harder to rotate, or, put formally, you need more energy to rotate it. Why does this happen?
It happens because eddy currents form in the metallic disk. Eddy currents were discovered by the French physicist Léon Foucault (1819–1868) in 1855; after their discoverer, they are also called Foucault currents. Eddy currents are induced within a conductor when a changing magnetic field is applied to it.
In this article, we will discuss what eddy currents are, along with their definition, causes, and applications. So let's get started…
## History of eddy currents
Eddy currents were first observed by François Arago (1786–1853), the 25th prime minister of France and also a mathematician, physicist, and astronomer. In 1824 he observed what is called rotational magnetism, and that most conductive bodies could be magnetized; these discoveries were later completed and explained by Michael Faraday (1791–1867).
In 1834, Emil Lenz stated Lenz's law: the direction of induced current flow in an object will be such that its magnetic field will oppose the change of magnetic flux that caused the current flow. Eddy currents produce a secondary magnetic field that opposes the change in the magnetic field that produced them.
But the French physicist Léon Foucault (1819–1868) is credited with having discovered eddy currents. In 1855, he discovered that the force required for the rotation of a copper disk becomes greater when it is made to rotate between the poles of a magnet, the disk at the same time becoming heated by the eddy current induced in the metal.
## What are eddy currents?
Eddy currents definition: Eddy currents (also called Foucault currents) are loops of electrical current induced within conductors by a changing magnetic field in the conductor, according to Faraday's law of induction.
Eddy currents always flow in closed loops within conductors, in planes perpendicular to the magnetic field. Eddy currents can be induced in a nearby stationary conductor by a time-varying magnetic field produced by an AC electromagnet or transformer, or by relative motion between a magnet and a nearby conductor. The magnitude of the current in a given loop (a rough quantitative version is sketched after this list) is proportional to the
• strength of the magnetic field
• the area of the loop
• and the rate of change in flux
• and inversely proportional to the resistivity of the material.
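A rough way to see these dependencies (a back-of-the-envelope sketch, treating a single loop of area $A$ and resistance $R$ as a simple circuit): Faraday's law gives an induced EMF $\varepsilon = -\frac{d\Phi_B}{dt} \approx -A\frac{dB}{dt}$, and since $R$ grows with the resistivity of the material, the induced current $I = \frac{\varepsilon}{R}$ grows with the loop area and the rate of change of the field, and shrinks as the resistivity increases.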
If we sketch these circulating currents within a piece of metal, they look somewhat like eddies or whirlpools in a liquid, which is why they are called eddy currents.
## Causes of eddy currents
Whenever a metal sheet swings or moves through a magnetic field, the field induces circular electric currents, called eddy currents, in the sheet. In the diagram above, a metal disk is rotating between two magnetic poles. The magnetic field (B, green arrows) from the N-pole passes through the disk. Since the disk is rotating, the magnetic flux through a given area of the disk changes continuously.
The disk is rotating anticlockwise. The part of the disk which is going inside the magnetic field (left side), at that part the magnetic field is increasing as it gets nearer to the magnet $\frac{dB}{dt} > 0$. From Faraday’s law of induction, a circular electric field is created in the disk in a counterclockwise direction around the magnetic field lines. This field induces a counterclockwise flow of electric current (I, red), in the disk. These swirling electric currents are eddy currents.
But as the part of the disk comes out of the magnet (right side), the magnetic field at that part of the disk is decreasing as it is moving away from the magnet, $\frac{dB}{dt} < 0$. This induces a second eddy current in a clockwise direction in the disk.
## Eddy currents and Lenz’s law
Applied to eddy currents, Lenz's law says that the eddy currents produce a magnetic field opposing the change in the magnetic field that produced them, so eddy currents oppose the source of the changing magnetic field. For example, a nearby conductive surface exerts a drag force on a moving magnet, resisting its motion, due to the eddy currents induced in the surface by the moving magnetic field. This drag force is exploited in eddy current brakes to slow or stop moving objects.
Watch this video for an experimental demonstration of eddy currents.
## Undesirable effects of eddy currents
Eddy currents are produced inside the iron cores of rotating armatures of electric motors and dynamos, and also in the cores of transformers, which experience the flux changes when they are in use. Eddy currents cause unnecessary heating and wastage of power. The current flowing through the resistance of the conductor also dissipates energy in the form of heat in the materials.
Thus eddy currents are a cause of energy loss in alternating current (AC) inductors, transformers, electric motors and generators, and other AC machinery in the form of heat. To minimize heat loss, this machinery needs special construction such as laminated magnetic cores or ferrite cores. Eddy currents are also used to heat objects in furnaces and induction heaters, and to detect cracks and flaws in metal parts with eddy current detectors.
## Properties of eddy currents
Some important properties of eddy currents are given below:
• Eddy currents generate heat as well as electromagnetic forces in conductors of non-zero resistance.
• The heat produced by the eddy currents can be used for induction heating.
• The electromagnetic forces produced by the eddy currents can be used for levitation, producing movement, or giving a strong braking effect.
• Self-induced eddy currents are responsible for the skin effect in conductors. Because of this, eddy currents can be used for non-destructive testing of materials for geometric features such as micro-cracks.
• Eddy currents can be distorted by defects such as cracks, corrosion, edges, etc.
## Applications of eddy currents
Some important applications of eddy currents are given below:
### Electromagnetic braking
Eddy currents are used as brakes called eddy current brakes to slow or stop moving objects. This braking system is used in modern railways for smooth stopping. During braking, the metal wheel of the train is exposed to the magnetic field of an electromagnet, and the rotation of the wheel under the electromagnet induces eddy currents in the wheel.
According to Lenz’s law, the magnetic field produces by the eddy currents opposes the cause. Thus the wheel faces a force opposing the initial movement of the wheel. If the wheels are spinning faster, the stronger will the effect be, meaning that as the train slows the braking force is reduced, producing a smooth stopping motion.
### Repulsive effects and levitation or Electrodynamic suspension (EDS)
Electrodynamic suspension (EDS) is a form of magnetic levitation that involves subjecting conductors to time-varying magnetic fields. This induces eddy currents in the conductors that create a repulsive magnetic field that separates the two objects. Magnetic fields can arise from relative movements between two objects. In many cases, one magnetic field is a permanent field, such as a permanent magnet or a superconducting magnet, and the other magnetic field is induced by field changes that occur when the magnet moves relative to a conductor in the other object.
Electrodynamic suspension can also occur when an electromagnet driven by an AC power source creates the changing magnetic field; In some cases, a linear induction motor creates the field. EDS is used for maglev trains like the Japanese SC Maglev. It is also used for some types of magnetic levitation bearings.
### Metal identification
Eddy currents are used in some coin-operated vending machines to detect counterfeit coins or slugs. The coin rolls past a stationary magnet and is decelerated by the eddy currents. The strength of the eddy currents and thus the delay depends on the conductivity of the metal of the coin. Slugs are slowed down to a different degree than real coins, and this is used to send them to the rejection slot.
### Induction Heating
A conductive body is heated electrically by inducing eddy currents in it with a high-frequency electromagnet. Its main uses are induction cooking, induction furnace for heating metals to their melting point, welding, soldering, etc.
### Eddy-current testing
Eddy current testing (commonly abbreviated ECT) is one of many electromagnetic testing methods used in non-destructive testing (NDT); it uses electromagnetic induction to detect and characterize surface and subsurface defects in conductive materials.
### Speedometers
Eddy currents are also used in speedometers to indicate the speed of vehicles. A speedometer has a magnet that rotates with the speed of the vehicle. The magnet is in an aluminum drum that is gently pivoted and held in place by a hairspring. When the magnet rotates, eddy currents are built up in the drum, which counteracts the movement of the magnet. A torque is applied to the drum in the opposite direction, causing the drum to deflect at an angle dependent on vehicle speed.
## Frequently Asked Questions – FAQs
##### What are eddy currents?
Eddy currents are loops of electrical current induced within conductors by a changing magnetic field, in accordance with Faraday's law of induction.
##### What do the eddy currents look like?
Eddy currents look like eddies or whirlpools.
##### Who discovered Eddy Current?
French physicist Léon Foucault (1819–1868) is credited with having discovered eddy currents. In 1855, he discovered that the force required for the rotation of a copper disk becomes greater when it is made to rotate between the poles of a magnet, the disk at the same time becoming heated by the eddy current induced in the metal.
##### What is the frequency in eddy current?
The supply frequency usually used for the eddy current heating ranges from 10 kHz to 40 kHz.
##### How does resistance affect eddy currents?
The resistance felt by the eddy currents in a conductor causes Joule heating and the amount of heat generated is proportional to the current squared. However, for applications like motors, generators, and transformers, this heat is considered wasted energy, and as such, eddy currents need to be minimized.
##### What is the purpose of eddy’s current testing?
Eddy current testing is most commonly used to inspect surfaces and tubes. It is an incredibly sensitive testing method and can identify even very small flaws or cracks in a surface or just beneath it. On surfaces, ECT can be done with both ferromagnetic and non-ferromagnetic materials.
##### What factors affect eddy currents?
The greater the conductivity of a material, the greater the flow of eddy currents on the surface. Permeability, or the ease with which a material can be magnetized, also affects the eddy current response; in fact, the depth of penetration decreases significantly with an increase in permeability.
Stay tuned with Laws Of Nature for more useful and interesting content.
# Background Interpolators
Background interpolators provide a method for converting a low-resolution mesh into a low-order high-resolution image.
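In practice an interpolator is constructed with its zoom factors and then applied to the mesh like a function. A small sketch (assuming the interpolator names are exported by Photometry; otherwise qualify them as Photometry.Background.ZoomInterpolator):

julia> using Photometry

julia> mesh = [1.0 2 3; 4 5 6; 7 8 9];   # toy 3×3 background mesh

julia> size(ZoomInterpolator(2)(mesh))   # factors (2, 2) double each axis
(6, 6)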
## Interpolators
Photometry.Background.ZoomInterpolatorType
ZoomInterpolator(factors)
Use a cubic-spline interpolation scheme to increase resolution of a mesh.
factors represents the level of "zoom", so an input mesh of size (10, 10) with factors (2, 2) will have an output size of (20, 20). If only an integer is provided, it will be used as the factor for every axis.
Examples
julia> ZoomInterpolator(2)([1 0; 0 1])
4×4 Matrix{Float64}:
1.0 0.75 0.25 -2.77556e-17
0.75 0.625 0.375 0.25
0.25 0.375 0.625 0.75
-5.55112e-17 0.25 0.75 1.0
julia> ZoomInterpolator(3, 1)([1 0; 0 1])
6×2 Matrix{Float64}:
1.0 -2.77556e-17
1.0 -2.77556e-17
0.666667 0.333333
0.333333 0.666667
-5.55112e-17 1.0
-5.55112e-17 1.0
source
Photometry.Background.IDWInterpolatorType
IDWInterpolator(factors; leafsize=10, k=8, power=1, reg=0, conf_dist=1e-12)
Use Shepard Inverse Distance Weighing interpolation scheme to increase resolution of a mesh.
factors represents the level of "zoom", so an input mesh of size (10, 10) with factors (2, 2) will have an output size of (20, 20). If only an integer is provided, it will be used as the factor for every axis.
The interpolator accepts some additional parameters: leafsize determines at what number of points to stop splitting the tree further; k is the number of nearest neighbors to consider; power is the exponent for distance in the weighting factor; reg is the offset for the weighting factor in the denominator; conf_dist is the distance below which two points are considered the same point.
Examples
julia> IDWInterpolator(2, k=2)([1 0; 0 1])
4×4 Matrix{Float64}:
1.0 0.75 0.25 0.0
0.75 0.690983 0.309017 0.25
0.25 0.309017 0.690983 0.75
0.0 0.25 0.75 1.0
julia> IDWInterpolator(3, 1; k=2, power=4)([1 0; 0 1])
6×2 Matrix{Float64}:
1.0 0.0
1.0 0.0
0.941176 0.0588235
0.0588235 0.941176
0.0 1.0
0.0 1.0
source |
# CPU affinity¶
Kdb+ can be constrained to run on specific cores through the setting of CPU affinity.
Typically, you can set the CPU affinity for the shell you are in, and then processes started within that shell will inherit the affinity.
.Q.w (memory stats)
Basics: Command-line parameter -w, System command \w
## Linux¶
Use the taskset command to limit to a certain set of cores, e.g.
taskset -c 0-2,4 q
will run q on cores 0, 1, 2 and 4. Or
taskset -c 0-2,4 bash
and then all processes started from within that new shell will automatically be restricted to those cores.
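To confirm that the shell (and therefore anything launched from it, such as q) picked up the mask, query its affinity; the pid and exact wording below are illustrative:

taskset -c 0-2,4 bash
taskset -cp $$
# pid 12345's current affinity list: 0-2,4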
You can also use numactl with --physcpubind (-C) to specify the cores, perhaps combined with -l to always allocate on the current node, or other policies discussed in the Linux production notes:
numactl --interleave=all --physcpubind=0,1,2 q
### Other ways to limit resources¶
On Linux systems, administrators might prefer cgroups as a way of limiting resources.
On Unix systems, memory usage can be constrained using ulimit, e.g.
ulimit -v 262144
limits virtual address space to 256MB.
## Solaris¶
Use psrset
psrset -e 2 q
which will run q using processor set 2. Or, to start a shell restricted to those cores:
psrset -e 2 bash
## Windows¶
Start q.exe with the OS command start with the /affinity flag set
start /affinity 3 c:\q\w64\q.exe
will run q on core 0 and 1. |
Q: 7 ABC is a triangle right angled at C. A line through the mid-point M of hypotenuse AB and parallel to BC intersects AC at D. Show that (ii) $\small MD\perp AC$
M mansi
Given: ABC is a triangle right angled at C. A line through the mid-point M of hypotenuse AB and parallel to BC intersects AC at D.
To prove : $\small MD\perp AC$
Proof: Since DM $\parallel$ BC (given) and AC is a transversal, $\angle$ADM = $\angle$ACB (corresponding angles).
Since $\angle$ACB = $90^\circ$, it follows that $\angle$ADM = $90^\circ$.
Hence, $\small MD\perp AC$.
\documentclass[11pt,a4paper]{article}
\usepackage{isabelle,isabellesym}

% further packages required for unusual symbols (see also
% isabellesym.sty), use only when needed
%\usepackage{amssymb}
%\usepackage[greek,english]{babel}

\usepackage[latin1]{inputenc}
\usepackage[only,bigsqcap]{stmaryrd}
%\usepackage{eufrak}
%\usepackage{textcomp}

% this should be the last package used
\usepackage{pdfsetup}

% urls in roman style, theory text in math-similar italics
\urlstyle{rm}
\isabellestyle{it}

\begin{document}

\title{Pseudo-hoops}
\author{George Georgescu and Lauren\c tiu Leu\c stean and Viorel Preoteasa}
\maketitle

\begin{abstract}
Pseudo-hoops are algebraic structures introduced in
\cite{bosbach:1969,bosbach:1970} by B. Bosbach under the name of
complementary semigroups. This is a formalization of the paper
\cite{georgescu:leustean:preoteasa:2005}. Following
\cite{georgescu:leustean:preoteasa:2005} we prove some properties of
pseudo-hoops and we define the basic concepts of filter and normal
filter. The lattice of normal filters is isomorphic with the lattice of
congruences of a pseudo-hoop. We also study some important classes of
pseudo-hoops. Bounded Wajsberg pseudo-hoops are equivalent to
pseudo-Wajsberg algebras and bounded basic pseudo-hoops are equivalent
to pseudo-BL algebras. Some examples of pseudo-hoops are given in the
last section of the formalization.
\end{abstract}

\tableofcontents

\section{Overview}

Section 2 introduces some operations and their infix syntax. Sections 3
and 4 introduce some facts about residuated and complemented monoids.
Section 5 introduces the pseudo-hoops and some of their properties.
Section 6 introduces filters and normal filters and proves that the
lattice of normal filters and the lattice of congruences are isomorphic.
Following \cite{ceterchi:2001}, Section 7 introduces pseudo-Wajsberg
algebras and some of their properties. In Section 8 we investigate some
classes of pseudo-hoops. Finally, Section 9 presents some examples of
pseudo-hoops and normal filters.

\parindent 0pt\parskip 0.5ex

% generated text of all theories
\input{session}

% optional bibliography
\bibliographystyle{abbrv}
\bibliography{root}

\end{document}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
# zbMATH — the first resource for mathematics
A backward uniqueness result for the wave equation with absorbing boundary conditions. (English) Zbl 1337.47111
Summary: We consider the wave equation $$u_{tt}=\Delta u$$ on a bounded domain $$\Omega\subset{\mathbb R}^n$$, $$n>1$$, with smooth boundary of positive mean curvature. On the boundary, we impose the absorbing boundary condition $${\partial u\over\partial\nu}+u_t=0$$. We prove uniqueness of solutions backward in time.
##### MSC:
47N20 Applications of operator theory to differential and integral equations
35L05 Wave equation
35A02 Uniqueness problems for PDEs: global uniqueness, local uniqueness, non-uniqueness
47D06 One-parameter semigroups and linear evolution equations
##### References:
[1] G. Avalos, Backward uniqueness for a PDE fluid-structure interaction, preprint
[2] G. Avalos, Backward uniqueness of the s.c. semigroup arising in parabolic-hyperbolic fluid-structure interaction, J. Diff. Eq., 245, 737, (2008) · Zbl 1158.35300
[3] G. Avalos, Backwards uniqueness of the $C_0$-semigroup associated with a parabolic-hyperbolic Stokes-Lamé partial differential equation system, Trans. Amer. Math. Soc., 362, 3535, (2010) · Zbl 1204.35011
[4] I. Lasiecka, Backward uniqueness for thermoelastic plates with rotational forces, Semigroup Forum, 62, 217, (2001) · Zbl 1015.74030
[5] M. Renardy, Backward uniqueness for linearized compressible flow, Evol. Eqns. Control Th., 4, 107, (2015) · Zbl 1338.47046
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. |
Example 9-7-10 - Maple Help
Chapter 9: Vector Calculus
Section 9.7: Conservative and Solenoidal Fields
Example 9.7.10
If $u\left(x,y,z\right)$ is a scalar potential for $\mathbf{F}$, show that ${\int }_{C}\mathbf{F}·\mathbf{dr}=u\left(Q\right)-u\left(P\right)$, where $C$ is given parametrically by the equations , , $z=2+{t}^{3}$, and $P$ and $Q$ are its endpoints when $t=0,1$, respectively.
Product (mathematics)
In mathematics, a product is the result of multiplication, or an expression that identifies factors to be multiplied. For example, 30 is the product of 6 and 5 (the result of multiplication), and ${\displaystyle x\cdot (2+x)}$ is the product of ${\displaystyle x}$ and ${\displaystyle (2+x)}$ (indicating that the two factors should be multiplied together).
The order in which real or complex numbers are multiplied has no bearing on the product; this is known as the commutative law of multiplication. When matrices or members of various other associative algebras are multiplied, the product usually depends on the order of the factors. Matrix multiplication, for example, is non-commutative, and so is multiplication in other algebras in general as well.
There are many different kinds of products in mathematics: besides being able to multiply just numbers, polynomials or matrices, one can also define products on many different algebraic structures.
## Product of two numbers
### Product of two natural numbers
3 by 4 is 12
Placing several stones into a rectangular pattern with ${\displaystyle r}$ rows and ${\displaystyle s}$ columns gives
${\displaystyle r\cdot s=\sum _{i=1}^{s}r=\underbrace {r+r+\cdots +r} _{s{\text{ times}}}=\sum _{j=1}^{r}s=\underbrace {s+s+\cdots +s} _{r{\text{ times}}}}$
stones.
### Product of two integers
Integers allow positive and negative numbers. Their product is determined by the product of their positive amounts, combined with the sign derived from the following rule:
${\displaystyle {\begin{array}{|c|c c|}\hline \cdot &-&+\\\hline -&+&-\\+&-&+\\\hline \end{array}}}$
(This rule is a necessary consequence of demanding distributivity of multiplication over addition, and is not an additional rule.)
In words, we have:
• Minus times Minus gives Plus
• Minus times Plus gives Minus
• Plus times Minus gives Minus
• Plus times Plus gives Plus
### Product of two fractions
Two fractions can be multiplied by multiplying their numerators and denominators:
${\displaystyle {\frac {z}{n}}\cdot {\frac {z'}{n'}}={\frac {z\cdot z'}{n\cdot n'}}}$
### Product of two real numbers
For a rigorous definition of the product of two real numbers see Construction of the real numbers.
Formulas
Theorem[1] — Suppose ${\displaystyle a>0}$ and ${\displaystyle b>0}$. If ${\displaystyle 1<p<\infty }$ and ${\displaystyle q:={\frac {p}{p-1}}}$, then

${\displaystyle ab=\min _{0<t<\infty }\left({\frac {t^{p}a^{p}}{p}}+{\frac {t^{-q}b^{q}}{q}}\right).}$

Proof[1] —

Define a real-valued function ${\displaystyle f}$ on the positive real numbers by

${\displaystyle f(t):={\frac {t^{p}a^{p}}{p}}+{\frac {t^{-q}b^{q}}{q}}}$

for every ${\displaystyle t>0}$, and then calculate its minimum.
### Product of two complex numbers
Two complex numbers can be multiplied by the distributive law and the fact that ${\displaystyle i^{2}=-1}$, as follows:
{\displaystyle {\begin{aligned}(a+b\,i)\cdot (c+d\,i)&=a\cdot c+a\cdot d\,i+b\cdot c\,i+b\cdot d\cdot i^{2}\\&=(a\cdot c-b\cdot d)+(a\cdot d+b\cdot c)\,i\end{aligned}}}
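For instance, as a quick numerical check of this formula:

${\displaystyle (1+2\,i)\cdot (3+4\,i)=(1\cdot 3-2\cdot 4)+(1\cdot 4+2\cdot 3)\,i=-5+10\,i.}$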
#### Geometric meaning of complex multiplication
A complex number in polar coordinates.
Complex numbers can be written in polar coordinates:
${\displaystyle a+b\,i=r\cdot (\cos(\varphi )+i\sin(\varphi ))=r\cdot e^{i\varphi }}$
Furthermore,
${\displaystyle c+d\,i=s\cdot (\cos(\psi )+i\sin(\psi ))=s\cdot e^{i\psi },}$
from which one obtains
${\displaystyle (a\cdot c-b\cdot d)+(a\cdot d+b\cdot c)i=r\cdot s\cdot e^{i(\varphi +\psi )}.}$
The geometric meaning is that the magnitudes are multiplied and the arguments are added.
### Product of two quaternions
The product of two quaternions can be found in the article on quaternions. Note, in this case, that ${\displaystyle a\cdot b}$ and ${\displaystyle b\cdot a}$ are in general different.
## Product of a sequence
The product operator for the product of a sequence is denoted by the capital Greek letter pi ∏ (in analogy to the use of the capital Sigma ∑ as summation symbol).[2][3] For example, the expression ${\displaystyle \textstyle \prod _{i=1}^{6}i^{2}}$ is another way of writing ${\displaystyle 1\cdot 4\cdot 9\cdot 16\cdot 25\cdot 36}$.[4]
The product of a sequence consisting of only one number is just that number itself; the product of no factors at all is known as the empty product, and is equal to 1.
## Commutative rings
Commutative rings have a product operation.
### Residue classes of integers
Residue classes in the rings ${\displaystyle \mathbb {Z} /N\mathbb {Z} }$ can be added:
${\displaystyle (a+N\mathbb {Z} )+(b+N\mathbb {Z} )=a+b+N\mathbb {Z} }$
and multiplied:
${\displaystyle (a+N\mathbb {Z} )\cdot (b+N\mathbb {Z} )=a\cdot b+N\mathbb {Z} }$
### Convolution
The convolution of the square wave with itself gives the triangular function
Two functions from the reals to itself can be multiplied in another way, called the convolution.
If
${\displaystyle \int \limits _{-\infty }^{\infty }|f(t)|\,\mathrm {d} t<\infty \qquad {\mbox{and}}\qquad \int \limits _{-\infty }^{\infty }|g(t)|\,\mathrm {d} t<\infty ,}$
then the integral
${\displaystyle (f*g)(t)\;:=\int \limits _{-\infty }^{\infty }f(\tau )\cdot g(t-\tau )\,\mathrm {d} \tau }$
is well defined and is called the convolution.
Under the Fourier transform, convolution becomes point-wise function multiplication.
### Polynomial rings
The product of two polynomials is given by the following:
${\displaystyle \left(\sum _{i=0}^{n}a_{i}X^{i}\right)\cdot \left(\sum _{j=0}^{m}b_{j}X^{j}\right)=\sum _{k=0}^{n+m}c_{k}X^{k}}$
with
${\displaystyle c_{k}=\sum _{i+j=k}a_{i}\cdot b_{j}}$
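For example, as a quick check of this formula with ${\displaystyle n=m=1}$:

${\displaystyle (1+2X)\cdot (3+X)=(1\cdot 3)+(1\cdot 1+2\cdot 3)X+(2\cdot 1)X^{2}=3+7X+2X^{2}.}$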
## Products in linear algebra
There are many different kinds of products in linear algebra. Some of these have confusingly similar names (outer product, exterior product) with very different meanings, while others have very different names (outer product, tensor product, Kronecker product) and yet convey essentially the same idea. A brief overview of these is given in the following sections.
### Scalar multiplication
By the very definition of a vector space, one can form the product of any scalar with any vector, giving a map ${\displaystyle \mathbb {R} \times V\rightarrow V}$.
### Scalar product
A scalar product is a bi-linear map:
${\displaystyle \cdot :V\times V\rightarrow \mathbb {R} }$
satisfying the condition that ${\displaystyle v\cdot v>0}$ for all ${\displaystyle 0\not =v\in V}$ (positive definiteness).
From the scalar product, one can define a norm by letting ${\displaystyle \|v\|:={\sqrt {v\cdot v}}}$.
The scalar product also allows one to define an angle between two vectors:
${\displaystyle \cos \angle (v,w)={\frac {v\cdot w}{\|v\|\cdot \|w\|}}}$
In ${\displaystyle n}$-dimensional Euclidean space, the standard scalar product (called the dot product) is given by:
${\displaystyle \left(\sum _{i=1}^{n}\alpha _{i}e_{i}\right)\cdot \left(\sum _{i=1}^{n}\beta _{i}e_{i}\right)=\sum _{i=1}^{n}\alpha _{i}\,\beta _{i}}$
### Cross product in 3-dimensional space
The cross product of two vectors in 3-dimensions is a vector perpendicular to the two factors, with length equal to the area of the parallelogram spanned by the two factors.
The cross product can also be expressed as the formal[a] determinant:
${\displaystyle \mathbf {u\times v} ={\begin{vmatrix}\mathbf {i} &\mathbf {j} &\mathbf {k} \\u_{1}&u_{2}&u_{3}\\v_{1}&v_{2}&v_{3}\\\end{vmatrix}}}$
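Expanding that determinant gives the familiar component formula. A small added sketch (the vectors used are arbitrary):

```cpp
#include <array>
#include <iostream>

using Vec3 = std::array<double, 3>;

// u x v = (u2*v3 - u3*v2, u3*v1 - u1*v3, u1*v2 - u2*v1)
Vec3 cross(const Vec3& u, const Vec3& v) {
    Vec3 r = {u[1] * v[2] - u[2] * v[1],
              u[2] * v[0] - u[0] * v[2],
              u[0] * v[1] - u[1] * v[0]};
    return r;
}

int main() {
    Vec3 w = cross({1, 0, 0}, {0, 1, 0});
    std::cout << w[0] << ' ' << w[1] << ' ' << w[2] << '\n';  // 0 0 1, i.e. i x j = k
}
```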
### Composition of linear mappings
A linear mapping can be defined as a function f between two vector spaces V and W with underlying field F, satisfying[5]
${\displaystyle f(t_{1}x_{1}+t_{2}x_{2})=t_{1}f(x_{1})+t_{2}f(x_{2}),\forall x_{1},x_{2}\in V,\forall t_{1},t_{2}\in \mathbb {F} .}$
If one only considers finite dimensional vector spaces, then
${\displaystyle f(\mathbf {v} )=f\left(v_{i}\mathbf {b_{V}} ^{i}\right)=v_{i}f\left(\mathbf {b_{V}} ^{i}\right)={f^{i}}_{j}v_{i}\mathbf {b_{W}} ^{j},}$
in which ${\displaystyle \mathbf {b_{V}} }$ and ${\displaystyle \mathbf {b_{W}} }$ denote the bases of V and W, ${\displaystyle v_{i}}$ denotes the component of v on ${\displaystyle \mathbf {b_{V}} ^{i}}$, and the Einstein summation convention is applied.
Now we consider the composition of two linear mappings between finite dimensional vector spaces. Let the linear mapping f map V to W, and let the linear mapping g map W to U. Then one can get
${\displaystyle g\circ f(\mathbf {v} )=g\left({f^{i}}_{j}v_{i}\mathbf {b_{W}} ^{j}\right)={g^{j}}_{k}{f^{i}}_{j}v_{i}\mathbf {b_{U}} ^{k}.}$
Or in matrix form:
${\displaystyle g\circ f(\mathbf {v} )=\mathbf {G} \mathbf {F} \mathbf {v} ,}$
in which the i-th row, j-th column element of F, denoted by ${\displaystyle F_{ij}}$, is ${\displaystyle {f^{j}}_{i}}$, and ${\displaystyle G_{ij}={g^{j}}_{i}}$.
The composition of more than two linear mappings can be similarly represented by a chain of matrix multiplication.
### Product of two matrices
Given two matrices
${\displaystyle A=(a_{i,j})_{i=1\ldots s;j=1\ldots r}\in \mathbb {R} ^{s\times r}}$ and ${\displaystyle B=(b_{j,k})_{j=1\ldots r;k=1\ldots t}\in \mathbb {R} ^{r\times t}}$
their product is given by
${\displaystyle A\cdot B=\left(\sum _{j=1}^{r}a_{i,j}\cdot b_{j,k}\right)_{i=1\ldots s;k=1\ldots t}\;\in \mathbb {R} ^{s\times t}}$
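A direct, unoptimised implementation of this triple-index formula, added here for illustration (dense row-major storage is an assumption of the sketch, not something stated above):

```cpp
#include <cstddef>
#include <vector>
#include <iostream>

// Multiply an s x r matrix A by an r x t matrix B, both stored row-major.
std::vector<double> mat_mul(const std::vector<double>& A, const std::vector<double>& B,
                            std::size_t s, std::size_t r, std::size_t t) {
    std::vector<double> C(s * t, 0.0);
    for (std::size_t i = 0; i < s; ++i)
        for (std::size_t k = 0; k < t; ++k)
            for (std::size_t j = 0; j < r; ++j)
                C[i * t + k] += A[i * r + j] * B[j * t + k];
    return C;
}

int main() {
    // (1 2; 3 4) * (5 6; 7 8) = (19 22; 43 50)
    for (double x : mat_mul({1, 2, 3, 4}, {5, 6, 7, 8}, 2, 2, 2)) std::cout << x << ' ';
    std::cout << '\n';
}
```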
### Composition of linear functions as matrix product
There is a relationship between the composition of linear functions and the product of two matrices. To see this, let r = dim(U), s = dim(V) and t = dim(W) be the (finite) dimensions of vector spaces U, V and W. Let ${\displaystyle {\mathcal {U}}=\{u_{1},\ldots ,u_{r}\}}$ be a basis of U, ${\displaystyle {\mathcal {V}}=\{v_{1},\ldots ,v_{s}\}}$ be a basis of V and ${\displaystyle {\mathcal {W}}=\{w_{1},\ldots ,w_{t}\}}$ be a basis of W. In terms of these bases, let ${\displaystyle A=M_{\mathcal {V}}^{\mathcal {U}}(f)\in \mathbb {R} ^{s\times r}}$ be the matrix representing f : U -> V and ${\displaystyle B=M_{\mathcal {W}}^{\mathcal {V}}(g)\in \mathbb {R} ^{t\times s}}$ be the matrix representing g : V -> W. Then
${\displaystyle B\cdot A=M_{\mathcal {W}}^{\mathcal {U}}(g\circ f)\in \mathbb {R} ^{t\times r}}$
is the matrix representing ${\displaystyle g\circ f:U\rightarrow W}$.
In other words: the matrix product is the description in coordinates of the composition of linear functions.
### Tensor product of vector spaces
Given two finite dimensional vector spaces V and W, the tensor product of them can be defined as a (2,0)-tensor satisfying:
${\displaystyle V\otimes W(v,w)=V(v)W(w),\forall v\in V^{*},\forall w\in W^{*},}$
where V* and W* denote the dual spaces of V and W.[6]
For infinite-dimensional vector spaces, one also has the:
The tensor product, outer product and Kronecker product all convey the same general idea. The differences between these are that the Kronecker product is just a tensor product of matrices, with respect to a previously-fixed basis, whereas the tensor product is usually given in its intrinsic definition. The outer product is simply the Kronecker product, limited to vectors (instead of matrices).
### The class of all objects with a tensor product
In general, whenever one has two mathematical objects that can be combined in a way that behaves like a linear algebra tensor product, then this can be most generally understood as the internal product of a monoidal category. That is, the monoidal category captures precisely the meaning of a tensor product; it captures exactly the notion of why it is that tensor products behave the way they do. More precisely, a monoidal category is the class of all things (of a given type) that have a tensor product.
### Other products in linear algebra
Other kinds of products in linear algebra include:
## Cartesian product
In set theory, a Cartesian product is a mathematical operation which returns a set (or product set) from multiple sets. That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b) with a ∈ A and b ∈ B.[7]
The class of all things (of a given type) that have Cartesian products is called a Cartesian category. Many of these are Cartesian closed categories. Sets are an example of such objects.
## Empty product
The empty product on numbers and most algebraic structures has the value of 1 (the identity element of multiplication), just like the empty sum has the value of 0 (the identity element of addition). However, the concept of the empty product is more general, and requires special treatment in logic, set theory, computer programming and category theory.
## Products over other algebraic structures
Products over other kinds of algebraic structures include:
A few of the above products are examples of the general notion of an internal product in a monoidal category; the rest are describable by the general notion of a product in category theory.
## Products in category theory
All of the previous examples are special cases or examples of the general notion of a product. For the general treatment of the concept of a product, see product (category theory), which describes how to combine two objects of some kind to create an object, possibly of a different kind. But also, in category theory, one has:
## Other products
• A function's product integral (as a continuous equivalent to the product of a sequence, or as the multiplicative version of the normal/standard/additive integral). The product integral is also known as the "continuous product" or "multiplical".
• Complex multiplication, a theory of elliptic curves.
## Notes
1. Here, "formal" means that this notation has the form of a determinant, but does not strictly adhere to the definition; it is a mnemonic used to remember the expansion of the cross product.
## References
1. Jarchow 1981, pp. 47–55.
2. "Comprehensive List of Algebra Symbols". Math Vault. 2020-03-25.
3. Weisstein, Eric W. "Product". mathworld.wolfram.com.
4. "Summation and Product Notation". math.illinoisstate.edu.
5. Clarke, Francis (2013). Functional analysis, calculus of variations and optimal control. Dordrecht: Springer. pp. 9–10. ISBN 1447148207.
6. Boothby, William M. (1986). An introduction to differentiable manifolds and Riemannian geometry (2nd ed.). Orlando: Academic Press. p. 200. ISBN 0080874398.
7. Moschovakis, Yiannis (2006). Notes on set theory (2nd ed.). New York: Springer. p. 13. ISBN 0387316094.
# Math Help - unsure? + hyperbola!
1. ## unsure? + hyperbola!
okay how would I go about and do this question?
determine the standard form of the equation of a hyperbola with vertices (±2, 0) and passing through (4, 3)
would I use the formula:
(y-k)^2/a^2 -(x-h)^2/ b^2
but what would a and b stand for??
2. An equation of the hyperbola has the form:
$\frac{x^{2}}{2^{2}}-\frac{y^{2}}{b^{2}}=1$
Since P(4,3) passes through, the x and y coordinates satisfy the equation:
$\frac{4^{2}}{2^{2}}-\frac{3^{2}}{b^{2}}=1$
Solving, we find $b^{2}=3$
So, the equation is:
$\frac{x^{2}}{4}-\frac{y^{2}}{3}=1$
3. Take a look at this site for better understanding.
$\frac{(y-k)^2}{a^2}-\frac{(x-h)^2}{b^2} = 1$ will give you a north-south opening hyperbola. $(h, k)$ is the center of the hyperbola. a is the closest the hyperbola ever gets to the center. And b/a gives the absolute value of the ratio x/y for points infinitely far away from the center. Was that helpful?
Now I don't know what a vertice is, but you have four unknowns: a, b, k and h. You've got to have at least four equations to be able to get the values of those unknowns.
You have for example one pair of coordinates. You can always insert them to the equation. So, since you got
$\frac{(y-k)^2}{a^2}-\frac{(x-h)^2}{b^2} = 1$
and you know that (x, y) = (4, 3) is one point in the hyperbola, you can as a beginning set up the equation
$\frac{(3-k)^2}{a^2}-\frac{(4-h)^2}{b^2} = 1$
Good luck!
4. Originally Posted by frogsrcool
okay how would I go about and do this question?
determine the standard form of the equation of a hyperbola with vertices (±2, 0) and passing through (4, 3)
would I use the formula:
(y-k)^2/a^2 -(x-h)^2/ b^2
but what would a and b stand for??
Originally Posted by TriKri
Now I don't know what a vertice is
From reading the question I would assume that frogsrcool used the word "vertex" instead of the more standard term "focus."
-Dan
5. Yes, there are vertices on a hyperbola. The vertices are the x-intercepts.
Thay would have coordinates V(a,0) and V(-a,0). This one has vertices
V(2,0) and V'(-2,0).
Line V'V is called the transverse axis. The end points of this line segment are the vertices.
The foci of this particular hyperbola would be:
$c^{2}=a^{2}+b^{2}=4+3=7$
The foci are at $(\pm\sqrt{7},0)$
6. Originally Posted by galactus
Yes, there are vertices on a hyperbola. The vertices are the x-intercepts.
Thay would have coordinates V(a,0) and V(-a,0). This one has vertices
V(2,0) and V'(-2,0).
Line V'V is called the transverse axis. The end points of this line segment are the vertices.
The foci of this particular hyperbola would be:
$c^{2}=a^{2}+b^{2}=4+3=7$
The foci are at $(\pm\sqrt{7},0)$
My apologies. I have never heard of the term being applied to an hyperbola before.
-Dan
7. Certainly no need to apologize. I feel honored that I showed you something. I went ahead and included a graph of the said hyperbola. |
BinaryPlace - Maple Help
ListTools
BinaryPlace
perform a binary placement in a list
Calling Sequence BinaryPlace(L, x, f, opt1, opt2, ...)
Parameters
L - list, Vector, or one-dimensional Array
x - anything
f - (optional) procedure, operator, or algebraic expression
opt1, opt2, ... - (optional) extra arguments to f
Description
• The BinaryPlace(L, x) function performs a binary placement of x in L, where L is assumed to be sorted. It returns the greatest index n such that L[n] precedes x. If x precedes all elements in a list L, then the value $0$ is returned.
In this form of the calling sequence, x must be of type numeric or string and L should contain values of the same type in ascending order.
• BinaryPlace also accepts a Vector or one-dimensional Array as its first argument. If x precedes all elements in an Array, then the value that is returned is the lowerbound of the Array minus one. Since Vectors, like lists, always have a lowerbound of 1, the value returned for a Vector in this case is $0$.
• If L is a list, then the returned value $n$ is such that $\left[\mathrm{op}\left(1..n,L\right),x,\mathrm{op}\left(n+1..-1,L\right)\right]$ is still a sorted list. If L is a Vector or Array, then the returned value $n$ is such that $\left[\mathrm{op}\left(\mathrm{convert}\left({L}_{..n},\mathrm{list}\right)\right),x,\mathrm{op}\left(\mathrm{convert}\left({L}_{n+1..},\mathrm{list}\right)\right)\right]$ is sorted.
• If three or more arguments are specified in the calling sequence, then $f\left(x,y,\mathrm{opt1},\mathrm{opt2},...\right)$ must return true if x precedes y.
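For comparison only (this note is not part of the Maple help page), the behaviour described above is close to what C++'s std::lower_bound computes for an ascending range; tie-handling for elements comparing equal to x may differ from BinaryPlace:

```cpp
#include <algorithm>
#include <vector>
#include <iostream>

int main() {
    std::vector<int> L{1, 5, 7, 8, 10};

    // Number of elements strictly preceding 6: analogous to BinaryPlace(L, 6) = 2.
    auto it = std::lower_bound(L.begin(), L.end(), 6);
    std::cout << (it - L.begin()) << '\n';  // 2

    // Inserting at that position keeps the vector sorted, mirroring the
    // "still a sorted list" property in the description above.
    L.insert(it, 6);
    for (int x : L) std::cout << x << ' ';  // 1 5 6 7 8 10
    std::cout << '\n';
}
```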
Examples
> $\mathrm{with}\left(\mathrm{ListTools}\right):$
> $L≔\left[1,5,7,8,10\right]:$
> $n≔\mathrm{BinaryPlace}\left(L,6\right)$
${n}{≔}{2}$ (1)
> $\left[\mathrm{op}\left(1..n,L\right),6,\mathrm{op}\left(n+1..-1,L\right)\right]$
$\left[{1}{,}{5}{,}{6}{,}{7}{,}{8}{,}{10}\right]$ (2)
> $\mathrm{BinaryPlace}\left(\left["mac","made","magpie","mail"\right],"magic"\right)$
${2}$ (3)
> $\mathrm{BinaryPlace}\left(\left[0,{\mathrm{sin}\left(1\right)}^{2},1\right],\mathrm{exp}\left(-\frac{1}{10}\right),\mathrm{verify},\mathrm{less_than}\right)$
${2}$ (4)
> $\mathrm{BinaryPlace}\left(\left[\left\{4\right\},\left\{1,2,4\right\},\left\{1,2,3,4\right\}\right],\left\{2,4\right\},\mathrm{subset}\right)$
${1}$ (5)
An example with a reverse-sorted Array. Note that the eight elements of this Array are indexed with the numbers $-2$ up to $5$.
> $A≔\mathrm{Array}\left(-2..5,\left[173,157,101,21,17,-3,-33,-62\right]\right)$
${A}{≔}\left[{173}{,}{157}{,}{101}{,}{21}{,}{17}{,}{-3}{,}{-33}{,}{-62}{,}{\text{⋯ -2 .. 5 Array}}\right]$ (6)
By supplying $\mathrm{>}$ for f, we get BinaryPlace to understand the reverse ordering.
> $\mathrm{BinaryPlace}\left(A,0,\mathrm{>}\right)$
${2}$ (7)
We find that the elements $-2$ up to $2$ are the positive ones, $3$ up to $5$ are negative.
> $\mathrm{convert}\left(A\left[..2\right],'\mathrm{list}'\right)$
$\left[{173}{,}{157}{,}{101}{,}{21}{,}{17}\right]$ (8)
> $\mathrm{convert}\left(A\left[3..\right],'\mathrm{list}'\right)$
$\left[{-3}{,}{-33}{,}{-62}\right]$ (9)
Compatibility
• The ListTools[BinaryPlace] command was updated in Maple 18.
• The L parameter was updated in Maple 18. |
# Ctrl + Left Arrow is used to
Moves the cursor to the beginning of the line
Moves the cursor one word left
Moves the cursor one paragraph up
Moves the cursor one paragraph down |
I thought the process would be simple enough, but either I’m doing something wrong or it can’t be done (yet).
I have a small section with two columns at the beginning of a chapter. I would like to put some introductory text in the first column, and a small table of contents in the second column.
When I try, either by copying an existing table or by creating a new one, the TOC stretches across the full width of the page, rather than staying inside the column. This is true even if I try it on a new unstyled document, so I don’t think my styles are to blame.
If I try to do the same thing using tables, it all behaves as expected.
Any ideas?
Mark
edit retag reopen merge delete
Closed for the following reason the question is answered, right answer was accepted by Alex Kemp close date 2015-11-01 20:09:28.850815
Sort by » oldest newest most voted
You are not really doing something wrong, but you are likely not realising that a table of contents (ToC) is effectively a section in LO. This is what the XML code for a ToC looks like:
<style:style style:name="Sect1" style:family="section">
...
</style:style>
...
<text:table-of-content text:style-name="Sect1" ...>
...
</text:table-of-content>
You could certainly try and embed a ToC within an existing section, but it doesn't actually end up being set within it in the XML. Thus you are faced with having to style the ToC separately anyway.
If this small leading section is only going to contain a brief piece of text on the left and the ToC on the right and the following content is not in the same section, I would simply use the ToC and dispense with the section. Insert your ToC and set it to use two columns and be editable. At the beginning of the first column insert a column break. Then type your content in the left column as per usual. Here is an example.
more |
# LRC Circuit Problem
## Homework Statement
An oscillator producing 10 volts (rms) at 200 rad/s is connected in series with a 50 Ω resistor, a 400 mH inductor, and a 200 μF capacitor. The rms voltage (in volts) across the inductor is
## Homework Equations
Xc=1/wC, Xl=wL, Vrms= Irms(Z), Z=(R^2 +(Xl-Xc)^2)^(1/2)
## The Attempt at a Solution
I know I am supposed to make an attempt here but I really have absolutely no idea what to do. All I could do was solve for Z and got Z= 74.33Ω. Please Help! Thanks.
gneill
Mentor
## Homework Statement
An oscillator producing 10 volts (rms) at 200 rad/s is connected in series with a 50 Ω resistor, a 400 mH inductor, and a 200 μF capacitor. The rms voltage (in volts) across the inductor is
## Homework Equations
Xc=1/wC, Xl=wL, Vrms= Irms(Z), Z=(R^2 +(Xl-Xc)^2)^(1/2)
## The Attempt at a Solution
I know I am supposed to make an attempt here but I really have absolutely no idea what to do. All I could do was solve for Z and got Z= 74.33Ω. Please Help! Thanks.
If you know how to deal with complex numbers then an easy approach is to work with the complex impedance, current, and voltages. You can then treat all the components just as you would resistors using Ohm's law and so forth.
Thanks for the response. Unfortunately, I do not know how to work with complex numbers. Do you have any other suggestions?
You are almost there....you have the equations you need.
I also get Z = 74.3 ohms so now you can calculate the current. It is a series circuit so the current is the same through each component. It should be straightforward to calculate the 3 voltages
Thanks. I took your advice but I still cannot get the correct answer. I calculated 6.7 volts being dissipated across the resistor and concluded that the inductor would have to be less than 3.3volts. This answer choice was wrong however. The answer choices are 6.7V, 2.5V, 3.4V, 10V, and 7.6V. 10V and 2.5V are incorrect.
gneill
Mentor
Thanks. I took your advice but I still cannot get the correct answer. I calculated 6.7 volts being dissipated across the resistor and concluded that the inductor would have to be less than 3.3volts. This answer choice was wrong however. The answer choices are 6.7V, 2.5V, 3.4V, 10V, and 7.6V. 10V and 2.5V are incorrect.
Did you multiply your apparent current by each of the component impedances (R, XL, XC)?
Yes I did. However, I did not get any of the possible answer choices when I did V=Irms * Xl
gneill
Mentor
Yes I did. However, I did not get any of the possible answer choices when I did V=Irms * Xl
What values did you get for each of the voltages?
I got V(resis)= 6.73V, V(ind)= 10.78V, V(capac)=3.36V. Maybe my current is wrong. I got Irms= 0.134589502.
gneill
Mentor
I got V(resis)= 6.73V, V(ind)= 10.78V, V(capac)=3.36V. Maybe my current is wrong. I got Irms= 0.134589502.
Those values all look fine for a series RLC with the parameters that you've specified.
If 10.78 V is not a choice for the inductor voltage and the selection of closest value to that is deemed incorrect, then it is possible that the question has been altered at some point (to make it a "new" question) without updating the answer key.
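For anyone who wants to re-check the arithmetic, here is a small added sketch (not part of the original thread) that reproduces the numbers discussed above:

```cpp
#include <cmath>
#include <iostream>

int main() {
    const double Vrms = 10.0;      // source, volts rms
    const double w    = 200.0;     // rad/s
    const double R    = 50.0;      // ohms
    const double L    = 0.400;     // henries
    const double C    = 200e-6;    // farads

    const double XL = w * L;               // 80 ohms
    const double XC = 1.0 / (w * C);       // 25 ohms
    const double Z  = std::sqrt(R * R + (XL - XC) * (XL - XC));  // ~74.3 ohms
    const double I  = Vrms / Z;            // ~0.135 A rms

    std::cout << "Z  = " << Z      << " ohm\n";
    std::cout << "VR = " << I * R  << " V\n";   // ~6.73 V
    std::cout << "VL = " << I * XL << " V\n";   // ~10.8 V
    std::cout << "VC = " << I * XC << " V\n";   // ~3.36 V
}
```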
Yep. It seems that way. Thanks so much for all of your help. I really appreciate it.
gneill
Mentor
As a point of interest, if the input voltage of 10 V was a peak value rather than an rms one, then the resulting rms value on the inductor would be 7.61 V. |
# Reference for hypersurfaces
• A
Hi, I am studying Hypersurfaces and the intrinsic/extrinsic geometry from http://www.blau.itp.unibe.ch/newlecturesGR.pdf with the aim of understanding the Hamiltonian formalism of GR. Although interesting, the notions introduced in these notes lack mathematical rigor.
I am looking for a text which introduces these topics with the full topological structure (written by a mathematician would be better). Thanks.
andrewkirk
Homework Helper
Gold Member
When frustrated by the lack of mathematical rigour in GR texts I turned to John Lee's 'Riemannian Manifolds', which is a mathematical text that is, as expected, rigorous and (in my opinion) reasonably easy to follow. I think it was somebody here on physicsforums that recommended it to me.
There is a prequel by Lee: 'Smooth Manifolds', which may also be useful, depending on what notions you need to use. I have not felt the need to buy it yet but the result is that I am a little undercooked on vector field flows and Lie derivatives.
The downside of Lee's book is that it doesn't specifically address Pseudo-Riemannian manifolds, which is what are used in GR. In many cases the distinction doesn't matter. But sometimes it does and then you have to adapt Lee's proofs to the Pseudo-Riemannian case yourself. Unfortunately I don't know of any mathematical texts (ie not by physicists) that address Pseudo-Riemannian manifolds. Perhaps others can suggest some.
Ravi Mohan
martinbn
I haven't looked in the notes, what exactly do you find non-rigorous? Have you looked at O'Neill's book on semi-Riemannian geometry?
Last edited by a moderator:
Thanks andrewkirk, I will look into John Lee's 'Riemannian Manifolds'
I haven't looked in the notes, what exactly do you fine non-rigorous? Have you looked at O'Neil's book on semi-Riemannian geometry?
For instance the author defines an embedding as a map
$$\Phi: \Sigma =\Sigma_n \hookrightarrow M_{n+1}$$
Now this definition raises questions such as
• What is the nature of this map (homeomorphism or diffeomorphism)? Wikipedia gives a more rigorous definition of $\Phi$ as a homeomorphism onto its image.
• The author says that this embedding is represented by the parametric equations $\Phi:x^{\alpha}(y^a)$. How are these equations actually composed? I can only think of them as $y^{\alpha}(\Phi^{-1}(\text{something with }x^a))$, where $y^{\alpha}, x^a$ are charts of $M,\Sigma$ respectively.
I think the author does not care about the structures like topology and even manifold in the notes.
The references mentioned in Wikipedia are https://en.wikipedia.org/wiki/Embedding#References, but I was not sure which one to follow.
I will also look in O'Neill's book on semi-Riemannian geometry. Thanks again.
martinbn
For the first one, he actually explains what he means by an embedding, at the end of page 308 and the next page. About the notations it should be clear from the examples that he gives.
For the first one, he actually explains what he means by an embedding, at the end of page 308 and the next page. About the notations it should be clear from the examples that he gives.
I understand what he is explaining and I think it should be enough to serve the purpose (which is understanding the canonical GR). But again it would be nice to have a complete formal definition at one's disposal (like the way Carroll does in his notes).
About the map $\Phi$, the author says
Strictly speaking, such a map is called an (injective) immersion, while an embedding has to satisfy a slightly stronger topological condition, but since we are not concerned with global issues, and since I have not even tried to define what a manifold is (beyond the remarks in section 4.11), it would be ridiculous to worry about such things here and this is more than good enough.
So I was wondering what is the stronger topological condition.
martinbn
O'Neill might be to your taste, but at times it is about geometry that need not be relevant to relativity. You could perhaps read it selectively.
Ravi Mohan
So I was wondering what is the stronger topological condition.
The stronger topological condition is that the immersion ## f: M \rightarrow N## is an embedding if ##f ## is a homeomorphism of ##M## onto ##f(M)## equipped with the subspace topology. This is the case whenever ##M## is compact, for example.
Last edited:
Ravi Mohan
George Jones
Staff Emeritus |
# All Questions
### Pricing employee stock options
ESOs are typically priced using the black-scholes model, but with an additional parameter for the employee turnover rates. An example ...
### order routing for a fill
Let's assume FIFO rules in futures: I buy a contract and I'd like to sell it. Should I estimate the possibility that orders on the opposite side would be filled first? If I watch new orders incoming at new ...
### Boundary conditions of PDE from SV model with stochastic interest rate
The PDE for the American put option price $P(S,\sigma ,r,t)$ is \begin{align*} 0 =& P_t+P_SS(r-\delta)+P_\sigma a(\sigma)+P_r\alpha (r,t) \\ +& \frac{1}{2}P_{SS}S^2\sigma ^2 + ...
### How to select optimal look back period for statistical arbitrage?
Is it possible to estimate the optimal look back period for OLS from which we test if residuals are stationary? Almost all papers that I read use random look back periods of 100 days, 252 days, 500 ...
There are uncountable many factor models to estimate stock returns, such as CAPM, FAMA-FRENCH etc. Which models can estimate the market (index) return? I found only three models: Cay, Dividends and ...
### Is there a Bloomberg field for the first trading date after an event?
For example, if a company reports earnings today after the close (6/24), the earnings date would be 6/24 but the field I'm looking for would be 6/25. If they reported tomorrow before the open, both ...
### Model a floating rate BBB yield curve
Background: We want to design a compensated prepayment liability index to define an amount a bond buyer would need to receive in a redemption prior to the nominal maturity of a bond. Ideally we'd ...
### Bridgewater's Daily Observations
Bridgewater Associates send out Daily Observations to their clients, but I haven't found many traces of these publications online. The series started some 40 years ago by Ray Dalio, and there're just ...
### Local volatility parametrization using the spot
Is it possible to estimate the local volatility using the spot price S at time t instead of the strike price K and the expiry date T ? Any help would be appreciated.
### Acquiring large sets of price series
Selling and delivering real-time data seems to be the focus of practically all large data vendors, but I am more interested in acquiring large sets of historical daily data covering, say, 5.000 ...
### Does price of american (put) option exhibit smooth pasting in time direction under B-S model?
Let us consider the BS model and let $f(s,t)$ denote the price of an American put option with $t$ to expiry, then it is known the solution of the optimal stopping (when it is risk neutral) related to ...
### Hedging portfolio and extraction PDE of SV model with stochastic interest rate
How can I extract this PDE \begin{align*} 0 =& P_t+P_SS(r-\delta)+P_\sigma a(\sigma)+P_r\alpha (r,t) \\ +& \frac{1}{2}P_{SS}S^2\sigma ^2 + \frac{1}{2}P_{\sigma ...
### Have Goldman Sachs Quantitative Strategies Research Notes been published as a book or a comprehensive collection?
Back in the 90's, Goldman Sachs (publicly?) released a series called "Quantitative Strategies Research Notes" — mostly technical papers on topic. Emanuel Derman co-authored almost all of them. Some ...
### Seasonal patterns in financial markets (weekday effects)
What seasonal patterns are there in financial markets? Is my feeling "true" that Mondays are more volatile than e.g. Tuesdays (as information gathered during the weekend can only be turned into an ...
### Confidence Intervals of Stock Following a Geometric Brownian Motion
In preparation for my Options, Future's and Risk Management examination next week, I have been presented with a series of questions and their answers. Unfortunately, my lecturer, one of the less ...
### Price of an American call option [closed]
I'm working through revision questions at the moment and we are asked to compute the price of an American call option. Suppose that $dS_t = \sigma S_t dW^*_t, S_0 >0$ Let $0<U<T$ be fixed ...
### how does a bond maturing affect the pricing of the corresponding CDS?
if a bond matures, and there is no other existing bond from the legal entity that has not matured. Then how does that affect the CDS that corresponds to that bond?
### How to use calibrated Standard Stochastic Volatility?
I'm considering the standard stochastic volatility model: $$x_t = \rho x_{t-1} + \sigma \epsilon_x$$ $$y_t = \beta \exp\left[ \frac{x_t}{2} \right] \epsilon_y$$ where $y_t$ is the log-returns and ...
### Measuring Volatility from Execution Prices
I was told of a way of measuring the volatility of a stock by looking at the reported execution prices (from Level III or Level II data.) I'm well aware of how to measure volatility by looking at the ...
Lets have the next jump difussion Stochastic Process: $$S_t = S_0 e^{\sigma W_t + (v-\frac{\sigma ^2}{2})t}\prod_{i=1}^{N_t}(1+J_i)$$ where $W_t$ is the Brownian Motion, hence $G_t \equiv e^{\sigma ...
### Extracting Signal from Noisy Data
Consider a scenario in which Y_t represents the % change in price and we want to use X_t to predict Y_t. We assume that X_t is information we get before Y_t is revealed. Suppose that in reality Y_t ...
### Sharpe ratio and leverage
Does leverage affect the Sharpe ratio? If my Sharpe is 2 at no leverage goes it change, fall by half say, at no leverage?
### Reference Request: Horse Race for Portfolio Allocation
Probably the most popular horse race study for portfolio strategies is Optimal versus Naive Diversification: How Inefficient Is the 1/N Portfolio Strategy?, with DeMiguel, L. Garlappi and R. ...
### LIBOR with different tenor
Let $F(t;S,T)$ be the forward rate from $S$ to $T$ seen at time $t$, and $I$ be one of tenors, i.e. $I$ is one of {1M, 3M, 6M, 12M}. Then the forward curve $t\mapsto F(0;t,t+I)$ is $I$-forward curve. ...
### Does heteroskedasticity of returns depend on the time frame?
Similarly to my last question, for which I obtained very interesting and useful answers, I would like to know if there has been any study regarding heteroskedasticity and time-frames of the returns. ...
### Parameters variation in fundraising financial model
I have created quite a large financial model in Excel with lots of input parameters which (after all calculations) have an influence on the output business indicators. Among the input parameters are ...
### Multi-asset class allocation
How to allocate asset classes in a multi-asset portfolio? An institutional client needs to meet his pension liabilities, and suggested a multi-asset-class strategy. I'm trying to find ideas to pitch. ...
### Option Pricing Model Calibration In Practice
I'm curious how an option pricing model like the Heston model is calibrated in practice. Here's how I imagine it happens: Let's say I have access to the most recent option prices on a given stock ...
### Residual Covariance Matrix, and MVO for Residual Variance and Alpha
My overall goal is to find an efficient frontier using QP in terms of $\alpha$ and residual variance ($\omega^2$) for a portfolio $P$ given a benchmark $B$. We know the equation for residual variance ...
# Getting Started With GTest (C++) with Xcode!
This is an inaugural post as well as a useful little tutorial on how to setup G-Test for C++ with Xcode. There are some other tutorials out there, but I figured I'd go into specifics for the complete beginners out there.
First you're going to need to check out the code for google test. Either:
or
Once you have it, go into the xcode/ directory and open up the Xcode project. So, since Google doesn't seem to update C++ support for Xcode very often, the config of the Xcode project is a bit outdated, so we're going to have to manually update it. What worked for me was to just comment out SDKROOT and GCC_VERSION in Config/General.xcconfig and to update MACOSX_DEPLOYMENT_TARGET = 10.7 (Note: This might not be necessary if you're using an older version of Mac OsX or Xcode, but assuming you're using the latest, you'll probably need to do this.)
So that's it. Build it by hitting the big Run button. With these changes, the build should succeed. Go to the directory where you built it (if you don't know where that is, check Xcode->Preferences->Locations->Derived Data, that's your build directory). That's your Framework.
Now the interesting part: using it. Besides this tutorial and Google's intro to their framework, you can also use IBM's intro to the framework, which is pretty useful:
For the purposes of this intro and for a little tie in to the next tutorial, we're going to be doing a bit of test driven development. If you don't know TDD, it's when you actually create the tests first for your use cases (the various functions of your app), before you even write the code. Then you write the least amount of code possible to make your tests pass. So to start, all your tests are failing. Weird eh? The point of this is to make very concise functional code that has large test coverage through each iteration of development. Anyways, so I'm actually creating a small version of the dc function of your terminal. If you don't know what it is, check it: http://linux.about.com/library/cmd/blcmdl1_dc.htm . It's basically a stack-based calculator (if you're REALLY a beginner and don't know what a stack is: http://en.wikipedia.org/wiki/Stack_(abstract_data_type) ).
Anyways, let's get started:
In Xcode create a New Project. For the purposes of this application I'm going to use Shell (Or Command Line) application. For most testing suites you'll be using this anyways. You're going to need to add the gtest.framework that you built to this project:
right click -> add to this project -> go to your build directory and choose gtest.framework
Note: If you can't see your build directory. Copy the stuff in your build maybe to where you're keeping your code which isn't a hidden directory and select it from there.
Make sure the framework was added under the Build Phases tab under Link Binary With Libraries. That's it for setup. Now for some coding!
Create a new file tests.cpp and:
#include "gtest/gtest.h"
Since we're going to be exercising a little bit of our Comp Sci skills we're actually going to implement a stack when we write the actual code, so let's test the functionality of this stack and the basic adding of numbers to the stack. To do TDD we do need the method calls without the implementation so essentially we're just going to make stack.h.
template <class T>
class Stack {
public:
Stack(void);
~Stack(void); //Destructor
int empty(void);
int push(T &);
T pop(void);
T peek(void);
private:
int top;
T* nodes;
};
And then the tests in tests.cpp:
#include "stack.h"
#include "gtest/gtest.h"
TEST (StackTest, PushAndPeek) {
Stack<int> intStack;
int a = 12;
int b = 15;
EXPECT_EQ (12, intStack.push(a));
EXPECT_EQ (15, intStack.push(b));
EXPECT_EQ (15, intStack.peek());
//make sure adding in LIFO Order
EXPECT_EQ (15, intStack.peek());
//Should still be there
}
TEST (StackTest, PushAndPop) {
Stack<int> intStack;
int a = 12;
int b = 15;
EXPECT_EQ (12, intStack.push(a));
EXPECT_EQ (15, intStack.push(b));
EXPECT_EQ (15, intStack.pop());
//make sure adding in LIFO Order
EXPECT_EQ (12, intStack.pop());
//Should have removed 15, then removed 12
EXPECT_EQ (-1, intStack.pop());
//Should return -1 because there is nothing on the stack
}
Note: The reason I'm assigning a = 12 and b = 15 is because Xcode will throw an error saying you cannot assign a temporary value to a non-constant lvalue of type int.
Also I'm going to make the main function in main.cpp, I'm going to allow for two options on starting the program: either run the tests or just run the program.
#include <iostream>
#include <string>
#include "gtest/gtest.h"
#include "stack.h"
using namespace std;
int main(int argc, char * argv[])
{
    string input;
    cout << "Hey there! Welcome to MiniDc!\n"
            "If you wanna run the tests, type in tests.\n"
            "Otherwise just hit enter to continue...\n";
    getline (cin, input);
    if(input == "tests"){
        ::testing::InitGoogleTest(&argc, argv);
        return RUN_ALL_TESTS();
    }
    // (the rest of the MiniDc calculator loop would continue from here)
    return 0;
}
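The post stops here, but to make the TDD loop concrete, here is one minimal sketch of a stack.h implementation that would let the two tests above pass. The fixed capacity, the -1 sentinel returned by pop()/peek() on an empty stack, and push() returning the pushed value are my assumptions, not taken from the original post:

```cpp
// A minimal sketch (assumptions: capacity 100, -1 sentinel, numeric T as in the tests).
template <class T>
class Stack {
public:
    Stack(void) : top(-1), nodes(new T[100]) {}
    ~Stack(void) { delete[] nodes; }
    int empty(void) { return top < 0; }
    int push(T &item) { nodes[++top] = item; return item; }   // returns the pushed value
    T pop(void)  { if (empty()) return T(-1); return nodes[top--]; }
    T peek(void) { if (empty()) return T(-1); return nodes[top]; }
private:
    int top;   // index of the current top element, -1 when empty
    T* nodes;  // fixed-capacity backing array (no overflow check in this sketch)
};
```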
# How do you solve abs(2x+7)- abs(6-3x)= 8?
Apr 1, 2015
$x = \frac{7}{5} , 5$
Let's split the two absolute value expressions:
1) abs(2x+7)
2) abs(6-3x)
There are 4 possibilities:
- 1) can be positive when 2) is positive.
- 1) can be positive when 2) is negative.
- 1) can be negative when 2) is positive.
- 1) can be negative when 2) is negative.
We have to check all these possibilities to find the solution set.
Let's start checking:
$2 x + 7 \ge 0$
$6 - 3 x \ge 0$
Both should be satisfied for our first condition.
When we solve these inequalities:
$A : - \frac{7}{2} \le x \le 2$
We will need to remember that.
Since we assume both absolute values are positive:
$2 x + 7 - \left(6 - 3 x\right) = 8$
$5 x = 7$
$x = \frac{7}{5}$
We assumed that both absolute values are positive. For this to happen, $x$ must have a value in $\left[- \frac{7}{2} , 2\right]$. (Look at $A :$)
$\frac{7}{5}$ is in the specified range. So it is a member of our solution set.
Let's continue our work. Our second possibility is: 1) is positive when 2) is negative.
So lets find the range of $x$
$2 x + 7 \ge 0$
$6 - 3 x < 0$
$B : x > 2$
$2 x + 7 - \left(- 1\right) \cdot \left(6 - 3 x\right) = 8$
$2 x + 7 + 6 - 3 x = 8$
$- x = - 5$
$x = 5$
$x$ is in the range $B : \left(2 , + \infty\right)$. So it is in our solution set.
Be patient, there are 2 possibilities left.
When 1) is negative, 2) is positive (we assume).
So:
$2 x + 7 < 0$
$6 - 3 x \ge 0$
$C : x < - \frac{7}{2}$
Solving under this assumption:
$- \left(2 x + 7\right) - \left(6 - 3 x\right) = 8$
$x - 13 = 8$
$x = 21$
But $21$ is not in the range $C : x < - \frac{7}{2}$, so this case gives no value of $x$ for our solution set.
Our final condition: 1) and 2) are both negative.
$2 x + 7 < 0$
$6 - 3 x < 0$
This would require $x < - \frac{7}{2}$ and $x > 2$ at the same time, which is impossible. Solving the equation anyway confirms it:
$\left(- 1\right) \cdot \left(2 x + 7\right) - \left(- 1\right) \cdot \left(6 - 3 x\right) = 8$
$- 2 x - 7 + 6 - 3 x = 8$
$- 5 x = 9$
$x = - \frac{9}{5} = - 1.8$
which satisfies neither inequality. So 1) and 2) cannot both be negative, and there is no value of $x$ from this condition.
So the solution set is:
$S S = \left\{\frac{7}{5} , 5\right\}$ |
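As a quick check (added here, not part of the original answer), both values satisfy the original equation:
$x = \frac{7}{5} : \left|2 \cdot \frac{7}{5} + 7\right| - \left|6 - 3 \cdot \frac{7}{5}\right| = 9.8 - 1.8 = 8$
$x = 5 : \left|2 \cdot 5 + 7\right| - \left|6 - 3 \cdot 5\right| = 17 - 9 = 8$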
# We should blame new people less and look more at our own faults
I read posts about how we can harder punish people for "abusing" the system, as if all people around here are intrinsically bad. They abuse the system so we need to punish them harder and all problems will disappear.
But here is my claim. People often don't know why they get downvoted. I once got downvoted when I was new without knowing why. I thought I wrote a bad question (too trivial) but this was actually not the case. I didn't know I needed to show the work I had already done on the website, because I assumed that most people here would like to answer questions to earn points. A similar person that made this assumption can be found here, and I could give a long list of examples. If you start talking with them, you see that in at least 50% of the cases they really want to put in effort, but fail for a diverse set of reasons: they don't speak the language well enough, they didn't know they should have shown their effort while they actually did make an effort, some didn't know that lazy people were not allowed on this website, ... .
If we make clear policies and really punish the lazy people that should be punished, we will be more effective. We should make clear policies in order to prevent people from answering questions from people that showed no effort, because people tend to believe here that this is the reason why they keep coming back (and there might be some truth in this for the big abusers). However, so many people really want to collaborate and learn but don't get proper reasons why their question got downvoted so they think they wrote bad trivial questions which is often not the case.
I want to introduce a system in which new people get better information about what they should do when posting their first post. They should press a check mark that they have shown enough effort (in the form of saying what they already tried, thought of, where they are stuck, ...) when posting their first post AND, importantly, we should write that their question WILL NOT BE ANSWERED when they fail to show effort. Of course it can be that in reality some persons will still do this, but over time, the number of people who answer questions that lack effort will decrease. Especially people from the post-caution-banner era, where they needed to check a mark, will not answer those questions because they know it violates the rules. You should not blame people for breaking the rules if the rules are somewhat hidden.
We should blame ourselves for not showing enough effort ourselves to make the rules clear to new users, and fail to warn them for the consequences of abusing the system. We should blame ourselves, not the new users.
I believe we should give this idea a try. Believers in the idea really believe that this will reduce the amount of questions with no effort shown drastically, if the policy is correctly implemented (> 50% reduction in bad questions). To those initially opposed to the idea I would say, we can always give it a try. We have nothing to lose with this policy, only to win. If it works, and we truly believe it will, this will reduce the amount of posts which show a lack of effort drastically. I can't imagine that anyone can be against a policy that at its best can have a huge positive impact but by no means have a negative impact on the community.
• I have read a few of your post here on meta and don't disagree but helping new people understand via positive examples isn't well received. For example, I suggested an alternative to triage that would show new user well written question here but the reception was not great. I was surprised but then I realized many people are glass half empty on SE and steady of half full. – dustin Mar 15 '15 at 14:27
• "write that their question WILL NOT BE ANSWERED when they fail to show effort" An issue, the issue perhaps, is that we can write this, it'd be a lie though. – quid Mar 15 '15 at 14:53
• I don't understand why people helping understand via positive examples isn't well received. The only explanation I can think of is that the meta people, should, in general, be less well behaved than the average new poster on this website. As long as no proper argument is given against these proposels, I need to conclude that not the new people are intrinsically bad, but the majority of meta people (to use their wordings and method of thinking) – Pedro Mar 15 '15 at 14:53
• @quid indeed it can be a lie, however we can fight against this in order to let it become a truth and not a lie anymore. – Pedro Mar 15 '15 at 14:54
• Just FYI, the linked example is now deleted, so visible only to 10k+ users. Only the good version of that question remains. (Which is good for the asker.) – Daniel Fischer Mar 15 '15 at 14:57
• The point isn't to "punish" people. The point is to control what gets posted on here to improve the quality of the site. – Qudit Mar 15 '15 at 15:13
• @Qudit Why is the only thing you can think of controlling? What is bad about guiding people? You cannot controle an immense flow of posts of new users that join this website every day, believe me. The only thing you can do is inform them, and show them the consequences of not following the rules (e.g. not getting an answer). If people still answer you need to "control" the answers and downvote people that answer to questions that have neglected the given rules. I truly believe that my way of controlling is more effective, and in the end leads to less spam on this website. – Pedro Mar 15 '15 at 15:19
• @Pedro I never said I was against giving new users more guidance in addition to controlling bad questions. It's the responsibility of new users to make an effort to acquaint themselves with the standards of the community before posting though --- not the other way around. – Qudit Mar 15 '15 at 15:25
• You are overly optimistic Pedro... People who have a lot of reputation got there by answering tons of questions, often regardless of the quality of the question. Some of them even answer closed questions in the comments to bypass the system. The result? Math.SE is known throughout the internet as the place to get other people to do your homework for free. Users sometimes literally post a blurry picture of their assignment taken with their phone and still get an answer. And why shouldn't they? The end result is a complete win for them. – Najib Idrissi Mar 15 '15 at 15:27
• @Pedro It's expected in most communities that one should read the FAQ and other documents before posting. This isn't something unique to Math.SE. – Qudit Mar 15 '15 at 15:30
• But "we" cannot agree, Pedro. It's been discussed many times, and the result is always the same. These users don't care. They answer terrible question, they knowingly answer duplicates (sometimes they literally copy their previous answer, letter for letter!), they answer closed questions in the comments... (But as I was typing that I saw your newer comments, and this discussion is starting to get completely derailed, so I'm not sure if I should continue) – Najib Idrissi Mar 15 '15 at 15:36
• For whatever interest it may add to the discussion, I wrote a query to find the first posts of new users; sampling the results a little, it looks to me that about half of new users* actually do get the point on their first try - it is clearly not so unreasonable to expect that people actually read the existing documentation before posting as enough people live up to that expectation. (*Extrapolating from a sample $n=4$. Don't ask me about confidence intervals or $p$ values, please) – Milo Brandt Mar 15 '15 at 18:26
• @Pedro I think you'd have a lot less opposition/downvotes to this if you didn't use language like "We should blame new people less and look more at our own faults", "We should blame ourselves for not showing enough effort ourselves to make the rules clear to new users, and fail to warn them for the consequences of abusing the system" and "Or wait, maybe you want a system where older users can abuse the system, and is this the reason you don't agree? Just wondering." This is without even bringing up the other two posts you deleted. – Qudit Mar 16 '15 at 0:04
• As I read the comments, what I see here is an attempt to paint with a broad brush "older" users who are bent on suborning cheating in order to fatten their already generous reps. I call bullshit. There are people who are so desperate for rep that they temporarily lose judgment and help cheaters. But you'll also find that people with large rep earned it by numerous positive contributions to the site. – Ron Gordon Mar 16 '15 at 2:34
• Something like this should also be implemented on meta posts: users should not be able to post proposals without passing a 30 question test to prove that theey have read all the relevant past meta posts on the subject at least twice... The level of noise would be improved muchly! – Mariano Suárez-Álvarez Mar 16 '15 at 3:11
"I can't imagine that anyone can be against a policy that at its best can have a huge positive impact but by no means have a negative impact on the community."
The arrogance of this statement reveals the naïveté of its creator, however well-intentioned. What in the OP's experience makes him believe this with such certitude? Part of the reason that Math.SE has grown the way it does is the opportunity afforded to both those who answer and those who ask questions, however bad. That growth has afforded Math.SE investment in the site to make it better. Fewer questions, however bad, mean fewer answerers, less growth, and less investment. There's your means.
That all said, I sympathize. We have created a nasty little prisoner's dilemma with regard to answering questions on the site. (Hey, nobody should answer this horrible question showing zero effort, but I know that so-and-so is going to answer and get the +15 acceptance for little effort, so why shouldn't I?) Nevertheless, the system as we have it works as well as any other. In short, rather than change the system, put the onus on those who think they will profit from answering such questions. They should know better.
Some thoughts:
• It is not our job to enforce any school's honesty policy, although we are also not other peoples' slaves. So some schmuck answers a ZEQ (Zero Effort Question) because...rep. I think the community tries to make it not worthwhile for such a person to answer them in the future. People who answer clear ZEQs may score an acceptance, but with that they quickly get downvotes. I have seen this and it is the correct response. Whatever gain in rep obtained will be more than offset in the loss of upvotes in the question tags. (Protests that the answer is useful ring hollow because they contribute to lower standards for the site.)
• Not all apparent ZEQs are pleas to do homework, but are just hard problems that the OP would like to see done. This is especially true in the , , and tags. Such questions are welcome here because they bring in a lot of traffic and spur interesting discussions.
So, yes, let's look at our own "faults." In short, self-policing is how the community thrives, and I think we do a pretty good job of it. Sure we get a lot of bad questions and answers...but this is, you know, the Internet. The downvote and comment tools are our means of maintaining quality. Use them.
• Thank you for this in-depth answer. I was probably a bit naive also. And as you pointed out, actually we do a pretty good job at self-policing, and maybe I should have given meta more credit for that. I am also happy that new users are now better guided due to the new tour page. – Pedro Mar 16 '15 at 12:03
• @Pedro: Thanks for taking the blunt remarks with good humor. Not everyone can do that. – Ron Gordon Mar 16 '15 at 12:09
• @RonGordon You make some good points, but I think the proliferation low quality copy/paste questions on this site shows that current efforts are not sufficient. As for answers to such questions being downvoted, I don't see this happen often. It seems to me that the questions are what gets downvoted. In fact, answers to such questions are often upvoted. Part of the problem is also that every upvote is worth 5 downvotes, so it takes a lot of downvotes to make it not worthwhile to answer. – Qudit Mar 16 '15 at 12:38
• @Qudit: The reason we don't see more of it is that people may generally be unaware when a ZEQ gets answered. Of course, we can vote to close faster. But when people are aware of an answer, there are downvotes. And rest assured, those downvotes hurt. Let's say the answerer got an accept and an upvote from the OP, +25 rep. Maybe he gets away with it. But if he doesn't, then there will be three, four downvotes. Yeah, the guy got a +17 rep, but is that really worth a glaring "-3" next to your answer, along with a few hostile comments? I doubt it. – Ron Gordon Mar 16 '15 at 12:41
• @RonGordon That's a good point about the disapproval implied by downvotes. I definitely agree about downvotes being painful since I have received them myself when I wrote bad answers (not to ZEQs). However, I think that answers to ZEQs are rarely downvoted. Answers are usually only downvoted if they contain glaring inaccuracies based on what I see. – Qudit Mar 16 '15 at 13:03 |
My boyfriend wants to pay the same percentage out of each of our salaries for rent (eg; 18% of each person's salaries go to rent.) What would that amount be according to our salaries?
Using x as the percentage that would be the same for each person, we need to look at yearly rent vs yearly salary
104,000*x + 45,700*x = 19,200
149,700*x = 19,200
x = 19,200 / 149,700 = 0.12826 = 12.83%
Edit: updated the correct rent amount.
Rent is 19200 per year. Your boyfriend makes 104/45.7 in terms of ratio compared to you. Let's call what you must pay x. Then x + x*104/45.7 = 19200. (If you struggle to see where this comes from, think of this as if your boyfriend earned twice your salary then you would have had x + 2x = 19200).
Solving for this means that x = 5861 is what you must pay annually. Multiply that with 104/45.7 for what your boyfriend must pay annually.
Divide both results by 12 to see what you must each pay monthly.
Just in case you want to re-calculate when one of these numbers change, or you decide you want to use post-tax numbers.
If your boyfriend makes B a year, you (OP) make O a year and you pay a combined R in rent a month.
Boyfriend Monthly Rent Payment = R*B/(B+O).
OP Monthly Rent Payment = R*O/(B+O).
In this case we get 1600*104/(104+45.7) = $1111.55. Your monthly rent payment would be the leftover, which is 1600 - 1111.55 = $488.45.
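If you'd rather not redo the arithmetic by hand every time the rent or a salary changes, here's a tiny added sketch of the same formula (the numbers are the ones from this thread):

```cpp
#include <iostream>

// Split a monthly rent R proportionally to two incomes b and o.
void split_rent(double R, double b, double o) {
    double boyfriend_share = R * b / (b + o);
    double op_share        = R * o / (b + o);
    std::cout << "Boyfriend pays: " << boyfriend_share << '\n';  // ~1111.56
    std::cout << "OP pays:        " << op_share        << '\n';  // ~488.44
}

int main() {
    split_rent(1600.0, 104000.0, 45700.0);
}
```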
First, imagine pooling all of your money together. $104k +$45.7k = $149.7k What percentage of the total is your bf's income?$104k/$149.7k = 0.695 = 69.5% What percentage of the total is your income?$45.7k/149.7k = 0.305 = 30.5%
So, to have rent be an equal percentage for both of you:
He would pay 0.695 * $1600 = $1112
You would pay 0.305 * $1600 = $488
This would mean both of you pay an equal 12.83% of your pre tax income.
$1112 * 12 / $104,000 = 0.1283 = 12.83%
$488 * 12 / $45,700 = 0.1283 = 12.83%
If you calculate what percent of your total household income is contributed by each of you, this percentage can be used to figure out the fair share no matter what the expense is.
Your total household income is just the sum of your income, so $104,000 +$45,700 = $149,700. Your boyfriend contributes$104,000 of the $149,700, which is about 69.5%. You contribute the difference, or 30.5%. So, to pay fair shares of a$1,600 rent payment, your boyfriend would pay 69.5% of it, or $1112, and you would pay your 30.5%, or$488.
Edit to add: I just rounded the percentage to the nearest tenth, to make it simpler. This means that the share isn't *exactly* proportional, but I feel like most people would agree it is close enough. If you prefer an exact proportion, just don't convert to a percentage at all. Multiply the rent payment by the original ratio of contributed money to total income. For example, an exact proportion would mean that your boyfriend would pay ($104,000 /$149,700) × $1600 =$1111.56. (This isn't entirely accurate either, because the actual number of cents is 55.6446, but of course you can't pay in fractions of a cent, so $1111.56 is as close as you're going to get to a perfect proportional split.) You obviously would pay the difference here, or$488.44. My approximation is simpler and within a dollar. If that isn't precise enough, using the exact ratio is as close as you are going to be able to get.
Your share is 45.7/(45.7+104.0) * 1600.
The other share is 104.0/(45.7 + 104.0) * 1600 |
Tag Info
New answers tagged circuit-design
I came here looking for the solution to this problem myself, and then realised the answer. With a normal binary comparator, all bits have a positive assigned value, bit 0 is 1, bit 1 is 2, etc. So, with 5 bits, bit 4 would be 16. Dealing with signed numbers, using the 2's complement, the last bit is just the negative value of whatever it should be, in this ...
The supply voltage is in the formula for calculating the op-amp output voltage, so it is not left out. But as the NTC and the fixed resistor use the op-amp supply and ground for the voltage divider, all it is needed to know is that the divider output will be within the supply voltages. The R1 value is calculated with a geometric mean instead of arithmetic ...
It means the components are drawn in the schematic and there are places reserved for them on the board, but during board manufacture these components are not assembled on the board. So they are not needed for basic functionality but if someone wants to try out a feature such as the RTC clock crystal it can be soldered there for experimenting and prototyping ...
It is quite common to have components which are not populated on a PCB during manufacturing. They may be omitted for several reasons: The design does not require the component but it could be added later for various reasons including noise immunity, stability, performance, tuning, etc. This is common in prototyping and early product versions. The schematic ...
I think $R_C$ is in the wrong place. You have accidentally created a first stage with gain in the tens of thousands, or whatever the open loop gain of the opamp is! Also 1: If you consider the push-pull stage to be a voltage amplifier with unity gain, then U2B with feedback via the potential divider Rf & Ra produce a total gain of: \begin{aligned} &\...
0
Not an expert here, but diodes D1 and D2 do not look like they are in the right place. You want something like this: Picture is from https://www.electronics-tutorials.ws/amplifier/amp_6.html. Take them out and run your simulation. If everything else is OK, you should see a sine wave on your output with a bit of discontinuity at the crossover regions.
0
Rc should be in series with C1 as in the first circuit diagram. As you have it in the simulation the first stage is acting as a differentiator. Insert a 2.2uF dc blocking capacitor in between Ra and ground. Add small (1 Ohm) emitter resistors in series with each of the emitters of Q1 and Q2 to take up the voltage difference between the double diode drop and ...
0
You are comparing the input sine voltage vs. the output pulsed voltage of discrete values; of course nothing useful can come of that. You have to put an LC low-pass filter on the output stage and take the feedback signal only after the LP filter.
0
The VCC voltage of 5V that biases Q1 and Q2 might be too low compared to the signal that drives the bases of Q1 and Q2 themselves. Try VCC = 15 V.
1
At 30 mA you can just power the buzzer from the 555's output (assuming the 555 is running from a suitable voltage)
1
You could still have thermal issues with derating the Zener in an enclosure above 25 °C, unless it is designed properly, such as by using two in parallel. Always derate power by 50% when you want it to run at 50% of the max temperature rise, as MTBF drops 50% for every 10 °C rise. If you operate at more than 5 V it will draw proportionally more current and fail sooner from ...
1
Based on the buzzer's datasheet maximum current and voltage you can find the maximum power needed: P = V * I = 8 V * 0.03 A = 0.24 W. I suggest simply using this schematic: with 24 V as Vcc you have 5 V on the buzzer, so 24 - 5 = 19 V across the resistor. With a 1 kΩ resistor you will get 19 mA flowing through your buzzer. If you limit the current then your buzzer ...
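A rough numeric check of that sizing (supply and buzzer voltages as in the answer above; the resistor dissipation line is an added sanity check, not something the answer states):

```python
supply_v   = 24.0     # V, supply voltage from the answer
buzzer_v   = 5.0      # V, assumed voltage across the buzzer
resistor_r = 1_000.0  # ohms, the 1 k resistor suggested above

drop_v  = supply_v - buzzer_v      # 19 V across the resistor
current = drop_v / resistor_r      # 0.019 A = 19 mA
power_r = drop_v * current         # ~0.36 W dissipated in the resistor

print(f"current = {current * 1000:.1f} mA, resistor dissipation = {power_r:.2f} W")
```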
5
In this case, a "strap" refers to a low-impedance connection between the substrate and a power rail, in order to provide a stable substrate voltage at the body of a MOSFET. If this impedance is too high, then a parasitic PNPN structure formed between the power rails can act as a parasitic thyristor under certain conditions, leading to a short ...
2
You could feed each Thermistor output into an input of an Analog 2:1 Multiplexer, and also into a Comparator. Then use the comparator output as the Select signal for the Analog 2:1 Multiplexer. Which thermistor goes into the positive terminal of the Comparator and which goes into the negative input of the Comparator depends on whether the thermistors are ...
0
The CD4053 has three independently controllable SPDT analog switches, with a global inhibit. One chip can handle all three modes for inputs 1, 2, and 4. Input 3 never is inhibited, so that would use one channel of a second CD4053, with the inhibit input grounded. https://www.ti.com/lit/ds/symlink/cd4053b.pdf?ts=1634601279843&ref_url=https%253A%252F%...
0
There is a lot to say about each component change I made to improve this design. But rather than explain how your circuit works, why it overstresses an LED, and why the frequency control of the pot is suboptimal, allow me to show a better solution with slight changes in values to reduce base drive currents and LED output powers from 10 W pulses to 50 mW ...
1
Let me tell you how I did it. I used an analog input port with a 1 megaohm resistor between +5V and the port. With zero load, it gives a pretty good high signal - over 1000. Then, I connected the port to a stainless steel probe, in a stainless container (this is for an automatic distiller), container is grounded. When the distilled water hits it, there is a ...
2
In the circuit below, focus on C1 and R1 to understand the differentiator behavior: Assuming $V_z$ always zero, $\frac{dV}{dt}$ at the capacitor is $\frac{1V}{10ms}$, so: $I_c = C \frac{dv}{dt} = 100 nF * \frac{1V}{10ms} = 10 \mu A$ Considering this current goes only through R1: $V_{out} = -10\mu A * 100k\Omega = -1V$ R2 and C2, with such small ...
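For reference, the same figures check out numerically (values taken from the answer above: C1 = 100 nF, a 1 V per 10 ms ramp, R1 = 100 kΩ):

```python
C1 = 100e-9           # F
dv_dt = 1.0 / 10e-3   # V/s
R1 = 100e3            # ohms

i_c = C1 * dv_dt      # 10 µA through the capacitor
v_out = -i_c * R1     # -1 V at the inverting stage output

print(f"I_c = {i_c * 1e6:.1f} uA, V_out = {v_out:.2f} V")
```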
0
For manufacturability, don't put those vias between pins 3/4. FWIW, to keep the power clean I would put a series L or ferrite in series to create an analog supply for the op amps. You could also have divided grounds by splitting the plane. In Altium this can be done with polygons or by drawing lines on the negative plane layer if you used that. The ground ...
1
Comments on the updated design: You want to minimize the length of the traces connecting the photodiode to the opamp to the feedback network. Move the opamp closer to the photodiode, even directly over it so that you can eliminate the boxed trace: Since you are putting components on both sides, consider putting C1/R1 on the backside of the board, directly ...
0
In this low frequency range, it is not usual to work with the 50 Ohm system. This is rather used in the RF range, in which you want to achieve maximum power transfer. So you have both 50 Ohm output resistors and 50 Ohm input resistors (the actual resistor value can be different, as long as they are all the same, e.g. in the 75 Ohm system). In your ...
1
You are mixing up two different things. Protective earth is the "ground" required for protection against electrical shock in high-voltage systems; in low-voltage systems, ground is used for protecting the system from EMI. Do not use the ground from the receptacle for that; a separate grounding system is used in labs, etc. In your case the system will work without ground. Just ...
0
The current-carrying parts of a domestic light switch must not be grounded - there should be no connection between those parts and the mounting bracket.
1
You have misread the datasheet. The ADC input does not have input impedance of 3500 ohms to ground. Your simulation has 3500 ohms to ground so a series resistor of 100k will form a resistor divider network and thus explains your simulation having low voltages at the ADC input. You can fix the simulation by just removing the 3500 ohm resistor.
1
R13, when pulled low, draws 10 mA with 23 V across it, or about 1/4 W at burning-hot temperatures. Change R13 to 10x R11. R18 turns off the LED until R19 is pulled high.
1
I figured it out. I had it grounded out because, on the seven-segment display, pins 3 and 8 are connected. I did not know this and connected pin 3 to VCC and pin 8 to ground. After cutting the connection of pin 3 to VCC, the PCB worked as needed!
2
When you are using a BJT as a switch (and not just as an analog amplifier), you will usually have a base current more than the minimum needed. That's not really driving it hard. For the 2N2222 in particular, the max base current is 200mA. See https://html.alldatasheet.com/html-pdf/15067/PHILIPS/2N2222/745/3/2N2222.html That would be absolute max. I wouldnt ...
4
It's intentional, this is a known design for an off-line switcher with buck topology. The internal supply of the chip charges the capacitor on the VDD pin with the voltage differential between the drain and GND (source) pin when the internal MOSFET switch is off. The capacitor will then continue to power the circuit long enough when the MOSFET is switched on ...
0
First, if the PE has to be disconnected, then also the power has to be disconnected at the same time. In order to find if your car has a leaky connection from the battery to the chassis you have to measure that before connecting the power and PE, first. simulate this circuit – Schematic created using CircuitLab Eval board tester from TI
2
4 layers is overkill, but these days it is pretty cheap and it saves time, so no problem with that. how can I send signals to the middle 2 layers in a 4 stack without the use of vias? When you use a standard thru via, it goes through all the layers. It will link all the layers where you connect a track to the via, and it will also connect to power/ground ...
1
This exercise is really good because it covers a lot of material in a seemingly simple question. Most of the work you did is good (except getting the sign of e wrong), but you missed a key aspect, as you'll see. Forgive me for redrawing the circuit with some extra labels, because it's going to make things a lot easier to refer to later. simulate this ...
1
You can try using a conductive adhesive (e.g. silver paint) I have tried it myself. Insert your wire in and after that, paint it from both sides at both entrances on the cloth and let it dry. I suggest: Silver paint 503 is a flexible, high-temperature conductive material designed for a wide variety of uses, and adheres to most substrates. Source: Electron ...
0
It's largely a matter of consistency. Note the top half of the schematic. It uses an AND/OR structure. The bottom half has a row of ANDs, mostly connected to XOR gates. In the case of the Cn+4, since the left-hand input comes from one of the AND gates, they continued the use of OR gates, even though it required negating the inputs.
0
I guess the 249 Ω resistors are used for a low-pass filter. You can look at section 10.2.1.2.7, Input and Reference Low-Pass Filters, in the ADS1248 datasheet.
1
simulate this circuit – Schematic created using CircuitLab That would be more correct. You should not bias all nodes with 680 Ohm, only the two ends, where the termination resistors are also connected. If you want a 120 Ohm characteristic-impedance termination, you should calculate the equivalent of all resistors, including the bias resistors. EDIT: ...
1
The basic difference between a MAX485 and MAX487 is the driver slew rate. There should be no difference to the number of nodes you can run with either of them. I've done hundreds of installations with MAX485/487 with around 16 nodes over 1km. You need to ensure the 0V is connected between all nodes. Also your biasing resistors of 680Ohm are way too low. With ...
3
Forget about the cable capacitance. Drive it source terminated through a $100\Omega$ resistor as I suggested in the other question. The cable looks like another resistor (about $100\Omega$) during the transient. At the far open-circuit end 10m down the cable, you will see a clean edge, just 50ns later. If the cable was 20m long, you would still get a ...
0
For long-distance and high-frequency signal transmission, you can try the following two methods: use shielded coaxial cable, or use a subsystem that transmits the control signal differentially, e.g. CAN or RS-485. Method 2 is recommended.
2
Well, they are quite overpowered for the job, but a MOSFET push/pull driver would do the work. Your voltage would need to be at least 5 V to make them work correctly. A couple of considerations: 10 m of twisted pair at only 1 pF seems a little low to me; are you sure it isn't more like 1 nF? The AVR GPIO is rated for 20 mA; 40 mA is the absolute maximum. Since you are ...
2
While the IC allows you to use differential input signals, it doesn't mean you have to. It's perfectly valid to just tie one side of a differential input to ground and drive the other side with a single-ended signal. Be careful with your current return paths, though, you don't want to have high power circuitry attached to the same ground as your sensitive ...
3
As these are cascadable, and the datasheet claims that they can drive 10m to the next one, why don't you simply put one pixel in your controller and turn it off. Use it to drive the 10m to the next one. If you want to make your own driver, then there are two things to watch out for: driving the capacitance of the cable to give you a reasonable risetime ...
2
You could have tested this idea out using a SPICE simulator. Just for fun, I ran this in LTspice (free and full featured SPICE simulator). R15 needs to be much lower in value due to capacitance from the FET and cable. Four runs were made with the cable capacitance at 1pF, 50pF, 500pF, and 5nF. At 500pF the signal is having a hard time getting above 2.7V and ...
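As a back-of-envelope companion to that simulation, the single-pole estimate t_rise ≈ 2.2·R·C shows how quickly the edge degrades as the load capacitance grows. The 10 kΩ below is only an assumed placeholder, since the answer does not give R15's actual value:

```python
R = 10e3  # ohms, hypothetical pull-up / series resistance (placeholder value)

for C in (1e-12, 50e-12, 500e-12, 5e-9):   # capacitances tried in the simulation
    t_rise = 2.2 * R * C                   # approximate 10-90 % rise time
    print(f"C = {C * 1e12:6.0f} pF  ->  t_rise ~ {t_rise * 1e6:.3f} us")
```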
# M04-12
Math Expert
Joined: 02 Sep 2009
Posts: 47030
16 Sep 2014, 00:22
Difficulty:
45% (medium)
Question Stats:
62% (01:03) correct 38% (01:19) wrong based on 214 sessions
A, B, C, and D are distinct points on a plane. If triangle ABC is right angled and BD is a height of this triangle, what is the value of AB times BC ?
(1) $$AB = 6$$
(2) The product of the non-hypotenuse sides of triangle ABC is equal to 24.
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 47030
16 Sep 2014, 00:22
Official Solution:
A, B, C, and D are distinct points on a plane. If triangle ABC is right angled and BD is a height of this triangle, what is the value of AB times BC ?
Since all points are distinct and BD is a height then B must be a right angle and AC must be a hypotenuse (so BD is a height from right angle B to the hypotenuse AC). Question thus asks about the product of non-hypotenuse sides AB and BC.
(1) AB = 6. Clearly insufficient.
(2) The product of the non-hypotenuse sides of triangle ABC is equal to 24 $$\rightarrow$$ directly gives us the value of AB*BC. Sufficient.
_________________
Intern
Joined: 04 Nov 2014
Posts: 1
30 Jan 2015, 14:55
Bunuel wrote:
Official Solution:
Since all points are distinct and BD is a height then B must be a right angle and AC must be a hypotenuse (so BD is a height from right angle B to the hypotenuse AC). Question thus asks about the product of non-hypotenuse sides AB and BC.
(1) AB = 6. Clearly insufficient.
(2) The product of the non-hypotenuse sides is equal to 24 $$\rightarrow$$ directly gives us the value of AB*BC. Sufficient.
How do you know that B is a right angle? Couldn't we also have BC or BA as a hypotenuse with BD still indicating the height of the triangle?
Math Expert
Joined: 02 Sep 2009
Posts: 47030
31 Jan 2015, 06:13
3
1
chieffarmer wrote:
Bunuel wrote:
Official Solution:
Since all points are distinct and BD is a height then B must be a right angle and AC must be a hypotenuse (so BD is a height from right angle B to the hypotenuse AC). Question thus asks about the product of non-hypotenuse sides AB and BC.
(1) AB = 6. Clearly insufficient.
(2) The product of the non-hypotenuse sides is equal to 24 $$\rightarrow$$ directly gives us the value of AB*BC. Sufficient.
How do you know that B is a right angle? Couldn't we also have BC or BA as a hypotenuse with BD still indicating the height of the triangle?
BD is a height means that B is a right angle and AC is a hypotenuse (so BD is a height from right angle B to the hypotenuse AC).
_________________
Current Student
Joined: 21 Apr 2015
Posts: 13
Schools: Broad '18 (A)
26 Apr 2015, 18:31
first option says AB =6
doesnt it imply that the sides are 6,8,10?
Math Expert
Joined: 02 Sep 2009
Posts: 47030
27 Apr 2015, 01:44
ishitathukral wrote:
first option says AB =6
doesnt it imply that the sides are 6,8,10?
No.
Knowing that one side of a right triangle is 6 DOES NOT mean that the sides of the right triangle necessarily must be in the ratio of a Pythagorean triple - 6:8:10. Or in other words: $$6^2+y^2=z^2$$ DOES NOT mean that $$y=8$$ and $$z=10$$. Certainly this is one of the possibilities but definitely not the only one. In fact $$6^2+y^2=z^2$$ has infinitely many solutions for $$y$$ and $$z$$ and only one of them is $$y=8$$ and $$z=10$$.
For example: $$y=1$$ and $$z=\sqrt{37}$$ or $$y=2$$ and $$z=\sqrt{40}$$...
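A quick numeric illustration of this point (not part of the original post): different choices of $$y$$ all give valid right triangles with one leg equal to 6, each with a different product of the legs.

```python
from math import sqrt

for y in (1, 2, 8, 10.5):
    z = sqrt(6**2 + y**2)          # hypotenuse for legs 6 and y
    print(f"y = {y:>4}, z = {z:.4f}, product of legs = {6 * y}")
```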
For more on this trap check the following questions:
what-is-the-area-of-parallelogram-abcd-111927.html
the-circular-base-of-an-above-ground-swimming-pool-lies-in-a-167645.html
figure-abcd-is-a-rectangle-with-sides-of-length-x-centimete-48899.html
in-right-triangle-abc-bc-is-the-hypotenuse-if-bc-is-13-and-163591.html
m22-73309-20.html
if-vertices-of-a-triangle-have-coordinates-2-2-3-2-and-82159-20.html
if-p-is-the-perimeter-of-rectangle-q-what-is-the-value-of-p-135832.html
if-the-diagonal-of-rectangle-z-is-d-and-the-perimeter-of-104205.html
what-is-the-area-of-rectangular-region-r-105414.html
what-is-the-perimeter-of-rectangle-r-96381.html
pythagorean-triples-131161.html
given-that-abcd-is-a-rectangle-is-the-area-of-triangle-abe-127051.html
m13-q5-69732-20.html#p1176059
m20-07-triangle-inside-a-circle-71559.html
what-is-the-perimeter-of-rectangle-r-96381.html
what-is-the-area-of-rectangular-region-r-166186.html
if-distinct-points-a-b-c-and-d-form-a-right-triangle-abc-129328.html
Hope this helps.
_________________
Intern
Joined: 29 Jun 2014
Posts: 6
08 Jul 2015, 09:42
Bunuel wrote:
chieffarmer wrote:
Bunuel wrote:
Official Solution:
Since all points are distinct and BD is a height then B must be a right angle and AC must be a hypotenuse (so BD is a height from right angle B to the hypotenuse AC). Question thus asks about the product of non-hypotenuse sides AB and BC.
(1) AB = 6. Clearly insufficient.
(2) The product of the non-hypotenuse sides is equal to 24 $$\rightarrow$$ directly gives us the value of AB*BC. Sufficient.
How do you know that B is a right angle? Couldn't we also have BC or BA as a hypotenuse with BD still indicating the height of the triangle?
BD is a height means that B is a right angle and AC is a hypotenuse (so BD is a height from right angle B to the hypotenuse AC).
Bunuel -- in the figure above isn't BD an altitude and AB the height of triangle ABC??
Math Expert
Joined: 02 Aug 2009
Posts: 6217
12 Jul 2015, 00:10
efforts wrote:
Bunuel -- in the figure above isn't BD an altitude and AB the height of triangle ABC??
Hi,
there is no difference in " altitude and height"..
" altitude or height" is the shortest distance from a point to the line..
so here BD is the altitude/height, when AC is the base and AB is the alt/height when base is BC...
Hope it clears the query..
_________________
1) Absolute modulus : http://gmatclub.com/forum/absolute-modulus-a-better-understanding-210849.html#p1622372
2)Combination of similar and dissimilar things : http://gmatclub.com/forum/topic215915.html
3) effects of arithmetic operations : https://gmatclub.com/forum/effects-of-arithmetic-operations-on-fractions-269413.html
GMAT online Tutor
Intern
Joined: 12 Jul 2015
Posts: 5
21 Jul 2015, 04:32
Hi Bunuel,
Can you please explain why the triangle cannot have A or C as the right angle, and the height BD drawn outside of the triangle? If you flip the triangle upside down, it looks to me that you can have a height outside of the triangle, which would make the location of the right angle ambiguous.
Math Expert
Joined: 02 Sep 2009
Posts: 47030
22 Jul 2015, 01:16
CountClaud wrote:
Hi Bunuel,
Can you please explain why the triangle cannot have A or C as the right angle, and the height BD drawn outside of the triangle? If you flip the triangle upside down, it looks to me that you can have a height outside of the triangle, which would make the location of the right angle ambiguous.
If A is a right angle, so if BA is perpendicular to CA, then the height from B to CA will be BA, making A and D to coincide, which is not possible since we are told that A, B, C, and D are distinct points on a plane. The same if C is a right angle.
_________________
Intern
Joined: 22 Jun 2014
Posts: 22
Concentration: General Management, Finance
GMAT 1: 700 Q50 V34
GRE 1: Q800 V600
GPA: 3.68
12 Jan 2016, 07:50
1
1
buffaloboy wrote:
I think this is a poor-quality question and the explanation isn't clear enough, please elaborate. I am surprised if A,B, C, and D are distinct points and ABC is right triangle, right angled at B, then how come A , and D are not the same point. AB is the height . I am damn confused.
Each triangle has three different bases and, perpendicular to each base, three different heights. In the given description of triangle ABC, with angle B being the right angle, if you take BA or BC as the height, then D coincides with A or C. But the question also says all four points are distinct. Hence we need to take AC as the base and BD as the height. That's why this problem is in the 700-800 difficulty level. Hope this helps.
Intern
Joined: 12 Mar 2015
Posts: 45
Schools: Haas '20
GPA: 2.99
WE: Corporate Finance (Aerospace and Defense)
27 Mar 2016, 23:04
wow I can't believe this question tricked me so hard. Thank you for this question. It really makes me read a LOT more carefully.
Current Student
Joined: 21 Apr 2016
Posts: 29
Location: United States
29 Apr 2016, 13:35
2
Bunuel wrote:
CountClaud wrote:
Hi Bunuel,
Can you please explain why the triangle cannot have A or C as the right angle, and the height BD drawn outside of the triangle? If you flip the triangle upside down, it looks to me that you can have a height outside of the triangle, which would make the location of the right angle ambiguous.
If A is a right angle, so if BA is perpendicular to CA, then the height from B to CA will be BA, making A and D to coincide, which is not possible since we are told that A, B, C, and D are distinct points on a plane. The same if C is a right angle.
How about this figure? It doesn't say 'D' is a point on the triangle, just says D is a point on the plane...
(Attachment not available.)
Math Expert
Joined: 02 Sep 2009
Posts: 47030
02 May 2016, 04:10
nk18967 wrote:
Bunuel wrote:
CountClaud wrote:
Hi Bunuel,
Can you please explain why the triangle cannot have A or C as the right angle, and the height BD drawn outside of the triangle? If you flip the triangle upside down, it looks to me that you can have a height outside of the triangle, which would make the location of the right angle ambiguous.
If A is a right angle, so if BA is perpendicular to CA, then the height from B to CA will be BA, making A and D to coincide, which is not possible since we are told that A, B, C, and D are distinct points on a plane. The same if C is a right angle.
How about this figure? It doesn't say 'D' is a point on the triangle, just says D is a point on the plane...
The figure is NOT right. We are given that BD is a height of the triangle. The height is a perpendicular dropped from one of the vertices to the opposite side.
_________________
Current Student
Joined: 21 Apr 2016
Posts: 29
Location: United States
02 May 2016, 04:26
How about this figure? It doesn't say 'D' is a point on the triangle, just says D is a point on the plane...
The figure is NOT right. We are given that BD is a height of the triangle. The height is a perpendicular dropped from one of the vertices to the opposite side.
-----
Hmm...I guess my concept of height wasn't clear. So, height of a triangle is ALWAYS the altitude of the triangle, inside the triangle.. and that altitude changes depending on which side is the 'base'.
Thanks!
Math Expert
Joined: 02 Sep 2009
Posts: 47030
02 May 2016, 04:30
nk18967 wrote:
How about this figure? It doesn't say 'D' is a point on the triangle, just says D is a point on the plane...
The figure is NOT right. We are given that BD is a height of the triangle. The height is a perpendicular dropped from one of the vertices to the opposite side.
-----
Hmm...I guess my concept of height wasn't clear. So, height of a triangle is ALWAYS the altitude of the triangle, inside the triangle.. and that altitude changes depending on which side is the 'base'.
Thanks!
It's not necessary to be inside a triangle:
Attachment:
alt2.gif
_________________
Current Student
Joined: 21 Apr 2016
Posts: 29
Location: United States
02 May 2016, 07:04
Bunuel wrote:
nk18967 wrote:
How about this figure? It doesn't say 'D' is a point on the triangle, just says D is a point on the plane...
The figure is NOT right. We are given that BD is a height of the triangle. The height is a perpendicular dropped from one of the vertices to the opposite side.
-----
Hmm...I guess my concept of height wasn't clear. So, height of a triangle is ALWAYS the altitude of the triangle, inside the triangle.. and that altitude changes depending on which side is the 'base'.
Thanks!
It's not necessary to be inside a triangle:
Attachment:
alt2.gif
'VERTICES' is the magic word!! Gotcha, Thanks!
Intern
Joined: 26 Apr 2016
Posts: 8
24 May 2016, 18:57
I think this is a poor-quality question and I don't agree with the explanation. Solution is incorrect. Answer should be (E). There is no way to prove that AB or BC are both legs, or alternatively that one is a leg and the other a hypotenuse. The height BD can lie outside the triangle contrary to the discussion posted in the forum.
Math Expert
Joined: 02 Sep 2009
Posts: 47030
25 May 2016, 08:43
DavidFox wrote:
I think this is a poor-quality question and I don't agree with the explanation. Solution is incorrect. Answer should be (E). There is no way to prove that AB or BC are both legs, or alternatively that one is a leg and the other a hypotenuse. The height BD can lie outside the triangle contrary to the discussion posted in the forum.
That's not correct. The height in a right triangle is either one of the legs or the perpendicular from the right angle to the hypotenuse. Thus the height of a right triangle cannot lie outside the triangle. Moreover, since A, B, C, and D are distinct points, BD cannot be the height from a non-right angle, because in that case it would coincide with one of the legs.
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 47030
15 Jul 2016, 07:25
banty1987 wrote:
I think this is a high-quality question and I don't agree with the explanation. In statement (2) it say's product of the non-hypotenuse sides. from the figure in the explanation even AB and BC are hypotenuse to triangle's ADB & BDC respectively. Please explain if I am wrong. in the question it does not say hypotenuse as AC only.
Please read the discussion on previous pages.
_________________
# Standard Error Margin Of Error
In other words, it is the standard deviation of the sampling distribution of the sample statistic. At X confidence, the margin of error is $E_m = \frac{\operatorname{erf}^{-1}(X)}{\sqrt{2n}}$ (see the inverse error function); at 99% confidence, $E_m \approx \frac{1.29}{\sqrt{n}}$. Note that the greater the number of unbiased samples, the smaller the margin of error.
MSNBC, October 2, 2004. A medical research team tests a new drug to lower cholesterol. This gives 9.27/sqrt(16) = 2.32.
## Margin Of Error Calculator
It does not represent other potential sources of error or bias such as a non-representative sample-design, poorly phrased questions, people lying or refusing to respond, the exclusion of people who could Long answer: you are estimating a certain population parameter (say, proportion of people with red hair; it may be something far more complicated, from say a logistic regression parameter to the Asking Questions: A Practical Guide to Questionnaire Design. When the sampling fraction is large (approximately at 5% or more) in an enumerative study, the estimate of the standard error must be corrected by multiplying by a "finite population correction"[9]
Consider the following scenarios. However, different samples drawn from that same population would in general have different values of the sample mean, so there is a distribution of sampled means (with its own mean and Car travels 20kmperhour four hours later another car starts and travels 40kmperhour in how many hours will the second car overtake the first? Margin Of Error In Polls For any random sample from a population, the sample mean will usually be less than or greater than the population mean.
The margin of error of an estimate is the half-width of the confidence interval ... ^ Stokes, Lynne; Tom Belin (2004). "What is a Margin of Error?" (PDF). Graphically, share|improve this answer answered Mar 20 at 4:56 Antoni Parellada 7,45522261 add a comment| up vote 0 down vote sampling error measures the extent to which a sample statistic differs Standard errors provide simple measures of uncertainty in a value and are often used because: If the standard error of several individual quantities is known then the standard error of some you could try here Note: The Student's probability distribution is a good approximation of the Gaussian when the sample size is over 100.
share|improve this answer edited Sep 23 '11 at 21:24 whuber♦ 146k18285547 answered Sep 23 '11 at 18:21 StasK 21.5k47102 add a comment| up vote 2 down vote This is an expanded Margin Of Error Sample Size Blackwell Publishing. 81 (1): 75–81. In general, for small sample sizes (under 30) or when you don't know the population standard deviation, use a t-score. These two may not be directly related, although in general, for large distributions that look like normal curves, there is a direct relationship.
## Margin Of Error Excel
Using a sample to estimate the standard error In the examples so far, the population standard deviation σ was assumed to be known. http://stattrek.com/estimation/margin-of-error.aspx Consider a sample of n=16 runners selected at random from the 9,732. Margin Of Error Calculator How to Calculate Margin of Error in Easy Steps was last modified: March 22nd, 2016 by Andale By Andale | August 24, 2013 | Hypothesis Testing | 2 Comments | ← Margin Of Error Confidence Interval Calculator A natural way to describe the variation of these sample means around the true population mean is the standard deviation of the distribution of the sample means.
The stated confidence level was 95% with a margin of error of +/- 2, which means that the results were calculated to be accurate to within 2 percentages points 95% of weblink The next graph shows the sampling distribution of the mean (the distribution of the 20,000 sample means) superimposed on the distribution of ages for the 9,732 women. In addition, for cases where you don't know the population standard deviation, you can substitute it with s, the sample standard deviation; from there you use a t*-value instead of a The distribution of these 20,000 sample means indicate how far the mean of a sample may be from the true population mean. Margin Of Error Definition
JSTOR2682923. ^ Sokal and Rohlf (1981) Biometry: Principles and Practice of Statistics in Biological Research , 2nd ed. Despite the small difference in equations for the standard deviation and the standard error, this small difference changes the meaning of what is being reported from a description of the variation While the point estimate is your best guess regarding the population parameter, the standard error is your best guess regarding the standard deviation of your estimator (or, in some cases, the http://comunidadwindows.org/margin-of/standard-margin-of-error.php The margin of error is a statistic expressing the amount of random sampling error in a survey's results.
Phelps (Ed.), Defending standardized testing (pp. 205–226). Standard Error Formula For example, suppose we wanted to know the percentage of adults that exercise daily. Statistics: What is the difference between margin of error and Margin of sampling error?
## Tip: You can use the t-distribution calculator on this site to find the t-score and the variance and standard deviation calculator will calculate the standard deviation from a sample.
Contents 1 Explanation 2 Concept 2.1 Basic concept 2.2 Calculations assuming random sampling 2.3 Definition 2.4 Different confidence levels 2.5 Maximum and specific margins of error 2.6 Effect of population size The true standard error of the mean, using σ = 9.27, is σ x ¯ = σ n = 9.27 16 = 2.32 {\displaystyle \sigma _{\bar {x}}\ ={\frac {\sigma }{\sqrt Since we don't know the population standard deviation, we'll express the critical value as a t statistic. Standard Error Calculator The following expressions can be used to calculate the upper and lower 95% confidence limits, where x ¯ {\displaystyle {\bar {x}}} is equal to the sample mean, S E {\displaystyle SE}
The mean age for the 16 runners in this particular sample is 37.25. In other words, the range of likely values for the average weight of all large cones made for the day is estimated (with 95% confidence) to be between 10.30 - 0.17 The mean age was 23.44 years. his comment is here general term for wheat, barley, oat, rye Why is the FBI making such a big deal out Hillary Clinton's private email server?
See also Engineering tolerance Key relevance Measurement uncertainty Random error Observational error Notes ^ "Errors". Note: The larger the sample size, the more closely the t distribution looks like the normal distribution. Check out our Youtube channel for video tips on statistics! Expand» Details Details Existing questions More Tell us some more Upload in Progress Upload failed.
The true standard error of the statistic is the square root of the true sampling variance of the statistic. The more people that are sampled, the more confident pollsters can be that the "true" percentage is close to the observed percentage. It is useful to compare the standard error of the mean for the age of the runners versus the age at first marriage, as in the graph. In regression analysis, the term "standard error" is also used in the phrase standard error of the regression to mean the ordinary least squares estimate of the standard deviation of the |
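To tie these fragments together, here is a minimal sketch of the margin-of-error calculation for a sample mean, reusing the runners example quoted above (n = 16, sample mean 37.25, s = 9.27) and SciPy's t distribution for the critical value; the numbers are only illustrative:

```python
from math import sqrt
from scipy import stats

n, xbar, s = 16, 37.25, 9.27            # sample size, sample mean, sample SD
se = s / sqrt(n)                        # standard error of the mean: 2.32
t_crit = stats.t.ppf(0.975, df=n - 1)   # two-sided 95% critical value
margin = t_crit * se                    # margin of error

print(f"SE = {se:.2f}, t* = {t_crit:.3f}, "
      f"95% CI = ({xbar - margin:.2f}, {xbar + margin:.2f})")
```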
# truncated mean formula
For example, given a set of 8 points, trimming by 12.5% would discard the minimum and maximum value in the sample: the smallest and largest values, and would compute the mean of the remaining 6 points. ε > 0. τ n = y ( t n ) − y ( t n − 1 ) − h A ( t n − 1 , y ( t n − 1 ) , h , f ) . A study of the number of journal articles published bytenured faculty as a function of discipline (fine arts, science, social science,humanities, medical, etc). @BYJU'S the player parrying an attack from the boss) is: Tp = b_spd/P For proc-type buffs the average uptime will be equal to the sum of oneproc, twoproc and threeproc (and so on) chance if T is set to the buff duration. If you get an error from the Excel Trimmean function this is likely to be one of the following: (adsbygoogle = window.adsbygoogle || []).push({}); An array of numeric values, for which you want to calculate the trimmed mean. As with other trimmed estimators, the main advantage of the trimmed mean is robustness and higher efficiency for mixed distributions and heavy-tailed distribution (like the Cauchy distribution), at the cost of lower efficiency for some other less heavily-tailed distributions (such as the normal distribution). Traducción de 'truncated' en el diccionario gratuito de inglés-español y muchas otras traducciones en español. The truncated mean is a useful estimator because it is less sensitive to outliers than the mean but will still give a reasonable estimate of central tendency or mean for many statistical models. For example, if you want to calculate the trimmed mean of an array of 10 values, then: Cells B1-B3 of the spreadsheet below show 3 examples of the Excel Trimmean Function, all of which are used to calculate the trimmed mean of the values in cells A1-A10, for different percent values. One situation in which it can be advantageous to use a truncated mean is when estimating the location parameter of a Cauchy distribution, a bell shaped probability distribution with (much) fatter tails than a normal distribution. The percentage of values that you want to be discarded from the supplied. The interquartile mean is a specific example of a truncated mean. The student’s t-distribution is a continuous probability distribution that is frequently used in testing hypotheses on small sample data sets. If interpolating, one would instead compute the 10% trimmed mean (discarding 1 point from each end) and the 20% trimmed mean (discarding 2 points from each end), and then interpolating, in this case averaging these two values. How to use truncated in a sentence. Further examples of the Excel Trimmean function are provided on the Microsoft Office website. See Synonyms at shorten. Truncated means shortened by having a part cut off. The es-timators are compared with the sample mean and variance When the percentage of points to discard does not yield a whole number, the trimmed mean may be defined by interpolation, generally linear interpolation, between the nearest whole numbers. It is possible to perform a Student's t-test based on the truncated mean, which is called Yuen's t-test [6][7], which also has several implementations in R. [8][9], The scoring method used in many sports that are evaluated by a panel of judges is a truncated mean: discard the lowest and the highest scores; calculate the mean value of the remaining scores. The TINV Excel Function is categorized under Statistical functions. 3. 
It can be shown that the truncated mean of the middle 24% sample order statistics (i.e., truncate the sample by 38% at each end) produces an estimate for the population location parameter that is more efficient than using either the sample median or the full sample mean. This prevents the calculated mean being skewed by extreme values (also known as outliers). Assuming the mean is known, the variance is de ned as: var(ˆ()) = Z b a (x )2 ˆ(x)dx For the standard normal distribution, we … The expression for the mean is given as: μ + ϕ ( α ) − ϕ ( β ) Z σ {\displaystyle \mu + {\frac {\phi (\alpha )-\phi (\beta )} {Z}}\sigma } . Number (required argument) – This is the number we wish to truncate. Home » Excel-Built-In-Functions » Excel-Statistical-Functions » Excel-Trimmean-Function. Does it mean, perhaps, that the truncated democracies we have observed in this survey are the necessary outcome of the interaction of national and transnational forces in all cases? Censored would mean that the $0$'s had somehow replaced the negative values. For example, if you need to calculate the 15% trimmed mean of a sample containing 10 entries, strictly this would mean discarding 1 point from each end (equivalent to the 10% trimmed mean). Truncated definition is - cut short : curtailed. Truncated Rectangular Pyramid Volume Formula Cones Pyramids And Spheres Home AMSI. Truncated means your sample would be biased in the sense that the negative values would not exist in the sample at all, whereas you have $0$'s. The problem with Blade Warding is that the buff is consumed when the player parries. The truncated mean uses more information from the distribution or sample than the median, but unless the underlying distribution is symmetric, the truncated mean of a sample is unlikely to produce an unbiased estimator for either the mean or the median. In some regions of Central Europe it is also known as a Windsor mean,[citation needed] but this name should not be confused with the Winsorized mean: in the latter, the observations that the trimmed mean would discard are instead replaced by the largest/smallest of the remaining values. TrakEM2 User Manual INI Institute Of Neuroinformatics. adj. [1] This is also known as the Olympic average (for example in US agriculture, like the Average Crop Revenue Election), due to its use in Olympic events, such as the ISU Judging System in figure skating, to make the score robust to a single outlier judge.[2]. If x > b or x = ∞ then φ(x, µ, σ) = 0 and Φ(x, µ, σ) = 1. Know the difference between Mean, Median and Mode. Now, if the num_digits argument is: 1. [3][4] Note that for the Cauchy distribution, neither the truncated mean, full sample mean or sample median represents a maximum likelihood estimator, nor are any as asymptotically efficient as the maximum likelihood estimator; however, the maximum likelihood estimate is more difficult to compute, leaving the truncated mean as a useful alternative.[4][5]. We assume that if x < a or x = -∞ then φ(x, µ, σ) = 0 and Φ(x, µ, σ) = 0. mean(˚(0;1;)) = 0. 2. truncation reduces the variance compared with the variance in the untruncated distribution. The syntax of the function is: TRIMMEAN( array , percent ) o ( h ) {\displaystyle o (h)} (this means that for every. [3][4] However, due to the fat tails of the Cauchy distribution, the efficiency of the estimator decreases as more of the sample gets used in the estimate. A truncated probability distribution object cannot be an input argument of an entry-point function. 
This number of points to be discarded is usually given as a percentage of the total number of points, but may also be given … Note that the specified percent value is the total percentage of values to be excluded from the calculation. Provide more information regarding this issue so that we could help you further. Example 2. Num_digits (optional argument) – This is a number that specifies the precision of the truncation. A percentage of 15%, is 1.5 values, which will be rounded down to 0 (i.e. A positive value that is greater than zero, it specifies the number of digits to the right of the decimal point. where φ is the pdf of the normal distribution and Φ is the cdf of the normal distribution. The Trimmed Mean (also known as the truncated mean) is a measure of mean that indicates the central tendancy of a set of values. To evaluate a truncated distribution using object functions such as cdf , pdf , mean , and so on, call truncate and one or more of these object functions within a single entry-point function. (7) By means of heavy integrations and simplifications, the following general formula of the mean difference of truncated normal distribution is obtained For most statistical applications, 5 to 25 percent of the ends are discarded. {\displaystyle \operatorname {trunc} (x,n)={\frac {\lfloor 10^{n}\cdot x\rfloor }{10^{n}}}.} Hope this information helps. The generalized Beta probability density function is given by: f ( x) = ( x − A) α − 1 ( B − x) β − 1 ( B − A) α + β − 1 B ( α, β) for A < x < B, and f ( x) = 0 otherwise. The mean time between each parry (i.e. x ¯ = 2 n ∑ i = n 4 + 1 3 4 n x i {\displaystyle {\bar {x}}={\frac {2}{n}}\;\sum _{i={\frac {n}{4}}+1}^{{\frac {3}{4}}n}\!\!x_{i}} the truncated variable is smaller than the mean of the original one. The 25% trimmed mean (when the lowest 25% and the highest 25% are discarded) is known as the interquartile mean. To replace (the edge of a crystal) with a plane face. The Excel TRIMMEAN function calculates the trimmed mean (or truncated mean) of a supplied set of values. As there are 10 values in the supplied array, the number of values to be ignored is 1.5 rounded down to the nearest multiple of 2 which is zero. A truncated mean or trimmed mean is a statistical measure of central tendency, much like the mean and median. Learn how and when to remove this template message, https://cran.r-project.org/web/packages/WRS2/, https://cran.r-project.org/web/packages/DescTools/, "Removing Judges' Bias Is Olympic-Size Challenge", https://en.wikipedia.org/w/index.php?title=Truncated_mean&oldid=983627573, Articles needing additional references from July 2010, All articles needing additional references, Articles with unsourced statements from October 2016, Creative Commons Attribution-ShareAlike License, This page was last edited on 15 October 2020, at 09:13. Wilcox, R.R. 2. Biometrika, 61, 165-170. Arulmozhi, G.; Statistics For Management, 2nd Edition, Tata McGraw-Hill Education, 2009, p. Yuen, K.K. (1974) The two-sample trimmed t for unequal population variances. To get tenure faculty must publish, therefore,there are no tenured faculty with zero publications. Formulas & Solved Example. For intermediate distributions the differences between the efficiency of the mean and the median are not very big, e.g. This number of points to be discarded is usually given as a percentage of the total number of points, but may also be given as a fixed number of points. 
The Excel TRIMMEAN function calculates the trimmed mean (or truncated mean) of a supplied set of values. How can I calculate the truncated or trimmed mean? A truncated distribution where just the bottom of the distribution has been removed is as follows: f ( x | X > y ) = g ( x ) 1 − F ( y ) {\displaystyle f(x|X>y)={\frac {g(x)}{1-F(y)}}} where g ( x ) = f ( x ) {\displaystyle g(x)=f(x)} for all y < x {\displaystyle y 0 and β > 0. It will calculate the left-tailed student's t-distribution. Similarly, a truncated pyramid. Henceforth, we shall use the terms truncated mean and truncated variance to refer to the mean and variance of the random variable with a truncated distribution. A truncated mean or trimmed mean is a statistical measure of central tendency, much like the mean and median. cates 1. Discarding only the maximum and minimum is known as the modified mean, particularly in management statistics. A study by the co… This means that the email that got sent back was too long, the mail server send back to you instead of sending all the parts. A major concern isthat students are required to have a minimum achievement score of 40 to enterthe special program. Mean and Variance of Truncated Normal Distributions Donald R. BARR and E. Todd SHERRILL Maximum likelihood estimators for the mean and variance of a truncated normal distribution, based on the entire sam-ple from the original distribution, are developed. To calculate the Mean, Median, Mode for the given data. Theorem 2.2 on p. 50 shows that the (α,β)trimmed mean Tn is estimating a parameterμT with an asymptotic variance equal toσ2 W Mathwords Index For Geometry. Example 1. Similarly, if interpolating the 12% trimmed mean, one would take the weighted average: weight the 10% trimmed mean by 0.8 and the 20% trimmed mean by 0.2. Many thanks to Roman Dzhafarov for pointing out a small error in this video. On page 4, line 1, there should be a minus between the expectations. It involves the calculation of the mean after discarding given parts of a probability distribution or sample at the high and low end, and typically discarding an equal amount of both. In mathematics, a Fourier series (/ ˈ f ʊr i eɪ,-i ər /) is a periodic function composed of harmonically related sinusoids, combined by a weighted summation.With appropriate weights, one cycle (or period) of the summation can be made to approximate an arbitrary function in that interval (or the entire function if it too is periodic).As such, the summation is a synthesis of another function. Sometimes when emails are too long they cut the ends off. Given a number ∈ + to be truncated and ∈, the number of elements to be kept behind the decimal point, the truncated value of x is trunc ( x , n ) = ⌊ 10 n ⋅ x ⌋ 10 n . A truncated cone is one with a piece cut off the top. In this regard it is referred to as a robust estimator. This percentage is divided by two, to get the number of values that are removed from each end of the range. Thus, the sample is truncated at an achievement scoreof 40. The 5th percentile (−6.75) lies between −40 and −5, while the 95th percentile (148.6) lies between 101 and 1053 (values shown in bold). 2. Let's say truncated by 10%? =TRUNC(number,[num_digits]) The TRUNC function uses the following arguments: 1. For example, in its use in Olympic judging, truncating the maximum and minimum prevents a single judge from increasing or lowering the overall score by giving an exceptionally high or low score. truncate definition: 1. 
to make something shorter or quicker, especially by removing the end of it: 2. to make…. Example 2. Chapter 4 Truncated Distributions This chapterpresentsa simulationstudy of several of the confidence intervals first presented in Chapter 2. Learn more. 1 value will be discarded from each end of the range before calculating the mean of the remaining values). The variance of a distribution ˆ(x), symbolized by var(ˆ()) is a measure of the average squared distance between a randomly selected item and the mean. A study of length of hospital stay, in days, as a functionof age, kind of health insurance and whether or not the patient died while in the hospital.Length of hospital stay is recorded as a minimum of at least one day. For example, μ = 0 {\displaystyle \mu =0} , The median can be regarded as a fully truncated mean and is most robust. This must be incorrect, because it sometimes gives mean values outside the truncation bounds. dd-rd.ca ¿Significa acaso que las democracias bloqueadas contempladas en este análisis son el resultado inevitable de la interacción entre las fuerzas nacionales y las transnacionales en todos los casos? A study of students in a special GATE (gifted and talented education) programwishes to model achievement as a function of language skills and the type ofprogram in which the student is currently enrolled. I can imagine how to do it if you have 10 entries or so, but how can I do it for a lot of entries? Hence, the lo… If kept blank, it will take 0 as the default value. It should also be noted that, when Excel is calculating how many values to discard from the supplied array of values, the calculated percentage is rounded down to the nearest multiple of 2. Academic Press. In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform.Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. Dealing with truncated normal distribution, it is necessary to use the cumulative distribution function Ψ ( x ) : Δ = ∫ a b 2 Ψ ( x ) [ 1 − Ψ ( x ) ] d x . Example 1. It involves the calculation of the mean after discarding given parts of a probability distribution or sample at the high and low end, and typically discarding an equal amount of both. for the student-t distribution with 2 degrees of freedom the variances for mean and median are nearly equal. The Math Worksheet Site Com. This word means to cut off. Then the pdf of the truncated normal distribution with mean μ and variance σ 2 constrained by is. | Meaning, pronunciation, translations and examples BDSM Library Torture The Widow. Example 3. 1. To shorten or reduce: The script was truncated to leave time for commercials. It is simply the arithmetic mean after removing the lowest and the highest quarter of values. Cuboid Wikipedia. The numerical method is consistent if the local truncation error is. [10], The Libor benchmark interest rate is calculated as a trimmed mean: given 18 response, the top 4 and bottom 4 are discarded, and the remaining 10 are averaged (yielding trim factor of 4/18 ≈ 22%).[11]. To shorten (a number) by dropping one or more digits after the decimal point. 
Note that, although we define the truncated normal distribution function in terms of a parent normal distribution with mean MU and standard deviation SIGMA, in general, the mean and standard deviation of the truncated normal distribution are different values entirely; however, their values can be worked out from the parent values MU and SIGMA, and the truncation limits.
A truncated mean (also called a trimmed mean) is a measure of central tendency, much like the mean and the median. It is the arithmetic mean computed after discarding a specified percentage of the lowest and highest values; the percentage is divided by two, so that equal numbers of values are removed from each end of the sorted data, with fractional counts rounded down. Because the extreme values are discarded, the truncated mean is less affected by outliers than the ordinary mean, and in this regard it is a robust estimator; the median can be regarded as a fully truncated mean. In statistical applications, typically 5 to 25 percent of the ends are discarded. Truncation also arises in sampling: for example, if students are required to have a minimum achievement score of 40 to enter a special program, the observed scores are truncated at 40 (if the out-of-range values had been recorded but replaced, the data would be censored rather than truncated).
In Excel, the TRIMMEAN function calculates the trimmed mean of a supplied set of values, and the TRUNC function truncates a number: =TRUNC(number, [num_digits]), where number is the value to truncate and the optional num_digits argument gives the number of digits to keep to the right of the decimal point (0 by default). Further details and examples of the Excel TRIMMEAN function are provided on the Microsoft Office website.
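To make the trimming rule concrete, here is a minimal Python sketch; the function name, the rounding convention and the example data are assumptions, not part of the original text:
def trimmed_mean(values, percent):
    """Arithmetic mean after discarding percent/2 of the values
    from each end of the sorted data (count per end rounded down)."""
    data = sorted(values)
    k = int(len(data) * percent / 100 / 2)  # values dropped from each end
    trimmed = data[k:len(data) - k] if k else data
    return sum(trimmed) / len(trimmed)

print(trimmed_mean([2, 3, 4, 5, 6, 7, 100], 30))  # 5.0 -- the outlier 100 is dropped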
### ExtendedBlocks
#### by JavierLeon9966
##### Adds support for implementing blocks with IDs 255+
###### version 1.2.0
Approved
11 Reviews
Plugin Description
# ExtendedBlocks
This plugin adds the ability to register blocks with IDs above 255 in PocketMine API 3.0.0+.
WARNINGS
• This plugin has a large impact on server performance and is NOT recommended for production use.
• Worlds using this plugin may find that the blocks placed using this plugin will no longer work in PocketMine-MP 4.0.
• If you remove this plugin after some blocks have been placed, they may transform into Reserved6 blocks, and this cannot be reversed.
To update this plugin safely you must stop your server and install the updated version, then you can start the server again.
## How the plugins works
This plugin uses a Tile entity that stores the blocks registered by other plugins, together with a Placeholder block that replaces Reserved6; the Placeholder creates an instance of the block you want (which must use PlaceholderTrait) and supplies its information to the server.
When a LevelChunkPacket is sent, the plugin finds every tile within the chunk that is a Placeholder block and sends the corresponding block to the player.
## How to use the plugin
For now you need to use another plugin that registers the blocks itself.
For the blocks to register correctly you must add this trait to the block class:
use JavierLeon9966\ExtendedBlocks\block\PlaceholderTrait;
Soon it will be possible to add new blocks through a configuration file, but for now this works like an API plugin.
use pocketmine\block\BlockFactory;
use pocketmine\item\ItemBlock;
use pocketmine\plugin\PluginBase;
use JavierLeon9966\ExtendedBlocks\item\ItemFactory;
class Plugin extends PluginBase{
    public function onEnable(){ // wrapping method reconstructed; $block stands for your own block instance, e.g. new Sample()
        $block = new Sample();
        BlockFactory::registerBlock($block);
        ItemFactory::addCreativeItem(ItemFactory::get(255 - $block->getId())); //Usually most blocks
    }
    ...
}
## Format you must use
This is a example of how your class must look for it to work properly
use pocketmine\block\Block; //You can extend any class, but it must be a Block
use JavierLeon9966\ExtendedBlocks\block\PlaceholderTrait;
class Sample extends Block{
    use PlaceholderTrait;
    protected $id = 526; //The id of the block must be positive
    public function __construct(int $meta = 0){ //Optional
        $this->meta = $meta;
    }
    ...
}
What's new
• Added support for Minecraft: Bedrock Edition 1.17.30.
• Fix warning on PHP 8.0
• Update protocol 448
• Update protocol 440
• Fixed visual glitch problems
• Blocks are automatically overwritten if they are already registered
• Fixed wrong slab placement
• Update protocol 431
• Added netherite block as example
• Cleanup code
• Update protocol 428
KaptenPanda22
Outdated
using v1.1.4
08 Sep 21
can you fix it so it can be integrated with the scoreboard plugin?
Mcbeany
Outdated
using v1.1.4
15 Jun 21
very good
Neonwaltz8
Outdated
using v1.1.3
08 Jun 21
this plugin is excellent, and if you're reading this review that means you'll read the 3rd review from this plugin, here's its missing letter, Y
JoshuaPHYT
Outdated
using v1.1.3
25 May 21
DavyCraft648
Outdated
using v1.1.2
05 May 21
Prim69
Outdated
using v1.0.6
11 Mar 21
This is very sex
AGTHARN
Outdated
using v1.0.6
09 Mar 21
SHEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEESH
ethaniccc
Staff Outdated
using v1.0.6
08 Mar 21
P O G
Wertzui123
Outdated
using v1.0.6
08 Mar 21
Amazing! The only problems I know of are that level->getBlock() of course returns the wrong id and right-clicking does not work properly (as mentioned on Github).
brokiem
Outdated
using v1.0.6
08 Mar 21
whutt, approved?!
Endermanbugzjfc
Outdated
using v1.0.6
08 Mar 21
This plugin is approved :(). Quality plugin!
Supported API versions
3.14.0 -> 3.25.0
Categories:
Mechanics
General
World Editing and Management
Developer Tools
API plugins
Keywords
Permissions
Manage blocks/items
Manage tiles
Edit world
I’m currently taking a deep learning course, which used learning the XOR function as its first example of feedforward networks. The XOR function has the following truth table
| $x$ | $y$ | $x \oplus y$ |
|-----|-----|--------------|
| 0   | 0   | 0            |
| 0   | 1   | 1            |
| 1   | 0   | 1            |
| 1   | 1   | 0            |
which when graphed, is not linearly separable (1s cannot be separated from the 0s by drawing a line)
So if a linear model won’t work, I guess that means we need a nonlinear one. We can do this by using the $relu(x)$ activation function on the outputs of our neurons. $relu(x)$ is defined as
$$\operatorname{relu}(x) = \max(0, x)$$
and graphed below.
However, since $relu(x)$ has sharp corners, it is not differentiable at $x = 0$, so gradient based learning methods won’t work as well. So we use the $softplus(x)$ function instead, which is a softened version of $relu(x)$ defined as
$$\operatorname{softplus}(x) = \ln(1 + e^x)$$
shown below.
We begin with your usual imports
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
Then define the inputs and expected outputs of the neural network
inputs = np.array([[0, 0],
[0, 1],
[1, 0],
[1, 1]])
xor_outputs = np.array([0, 1, 1, 0])
Next, we define the structure of the neural network. Note that I had to increase the learning rate from the default value.
XOR = Sequential()
# The layer definitions were missing from the extracted post; a hidden layer with
# softplus activation and a sigmoid output layer are described below, so they are
# reconstructed here (the hidden width of 2 is an assumption).
XOR.add(Dense(2, input_dim=2, activation='softplus'))
XOR.add(Dense(1, activation='sigmoid'))
# Make the model learn faster (take bigger steps) than by default.
sgd = SGD(lr=0.1)
XOR.compile(loss='binary_crossentropy',
            optimizer=sgd,
            metrics=['accuracy'])
This defines the network
where the hidden layer activation function is $softplus(x)$ and the output layer activation function is the traditional sigmoid function used to output a number between 0 and 1, indicating the probability of the output being a logical 0 or a logical 1. Note that Keras does not require us to explicitly form the input layer.
Now we actually train the network.
XOR.fit(inputs, xor_outputs, epochs=5000, verbose=0)
cost, acc = XOR.evaluate(inputs, xor_outputs, verbose=0)
print(f'cost: {cost}, acc: {acc * 100}%')
print(XOR.predict(inputs))
which outputs
cost: 0.007737404201179743, acc: 100.0%
[[0.00496492]
[0.9978434 ]
[0.98019916]
[0.00380662]]
Training the network on other boolean functions works exactly the same way; the only difference is using a different output array.
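For instance, here is a minimal sketch for logical AND, reusing the same assumed two-unit architecture; the target array below is not from the original post:
and_outputs = np.array([0, 0, 0, 1])  # truth table for logical AND

AND = Sequential()
AND.add(Dense(2, input_dim=2, activation='softplus'))
AND.add(Dense(1, activation='sigmoid'))
AND.compile(loss='binary_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])
AND.fit(inputs, and_outputs, epochs=5000, verbose=0)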
This was my first experience with a neural network, so here are some things that I learned for your amusement:
• I originally expected this model to train very quickly because the problem was so small, so I only used 10-20 training epochs and got absolutely garbage results. Here, I’m using 5000 training epochs.
• I had to increase the learning rate to train in a reasonable amount of time.
• One should not blindly upgrade TensorFlow without reading the release notes. All-in-all, I spent more time trying to install the correct versions of Tensorflow and CUDA than I did trying to get even this simple of a neural network to work correctly.
• Even small models like this use quite a lot of GPU memory.
Note that boolean functions are bad functions for neural networks to learn. This is because their domain and ranges are discrete and (typically) small. Learning the function takes more time and space than simply listing a truth table. |
2-17.
If $p(x)=x^2+5x−6$, find:
1. Where p(x) intersects the y-axis.
Substitute 0 for x, and solve for $p(x)$.
(0, –6)
2. Where p(x) intersects the x-axis.
Substitute 0 for $p(x)$, and solve for x.
You can factor and use the Zero Product Property once you write your equation.
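For reference, here is the worked factoring (an added step, not part of the original hint):
$$x^2 + 5x - 6 = (x + 6)(x - 1) = 0 \;\Rightarrow\; x = -6 \text{ or } x = 1$$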
(–6, 0) and (1, 0)
3. If $q(x)=x^2+5x$, find the intercepts of q(x) and compare the graphs of $p(x)$ and $q(x)$.
Look at parts (a) and (b) to help find the intercepts.
What do you notice about the graphs? Are they related?
x-intercepts: (0, 0), (–5, 0)
y-intercept: (0, 0)
$p(x)$ is 6 units lower than $q(x)$.
4. Find $p(x)−q(x)$.
Subtract $q(x)$ from $p(x)$. You should notice a relationship between your answers for parts (c) and (d). |
# This application failed to start because it could not find or load the Qt platform plugin "windows"
created at 08-20-2021 views: 25
## description
The error is shown in the figure. It occurs in Debug mode. The first guess was that an environment variable pointing to the Qt platform-plugin directory had not been set; however, the error can still appear even when the variable is set, so the solution is as follows.
## solution
Set the QT_QPA_PLATFORM_PLUGIN_PATH variable.
Add the environment variable inside the Qt project, pointing to the plugins directory: C:\Qt\Qt5.7.0\5.7\msvc2015_64\plugins
## Currying
### General Idea
In mathematics and computer science, currying is the technique of breaking down the evaluation of a function that takes multiple arguments into evaluating a sequence of single-argument functions.
Currying is not only used in programming but in theoretical computer science as well. The reason is that it is often easier to transform multiple argument models into single argument models.
The need for currying arises for example in the following case: Let us assume that we have a context, in which we can only use a function with one argument. We have to use a function with multiple parameters. So we need a way to transform this function into a function with just one parameter. Currying provides the solution to this problem. Currying means rearranging a multiple-parameter function into a chain of functions applied to one argument. It is always possible to transform a function with multiple arguments into a chain of single-argument functions.
Python is not equipped for this programming style. This means there are no special syntactical constructs available to support currying. On the other hand, Python is well suited to simulate this way of programming. We will introduce various ways to accomplish this in this chapter of our tutorial.
Some readers not interested in the mathematical details can skip the following two subchapters, because they are mathematically focussed.
### Composition of Functions
We define the composition h of two functions f and g
$$h(x) = g(f(x))$$
often written as
$$h = (g \circ f)(x)$$
in the following Python example.
The composition of two functions is a chaining process in which the output of the inner function becomes the input of the outer function.
### Currying
As we have already mentioned in the introduction, currying means transforming a function with multiple parameters into a chain of functions with one parameter.
We will start with the simplest case, i.e. two parameters. Given is a function $f$ with two parameters $x$ and $y$. We can curry the function in the following way:
We have to find a function $g$ which returns a function $h$ when it is applied to the second parameter $y$ of $f$. $h$ is a function which can be applied to the first parameter $x$ of $f$, satisfying the condition
$$f(x, y) = g(y)(x) = h(x)$$
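As a quick illustration (a minimal sketch with hypothetical functions, not part of the original text), this relationship can be written out directly in Python:
def f(x, y):
    return 2*x + y          # an arbitrary two-argument function

def g(y):
    def h(x):
        return f(x, y)      # h "remembers" y and only needs x
    return h

print(f(3, 4), g(4)(3))     # both calls evaluate to 10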
Now we have a look at the general case of currying. Let us assume that we have a function $f$ with $n$ parameters:
$$f(x_1, x_2, \dots x_n)$$
We can transform $f$ into a chain of $n$ single-argument functions $f_n, f_{n-1}, \dots, f_1$ such that
$$f_{n-1} = f_n(x_n)$$
$$\dots$$
$$f_1 = f_2(x_2)$$
$$f(x_1, x_2, \dots x_n) = f_1(x_1)$$
### Composition of Functions in Python
#### Two Functions
The function compose can be used to create to compose two functions:
def compose(g, f):
def h(x):
return g(f(x))
return h
We will use our compose function in the next example. Let's assume we have a thermometer which is not working accurately. The correct temperature can be calculated by applying the function readjust to the measured values. Let us further assume that we have to convert our temperature values into degrees Fahrenheit. We can do this by applying compose to both functions:
def celsius2fahrenheit(t):
    return 1.8 * t + 32
# The definition line of readjust and the construction of convert were lost in the
# extracted text; they are reconstructed here to match the output shown below.
def readjust(t):
    return 0.9 * t - 0.5
convert = compose(readjust, celsius2fahrenheit)
convert(10), celsius2fahrenheit(10)
Output::
(44.5, 50.0)
The composition of two functions is generally not commutative, i.e. compose(celsius2fahrenheit, readjust) is different from compose(readjust, celsius2fahrenheit)
convert2 = compose(celsius2fahrenheit, readjust)
convert2(10), celsius2fahrenheit(10)
Output::
(47.3, 50.0)
convert2 is not a solution to our problem, because it is not readjusting the original temperatures of our thermometer but the transformed Fahrenheit values!
#### "compose" with an Arbitrary Number of Arguments
The function compose which we have just defined can only cope with single-argument functions. We can generalize our function compose so that it can cope with all possible functions. This is not currying of course but nevertheless also an interesting function.
def compose(g, f):
def h(*args, **kwargs):
return g(f(*args, **kwargs))
return h
Example using a function with two parameters.
def BMI(weight, height):
return weight / height**2
def evaluate_BMI(bmi):
if bmi < 15:
return "Very severely underweight"
elif bmi < 16:
return "Severely underweight"
elif bmi < 18.5:
return "Underweight"
elif bmi < 25:
return "Normal (healthy weight)"
elif bmi < 30:
return "Overweight"
elif bmi < 35:
return "Obese Class I (Moderately obese)"
elif bmi < 40:
return "Obese Class II (Severely obese)"
else:
return "Obese Class III (Very severely obese)"
f = compose(evaluate_BMI, BMI)
weight = 1
while weight > 0:
weight = float(input("weight (kg) "))
height = float(input("height (m) "))
print(f(weight, height))
weight (kg) 70
height (m) 1.76
Normal (healthy weight)
weight (kg) 0
height (m) 1
Very severely underweight
### BMI Chart
This is off-topic, because it is not about currying. Yet, it is nice to have a BMI chart, especially if it is created with Python means. We use contour plots from the Matplotlib module in our program. If you want to learn more about contour plots, you can go to chapter on Contour Plots of our Matplotlib tutorial.
%matplotlib inline
import matplotlib.pyplot as plt
import pylab as pl
xlist = pl.linspace(1.2, 2.0, 50)
ylist = pl.linspace(50, 100, 50)
X, Y = pl.meshgrid(xlist, ylist)
Z = Y / (X**2)
plt.figure()
levels = [0, 15, 16, 18.5, 25, 30, 35, 40, 100]
cp = plt.contour(X, Y, Z, levels)
pl.clabel(cp, colors = 'k', fmt = '%2.1f', fontsize=12)
c = ('#0000FF', '#0020AA', '#008060', '#00AA40', '#00FF00', '#40AA00', '#992200', '#FF0000')
cp = plt.contourf(X, Y, Z, levels, colors=c)
plt.colorbar(cp)
plt.title('Contour Plot')
plt.xlabel('x (m)')
plt.ylabel('y (kg)')
plt.show()
### Currying Examples in Python
#### Currying BMI
We used the function BMI in a composition in the previous example. We will now use it as a first currying example. The height of grown-ups is essentially a constant. Okay, I know, we shrink every day from dawn to dusk by about one centimetre and get the loss returned over night. We define a function BMI_weight which takes a height and returns a function with one parameter (weight) to return the BMI:
def BMI_weight(height):
def h(weight):
return weight / height**2
return h
for weight in [60, 68, 74]:
for height in [164, 168, 172]:
print(BMI_weight(height)(weight), BMI(weight, height))
0.00223081499107674 0.00223081499107674
0.0021258503401360546 0.0021258503401360546
0.0020281233098972417 0.0020281233098972417
0.002528256989886972 0.002528256989886972
0.002409297052154195 0.002409297052154195
0.002298539751216874 0.002298539751216874
0.002751338488994646 0.002751338488994646
0.0026218820861678006 0.0026218820861678006
0.002501352082206598 0.002501352082206598
#### Example: Currency Conversion
In the chapter on Magic Functions of our tutorial we had an exercise in which we defined a class for currency conversions.
We will define now a function exchange, which takes three arguments:
1. The source currency
2. The target currency
3. The amount in the source currency
The function needs the actual exchange rates. We can download them from the finance.yahoo.com website with the function get_currencies, though in our example we use some old exchange rates:
currencies = {'CHF': 1.0821202355817312,
'GBP': 0.8916546282920325,
'JPY': 114.38826536281809,
'EUR': 1.0,
'USD': 1.11123458162018}
def exchange(from_currency, to_currency, amount):
result = amount * currencies[to_currency]
result /= currencies[from_currency]
return result
Output::
137.56418155678784
We can now define curried functions from the function exchange:
def exchange_from_CHF(to_currency, amount):
return exchange("CHF", to_currency, amount)
def CHF2EUR(amount):
return exchange_from_CHF("EUR", amount)
print(exchange_from_CHF("EUR", 90))
print(CHF2EUR(90))
83.17005545286507
83.17005545286507
We want to rewrite the function exchange in a curryable version:
def curry_exchange(from_currency=None,
                   to_currency=None,
                   amount=None):
    if from_currency:
        if to_currency:
            if amount:
                def f():
                    return exchange(from_currency, to_currency, amount)
            else:
                def f(amount):
                    return exchange(from_currency, to_currency, amount)
        else:
            if amount:
                def f(to_currency):
                    return exchange(from_currency, to_currency, amount)
            else:
                def f(to_currency=None, amount=None):
                    if amount:
                        if to_currency:
                            def h():
                                return exchange(from_currency, to_currency, amount)
                        else:
                            def h(to_currency):
                                if to_currency:
                                    return exchange(from_currency, to_currency, amount)
                    else:
                        if to_currency:
                            def h(amount):
                                return exchange(from_currency, to_currency, amount)
                        else:
                            def h(to_currency, amount):
                                return exchange(from_currency, to_currency, amount)
                    return h
    else:
        def f(from_currency, to_currency, amount):
            return exchange(from_currency, to_currency, amount)
    return f
We can redefine exchange_from_CHF and CHF2EUR in a properly curried way:
exchange_from_CHF = curry_exchange("CHF")
print(exchange_from_CHF("EUR", 90))
CHF2EUR = curry_exchange("CHF", "EUR")
print(CHF2EUR(90))
<function curry_exchange.<locals>.f.<locals>.h at 0x7f7543673268>
83.17005545286507
You will find various calls to curry_exchange in the following examples:
print(curry_exchange("CHF")( "EUR", 100))
print(curry_exchange("CHF", "EUR")(100))
f = curry_exchange("CHF")
print(f("EUR", 100))
g = f("EUR")
print(g(100))
CHF2EUR= curry_exchange("CHF", "EUR")
print(CHF2EUR(100))
k = curry_exchange("CHF", "EUR", 100)
print(k())
print(curry_exchange("CHF", "EUR", 100))
f = curry_exchange("CHF")(amount=100)
print(f("EUR"))
f = curry_exchange("CHF")
print(f("EUR", 100))
f = curry_exchange("CHF")
g = f("EUR")
print(g(100))
g2 = f(amount=120)
for currency in currencies:
    print(currency, g2(currency))
<function curry_exchange.<locals>.f.<locals>.h at 0x7f7543438950>
92.41117272540563
<function curry_exchange.<locals>.f.<locals>.h at 0x7f754372a730>
92.41117272540563
92.41117272540563
92.41117272540563
<function curry_exchange.<locals>.f at 0x7f7543438840>
92.41117272540563
<function curry_exchange.<locals>.f.<locals>.h at 0x7f7543673268>
92.41117272540563
CHF 120.00000000000001
GBP 98.87861983980284
JPY 12684.9044978435
EUR 110.89340727048676
USD 123.22858903265559
So far we have written custom-made curry functions. We will define a general currying function in the following chapter of our tutorial.
### General Currying
def arimean(*args):
    return sum(args) / len(args)

def curry(func):
    f_args = []
    f_kwargs = {}
    def f(*args, **kwargs):
        nonlocal f_args, f_kwargs
        if args or kwargs:
            f_args += args
            f_kwargs.update(kwargs)
            return f
        else:
            return func(*f_args, **f_kwargs)
    return f
s = curry(arimean)
s(2)(5)(9)(4, 5)
s(5, 9)
print(s())

5.571428571428571

s2 = curry(arimean)
s2(2)(500)(9)(4, 5)
s2(5, 9)
s2()

# The definition of this three-argument example function and its currying were lost
# in the extracted text; the name add3 and the reassignment of s2 are assumptions
# reconstructed to match the output of 16 shown below.
def add3(x, y, z):
    return x + y + z

s2 = curry(add3)
s2(3, 5)(8)()

Output::
16
def exchange(from_currency, to_currency, amount):
result = amount * currencies[to_currency]
result /= currencies[from_currency]
return result
e = curry(exchange)
print(e)
f = e("CHF", "EUR")
print(f(10)())
e2 = curry(exchange)
f = e2(to_currency="USD", amount=100, from_currency="CHF")
f()
#print(f(from_currency="CHF")())
<function curry.<locals>.f at 0x7f7543702bf8>
9.241117272540563
Output::
102.69049086054633
def arimean(*args):
    return sum(args) / len(args)

def curry(func):
    f_args = []
    f_kwargs = {}
    def f(*args, **kwargs):
        nonlocal f_args, f_kwargs
        if args or kwargs:
            f_args += args
            f_kwargs.update(kwargs)
            try:
                return func(*f_args, **f_kwargs)
            except TypeError:
                return f
        else:
            return func(*f_args, **f_kwargs)
    return f
s = curry(arimean)
s(2, 4, 5)
e = curry(exchange)
print(e)
f = e("CHF", "EUR")
print(f(10))
e2 = curry(exchange)
f = e2(to_currency="USD", amount=100)
f("CHF")
<function curry.<locals>.f at 0x7f7543457048>
9.241117272540563
Output::
102.69049086054633
### partial-Function
The function partial from the module functools can be used to simulate currying as well. It can also be used to "freeze" some of the function's arguments. This means it simplifies the function's signature.
We use it now to curry our function exchange once more in a different way:
from functools import partial
f = partial(exchange,
to_currency="USD",
amount=100)
f("CHF")
f = partial(f, "CHF")
f()
Output::
102.69049086054633 |
ISSN: 2640-7590
##### Journal of Vaccines and Immunology
Review Article Open Access Peer-Reviewed
# Mammalian Parasitic Vaccine: A Consolidated Exposition
### Deepak Sumbria* and LD Singla
Department of Veterinary Parasitology, College of Veterinary Sciences, Guru Angad Dev Veterinary and Animal Sciences University, Ludhiana, 141004, Punjab, India
*Corresponding author: Dr. Deepak Sumbria, Department of Veterinary Parasitology, College of Veterinary Sciences, Guru Angad Dev Veterinary and Animal Sciences University, Ludhiana-141004. E-mail: [email protected]
Received: 01 October, 2015 | Accepted: 19 October, 2015 | Published: 21 October, 2015
Keywords: Vaccine; Trematode; Cestode; Nematode; Protozoa; Arthropods
Cite this as
Sumbria D, Singla LD (2015) Mammalian Parasitic Vaccine: A Consolidated Exposition. J Vaccines Immun 1(1): 050-059. DOI: 10.17352/jvi.000011
Parasites are highly prevalent in livestock worldwide and also infect over one fourth of the human population. Parasites are successful in evading host immune responses, and vaccination can prove to be an effective way to control them. However, currently very few vaccines are available against parasitic infection. Two important limitations in the emergence of effective parasitic vaccines are incomplete understanding of the immunoregulatory pathways involved in immunity, and the lack of precise information regarding host-pathogen interactions. Precise identification of parasite genes and the role of their products in parasite biology may assist in the identification of useful antigens, which could then be produced in recombinant systems. Many recombinant parasitic antigens have been successfully used in livestock and new vaccines are under trial. Numerous vaccine antigens have been defined to target a wide range of parasite species. Thus vaccines offer a green solution to control disease. Vaccines have multiple beneficial effects, such as improvement of animal health and welfare by controlling animal infestations and infections; diminishing resistance to anthelmintics, acaricides and antibiotics; improving public health by controlling food-borne pathogens and zoonoses related to animals; keeping animals and the environment free of chemical residues; and maintaining biodiversity. This review is an attempt to consolidate all commercial and under-trial vaccines for mammalian parasites.
### Introduction
According to estimates by the FAO (Food and Agriculture Organization) and WHO (World Health Organization), the human population will reach around 9 billion by 2050, so in order to feed it, a clean, healthy livestock population will be needed alongside agriculture as an alternative food resource, because food requirements will increase by up to 50% [1,2]. Moreover, it has been estimated that a mere 6% reduction in animal disease could provide food for an additional 250 million people [3]. Livestock productivity is greatly hampered by various diseases (viral, bacterial, fungal and parasitic), among which parasitic infections play a crucial role [2]. The word “parasite” was first used in 1539; it is derived from the Greek para- (alongside) and sitos (food). Parasites are divided into various groups, viz. trematodes (flukes), cestodes (flat worms), nematodes (round worms), arthropods and protozoa. The majority of parasites have two hosts, one acting as the intermediate host while the other acts as the definitive host. All parasites are responsible for causing disease, and some cause the most devastating and prevalent diseases in both humans and animals. Compared with exotic breeds of cattle, indigenous breeds have shown some resistance to these pathogens, but the susceptibility of highly productive exotic breeds poses a major encumbrance to the development of the cattle industry and to the improvement of meat and milk production in developing countries [1,4].
As per WHO estimation, at present 3.5 billion people worldwide are affected by diseases and 450 million have diseases due to infecting parasites [5]. In Australia and New Zealand the annual losses caused by bovine neosporosis is about $100 million annually [1] while in Switzerland it is about 9.7 million Euros [6]. Whereas illness caused by water born outbreak of cryptosporidiosis causes a total loss of$ 96.3 million, out of which $31.7 million are lost in medical costs and$ 64.6 million are lost in productivity losses. The average total costs for persons with mild, moderate, and severe illness were $116,$475, and $7,808, respectively [7,8]. In order to come out from adverse effect of these parasites an urge of effective control is needed. Up till now the control strategy of parasites relies mainly on the use of chemotherapy like anthelminthes, antiprotozoal drugs and insecticides etc, as they are safe, cheap and effective against a broad spectrum of parasites [9,10]. But indiscriminate use of these drugs led to the emergence of drug resistance in many targeted parasites [11]. On the same time, issues of residues in the food chain and environment have arisen, which threaten their sustained use [12]. So scientist and researcher are now a days concentrating on development of alternate sustainable methods like vaccinations, novel therapeutic regimens and immnuo-modulations against these parasite [2]. The term “vaccine” was first coined by Edward Jenner in 1881; it was derived from Variolae vaccinae (smallpox of the cow). Vaccines are used to generate antibodies and boost immunity against a disease, and usually contain an agent which may be the microorganism, its product, toxins or one of its surface proteins, treated/modified to be used as an antigen without causing disease [13]. It can be prophylactic or therapeutic. Vaccination helps in the development of acquired immunity by inoculating non-pathogenic but immunogenic components of the pathogen, or closely related organisms. In animal science the vaccines comprise only approximately 23% of the global market for animal health products; the sector is growing consistently [14]. ### Discussion The main types of vaccine used are (http://www.vaccines.gov/more_info/types/): 1. Live attenuated vaccines: These vaccines are produced using the attenuated strains of microbe which has lost its pathogenicity but has antigenicity. Example: Paracox vaccine having eight precocious lines of Eimeria species. Livacox having precocious lines of only Eimeria acervulina and E. maxima, together with an egg-adapted line of E. tenella [15]. 2. Inactivated vaccines: These vaccines has dead etiological agent of the disease done either by radiations/heat/chemical (formaldehyde/beta-propionlactone). Example: inactivated anti-Philasterides dicentrarchi vaccine [16]. 3. Subunit vaccines: These vaccines have only the best antigen (epitope) part of microbe which can start best immune response. Subunit vaccines can contain anywhere from 1 to 20 or more antigens. These vaccines can be made in one of two ways: • Microbe is first grown in the laboratory and then chemicals are use to break it apart and important antigens are collected. • Using recombinant DNA technology the required antigen molecules from the microbe is manufacture. These vaccines are also called “recombinant subunit vaccines.” Example: Peptide-based subunit vaccines for malaria parasites [17], CoxAbic for Coccidia. 4. 
Toxoid vaccines: For those microbes which secrete toxins, or harmful chemicals, a toxoid vaccine might be the answer. The desired toxins are inactivated by treating them with formalin (a solution of formaldehyde and sterilized water). Such “detoxified” toxins, called toxoids; are safe for use in vaccines. 5. Conjugate vaccines: Many microbes has polysaccharides molecules on its outer coating as many harmful bacteria do, so scientists may try making a conjugate vaccine by using it. For conjugate vaccine targeted antigen (epitope) is linked so that infant’s immune system can recognize to the polysaccharides. The helps the immature immune system react to polysaccharide coatings and defend against the disease-causing microbe. 6. DNA vaccines: These vaccines are also called third generation vaccine. This concept was introduced in 1990. Once the desired gene of microbe has been analyzed, direct intramuscular injection of plasmid DNA in myocytes was given for the induction of protein expression and immune system gets activated. DNA vaccines induce strong humoral and cellular immunity and have the potential to increase immunogenicity through modifications of the vector or incorporation of adjuvant-like cytokine genes. The cells of body take up the injected DNA and start secreting antigen, in other words the body’s own cells become vaccine-making factories, creating the antigens necessary to stimulate the immune system. Examples: Vaccine against murine leishmaniasis are (Antigen-GP-63, dose- 2×100 μg IM, Parasite- Leishmania major; Antigen-LACK (Leishmania major-activated C kinase) dose- 2×30 μg IN Parasite- L. amazonensis,/), in Phlebotomus papatasi a salivary components i.e. SP15, was used as tested as DNA vaccines against L. major. For Trypanosoma cruzi (Antigen-TSA-1, type of antigen-TS family, dose- 2×100 μg IM) [18]. In Schistosoma mansoni large subunit of calpain (Sm-p80) and either mouse GM-CSF or IL-4 was used as DNA vaccine to determine their adjuvant effect in mice [19]. 7. Recombinant vector vaccines: These are similar to DNA vaccines, but they use an attenuated virus or bacterium to introduce microbial DNA to cells of the body. “Vector” refers to the virus or bacterium which can be used as the carrier. Recombinant vector vaccines closely mimic a natural infection and therefore do a good job of stimulating the immune system. Example: Immunization with “Maxadilan” which is a potent vasodilator from sand-fly antigen (as a recombinant vaccine) protected mice against L. major infection [20]. Parasite vaccine production is rather very difficult as compare to other microorganism because of their large size, complex life cycle and difficulty in there in vitro culturing. So, precise work is to be done in this aspect. ##### Vaccine for cestodes Taenia Vaccine: In sheep Excretory-Secretory (ES) material from Taenia ovis oncospheres could induce sterile immunity when it is associated with antibodies binding to the 16, 18 and 45-kDa molecular weight antigens. Later on immunization of sheep with recombinant forms of these antigens (T. ovis antigens 45W, 16.17 and 18K) was also conducted successfully [21]. T. ovis 45W is a member of a family of genes comprising a minimum of 4; 45S differ from 45W at 11 of 985 nucleotides sequence of mRNA, animals vaccinated with other protein encoded by this variant gene were not protected against T. ovis infection. The 45W agent induces IgG1 and IgG2 antibodies; they provide a high degree of protection in animals [22]. 
The oil adjuvants, saponin and DEAE-dextran gave the highest antibody responses and greatest degree of protection against challenge infection with T. ovis eggs. For T. solium it was found that crude antigen preparations derived from oncospheres induce complete protection in pigs [22]. It was later noted that extracts of T. crassiceps cysticerci contain antigens which are also protective against T. solium infection in pigs. Homologues proteins of T. saginata (TSA-18 and TSA-9, similar to the TO 45-kDa antigen i.e. TO-45W) in cattle generated a good response [21]. Trials conducted in Mexico, Peru, Honduras, and Cameroon showed 99–100% protection against T. solium using TSOL 18 oncosphere antigen. Moreover this vaccine completely eliminated the transmission of T. solium by the pigs involved in the trial [23]. Other important agents are proteins such as TSOL 45 and TSOL 16. In case of Taenia crassiceps (mice) and T. solium (pigs) a vaccine candidate (designated S3Pvac) based on 3 synthetic peptides, KETc1, KETc12 and GK1 having 12, 8 and 18 amino acids, respectively were also shown to be effective [24,25]. It produces 90% protection in mice after successfully expressed in 19 different transgenic papaya clones [26]. Echinococcus: In case of Echinococcus granulosus oncosphere antigens provided high protection in sheep. In E. granulosus EG-95 and EM-95 from E. multilocularis provided protection in sheep and cattle up to 99%, moreover they are homologues to Taenia vaccine antigens [22]. Out of these EG-95 is the only field trial-tested vaccine candidate against hydatidosis (Echinococcus infection). Now a days attempt has been made to express EG-95 in plants part along with a fibrillar antigen EG-A31. Alfa Alfa leaves are infected by modified Agrobacterium tumefaciens (updated name of Rhizobium radiobacter), which has a recombinant plasmid by electroporation (pBI–Eg95–EgA31). Its result showed significantly decreased (64.1%) in weight of hydatid cyst; moreover antigen specific IgG, IgG2b and IgE was also higher in BALB/c mice by oral immunization method [10]. When canine are vaccinated with adult-stage recombinant EgM proteins, it show great reduction in maturation to egg production as well as lowered the worm burdens very effectively [21]. Later on, trails on these cestode parasite (T. ovis and E. granulosus) was stopped because mainly these infection are detected at the time of slaughter and there is no loss to livestock owner, moreover the cost of manufacturing these vaccine agents was more than its return so funding industries took least interest [22]. ##### Vaccine for trematode Fasciola hepatica: In this case many agent has been tried; proteases such as the leucine aminopeptidase (LAP) which are involved in parasite blood digestion, reduces worm loads in rabbits by >75%. Other important agents targeted against Fasciola hepatica and F. gigantica in sheep and cattle are GST (glutathione-S-transferase), trematode hemoglobin, cathepsin proteases (CP) L1, L2 and FABP (fatty acid binding protein), among all these agents CP play a key role in migration, immune evasion and feeding through host tissue material [27]. In cattle it produce high levels (>70%) of protection. Recombinant DNA constructed by encoding F. hepatica GST had a high humoral response to the mice. In lettuce (Lactuca sativa) and alfalfa (Medicago sativa,/) a 981 nucleotide cDNA fragment encoding the catalytic domain of the CP of F. hepatica was incorporated and this also induces an effective immune response in mice [10]. 
Moreover a schistosome protein Sm14 provides cross-protection between the two trematode parasites [27]. Schistosoma Vaccine: High levels of immunity (up to 90%) were developed in mice and primates against multiple exposures of irradiated cercariae of schistosome [21]. Some somatic agents which were used are cytosolic structural proteins (paramyosin) and glycolytic enzymes (aldolase) but they fail to generate high levels of immunity [21]. The most important vaccine target of the schistosome is the tegument. For Schistosoma mansoni - TSP-2 (Tetraspanin: found in outer tegument) has been used for development for human vaccine antigen in sub-Saharan Africa and Brazil, later on recombinant TSP-2 reduces adult worm burdens and liver eggs by >50 and >60% respectively [28]. In Europe and Africa for S. haematobium a recombinant 28 kDa Glutathione S-transferase (GST) was also used. Another candidate i.e. Sh28-GST (Bilhvax) appears to be immunogenic and well-tolerated in healthy conditions [28]. Moreover Sm14 which is a fatty acid binding protein has been use against both human schistosomiasis and fascioliasis in cattle and was effective. Immunization with the cercarial surface protein SmTOR reduced worms by up to 64%. Immunization with the membrane-associated large subunit of calpain Sm-p80 resulted in up to 70% worm reductions in mice and >50% worm reductions in baboons [21]. Other agents which can provide protection are Sm29, SmCD59-like and Sm200 [21]. In case of S. japonicum a 23 kDa membrane protein (Sj23) plays an important role in producing immunity and this antigen exists in all stages of the parasite [29]. Three doses of 3 plasmids encoding S. japonicum antigens, Sj62, Sj28 and Sj14 induced high levels of IFN-γ and partial protection from challenge infection when administered in mice [30]. These plasmids also produce antigen-specific IgG in mice moreover this gene was transferred into M. sativa through Agrobacterium. In pigs for S. japonicum an antigen i.e. SjCTPI (triose-phosphate isomerase) was used and 60% of vaccinated animals demonstrated antigen-specific antibodies against the parasite. Moreover significant reduction in hepatic worm burden (48.3%) and size of liver egg granulomas have also been noted [31]. ##### Vaccine for nematodes Hookworm Vaccine: For hook worm in 1964, infective stage of larvae i.e. L3 of Ancylostoma caninum was attenuated using 40,000 roentgens of X-ray and was used as vaccination agent by Miller et al. [32]. In 1970 for canine after US licensing, its commercial industrial manufacture was started as a first hookworm vaccine consisting of gamma-irradiated infective A. caninum L3 larvae [33]. But in year 1975 this vaccine was discontinued due to some drawbacks (vaccinated dogs were found to have eggs in faeces, cost of production and maintaining laboratory-canine model was high, deficiency of in vitro test to determine the efficacy of immune response and short shelf-life). Major target for vaccination against gastrointestinal nematode infection are the human hookworms i.e. Ancylostoma duodenale and Necator americanus. In human now a days one group i.e. Human Hookworm Vaccine Initiative (HHVI) is working for vaccine development. For control of N. americanus a 21 kDa protein i.e. Na-ASP-2 has been used as a vaccination agent [34]. In 2012 Necator americanus-glutathione S-transferase 1 (Na-GST-1) was also tried in Brazil [35]. To provide protection against A. caninum infection several antigens have been tried like VAL and ASP-2 [36]. 
ASP-2 increases the level of IgE and evokes allergic adverse effects. These agents lead to reduced egg production, as well as a lower degree of blood loss and anemia in infected patients. In case of canine the antigens targeted are Ac16 and As14 [37]. Haemonchus: In Haemonchus contortus infection younger animals remain highly susceptible, but in adults natural immunity develops after its repeat exposure. Earlier vaccination using irradiated larval was done but it acted poorly in lambs [38]. Up till now for H. contortus main vaccine targets are: A) H11: It is a microvillar integral membrane glycoprotein complex obtained from detergent extracts of H. contortus adult worms and generates 70-90% reduction in parasite loads [39]. Later on recombinant rH11 using a baculovirus-derived insect cell homogenate was also tried but it induced disappointingly low level of protection (30%). B) H-Gal-GP: H-gal-GP (Haemonchus galactose-containing glycoprotein) is also obtained from detergent extracts of adult H. contortus, followed by peanut agglutinin affinity chromatography, which binds to Gal b1,3 GalNAc disaccharide motifs, it resulted in >70% reduction in adult worm counts [40,41]. C) TSBP (Thiol-Sepharose binding protein): It was isolated using a method designed to purify cysteine proteases associated with H. contortus gut extracts [42]. First extracts of adult H. contortus are depleted from Hc-gal-GP by lectin binding, then subjected to thiol-sepharose affinity chromatography, to purify proteins with free cysteine residues, including (but not limited to) cysteine proteases. This showed 43-52% protection against challenge infection of Haemonchus. TSBP does not react with antisera to H11 or H-gal-GP but contains a different range of antigens, including a major glutamate dehydrogenase and minor cathepsin B-like cysteine proteases (hmcp-1, 4 and 6) which are the actual protective targets [21]. Against H. contortus infection the immunogenic properties of recombinant Cu/Zn superoxide dismutases, P46, P52, and P100 have also been assessed. Dictyocaulus: Against Dictyocaulus viviparous a commercial vaccine (containing X-irradiated infective larvae of lung worm) is available for cattle in Europe under trade name “Dictol”. The vaccine consisting of 2 doses each ‘containing 1000 irradiated larvae given at one month interval has been used with outstanding success. Calves are immunized at 3-7 weeks of age. The vaccination program of calves dairy should be completed before they go to grass in spring or early summer. In endemic areas, immunity is maintained by continuous exposure to infection. In India, X-irradiated larvae vaccine was developed against Dictyocaulus filarial infection in sheep and goat with similar success and marketed as “Difil” [43]. ##### Vaccine for protozoa Malaria Vaccine: Malaria is cause by various Plasmodium species and transmitted by various species of mosquitoes. The extracellular sporozoites and intracellular liver stages produce no clinical symptoms, so they are also regarded as an ideal target for vaccine intervention [44]. In 1960 for immunization against malaria, trials on mice were conducted by using irradiated sporozoites [45]. The live sporozoites attenuated by irradiation (IrrSpz) provided complete protection against sporozoite challenge in primate and mouse model [44]. In 2005 genetically attenuated parasites (GAP) produced sterile, protective immunity comparable to IrrSpz immunization, where protective immune responses are also critically dependent on CD8+ T-cells. 
In Colombia peptide base vaccine SPf66 was developed for primates but its trails in Asia and Africa failed [46]. Until now most effective vaccine tested is a hybrid protein molecule (pre erythrocytic) i.e. RTS,S also called as Mosquirix (RTS,S recombinant vaccine is based on the major Plasmodium sporozoite surface antigen; circumsporozoite protein-CSP) with adjuvant AS01 (having liposomes). This vaccine produce high level of antibodies, in its 1st trial 51% reduction of clinical cases has been reported from Kenya [47], 55% reduction cases occur when its trial were conducted in sub-Saharan region of Africa at 11 different sites in 2011. It is also observed that before use of this agent in vaccination programs, 34.3% of infants were positive with low titers for anti-circumsporozoite antibodies. After vaccination, 99.7% were positive at high titers (209 EU/ml) for anti-circumsporozoite antibodies. Its entire trial will be over in 2015 [5]. For anti-malarial vaccination in children of sub-Saharan African region, the European Medicines Agency (EMA)’s decision paves the way for a policy recommendation by the WHO [48]. Now a days many target gene are been use as vaccination agents to eliminate malaria such as UIS 3, UIS 4, P 32, P52, SAP1, SLARP, FabB/ F, PDH E3, PALM, LISP etc [28] moreover, parasitic antigens are also expressed in plants (Arabidopis thaliana seeds, Tobacco, Brassica napus, and Lettuce) such as MSP4/5 (Merozoite surface protein), MSP119, AMA1 (Apical membrane antigen), MSP1, CSP (Circumsporozoite protein), P230 (Gametocyte antigen), P25 (Surface antigen) etc [10]. Furthermore now a days various antigen has been identified that will help in targeting the liver stage of malaria [49]. Leishmania Vaccine: The disease caused by Leishmania spp has its zoonotic importance. In case of human mainly if a subject recovers from leishmaniosis it become resistant for further infection. Moreover it is also observed that vaccine against Leishmania provide protection against more than one species [8,50]. Various forms of vaccine have been tried such as: A) Live Leishmania vaccine: It is used in Israel, Russia, Iran and Uzbekistan but not yet licensed. In this promastigotes of L. major were cultured and used. Despite of adverse effect such as immune suppression, lesion etc [51] in Uzbekistan mixture of live virulent and killed parasite has been used. B) Whole/fractions of killed vaccine: In early 1940 whole-killed promastigotes were also tested as vaccines against CL and VL (cutaneous leishmaniosis and visceral leishmaniosis), in Brazil [52]. For canine, leishmaniosis parasite lysate vaccines fractionation led to the development of a glycoprotein enriched mixture termed as “FML antigen” and it provided 92% protection. In 1970 killed vaccine having five isolates of Leishmania of four different species was developed by Genaro and co-worker [53]. In Venezuela autoclaved Leishmania mexicana was used by Convit and his coworker [54]. For old world leishmaniosis, autoclaved L. major + BCG (Bacille Calmette Guerin) have been extensively studied and it depicted 18-78% reduction in case of CL. In mice and rabbits a subunit vaccine utilizing the fucose mannose ligand antigen has been shown to be a potent immunogen, moreover for sero-testing in human and canine kala-azar it act as a sensitive, predictive and specific antigen [55]. 
C) DNA, recombinant proteins vaccines and combinations: To stimulate lifelong protection, genetically altered Leishmania parasites are used because they lack cystein proteases or dyhydrofolate reductase enzyme [56]. In case of canine VL saponin formulation of fucose mannose ligand was found to be safe and is licensed as Leishmune (76-80% protection) veterinary vaccine [57]. The antibodies (Abs) produce by this vaccine do not allow the development of promastigote in fly. Moreover in dogs LiESAp-MDP produces long-lasting protection [58]. Now a days for the effective control of leishmaniosis some of the important agents for vaccines include: kinetoplastid membrane protein-11, amastigote specific protein A2, sterol 24-c-methyltranferase, K26/HASPB, Leishmania-activated C kinase, PSA (parasite surface antigen), LACK (Leishmania activated C kinase), gp63 (surface expressed glycoprotein leishmaniolysin reconstituted in liposomes), Leish-111f (Leishmania derived recombinant polyprotein), cysteine proteinase B, KMP11, nucleoside hydrolase, open reading frame F and tryparedoxin peroxidase [13,59]. Out of these Leish-111f product (99.6% protection) is the first defined vaccine against leishmaniosis to be use in to primate clinical trials [60]. This contains L. major stress inducible protein-1, L. major homolog of eukaryotic thiole-specific antioxidant, L. braziliensis elongation and initiation factor, in formulation with MPL-SE, and the results denoted that it provide protection in mouse models for CL and VL, but failed to prevent natural L. infantum infection. For L. major, liposomal soluble antigen incorporated with phosphorothioate CpG ODN (PS CpG) or phosphodiaster CpG ODN (PO CpG) has also been tested for CL. D) Live-attenuated Leishmania vaccines: In case of mice use of dihydrofolate reductase thymidylate synthase (dhfr-ts) parasites led to the effective protection. Currently use of L. donovani centrin null mutants (LdCEN-/-) in mice showed reduced parasitic burden in the spleen [61]. Biochemically and radio attenuated parasite have also provided high protection in hamsters and rodents without any adjuvant [62]. Recently an intranasal vaccine for Leishmania amazonensis antigens (LaAg) to provide protective immune responses against Leishmania (infantum) chagasi by using the CAF01 association has also been tried. A significant reduction in their parasite burden in both the spleen and liver, along with an increase in specific production of IFNγ and nitrite, and a decrease in IL4 production was observed in LaAg/CAF01 vaccinated mice. Furthermore there was increased lymphoproliferative immune response after parasite antigen recall [63]. In recent times a polyproteins vaccine when administered in association with an adjuvant, provide protection against VL. This vaccine has two Leishmania infantum hypothetical proteins present in the amastigote stage, LiHyp1 and LiHyp6, were combined with a promastigote protein, IgE-dependent histamine-releasing factor (HRF) [64,65]. Amebiasis Vaccine: Entamoeba spp mainly causes diarrhea (watery or contains blood and mucus) and vomiting. In case of mammalian cells the binding of trophozoites of Entamoeba histolytica is mediated by a protein (serine rich). The galactose and N-acetyl-D-galactosamine-specific lectin on the surface of the amoeba is a potent immune-dominant molecule that is highly conserved and has an essential role in the stimulation of immune responses. 
The structure of the lectin has been defined, and the heavy subunit with its cysteine rich region has been demonstrated in animal models (mice) to have some efficacy as a possible vaccine agent for prevention of amoebic infection [66]. Moreover the N-Acetyl-D-galactosamine-inhabitable E. histolytica lectin (GAL/GALNAC) also mediates the adherence of trophozoites. This antigen had shown protective effect on 66% of the animals against the amebiasis. Other candidates under investigation in case of developing vaccine for amebiasis are oral/intranasal administration of the galactose and N-acetyl-D-galactosamine lectin, cysteine proteinases, the serine rich E. histolytica protein, lipophosphoglycan, amebapores and 29-kDa protein (peroxiredoxin) [5]. Some workers are using the vaccination agent in plants (lectin) against E. histolytica (LecA) by Plastid transformation [10]. When DNA plasmids encoding either E. histolytica cysteine protease 112 or adhesion 112 were co-administered to hamsters, they provided protection against liver abscess formation [30]. Trypanosome: Earlier the beta tubulin gene of Trypanosoma evansi (STIB 806) after cloning in E. coli [2] was used as vaccine agent, later on recombinant beta tubulin was also expressed in E coli. For T. brucei DNA vaccine (TSA protein) provide protection of 60% cases. A recombinant agent MAPp15 (microtubule protein) provided complete protection against haemoparasitic infection. For protection against T. cruzi intramuscular DNA vaccine containing the TcPA45 gene (39kDa) was used and it was observed that there is 85% decrease in parasitaemia levels after challenge with infective forms of the parasite. When its recombinant form i.e. rTcPA45 protein was used as intra peritoneal injection there was decrease up to 95% parasitaemia level in mice after a lethal dose of T. cruzi. Both protocols were able to trigger specific B cells and high levels of antibodies anti-rTcPA45 were also detected in sera [67]. Moreover an enzyme “cyclophilin” was identified and trails were conducted on its recombinant form in E. coli. Trichomoniasis: Tritrichomonas foetus mainly causes abortion in cattle. A killed whole-cell protozoan vaccine (Trichguard) provides protection when given @ 1-2 ml [2]. In vaccination trail it was observed that during and after the 90-day breeding period heifers immunized showed faster rise in systemic antibodies level as well as better pregnancy rates. In conclusion, if this vaccine was given before breeding and early in the breeding season by both SQ and intravaginal route, it can yield superior protection for heifers exposed to bulls infection [68]. In case of bulls vaccination with whole cell antigen showed that IgG antibodies specific for protective antigens of T. foetus in the preputial secretions and serum [69]. Coccidia: In late 1940 a live sporulated oocyst vaccine (Coccovac-B) was produced. Its strains are E. tenella, E. acervulina, E. maxima and E. mivati. Another vaccine based on live sporulated oocyst (Coccivac-D) having 8 different species of Eimeria viz., E. maxima, E. burnetti, E. acervulina, E. mivati, E. necatrix, E. hagani, E. tenella and E. praecox has been developed in 1970. Coccivac-T having live sporulated oocysts of E. gallopavonis, E. adenoids, E. meleagrimitis and E. dispersa was also used for vaccination [5]. Despite the presence of ionophore compound a live vaccine (COXATM) remain fully active. It has 3 main strains i.e. E. tenella, E. acervulina and E. maxima. Another vaccine (Eimeria vax 4 m) having E. 
tenella (150 oocysts) strain Rt3+15, E. maxima (100 oocysts) strain MCK+10, E. acervulina (50 oocysts) strain RA, and E. necatrix (100 oocysts) strain mednic3+8 in PBS (phosphate buffer saline) are been used with a titer of 1.6×104 oocysts ml-1. It is safe in day old chick. Others commercially used vaccines are: CoxAbic (subunit vaccine form macrogametocyte of E. maxima), Immucox (Oral vaccine-developed in Canada by Vetech Laboratories), Livacox T/Q (live attenuated vaccine), Paracox-8 (E. tenella, E. maxima, E. acervulina, E. mitis, E. burnetti, E. necatrix and E. praecox). In U.S another vaccine named as Advent was recently developed by Viridus Animal Health. It has more viable oocysts (truly sporulated oocysts that can cause immunity) than other vaccines. As Coccivac, immucox, advent are not are not “attenuated” so they can actually cause some lesions and occurrence of coccidiosis in birds. On other hand, vaccines such as Paracox, and Livacox used in Europe are attenuated. They are altered because the coccidia used in the vaccine are designed to mature quickly and have a short (precocious) life cycle and low fertility. They are not pathogenic-disease causing and are less costly to produce than the non-attenuated vaccines. These vaccines are marketed in other countries but not currently in the U.S [70]. Recently a microneme protein, EtMIC2 of E. tenella was incorporated by using Agrobacterium in tobacco leaves [5]. Feeding of this transgenic plant resulted in the higher weight gain, reduction in oocyst output and high antibody production. Later on EtMIC1 was also used in poultry and its efficacy was compared with EtMIC2, it was observed that serum antibody response and weight gain was better in former one [10]. Anaplasma: In United States first trial to develop vaccine against anaplasma was conducted, it contains killed Anaplasma marginale and marketed as “Plazvax”, moreover “Anaplaz” (non living lyophilized preparation with adjuvant) was also commercialized later. In 1989, irradiated A. marginale was used in deers and sheep. Later on vaccine having this stock (Anavac) has been developed by 58 passages at university of Illinois. These vaccines only protect animal form the development of clinical disease but have no effect on infection by anaplasma. Other agents used for vaccination are MSP1b (major surface protein), it produced significant antibody response and partial protection (two out of six immunized animals were protected) when challenged with cryo-preserved parasites [31]. Moreover infection with A. centrale, an organism originally isolated in South Africa provides partial crossimmunity against A. marginale challenge. In A. central two surface proteins (36 and 105 kDa) induce a protective immune response in calves to homologous and heterologous challenge. Giardia: In canines, a killed cultured trophozoites vaccine (Giardiavax) is been used against Giardia lambia. It mainly has a crude preparation of disrupted, axenically cultured G. duodenaalis isolates derived from sheep [31]. Toxoplasma: Earlier in pig partial protection from the development of Toxoplasma gondii tissue cysts was developed by using crude fraction of T. gondii rhoptry proteins incorporated into an ISCOM (immune stimulating complexes) adjuvant. Recently in pig intradermal inoculation of T. gondii GRA-1-GRA-7 DNA cocktail, developed a strong humeral immune response [1]. S48 strain (Toxovax), is a live vaccine and it inhibit development of T. gondii in both cat and sheep [69]. 
IFN-gamma targets bradyzoites or oocysts and clears the parasite within 14 days after infection. This strain was originally isolated from an aborted ovine foetus in New Zealand and was passaged over 3000 times in laboratory mice, initially to provide a source of antigen for diagnostic purposes. This live vaccine, Toxovax (live organisms of the attenuated, incomplete strain S48 of Toxoplasma gondii), is currently the only commercial vaccine for toxoplasmosis worldwide. Bradyzoites of the live mutant T. gondii strain T263 also help to provide protection. MIC6 and MIC8 (microneme proteins 6 and 8) have also been used as vaccine agents. In some countries a recombinant PDI (rTgPDI) has also been used for vaccination. Nowadays further candidate proteins are available for vaccine purposes; in total, T. gondii has about 1,360 specialized protein families [71]. Some proteins, such as the surface antigen glycoproteins (SAGs), are important for host cell attachment and host immune evasion, and T. gondii possesses 182 SAG-related sequences. The main ones are SAG1 and SAG2, the most abundant proteins in tachyzoites, and they can be used as vaccine agents. Some workers are also producing vaccination agents in plants (tobacco leaves), such as SAG1 (surface antigen) and the 40 kDa GRA (dense granule protein), by the agro-infiltration method [10]. Another important protein is AMA1 (apical membrane antigen 1), which assists host cell penetration; because of this property, AMA1 is also considered a potential vaccine candidate. Rhoptry proteins have also been targeted, of which ROP2, ROP3, ROP4, ROP7, and ROP8 are of veterinary importance [72,73]. Other useful antigens targeted for vaccination purposes include TS-4 (temperature-sensitive mutant) at 2×10^4, TLA (with Lactobacillus casei as adjuvant), rROP2/4, Zj111/pSAG1-MIC3, KO-strain, pVAXROP16, pVAXROP18, RON4, AdSAG3, AdSAG2, AdSAG1, pME18100/HSP70, etc. [74]. In T. gondii infection, the cytokines secreted during the immune response include up-regulated factors (IFNγ, IL-2, TNFα, IL-1, IL-7, IL-12, IL-15) and down-regulated factors (IL-4, IL-6, IL-10) [75]. Theileria: An attenuated schizont culture of a Theileria parva-infected lymphoblastic cell line, at 10^8 attenuated cells, was used for vaccination of cattle by the subcutaneous route. A cocktail of strains [T. parva (Muguga), T. parva (Nugong) and T. parva lawrencei] in GUTS was also used in many African countries. In cattle, an agent named p67 (67 kDa MW) was used earlier in Kenya against T. parva infection. Later its recombinant form was also tested with recombinant vaccinia virus and Salmonella typhimurium, but failed to give good results. For T. annulata, the agent used was SPAG-1 [1]. In T. annulata infection, in vitro cell-culture-attenuated vaccines (Rakshavac-T) provide protection of about 95-100%; they should be avoided in pregnancy. In T. parva infection, a mixture of sporozoites and schizonts provides a good immune response. The major merozoite surface antigen of T. annulata (Tams-1) has also been expressed in recombinant form and used in small-scale immunization trials. Later it was observed that animals recovered from infection have special cytotoxic T-cells which destroy the lymphocytes infected with T. parva. Schizonts in lymphocytes carry two proteins which stimulate this response [5]. The genes responsible for this were identified and their recombinant forms were used as vaccination agents. A tissue-culture-attenuated schizont vaccine for T. annulata was also developed, in which parasite multiplication is confined to 1×10^9.
In National Dairy Development Board (1989), Anand, India a vaccine (Raksha Vac-T) was developed using T. annulata ODE strain by 150 passages @5×106. It can be given in calf more than 2 year of age. In Punjab agricultural university (PAU) Ludhiana, India, a vaccine was developed which can be given to 7 day old calf by using Hissar isolate, by 100-150 passage @1×106. The seed culture should be tested for some viral infection (infectious bovine rhinotrachitis, blue tongue virus, bovine viral diarrhea, bovine leukemia virus, bovine cyncytial virus, bovine immunodificency virus, bovine parainfluenza virus type –II and rinderpest virus). Babesiosis: After the discovery that these organisms can be attenuated by sequential passage in splenectomised calves the commercial live attenuated vaccine became available. In earlier days by passaging (20-30 times) 107 parasites a vaccine was prepared but later on it showed many short coming. Its shelf life was only 5-7 days at 5ºC. In Venezuela Babesia bigemina cultured derived soluble exo-antigen vaccine with saponin as an adjuvant was developed; it has stability of 2 year at 4ºC. For B. bovis the agents targeted by chromatography for vaccination were 11C5, 12D3 and 21B4. For canine babesiosis two subunit vaccines have been developed [11]. They consist of soluble parasitic antigens (SPA) that are released into the culture supernatant by in vitro-cultured parasites, combined with adjuvant. The first vaccine released was “Pirodog” (France), for B. canis cultures, whereas recently released “NobivacPiro” contains SPA from B. canis and Babesia rossi in an attempt to broaden the strain-specific immunity. In cattle an in vitro cultured live attenuated vaccine can produce good immunity mainly for Babesia bovis. For culturing Micro Aerophilus stationary phase (MASP) technique is used [14]. Neosporosis: For neosporosis there are no live vaccines commercially available to provide protection. Later on attenuated vaccine was also used and it was observed that NC-Nowra strain was more virulent than NC-Liverpool strain in mice [1]. In cattle a tachyzoite base vaccine [Neoguard (A killed vaccine having Neospora tachyzoites with an adjuvant -Havlogen)] has been used to prevent abortion caused by Neospora caninum. 50% reduction in abortion was seen but it should be avoided in pregnancy. In some country live tachyzoites maintained on vero cell line in certain specific condition (37ºC, 5 % CO2 and RPMI 1640 medium with 2 % horse serum and penicillin-streptomycin) was also used as vaccine [14]. Sarcocystosis: Chemically treated in vitro culture merozoites mixed with adjuvants (EPM vaccine) is used as vaccine for sarcocystidae family members. In equines it provides protection against neurological sign (equine protozoal myeloencephalitis) due to infection from S. neurona. A vaccine having rSnSAG1 (recombinant S. neurona surface antigen gene 1) can also protect horses [22]. Cryptosporidiosis: It is an emerging topic due to its zoonotic aspect. Animals vaccinated with killed C. parvum oocysts showed reduced oocyst levels and diarrhea in controlled condition, but this agent did not prove to be efficacious when tested under field conditions. Recombinant C. parvum, C7 protein containing the 101C terminal of the P23 antigen was used to immune Holstein cows in their late gestation, and there colostrums showed a significant reduction in oocyst shedding and also provided protection against diarrhea. When recombinant C. 
parvum oocyst surface protein rCP15/60 was used for vaccination, it helped to prevent cryptosporidiosis in livestock [1,5].
##### Vaccines for arthropods
Among arthropods the most important species are the ticks, which transmit various diseases. In India alone, the cost of ticks and tick-borne diseases in animals has been estimated at around US$ 498.7 million [67]. Several tick-derived agents have been tested in vaccination programs. Earlier, whole-body homogenate of the salivary gland was used as a vaccine agent. For Boophilus microplus a subunit vaccine, TickGARD, was made in Australia. It contains the recombinant tick gut concealed antigen BM86 (located in the mid-gut of the tick). In 1996 this vaccine was re-registered as TickGARD Plus (E. coli-expressed BM86+BM91). "Gavac", another version of TickGARD developed in Cuba, contained recombinant BM86 expressed in Pichia pastoris and provided protection of up to 85%. Nowadays various endosymbionts have also been identified and are used as vaccination agents, because ticks depend only upon host blood, which may not provide all the nutrients they need, so endosymbionts are necessary for them [60]. Sterility was observed in healthy tsetse flies fed tetracycline (2500 μg/ml) owing to damage to the mycetome bacterial endosymbionts [76]. In Haemaphysalis longicornis a 29 kDa salivary gland-associated protein was identified, and its recombinant protein produced in E. coli reduced adult female engorgement weight and caused 40% and 56% mortality of larvae and nymphs post-engorgement. In R. appendiculatus, a 15 kDa protein (64P) and its recombinant versions (64 TRPs) reduced the nymphal and adult infestation rates by 48% and 70%, respectively [77]. DNA vaccination of Merino crossbred sheep against B. microplus using the full-length Bm86 gene was also tried [78]. Moreover, co-vaccination with Bm86 and GM-CSF plasmids gave a statistically significant reduction in the fertility of ticks. Development of combined vaccines against the T. parva–R. appendiculatus and T. annulata–H. a. anatolicum systems has also been attempted. At the Indian Veterinary Research Institute (IVRI), India, the recombinant Tams-1 antigen of T. annulata (Parbhani strain) and the Bm86 homologue antigen of H. a. anatolicum (Izatnagar isolate), rHAA86, were produced in E. coli and Pichia pastoris. Against T. parva infection, a novel subunit vaccine has recently been evaluated; additionally, the homologue of Bm86 has been discovered in R. appendiculatus (Ra86). Some genes that are present across phylogenetically distant species have been identified in I. scapularis and were designated 4D8 (also identified as subolesin), 4F8 and 4E6 [79,80]. The subolesin protective antigen was demonstrated to extend to other tick species [81]. In the case of A. variegatum and A. americanum, whole-nymph and gut extracts, respectively, were used. For Lucilia cuprina an agent has been isolated from its peritrophic membrane (peritrophins 95, 48 and 44) and used for vaccination. In Hypoderma bovis, hypodermin A from the first instar provided the greatest level of protection. In Anopheles quadrimaculatus, whole female homogenate provided protection in rabbits.
Other parasites and their targeted antigenic agents are listed in Table 1.
### Conclusion
The concise information provided in this review regarding progress in vaccine development for various parasitic diseases of livestock and humans may serve as a guideline for veterinary clinicians and academicians.
Thanks are also due to Dr. Sumedha Bhandari (Assistant Professor of English, Department of Agricultural Journalism, Languages & Culture, Punjab Agricultural University, Ludhiana) for editing the manuscript for correct use of the English language.
© 2015 Sumbria and Singla. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
# Library: Xapian
Xapian provides search capabilities to your program. It supports fulltext search and complex boolean queries. Xapian uses an index, so you have to add data to an index before you can search the index. The index can be on-disk or in-memory; if it’s on-disk, you won’t need to rebuild it every time your program starts.
## Makefile
Use a Makefile like this. You’ll need to have installed Xapian so that xapian-config is executable. This works on Londo.
CXX=g++
CXXFLAGS=-g -Wall -ansi `xapian-config --cxxflags` `xapian-config --libs`
all: main
main.o: main.cpp
$(CXX)$(CXXFLAGS) -c main.cpp
main: main.o
$(CXX)$(CXXFLAGS) -o main main.o
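For orientation, here is a minimal, self-contained sketch of indexing and searching with Xapian's C++ API. This example is not part of the original notes; the index name `test.idx` and the indexed text are placeholders chosen to line up with the delve examples below.

```cpp
#include <xapian.h>
#include <iostream>
#include <string>

int main() {
    try {
        // Open (or create) an on-disk index; delve can inspect this file later.
        Xapian::WritableDatabase db("test.idx", Xapian::DB_CREATE_OR_OPEN);

        // Index one document. The "data" slot holds whatever we want back at search time.
        Xapian::Document doc;
        doc.set_data("gutenberg/0ws0610.txt");
        Xapian::TermGenerator termgen;
        termgen.set_stemmer(Xapian::Stem("en"));
        termgen.set_document(doc);
        termgen.index_text("The Comedie of Errors, by William Shakespeare");
        db.add_document(doc);
        db.commit();

        // Search the index with a parsed (boolean-capable) query.
        Xapian::QueryParser qp;
        qp.set_stemmer(Xapian::Stem("en"));
        qp.set_stemming_strategy(Xapian::QueryParser::STEM_SOME);
        Xapian::Query query = qp.parse_query("errors AND shakespeare");
        Xapian::Enquire enquire(db);
        enquire.set_query(query);
        Xapian::MSet matches = enquire.get_mset(0, 10);
        for (Xapian::MSetIterator it = matches.begin(); it != matches.end(); ++it) {
            std::cout << it.get_rank() + 1 << ": "
                      << it.get_document().get_data() << std::endl;
        }
    } catch (const Xapian::Error &e) {
        std::cerr << e.get_description() << std::endl;
        return 1;
    }
    return 0;
}
```

Save it as main.cpp so the Makefile above builds it; running it once creates test.idx, which you can then inspect with delve as shown next.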
## Delve: Tool for inspecting indexes
Xapian comes with a command-line tool called delve. If you created an index on-disk (as opposed to in-memory), you can use this tool to see what’s in there.
You can get some stats:
$ delve test.idx
UUID = cb27339f-1cfd-49aa-81a3-539909fb0887
number of documents = 31
average document length = 355968
document length lower bound = 72850
document length upper bound = 4749922
highest document id ever used = 31
has positional information = true

And you can list all details for some record (document) number:

$ delve -r 1 -d test.idx
Data for record #1:
/Users/JoshuaEckroth/Documents/git/csci221/2014-fall/class-examples/xapian/gutenberg/0ws0610.txt
***The Project Gutenberg's Etext of Shakespeare's First Folio***
**********************The Comedie of Errors*********************
This is our 3rd edition of most of these plays. See the index.
Copyright laws are changing all over the world, be sure to check |
Status: Confirmed
Series: RENC-THEO
Domains: hep-th
Date: Thursday 11 March 2021
Time: 11:00
Institute: IHP
Room: Zoom
Speaker: Daniele Dorigoni
Speaker's institution: Durham
Title: An exact integrated correlator in $N=4$ SU(N) SYM
Abstract: Among all the magical properties of $\mathcal{N} = 4$ SU(N) super Yang-Mills, perhaps one of the most important is Montonen-Olive electric-magnetic $SL(2,Z)$ duality. In particular, this leads to the constraint that observables must be invariant under inversion of the complex YM coupling $\tau$, i.e. under $\tau \to -1/\tau$. In this talk we will focus on one such physical quantity, namely an integrated correlator of four superconformal primaries of the stress-tensor multiplet. I will first review how this correlator can be computed via supersymmetric localisation on $S^4$, and then discuss how this quantity can be rewritten in a manifestly $SL(2,Z)$-invariant way for any number of colours N and any value of the complex YM coupling $\tau$. Thanks to this novel expression we can explore various different regimes: perturbative SYM, the large-N supergravity approximation, and the large-N 't Hooft expansion. All of these regimes are connected via a remarkable Laplace-difference equation relating the SU(N) correlator to the SU(N+1) and SU(N−1) correlators.
arXiv preprint number: 2102.08305
Comments: The Zoom credentials will be announced here in due time.
Attached files: RencontresDorigoni.pdf (5515170 bytes)
# Math Help - Indefinite Integral Problem- Can anyone check my work
1. ## Indefinite Integral Problem- Can anyone check my work
$\int (\frac{2}{3x^4})dx$
$\int (2\times{3x^{-4}})dx$
$\int (6x^{-4})dx$
$6\int (x^{-4})dx$
$6 (\frac{x^{-3}}{-3})+c$
$-2x^{-3}+c$
$\frac{-2}{x^3}+c$
Thanks!
2. Originally Posted by Jim Marnell
$\int (\frac{2}{3x^4})dx$
$\int{ (2\times {\color{red}3} x^{-4}})dx$
The 3 in the first step was in the denominator, so the second step should be
$\frac{1}{3}\int (2\times{x^{-4}})dx$
Thanks!
Red
3. $\int (\frac{2}{3x^4})dx$
$\frac{1}{3}\int (2\times{x^{-4}})dx$
$\frac{1}{3}\int (2x^{-4})dx$
$\frac{1}{3}\times{2}\int (x^{-4})dx$
$\frac{2}{3}\times{\frac{x^{-3}}{-3}}+c$
$\frac{-2}{9}x^{-3}+c$
$\frac{\frac{2}{9}}{x^3}+c$
I'm not sure if my final step is right. Thanks for any help!
4. Originally Posted by Jim Marnell
$\int (\frac{2}{3x^4})dx$
$\frac{1}{3}\int (2\times{x^{-4}})dx$
$\frac{1}{3}\int (2x^{-4})dx$
$\frac{1}{3}\times{2}\int (x^{-4})dx$
$\frac{2}{3}\times{\frac{x^{-3}}{-3}}+c$
$\frac{-2}{9}x^{-3}+c$
$\frac{\frac{{\color{red}-}2}{9}}{x^3}+c$
You forgot the "-" sign; this can further be written as
$\frac{-2}{9x^3}+c$
I'm not sure if my final step is right. Thanks for any help!
Red
Game Over XX! |
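(Not part of the original thread.) A quick check of the corrected answer by differentiation: $\frac{d}{dx}\left(-\frac{2}{9x^{3}}+c\right) = -\frac{2}{9}\cdot(-3)x^{-4} = \frac{2}{3x^{4}}$, which matches the original integrand.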
# Past Papers’ Solutions | Edexcel | AS & A level | Mathematics | Core Mathematics 1 (C1-6663/01) | Year 2016 | June | Q#6
Question
A sequence is defined by
,
where k is a constant.
a) Write down expressions for and in terms of k.
Find,
b) in terms of k, giving your answer in its simplest form.
c) .
Solution
a)
We are given that sequence is defined by
We are required to find and .
We can utilize the given expression for general terms beyond first term as;
Similarly;
b)
We are required to find;
We can substitute given and found from (a);
c)
We are required to find; |
Collaborating
A project can survive badly-organized code; none will survive for long if people are confused, pulling in different directions, or hostile. This appendix therefore talks about what projects can do to make newcomers feel welcome and to make things run smoothly after that.
It may seem strange to include this material in a tutorial on JavaScript, but as Freeman pointed out in Free1972, every group has a power structure; the only question is whether it is formal and accountable or informal and unaccountable. Thirty-five years after the free software movement took on its modern, self-aware form, its successes and failures have shown that if a project doesn’t clearly state who has the right to do what, it will wind up being run by whoever argues loudest and longest. For a much deeper discussion of these issues, see Foge2005.
Licensing Software
If the law or a publication agreement prevents people from reading your work or using your software, you’re probably hurting your own career. You may need to do this in order to respect personal or commercial confidentiality, but the first and most important rule of inclusivity is to be open by default.
That is easier said than done, not least because the law hasn’t kept up with everyday practice. Mori2012 and this blog post are good starting points from a scientist’s point of view, while Lind2008 is a deeper dive for those who want details. In brief, creative works are automatically eligible for intellectual property (and thus copyright) protection. This means that every creative work has some sort of license: the only question is whether authors and users know what it is.
Every project should therefore include an explicit license. This license should be chosen early: if you don’t set it up right at the start, then each collaborator will hold copyright on their work and will need to be asked for approval when a license is chosen. By convention, the license is usually put in a file called LICENSE or LICENSE.txt in the project’s root directory. This file should clearly state the license(s) under which the content is being made available; the plural is used because code, data, and text may be covered by different licenses.
Don’t write your own license, even if you are a lawyer: legalese is a highly technical language, and words don’t mean what you think they do.
To make license selection as easy as possible, GitHub allows you to select one of the most common licenses when creating a repository. The Open Source Initiative maintains a list of licenses, and choosealicense.com will help you find a license that suits your needs. Some of the things you will need to think about are:
1. Do you want to license the code at all?
2. Is the content you are licensing source code?
3. Do you require people distributing derivative works to also distribute their code?
4. Do you want to address patent rights?
5. Is your license compatible with the licenses of the software you depend on? For example, as we will discuss below, you can use MIT-licensed code in a GPL-licensed project but not vice versa.
The two most popular licenses for software are the MIT license and the GNU Public License (GPL). The MIT license (and its close sibling the BSD license) say that people can do whatever they want to with the software as long as they cite the original source, and that the authors accept no responsibility if things go wrong. The GPL gives people similar rights, but requires them to share their own work on the same terms:
You may copy, distribute and modify the software as long as you track changes/dates in source files. Any modifications to or software including (via compiler) GPL-licensed code must also be made available under the GPL along with build & install instructions.
We recommend the MIT license: it places the fewest restrictions on future action, it can be made stricter later on, and the last thirty years shows that it’s good enough to keep work open.
Licensing Data and Documentation
The MIT license and the GPL apply to software. When it comes to data and reports, the most widely used family of licenses are those produced by Creative Commons, which have been written and checked by lawyers and are well understood by the community.
The most liberal license is referred to as CC-0, where the “0” stands for “zero restrictions”. CC-0 puts work in the public domain, i.e., allows anyone who wants to use it to do so however they want with no restrictions. This is usually the best choice for data, since it simplifies aggregate analysis. For example, if you choose a license for data that requires people to cite their source, then anyone who uses that data in an analysis must cite you; so must anyone who cites their results, and so on, which quickly becomes unwieldy.
The next most common license is the Creative Commons - Attribution license, usually referred to as CC-BY. This allows people to do whatever they want to with the work as long as they cite the original source. This is the best license to use for manuscripts, since you want people to share them widely but also want to get credit for your work.
Other Creative Commons licenses incorporate various restrictions on specific use cases:
• ND (no derivative works) prevents people from creating modified versions of your work. Unfortunately, this also inhibits translation and reformatting.
• NC (no commercial use) does not mean that people cannot charge money for something that includes your work, though some publishers still try to imply that in order to scare people away from open licensing. Instead, the NC clause means that people cannot charge for something that uses your work without your explicit permission, which you can give under whatever terms you want.
• Finally, SA (share-alike) requires people to share work that incorporates yours on the same terms that you used. Again, this is fine in principle, but in practice makes aggregation a headache.
Code of Conduct
You don’t expect to have a fire, but every large building or event should have a fire safety plan. Similarly, having a Code of Conduct like s:conduct for your project reduces the uncertainty that participants face about what is acceptable and unacceptable behavior. You might think this is obvious, but long experience shows that articulating it clearly and concisely reduces problems caused by having different expectations, particularly when people from very different cultural backgrounds are trying to collaborate. An explicit Code of Conduct is particularly helpful for newcomers, so having one can help your project grow and encourage people to give you feedback.
Having a Code of Conduct is particularly important for people from marginalized or under-represented groups, who have probably experienced harassment or unwelcoming behavior before. By adopting one, you signal that your project is trying to be a better place than YouTube, Twitter, and other online cesspools. Some people may push back claiming that it’s unnecessary, or that it infringes freedom of speech, but in our experience, what they often mean is that thinking about how they might have benefited from past inequity makes them feel uncomfortable, or that they like to argue for the sake of arguing. If having a Code of Conduct leads to them going elsewhere, that will probably make your project run more smoothly.
Just as you shouldn’t write your own license for a project, you probably shouldn’t write your own Code of Conduct. We recommend using the Contributor Covenant for development projects and the model code of conduct from the Geek Feminism Wiki for in-person events. Both have been thought through carefully and revised in the light of experience, and both are now used widely enough that many potential participants in your project will not need to have them explained.
Rules are meaningless if they aren’t enforced. If you adopt a Code of Conduct, it is therefore important to be clear about how to report issues and who will handle them. Auro2018 is a short, practical guide to handling incidents; like the Contributor Covenant and the model code of conduct, it’s better to start with something that other people have thought through and refined than to try to create something from scratch. |
As we shall see in this section, a partitioning of our multiplier into slices of four bits instead of three leads to a decomposition
where is an encoding of the slice. While the number of terms of this sum is only 1/4 (rather than 1/3) of the width of , the range of encodings is now , and hence the value of each term is no longer guaranteed to be a power of 2 in absolute value. Consequently, a multiplier based on this radix-8 scheme generates fewer partial products than a radix-4 multiplier, but the computation of each partial product is more complex. In particular, a partial product corresponding to an encoding requires the computation of , and therefore a full addition.
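The displayed formulas in this passage did not survive extraction. Purely as background (an assumed form of the standard radix-8 Booth recoding, not the author's own notation or index conventions), with $y_{-1} = 0$ the recoded digits of a multiplier $y$ can be written

$\eta_i \;=\; y_{3i-1} + y_{3i} + 2\,y_{3i+1} - 4\,y_{3i+2}, \qquad \eta_i \in \{-4,\dots,4\},$

so that the product decomposes as $x \cdot y = \sum_i 8^{\,i}\,\eta_i\,x$, each partial product being $\eta_i x$ shifted left by $3i$ bits; a digit of $\pm 3$ is what forces the extra addition mentioned above.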
The choice between radix-8 and radix-4 multiplication is generally decided by the timing details of a hardware design. In a typical implementation, the partial products are computed in one clock and the compression tree is executed in the next. If there is sufficient time during the first clock to perform the addition required for radix-8 encoding (which is more likely to be the case, for example, for a low-precision operation), then this scheme is feasible. Since most of the silicon area allocated to a multiplier is associated with the compression tree, the resulting reduction in the number of partial products may represent a significant gain in efficiency.
For the purpose of this analysis, which is otherwise quite similar to that of Section 7.1, we shall assume that and are bit vectors of widths and , respectively. The multiplier is now partitioned into 3-bit slices, , , and encoded as follows.
Definition 7.4.1 (eta) For and ,
The proof of the following identity is essentially the same as that of Lemma 7.1.1.
(sum-eta-lemma) Let , , and let be a bit vector of width . Then
PROOF: We shall prove, by induction, that for ,
where . The claim is trivial for . Assuming that it holds for some , we have
which completes the induction. In particular, substituting for , we have
The partial products are computed with a 9-to-1 multiplexer.
Definition 7.4.2 (bmux8) Let , , let be a bit vector of width , and let satisfy . Then
Definition 7.4.3 (pp8) Let , , and , . Let be a bit vector of width and let satisfy for . Then for , the radix-8 partial product with respect to and for the sequence is
where
and .
Once again, our theorem is formulated as generally as possible, although in this case, we shall use it only once, in order to establish Corollary 7.4.2.
(booth8-thm) Let , , and , . Let be a bit vector of width and let satisfy for . Then
where and
PROOF: Let . After a trivial reformulation of Definition 7.4.2, we have
Thus, the left-hand side of the conclusion of the theorem is
We first consider the constant terms of this sum:
Next, we observe that the final term of the sum may be rewritten as
Thus, we have
(booth8-corollary) Let , , and , . Let and be bit vectors of widths and , respectively. Then
where .
PROOF: Since ,
i.e., in Theorem 7.2. The result follows from the theorem and Lemma 7.4.1
# How do I find sin y =-sqrt 2/2? I know that you use the given to find the angle but a 45-45-90 has sides of 1-1-sqrt2 and the 30-60-90 has 1-sqrt3-2 do I need to simplify the sqrt2/2? Thank you :D
Question
How do I find sin y =-sqrt 2/2? I know that you use the given to find the angle but a 45-45-90 has sides of 1-1-sqrt2 and the 30-60-90 has 1-sqrt3-2
do I need to simplify the sqrt2/2?
Thank you :D |
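For reference (not part of the original post), one standard way to finish: from the 45-45-90 triangle with sides $1$-$1$-$\sqrt{2}$, $\sin 45^\circ = \frac{1}{\sqrt{2}} = \frac{\sqrt{2}}{2}$, so no further simplification of $\frac{\sqrt{2}}{2}$ is needed and the reference angle is $45^\circ$. Since the sine is negative in quadrants III and IV, the solutions on $[0^\circ, 360^\circ)$ are $y = 225^\circ$ and $y = 315^\circ$ (equivalently $y = -45^\circ + 360^\circ k$ or $y = 225^\circ + 360^\circ k$).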
MathSciNet bibliographic data MR2196058 57R17 (53D35 53D45 57R95) Hind, R. Symplectic hypersurfaces in ${\Bbb C}{\rm P}^3$. Proc. Amer. Math. Soc. 134 (2006), no. 4, 1205–1211 (electronic). Article
What should I read/watch for information about the 1930s United States? [closed]
I'm currently writing a Trail of Cthulhu campaign, in which Investigators travel across the United States in the 1930s. To get inspiration for the scenarios, I read books and watch movies about particular locations.
For this campaign, I'm particularly interested in the following places: Savannah, New Orleans, Las Vegas, The Grand Canyon, Las Vegas/The Hoover Dam and San Francisco. I've got some source material on the first two, but I need stuff for the final four, especially San Francisco.
Alternatively, if you have a good alternative location, tell me it. For example, if Memphis is particularly interesting in the 1930s, give me some source material on that.
Most of all, I want material with interesting things I can use in scenarios. For example, I'm currently reading a book about jazz in New Orleans ("Hear Me Talking To Ya". Similarly, there's an excellent documentary series, Jazz, by Ken Burns. That's a scenario right there: I can write it around jazz clubs and musicians.
So far, I've read Flannery O'Connor (for Savannah), William Faulkner (for Mississippi), the jazz book mentioned above and a bit of John Steinbeck. What else can I read and watch?
closed as primarily opinion-based by SevenSidedDie♦, LitheOhm, Wibbs, Oblivious Sage, doppelgreener Jan 3 '14 at 11:08
Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question.
I'm conflicted by this question. Can you maybe rephrase it so its asking for more game specific stuff? And then make it CW? This is fairly far from Q&A and more opinion, so at the least it needs to be CW. – anon186 Oct 11 '10 at 22:33
What I want isn't game-specific: it's about the setting. I tend to think it's Q&A - I would accept an answer that gave me good sources of information on a couple of locations - but I'm new here and I could be wrong. – Graham Oct 12 '10 at 5:02
Given the focus on setting this is a list question and does belong as a cw. – anon186 Oct 12 '10 at 13:52
I've started a meta discussion about this question here. – C. Ross Oct 12 '10 at 14:03
7 Answers
I have played a Pulp campaign set in San Francisco in 1932 (plus assorted travel to Peru etc.) - even if it wasn't CoC based, I am familiar with CoC and used some background stuff from it, too - especially the New Orleans guidebook.
If you want to include some Voodoo stuff I suggest "Tell My Horse: Voodoo and Life in Haiti and Jamaica" by Zora Neale Hurston (written in the late '30s, so perfect for style too). And Haiti could be a memorable place to visit for your players.
You can try to find historical (or at least close) maps in the Perry-Castaneda collection Another nice touch is trying to locate old collections of National Geographics online. I bought the whole 1930 year on eBay, for example, for period stuff.
Just discovered a great Historical Atlas of USA
Why not aim at Carnivàle? As a bonus, there's a subtext you may be interested to weave into your play :)
-
That's interesting: I didn't know that was a 1930s setting. Thank you very much. – Graham Oct 12 '10 at 13:47
Glad I read before answering, that's what I was coming to bat with. – CatLord Jan 2 '14 at 2:59
I've used this nice little online resource over at The Dirty 30s! for my Hollow Earth Expedition game. Has a basic timeline, covers popular slang and fashion, gangsters, nazis, and commies, everything you would want. :)
Thank you, that's a good resource. I'm really looking for deep background on locations, so the Big Apple page is closest. – Graham Oct 12 '10 at 5:09
Regarding Las Vegas, head over to the Wikipedia page for its history. I went there because I immediately recognized, based on reading your setting, that Vegas wouldn't be at that time what we think of now -- most of the big casino development happened after the Cuba embargo, as Havana had (at that time) been what we currently think of Vegas being today.
Some interesting notes:
• Depending on when in the 30s you're talking, Hoover Dam would have been called Boulder Dam. It was not completed until 1935.
• During the 30s, Las Vegas had an estimated population of only 25,000. It received its first traffic light in 1931, and issued its first casino license that year as well.
• Most of the streets would have been unpaved, again depending on when in the 30s you're talking.
• A great potential side plot would be the smuggling between Boulder City and Las Vegas, as the Boulder Dam workers were not supposed to go to LV. (Indeed, Boulder City was erected as a federally-controlled city because the workers were initially living in LV.)
For 1930 and Cthulhu related stuff, I can recommend: Lovecraft is Missing.
For 1920's material, I can recommend Lackadaisy Cats for its fantastic look at prohibition (Caution, TVtropes link). The infrastructure of prohibition is just fantastic for Cthulhu, as cults have similiar outward objectives. Considering that prohibition was repealed in 1933, cults may have moved into speakeasys. Either way, a valuable reference.
"Oh brother where art thou" is also a great reference and anything listed on The Great Depression (TvTropes again) page is well worthwhile. Depending on when in the 1930's you're talking, it will be littered with more or less rotting debris from the roaring twenties. For the right feel, Grim Fandango nails it spot on in the underwater chapter.
Thanks. "Oh Brother Where Art Thou" is a good reference and I like the Steinbeck suggestions on that TV Tropes page. I may have to steel myself to read Of Mice And Men. – Graham Oct 12 '10 at 5:16
I'm so terribly sorry. But yeah, you really should read it. Lakadasiy cats is a great way to get the tropes into your imagination, actually: these are the memories of what they had and lost. – Brian Ballsun-Stanton Oct 12 '10 at 6:24
+1 for Lackadaisy Cats. – CatLord Jan 2 '14 at 2:58
Not directly associated with the places you’re interested in, but Steinbeck’s Grapes of Wrath is required reading for most American high school students for a number of reasons, such as
• It’s widely regarded as the description of the plight of poor farmers during the Great Depression. For a lot of farmers, the “Roaring 20’s” weren’t all that great and the Great Depression came early. Huge numbers of desperate people went west, though without much hope, towards California where there might be something. Grapes of Wrath captures this sort of hopeless desperation very well.
• It’s widely regarded as phenomenal writing.
• Basically none of those students would ever read such a massive and often-boring tome if they weren’t required to do so. :)
In short, I don’t really recommend reading the Grapes of Wrath as preparation; it’s a long book. But if you’ve already read it, refreshing your memory of it (possibly appreciating what it does have rather than just hating its size, if like many that’s what you did the first time) might be useful. Even just a cliffnotes version and perhaps some quotes might be useful for understanding what a lot of people were going through at the time.
In addition to all the other fine answers above, I might go as far as the movies Road to Perdition and Public Enemies or the show Boardwalk Empire if you need underworldly sources.
Where can I test rotation and translation matrices online to see how they affect an object?
Recommended Posts
I have a character that is standing on a planet and when I press the left or right arrow keys I want the player to rotate around the planet while keeping his feet pointing toward the center of the planet. I also need to make him jump while moving.
I thought perhaps the best way to do this is to use matrices for the rotation and the translation. Does anyone know of a good website to go to that allows you to see how your matrix affects an object?
If not, can anyone help me figure out how to make this work?
Thanks!
Zach
Here is something in 2D that has lots of examples on transformations. The information here will translate very well to 3D. The biggest difference is 3D rotations are more complicated than 2d but hopefully this will help to wrap your mind around transformations.
http://www.html5rocks.com/en/tutorials/webgl/webgl_transforms/
When trying to figure out matrices remember that they are applied in order. Basic rotations always rotate around the origin. To rotate around an arbitrary point, you first move the arbitrary point to the origin, rotate, then move the point back.
rotationAroundPoint(point, rotation) = translate(point) * rotation * translate(-point) |
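Here is a minimal 2D sketch of that recipe (not from the original thread; the names `Vec2` and `rotateAround` are made up for illustration). The same pattern extends to 3D; to keep the character's feet pointing at the planet, also rebuild its up vector as the normalized direction from the planet's center to the character each frame.

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Rotate point p around an arbitrary center by `angle` radians:
// translate the center to the origin, rotate, then translate back.
Vec2 rotateAround(const Vec2& p, const Vec2& center, float angle) {
    const float c = std::cos(angle);
    const float s = std::sin(angle);
    const float dx = p.x - center.x;      // translate(-center)
    const float dy = p.y - center.y;
    Vec2 out;
    out.x = center.x + dx * c - dy * s;   // rotate, then translate(center)
    out.y = center.y + dx * s + dy * c;
    return out;
}

// Example per-frame update while an arrow key is held:
//   playerPos = rotateAround(playerPos, planetCenter, movingLeft ? +0.02f : -0.02f);
// Jumping can be handled separately by offsetting the player along the
// (playerPos - planetCenter) direction before/after the rotation.
```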
## Introduction
Plastic is a crucial material in many sectors, including construction, packaging, transportation, electronics, textiles, and others1,2. The last half century has witnessed the rapidly increasing demand and production of plastics1, resulting in considerable plastic waste due to the low plastic recycling rate. From 1950 to 2015, only 9% of the cumulative plastic waste generation (6300 million metric tons (Mt)) was recycled, compared to over 60% discarded (accumulating in landfills or in the natural environment)1. The landfilled or disposed plastic wastes and their fragments, i.e., microplastics and nanoplastics, have caused increasing environmental concerns3,4,5,6. Increasing plastic recycling is one essential strategy to reduce plastic waste disposal7. There are two common types of plastic recycling, mechanical (e.g., magnetic density separation) and chemical recycling (e.g., gasification)7. Recently, another type of recycling method, solvent-based recycling (or referred as physical recycling), is also attracting attention8. The challenges of plastic mechanical recycling include thermal-mechanical degradation (e.g., caused by heating and mechanical shearing of polymer)7, plastic degradation (e.g., caused by photo-oxidation process during lifetime), incompatibility among different polymers when recycling blended plastics9, and contaminations (e.g., coating, ink, additives, metal residues or cross-contaminations among different plastic streams)9,10. Some waste plastics are hard to be mechanically recycled due to low bulk density (e.g., films), lightweight (e.g., polystyrene (PS)), low economic value (e.g., PS), and carbon-black pigments that absorb infrared light and confound the sorting machine9,11. Hence, relying on the traditional mechanical recycling method alone is insufficient to address the increasing volume and variety of plastic waste. Compared to mechanical recycling, thermochemical methods, as one type of chemical recycling, have advantages in processing plastic wastes that are difficult to be depolymerized, or mechanically recycled due to economic or technical barriers7,12. Thermochemical processes include pyrolysis and gasification, which have potentials to treat waste plastics with high energy, carbon, and hydrogen content, and low moisture content13. Thermochemical processes can produce a variety of products, and hydrogen is one product with a mature and growing market14. Hydrogen is an important industrial gas widely used in the oil refining and chemical industries, it can also be used as a clean energy source for transportation15. U.S. Department of Energy (DOE) estimated the U.S. hydrogen demand as high as 22–41 Mt per year by 2050, given the enormous need for clean energy16. Currently, 96% of hydrogen production uses fossil fuel reforming (e.g., petroleum, natural gas, and coal)15. Converting MPW to hydrogen has the potential to reduce fossil fuel demand for hydrogen production and address worldwide challenges of rapidly growing plastic wastes17. For example, the U.S. DOE Hydrogen Program Plan highlighted “diverse domestic resources” including waste plastics as an important source of hydrogen production16.
Since most plastics are made from fossil fuels, it is necessary to mitigate the fossil-based carbon emissions during the thermochemical conversion of MPW to hydrogen1,15. Carbon capture and storage (CCS) is an important technology to mitigate climate change by capturing and geologically storing CO2 (ref. 18). Coupling hydrogen production with CCS offers a means to produce low-carbon hydrogen1,15. For the large-scale development and implementation of plastic recycling technologies, it is critical to understand the economic feasibility and environmental performance of plastic waste to hydrogen pathway with/without CCS and policy incentives, as well as to identify the key drivers and future improvement opportunities.
Techno-economic analysis (TEA) is one of the most widely used tools to assess the economic and technical feasibility of emerging technologies19,20,21,22,23,24; Life Cycle Assessment (LCA) is a standardized tool to quantify life-cycle environmental impacts25,26,27,28,29,30,31. Several studies have used TEA to evaluate the economic feasibility or LCA to assess the environmental implications of plastic wastes to energy products (see Supplementary Note 1 for literature review). However, few studies have explored the economic and environmental implications of MPW to hydrogen at a large scale with CCS, or investigated the drivers of the economic and environmental performance of MPW compared to single-stream recycled plastic.
To fill the knowledge gap, we conducted a TEA and LCA to evaluate the economic and environmental performance of hydrogen production from MPW and single-stream recycled plastic in the U.S. and identify cost reduction opportunities. A mechanistic process simulation model (see Fig. 1 for system boundary and plant process flow diagram, and see Methods for details) was developed in Aspen Plus32 to provide rigorous engineering estimation of mass and energy balance data used in TEA and LCA. The minimum hydrogen selling price (MHSP) was selected to assess the economic feasibility of the hydrogen plant33. For life-cycle environmental impact assessment (LCIA), TRACI 2.1 by U.S. Environmental Protection Agency (EPA) and Global Warming Potential (GWP) factors (100-year time horizon) by the Intergovernmental Panel on Climate Change (IPCC) 2021 (in the sixth assessment) are used34,35. Different scenarios were designed to examine the impacts of varied feedstock compositions, plant capacities, CCS adoption, and policy incentives. A sensitivity analysis was conducted to identify key drivers of production costs. Finally, an improvement analysis outlined the roadmap for reducing production costs by improving key technical and economic parameters. This study contributes to the fundamental understanding of the economic and environmental performance of MPW to hydrogen pathway, which will inform the waste management industry with economically and environmentally preferable system design and shed light on opportunities to reduce cost and environmental burden.
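As a rough sketch of the standard TEA convention behind this metric (the paper's exact cash-flow assumptions are given in its Methods and reference 33), the MHSP is the hydrogen price $p$ that drives the project's net present value to zero over the plant lifetime $T$ at discount rate $r$:

$$\mathrm{NPV}(p) \;=\; \sum_{t=0}^{T} \frac{p\,Q_t - C_t}{(1+r)^{t}} \;=\; 0,$$

where $Q_t$ is the hydrogen sold in year $t$ and $C_t$ collects capital and operating costs (and, in the CCS scenarios, CCS costs net of any carbon credits).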
## Results
In this study, the scenario analysis was used to evaluate the impacts of feedstock types, plant capacities, CCS adoption, and carbon credits, as shown in Supplementary Table 1. Scenario 1 depicts the baseline cases without CCS; Scenario 2 describes the cases with CCS but no carbon credit available; Scenario 3 considers CCS and carbon credits for capturing and storing CO2. In each scenario, five different feedstock cases are included, namely one MPW case and four single-stream feedstock cases (polyethylene (PE), polyethylene terephthalate (PET), polypropylene (PP), and PS) (see Methods). Many studies have explored the thermochemical conversion of single-stream or mixed plastics, but few studies have compared the economic and environmental performance of hydrogen derived from gasifying the single-stream plastics and MPW13,17,36,37,38,39,40,41,42,43. For each feedstock case, the varied steam/feedstock ratios were studied to reach the optimal MHSP. Five plant capacities (100–2000 oven-dry metric ton (ODMT) per day of plastic fed in) are compared to explore the impacts of capacities on the MHSP. The capacities were selected based on the current estimation of plastic wastes landfilled in the U.S. The quantity of state-level landfilled plastic waste in 2019 exceeds 250,000 t year−1 (762 t day−1) in 35 states and exceeds 1000,000 t year−1 (3049 t day−1) in 12 states44.
### Economic competitiveness of plastic waste derived hydrogen
Figure 3 shows the MHSP of hydrogen plants at 2000 ODMT per day of plastic waste in three scenarios (see Supplementary Fig. 8 for the MHSP of varied capacities). The detailed results of capital investment and operating cost are available in Supplementary Notes 2 and 3, Supplementary Figs. 911. In Fig. 3, the feedstock cost range was collected from the literature (see Supplementary Note 4). Waste plastic-derived hydrogen is economically competitive when the MHSP is within the range of the current market price of hydrogen.
Without CCS, only the MPW case shows competitive MHSP (US$1.33–$2.00 kg−1 H2 for feedstock cost $0–$151 ODMT−1), compared with current fossil-based hydrogen, as shown in Fig. 3a. The economic competitiveness of other cases depends on the feedstock costs (except for PET and PS whose MHSP is always higher than the fossil-based H2). For example, PE needs a feedstock cost under US$236 ODMT−1 to be economically competitive; PP needs a feedstock cost under US$238 ODMT−1. These thresholds are towards the lower bounds of feedstock costs of PE, and PP, indicating the limited possibility of utilizing recycled single plastic streams for hydrogen production in most cases, given the high feedstock costs caused by expensive sorting and processing in MRF. Some strategies have been proposed in the literature to overcome the cost barriers, e.g., advocating for “design for recycling” to lower the recycling cost47, improving waste collection and separation infrastructures11, optimizing municipal waste collection systems prior to MRF48, and adopting cost-effective technologies (e.g., triboelectrostatic separation) in MRF49.
### Environmental impacts of plastic waste derived hydrogen
This study conducted the LCA to examine the environmental impacts of waste plastic-derived hydrogen. Figure 6 shows the normalized LCA results of ten impact categories in varied scenarios and feedstock cases under the optimal S/F ratios identified in Fig. 2. The LCA results of each impact category are normalized based on the highest value (on 1 kg H2 basis) of that impact across Scenario 1–3 (including 5 feedstock cases in Scenario 1 without CCS and 5 feedstock cases in Scenario 2 & 3 with CCS). The absolute values of LCA results are in Supplementary Tables 7 and 8 in Supplementary Data 1 (ref. 67).
In Fig. 6, MPW shows the lowest environmental impacts across all scenarios and impact categories (1–93% lower than the other four single-stream feedstocks) mainly due to the lower environmental burdens of feedstock collection, sorting (for single-stream cases), and transportation. Note that the burdens of producing plastic are assumed to be cut-off from the system boundary. Across most impact categories, feedstock collection, sorting, and transportation dominate the environmental impacts of hydrogen derived from single-stream plastic (27–94%), but they only contribute to 1–10% for MWP. The only exceptions are GWP and fossil fuel depletion that are dominated by energy, contributing to similar percentages of results for single-stream plastics and MPW (25–90%). MPW has 1–59% higher environmental burdens of chemicals and materials than that of PE, PP, and PS, given the additional steps in pretreatment and dechlorination. However, chemicals and materials overall only contribute to 1–32% of life cycle environmental impacts across all single-stream plastic feedstocks. Waste treatment has minor contributions to most impact categories except acidification and human health—carcinogenics, although MPW has 19–94% higher environmental burdens related to waste treatment than single-stream plastics. This is caused by the higher wastewater generation in pretreatment and dechlorination. Across single-stream plastics, PET shows the worst environmental performance, similar to TEA results for similar reasons – low hydrogen yields and high cost (environmental burdens) of sorting and processing plastic feedstock.
Adding CCS to the hydrogen plant increase all environmental impacts by 9–117% except reducing GWP by 42–67%, regardless of plastic feedstocks. The increased environmental impacts are attributed to the chemicals and energy consumption68,69, while the decreased GWP are contributed by CCS that removes carbon.
From the perspective of climate change, MPW-derived hydrogen without CCS has higher life cycle GWP (16.0–21.0 kg CO2e kg−1 H2, depending on S/F ratios, see Supplementary Table 9 for detailed values) than natural gas (9.0–12.3 kg CO2e kg−1 H2 (refs. 15,51,52,54,70)) but mostly lower than coal (20.0–26.0 kg CO2e kg−1 H2 (refs. 51,52,53,54)). CCS reduces the GWP of MPW-derived hydrogen to 5.1–6.2 kg CO2e kg−1 H2, which is much lower than fossil-based hydrogen without CCS. However, if CCS is implemented for fossil-based hydrogen in the future, MPW-derived hydrogen will have higher life-cycle GWP than natural gas-based hydrogen with CCS (1.0–4.1 kg CO2e kg−1 H2 (refs. 15,31,51)), and comparable with coal-based hydrogen with CCS (2.0–6.9 kg CO2e kg−1 H2 (refs. 51,52,53,54)) or biomass gasification hydrogen without CCS (0.3–19.2 kg CO2e kg−1 H2 (refs. 31,71,72,73)). MPW-derived hydrogen with CCS has lower life-cycle GWP than electrolysis hydrogen from global average grid electricity (25.5 kg CO2e kg−1 H2 (ref. 51)), although the GWP of MPW-derived hydrogen with CCS is higher than electrolysis hydrogen with clean electricity (0.9–6.9 kg CO2e kg−1 H2 (refs. 70,71)), or biomass gasification with CCS (−18.8 to −9.6 kg CO2e kg−1 H2 (refs. 31,71)). As most GHG emissions are attributed to energy consumption (Fig. 6b), future research should focus on improving energy efficiency and exploring alternative energy sources to reduce the life cycle GWP of MPW-derive hydrogen.
For other impact categories, this study compared Scenario 3-MPW with CCS with hydrogen made from natural gas using steam reforming and CCS (see Supplementary Fig. 12). The MPW with CCS is 2.4–80.3% lower than natural gas with CCS in acidification, fossil fuel depletion, ozone depletion, and smog formation. At the same time, hydrogen from natural gas with CCS is 26.8–53.6% lower than Scenario 3-MPW with CCS in carcinogenics, non-carcinogenics, ecotoxicity, eutrophication, and respiratory effects.
## Discussion
This study conducted a TEA and LCA to explore the economic feasibility and environmental performance of hydrogen production from gasifying MPW that commonly ends in landfill. The TEA and LCA were coupled with the process simulation model developed in Aspen Plus to determine the impacts of plant capacities, feedstock compositions, policy incentives, and process parameters on MHSP and life-cycle environmental impacts. It is economically feasible to produce US$1.67 kg−1 H2 from a 2000 ODMT per day hydrogen plant utilizing MPW without CCS, compared with the current fossil-based hydrogen price without CCS (US$0.91–$2.21 kg−1 H2). Incorporating CCS into the gasification plant increases most environmental impacts due to the additional chemical and energy consumption of CCS systems. The only exception is GWP given the carbon removal benefits of CCS. Adding CCS also increases the MHSP to US$2.60 kg−1 H2 (US$2.26–$2.94 kg−1 upon varied feedstock cost) for the same plant, and the economic feasibility of a CCS-coupled hydrogen plant depends on CCS cost and policy incentives. CCS is essential to ensure that MPW-derived hydrogen has lower life cycle GHG emissions than current fossil-based hydrogen, and this advantage may not hold if CCS is implemented for natural gas-based hydrogen in the future. Future research is needed to reduce energy-related carbon emissions to lower the life cycle GWP of MPW-derived hydrogen. The results show the economic and environmental advantages of using MPW over single-stream plastics (i.e., PE, PET, PP, and PS) in producing hydrogen via gasification, given the high feedstock cost and environmental burdens of sorting and processing single-stream plastics in MRFs at current stage and low hydrogen yield of some plastics (e.g., PET). Given the current high portion of MPW landfilled or discarded, more efforts are needed to prioritize MPW valorization in an environmentally benign and cost-effective way. Among single-stream plastics, PET is the least favorable in terms of both environmental and economic performance. This implies the necessity of exploring other high-value and feasible recycling methods for sorted single-stream plastics (e.g., replacing virgin materials)7. Increasing the plant capacity can reduce the MHSP across all feedstock cases. From an operational aspect, the steam/feed ratio directly affects MHSP, and the optimal steam/feedstock ratio varies by feedstock (e.g., 2.0 for MPW and 3.5 for PS) and generally increases as the feedstock cost grows. The improvement analysis exhibits possible pathways to decrease the MHSP of MWP-derived hydrogen with CCS from US$2.60 to US$1.46 kg−1 H2. If carbon credits are close to the CCS costs and MPW feedstock cost is low, the MHSP of utilizing MPW can reach US$1.06 kg−1 H2. To achieve the ambitious goal of US$1.0 per kg clean hydrogen in one decade, the roadmap highlights the need for simultaneous improvement of process economics and policy supports.
## Methods
### Feedstock compositions
Common plastic wastes include PET, high-density polyethylene (HDPE), PVC, low-density polyethylene (LDPE), PP, PS, and other plastic waste39,74. Supplementary Table 2 summarizes the composition data from the proximate and ultimate analysis of the plastics used in this study. Five feedstock cases were developed to investigate the impacts of different plastic waste feeds, particularly to compare the economic and environmental performance of single-stream plastic feed and MPW. Four cases use single-stream plastic waste, including PE (assuming 50% LDPE and 50% HDPE), PET, PP, and PS provided by sorting or recycling facilities. Pure PVC feed was not selected due to extremely high chlorine content causing safety and corrosion concerns75. One case was designed for MPW that was typically rejected from the mechanical recycling at MRF7. These MPW are commonly landfilled that need “tipping fee” or incinerated to generate power7,76. In this study, MPW contains 19.5% HDPE, 27.9% LDPE, 27.5% PP, 7.6% PS, 14.6% PET, and 2.9% PVC based on the data of landfilled plastic waste that is neither recycled nor combusted in the U.S. in the year 2018 by U.S. EPA76.
### Process simulation model of the hydrogen plant
A process simulation model was established in Aspen Plus to provide mass and energy data for TEA and LCA32. As shown in Fig. 1, the hydrogen plant comprises five main areas: feedstock handling and pretreatment, gasification, hydrogen purification, CHP plant, and utilities. The detailed process diagrams of Aspen Plus in each area are shown in Supplementary Figs. 1317. An example of summarized flow information is available in Supplementary Fig. 18 and Supplementary Table 10 in Supplementary Data 1 (ref. 67).
In this study, five different feedstocks (i.e., PE, PET, PP, PS, and MPW as shown in Supplementary Table 1) were fed into the simulation model to study the impacts of varied feedstock compositions. The plastic waste is assumed to arrive at the hydrogen plant in the form of bales7,77. The bales are then unloaded and transferred to the warehouse for storage. The first unit operation is the size reduction of the plastic waste in the shredder to around 152 mm (6 inches)78. After the initial grinding, the feedstocks are washed in the rotary drum washer to remove the entrained ash and other contaminates7,79,80. Different from the pure feedstocks (i.e., PE, PET, PP, PS) that have been sorted and processed, MPW will need another two washing steps in friction washers as a common practice7. Then the feedstocks are dried in the rotary drum dryer at 105 °C to reach a moisture content lower than 10% (dry basis)81,82. Followed by drying, feedstocks are further grounded in the secondary grinding to around 1–2 mm7,81, and are ready for gasification.
Before gasification, dechlorination is essential for removing toxic chlorine from PVC to address safety and corrosion concerns. Based on the study by López et al., treating the plastic mixtures containing PVC at 300 °C in a nitrogen atmosphere for 30 min can efficiently remove 99.2% of the chlorine in PVC75. In this study, the dechlorination process is conducted under the same conditions before gasification75,83. The weight loss of PE, PP, PS, and PET in the dechlorination process is only 0.7%, 0.3%, 3.3%, and 0.8%, respectively75. Two-stage gasification was modeled in this study, including gasification followed by tar cracking, which is essential for large-scale hydrogen plant operation84. This study uses a bubbling fluidized bed reactor for gasification and a fixed bed reactor for tar cracking based on the literature84,85. For gasification, the operating condition was selected to be 850 °C and 3.5 MPa with steam as the gasifying agent for H2-rich production13,86,87. In Aspen Plus, the gasification was modeled with two reactors in sequence using RStoic and RGibbs, which is consistent with previous process simulations for gasification40,88,89,90,91,92,93. The RStoic reactor decomposes the inlet stream based on the feedstock compositions. The decomposed stream, along with steam, is then sent to the RGibbs reactor, which calculates the syngas composition using the Gibbs free energy minimization method88,89. In the RGibbs reactor, 12 reactions are considered based on the literature (see Supplementary Table 11 for detailed reactions)94,95,96.
This study uses steam gasification, which is commonly used for generating H2-rich syngas, as the presence of steam can increase hydrogen yield, reduce the tar concentration, and promote water gas shift reactions85,97,98,99,100. Previous studies show the importance of the S/F ratio in gasification design and optimization40. The S/F ratio commonly varies from 1.0 to 4.040,85. A higher S/F ratio may lead to a higher hydrogen yield but also causes higher energy costs. To choose a suitable S/F ratio, this study investigated S/F ratios from 1.0 to 4.0 in each feedstock case and selected the S/F ratio with the lowest MHSP. The bed material is natural olivine with a diameter of 100–300 μm37,84. Natural olivine is a highly attrition-resistant catalyst that reduces tar formation37,84. For tar cracking, the fixed bed reactor operates at 800 °C and 3.5 MPa with additives that are a 1:1.5 mixture of calcined dolomite and activated carbon84. These additives can efficiently decompose the NH3 formed in gasification and reduce the concentration of HCl and H2S in syngas84. After the tar cracking, a cyclone is deployed to separate the solid phase (e.g., fly ash)101.
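As an illustration of the S/F selection logic described above (not code from the study), the sketch below sweeps candidate ratios and keeps the one with the lowest MHSP; `evaluate_mhsp` is a hypothetical placeholder for rerunning the process simulation and TEA at each ratio.

```python
# Illustrative sketch of the steam-to-feed (S/F) ratio selection described above.
# evaluate_mhsp() is a hypothetical callback that would rerun the process simulation
# and TEA for a given feedstock and S/F ratio; it is not part of any real package.

def select_sf_ratio(feedstock, evaluate_mhsp, ratios=None):
    """Return the S/F ratio giving the lowest MHSP for one feedstock case."""
    if ratios is None:
        ratios = [1.0 + 0.5 * i for i in range(7)]  # 1.0, 1.5, ..., 4.0
    mhsp_by_ratio = {r: evaluate_mhsp(feedstock, r) for r in ratios}
    best = min(mhsp_by_ratio, key=mhsp_by_ratio.get)
    return best, mhsp_by_ratio
```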
After the hot syngas is generated, the first step is to remove the impurities. A moving-bed granular filter with CaO is deployed to desulfurize and dechlorinate the hot syngas102,103. The remaining tar is then removed by a Venturi scrubber at about 35 °C and a wet-packed column for fine tar removal101,104. To integrate tar removal with the removal of other impurities, the Venturi scrubber washes with a 10% NaOH solution to remove the remaining HCN, HCl, and H2S105,106. To further eliminate NH3, an acid wash column with H2SO4 solution at pH 5 is adopted107. The purified gas primarily contains H2, H2O, CO, CO2, and CH4. To separate hydrogen, the syngas is compressed to 13.7 atm and fed to a PSA unit, which is assumed to achieve 84% hydrogen recovery at 99% purity106. All the off-gases are sent to the CHP plant for energy recovery106. For storage, the purified hydrogen is assumed to be compressed to 700 bar through two-stage compression108. 700 bar is a common pressure level for storage or for hydrogen stations that refuel fuel cell vehicles108,109.
As this study uses steam gasification, the steam load in the gasification area is high, and the hydrogen plant also consumes electricity in each area. Given the demand for electricity and heat, this study includes a CHP plant that recovers the energy in PSA off-gas and char to produce the electricity and heat needed by the whole plant. If combusting these intermediate streams does not supply sufficient heat, natural gas is combusted as a supplementary fuel. The boiler generates superheated steam at 62 atm and 454 °C with 80% boiler energy efficiency110. The superheated steam then goes through multi-stage turbines for power generation. In this study, the low-pressure steam at 13 atm and 268 °C from the first-stage turbine is extracted for feeding the gasifier and providing heat to the dechlorination reactor and tar cracking reactor.
Plant utilities include electricity, cooling water, process water, chilled water, plant air system, and the storage of materials and products110,111. All of these utilities are included in the process simulation, TEA, and LCA.
This study includes scenarios with and without CCS. CCS captures and stores the CO2 from the CHP plant flue gas. The CO2 concentration in the cooled flue gas is around 23 vol.%. Post-combustion CCS was chosen because of its suitability for capturing carbon from air-combusted flue gas, whose CO2 concentration (commonly lower than 25 vol.%112,113) is much lower than that in oxyfuel combustion CCS, which uses pure O2 (ref. 114). The capture efficiency is assumed to be 90% for post-combustion CCS115. See Supplementary Note 5 for detailed technical information and Supplementary Note 4 for cost data.
### Techno-economic analysis model
This study focuses on the hydrogen plant with a capacity of 100–2000 ODMT plastic waste per day. The mass and energy balance data from the Aspen Plus simulation were used to determine variable operating costs and capital costs. In the TEA, the original purchased costs, installation factors, equipment scaling factors, material and energy prices, and feedstock costs were collected from the literature and are discussed in Supplementary Note 6 for capital expenditures and Supplementary Note 4 for operating expenditures. The MHSP, a widely adopted indicator describing the production cost under a preset IRR, was selected to assess the economic feasibility of the hydrogen plant33. The MHSP was derived through discounted cash flow rate of return (DCFROR) analysis, a widely used economic analysis method in TEA23. In the DCFROR analysis established in Excel, the MHSP was derived by setting the IRR to 10% and the Net Present Value (NPV) to zero23. The year of analysis is 2019 based on the latest data availability. Supplementary Tables 12 and 13 list the key assumptions and parameters of the TEA based on literature data. The plant is assumed to be 40% equity-financed, with the remaining 60% taken on loan. The capital cost was assumed to be depreciated over 7 years by following the Modified Accelerated Cost Recovery System by the U.S. IRS116.
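For readers unfamiliar with DCFROR, the minimal sketch below shows only the underlying idea: the MHSP is the hydrogen selling price at which the project NPV is zero at the 10% IRR. It deliberately ignores financing, taxes, and depreciation, which the actual Excel-based analysis includes, and all numbers are placeholders rather than values from this study.

```python
# Minimal sketch of the DCFROR idea: the MHSP is the hydrogen price at which the
# net present value (NPV) of the project is zero at the target IRR (10% here).
# Financing, taxes, and depreciation are omitted; all inputs are placeholders.

def npv(price, capex, annual_opex, annual_kg_h2, years=30, rate=0.10):
    cash_flows = [-capex] + [price * annual_kg_h2 - annual_opex] * years
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def solve_mhsp(capex, annual_opex, annual_kg_h2, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection on the selling price until the NPV crosses zero."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, capex, annual_opex, annual_kg_h2) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example call with made-up inputs (US$, kg H2 per year):
# solve_mhsp(capex=4.0e8, annual_opex=6.0e7, annual_kg_h2=6.0e7)
```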
The total capital investment includes total installed equipment cost, other direct costs, indirect cost, and land and working capital. Total installed equipment cost is the sum of the installed equipment costs that were estimated by multiplying purchased costs with installation factors (see Supplementary Tables 14–18). The purchased costs and installation factors used in this study were collected from the literature as shown in Supplementary Note 6. The economy of scale was considered using the scaling factors (see Supplementary Note 6) to scale the purchased costs found in the literature to the capacities explored in this study. Plant cost indices by Chemical Engineering Magazine were used117 to adjust equipment purchased costs collected from the literature to the year of analysis 2019 in this study. The detailed method of determining equipment cost is documented in Supplementary Note 6.
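The capacity scaling and cost-index adjustment described above typically take the form sketched below; the scaling exponent and index values shown are placeholders, not the study's data.

```python
# Illustrative form of the equipment cost scaling and cost-index adjustment.
# The scaling exponent and index values are placeholders, not study data.

def scaled_purchased_cost(base_cost, base_capacity, new_capacity,
                          scaling_factor=0.6, index_base=540.0, index_2019=607.5):
    """Scale a literature purchased cost to a new capacity and to 2019 dollars."""
    size_adjusted = base_cost * (new_capacity / base_capacity) ** scaling_factor
    return size_adjusted * (index_2019 / index_base)
```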
The operating expenditures include the variable costs of feedstocks, raw materials, and waste stream charges, byproduct credits, fixed operating costs (including labor costs), and other operating costs. The prices of feedstocks, raw materials, waste stream charges, and energy were collected from the literature and documented in Supplementary Table 3. If a price was not reported for the year of analysis (2019), the Producer Price Index for chemical manufacturing was used to adjust the original price to 2019 (ref. 118). The details are available in Supplementary Note 4 and Supplementary Table 19.
### Life cycle assessment model
In this study, a cradle-to-gate LCA was conducted to quantify the environmental impacts of hydrogen converted from MPW. The life cycle inventory (LCI) data for the hydrogen plant were derived from the Aspen Plus simulation for different scenarios, including energy and material consumption (e.g., fuels, chemicals, water) and CHP plant emissions. AP-42 emission factors by U.S. EPA were used to estimate emissions from natural gas combustion (see Supplementary Table 20 for emission factors)119. The LCI data for upstream production of electricity and materials and for treatment of wastewater and solid waste (e.g., ash) were collected from the ecoinvent database (see Supplementary Table 21 for the unit processes used in this study)120. The functional unit is 1 kg of H2 produced, consistent with the TEA. The life cycle impact assessment (LCIA) uses the TRACI 2.1 method by U.S. EPA and the 100-year GWP characterization factors by IPCC AR6 (2021)34,35.
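As a schematic of the LCIA step only (not the study's inventory), each impact score is the sum of the life-cycle flows per kg H2 multiplied by their characterization factors; the flow amounts below are placeholders.

```python
# Minimal illustration of the LCIA step: life-cycle impacts are the inventory flows
# (per kg H2) multiplied by characterization factors (here, IPCC AR6 100-year GWP).
# The flow amounts are placeholders, not results from the study.

inventory = {"CO2, fossil": 2.1, "CH4, fossil": 0.004, "N2O": 0.0001}   # kg per kg H2
gwp100 = {"CO2, fossil": 1.0, "CH4, fossil": 29.8, "N2O": 273.0}        # kg CO2-eq per kg

gwp_per_kg_h2 = sum(amount * gwp100[flow] for flow, amount in inventory.items())
print(f"{gwp_per_kg_h2:.2f} kg CO2-eq per kg H2")
```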
# Instantaneous Frequency
Analytical approach to continuous Instantaneous Frequency and Frequency Modulation.
## 1. Instantaneous frequency
A modulated signal can be expressed as: $$\large x(t)=a(t)\cos(\phi(t))$$ where:
• The instantaneous amplitude or envelope is given by $\large a(t)$;
• The instantaneous phase is given by $\large \phi(t)$;
• The instantaneous angular frequency is given by $\large \omega(t)=\frac{d}{dt}\phi(t)$;
• The instantaneous ordinary frequency is given by $\large f(t)=\frac{1}{2\pi}\frac{d}{dt}\phi(t)$.
## 2. Examples
Modulation using instantaneous frequency
### 2.1. Linear frequency modulation
Given a modulation frequency, defined by:
$$\large f(t)=f_a+\frac{(f_b-f_a)t}{T}$$
where $f(t)$ is linearly interpolated from $f_a$ to $f_b$.
The modulated signal without using the instantaneous frequency is:
$$\large x(t)=\sin(2\pi f(t)t)$$
In this example the carrier frequency is zero ($\phi_c=0$).
If we consider:
$$\large \phi(t)=\int\omega(t)dt=2\pi\int f(t)dt$$
the signal modulated using the instantaneous frequency can be expressed as:
$$\large x_m(t)=\sin(\phi(t))=\sin\left(2\pi\left[f_at+\frac{(f_b-f_a)t^2}{2T}\right]\right)$$
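A short numerical sketch (assuming NumPy and illustrative parameter values) comparing the naive form with the instantaneous-phase form above:

```python
import numpy as np

T = 1.0                      # total duration in seconds (assumed)
fs = 8000                    # sampling rate in Hz (assumed)
t = np.arange(0.0, T, 1.0 / fs)
fa, fb = 100.0, 400.0        # start and end frequencies (assumed)

f_t = fa + (fb - fa) * t / T                 # linear modulation frequency
x_naive = np.sin(2 * np.pi * f_t * t)        # "without" instantaneous frequency:
                                             # its instantaneous frequency actually
                                             # ends near 2*fb - fa, not fb
phi = 2 * np.pi * (fa * t + (fb - fa) * t**2 / (2 * T))
x_m = np.sin(phi)                            # modulated via the instantaneous phase
```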
### 2.2. Exponential frequency modulation
$$\large f(t)=f_a+\frac{f_b-f_a}{1+e^{-k\left(\frac{t}{T}-\frac{1}{2}\right)}}$$
$f(t)$ is a modulation frequency that interpolates from $f_a$ to $f_b$ exponentially. It is inspired by the sigmoid function, where $k$ is the slope at the middle point.
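For an arbitrary modulation law such as this sigmoid, the phase can be obtained by numerically integrating $f(t)$; a sketch with assumed parameters:

```python
import numpy as np

T, fs = 1.0, 8000                 # duration and sampling rate (assumed)
t = np.arange(0.0, T, 1.0 / fs)
fa, fb, k = 100.0, 400.0, 10.0    # assumed endpoints and slope

f_t = fa + (fb - fa) / (1.0 + np.exp(-k * (t / T - 0.5)))
phi = 2 * np.pi * np.cumsum(f_t) / fs   # rectangle-rule integral of f(t)
x_m = np.sin(phi)                       # sigmoid frequency-modulated signal
```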
## How to make camera roll whilst strafing
14 replies to this topic
### #1littletray26 Members
Posted 15 October 2012 - 07:13 AM
Hey gamedev
I'm currently working on an FPS, and at the moment I'm trying to write everything to do with walking.
My question to you is, how do you go about making the camera tilt slightly whilst strafing left and right as seen in some games?
I've tried a few things and have had a few problems. I've had problems specifying when the camera should stop rolling, I came up with a sort of solution to that, but I now have the problem where if I strafe, say to the right while turning the camera to the left (yaw) then stop the camera will be permenently tilted.
This is my code so far. It's bad, I know but I'm a beginner
Code messed up here - see second post
That seems to do the job. The camera rolls slightly and smoothly whilst strafing. The only problem is that if I strafe and turn yaw at the same time the camera ends up stuck tilted
Edited by littletray26, 15 October 2012 - 11:50 PM.
### #2littletray26 Members
Posted 15 October 2012 - 07:17 AM
I don't know why the code has squished up like that, Heres a repost of the code:
float rollIndex = 0.01f;
float rotZ = 0;
float currentZRot = 0; //I use this to try and stop the roll where I want it

//later on in code
if strafing left:
  if (currentZRot < 0.07f)
    rotZ += rollIndex;
if strafing right:
  if (currentZRot > -0.07f)
    rotZ -= rollIndex;
else
  if (currentZRot < 0)
    rotZ += rollIndex;
  else if (currentZRot > 0)
    rotZ -= rollIndex;
//that makes the camera roll back to the right way up

//Later on in code
//at the end of the cameras update function
roll(rotZ);
currentZRot += rotZ;
rotZ = 0.00f;

//and here is my camera's yaw, pitch and roll functions
void Camera::pitch(float A)
{
  D3DXMATRIX T;
  D3DXMatrixRotationAxis(&T, &right, A);
  D3DXVec3TransformCoord(&lookDir, &lookDir, &T);
}

void Camera::yaw(float A)
{
  D3DXMATRIX T;
  D3DXMatrixRotationY(&T, A);
  D3DXVec3TransformCoord(&right, &right, &T);
  D3DXVec3TransformCoord(&lookDir, &lookDir, &T);
}

void Camera::roll(float A)
{
  D3DXMATRIX T;
  D3DXMatrixRotationAxis(&T, &lookDir, A);
  D3DXVec3TransformCoord(&right, &right, &T);
  D3DXVec3TransformCoord(&up, &up, &T);
}
Edited by littletray26, 15 October 2012 - 07:25 AM.
### #3littletray26 Members
Posted 15 October 2012 - 11:22 PM
Anybody?
Edited by littletray26, 18 October 2012 - 02:20 PM.
### #4littletray26 Members
Posted 18 October 2012 - 03:56 PM
Shameless self bump
### #5littletray26 Members
Posted 21 October 2012 - 03:12 AM
Sorry to do this again but I'm desperate for an answer. Bump again
### #6Burnt_Fyr Members
Posted 22 October 2012 - 12:27 PM
I'd love to help, as it seems you are very desperate, so...
{bump}.
I think people will need to see a bit more code to really make sense of what is going on, and perhaps a short video demonstrating the problem. What info have you gleaned from debugging the problem on your own?
Edited by Burnt_Fyr, 22 October 2012 - 12:28 PM.
### #7littletray26 Members
Posted 23 October 2012 - 12:05 AM
From what I can tell, it's a math problem. I'm pretty rubbish at math so I have no idea what's going on. I'll post a video and my entire camera class.
3DCamera.h
//Camera header
#include "Main.h"
class Camera
{
public:
D3DXMATRIX viewMatrix;
D3DXMATRIX projMatrix;
D3DXVECTOR3 pos;
D3DXVECTOR3 up;
D3DXVECTOR3 target;
D3DXVECTOR3 lookDir;
D3DXVECTOR3 right;
D3DXVECTOR3 acceleration;
D3DXVECTOR3 currentVel;
float maxWalkingVelocity;
void Initialize(D3DXVECTOR3 Position, D3DXVECTOR3 Up, D3DXVECTOR3 Target);
void Update(LPDIRECT3DDEVICE9& d3dDevice);
void Camera::pitch(float A);
void Camera::yaw(float A);
void Camera::roll(float A);
};
3DCamera.cpp
//Camera source file
#include "3DCamera.h"
extern bool ISINFOCUS;
float rotX = 0;
float rotY = 0;
float rotZ = 0;
float currentZRot = 0;
float rollIndex = 0.01f;
void Camera::Initialize(D3DXVECTOR3 Position, D3DXVECTOR3 Up, D3DXVECTOR3 Target)
{
maxWalkingVelocity = 0.3f;
D3DXMatrixPerspectiveFovLH(&projMatrix, D3DXToRadian(45), 800.0f / 600.0f, 1, 2000);
pos = Position;
up = Up;
target = Target;
lookDir = target - pos;
}
void Camera::Update(LPDIRECT3DDEVICE9& d3dDevice)
{
D3DXVec3Normalize(&lookDir, &lookDir);
D3DXVec3Cross(&right, &up, &lookDir);
D3DXVec3Normalize(&right, &right);
//Walk control
if(GetAsyncKeyState(0x57))
acceleration += lookDir * 0.02f;
if (GetAsyncKeyState(0x53))
acceleration += lookDir * -0.02f;
//Strafe control
if (GetAsyncKeyState(0x41))
{
acceleration += D3DXVECTOR3(right.x, 0, right.z) * -0.015f;
if (currentZRot < 0.07f)
rotZ += rollIndex;
}
else
if (GetAsyncKeyState(0x44))
{
acceleration += D3DXVECTOR3(right.x, 0, right.z) * 0.015f;
if (currentZRot > -0.07f)
rotZ -= rollIndex;
}
else
if (currentZRot < 0)
rotZ += rollIndex;
else if (currentZRot > 0)
rotZ -= rollIndex;
//look control
if (GetAsyncKeyState(VK_LEFT))
rotX -= 0.04f;
else
if (GetAsyncKeyState(VK_RIGHT))
rotX += 0.04f;
if (GetAsyncKeyState(VK_UP) && lookDir.y < 0.932f)
rotY -= 0.04f;
else
if (GetAsyncKeyState(VK_DOWN) && lookDir.y > -0.985f)
rotY += 0.04f;
static const float friction = 0.08f;
currentVel += acceleration;
currentVel += ((currentVel * friction * -1));
pos += currentVel;
acceleration = D3DXVECTOR3(0, 0, 0);
pitch(rotY);
yaw(rotX);
roll(rotZ);
D3DXMatrixLookAtLH(&viewMatrix, &pos, &(pos + lookDir), &up);
rotX = 0;
rotY = 0;
currentZRot += rotZ;
rotZ = 0.00f;
}
void Camera::pitch(float A)
{
D3DXMATRIX T;
D3DXMatrixRotationAxis(&T, &right, A);
D3DXVec3TransformCoord(&lookDir, &lookDir, &T);
}
void Camera::yaw(float A)
{
D3DXMATRIX T;
D3DXMatrixRotationY(&T, A);
D3DXVec3TransformCoord(&right, &right, &T);
D3DXVec3TransformCoord(&lookDir, &lookDir, &T);
}
void Camera::roll(float A)
{
D3DXMATRIX T;
D3DXMatrixRotationAxis(&T, &lookDir, A);
D3DXVec3TransformCoord(&right, &right, &T);
D3DXVec3TransformCoord(&up, &up, &T);
}
Then in my update loop I call
camera.Update(d3dDevice);
END EMBARRASSING CODE
As for the video
### #8Erik Rufelt Members
Posted 23 October 2012 - 02:49 AM
Try drawing all your camera vectors to the screen. That way you can see what changes when the bug appears.
### #9littletray26 Members
Posted 23 October 2012 - 03:12 AM
Try drawing all your camera vectors to the screen. That way you can see what changes when the bug appears.
I will do that, thanks. I would assume it's the right vector though as it's the only one Roll and Yaw functions have in common...?
### #10Burnt_Fyr Members
Posted 23 October 2012 - 06:48 AM
Does it happen on all 4 possible combinations of strafe and yaw? It could be the lack of indentation but i think the if/else chain in strafe control is broken... I'll check back when i have time to examine it better.
### #11littletray26 Members
Posted 23 October 2012 - 06:57 AM
Does it happen on all 4 possible combinations of strafe and yaw? It could be the lack of indentation but i think the if/else chain in strafe control is broken... I'll check back when i have time to examine it better.
It happens in all 4 combinations. I've found the camera will end up leaning to the side of the direction I was strafing in. Also I don't think the if/else chain is broken. Perhaps you can see a problem in it I can't? The indentation seems to have been lost from VS to GameDev.
Edited by littletray26, 23 October 2012 - 06:58 AM.
### #12Burnt_Fyr Members
Posted 24 October 2012 - 11:16 AM
On closer look the if/else chain is correct, at first glace it seemed as if might be skipping the "unlean" part of the code in some instances. What does currentzrot look like after the problem has occurred?
### #13littletray26 Members
Posted 24 October 2012 - 01:28 PM
After the problem has occurred currentZRot is 0.00000000
### #14Burnt_Fyr Members
Posted 25 October 2012 - 12:23 PM
Have you tried what Erik suggested? I can't see any reason as to why, this is happening. But I have trouble enough debugging my own code, let alone someone elses. Your orthonormalization at the start of update looks sketchy, I don't think it does what you intend(I haven't used the d3dx library in a while, so i can't remember order of input/outputs for the cross product function).
Keep trying, you will solve your problem. It will likely be an AHA moment that wakes you from sleep, or hits you on the toilet, but keep plugging away at it.
### #15littletray26 Members
Posted 26 October 2012 - 12:55 AM
I'll write the vectors to the screen and have a look tonight.
I sure hope so.
# pdflatex and \hyphenation
EDITEDx2. Here is a fairly short latex file.
\documentclass[12pt]{amsart}
\newcommand{\calG}{\mathcal{G}}
\newcommand{\calP}{\mathcal{P}}
\hyphenation{lem-ma none-the-less un-pa-ram-e-ter-ized}
\begin{document}
\subsection{Projection to holes is coarsely Lipschitz}
The following lemma is used repeatedly throughout the paper.
\begin{itemize}
\item
for any hole $X$ for $\calG$, the projection $\pi_X(\calP)$ is an
$A$--unparameterized quasi-geodesic and
\item
foobar.
\end{itemize}
\end{document}
When I use pdflatex (pdfTeX 3.1415926-1.40.11-2.2 (TeX Live 2010/Fink)) I get a pair of overfull hbox errors. If I replace lemma in the body by lem\-ma and replace unparameterized by unparame\-terized then the warnings go away. Suggestions?
-
Works for me, so you'll have to provide a minimal working example (MWE) that illustrates your problem. – lockstep Jan 7 '12 at 12:04
Are you using babel and multiple languages? – egreg Jan 7 '12 at 12:07
Please post a separate second question for the second question. – Stefan Kottwitz Jan 7 '12 at 12:07
With regard to @egreg's comment: Have a look at tex.stackexchange.com/questions/37934/… – lockstep Jan 7 '12 at 12:14
Lockstep - Ok, I'll try to do that. Egreg - English only. Stefan - I'll split off the question. – Sam Nead Jan 7 '12 at 14:38
show 1 more comment
Set \righthyphenmin to 2, because amsart uses the value 3. Then lemma will be hyphenated. And words including a dash are not hyphenated. Insert a \hskip0pt. See example.
\documentclass[12pt]{amsart}
\newcommand{\calG}{\mathcal{G}}
\newcommand{\calP}{\mathcal{P}}
\righthyphenmin=2
\begin{document}
\subsection{Projection to holes is coarsely Lipschitz}
The following lemma is used repeatedly throughout the paper.
\begin{itemize}
\item
for any hole $X$ for $\calG$, the projection $\pi_X(\calP)$ is an
$A$--\hskip0pt{}unparameterized quasi-geodesic and
\item
foobar.
\end{itemize}
\end{document}
-
This is a fine example of why minimal working examples are important. – lockstep Jan 7 '12 at 15:17
@Herbert - I've expanded the example. Perhaps there are two different issues here? – Sam Nead Jan 7 '12 at 15:41
@Sam Nead: see my edited answer – Herbert Jan 7 '12 at 15:52
@Herbert - Ah. I see! Thank you very much for your answers. So, in short, the line starting \hyphenation is totally irrelevant. LaTeX knows how to hyphenate these words, but there are other rules that are overriding the hyphenations (number of letter and dashes). ... So, if I have to cure each problem individually then it seems cleaner to just insert \- in the appropriate places (after fixing all other typesetting issues). Does that sound right? – Sam Nead Jan 7 '12 at 16:06
yes, except of righthyphenmin. That is easier than lem\-ma ... – Herbert Jan 7 '12 at 16:25
TeX won't hyphenate words without leaving at least \lefthyphenmin letters before the hyphen and \righthyphenmin letters after it. These two parameters are set on a per language basis; for English, the typographic traditions require
\lefthyphenmin=2
\righthyphenmin=3
so it's immaterial if you say
\hyphenation{lem-ma}
in the document's preamble: the hyphenation point will not be considered anyway. It's interesting to know that, setting \righthyphenmin=2, the command
\showhyphens{lemma}
shows lem-ma on the terminal.
While globally setting \righthyphenmin=2 will solve the particular problem, I wouldn't recommend it, since it may add many improper hyphenation points in the rest of the document.
A "local" solution, that is, inputting lem\-ma at that spot is, in my opinion, the way to go: an explicit discretionary hyphen overrides the "minimum hyphenation rules" for that word.
Words containing an explicit or discretionary hyphen (- or \-) are possibly split only at the explicit hyphens: you can solve the problem with $A$-unparameterized by inserting \hspace{0pt}:
$A$-\hspace{0pt}unparameterized
or, maybe,
$A$\mbox{-}\hspace{0pt}unparameterized
so that TeX won't break after the explicit hyphen. Leave all these adjustments for the final stage of production, when you're sure that the text and the page parameters are in definitive form.
-
I think there may be more than one issue here. I've expanded the example to try and show this. – Sam Nead Jan 7 '12 at 15:52
Ok, very clear. Thank you. – Sam Nead Jan 7 '12 at 16:29 |
Tag Info
10
This closure is a rather stupid thing, because the Web site is not closed: indeed, there still is a machine, somewhere, which responds to HTTP requests and returns the "we are closed" page. It would have cost zero effort, and zero extra money, to simply let the Web site run and keep on serving PDF files. For crypto development, this means that until the US ...
9
At the time of the competition (I can talk about it, I was there), there was a lot of discussion and various people showed arguments. However, there was never an official, publicly known "board of scores" with totals and definite rules, as the pictures you show seem to purport. It is possible that the NIST people did make something similar internally, but ...
7
I would characterize the service as similar to a trusted time-stamping service. Except they do not do the time-stamping, but just provide the "key". This allows a user to decide what do to with it, such as using it as a private key to sign something, or an HMAC key, proving the signature is "not older" than the timestamp. If the signature is published to a ...
6
Bernstein and Lange says that there has been no progress for prime-field elliptic curves since about 1999, when the NIST curves were chosen. No large class of weak curves were known then, and no large class is known now. Some small classes are known, (as Neves says) the curves with small embedding degree and the anomalous curves (order $n$ equals the prime ...
6
As far as I can tell, NIST has only one official document about entropy collection. SP-800-90B. The purpose of NIST Special Publication (SP) 800-90B is to specify the design and testing requirements for entropy sources that can be validated as approved entropy sources by NIST‘s CAVP and CMVP. It essentially defines a bunch of statistical tests to ...
5
Under the assumption that $(K,\text{Msg})\to H_K(\text{Msg})$ is a secure MAC (be it HMAC or any other MAC), and $\text{Nonce}$ does not repeat and is of fixed size, both $H_K(\text{Msg}||\text{Nonce})$ and $H_K(\text{Nonce}||\text{Msg})$ are demonstrably secure, in the sense that an adversary not knowing $K$ can't distinguish either from random, even for ...
4
$\pi$ is the transcendental number 3.1415926... It's there in the formula to show this specific number was not chosen with a specific cryptographical backdoor in mind; it seems unlikely that anyone was able to select the value of $\pi$ (unless Carl Sagan was correct, of course :-)
4
Q1: Why are these tests stroked out? These tests are stroked out on pages 57-58 of the current FIPS 140-2 because they are no longer part of the current FIPS 140-2 standard, since Change Notice 2 of 2002 December third, where these pages belong. My guess for the rationale of removing these tests is that It was realized that the very principle ...
4
I wonder why anyone would choose to rely on a source of true random numbers fraught with questions that will ultimately have no provable - or perhaps even satisfactory - answer. There are at least a couple of companies that sell generators that provide high quality true random numbers. Having a generator on-site and available real-time allows the necessary ...
4
The cornerstone of the handshake security is that the Finished messages, sent under the protection of the newly exchanged key (for encryption and MAC), contain hash values computed over all the handshake messages exchanged so far, including the list of cipher suites and all other parameters. As long as client and server don't negotiate the use of a cipher ...
3
Your description of how RFC 5959 works isn't quite right. It is not quite correct to state that RFC 5959 encrypts using AES in ECB mode. A correct statement is: if the plaintext is exactly 128 bits, then use ECB mode, otherwise use a non-trivial mode of operation found in RFC 3394. In the former case, ECB mode is fine, since it's just a single block of ...
3
Yes. $\:$ "simply XORing" is obviously malleable, which may allow related-key attacks. "When storing a short key, e.g. a 256-bit ECC private key," the "good reason to use AES" is that "the XOR with a single PBKDF2 (or other KDF) output block" is not necessarily sufficient, since an adversary might also have changed the stored public key.
3
Since you do not describe why TLS Handshake and IKE are appropriate in your situation, and as long as you don't describe your situation, it's hard to really help you. Also, you haven't stated if it's only IKE that's not appropriate, or if that also includes IKEv2 (which improved the IKE protocol). Therefore, I'll simply assume you meant both. As an ...
2
In addition to the earlier remarks about the missing background of your question please also consider that TLS and IKEv2 are actually not just a single authentication and key exchange protocol but rather a framework that supports many different AKA protocols. Let us use TLS as an example. In TLS you have the concept of ciphersuites and they allow you to ...
2
I cannot see it having a negative effect, only a positive effect. Let's look at the Reddit AMA of Glenn Greenwald and the relevant comment: There are hundreds of encryption standards compromised by the program the Guardian, NYT and PP all reported on. I have never seen any list of those standards and don't have it. If I did have it, I would publish it ...
2
This closure could have an unintended effect on security. If a researcher was attempting to use a NIST resource, he or she might turn to a third party due to the unavailability of the NIST site. This may spur awareness, interest, or growth in other international standards bodies, such as ISO, or even to form an ECRYPT-III effort. If that third party turns ...
2
The rationale for no longer mandating these tests include: These tests are generally not useful against most FIPS 140-2 approved random number generators. These tests can be useful against some kind of entropy sources. These tests give frequent false positives every few thousandth block of truely random stream will fail the test. Some entropy sources are ...
1
The multiplier parameter $k$ is different between SRP 6 and 6a. You can see that RFC 5054 calculates it using a hash of the domain parameters (modulus $N$ and generator $g$), so it is using SRP 6a, as opposed to SRP 6 where $k$ is constant. Likewise, in section 6.2.1 of IEC 11770-4 – the October 2005 draft at least – the equivalent value $c$ is defined as a ...
1
First up: Don't believe the hype! Especially if things can easily be proven wrong. What I mean is that your NIST have just launched a new service… is incorrect, as the NIST Randomness Beacon project is known to me (and others) since 2011. Furthermore, this project was awarded a multi-year grant from NIST's Innovations in Measurement Science (IMS) Program in ...
1
Germany's BSI has produced AIS 31 that includes requirements on Physical True RNGs (PTRNGs). It is designed to fill a gap in the Common Criteria standard. Chapter 4 describes pre-defined classes for physical true, non-physical true, deterministic and hybrid random number generators. ... The basic concepts and evaluation criteria are illustrated by ...
1
I agree that Gilles' interpretation in the comments is the only one that makes sense; the RFC clearly contains an editorial error, and should read either (emphasis indicates corrections): "If the value calculated by the authentication server matches the value calculated by the client, then the HOTP value is validated." or: "If the value received by ...
Only top voted, non community-wiki answers of a minimum length are eligible |
## Abstract and Applied Analysis
### Inverse Scattering from a Sound-Hard Crack via Two-Step Method
Kuo-Ming Lee
#### Abstract
We present a two-step method for recovering an unknown sound-hard crack in ${ℝ}^{2}$ from the measured far-field pattern. This method, based on a two-by-two system of nonlinear integral equations, splits the reconstruction into two consecutive steps, which consist of a forward and an inverse problem. In this spirit, only the latter needs to be regularized.
#### Article information
Source
Abstr. Appl. Anal., Volume 2012, Special Issue (2012), Article ID 810676, 13 pages.
Dates
First available in Project Euclid: 5 April 2013
https://projecteuclid.org/euclid.aaa/1365174049
Digital Object Identifier
doi:10.1155/2012/810676
Mathematical Reviews number (MathSciNet)
MR2947726
Zentralblatt MATH identifier
1246.65256
#### Citation
Lee, Kuo-Ming. Inverse Scattering from a Sound-Hard Crack via Two-Step Method. Abstr. Appl. Anal. 2012, Special Issue (2012), Article ID 810676, 13 pages. doi:10.1155/2012/810676. https://projecteuclid.org/euclid.aaa/1365174049
#### References
• R. Kress, “Inverse scattering from an open arc,” Mathematical Methods in the Applied Sciences, vol. 18, no. 4, pp. 267–293, 1995.
• L. Mönch, “On the inverse acoustic scattering problem by an open arc: the sound-hard case,” Inverse Problems, vol. 13, no. 5, pp. 1379–1392, 1997.
• R. Kress and W. Rundell, “Nonlinear integral equations and the iterative solution for an inverse boundary value problem,” Inverse Problems, vol. 21, no. 4, pp. 1207–1223, 2005.
• O. Ivanyshyn and R. Kress, “Nonlinear integral equations for solving inverse boundary value problems for inclusions and cracks,” Journal of Integral Equations and Applications, vol. 18, no. 1, pp. 13–38, 2006.
• K.-M. Lee, “Inverse scattering via nonlinear integral equations for a Neumann crack,” Inverse Problems, vol. 22, no. 6, pp. 1989–2000, 2006.
• O. Ivanyshyn and R. Kress, “Nonlinear integral equations in inverse obstacle scattering,” in Proceedings of the 7th International Workshop on Mathematical Methods in Scattering Theory and Biomedical Engineering, Nymphaio, Greece, 2005.
• A. Kirsch and R. Kress, “An optimization method in inverse acoustic scattering,” in Boundary Elements IX, Vol. 3: Fluid Flow and Potential Applications, C. A. Brebbia, W. L. Wendland, and G. Kuhn, Eds., pp. 3–18, Springer, Berlin, Germany, 1987.
• R. Kress and P. Serranho, “A hybrid method for two-dimensional crack reconstruction,” Inverse Problems, vol. 21, no. 2, pp. 773–784, 2005.
• R. Kress and P. Serranho, “A hybrid method for sound-hard obstacle reconstruction,” Journal of Computational and Applied Mathematics, vol. 204, no. 2, pp. 418–427, 2007.
• K.-M. Lee, “A two step method in inverse scattering problem for a crack,” Journal of Mathematical Physics, vol. 51, no. 2, Article ID 023529, 10 pages, 2010.
• R. Kress, Linear Integral Equations, Springer, Berlin, Germany, 2nd edition, 1999.
• L. Mönch, “On the numerical solution of the direct scattering problem for an open sound-hard arc,” Journal of Computational and Applied Mathematics, vol. 71, no. 2, pp. 343–356, 1996.
• D. Colton and R. Kress, Inverse Acoustic and Electromagnetic Scattering Theory, Springer, Berlin, Germany, 2nd edition, 1998.
• Y. Yan and I. H. Sloan, “On integral equations of the first kind with logarithmic kernels,” Journal of Integral Equations and Applications, vol. 1, no. 4, pp. 549–579, 1988.
• R. Potthast, “Fréchet differentiability of boundary integral operators in inverse acoustic scattering,” Inverse Problems, vol. 10, no. 2, pp. 431–447, 1994. |
# Prediction
November 9, 2020
• Roe v Wade is wholesale overturned before 2025: 13.5%
• Roe v Wade is partially overturned/weakened before 2025: 31.5%
• Overall: 13.5% + 31.5% = 45%
# Analysis
The key questions are:
• Will SCOTUS have an opportunity to rule on abortion?
• What are the views of the current justices on abortion rights?
• How often has a precedent like Roe v Wade been overturned?
• What will the future views of the court be?
• What are the possible outcomes of a case challenging abortion rights?
## What’s the chance the court will have an opportunity to rule on abortion?
One plausible path to the Supreme Court is that a state’s abortion restrictions are challenged in court by an organization or individual who is harmed by them. To illustrate this, consider the 2019 Alabama abortion ban that seems to clearly violate rights established by Roe v Wade. The law’s real purpose was to act as a vehicle to challenge those rights.1 It was swiftly struck down by a district court after abortion providers sued, but appeals may one day land it in front of the Supreme Court.
This is also how Roe v Wade was challenged and partially overturned in Planned Parenthood v Casey when Planned Parenthood challenged a Pennsylvania abortion law.
A doctor or patient who has been convicted of violating an abortion law could also bring a case, but this seems less likely.
Base rate: In the last 20 years, SCOTUS has ruled on at least 5 abortion cases: Whole Women’s Health, Coakley, Stenberg, Gonzalez, and June Medical Services. Only four of these were on abortion restrictions. Coakley was on the right to protest outside abortion clinics. This gives a base rate of one ruling every 5 years.
There is already one case that could affect Roe v Wade that the court punted on, which may return to the court in the future. Some have speculated that the court was waiting for Amy Coney Barrett to be confirmed before taking the case on. Another theory is that they are waiting until after the election. This article says there are 17 cases that are "one step away."
I think the base rate is probably too low because the recent shift toward a more conservative court will incentivize anti-abortion advocates to bring more cases.
I think there is a 90% chance the court hears a case relevant to Roe v Wade before 2025
## What are the views of the current justices on Roe v Wade?
Below I go through each justice and try to assign a probability that he or she opposes abortion rights. These cases are clearly more nuanced than a one dimensional spectrum from support to oppose, and this overview is very cursory, but I think this exercise at least points in the right direction. My starting point is this chart on the ideological leanings of the court, which I then update with the listed links.
Sotomayor: 40% opposes abortion rights
Kagan: 35% opposes abortion rights.
Gorsuch: 60% opposes abortion rights.
Kavanaugh: 60% Opposes abortion rights.
Barrett: 80% Opposes abortion rights, but respects precedent.
Roberts: 55% opposes abortion rights
Thomas: 80% opposes abortion rights
Breyer: 20% opposes abortion rights
Alito: 80% opposes abortion rights
If I treat these as independent events, there's a 59% chance at least 5 judges will rule against Roe v Wade on a given case.
## How often is a precedent like Roe v Wade overturned?
It has happened at least 300 times as of 2018 or 101 times if you only count constitutional cases. The court rules on 100-150 cases per year.
So overturning precedent is somewhat rare. Most of the constitutional cases happened after 1900, so as a first approximation the base rate might be (101 cases)/(120 years * 100 cases) = 0.84% of cases result in overturning constitutional precedent. Another way of looking at this is as a per year rate. Viewed this way, constitution precedent is overturned on average every 120/101 = 1.2 years. With the recent ideological shift in the court, this rate might increase.
## What will the future views of the court be?
Many liberals are currently concerned about the 6-3 conservative majority of the court, but it’s worth pointing out that the composition of the court will continue to change in the future. Three members of the court are 70 or over (Alito: 70, Breyer: 82, Thomas: 72) and may leave the court soon, either due to death or retirement. If Biden is president (99%) then any retirement or death could shift the court back.
Another thing to consider is democratic control of the senate. If the democrats win the Georgia senate races (25% chance) and gain control of the Senate, there’s a nontrivial chance they will try to expand the court or credibly threaten to expand the court (probably less than 50%, but greater than 10%). This might disincentivize the court from making any radical decisions. If Roe v Wade were overturned, I would expect it to be much more probable that the court would be packed, conditioned on democrats gaining control. There’s also the midterms to consider. If the democrats don’t gain control now and the court overturns Roe v Wade, they may gain control in 2022 and pack the court.
## What are the possible outcomes of a case challenging abortion rights?
My impression is that Supreme Court rulings are often more subtle than they're made out to be, and often the ruling is fairly narrow. This means that a wholesale reversal of Roe v Wade is less likely than the court slowly chipping away at the precedent. For example, in Planned Parenthood v Casey, Roe v Wade was overturned in the sense that a trimester framework was replaced with a viability framework. This technically "overturned" part of Roe v Wade, but it's not clear that this tracks the commonsense meaning of overturned. I think most people envision the right being completely reversed rather than slowly weakened. That said, I suspect it's more likely that Roe v Wade is chipped away at rather than wholesale overturned given the importance that the court assigns to precedent and narrow rulings.
# Putting it together
A first pass at putting this together is taking the 90% chance abortion rights are challenged and multiplying it with the 59% chance the court rules to overturn or weaken those rights. That gives a 53% chance of Roe v Wade being overturned or weakened.
There are a few adjustments I want to make to this. First, the threat of court packing decreases the 59% chance, but not by much. It looks likely the democrats will have to wait until the midterms before they can credibly threaten to pack the courts and even then it’s not a sure thing. Second, the 59% chance of overturning was for a single case. Multiple cases brought before the court increase the chance that at least one challenge will succeed. That said, I don’t think it increases it by much because the outcomes will be heavily correlated. So this increases the 59% by only a little. I think these two adjustments are basically a wash. Finally, I adjust downward due to the low base rate of constitutional precedent being overturned. I end up at a 50% chance of Roe v Wade being overturned or weakened (conditioned on being challenged).
I express this in a yaml based format I developed to express probability trees:
roe v wade is challenged [0.9]:
overturned [0.50]:
wholesale overturned [0.3]: true
weakened [0.7]: true
~overturned [0.5]: false
~roe v wade is challenged [0.1]: false
Which yields the following paths:
[0.1350] roe v wade is challenged {p=0.9} → overturned {p=0.50} → wholesale overturned {p=0.3}
[0.3150] roe v wade is challenged {p=0.9} → overturned {p=0.50} → weakened {p=0.7}
Overall: 0.4500
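As a cross-check, the same path probabilities fall out of a few lines of code. The nested-dictionary encoding below is only an illustration of the idea; it is not the author's actual YAML-based tool.

```python
# Reproduce the path probabilities from the tree above (encoding assumed, not the
# author's own script). Leaves are True/False outcomes; internal nodes are dicts.
tree = {
    "roe v wade is challenged": (0.9, {
        "overturned": (0.50, {
            "wholesale overturned": (0.3, True),
            "weakened": (0.7, True),
        }),
        "~overturned": (0.5, False),
    }),
    "~roe v wade is challenged": (0.1, False),
}

def paths(node, prob=1.0, trail=()):
    if not isinstance(node, dict):              # leaf: True/False outcome
        yield prob, trail, node
        return
    for name, (p, child) in node.items():
        yield from paths(child, prob * p, trail + (f"{name} {{p={p}}}",))

for prob, trail, outcome in paths(tree):
    if outcome is True:                         # only paths where the event happens
        print(f"[{prob:.4f}] " + " → ".join(trail))
print("Overall:", sum(p for p, _, o in paths(tree) if o is True))   # 0.45
```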
The adjustments could also be expressed in the tree itself, but that leads to a very complicated tree. When I tried to do this, I ended up with the same ballpark probability (40%-50%).
It’s worth comparing this prediction to a Metaculus question which gives only a 25% chance of Roe v Wade being overturned. I think one reason I’m far from the community forecast is that the resolution criteria for that question are narrow, so it might resolve negative even if Roe v Wade is weakened in some way. That said, it doesn’t resolve until 2028 so it has a much longer time horizon, which should raise the probability. One thing I’m surprised about is that the Amy Barrett confirmation didn’t seem to change the probability much. Maybe forecasters haven’t updated yet. It’s also possible they’re weighing the low base rate much more heavily.
1. “The bill’s sponsor, Republican representative Terri Collins, has stated that she hopes the law will lead to a legal challenge in which Roe v. Wade is overturned.” Wikipedia contributors. (2020, October 22). Human Life Protection Act. In Wikipedia, The Free Encyclopedia. Retrieved 03:28, November 9, 2020, from https://en.wikipedia.org/w/index.php?title=Human_Life_Protection_Act&oldid=984841661↩︎ |
# New to Logarithms
• November 18th 2012, 05:04 PM
New to Logarithms
Hey,
I am pretty new to Logarithms and my class just got into the topic. I just converted y = log(base 4)X into "x = 4^y" to get the exponential equation.
However when the problem starts to get more complicated I don't know how to convert.
Right now I am trying to figure out how to convert "y = log(base 3)(x-2)"
Any help would be appreciated! Thanks!
• November 18th 2012, 05:13 PM
skeeter
Re: New to Logarithms
Quote:
Hey,
I am pretty new to Logarithms and my class just got into the topic. I just converted y = log(base 4)X into "x = 4^y" to get the exponential equation.
However when the problem starts to get more complicated I don't know how to convert.
Right now I am trying to figure out how to convert "y = log(base 3)(x-2)"
Any help would be appreciated! Thanks!
$a = \log_b{c} \implies b^a = c$
so ...
$y = \log_3(x-2) \implies 3^y = x-2$
• November 18th 2012, 05:44 PM
Re: New to Logarithms
Thanks!! How would I then type that out with bases? I don't know how to do that. And I have been trying to plug it into a graphing calculator to check but have problems there as well. Thanks!
• November 18th 2012, 06:13 PM
Prove It
Re: New to Logarithms
You won't be able to input bases other than 10 or e into your graphics calculator. If you have any other base you will need to change the base to 10 or e.
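For reference, the change-of-base identity Prove It is referring to is
$$\log_b x = \frac{\ln x}{\ln b},$$
so the example above can be entered into a calculator that only offers log or ln as $y = \dfrac{\ln(x-2)}{\ln 3}$ (equivalently with base-10 logs).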
• November 19th 2012, 05:07 AM
skeeter
Re: New to Logarithms
Quote: |
# Refer to the data in Exercise 6-1 for Ida Sidha Karya Company. The absorption costing income... 1 answer below »
Refer to the data in Exercise 6-1 for Ida Sidha Karya Company. The absorption costing income statement prepared by the company's accountant for last year appears below:

Sales . . . . . . . . . . . . . . . . . . . . . . $191,250
Cost of goods sold . . . . . . . . . . . . . . . . 157,500
Gross margin . . . . . . . . . . . . . . . . . . . . 33,750
Selling and administrative expense . . . . . . . . . 24,500
Net operating income . . . . . . . . . . . . . . . $ 9,250

Required:
1. Determine how much of the ending inventory consists of fixed manufacturing overhead cost deferred in inventory to the next period.
2. Prepare an income statement for the year using variable costing. Explain the difference in net operating income between the two costing methods.
## 1 Approved Answer
VIJAYAKUMAR G
5 Ratings, (9 Votes)
The solution is attached herewith...
# Enumerate alignment
Is it possible to improve the alignment of the first item?
\documentclass{article}
\begin{document}
\begin{enumerate}
\item \begin{tabular}{cccccccccc}
$x[n] =$ & \{1/3 & -1/2 & 1/4 & -1/8 & 1/6\} & and\\
& & & $\uparrow$ & & & \\
\end{tabular}
\begin{tabular}{cccccccccc}
$h[n] =$ & \{1 & -1 & 1 & -1\} \\
& & & & & & \\
\end{tabular}
\item \begin{tabular}{cccccccccc}
$x[n] =$ & \{1 & -2 & -3 & 0 & -1\} & and\\
\end{tabular}
\begin{tabular}{cccccccccc}
$h[n] =$ & \{2 & -1/2 & -3 & 1 & -2\} \\
\end{tabular}
\end{enumerate}
\end{document}
The result is:
-
When you have a whole bunch of similarly-aligned tabular columns, use the *{<num>}{<type>} notation. In your case, \begin{tabular}{*{10}{c}}. It's easier to debug and more readable. Also, consider keeping the numerals in math mode, which provides a consistent spacing around the unary operator -. – Werner May 16 '13 at 1:33
What do you want the $\uparrow$ to point from and to? – Pål GD May 16 '13 at 9:24
The \upparrow is a "n = 0" mark. – Papiro May 16 '13 at 9:28
If I understand correctly, you need to add [t] alignment:
\documentclass{article}
\begin{document}
\begin{enumerate}
\item \begin{tabular}[t]{cccccccccc} %%%% note [t] here.
$x[n] =$ & \{1/3 & -1/2 & 1/4 & -1/8 & 1/6\} & and\\
& & & $\uparrow$ & & & \\
\end{tabular}
\begin{tabular}[t]{cccccccccc} %%%% and here.
$h[n] =$ & \{1 & -1 & 1 & -1\} \\
& & & & & & \\
\end{tabular}
\item \begin{tabular}{cccccccccc}
$x[n] =$ & \{1 & -2 & -3 & 0 & -1\} & and\\
\end{tabular}
\begin{tabular}{cccccccccc}
$h[n] =$ & \{2 & -1/2 & -3 & 1 & -2\} \\
\end{tabular}
\end{enumerate}
\end{document}
-
The following is a suggestion for a slightly easier notation that is more flexible. Moreover, it provides a consistent and accurate spacing around the math operators you use:
\documentclass{article}
\usepackage{amsmath,xparse}% http://ctan.org/pkg/{amsmath,xparse}
\NewDocumentCommand\printarray{O{~~} >{\SplitList{,}}m}
{%
\def\itemdelim{\def\itemdelim{#1}}% Define list separator with one delay
\ProcessList{#2}{\myitem}% Process list
}
\newcommand\myitem[1]{\itemdelim{#1}}
\begin{document}
\begin{enumerate}
\item $x[n] = \{\printarray{1/3,-1/2,\underset{\uparrow}{1/4},-1/8,1/6}\}$ and $h[n] = \{\printarray{1,-1,1,-1}\}$
\item $x[n] = \{\printarray{1,-2,-3,0,-1}\}$ and $h[n] = \{\printarray{2,-1/2,-3,1,-2}\}$
\end{enumerate}
\end{document}
The "array" is printed using \printarray[<sep>]{<CSV list>} where <sep> defaults to ~~.
Delayed definition of the list separator stems from Package xparse \SplitList last token (or originally Cunning (La)TeX tricks), while amsmath provides \underset.
-
I also would like to suggest you another approach, using a matrix environment instead of a tabular; it's more natural, gives you the desired vertical alignment automatically, and saves you the column format specification:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{enumerate}
\item
$x[n] = \{\begin{matrix} 1/3 & -1/2 & \smash{\underset{\uparrow}{1/4}} & -1/8 & 1/6 \end{matrix}\}$\quad
$h[n] = \{\begin{matrix} 1 & -1 & 1 & -1 \end{matrix}\}$
\item
$x[n] = \{\begin{matrix} 1 & -2 & -3 & 0 & -1 \end{matrix}\}$\quad
$h[n] = \{\begin{matrix} 2 & -1/2 & -3 & 1 & -2 \end{matrix}\}$
\end{enumerate}
\end{document}
-
it also applies math mode automatically, which makes the minus signs behave properly without having to think about them. – barbara beeton May 16 '13 at 12:25
This might be considered cheating, but if you skip the enumeration environment and put everything in one table, things behave better:
\documentclass{article}
\begin{document}
\begin{tabular}{lcccccccccccccccccccc}
1.&$x[n]=$&\{1/3&-1/2&1/4&-1/8&1/6\}& and $h[n]=$&\{1&-1&1&-1\}\\
&&&&&$\uparrow$&&&&&&&&\\
2.&$x[n]=$&\{1&-2&-3&0&-1\}& and $h[n]=$&\{2&-1/2&-3&1&-2\}\\
\end{tabular}
\end{document}
- |
Commentary | Translational Research
# Changing Models of Biomedical Research
Science Translational Medicine 07 Oct 2009:
Vol. 1, Issue 1, pp. 1cm1
DOI: 10.1126/scitranslmed.3000124
## Historical Perspective: The “Bush Model” of Biomedical Research
Toward the end of World War II, U.S. President Franklin Roosevelt requested the director of the Office of Scientific Research and Development, Vannevar Bush, to develop a vision for the future development of the nation’s science. In the resulting landmark report, Science: The Endless Frontier (1), Bush laid out a bold plan for America’s biomedical research, a vision that has provided a fundamentally stable blueprint for the nation’s scientific efforts over the past 64 years. Bush’s model placed the highest value on investments in basic science, which was envisioned as the wellspring of a cascade that ultimately concluded with testing in humans (Fig. 1). Building on the previous advances in physics and chemistry from the first half of the 20th century, the robust framework provided by the Bush model engendered advances that include the harnessing of nuclear energy for medical diagnostics and therapeutics and laying the groundwork for the chemical basis of pharmaceutical development. In the fostering of basic science innovation, the Bush model has served our nation well. However, in terms of transferring these basic advances to improvements in medical care, the Bush blueprint now requires more in-depth analysis (2).
The Bush model has had profound implications for science policy, the organization of biomedical research communities, and science funding both locally and globally (Fig. 1). For example, the model logically suggested that medical schools and academic health centers (AHCs) should preferentially recruit basic scientists and that their careers be well supported by generous allocations of research space, facilitated promotions, and prestige. An unintended consequence of the Bush model was that human research became relegated to a far downstream component of the scientific discovery process, essentially serving as little more than proof of a given scientific principle, rather than as either a means of defining new science or a real opportunity to influence health care. This final step of the translation of basic science discoveries to humans became viewed as more of a challenge in execution than an exercise in innovation. Thus, these functions were largely relegated to academic hospitals, which were populated mainly by physicians, many of whom had studied in basic science laboratories but left them for a variety of reasons. Hospitals and physician researchers thus were viewed not as the drivers of scientific discovery but rather as its end point, making academic investments in human investigation and its infrastructure scant.
A second feature of the Bush report that had a profound impact on AHCs was its ground-breaking suggestion that the public should fund the nation’s scientific endeavors. The ultimate consequence of this recommendation was the founding of the National Institutes of Health (NIH), which became the major funder of AHCs and their affiliated institutions in the United States. This funding fueled an explosion of basic research in these centers that previously had focused solely on care delivery, particularly to the disadvantaged. The resulting NIH budget funded more basic than clinical research by a wide margin (~2:1) that has been stable for some time (3) and changed the face of our universities in general and AHCs specifically. Government funding converted their status from almshouses that provided medical care for the indigent at the beginning of the 20th century into dynamic research centers that spawned numerous Nobel Prizes by its end.
A third consequence of the Bush model was that the translation of basic science advances into drug development, testing of drugs in humans, and their fashioning into medications became largely the responsibility of the pharmaceutical industry (Fig. 2). However, this translation required the well-phenotyped patient populations found in AHCs. Participation of these patients in drug development was supported by a transient, study-specific infrastructure and by collaborations focused on the drug approval process, with no abiding commitment to the support services required for the long-term health of the drug development industry. This culture resulted from the absolute requirement for drug testing in humans with well-characterized diseases [which is regulated in the United States by the Food and Drug Administration (FDA), established by President Theodore Roosevelt in 1906] and from the scale of drug development costs and pharmaceutical competitiveness. This model has represented the state of affairs in the United States for the past six decades (Fig. 2).
## Problems Emerge
The past three decades have produced substantial evidence that the clinical research infrastructure and the human components of translational research in AHCs have been poorly served by these Bush policies. In their newfound positions of authority and national prominence, basic scientists were reluctant to share resources with hospitals and clinical investigators, and universities neglected to invest in the infrastructure required to facilitate the complex and expensive process of translating basic science advances to studies in humans. Several Institute of Medicine (IOM) reports produced since 1980 documented the consequences of this skewed allocation of resources within universities and AHCs, which gave rise to bottlenecks that hindered the efficient translation of basic science into medical practice (so-called “translational blocks”) (2, 4-8). Still, little was done to address these difficulties. Consequently, problems in the recruitment, training, and retention of clinical investigators within academic medicine deepened (6-8). In his Presidential Address to the American Federation of Clinical Research in 1977 (9), Sam Thier first referred to clinical investigators—who were absolutely required for human testing of any biological advance—as an “endangered species”, a designation that became popularized when used by NIH Director James Wyngaarden in a subsequent publication (10).
## The Tipping Point: Three Important Changes Occur
### The appearance of powerful new clinical investigative tools.
Beginning in the 1980s, several major advances began to change this status quo. Novel genetic technologies were applied directly to human DNA, allowing scientists to begin to unravel the molecular triggers of rare diseases (11-25) and learn powerful new truths about their underlying pathophysiology. These successes led to the concept of sequencing the entire human genome (completed in 2003) to discover the basis for all inherited disorders, an endeavor that spurred the development of potent new analytical tools that empowered studies of the genetic underpinnings of common human diseases. Perhaps the most striking generalization that emerged from these studies was that the genes and associated biological pathway abnormalities unearthed by the new genetic and genomic tools frequently were not the traditional ones being investigated by basic scientists, who had based their approach largely on late-stage disease phenotypes that they then modeled in experimental organisms.
Thus, the study of humans brought truly novel information to the attention of basic scientists, making the human a valid experimental organism for discovery research, while also providing new hope to patients with devastating diseases. This fundamental change in the direction of innovative information flow (now from bedside to bench) was the first element of the tectonic shift that is now occurring in medical science. For example, many of these newly discovered genes, such as the huntingtin gene, which can harbor a mutation that causes all cases of Huntington’s disease, were previously unknown to basic scientists (23, 25). Other genes, such as the superoxide dismutase 1 gene, which is mutated in certain forms of amyotrophic lateral sclerosis, encode well-studied proteins that had not yet come to the attention of scientists in the field of neurodegeneration (22). The discovery of new disease triggers and their respective biological pathways not only inspired astute basic scientists to refocus their research programs but also revealed potential new therapeutic targets for drug development.
The engineering of intricate new imaging tools, such as functional magnetic resonance imaging, opened yet another avenue to early disease diagnosis and investigation, which, like molecular genetic studies, reinforced the concept of human disease as a potentially protracted pathogenic process whose clinical expression is only a late-stage phenotype. Together with sensitive imaging techniques, the so-called ”-omic” technologies (genomics, proteomics, and metabolomics), which were derived from the Human Genome Project, clearly demonstrated the importance of clinical phenotyping, unbiased molecular profiling, and human genetics, while also highlighting the shortage of scientists trained to investigate the human research components of these new and powerful pathways of scientific discovery.
## IOM’s Clinical Research Roundtable.
The second major contextual change was the formation of the Clinical Research Roundtable by the U.S. IOM in 2000. For the first time in U.S. history, this group convened all health care stakeholders to deliberate the paradox that was becoming apparent to all observers. On one hand, as a result of the advances described above, human studies yielded a burgeoning collection of basic science discoveries. In parallel, a growing shortfall in infrastructure, personnel, and policies interfered with the translation of these discoveries into advancements in clinical medicine. The Clinical Research Roundtable comprised basic and clinical scientists, health care providers, insurers (both governmental and private), AHCs, patient groups, the business community, government and regulatory agencies (the FDA, NIH, Centers for Disease Control, and Department of Veteran’s Affairs), the pharmaceutical and biotechnology sectors, and the media. This multidisciplinary group met for 4 years, during which all constituent groups registered similar complaints and expressed frustration over the general untimeliness, rising costs, and decreasing efficiency of important basic science advances being “lost in translation” into new therapies for human diseases. Each group also decried the fact that these problems were occurring in parallel with spiraling health care costs, increasing NIH budgets, and considerable press coverage touting the advances in basic science without corresponding improvements in health care.
The group authored two seminal papers in the Journal of the American Medical Association (2, 26), the first of which clearly enumerated the component blocks involved in translating basic science into superior therapies. The current translational system was inefficient, they observed, in part because it was a disjointed cottage industry composed of individual silos that communicated poorly. They proposed a more modern system to replace the currently fragmented model of inefficiently interdigitating components, each viewing itself as performing a discrete role rather than functioning as part of a continuum. To frame the direction in which the group thought a new system should evolve, they coined the aspirational term the National Clinical Research Enterprise (NCRE) (26). Ideally, this new ensemble was designed to bear the collective responsibility for translating basic advances into new treatments (26).
The Clinical Research Roundtable also characterized two major translational blocks (Fig. 3). The first was the movement of a basic research finding into the first-in-human level of clinical testing (2). Empowered by the rapidly growing repertoire of tools from the Human Genome Project, all participants recognized that this flow was becoming increasingly bidirectional. Studies in patients, families, and their tissues and DNA were now able to provide as much novel information to basic researchers as basic science advances provided to clinical researchers. These changes created a new opportunity to speed the scientific discovery process and avoid communication and cooperation inefficiencies. NIH provides most of the funding to address this first translational block. Moreover, the early steps of translation occur almost exclusively within AHCs, which assumed the role of translational research engines. Information gained from this initial innovation step empowered the subsequent harvesting of these truths by biotechnology and pharmaceutical companies as drug therapies, through a subsequent series of coordinated drug development steps and, ultimately, randomized clinical trials.
The second translational block was then defined by the Clinical Research Roundtable as the failure of new therapies to be swiftly incorporated into routine medical practice. Examples include the lack of widespread use of aspirin, β blockers, and angiotensin-converting enzyme inhibitors, all of which were shown in large randomized clinical trials to save the lives of patients who had suffered myocardial infarctions (27). This “implementation gap” was closely intertwined with the evolving complexity and spiraling costs of the health care delivery system.
In a second paper, the Clinical Research Roundtable suggested several novel remedies for these two major translational blocks (26). The first was a requirement for all current health care participants to form, fund, and actively participate in a new representative national effort (called the NCRE) to address these blocks. Only such a central body with a budget contributed to by all, the group reasoned, could sustain the fruitful discussions and strong sense of shared destiny that had developed during the Clinical Research Roundtable’s collegial interactions. All felt equally strongly that the NCRE should not be governmentally based, because nearly all solutions required public-private partnerships, which are difficult to establish and maintain under a solely governmental aegis. Although many stakeholders in the drug development process attempted to enhance the efficiency of their internal translational processes, others were prohibited from doing so legislatively (such as the Centers for Medicare and Medicaid Services) or viewed such investments in infrastructure as reducing their market competitiveness (for example, the insurers). Nonetheless, all stakeholders agreed on the need for a bold, centralized, national-scale effort. The cost of continued non-investment was simply too high.
### Lack of infrastructure: NIH’s response.
In recognition of the dismal translational track record, NIH acted to correct the negative trends by establishing a spectrum of research career awards for young and midcareer clinical investigators (K23s and K24s); training programs at the master’s degree level in clinical research (K30s); a loan repayment program for scientists choosing careers in human research; and a new network of Clinical and Translational Science Centers (CTSCs) designed to replace the older General Clinical Research Centers, whose programs were increasingly viewed as ineffective and provincial in nature. Although it is still too early to judge the results of this transition, these CTSCs potentially represent the most dramatic change to the clinical research infrastructure of U.S. AHCs since the original establishment of the General Clinical Research Centers over 40 years earlier, and they set the stage for dramatic remedies for the translational blocks.
## Future Policies in Biomedical Research
Although the growing importance of genomics-based investigative tools and approaches clearly points to a bright future for biomedical research, it also has profound implications for future policies within AHCs (Fig. 4). Given their constellation of well-characterized patient populations, strong historical investments in basic research and its infrastructure, and deployment of the limited number of physician-scientists who are well trained in the techniques of human research, AHCs should become the major translational engines of our nation’s biomedical research investment in the 21st century. This is the moment for AHCs to assume such leadership (or not). Like so many other chances for leadership, however, this opportunity knocks at the doors of the AHCs accompanied by many less attractive companions, including a societal financial crisis, a recession, reduced philanthropy, rising health care costs, and urgency for health care reform with considerable associated controversy.
The infrastructure for recruiting, consenting, phenotyping, and perturbing the physiology and pathophysiology of patients and normal populations is unique to the environment within AHCs. The scale of investigations that adhere to the postgenomic model of clinical and translational research requires considerable investment in large-scale human subject research capabilities and information technology, such as electronic patient registries and consenting processes, vast biological repositories, core scientific platforms for analyses, rules of governance, and data sharing. In addition, such studies require career stabilization for the multidisciplinary teams required, all of which are not yet well established in the research community. Our current academic reward system provides incentives chiefly for principal investigators with novel ideas. The leaders of AHCs are only now beginning to design reward structures for the building, sustaining, and functioning of the multidisciplinary scientific teams required to achieve success in translational medicine. In particular, polygenic disorders, clinical trial design and orchestration, outcomes research, the assembly and analysis of vast amounts of data, and the accumulation of large numbers of samples and their corresponding phenotyping all are daunting logistical challenges that demand a far more robust infrastructure for human research than is currently available in any single AHC, a reality that strongly argues for networking capabilities across like-minded institutions. Early successes in investigating polygenic disorders via genome-wide association studies and large-scale clinical trial networks have occurred only when dedicated resources were assembled by individual clinical research groups with longstanding commitments to specific diseases or in countries where large national epidemiological infrastructures exist. These infrastructures have enabled the accumulation of substantial, well characterized populations, often over decades. National registries associated with socialized health care systems, specific ethnic or disease populations or foundations, and/or populations in which familial relationships have been characterized are especially well positioned to capitalize on postgenomic and bioinformatics discovery tools.
Such studies clearly argue for a future realignment of centers that can multiplex: that is, focus simultaneously on several patient populations, genes, and diseases across several centers. Over time, a robustly funded CTSC network may well be a part of this solution. Finally, and perhaps most importantly, these changes require a fundamental redefinition of the relationship and shared responsibilities between physician-scientists, our patients, their patient groups, and society as a whole. Without a view of true partnership between AHCs and their patients, all of the infrastructure in the world will not be enough to achieve our goals.
These changes that are required in our scientific landscapes should not be allowed to drive wedges between basic and clinical researchers. On the contrary, the transformations must be framed in such a fashion that they serve as catalysts to building ever tighter working relationships across diverse research disciplines. Dual mentorships and coprincipal investigators, and teams with shared goals and destinies, need to become the norm. That said, some realignment of resources within AHCs must occur once the impact of these changes is fully understood by senior leadership. Whenever such a rebalancing occurs, resistance and opposition invariably follow, especially when it occurs in tight fiscal environments and is accompanied by a failure to explain that this is the tide that will ultimately lift all ships. In environments that tend to focus on zero-sum analyses, such opportunities can be divisive. These resistances can be addressed only by airing these strategic issues openly; viewing the changes as new opportunities to alleviate human suffering; and building a collective will to act.
Finally, from these strategic discussions, a new resolve must emerge from the AHC leadership to establish, support, and reward the new interdisciplinary teams required to address the evolving complexities of the translational processes. This shift will require a critical rethinking of the criteria for academic promotion that addresses the complex issues of career independence versus synergistic interdependence and the progressive loss of well-trained female faculty members at every level of their training (6, 7).
NIH instituted a widespread series of helpful programmatic changes nearly a decade ago, but the key programs (K23s, K24s, K30s, and the loan repayment program) need to be contemporized. Salaries (those for K23s in particular) are now substantially out of date. Young scientists who are well into their 30s, often married with children and attempting to repay their education loans, cannot sustain careers on the current K23 stipends. Similarly, the loan repayment program offers repayment of only $30,000/year for 3 years, whereas 22% of graduates now leave medical school with debts of $200,000 and 39% have debts larger than $138,000 (28). If these crucial programs are to remain successful, they need to be adjusted periodically to the scientific opportunities at hand.
The biotechnology and pharmaceutical industries’ support of AHCs and their clinical research enterprises through sponsorship of individual projects currently is aligned strictly with highly focused corporate goals. This support generally does not contribute materially to sustaining the training, infrastructure, and career pipeline that are fostered by AHCs and that companies ultimately rely on for successful completion of their missions. Industry sentiments that such attributes are supported by their governmental taxes may have some truth, but fail to reflect the fact that many companies do not pay their full complement of taxes, as a result of the vagaries of the complex U.S. tax system. Because the long-term successes of industry require contributions from AHCs, such as basic science innovation, professional training, access to well-phenotyped patients, FDA review of their studies, and the education of physicians about new therapeutic advances, it is in the best interest of companies to enhance their support of AHCs. The magnitude of the required investments are often small enough to represent rounding errors in industry budgets and, with the increasing conflict-of-interest restrictions imposed on academic researchers by AHCs, a timely review of industry’s commitment to investments in AHCs and their infrastructure is warranted.
## Conclusions
For biomedical research, it is simultaneously the best and worst of times. The tools for discovery have never been more powerful. Their speed and accuracy are astonishing. It is equally clear, however, that the infrastructure and resources of the NCRE have probably never been less well-proportioned to the opportunities at hand. This is not a surprising truth when one realizes that the current infrastructures were established more than 50 years ago for an enterprise that was only a fraction of its present scale and had little of its current power, complexity, and regulatory requirements.
Within academic centers, most new investments remain targeted to basic science facilities. The power of these basic science enterprises remains enormous, and continued investment in them should remain a high priority for AHCs. Thus, although a new model of biomedical research is developing, it should not be viewed as replacing the old one, but rather as offering a complementary dimension for innovation and scientific opportunity. To capitalize on these opportunities, however, AHCs need to better balance the allocation of new resources between basic and clinical research infrastructures. Paramount is the recruitment of our most promising physician-scientists into clinical investigation and their training in contemporary techniques in a fashion that matches the rigorous training of basic scientists. Partnerships between basic and clinical researchers at every level—including didactic programs and co-mentoring of young physician investigators by both basic and clinical scientists—will be a winning path to our collective futures and ensure mutually stabilizing career paths for both, especially those skilled at interacting with their counterparts.
## Footnotes
• Citation: W. F. Crowley Jr., J. F. Gusella, The changing model of biomedical research. Sci. Transl. Med. 1, 1cm1 (2009).
Detecting Outlier Car Prices on the Web
by Josh Levy
December 18, 2013
We're pleased to bring you this post, courtesy of Josh Levy, Director of Data Science at Vast.com. Based in Austin, TX, Vast is a leading provider of data and technology powering vertical search for automotive, travel and real estate. Prior to Vast, Josh was an R&D Engineer at Demand Media.
You can find Josh on LinkedIn or Github.
Intro
As a data scientist, I have the great fortune of working on some really cool projects and a range of fascinating analytical problems. If you've never heard of Vast.com before, here's the elevator description.
Vast provides data to publishers, marketplaces, and search engines in 3 industries: cars, real estate, and leisure, lodging & travel. Vast's systems are delivered via a white label integration and improve search results, product recommendations, and special offers within some very popular consumer apps (Southwest GetAway Finder, AOL Travel, Yahoo! Travel, Car and Driver to name a few).
This is a post exploring a real world outlier detection problem and the approach I took to solving it at my company.
Outliers and outlier detection
"Outliers" are, simply speaking, data points that are especially distant from the other points in your data. They can be problematic to building analytical applications, as they tend to yield misleading results if you're not aware of them or if you fail to account for them adequately.
Outlier detection is an extremely important problem with a direct application in a wide variety of application domains, including fraud detection (Bolton, 2002), identifying computer network intrusions and bottlenecks (Lane, 1999), criminal activities in e-commerce and detecting suspicious activities (Chiu, 2003).
~ Jayakumar and Thomas, A New Procedure of Clustering Based on Multivariate Outlier Detection (Journal of Data Science 11(2013), 69-84)
Outliers are extremely common, and you'd be hard pressed to find a real-world data set entirely without them. They can crop up for a variety of reasons: for example, an outlier can be the result of human error in creating the data, or of measurement error caused by inconsistent practices across teams of researchers, to name two.
The problem with outliers
At Vast, we ingest listing data from thousands of suppliers and publish listings to thousands of marketplaces that trust the data are accurate. The listing data itself is initially created manually by users and is therefore vulnerable to human error.
Users submit values in the wrong field, or they mistype or fat-finger values inadvertently. 100,000 miles is a sensible number for an odometer reading of an 8-year-old vehicle. But intuition tells us $100,000 is an unusual price for most compact cars. And while $42,000 is reasonable for one listing, say, a 2013 Cadillac ATS Luxury Edition, it may be unexpectedly high for another (e.g. a 1997 Buick LeSabre).
Being able to detect these scenarios lets us gracefully correct unwanted errors and deliver a superior product to the end user.
Detecting outliers at Vast
We recently needed to develop a better way to detect erroneous listings in order to resolve them before they reach users. The remainder of this post will outline the problem and the solution we devised using Python, Scikit-Learn, and ŷhat.
Overview of the approach
I'll fit a linear regression model to predict the price for a given car listing. I'll then deploy a classifier to ŷhat to flag suspicious listings based on the estimated price output by the linear regressor. I'll use websockets to stream new listings to ŷhat to identify suspected listing errors on the web in real time.
The code in this blog post was tested against Continuum Analytics' Anaconda 1.8.0 Python distribution. Anaconda 1.8.0 includes Python 2.7.5 along with pandas 0.12.0 which provides helper functions I use to read data tables, scikit-learn 0.14.1 which I use for feature extraction and model building, and Requests 1.2.3 which I use to communicate with ŷhat's REST endpoint. I used pip to install yhat 0.3.1, which is used to deploy my model into ŷhat and websocket-client 0.12.0 which I used to communicate with ŷhat's websocket interface.
In [40]:
%pylab inline
Populating the interactive namespace from numpy and matplotlib
In [41]:
import json
from operator import itemgetter
import pandas as pd
import requests
import websocket
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LinearRegression
from yhat import BaseModel, Yhat
In [42]:
import warnings
warnings.filterwarnings('ignore')
pd.options.display.width = 900
The data set
The training set, accord_sedan_training.csv, contains abbreviated listings for 417 Honda Accord sedans.
All are from the 2006 model year, and all are assumed to have clean titles and to be in good condition. The 2006 Accord came primarily in two trim levels: "LX" and "EX". Leather interior was an option on the "EX"; in this dataset, an "EX" with leather is known as "EXL". Each trim level had 4-cylinder and 6-cylinder engines available. All combinations of engine and trim were available with an automatic transmission, and a manual transmission was offered in some combinations.
I'll use the read_csv function from pandas to parse the training data. That creates the DataFrame training containing integer values for price, mileage and year, and string values for trim, engine, and transmission.
In [43]:
training = pd.read_csv('data/accord_sedan_training.csv')
training.shape
Out[43]:
(417, 6)
In [44]:
training.head(7)
Out[44]:
price mileage year trim engine transmission
0 14995 67697 2006 ex 4 Cyl Manual
1 11988 73738 2006 ex 4 Cyl Manual
2 11999 80313 2006 lx 4 Cyl Automatic
3 12995 86096 2006 lx 4 Cyl Automatic
4 11333 79607 2006 lx 4 Cyl Automatic
5 10067 96966 2006 lx 4 Cyl Automatic
6 8999 126150 2006 lx 4 Cyl Automatic
In [45]:
training_no_price = training.drop(['price'], 1)
training_no_price.head()
Out[45]:
mileage year trim engine transmission
0 67697 2006 ex 4 Cyl Manual
1 73738 2006 ex 4 Cyl Manual
2 80313 2006 lx 4 Cyl Automatic
3 86096 2006 lx 4 Cyl Automatic
4 79607 2006 lx 4 Cyl Automatic
Extracting features and building the model
Next, I'll use DictVectorizer from sklearn.feature_extraction to map each row of training into a numpy array. DictVectorizer applies the "OneHot" encoding to each string value, creating a 10-dimensional vector space corresponding to the following features:
• engine=4 Cyl
• engine=6 Cyl
• mileage
• price
• transmission=Automatic
• transmission=Manual
• trim=ex
• trim=exl
• trim=lx
• year
Note: price is part of the feature space. This is a bit of hackery. I want price to be available to the outlier detection model, but I don't want it to influence the price prediction model. I'll allow DictVectorizer to see price, but I'll zero it out before passing it through to the LinearRegression model.
In [46]:
dv = DictVectorizer()
dv.fit(training.T.to_dict().values())
Out[46]:
DictVectorizer(dtype=<type 'numpy.float64'>, separator='=', sparse=True)
In [47]:
len(dv.feature_names_)
Out[47]:
10
In [48]:
dv.feature_names_
Out[48]:
['engine=4 Cyl',
'engine=6 Cyl',
'mileage',
'price',
'transmission=Automatic',
'transmission=Manual',
'trim=ex',
'trim=exl',
'trim=lx',
'year']
Now I'll use the LinearRegression class from sklearn.linear_model to fit a linear model predicting price from the features coming out of the DictVectorizer.
In [49]:
LR = LinearRegression().fit(dv.transform(training_no_price.T.to_dict().values()), training.price)
In [50]:
' + '.join([format(LR.intercept_, '0.2f')] + map(lambda (f,c): "(%0.2f %s)" % (c, f), zip(dv.feature_names_, LR.coef_)))
Out[50]:
'12084.24 + (-337.20 engine=4 Cyl) + (337.20 engine=6 Cyl) + (-0.05 mileage) + (0.00 price) + (420.68 transmission=Automatic) + (-420.67 transmission=Manual) + (208.93 trim=ex) + (674.60 trim=exl) + (-883.53 trim=lx) + (2.23 year)'
The resulting model is
\begin{eqnarray*}PRICE \approx 12084.24 & - & 337.20(engine=4 Cyl) + 337.20(engine=6 Cyl) \\ & - & 0.05(mileage) + 420.68(transmission=Automatic) \\ & - & 420.67(transmission=Manual) \\ & + & 208.93(trim=ex) + 674.60(trim=exl) \\ & - & 883.53(trim=lx) + 2.23(year)\end{eqnarray*}
As was previously mentioned, price has not leaked into the linear regression model.
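For the skeptical reader, here's a quick, illustrative sanity check (this snippet isn't part of the original workflow; the example dict and values are just for demonstration): the fitted coefficient on the price column is ~0, and vectorizing a dict that lacks a price key leaves that column at zero, so price never influences the prediction.
# Illustrative check only: price cannot leak into the prediction.
price_idx = dv.feature_names_.index('price')
print LR.coef_[price_idx]                                          # ~0.0
print dv.transform([{'mileage': 50000.0}]).toarray()[0][price_idx] # 0.0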
Now we can measure the prediction accuracy on the training set, and choose an error threshold for identifying possible outliers in new data.
In [51]:
trainingErrs = abs(LR.predict(dv.transform(training.T.to_dict().values())) - training.price)
In [52]:
percentile(trainingErrs, [75, 90, 95, 99])
Out[52]:
[1391.7170820764786,
2200.1942672614978,
2626.9376376401688,
3857.4605411615066]
In [53]:
outlierIdx = trainingErrs >= percentile(trainingErrs, 95)
scatter(training.mileage, training.price, c=(0,0,1), marker='s')
scatter(training.mileage[outlierIdx], training.price[outlierIdx], c=(1,0,0), marker='s')
Out[53]:
<matplotlib.collections.PathCollection at 0x109570b90>
I've held out 100 listings to use as a test set. These are in the file accord_sedan_testing.csv, in the same format as the training data. We can visualize both sets to see that the testing data generally follows the same price/mileage trend, but there is one significant outlier that the model does a poor job of predicting.
In [54]:
testing = pd.read_csv('data/accord_sedan_testing.csv')
testing.shape
Out[54]:
(100, 6)
In [55]:
scatter(training.mileage, training.price, c=(0,0,1), marker='s')
scatter(testing.mileage, testing.price, c=(1,1,0), marker='v')
Out[55]:
<matplotlib.collections.PathCollection at 0x1095a6cd0>
In [56]:
errs = abs(LR.predict(dv.transform(testing.T.to_dict().values())) - testing.price)
In [57]:
hist(errs, bins=50)
Out[57]:
(array([ 20., 17., 9., 10., 14., 3., 4., 6., 4., 5., 1.,
2., 2., 0., 2., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 1.]),
array([ 18.03662975, 254.07463514, 490.11264053, 726.15064592,
962.18865131, 1198.2266567 , 1434.26466209, 1670.30266748,
1906.34067287, 2142.37867826, 2378.41668365, 2614.45468904,
2850.49269443, 3086.53069982, 3322.56870521, 3558.6067106 ,
3794.64471599, 4030.68272138, 4266.72072677, 4502.75873216,
4738.79673755, 4974.83474294, 5210.87274833, 5446.91075372,
5682.94875911, 5918.9867645 , 6155.02476989, 6391.06277528,
6627.10078067, 6863.13878606, 7099.17679145, 7335.21479685,
7571.25280224, 7807.29080763, 8043.32881302, 8279.36681841,
8515.4048238 , 8751.44282919, 8987.48083458, 9223.51883997,
9459.55684536, 9695.59485075, 9931.63285614, 10167.67086153,
10403.70886692, 10639.74687231, 10875.7848777 , 11111.82288309,
11347.86088848, 11583.89889387, 11819.93689926]),
<a list of 50 Patch objects>)
In [58]:
percentile(abs(errs), [90, 95, 100])
Out[58]:
[2263.0162371350216, 2840.2272005583504, 11819.936899259745]
Deploying the model
Now its time to build up and deploy a ŷhat model. PricingModel is a subclass of BaseModel from ŷhat.
The PricingModel class has a self.transform method which maps a raw json request to a numpy array expected by our linear model. Then, self.predict evaluates the model on that observation (i.e. on that array).
Here predict returns an object where ["suspectedOutlier"] is 1 when the prediction error is too great, and ["x"], ["predictedPrice"], and ["threshold"] provide diagnostic information.
In [59]:
class PricingModel(BaseModel):
    def transform(self, doc):
        """
        Maps input dict (from json post) into numpy array;
        delegates to DictVectorizer self.dv
        """
        return self.dv.transform(doc)

    def predict(self, x):
        """
        Evaluate model on array;
        delegates to LinearRegression self.lr.
        Returns a dict (will be json encoded) supplying
        "predictedPrice", "suspectedOutlier", "x", "threshold",
        where "x" is the input vector and "threshold" is the cutoff used
        to decide whether or not a listing is a suspected outlier.
        """
        doc = self.dv.inverse_transform(x)[0]
        predicted = self.lr.predict(x)[0]
        err = abs(predicted - doc['price'])
        return {'predictedPrice': predicted,
                'x': doc,
                'suspectedOutlier': 1 if (err > self.threshold) else 0,
                'threshold': self.threshold}
In [60]:
pm = PricingModel(dv=dv, lr=LR, threshold=percentile(trainingErrs, 95))
In [61]:
pm.predict(pm.transform(testing.T.to_dict()[0]))
Out[61]:
{'predictedPrice': 13289.967037908384,
'suspectedOutlier': 0,
'threshold': 2626.9376376401688,
'x': {'engine=4 Cyl': 1.0,
'mileage': 68265.0,
'price': 12995.0,
'transmission=Automatic': 1.0,
'trim=ex': 1.0,
'year': 2006.0}}
Let's write a helper function to handle model deployment to Yhat.
In [62]:
def deploy_model(model_name, fitted_model):
    protocol = 'http://'
    apikey = secrets['apikey']
    deployment_url = protocol + secrets['yhat_url'] + '/'
    print deployment_url
    # 'yh' is the Yhat client, assumed to have been created earlier,
    # e.g. yh = Yhat(secrets['username'], apikey)
    result = yh.deploy(model_name, fitted_model)
    return result
And now we can deploy our model using our helper function we just wrote.
In [63]:
success = deploy_model('levyPricePredictor', pm)
success
http://cloud.yhathq.com/
Out[63]:
{u'status': u'success'}
Predicting new data in production
The model has been deployed to ŷhat, so now we can feed it new data. Yhat exposes several interfaces to our PricingModel.
Accessing models via REST interface
I'm going to set up a few utility functions to handle authentication. This will make it easier for us to access our model via REST and Websockets.
I've stored my Yhat credentials (i.e. my username and apikey) in a json file called yhat_secrets.json. This first function just reads that file and returns a Python dictionary.
In [64]:
def read_secrets_from_file(path='yhat_secrets.json'):
    with open(path) as f:
        return json.load(f)
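For reference, the loader assumes yhat_secrets.json contains the three keys used throughout this post; the values below are placeholders, not real credentials.
# Assumed shape of yhat_secrets.json (placeholder values):
# {
#     "username": "josh",
#     "apikey": "YOUR_YHAT_APIKEY",
#     "yhat_url": "cloud.yhathq.com"
# }
secrets = read_secrets_from_file()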
This next one simply returns my credentials as a base64 encoded string which is required for authenticating RESTful calls to our model.
In [65]:
import base64

def yhat_base64str():
    auth = '%s:%s' % (secrets['username'], secrets['apikey'])
    base64string = base64.encodestring(auth).replace('\n', '')
    return "Basic %s" % base64string
And since we're going to make requests over http as well as over an open websocket connection, let's make a helper to create the proper URL structure.
In [66]:
def model_url_for(model_name, protocol='http'):
    fmt = '{0}://cloud.yhathq.com/{1}/models/{2}/'
    url = fmt.format(protocol, secrets['username'], model_name)
    return url
In [67]:
url = model_url_for('levyPricePredictor', protocol='http')
url
Out[67]:
'http://cloud.yhathq.com/josh/models/levyPricePredictor/'
We can use our yhat_base64str helper function to compose proper headers for our RESTful API call to our model.
In [68]:
headers = {
    'Content-type': 'application/json',
    'Accept': 'application/json',
    'Authorization': yhat_base64str()
}
In [69]:
payload = testing.T.to_dict()[0]
Here's what's going into our request to our model on Yhat.
In [70]:
print 'headers'
print '*' * 100
print json.dumps(headers, indent=2)
print '*' * 100
print 'payload'
print json.dumps(payload, indent=2)
*******************************************************************************
{
"Content-type": "application/json",
"Authorization": "Basic *************************************=",
"Accept": "application/json"
}
********************************************************************************
{
"trim": "ex",
"engine": "4 Cyl",
"mileage": 68265,
"transmission": "Automatic",
"price": 12995,
"year": 2006
}
And here's what a prediction response message looks like coming back from Yhat.
In [71]:
r = requests.post(url, data=json.dumps(payload), headers=headers)
print json.dumps(r.json(), indent=2)
{
"suspectedOutlier": 0,
"x": {
"mileage": 68265,
"price": 12995,
"transmission=Automatic": 1,
"trim=ex": 1,
"year": 2006,
"engine=4 Cyl": 1
},
"yhat_id": "e9b0eb57-619e-40f9-871b-9de88de84144",
"predictedPrice": 13289.96704,
"threshold": 2626.93764
}
To a certain extent, REST is sort of the lingua franca of the web and definitely a key interface for accessing models in production software applications. It's great that we have that in our toolbox, but it's not the only way to access our models.
Accessing models via Websocket interface
Yhat also exposes our PricingModel via a streaming websocket interface. This is far more suitable for some types of applications--particularly those where latency is a concern or where you anticipate high throughput or prediction volume (e.g. pricing in app purchases in a mobile game or virtually anything in the ad tech space).
Let's see how we'd access our model via Yhat's streaming API interface. We'll run the entire testing / holdout set, and generate a report from the suspected outliers.
In [72]:
url = model_url_for('levyPricePredictor', protocol='ws')
url
Out[72]:
'ws://cloud.yhathq.com/josh/models/levyPricePredictor/'
Websockets allow us to do the handshaking to establish a communication channel which remains open. This enables us to send as many messages as we like through that channel without opening and closing the connection for each request.
In a situation where we want to score many listings as soon as they become available, this ends up being more efficient than REST which would require that we "shake hands" each time we send a new message. There's definitely a tradeoff between the high overhead of sending each listing in its own message and introducing latency by collecting listings into a batch that would then get sent all together in a single REST call.
For demo purposes, I'm using synchronous calls to communicate over the websocket. I follow each call to ws.send with a call to ws.recv. Then, I sort suspected outliers from a finite set.
In a real system, the communication would likely be asynchronous, however. The websocket-client Python package includes an event-driven API that mirrors the Javascript API for websockets. As new listings flow through a pipeline, we can send them through the websocket to ŷhat for scoring in a streaming fashion. A message handler would look at the responses and handle the suspected outliers appropriately, for example the listing could be flagged to prevent its display until it can be confirmed and a ticket could be filed to trigger an investigation.
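As a rough sketch of what that event-driven flavor could look like (this is not from the original system; handle_outlier and the run_forever loop are placeholders for whatever your pipeline does with flagged listings, and the auth handshake mirrors open_secure_socket below):
def handle_outlier(record):
    # placeholder: flag the listing, file a ticket, etc.
    print "suspected outlier:", record.get('yhat_id')

def on_open(ws):
    # same auth handshake used by the synchronous code
    ws.send(json.dumps({"apikey": secrets['apikey']}))

def on_message(ws, message):
    record = json.loads(message)
    if record.get('suspectedOutlier') == 1:
        handle_outlier(record)

wsapp = websocket.WebSocketApp(url, on_message=on_message)
wsapp.on_open = on_open
wsapp.run_forever()  # new listings would be sent from elsewhere in the pipeline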
Let's write a helper function which opens a secure websocket connection. This makes it easier to perform the one-time handshake between us and our model on Yhat.
In [73]:
def open_secure_socket():
    ws = websocket.create_connection(url)
    auth = {
        "apikey": secrets['apikey']
    }
    ws.send(json.dumps(auth))
    return ws
And another little helper which streams data to Yhat to make predictions over websockets.
In [74]:
def findOutliers():
    ws = open_secure_socket()
    for _, item in testing.T.iteritems():
        ws.send(json.dumps(item.to_dict()))
        res = json.loads(ws.recv())
        yield res
    ws.close()
In [75]:
possible_outliers = []
n_records = 0
for record in findOutliers():
    possible_outlier = record['suspectedOutlier'] == 1
    if possible_outlier:
        possible_outliers.append(record)
    n_records += 1
print 'n_records: %d' % n_records
print "n_possible_outliers: %d" % len(possible_outliers)
print "n_possible_outliers / n_records: %2f" % (len(possible_outliers) / float(n_records))
n_records: 100
n_possible_outliers: 7
n_possible_outliers / n_records: 0.070000
This model has identified 7 suspected outliers. Let's look at one.
In [76]:
possible_outliers[0]
Out[76]:
{u'predictedPrice': 10461.03503,
u'suspectedOutlier': 1,
u'threshold': 2626.93764,
u'x': {u'engine=4 Cyl': 1,
u'mileage': 122458,
u'price': 7499,
u'transmission=Automatic': 1,
u'trim=ex': 1,
u'year': 2006},
u'yhat_id': u'db702640-b60a-4df7-bfed-817357d166f3'}
These are possible outliers, so we don't know for sure if these actually stem from invalid, unwanted, or otherwise "bad" data. But we do know that they at least look a bit fishy. So which among these looks the "most fishy" or the most severe?
We can make a helper function to compute the absolute delta between what the regressor estimated the price to be and the actual price as it's listed on Vast (i.e. |predicted_y - actual_y|).
In [77]:
def calc_delta(record):
    """
    Compute absolute difference in the observed and estimated value from our regression model.

    Args:
        record: Yhat response as a Python dictionary.

    Returns:
        error: float

    example:

        In [1]:
        record = {
            u'predictedPrice': 14633.81626,
            u'suspectedOutlier': 1,
            u'threshold': 2626.93764,
            u'x': {u'engine=4 Cyl': 1,
                   u'mileage': 51442,
                   u'price': 11800,
                   u'transmission=Automatic': 1,
                   u'trim=exl': 1,
                   u'year': 2006},
            u'yhat_id': u'dfd222d7-7a59-4089-9b78-0da5cc20b336'
        }
        calc_delta(record)

        Out[2]: 2833.8162599999996
    """
    predicted_y = record['predictedPrice']
    actual_y = record['x']['price']
    return abs(predicted_y - actual_y)
This can be used to sort the results with
In [78]:
possible_outliers = sorted(possible_outliers, key=calc_delta, reverse=1)
most_severe = possible_outliers[0]
most_severe
Out[78]:
{u'predictedPrice': 14431.9369,
u'suspectedOutlier': 1,
u'threshold': 2626.93764,
u'x': {u'engine=6 Cyl': 1,
u'mileage': 59308,
u'price': 2612,
u'transmission=Automatic': 1,
u'trim=ex': 1,
u'year': 2006},
u'yhat_id': u'96abe028-45e4-40be-a823-803b801decc4'}
The most severe is a relatively high-end (6-cylinder, EX trim), low-mileage (~60,000 miles) vehicle listed for almost $12,000 under what our linear model predicted it should be.
Based on experience, I'd bet this one was either (A) a consequence of mistyped data on the site; or (B) that the vehicle has some other undesirable property like body damage or car title issues that we didn't include in our model. |
# keshan/SinhalaBERTo
### Overview
This is a slightly smaller model trained on the OSCAR Sinhala dedup dataset. As Sinhala is a low-resource language, only a handful of models have been trained for it. So, this would be a great place to start training for more downstream tasks.
## Model Specification
The model chosen for training is Roberta with the following specifications:
1. vocab_size=52000
2. max_position_embeddings=514
4. num_hidden_layers=6
5. type_vocab_size=1
## How to Use
You can use this model directly with a pipeline for masked language modeling:
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

model = AutoModelWithLMHead.from_pretrained("keshan/SinhalaBERTo")
tokenizer = AutoTokenizer.from_pretrained("keshan/SinhalaBERTo")
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)

Mask token: <mask>
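A usage sketch (the input string below is only a placeholder; substitute any Sinhala sentence containing a single <mask> token):
predictions = fill_mask("<your Sinhala sentence with one <mask> token>")
for p in predictions:
    print(p["sequence"], p["score"])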
# Tag Info
2
If you denote by $\|\cdot \|_\infty$ the essential sup norm, you have: $$\| u - \tilde{u} \|_{\infty} \leq \| u - u_k\|_{\infty}+ \| u_k - \tilde{u} \|_{\infty}$$ Now let $\epsilon > 0$. As $u_k \to \tilde{u}$ uniformly, there exists $k_1$ such that $\forall k > k_1, \ \| \tilde{u} - u_k\|_{\infty} < \frac{\epsilon}{2}$. As $u_k \to u$ in $W^{1,2}_0$, it ...
1
$$\frac {\partial K(x,y)}{\partial x}y - \frac {\partial K(x,y)}{\partial y}x =0$$ $$\frac {\partial K(x,y)}{x\partial x} - \frac {\partial K(x,y)}{y\partial y} =0$$ Let $X=x^2$ ; $Y=y^2$ ; $K(x,y)=H(X,Y)$ $$\frac {\partial H(X,Y)}{\partial X} - \frac {\partial H(X,Y)}{\partial Y} =0$$ The general solution of this PDE is well-known : $$H(X,Y)=H(X+Y)$$ any ...
1
The matrix has incomplete rank, so the system is solvable only if $$\operatorname{rank} M = \operatorname{rank} \begin{pmatrix}M \;\big|\; f\end{pmatrix}.$$ Due to truncation error, the latter may not hold (but it would hold if you're using a conservative approximation, I suppose). You can use a QR decomposition to deal with that. Suppose $M = QR$ where $Q$ is ...
1
A parabolic operator with constant coefficients is a linear transformation away from the heat operator, so it is a natural guess that the fundamental solutions should be similar. I will use this idea to find the fundamental solution. (If you just want to see the solution, see the last line.) Take two positive definite symmetric $n\times n$ matrices $A$ and ...
1
Take a look at Couple stress theory for solids
1
I would convert that into a system of two equations. Let $v = U_t$ and $w = -U_x$. Then you have the following pair of equations $$v_t + (1 + \epsilon w^2) w_x = 0\\ w_t + v_x = 0$$ These equations can be rewritten in conservative form as $$\frac{\partial \mathbf Z}{\partial t} + \frac{\partial \mathbf F(\mathbf Z)}{\partial x} = 0$$ with $\mathbf Z = (v, ...
1
Following the hint by Ian, let $v=u+1$. The function $v$ satisfies the PDE $\Delta v=v$, and is positive on the boundary of $\Omega$. So, if $v$ was negative somewhere, its (negative) global minimum would be attained in the interior... but the Laplacian can't be negative at an interior minimum.
1
This is a classical result. You can find the answer in this book, page 176, section 5.2.3, theorem 4.
Only top voted, non community-wiki answers of a minimum length are eligible |
# Questions in Arch
I notice myself asking questions about common tasks when I come back to Arch after a long time. I think that makes for a good article as it is possible others have these questions.
I’m running out of space and that’s stupid.
Not a question, but I’ve felt that exact sentiment.
In Arch, this is very likely the Pacman cache going crazy. Every time you upgrade a package with Pacman, it keeps the previous version. All of them. Forever.
You could theoretically roll back to the base installation of your system now if you’ve never cleaned the cache. This is great, but you just need to be aware of the functionality because it will eat up all of /.
Wiki Context
To do this you’ll have to install pacman-contrib, which means you’ll have to do an upgrade with pacman -Syyu, so make sure that you already haven’t run out of space ;)
paccache -rk1
==> Privilege escalation required
==> finished: 2036 packages removed (disk space saved: 8.1 GiB)
Holy Shit. That was a massive difference. From 20.8GB to 13.2GB. My / is back to the expected size.
How do I get rid of this package I know I will never use again?
cough emacs cough
Removing Packages
Remove a package and all of its dependencies that aren’t needed by another package.
pacman -Rs [package name]
Find a package by name and get some information on it.
pacman -Qi [package name]
How do I visualize the usage on my disk?
Wiki
Filelight is what I settled on. It produced those lovely pie charts earlier in the article. I love that you can see everything in a hierarchical order and really see which folders are the worst offenders.
Filelight — Disk usage analyzer that creates an interactive map of concentric, segmented rings that help visualise disk usage on your computer.
How do I take a screenshot?
This one is more complicated. Because I run i3, I have the option to bind keys to scripts in a very convenient way.
My ~/.config/i3/config has the following lines amongst others:
bindsym --release Shift+Print exec --no-startup-id "maim -s -c 1,1,1,0 --format png /home/drone/Pictures/Screenshots/$(date | sed 's/ /-/g').png"
bindsym --release Print exec --no-startup-id "maim -s -c 1,1,1,0 --format png /dev/stdout | xclip -selection clipboard -t image/png"
# Screenshots
Implicitly here, I’m using maim. You’ll need that installed to do anything. With maim, you’ll get cross hairs, you can draw a box to get a selection, or click to get the whole window.
Notice that for Shift+PrtSc I’m saving to a well known location and for the regular, I copy to the clipboard. Take that OSX.
How do I look at this image?
Oh yeah, you can’t just open. I like sxiv for this. It is super clean and small. The Wiki told me so. |
Monday, April 11, 2011
How much rain would a raindrop drop...
Today in church Jonathan commented on how the arrival of a wave of rain was followed by the air becoming noticeably cooler. I idly replied that it shouldn't, because the gravitational energy released by the raindrops' falling should be manifested as an increase in heat, then immediately became intrigued as to whether or not that would be the case (these things always seem to come at the most inconvenient times!).
Just for fun, I did some simple calculations, which I thought I'd share with you all. I assumed a very simple model, with a raindrop with mass = 1 gram, falling a total height of 1 kilometer (which is not a bad assumption for the height of the clouds about Hilo, I think). Anyway, given those assumptions, a raindrop falling to Earth releases \begin{align}\Delta E&=mg\Delta h\\
&\approx0.001\,\text{kg}\cdot\left(-9.8\frac{\text{m}}{\text{s}^2}\right)(-1000\,\text{m})\\
&\approx10\,\text{J}\end{align}about 10 joules of energy (a joule is about as much energy as it would take to lift a small apple one meter straight up, or the amount of energy released if that same apple fell a meter downwards). If you think that seems like a lot of energy for a raindrop, so did I. If the raindrop were to retain all this energy as kinetic energy, it would impact the ground with a speed of about 140 meters per second, or over 300 miles per hour. In practice, most of this energy is lost to friction with the air as the raindrop falls, which would indeed heat up the atmosphere as I thought.
However, there's another factor to take into consideration: absorption of heat from the atmosphere by the raindrop. It's relatively simple to calculate the amount of energy it would take to heat up a raindrop with a 1-gram mass by a degree Celsius: \begin{align} Q&=mc\Delta T\\
&\approx1\,\text{g}\cdot4.181\frac{\text{J}}{\text{g}\cdot\text{K}}\Delta T\\
&\approx4\frac{\text{J}}{\text{K}}\Delta T \end{align} Thus for every degree Celsius (or kelvin, both of which equal about 1.8 degrees Fahrenheit) the raindrop heats up it absorbs about 4 joules of energy. So if the raindrop starts off about 5 degrees Fahrenheit cooler than the ambient air temperature at ground level, it will absorb as much energy as it emitted and leave the atmosphere no different temperature-wise than at the beginning. Of course, if the raindrop is more than 5 degrees Fahrenheit cooler (as is quite possible), it will absorb more energy than it releases and cause a net cooling effect. And it wouldn't have to absorb much excess heat, either, considering the sheer number of raindrops in a typical shower.
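To make that break-even figure explicit (this is just the two results above combined): \begin{align}\Delta T_{\text{break-even}}&=\frac{\Delta E}{mc}\\ &\approx\frac{10\,\text{J}}{4.181\,\frac{\text{J}}{\text{K}}}\\ &\approx2.4\,\text{K}\approx4.3\,^\circ\text{F}\end{align} which rounds to the "about 5 degrees Fahrenheit" figure quoted above.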
Well, there's your geeky musing for today. It makes a nice opportunity to show off the beautiful output of $$\LaTeX$$, too. A hui hou! |
# Positioning a collection of objects along a curve
Is it possible to automatically position a collection of objects along a curve? For example, I have 100 cubes (all are different) and want to distribute them evenly along a Bezier Curve.
I searched for a way to do that, but only found how to use Array and Curve modifiers to repeat one object along a curve. This is not what I'm looking for since in my case all the objects already exists.
• Which Blender version, and are your objects already neatly spaced out in a row, or are they all positioned at the origin? – K. A. Buhr Nov 19 '19 at 22:23
• Blender 2.8. The objects are all positioned at the origin. – user1566515 Nov 19 '19 at 23:19
• Use a particle system with a collection blender.stackexchange.com/questions/113973/… – Duarte Farrajota Ramos Nov 19 '19 at 23:55
• You can order particles,too. But if all are variations on a cube, there may be alternatives, depending on your exact needs? – Robin Betts Nov 20 '19 at 6:43
Even though I was able to get it to work with a particle system, it was pretty fiddly, so I think I have a better solution using a little scripting one-liner.
Make a backup of your blend file, in case something goes wrong.
Then, assuming your cubes are in a single collection called "Cubes", switch to the Scripting workspace and enter the following one-liner into the Python console:
>>> for n,o in enumerate(bpy.data.collections["Cubes"].objects): o.location.x = n*3
... # <Press Enter> #
>>>
The "3" here spaces the objects 3 meters apart along the x axis. Use whatever spacing is convenient to keep the objects separated. We'll adjust the final spacing later.
Now, we're going to create a mesh grid at the origin to control the cubes. Switch back to the Layout workspace, and feel free to hide or unhide the Cubes collection as necessary.
• At the origin, add a Grid mesh with a number of X subdivisions equal to the number of objects (say X=100) and Y=3 subdivisions.
• In edit mode, select the whole grid, and "g x 1 " so the left hand edge starts at the origin.
• In object mode, scale it up so the X subdivisions line up with the individual cubes in the collection. The easiest way is: "s x 1.5 " to scale it from 2 to 3 meters (to match the spacing in the Python one-liner, and then "s x 99 " where 99 is one less than the number of cubes. Double check that all the X subdivisions are in the centers of the cubes. Also, scale it down in the Y direction so the edges of the grid lie within the cubes.
• Now, VERY IMPORTANT -- Ctrl-A Apply the Scale on the grid.
• Back in edit mode, Alt-select the middle Y subdivision edge loop and move it along the X axis slightly.
The idea is to have a triangle of points contained within each cube. It should look like this:
Now, back in object mode, select all the cubes, Shift-select the grid, and Ctrl-P Parent to Vertex Triangle. Each cube will be parented to the triangle of three points it's closest to which will control its position and orientation. Now, you can adjust the spacing by selecting the whole grid in edit mode and scaling it from the world origin pivot. (Note, scale in all dimensions, not just along the X axis, or the cubes will tend to twist.)
Finally, you can tie the grid to the curve in the usual manner. The most reliable method seems to be:
• Select your curve, and Shift-S Cursor to Selected
• Select your grid, and Shift-S Selection to Cursor
• Select grid, then shift-select curve, and Ctrl-P Parent -> Follow Path
Now, the curve can be moved, rotated, and scaled in object mode, and the grid w/ cubes will follow along (in particular, the cubes will scale with the curve, so don't do per-axis scaling in object mode). In edit mode, the curve can be scaled without changing the size of the cubes (they'll slide along the curve to maintain spacing). To readjust the cube spacing (also without adjusting cube size), size the grid in edit mode. You may find it helpful to turn on the Edit Mode Display for the Curve modifier in the modifier tab.
Finally, if the cubes start to twist, note that you can edit the grid and pull just one of the three Y subdivisions forward or back to untwist them. Don't forget to turn off rendering of the grid or give it a transparent material or something.
Here's a 2.80 blend file that shows an example with 100 colored cubes.
• Thank you for such a thorough explanation! I really enjoyed following all the steps. It works. I wonder if this could be done in a more efficient way though. What if the number of cubes has to be changed or animated? Do we have to modify the grid mesh? – user1566515 Nov 20 '19 at 18:39
I think one way is to create a particle system from your curve object, then group all instance objects (your 100 cubes) into a new collection, and finally call the collection from the render options panel of your particle system as the instances (a collection, not a single object). Then play with the settings. If you need an advanced tool for this, try the Animation Nodes addon; there are some video tutorials on instanced objects around.
• Actually, I already use Animation Nodes, so it would be the best solution. Do you have a particular video tutorial in mind? – user1566515 Nov 19 '19 at 23:18
• It's strange, if you use AN you should know how to instance objects; anyway, on the start page of the manual there are the names of the best tutorial creators. Jimmy Gunawan (@blendersushi) also has good tutorials for the Sverchok addon. Link: animation-nodes-manual.readthedocs.io/en/latest – mike Nov 20 '19 at 2:34
# File with particle decay data [closed]
Does anyone know of a computer-parsable file that has basic physics and decay data for (quite many) observable particles, such as mesons, baryons, heavy leptons and $W, Z, H$?
To clarify the data I am looking for is something like:
name, charge, mass, half-life, branching ratios and decay products
• I'm voting to close this question as off-topic because it looks like it's about a software recommendation. – user191954 Sep 8 '18 at 3:12
• – dmckee --- ex-moderator kitten Sep 8 '18 at 16:36 |
Hierarchical Cellular Structures in High-Capacity Cellular Communication Systems
@article{Jain2011HierarchicalCS,
title={Hierarchical Cellular Structures in High-Capacity Cellular Communication Systems},
author={R. K. Jain and Sumit Katiyar and Nitesh Kumar Agrawal},
journal={ArXiv},
year={2011},
volume={abs/1110.2627}
}
• Published 12 October 2011
• ArXiv
In the prevailing cellular environment, it is important to provide the resources for the fluctuating traffic demand exactly in the place and at the time where and when they are needed. In this paper, we explored the ability of hierarchical cellular structures with inter layer reuse to increase the capacity of mobile communication network by applying total frequency hopping (T-FH) and adaptive frequency allocation (AFA) as a strategy to reuse the macro and micro cell resources without frequency…
27 Citations
Allocation of Guard Channels for QoS in Hierarchical Cellular Network
• Computer Science
• 2012
A dynamic guard channel assignment technique based on the two lower layers of a hierarchical cellular architecture, which evaluates the QoS of low-speed (LSMT) and high-speed (HSMT) moving terminals in an indoor area, shows that by using the optimum number of channels and dynamically adjusting the number of guard channels in each layer, the QoS of LSMT and HSMT can be evaluated.
Hybrid Spectral Efficient Cellular Network Deployment to Reduce RF Pollution
• Computer Science
• 2012
This paper deals with hybrid cellular networks with the help of multi-layer overlaid hierarchical structure (macro / micro / pico / femto cells) that will optimize all available resources in existing cellular network through application of remote technologies.
NC-CELL: Network coding-based content distribution in cellular networks for cloud applications
• Computer Science
2014 IEEE Global Communications Conference
• 2014
This paper proposes a technique, called NC-CELL, which uses network coding to foster content distribution in mobile cellular networks and implements a software module at mobile base stations which scans in transit traffic and looks for opportunities to code packets destined to different mobile users together.
Green Cellular Network Deployment To Reduce RF Pollution
• Computer Science
ArXiv
• 2012
This paper deals with green cellular networks with the help of multi-layer overlaid hierarchical structure (macro / micro / pico / femto cells) that could be the answer of the problem of energy conservation and enhancement of spectral density.
Network coding-based content distribution in cellular access networks
• Computer Science
2016 IEEE International Conference on Communications (ICC)
• 2016
This paper proposes the vNC-CELL technique, which uses network coding to combine information flows carrying the same or overlapping content that has to be delivered to co-located users, and shows ability to improve network throughput and reduce download times for the users.
Performance Measures of Hierarchical Cellular Networks on Queuing Handoff Calls
• Computer Science
• 2012
This paper proposes M/M/C Markov model for two low layers of HCN having a FIFO queue in the femtocell layer and picocell layer, thereby comparing with a queue and without a queue.
Transition to green cellular network
• S. Faruque
2015 IEEE International Conference on Electro/Information Technology (EIT)
• 2015
This paper shows that in every propagation environment there is a free-space propagation medium, due to the existence of Fresnel zones, which gives rise to the Fresnel-zone break point d_0, and develops a hierarchical cellular structure for the next generation green cellular network comprising Macrocell, Microcell, Picocell and Femtocell.
Flexible Design for $\alpha$ -Duplex Communications in Multi-Tier Cellular Networks
• Computer Science
IEEE Transactions on Communications
• 2016
Flexible and tractable modeling framework for multi-tier cellular networks with FD BSs and FD/HD UEs is presented and a closed-form expression is found for the critical value of the self-interference attenuation power required for the FD UEs to outperform HD UEs.
Adaptive Integrated Unit to User's Equipment for the Spectral and Energy Efficiency in Cognitive Networks
• Computer Science
Int. J. Interdiscip. Telecommun. Netw.
• 2018
The work proposes an adaptive integrated unit to user's equipment for the spectral and energy efficiency, under severe channel fading condition like in 5Generation cellular network, and shows an enhanced spectral efficiency and the energy consumption is also considerably reduced when the coverage area seems to be idle.
Mobility Management at Link Layer
• Business, Computer Science
• 2016
The typical wireless systems, including cellular mobile communication systems, Wireless Local Area Networks (WLANs ), and satellite communication systems are briefly introduced first. Then, aspects
References
SHOWING 1-10 OF 32 REFERENCES
A study on hierarchical cellular structures with inter-layer reuse in an enhanced GSM radio network
1999 IEEE International Workshop on Mobile Multimedia Communications (MoMuC'99) (Cat. No.99EX384)
• 1999
In today's cellular networks it becomes harder to provide the resources for the increasing and fluctuating traffic demand exactly in the place and at the time where and when they are needed.
High capacity with limited spectrum in cellular systems
IEEE Commun. Mag.
• 1997
Methods such as power control, efficient frequency allocation, and traffic control between the layers are employed to exploit the full potential of the network.
Strategies for handover and dynamic channel allocation in micro-cellular mobile radio systems
• Business, Computer Science
IEEE 39th Vehicular Technology Conference
• 1989
It is shown that, based on the capacity improvement achieved by microcells, no relevant further improvement of the capacity can be realized, but that the proposed algorithm is very well suited to cope with the problems of microcells.
A design of macro-micro CDMA cellular overlays in the existing big urban areas
IEEE J. Sel. Areas Commun.
• 2001
The numerical results by extensive event-driven simulations show that the resulting macro-micro cellular overlays successfully cope with the existing conditions of today's big urban areas, such as spatial and temporal traffic distributions and user mobility characteristics.
A tractable framework for coverage and outage in heterogeneous cellular networks
• Computer Science
2011 Information Theory and Applications Workshop
• 2011
We develop a tractable, flexible, and accurate model for downlink heterogeneous cellular networks. It consists of K tiers of randomly-located base stations (BSs), where each tier may differ in terms
Adaptive radio resource management based on cell load in CDMA-based hierarchical cell structure
Proceedings IEEE 56th Vehicular Technology Conference
• 2002
An adaptive radio resource management in a CDMA based hierarchical cell structure is proposed and shows that the proposed scheme improves call blocking, call dropping and utilization of radio resource compared with conventional schemes.
Joint deployment of macrocells and microcells over urban areas with spatially non-uniform traffic distributions
• Computer Science
Vehicular Technology Conference Fall 2000. IEEE VTS Fall VTC2000. 52nd Vehicular Technology Conference (Cat. No.00CH37152)
• 2000
An algorithmic approach to jointly deploy macrocells and microcells over urban areas using a novel characterization urban spatial traffic distribution and evaluating the performance of resulting cell deployments in terms of the number of deployed cells, resource utilization, call blocking and dropping probabilities, etc.
Femtocell Networks |
My Math Forum Simplifying in diff eq
January 24th, 2015, 06:28 AM #1 Newbie Joined: Jan 2015 From: Minnesota Posts: 2 Thanks: 0 Simplifying in diff eq So issue I am having involves the diff eq x * dy/dx = y + √(x² - y²) In my book it says to first divide both sides by x to get dy/dx = (y/x) + √(1 - (y/x)²) For the life of me, I do not understand how the first equation simplifies to the second one. Am I missing some fundamental fact about powers? Any help would be greatly appreciated. Last edited by skipjack; January 24th, 2015 at 07:48 AM.
January 24th, 2015, 06:46 AM #2 Global Moderator Joined: Oct 2008 From: London, Ontario, Canada - The Forest City Posts: 7,950 Thanks: 1141 Math Focus: Elementary mathematics and beyond $\displaystyle \frac{\sqrt{x^2-y^2}}{|x|}=\frac{\sqrt{x^2-y^2}}{ \sqrt{x^2}}=\sqrt{\frac{x^2-y^2}{x^2}}=\sqrt{1- \left(\frac{y}{x}\right)^2}$ Thanks from McPhi
January 24th, 2015, 07:44 AM #3 Global Moderator Joined: Dec 2006 Posts: 20,809 Thanks: 2150 Does the book suggest one then makes a substitution? The substitution y = xu could have been done at the outset. It's best to consider x < 0 and x > 0 separately. What happens as x approaches zero is "interesting".
January 24th, 2015, 08:21 AM #4 Newbie Joined: Jan 2015 From: Minnesota Posts: 2 Thanks: 0 Sweet Ah. Thank you. I knew I just wasn't seeing it the right way. Yes that is the sub the book makes.
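For completeness, a sketch of how the suggested substitution $y = xu$ plays out for $x > 0$ (assuming $|y| \le x$ so the square root is real):

$$\frac{dy}{dx} = u + x\frac{du}{dx} = u + \sqrt{1-u^2} \;\Rightarrow\; \frac{du}{\sqrt{1-u^2}} = \frac{dx}{x} \;\Rightarrow\; \arcsin(u) = \ln x + C \;\Rightarrow\; y = x\sin(\ln x + C).$$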
# 3 Recurrent processing during object recognition: it depends on the need for scene segmentation
Abstract While feed-forward activity may suffice for recognizing objects in sparse scenes, additional visual operations that aid object recognition might be needed for more complex scenes. One such additional operation is figure-ground segmentation; extracting the relevant features and locations of the target object while ignoring irrelevant features. In this study of 60 participants, we show objects on backgrounds of increasing complexity to investigate whether recurrent computations are increasingly important for segmenting objects from more complex backgrounds. Three lines of evidence show that recurrent processing is critical for recognition of objects embedded in complex scenes. First, behavioral results indicated a greater reduction in performance after masking objects presented on more complex backgrounds; with the degree of impairment increasing with increasing background complexity. Second, electroencephalography (EEG) measurements showed clear differences in the evoked response potentials (ERPs) between conditions around 200ms - a time point beyond feed-forward activity and object decoding based on the EEG signal indicated later decoding onsets for objects embedded in more complex backgrounds. Third, Deep Convolutional Neural Network performance confirmed this interpretation; feed-forward and less deep networks showed a higher degree of impairment in recognition for objects in complex backgrounds compared to recurrent and deeper networks. Together, these results support the notion that recurrent computations drive figure-ground segmentation of objects in complex scenes.
This chapter is under review as: Seijdel, N.*, Loke, J.*, van de Klundert, R., van der Meer, M., Quispel, E., van Gaal, S., de Haan, E.H.F. & Scholte, H.S. (n.d.). Recurrent processing during object recognition: it depends on the need for scene segmentation
## 3.1 Significance statement
The incredible speed of object recognition suggests that it relies purely on the fast feed-forward buildup of perceptual activity. However, this view is contradicted by studies showing that disruption of recurrent processing leads to decreased object recognition performance. Here we resolve this issue by showing that how object recognition is resolved depends on the context in which the object is presented. For objects presented in isolation or in ‘simple’ environments, feed-forward activity seems sufficient for successful object recognition. However, when the environment is more complex, recurrent processing is necessary to group the elements that belong to the object and segregate them from the background.
## 3.2 Introduction
The efficiency and speed of the human visual system during object categorization suggests that a feed-forward sweep of visual information processing is sufficient for successful recognition (VanRullen & Thorpe, 2002). For example, when presented with objects in a rapid serial visual presentation task (RSVP; Potter & Levy (1969)), or during rapid visual categorization (Thorpe et al., 1996), human subjects could still successfully recognize these objects, with EEG measurements showing robust object-selective activity within 150 ms after object presentation (VanRullen & Thorpe, 2001). Given that there is substantial evidence for the involvement of recurrent processing in figure–ground segmentation (Lamme & Roelfsema, 2000; Wokke et al., 2012), this seems inconsistent with recognition processes that rely on explicit encoding of spatial relationships between parts and suggest instead that rapid recognition may rely on the detection of an ‘unbound’ collection of image features (Crouzet & Serre, 2011).
A convergence of results indicated that recurrent computations were critical for recognition of objects in complex environments, i.e. objects that were more difficult to segment from their background. First of all, behavioral results indicated poorer recognition performance for objects with more complex backgrounds, but only when feedback activity was disrupted by masking. Second, EEG measurements showed clear differences between complexity conditions in the ERPs around 200ms - a time point beyond the first feed-forward visual sweep of activity. Additionally, object category decoding based on the multivariate EEG patterns showed later decoding onsets for objects embedded in more complex backgrounds. This indicated that object representations for more complex backgrounds emerge later, compared to objects in more simple backgrounds. Finally, DCNN performance confirmed this interpretation; feed-forward networks showed a higher degree of impairment in recognition for objects in complex backgrounds compared to recurrent networks. Together, these results support the notion that recurrent computations drive figure-ground segmentation of objects in complex scenes.
## 3.3 Materials and methods
### 3.3.1 Subjects main experiment
Forty-two participants (32 females, 18-35 years old) took part in a first EEG experiment. Data from two participants were excluded from further analysis due to technical problems. We used this first dataset to perform exploratory analyses and optimize our analysis pipeline (Figure 3.2). To confirm our results on an independent dataset, another twenty participants (13 females, 18-35 years old) were measured. Data from one participant were excluded from ERP analyses, due to wrong placement of electrodes I1 and I2.
### 3.3.2 Stimuli
Images of real-world scenes containing birds, cats, fire hydrants, frisbees or suitcases were selected from several online databases, including MS COCO (Lin et al., 2014), the SUN database (Xiao et al., 2010), Caltech-256 (Griffin et al., 2007), Open Images V4 (Kuznetsova et al., 2020) and LabelMe (Russell et al., 2008). These five categories were selected because a large selection of images was available in which the target object was clearly visible and not occluded. For each image, one CE and one SC value was computed using the model described in Ghebreab et al. (2009), Scholte et al. (2009) and Groen et al. (2013). Computing these statistics for a large set of scenes results in a two-dimensional space in which sparse scenes with just a few scene elements separate from complex scenes with a lot of clutter and a high degree of fragmentation. Together, CE and SC appear to provide information about the ‘segmentability’ of a scene (Groen et al., 2013; Groen, Jahfari, et al., 2018). High CE/SC values correspond with images that contain many edges that are distributed in an uncorrelated manner, resulting in an inherently low figure-ground segmentation. Relatively low CE/SC values on the other hand correspond with a homogenous image containing few edges, resulting in an inherently high figure-ground segmentation (Figure 3.1). Each object was segmented from their real-world scene background and superimposed on three categories of phase scrambled versions of the real-world scenes. This corresponded with low, medium and high complexity scenes. Additionally, the segmented object was also presented on a uniform gray background as the segmented condition (Figure 3.1). For each object category eight low CE/SC, eight medium CE/SC and eight high CE/SC images were selected, using the cut-off values from Groen, Jahfari, et al. (2018), resulting in 24 images for each object category and 120 images in total. Importantly, each object was presented in all conditions, allowing us to attribute the effect to the complexity (i.e. segmentability) of each trial, and rule out any object-specific effects.
### 3.3.3 Experimental design
Participants performed a 5-choice categorization task (Figure 3.1), differentiating images containing cats, birds, fire hydrants, frisbees and suitcases as accurately as possible. Participants indicated their response using five keyboard buttons corresponding to the different categories. Images were presented in a randomized sequence, for a duration of 34 ms. Stimuli were presented at eye-level, in the center of a 23-inch ASUS TFT-LCD display, with a spatial resolution of 1920*1080 pixels, at a refresh rate of 60 Hz. Participants were seated approximately 70 cm from the screen, such that stimuli subtended a 6.9° visual angle. The object recognition task was programmed in, and performed using, Presentation (Version 18.0, Neurobehavioral Systems, Inc., Berkeley, CA, www.neurobs.com). The experiment consisted of 960 trials in total, of which 480 were backward masked trials and 480 were unmasked trials, randomly divided into eight blocks of 120 trials for each participant. After each block, participants took a short break. The beginning of each trial consisted of a 500 ms fixation period where participants focused their gaze on a fixation cross at the centre of the screen. In the unmasked trials, stimuli were followed by a blank screen for 500 ms and then a response screen for 2000 ms. In order to disrupt recurrent processes (Breitmeyer & Ogmen, 2000; Fahrenfort et al., 2007; Lamme et al., 2002), in the masked trials, five randomly chosen phase-scrambled masks were presented sequentially for 500 ms (the first mask was presented immediately after stimulus presentation; each mask was presented for 100 ms; Figure 3.1). The ambient illumination in the room was kept constant across different participants.
### 3.3.4 Subjects pattern localizer
Five new participants took part in a separate experiment to characterize multivariate EEG activity patterns for the different object categories. For this experiment, we measured EEG activity while participants viewed the original experimental stimuli followed by a word (noun). Participants were asked to only press the button when the image and the noun did not match to ensure attention (responses were not analyzed). A classifier was trained on the EEG data from this experiment, and subsequently tested on the data from the main experiment using a cross-decoding approach. All participants had normal or corrected-to-normal vision, provided written informed consent and received monetary compensation or research credits for their participation. The ethics committee of the University of Amsterdam approved the experiment.
### 3.3.5 Deep Convolutional Neural Networks (DCNNS)
First, to investigate the effect of recurrent connections, we tested different architectures from the CORnet model family (Kubilius et al., 2018); CORnet-Z (feedforward), CORnet-R (recurrent) and CORnet-S (recurrent with skip connections). Then, to further evaluate the influence of network depth on scene segmentation, tests were conducted on three deep residual networks (He et al., 2016) with increasing number of layers; ResNet-10, ResNet-18 and Resnet-34. “Ultra-deep” residual networks are mathematically equivalent to a recurrent neural network unfolding over time, when the weights between their hidden layers are clamped (Liao & Poggio, 2016). This has led to the hypothesis that the additional layers function in a way that is similar to recurrent processing in the human visual system (Kar et al., 2019). Pre-trained networks were finetuned on images from the MSCoco database (Lin et al., 2014), using PyTorch (Paszke et al., 2019). After initialization of the pretrained network, the model’s weights were fine tuned for our task, generating 5 probability outputs (for our 5 object categories). To obtain statistical results, we finetuned the networks ten times for each architecture.
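As an illustration of this kind of fine-tuning setup (not the study's actual code), here is a minimal PyTorch sketch assuming a standard torchvision ResNet backbone and an ImageFolder-style dataset with the five object categories; paths, hyperparameters and the training loop details are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models, datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Start from a pretrained backbone and replace the classification head
# with a 5-way output (bird, cat, fire hydrant, frisbee, suitcase).
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 5)
model = model.to(device)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("coco_5class/train", transform=preprocess)  # assumed folder layout
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(5):  # number of epochs is an assumption
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```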
### 3.3.6 EEG data acquisition and preprocessing
EEG was recorded using a 64-channel Active Two EEG system (Biosemi Instrumentation, Amsterdam, The Netherlands, www.biosemi.com) at a 1024 Hz sample rate. As in previous studies investigating early visual processing (Groen et al., 2013; Groen, Jahfari, et al., 2018), we used caps with an extended 10–10 layout modified with 2 additional occipital electrodes (I1 and I2, which replaced F5 and F6). Eye movements were recorded with additional electro-oculograms (vEOG and hEOG). Preprocessing was done using MNE software in Python (Gramfort et al., 2014) and included the following steps for the ERP analyses: 1) After importing, data were re-referenced to the average of two external electrodes placed on the mastoids. 2) A high-pass (0.1Hz, 0.1Hz transition band) and low-pass (30Hz, 7.5 Hz transition band) basic FIR filters were sequentially applied. 3) an Independent Component Analysis (ICA; Vigario et al. (2000)) was run in order to identify and remove eye-blink and eye-movement related noise components (mean = 1.73 of first 25 components removed per participant). 4) epochs were extracted from -200 ms to 500 ms from stimulus onset. 5) trials were normalized by their 200 ms pre-stimulus baseline. 6) 5% of trials with the most extreme values within each condition were removed, keeping the number of trials within each condition equal. 7) data were transformed to current source density responses (Perrin et al., 1989).
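To make the pipeline concrete, a hedged MNE-Python sketch of steps 1-5 and 7 is given below (file name, montage, mastoid channel names and the excluded ICA components are assumptions; step 6, removal of extreme trials, is omitted for brevity):

```python
import mne

# 1) Load the BioSemi recording and re-reference to the average of the mastoid electrodes
raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)      # assumed file name
raw.set_montage("biosemi64", on_missing="ignore")             # assumed standard montage (needed for CSD)
raw.set_eeg_reference(ref_channels=["EXG1", "EXG2"])          # assumed mastoid channel names

# 2) Band-pass filter 0.1-30 Hz
raw.filter(l_freq=0.1, h_freq=30.0)

# 3) ICA to remove eye-blink / eye-movement components (chosen by inspection)
ica = mne.preprocessing.ICA(n_components=25, random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]                                           # assumed eye-related components
ica.apply(raw)

# 4-5) Epoch from -200 to 500 ms and baseline-correct on the pre-stimulus window
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, baseline=(-0.2, 0.0), preload=True)

# 7) Current source density transform
epochs_csd = mne.preprocessing.compute_current_source_density(epochs)
```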
### 3.3.7 Statistical analysis: behavioral data
Choice accuracy was computed for each condition in the masked and unmasked trials (Figure 3.3). Differences between the conditions were tested using two-factor (Scene complexity: segmented, low, med, high; Masking: masked, unmasked) repeated-measures ANOVAs. Significant main effects were followed up by post-hoc pairwise comparisons between conditions using Sidák multiple comparisons correction at $$\alpha$$ = 0.05. Behavioral data were analyzed in Python using the following packages: Statsmodels, SciPy, NumPy, Pandas, Matplotlib and Seaborn (Jones et al., 2001; McKinney & Others, 2010; Oliphant, 2006; Seabold & Perktold, 2010).
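A minimal statsmodels sketch of such a two-factor repeated-measures ANOVA (the long-format data frame and its column names are assumptions):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per subject x condition cell, long format, e.g.:
#   subject  complexity  masking   accuracy
#   s01      segmented   unmasked  0.97
df = pd.read_csv("behavior_long.csv")  # assumed file

aov = AnovaRM(df, depvar="accuracy", subject="subject",
              within=["complexity", "masking"]).fit()
print(aov)
```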
### 3.3.9 Statistical analysis: EEG - multivariate classification
The same preprocessing pipeline was used as for the ERP analyses. To evaluate how object category information in our EEG signal evolves over time, cross-decoding analyses were performed by training a Support Vector Machine (SVM) classifier on all trials from the pattern localizer experiment (performed by five different subjects) and testing it on each of the main experiment conditions. Object category classification was performed on a vector of EEG amplitudes across 22 electrodes, including occipital (I1, Iz, I2, O1, Oz, O2), peri-occipital (PO3, PO7, POz, PO4, PO8), and parietal (Pz, P1-P10) electrodes. EEG activity was standardized and averaged across the five time windows derived from the ERP analyses. Statistical significance was determined using a Wilcoxon signed-rank test, and corrected for multiple comparisons using a false discovery rate (FDR) of 0.05.
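A minimal scikit-learn sketch of this cross-decoding scheme (the arrays are placeholders; in the actual analysis the features are the standardized amplitudes of the 22 posterior electrodes averaged within one time window):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_loc = rng.standard_normal((500, 22))   # placeholder localizer features (one time window)
y_loc = rng.integers(0, 5, 500)          # placeholder object-category labels
X_main = rng.standard_normal((96, 22))   # placeholder main-experiment features (one condition)
y_main = rng.integers(0, 5, 96)

# Train on the localizer data, test on the main experiment (cross-decoding)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_loc, y_loc)
print(clf.score(X_main, y_main))         # chance level is 1/5 for five categories
```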
### 3.3.10 Data and code availability
Data and code to reproduce the analyses are available at the Open Science Framework (#ru26k) and at https://github.com/noorseijdel/2020_EEG_figureground
## 3.4 Results
### 3.4.2 Network performance
Next, we presented the same images to Deep Convolutional Neural Networks with different architectures. For the CORnets (Figure 3.4, left panel), a non-parametric Friedman test differentiated accuracy across the different conditions (segmented, low, medium, high) for all architectures, Friedman’s Q(3) = 27.8400; 24.7576; 26.4687 for CORnet-Z, -RT -S respectively, all p < .001. A Mann-Whitney U test on the difference in performance between segmented and high complexity trials indicated a smaller decrease in performance for CORnet-S compared to CORnet-Z (Mann–Whitney U = 100.0, n1 = n2 = 10, p < .001, two-tailed). For the ResNets (Figure 3.4, right panel), a non-parametric Friedman test differentiated accuracy across the different conditions for ResNet-10 and ResNet-18, Friedman’s Q(3) = 23.9053; 22.9468, for ResNet-10 and ResNet-18 respectively, both p < .001. A Mann-Whitney U test on the difference in performance between segmented and high complexity trials indicated a smaller decrease in performance for ResNet-34 compared to ResNet-10 (Mann–Whitney U = 100.0, n1 = n2 = 10, p < .001, two-tailed). Overall, in line with human performance, results indicated a higher degree of impairment in recognition for objects in complex backgrounds for feed-forward or more shallow networks, compared to recurrent or deeper networks.
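The non-parametric tests reported above are available in SciPy; a sketch with placeholder accuracy arrays (one value per fine-tuning run and condition) is:

```python
import numpy as np
from scipy.stats import friedmanchisquare, mannwhitneyu

rng = np.random.default_rng(0)
# Placeholder accuracies for 10 fine-tuning runs of one network in the four conditions
seg, low, med, high = (rng.uniform(0.7, 1.0, 10) for _ in range(4))

stat, p = friedmanchisquare(seg, low, med, high)   # does accuracy differ across conditions?
print(stat, p)

# Drop in performance (segmented minus high complexity), compared between two architectures
drop_a = rng.uniform(0.0, 0.1, 10)   # e.g. a recurrent / deeper network
drop_b = rng.uniform(0.1, 0.3, 10)   # e.g. a feed-forward / shallower network
u, p = mannwhitneyu(drop_a, drop_b, alternative="two-sided")
print(u, p)
```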
### 3.4.4 EEG multivariate classification
To further investigate the representational dynamics of object recognition under different complexity conditions, multivariate decoding analyses were performed on the averaged activity in the five time windows (Figure 3.6. To control for response-related activity (keyboard buttons were fixed across the task), a cross-decoding analysis was performed, by training the classifier on all trials from an independent pattern localizer experiment, and testing it on each of the main experiment conditions (see Methods for details). For unmasked trials, a Wilcoxon signed-rank test on the exploratory dataset indicated successful decoding for segmented trials in all five time windows (Z = 100, p < 0.001; Z = 89, p < 0.001; Z = 30, p < 0.001; Z = 131, p < 0.001; Z = 141, p < 0.001) and low trials in the first three time windows (92-115 ms; 120-150 ms; 155-217 ms; Z = 198, p = 0.007; Z = 82, p < 0.001; Z = 61, p < 0.001). For objects on medium complex background, successful above-chance decoding emerged slightly later, in time windows 2 and 3 (Z = 200, p = 0.012; Z = 110, p < 0.001). For objects on high complex background, there was successful decoding in time window 3, Z=216, p = 0.045. For masked trials, there was successful decoding for the segmented objects in time windows 1, 3 and 4 , Z = 113, p < 0.001; Z = 183, p = 0.004; Z = 186, p = 0.004, followed by later additional decoding of low (155-217 ms), Z = 138, p = 0.001, and high (221-275 ms) complexity trials, Z = 157, p = 0.003. There were no significant time windows for medium complexity trials. All p-values reported were corrected by FDR = 0.05. Finally, we aimed to replicate these findings in the confirmatory dataset (N = 20). Overall, results indicated fewer instances of successful object decoding, and if present, slightly delayed compared to the exploratory set. For unmasked trials, results from the Wilcoxon Signed-Ranks test indicated successful decoding for segmented trials in all time windows except the second (92-115 ms; 155-217 ms; 221-275 ms; 279-245 ms), Z = 27, p = 0.006; Z = 18, p = 0.003; Z = 0, p < 0.001; Z = 35, p = 0.011. There were no other significant time windows from other unmasked conditions. For masked trials, there was significant decoding for segmented trials in time window 3 and 4 (155-217 ms; 221-275 ms), Z = 36, p = 0.031; Z = 38, p = 0.031, and for low trials in time window 2, Z = 36 , p = 0.050. Overall, these findings showed that different objects evoked reliably different sensor patterns when presented in isolation or in ‘simple’ environments, within the first feed-forward sweep of visual information processing. Additionally, results indicated decreased and later decoding for objects embedded in more complex backgrounds, suggesting that object representations for objects on complex backgrounds emerge later. Finally, these findings also indicate that the object category representations generalized across tasks and participants.
## 3.5 Discussion
This study systematically investigated whether recurrent processing is required for figure-ground segmentation during object recognition. A converging set of behavioral, EEG and computational modelling results indicate that recurrent computations are required for figure-ground segmentation of objects in complex scenes. These findings are consistent with previous findings showing enhanced feedback for complex scenes (Groen, Jahfari, et al., 2018), and visual backward masking being more effective for images that were ‘more difficult to segment’ (Koivisto et al., 2014). We interpret these results as showing that figure-ground segmentation, driven by recurrent processing, is not necessary for object recognition in simple scenes but it is for more complex scenes.
#### 3.5.0.1 Effects of scene complexity using artificial backgrounds
In an earlier study, using natural scenes, we already showed that feedback was selectively enhanced for high complexity scenes, during an animal detection task. While there are numerous reasons for using naturalistic scenes (Felsen et al., 2005; Felsen & Dan, 2005; Talebi & Baker, 2012), it is difficult to do controlled experiments with them because they vary in many (unknown) dimensions. Additionally, SC and CE (measures of scene complexity) could correlate with other contextual factors in the scene (e.g. SC correlates with perception of naturalness of a scene (Groen et al., 2013), and could be used as diagnostic information for the detection of an animal. Additionally, previous research has shown that natural scenes and scene structure can facilitate object recognition (Davenport & Potter, 2004; Kaiser & Cichy, 2018; Neider & Zelinsky, 2006). Results from the current study, using artificial backgrounds of varying complexity, replicate earlier findings while allowing us to attribute the effects to SC and CE, and the subsequent effect on segmentability. A limitation of any experiment with artificially generated (or artificially embedded) images is that it’s not clear whether our findings will generalize to ‘real images’ that have not been manipulated in any way. Together with the previous findings, however, our results corroborate the idea that more extensive processing (possibly in the form of recurrent computations) is required for object recognition in more complex, natural environments (Groen, Jahfari, et al., 2018; Kar et al., 2019; Rajaei et al., 2019; Tang et al., 2018).
#### 3.5.0.2 Time course of object recognition
Based on the data from the exploratory dataset (N = 40), we selected five time windows in the ERPs to test our hypotheses on the confirmatory dataset. For our occipital-peri-occipital pooling, we expected the first feedforward sweep to be unaffected by scene complexity. Indeed, amplitudes of the difference waves, averaged across the selected time windows, indicated no influence of masking or scene complexity early in time (94-110 ms). The observation that all three difference waves deviated from zero, however, indicates that there was an effect of segmentation. In this early time window, background presence thus seems to be more important than the complexity of the background. This difference could be attributed to the detection of additional low-level features in the low, medium and high complexity condition, activating a larger set of neurons that participate in the first feedforward sweep (Lamme & Roelfsema, 2000). In the second and third time window (120-217 ms), differences between the complexity conditions emerge. We interpret these differences as reflecting the increasing need for recurrent processes. Our results are generally consistent with prior work investigating the time course of visual processing of objects under more or less challenging conditions (Cichy et al., 2014; Contini et al., 2017; DiCarlo & Cox, 2007; Rajaei et al., 2019; Tang et al., 2018). In line with multiple earlier studies, masking left the early evoked neural activity (<120 ms) relatively intact, whereas the neural activity after ∼150 ms was decreased (Boehler et al., 2008; Del Cul et al., 2007; Fahrenfort et al., 2007; Koivisto & Revonsuo, 2010; Lamme et al., 2002; Lamme & Roelfsema, 2000).
Decoding results corroborated these findings, showing decreased or delayed decoding onsets for objects embedded in more complex backgrounds, suggesting that object representations for those images emerge later. Additionally, when recurrent processing was impaired using backward masking, only objects presented in isolation or in ‘simple’ environments evoked reliably different sensor patterns that our classifiers were able to pick up (Figure 3.6).
#### 3.5.0.4 Consistency of object decoding results
In the exploratory set, results from the multivariate decoding analyses indicated early above chance decoding for ‘simple’ scenes (segmented and low) in both unmasked and masked trials. For more complex scenes decoding emerged later (medium) or was absent (high) for unmasked trials. In the confirmatory set, however, there were fewer instances of successful object decoding, and if present, successful decoding was delayed. A potential explanation for this finding could be that the sample size in the confirmatory dataset was inadequate for the chosen multivariate decoding analyses, resulting in reduced statistical power. A simulation analysis on the exploratory set, in which we randomly selected 20 participants (repeated 1000 times) indicated reduced decoding accuracy, similar to our confirmatory results. Our choice for the number of participants in the confirmatory dataset thus does not seem to be sufficient (Supplementary Figure 3.7).
#### 3.5.0.5 Probing cognition with Deep Convolutional Neural Networks
One way to understand how the human visual system processes visual information involves building computational models that account for human-level performance under different conditions. Here we used Deep Convolutional Neural Networks, because they show remarkable performance on both object and scene recognition (e.g. Russakovsky et al. (2015); He et al. (2016)). While we do not aim to claim that DCNNs are identical to the human brain, we argue that studying how performance of different architectures compares to human behavior could be informative about the type of computations that are underlying this behavior. In the current study, it provides an additional test for the involvement of recurrent connections. Comparing the (behavioral) results of DCNNs with findings in humans, our study adds to a growing realization that more extensive processing, in the form of recurrent computations, is required for object recognition in more complex, natural environments (Groen, Jahfari, et al., 2018; Kar et al., 2019; Rajaei et al., 2019; Tang et al., 2018).
### 3.5.1 Conclusion
Results from the current study show that how object recognition is resolved depends on the context in which the target object appears: for objects presented in isolation or in ‘simple’ environments, object recognition appears to be dependent on the object itself, resulting in a problem that can likely be solved within the first feedforward sweep of visual information processing. When the environment is more complex, recurrent processing is necessary to group the elements that belong to the object and segregate them from the background.
## 3.6 Supplement to Chapter 3
### References
Boehler, C. N., Schoenfeld, M. a, Heinze, H.-J., & Hopf, J.-M. (2008). Rapid recurrent processing gates awareness in primary visual cortex. Proc. Natl. Acad. Sci. U. S. A., 105(25), 8742–8747.
Breitmeyer, B. G., & Ogmen, H. (2000). Recent models and findings in visual backward masking: A comparison, review, and update. Percept. Psychophys., 62(8), 1572–1595.
Cichy, R. M., Pantazis, D., & Oliva, A. (2014). Resolving human object recognition in space and time. Nat. Neurosci., 17(3), 1–10.
Contini, E. W., Wardle, S. G., & Carlson, T. A. (2017). Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions. Neuropsychologia, 105, 165–176.
Crouzet, S. M., & Serre, T. (2011). What are the visual features underlying rapid object recognition? Front. Psychol., 2, 326.
Davenport, J. L., & Potter, M. C. (2004). Scene consistency in object and background perception. Psychol. Sci., 15(8), 559–564.
Del Cul, A., Baillet, S., & Dehaene, S. (2007). Brain dynamics underlying the nonlinear threshold for access to consciousness. PLoS Biol., 5(10), e260.
DiCarlo, J. J., & Cox, D. D. (2007). Untangling invariant object recognition. Trends Cogn. Sci., 11(8), 333–341.
Fahrenfort, J. J., Scholte, H. S., & Lamme, V. A. (2007). Masking disrupts reentrant processing in human visual cortex. J. Cogn. Neurosci., 19(9), 1488–1497.
Felsen, G., & Dan, Y. (2005). A natural approach to studying vision. Nat. Neurosci., 8(12), 1643–1646.
Felsen, G., Touryan, J., Han, F., & Dan, Y. (2005). Cortical sensitivity to visual features in natural scenes. PLoS Biol., 3(10), 1819–1828.
Ghebreab, S., Scholte, S., Lamme, V., & Smeulders, A. (2009). A biologically plausible model for rapid natural scene identification. Adv. Neural Inf. Process. Syst., 629–637.
Griffin, G., Holub, A., & Perona, P. (2007). Caltech-256 object category dataset. 20.
Groen, I. I. A., Ghebreab, S., Prins, H., Lamme, V. A. F., & Scholte, H. S. (2013). From image statistics to scene gist: Evoked neural activity reveals transition from Low-Level natural image structure to scene category. Journal of Neuroscience, 33(48), 18814–18824.
Groen, I. I. A., Jahfari, S., Seijdel, N., Ghebreab, S., Lamme, V. A., & Scholte, H. S. (2018). Scene complexity modulates degree of feedback activity during object detection in natural scenes. PLoS Computational Biology, 14(12), e1006690.
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778.
Jones, E., Oliphant, T., Peterson, P., & Others. (2001). SciPy: Open source scientific tools for python.
Kaiser, D., & Cichy, R. M. (2018). Typical visual-field locations facilitate access to awareness for everyday objects. Cognition, 180, 118–122.
Kar, K., Kubilius, J., Schmidt, K., Issa, E. B., & DiCarlo, J. J. (2019). Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior. Nat. Neurosci., 22(6), 974–983.
Koivisto, M., Kastrati, G., & Revonsuo, A. (2014). Recurrent processing enhances visual awareness but is not necessary for fast categorization of natural scenes. J. Cogn. Neurosci., 26(2), 223–231.
Koivisto, M., & Revonsuo, A. (2010). Event-related brain potential correlates of visual awareness. In Neuroscience & Biobehavioral Reviews (Nos. 6; Vol. 34, pp. 922–934).
Kubilius, J., Schrimpf, M., Nayebi, A., Bear, D., Yamins, D. L. K., & others. (2018). CORnet: Modeling the neural mechanisms of core object recognition. BioRxiv.
Kuznetsova, A., Rom, H., Alldrin, N., Uijlings, J., Krasin, I., Pont-Tuset, J., Kamali, S., Popov, S., Malloci, M., Kolesnikov, A., & others. (2020). The open images dataset v4. International Journal of Computer Vision, 1–26.
Lamme, V. a F., & Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends Neurosci., 23(11), 571–579.
Lamme, V. A. F., Zipser, K., & Spekreijse, H. (2002). Masking interrupts figure-ground signals in V1. J. Cogn. Neurosci., 14(7), 1044–1053.
Liao, Q., & Poggio, T. (2016). Bridging the gaps between residual learning, recurrent neural networks and visual cortex. http://arxiv.org/abs/1604.03640
Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., & Zitnick, C. L. (2014). Microsoft COCO: Common objects in context. Computer Vision – ECCV 2014, 740–755.
McKinney, W., & Others. (2010). Data structures for statistical computing in python. Proceedings of the 9th Python in Science Conference, 445, 51–56.
Neider, M. B., & Zelinsky, G. J. (2006). Scene context guides eye movements during visual search. Vision Res., 46(5), 614–621.
Oliphant, T. E. (2006). A guide to NumPy (Vol. 1). Trelgol Publishing USA.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., & others. (2019). PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 8024–8035.
Perrin, F., Pernier, J., Bertrand, O., & Echallier, J. F. (1989). Spherical splines for scalp potential and current density mapping. Electroencephalogr. Clin. Neurophysiol., 72(2), 184–187.
Potter, M. C., & Levy, E. I. (1969). Recognition memory for a rapid sequence of pictures. J. Exp. Psychol., 81(1), 10–15.
Rajaei, K., Mohsenzadeh, Y., Ebrahimpour, R., & Khaligh-Razavi, S.-M. (2019). Beyond core object recognition: Recurrent processes account for object recognition under occlusion. PLoS Comput. Biol., 15(5), e1007001.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., & Fei-Fei, L. (2015). ImageNet large scale visual recognition challenge. Int. J. Comput. Vis., 115(3), 211–252.
Russell, B. C., Torralba, A., Murphy, K. P., & Freeman, W. T. (2008). LabelMe: A database and Web-Based tool for image annotation. Int. J. Comput. Vis., 77(1-3), 157–173.
Scholte, H. S., Ghebreab, S., Waldorp, L., Smeulders, A. W. M., & Lamme, V. A. F. (2009). Brain responses strongly correlate with weibull image statistics when processing natural images. J. Vis., 9(4), 29–29.
Seabold, S., & Perktold, J. (2010). Statsmodels: Econometric and statistical modeling with python. Proceedings of the 9th Python in Science Conference, 57, 61.
Talebi, V., & Baker, C. L., Jr. (2012). Natural versus synthetic stimuli for estimating receptive field models: A comparison of predictive robustness. J. Neurosci., 32(5), 1560–1576.
Tang, H., Schrimpf, M., Lotter, W., Moerman, C., Paredes, A., Ortega Caro, J., Hardesty, W., Cox, D., & Kreiman, G. (2018). Recurrent computations for visual pattern completion. Proc. Natl. Acad. Sci. U. S. A., 115(35), 8835–8840.
Thorpe, S., Fize, D., & Marlot, C. (1996). Speed of processing in the human visual system. Nature, 381(6582), 520.
Uttl, B. (2005). Measurement of individual differences: Lessons from memory assessment in research and clinical practice. Psychological Science, 16(6), 460–467.
VanRullen, R., & Thorpe, S. J. (2001). The time course of visual processing: From early perception to decision-making. J. Cogn. Neurosci., 13(4), 454–461.
VanRullen, R., & Thorpe, S. J. (2002). Surfing a spike wave down the ventral stream. Vision Res., 42(23), 2593–2615.
Vigario, R., Sarela, J., Jousmiki, V., Hamalainen, M., & Oja, E. (2000). Independent component approach to the analysis of EEG and MEG recordings. In IEEE Transactions on Biomedical Engineering (Nos. 5; Vol. 47, pp. 589–593).
Wokke, M. E., Sligte, I. G., Steven Scholte, H., & Lamme, V. A. F. (2012). Two critical periods in early visual cortex during figure-ground segregation. Brain Behav., 2(6), 763–777.
Xiao, J., Hays, J., Ehinger, K. A., Oliva, A., & Torralba, A. (2010). SUN database: Large-scale scene recognition from abbey to zoo. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 3485–3492. |
# What's in the denominator?
Calculus Level 3
Given that for $$x>0$$, $$y=y(x)>0$$, $e^{-\cot(y)}=x^x+x\ln(x)$
and implicit differentiation of the above equation gives $\frac{dy}{dx}=\frac{(x^x+1)(\ln(x)+1)}{(\csc^2(y))(x^x+xf(x))}$
where $$f(x)$$ is a function involving $$x$$ (not necessarily equalling $$y$$), find $$f(x)$$.
# Proving an inequality with Taylor polynomials
This is a homework question I was asked to do
Of a twice differentiable function $f : \mathbb{R} \to \mathbb{R}$ it is given that $f(2) = 3, f'(2) = 1$ and $f''(x) = \frac{e^{-x}}{x^2+1}$ . Now I have to prove that $$\frac{7}{2} \leq f\left(\frac{5}{2}\right) \leq \frac{7}{2} + \frac{e^{-2}}{40} .$$ I tried this by computing the third Taylor polynomial of $f$ near $a=2$, setting $x = \frac{5}{2}$, which gave me $$f(5/2) \approx 7/2 + \frac{e^{-2}}{40} - \frac{ - e^{-5/2}}{48}$$, but now I don't know what to do next. I guess one has to do something with finding the error of the first and second order Taylor polynomials, but I'm not sure how to do so. Can you help me?
Please don't include a "signature" in your posts; see the FAQ. Thank you. – Arturo Magidin Nov 22 '11 at 20:59
try Lagrange's form of the error term with a linear approximation to $f(x)$ around $x = 2$. – Zarrax Nov 22 '11 at 21:07
Using a local linear approximation (that is, a degree 1 Taylor polynomial approximation), we have that $$f(x) \approx f(2) + f'(2)(x-2) = x+1.$$
Using the Lagrange Error Bound (with $n=1$) we have that $$\left| f(x) - (x+1)\right| \leq \frac{M}{2!}|x-2|^2$$ where $\max|f''(x)|\leq M$ on the interval between $2$ and $x$.
On the interval $[2,2.5]$, the function $f''(x) = \frac{e^{-x}}{x^2+1}$ is decreasing (and positive), since the derivative is $$-\frac{(x^2+1)e^{-x} + 2xe^{-x}}{(x^2+1)^2},$$ so we can take $M=f''(2) = \frac{e^{-2}}{5}$. Thus, the bound at $x=\frac{5}{2}$ is $$\frac{M}{2}\left(\frac{1}{2}\right)^2 = \frac{e^{-2}}{10}\left(\frac{1}{2}\right)^2.$$
Plugging into the Lagrange Error Bound and resolving the absolute value gives: $$-\left(\frac{e^{-2}}{10}\right)\left(\frac{1}{2}\right)^2 \leq f\left(\frac{5}{2}\right) - \frac{7}{2}\leq \frac{e^{-2}}{10}\left(\frac{1}{2}\right)^2$$ from which you should be able to deduce what you want.
Thanks a lot! Ok, I feel I should be able to deduce what I want but I can't figure it out completely. I can now find that f(5/2) <= e^-2/40 + 7/2 but I can't deduce the inequality on the left from the information you have provided me with. – Max Muller Nov 22 '11 at 22:17
@Max: Since the second derivative is positive, the function is concave up; that means that the tangent lies under the graph of $f$. That means that the tangent line approximation is an underestimate of $f(x)$. Since the tangent line approximation for $f(5/2)$ is $(5/2)+1 = 7/2$, that means that $f(5/2)\geq 7/2$. – Arturo Magidin Nov 22 '11 at 22:38
The calculation can be done in the following way: $f'''(x)=(f''(x))'= -\frac{e^{-x}}{x^2+1} -\frac{2xe^{-x}}{(x^2+1)^2}$ which at $x=2$ yields $f'''(2)=-\frac{9e^{-2}}{25}$ and so \begin{eqnarray*} f(5/2) & = & f(2)+f'(2)(5/2-2) + f''(2)(5/2-2)^2/2! +f'''(2)(5/2-2)^3/3!+\dots \\ & = &3+1/2+e^{-2}/40+ \frac{-3e^{-2}}{50}+\dots \end{eqnarray*} and the remaining terms are smaller than $\frac{3e^{-2}}{50}$ so you obtain your inequalities.
Start with $f^\prime(x) = f^\prime(2) + \int_2^x f^{\prime\prime}(y) \mathrm{d} y = 1 + \int_2^x \frac{\exp(-u)}{1+u^2} \mathrm{d} u$. Then $$\begin{eqnarray} f(x) &=& f(2) + \int_2^x f^\prime(z) \mathrm{d} z = 3 + \int_2^x \left( 1 + \int_2^z \frac{\exp(-u)}{1+u^2} \mathrm{d} u \right) \mathrm{d} z \\ &=& 3 + (x-2) + \int_2^x \int_2^z \frac{\exp(-u)}{1+u^2} \mathrm{d} u \mathrm{d} z \end{eqnarray}$$
Since the double integral is a non-negative quantity (as an integral of non-negative function), it follows $f\left( \frac{5}{2} \right) \ge 3 + \left( \frac{5}{2} - 2\right) = \frac{7}{2}$.
On the other hand, since $\frac{\exp(-u)}{1+u^2}$ is decreasing for $u>0$: $$\begin{eqnarray} \int_2^\frac{5}{2} \int_2^z \frac{\exp(-u)}{1+u^2} \mathrm{d} u \mathrm{d} z &\le& \int_2^\frac{5}{2} \int_2^z \frac{\exp(-2)}{1+2^2} \mathrm{d} u \mathrm{d} z = \int_2^{\frac{5}{2}} \frac{\exp(-2)}{5} (z-2) \mathrm{d} z \\ &=& \frac{1}{5 \mathrm{e}^{2}} \cdot \left. \frac{1}{2} (z-2)^2 \right|_2^\frac{5}{2} = \frac{1}{5 \mathrm{e}^{2}} \cdot \frac{1}{8} = \frac{1}{40 \mathrm{e}^{2}} \end{eqnarray}$$ It, thus, follows that $$\frac{7}{2} \le f\left( \frac{5}{2} \right) \le \frac{7}{2} + \frac{1}{40 \mathrm{e}^{2}}$$
- |
## bev199: can anyone help me with this problem? Find the vertex and equation, line of symmetry, and graph the function f(x) = 1/3 x^2 (asked one year ago)
1. andjie Group Title
f(x)=$\frac{ x ^{2} }{ 3 }$
2. andjie Group Title
I might be reading it wrong I am sorry
3. mark_o. Group Title
the parabola or quadratic equation is f(x)=ax^2 +bx +c, vertex x=-b/2a if you have a formula of f(x)= 1/3 x^2+bx +c --> here b=0 and c=0 vertex x=-b/2a x=-0/2(1/3)=0 sub this to f(x)= y=1/3 x^2=0 then what is the vertex V(x,y)=___? is it V(0,0) yes or no? for graphing use values of x=0,+-1,+-2 +-3 etc..etc....:D good luck now
4. andjie Group Title
You had it right I think I was confused, but you really helped me understand the question...
5. mark_o. Group Title
ok good,,, good luck now and have fun ....:D
6. andjie Group Title
I will once someone looks at my problems lol..... One person is looking I just want to ensure I am on the right track....
7. mark_o. Group Title
ok where is the prob? i may be able to help a little bit :D
8. andjie Group Title
How would I show you?
9. andjie Group Title
If anyone can help me check my work I would be very thankful. So it can be more understandable I attached my work in a word file. Thank you so very much I just want to ensure I am on the right track.
10. andjie Group Title
Thats the question
11. mark_o. Group Title
hmm go to your prob site and notify my name there then i will click it and be there :D
12. mark_o. Group Title
like this hi andjie
13. andjie Group Title
14. mark_o. Group Title
yes copy and paste my name there then ill just click on it
15. andjie Group Title
ok I did that
16. mark_o. Group Title
hmm i didnt have a notification,, did you highlight and copy then paste my name there?
17. andjie Group Title
Paste it where exactly lol
18. andjie Group Title
19. mark_o. Group Title
on where you posted your problem
20. mark_o. Group Title
ok go back to where you posted your problem then paste my name there
21. andjie Group Title
I posted your name in the question
22. andjie Group Title
So confusing
23. mark_o. Group Title
hmm i dont know why i didnt get a notification?
24. mark_o. Group Title
is it still open? why dont you close it hen repost them new
25. andjie Group Title
ok |
## Some probabilistic aspects of the terminal digits of Fibonacci numbers.(English)Zbl 0833.11034
By terminal digits, both the initial digit and the final digit are meant. The authors prove several probabilistic relationships between the initial and the final digits of the Fibonacci numbers $$F_k$$, e.g., they give the joint probability that the initial and the final digit take fixed values. In this connection, even though not explicitly stated, the indices of the Fibonacci numbers are assumed to be uniformly distributed in a large interval. The principal means of reasoning is the statistical independence between the initial and the final digit, which arises from the periodicity 60 of the sequence $$F_k \bmod 10$$ and from the fact that every subsequence $$F_h, F_{60+ h}, F_{120+ h}, \dots$$ obeys Benford’s law. Finally, the authors mention some related results on Lucas numbers.
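As a quick empirical illustration (not part of the review), the two ingredients used above can be checked numerically; the sample size below is arbitrary:

```python
from collections import Counter
from math import log10

def fib(n):
    """Yield the first n Fibonacci numbers F_1, F_2, ..., F_n."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
        yield a

N = 3000
fibs = list(fib(N))
last = [f % 10 for f in fibs]
first = Counter(int(str(f)[0]) for f in fibs)

# Final digits repeat with period 60 (the Pisano period modulo 10).
assert last[:60] == last[60:120]

# Initial digits are close to Benford's law: P(d) = log10(1 + 1/d).
for d in range(1, 10):
    print(d, round(first[d] / N, 3), round(log10(1 + 1 / d), 3))
```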
### MSC:
11K31 Special sequences 11B39 Fibonacci and Lucas numbers and polynomials and generalizations |
# Time complexity of derivation, gradient,differential, jacobian matrix
What is the time complexity of computing a gradient $$\nabla f$$, in $$\mathcal O$$-notation? And what is the time complexity of computing a Jacobian matrix, in $$\mathcal O$$-notation? Does anyone know of references that discuss the time complexity of gradient computation? I am writing a reply to reviewers about the computational complexity of gradient and matrix operations, and I need some references to cite.
• For the numerical calculation of a 1D derivative at a single point, you require at least two points (left aside methods like the complex step-derivative). So that's O(1). You can carry that forward to higher dimensions and then to vectors and matrices. – davidhigh Oct 15 at 12:19
• What algorithms are you using to compute your derivatives? Finite differencing? Automatic Differentiation (in forward or reverse mode)? How complicated is $f$? – Brian Borchers Oct 16 at 1:48 |
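Building on the first comment above, here is a minimal sketch of the finite-difference case (a guess at what is being asked, not a general answer): a forward-difference gradient of $$f:\mathbb{R}^n \to \mathbb{R}$$ needs $$n+1$$ evaluations of $$f$$, so it costs $$\mathcal O(n \cdot \text{cost}(f))$$, and a forward-difference Jacobian of $$f:\mathbb{R}^n \to \mathbb{R}^m$$ uses the same $$n+1$$ evaluations but already has $$m \cdot n$$ entries to write down. Reverse-mode automatic differentiation changes these counts (a gradient then costs a small constant multiple of one evaluation of $$f$$).

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient of f: R^n -> R.
    Uses n + 1 evaluations of f, so the cost is O(n * cost(f))."""
    x = np.asarray(x, dtype=float)
    f0 = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

def fd_jacobian(f, x, h=1e-6):
    """Forward-difference Jacobian of f: R^n -> R^m.
    Also n + 1 evaluations of f; the result has m * n entries."""
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    J = np.empty((f0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        J[:, i] = (np.atleast_1d(f(xp)) - f0) / h
    return J
```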
### Abstract
Surface integral equation solution, where surface polarization is included to model galvanic measurements. The model and the corresponding software are best suited to forward modelling. The problem is formulated for a finite conductivity contrast.
# Mixed table borders
I am trying to draw a table where some lines have (vertical) borders and some lines don't. I couldn't find a satisfying way to do this.
Horizontal borders are not the problem obviously, since I can decide where I need a \hline, but the vertical ones seem to be fixed in the parameter of the whole tabular environment.
The best I could find was using multicolumn, but I guess that's not the proper way to do it, since its real purpose is to merge cells.
Here is what I've done so far:
\begin{tabular}{ cccccccc }
\hline
\multicolumn{1}{|c|}{7} &
\multicolumn{1}{|c|}{16} &
\multicolumn{1}{|c|}{3} &
\multicolumn{1}{|c|}{-1} &
\multicolumn{1}{|c|}{9} &
\multicolumn{1}{|c|}{32} &
\multicolumn{1}{|c|}{4} &
\multicolumn{1}{|c|}{2}\\
\hline
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\\
\end{tabular}
This example code does produce the layout I need, but I'm sure that's not how it should be done.
tl;dr: I want to produce a table where the first line has horizontal and vertical borders, while the second line doesn't have any borders at all.
• Seven of the eight \multicolumn{1}{|c|}{...} statements should have only one, not two, vertical bars. E.g, the first statement could be \multicolumn{1}{|c|}{7}, and the remaining seven should be of the form \multicolumn{1}{c|}{...}. This issue becomes very evident if you load the array package -- or if you load a package (such as tabularx) that, in turn, loads the array package. – Mico Nov 19 '15 at 20:49
## 2 Answers
You've done it almost the right way. If you want to have a shorter code, you can define a command that will replace all these \multicolumns. Here is a way to do it, with some minor improvements to your table:
\documentclass{book}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{array}
\usepackage{xparse}
\DeclareExpandableDocumentCommand\fcell{O{c}m}{\multicolumn{1}{>{$}#1<{$}|}{#2}}
\begin{document}
\renewcommand\arraystretch{1.25}
\begin{tabular}{*{8}{>{$}c<{$}}}
\hline
\multicolumn{1}{|c|}{7} & \fcell{16} & \fcell{3} & \fcell{-1} & \fcell{9} & \fcell{32} & \fcell{4} & \fcell{2}\\
\hline
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\\
\end{tabular}
\end{document}
You wrote
... The best I could find was using \multicolumn, but I guess that's not the proper way to do it, since its real purpose is to merge cells."
Not quite. I'd say that \multicolumn has two real purposes: (a) to merge cells -- the purpose you mention -- and (b) to change the column type of its argument, be that a single cell or a range of cells.
To simplify switching between column types, it's frequently handy to set up a shortcut macro. In the example below, the macro \mr -- short for "multicolumn-right", I suppose -- is set up for just that purpose.
\documentclass{article}
\usepackage{array}
\newcommand{\mr}[1]{\multicolumn{1}{r|}{#1}}
\begin{document}
$\begin{array}{ *{8}{r} }
\hline
\multicolumn{1}{|r|}{7} & \mr{16} & \mr{3} & \mr{-1} & \mr{9} & \mr{32} & \mr{4} & \mr{2}\\
\hline
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\\
\hline
\end{array}$
\end{document} |
If you are interested in producing a high-quality raytraced image from sage output, you should probably use special configuration of the raytracer directly. There is definitely more configuration we should allow in sage directly (choice of camera, some basic lighting options), but for really good results there will be no substitute.
Sage just writes a scene file for the tachyon raytracer. You can take that file and edit it. Here's how you can get such a file:
sage: var('x,y')
(x, y)
sage: G=plot3d(x^2+y^2,(x,-1,1),(y,-1,1))
sage: with open("T.dat","w") as F: F.write(G.tachyon())
Your next problem would be to figure out how to call the tachyon raytracer with such a file. Something like this would probably work:
sage: %system tachyon T.dat -format PNG -o T.png
Now you should have an image file T.png that displays your scene.
Your next stop is to read the documentation of tachyon http://jedi.ks.uiuc.edu/~johns/raytracer/ and edit the scene file T.dat to your liking.
Alternatively, you could look at different export methods, such as G.obj() and G.x3d() and see if mainstream raytracers such as blender can read one of them. |
## C Specification
To return information about the arguments of a kernel, call the function
cl_int clGetKernelArgInfo(
cl_kernel kernel,
cl_uint arg_indx,
cl_kernel_arg_info param_name,
size_t param_value_size,
void* param_value,
size_t* param_value_size_ret);
## Parameters
• kernel specifies the kernel object being queried.
• arg_indx is the argument index. Arguments to the kernel are referred by indices that go from 0 for the leftmost argument to n - 1, where n is the total number of arguments declared by a kernel.
• param_name specifies the argument information to query. The list of supported param_name types and the information returned in param_value by clGetKernelArgInfo is described in the Kernel Argument Queries table.
• param_value is a pointer to memory where the appropriate result being queried is returned. If param_value is NULL, it is ignored.
• param_value_size is used to specify the size in bytes of memory pointed to by param_value. This size must be > size of return type as described in the Kernel Argument Queries table.
• param_value_size_ret returns the actual size in bytes of data being queried by param_name. If param_value_size_ret is NULL, it is ignored.
## Description
Kernel argument information is only available if the program object associated with kernel is created with clCreateProgramWithSource and the program executable was built with the -cl-kernel-arg-info option specified in options argument to clBuildProgram or clCompileProgram.
Table 1. List of param_name values supported by clGetKernelArgInfo
cl_kernel_arg_info Return Type Info. returned in param_value
CL_KERNEL_ARG_ADDRESS_QUALIFIER
Missing before version 1.2.
cl_kernel_arg_address_qualifier
Returns the address qualifier specified for the argument given by arg_indx. This can be one of the following values:
CL_KERNEL_ARG_ADDRESS_GLOBAL
CL_KERNEL_ARG_ADDRESS_LOCAL
CL_KERNEL_ARG_ADDRESS_CONSTANT
CL_KERNEL_ARG_ADDRESS_PRIVATE
If no address qualifier is specified, the default address qualifier which is CL_KERNEL_ARG_ADDRESS_PRIVATE is returned.
CL_KERNEL_ARG_ACCESS_QUALIFIER
Missing before version 1.2.
cl_kernel_arg_access_qualifier
Returns the access qualifier specified for the argument given by arg_indx. This can be one of the following values:
CL_KERNEL_ARG_ACCESS_READ_ONLY
CL_KERNEL_ARG_ACCESS_WRITE_ONLY
CL_KERNEL_ARG_ACCESS_READ_WRITE
CL_KERNEL_ARG_ACCESS_NONE
If argument is not an image type and is not declared with the pipe qualifier, CL_KERNEL_ARG_ACCESS_NONE is returned. If argument is an image type, the access qualifier specified or the default access qualifier is returned.
CL_KERNEL_ARG_TYPE_NAME
Missing before version 1.2.
char[]
Returns the type name specified for the argument given by arg_indx. The type name returned will be the argument type name as it was declared with any whitespace removed. If argument type name is an unsigned scalar type (i.e. unsigned char, unsigned short, unsigned int, unsigned long), uchar, ushort, uint and ulong will be returned. The argument type name returned does not include any type qualifiers.
CL_KERNEL_ARG_TYPE_QUALIFIER
Missing before version 1.2.
cl_kernel_arg_type_qualifier
Returns a bitfield describing one or more type qualifiers specified for the argument given by arg_indx. The returned values can be:
CL_KERNEL_ARG_TYPE_CONST [17]
CL_KERNEL_ARG_TYPE_RESTRICT
CL_KERNEL_ARG_TYPE_VOLATILE [18]
CL_KERNEL_ARG_TYPE_PIPE, or
CL_KERNEL_ARG_TYPE_NONE
CL_KERNEL_ARG_TYPE_NONE is returned for all parameters passed by value.
CL_KERNEL_ARG_NAME
Missing before version 1.2.
char[]
Returns the name specified for the argument given by arg_indx.
[17] CL_KERNEL_ARG_TYPE_CONST is returned for CL_KERNEL_ARG_TYPE_QUALIFIER if the argument is declared with the constant address space qualifier.
[18] CL_KERNEL_ARG_TYPE_VOLATILE is returned for CL_KERNEL_ARG_TYPE_QUALIFIER if the argument is a pointer and the referenced type is declared with the volatile qualifier. For example, a kernel argument declared as global int volatile *x returns CL_KERNEL_ARG_TYPE_VOLATILE but a kernel argument declared as global int * volatile x does not. Similarly, CL_KERNEL_ARG_TYPE_CONST is returned if the argument is a pointer and the referenced type is declared with the restrict or const qualifier. For example, a kernel argument declared as global int const *x returns CL_KERNEL_ARG_TYPE_CONST but a kernel argument declared as global int * const x does not. CL_KERNEL_ARG_TYPE_RESTRICT will be returned if the pointer type is marked restrict. For example, global int * restrict x returns CL_KERNEL_ARG_TYPE_RESTRICT.
clGetKernelArgInfo returns CL_SUCCESS if the function is executed successfully. Otherwise, it returns one of the following errors:
• CL_INVALID_ARG_INDEX if arg_indx is not a valid argument index.
• CL_INVALID_VALUE if param_name is not valid, or if size in bytes specified by param_value_size is < size of return type as described in the Kernel Argument Queries table and param_value is not NULL.
• CL_KERNEL_ARG_INFO_NOT_AVAILABLE if the argument information is not available for kernel.
• CL_INVALID_KERNEL if kernel is not a valid kernel object.
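As a usage sketch (not part of the specification text above; the function name dump_kernel_args and the 128-byte buffers are illustrative assumptions), the following C function prints the type name, argument name, and address qualifier of every kernel argument. It assumes the program was built with -cl-kernel-arg-info and that the kernel object and its argument count (e.g. from clGetKernelInfo with CL_KERNEL_NUM_ARGS) were obtained elsewhere.

#include <stdio.h>
#include <CL/cl.h>

/* Print basic info for each argument of a kernel built with -cl-kernel-arg-info. */
static void dump_kernel_args(cl_kernel kernel, cl_uint num_args)
{
    for (cl_uint i = 0; i < num_args; ++i) {
        char type_name[128] = "";
        char arg_name[128] = "";
        cl_kernel_arg_address_qualifier addr = 0;

        /* If the program was not compiled with -cl-kernel-arg-info, this query
         * fails with CL_KERNEL_ARG_INFO_NOT_AVAILABLE. */
        cl_int err = clGetKernelArgInfo(kernel, i, CL_KERNEL_ARG_TYPE_NAME,
                                        sizeof(type_name), type_name, NULL);
        if (err == CL_KERNEL_ARG_INFO_NOT_AVAILABLE) {
            printf("kernel argument info not available\n");
            return;
        }
        /* Fixed-size query: the result is written directly into the typed variable. */
        clGetKernelArgInfo(kernel, i, CL_KERNEL_ARG_ADDRESS_QUALIFIER,
                           sizeof(addr), &addr, NULL);
        clGetKernelArgInfo(kernel, i, CL_KERNEL_ARG_NAME,
                           sizeof(arg_name), arg_name, NULL);

        printf("arg %u: %s %s (address qualifier 0x%x)\n",
               i, type_name, arg_name, (unsigned)addr);
    }
}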
# SIGMETRICS/PERFORMANCE '22: Abstract Proceedings of the 2022 ACM SIGMETRICS/IFIP PERFORMANCE Joint International Conference on Measurement and Modeling of Computer Systems
Full Citation in the ACM Digital Library
## SESSION: Session: Networking
### Curvature-based Analysis of Network Connectivity in Private Backbone Infrastructures
• Loqman Salamatian
• Scott Anderson
• Joshua Matthews
• Paul Barford
• Walter Willinger
• Mark Crovella
The main premise of this work is that since large cloud providers can and do manipulate probe packets that traverse their privately owned and operated backbones, standard traceroute-based measurement techniques are no longer a reliable means for assessing network connectivity in large cloud provider infrastructures. In response to these developments, we present a new empirical approach for elucidating private connectivity in today's Internet. Our approach relies on using only "light-weight" (i.e., simple, easily-interpretable, and readily available) measurements, but requires applying a "heavy-weight" or advanced mathematical analysis. In particular, we describe a new method for assessing the characteristics of network path connectivity that is based on concepts from Riemannian geometry (i.e., Ricci curvature) and also relies on an array of carefully crafted visualizations (e.g., a novel manifold view of a network's delay space). We demonstrate our method by utilizing latency measurements from RIPE Atlas anchors and virtual machines running in data centers of three large cloud providers to (i) study different aspects of connectivity in their private backbones and (ii) show how our manifold-based view enables us to expose and visualize critical aspects of this connectivity over different geographic scales.
### Automatic Inference of BGP Location Communities
• Brivaldo A. da Silva Jr.
• Paulo Mol
• Osvaldo Fonseca
• Ítalo Cunha
• Ronaldo A. Ferreira
• Ethan Katz-Bassett
We present a set of techniques to infer the semantics of BGP communities from public BGP data. Our techniques infer communities related to the entities or locations traversed by a route by correlating communities with AS paths. We also propose a set of heuristics to filter incorrect inferences introduced by misbehaving networks, sharing of BGP communities among sibling autonomous systems, and inconsistent BGP dumps. We apply our techniques to billions of routing records from public BGP collectors and make available a public database with more than 15 thousand location communities. Our comparison with manually-built databases shows our techniques provide high precision (up to 93%), better coverage (up to 81% recall), and dynamic updates, complementing operators' and researchers' abilities to reason about BGP community semantics.
### Understanding I/O Direct Cache Access Performance for End Host Networking
• Minhu Wang
• Mingwei Xu
• Jianping Wu
Direct Cache Access (DCA) enables a network interface card (NIC) to load and store data directly on the processor cache, as conventional Direct Memory Access (DMA) is no longer suitable as the bridge between NIC and CPU in the era of 100 Gigabit Ethernet. As numerous I/O devices and cores compete for scarce cache resources, making the most of DCA for networking applications with varied objectives and constraints is a challenge, especially given the increasing complexity of modern cache hardware and I/O stacks. In this paper, we reverse engineer details of one commercial implementation of DCA, Intel's Data Direct I/O (DDIO), to explicate the importance of hardware-level investigation into DCA. Based on the learned knowledge of DCA and network I/O stacks, we (1) develop an analytical framework to predict the effectiveness of DCA (i.e., its hit rate) under certain hardware specifications, system configurations, and application properties; (2) measure penalties of the ineffective use of DCA (i.e., its miss penalty) to characterize its benefits; and (3) show that our reverse engineering, measurement, and model contribute to a deeper understanding of DCA, which in turn helps diagnose, optimize, and design end-host networking.
### Traffic Refinery: Cost-Aware Data Representation for Machine Learning on Network Traffic
• Francesco Bronzino
• Paul Schmitt
• Sara Ayoubi
• Hyojoon Kim
• Renata Teixeira
• Nick Feamster
Network management often relies on machine learning to make predictions about performance and security from network traffic. Often, the representation of the traffic is as important as the choice of the model. The features that the model relies on, and the representation of those features, ultimately determine model accuracy, as well as where and whether the model can be deployed in practice. Thus, the design and evaluation of these models ultimately requires understanding not only model accuracy but also the systems costs associated with deploying the model in an operational network. Towards this goal, this paper develops a new framework and system that enables a joint evaluation of both the conventional notions of machine learning performance (model accuracy) and the systems-level costs of different representations of network traffic. We highlight these two dimensions for two practical network management tasks, video streaming quality inference and malware detection, to demonstrate the importance of exploring different representations to find the appropriate operating point. We demonstrate the benefit of exploring a range of representations of network traffic and present Traffic Refinery, a proof-of-concept implementation that both monitors network traffic at 10~Gbps and transforms traffic in real time to produce a variety of feature representations for machine learning. Traffic Refinery both highlights this design space and makes it possible to explore different representations for learning, balancing systems costs related to feature extraction and model training against model accuracy.
## SESSION: Session: Streaming, Gaming, and the Decentralized Web
### Xatu: Richer Neural Network Based Prediction for Video Streaming
• Yun Seong Nam
• Jianfei Gao
• Chandan Bothra
• Ehab Ghabashneh
• Sanjay Rao
• Bruno Ribeiro
• Jibin Zhan
• Hui Zhang
The performance of Adaptive Bitrate (ABR) algorithms for video streaming depends on accurately predicting the download time of video chunks. Existing prediction approaches (i) assume chunk download times are dominated by network throughput; and (ii) apriori cluster sessions (e.g., based on ISP and CDN) and only learn from sessions in the same cluster. We make three contributions. First, through analysis of data from real-world video streaming sessions, we show (i) apriori clustering prevents learning from related clusters; and (ii) factors such as the Time to First Byte (TTFB) are key components of chunk download times but not easily incorporated into existing prediction approaches. Second, we propose Xatu, a new prediction approach that jointly learns a neural network sequence model with an interpretable automatic session clustering method. Xatu learns clustering rules across all sessions it deems relevant, and models sequences with multiple chunk-dependent features (e.g., TTFB) rather than just throughput. Third, evaluations using the above datasets and emulation experiments show that Xatu significantly improves prediction accuracies by 23.8% relative to CS2P (a state-of-the-art predictor). We show Xatu provides substantial performance benefits when integrated with multiple ABR algorithms including MPC (a well studied ABR algorithm), and FuguABR (a recent algorithm using stochastic control) relative to their default predictors (CS2P and a fully connected neural network respectively). Further, Xatu combined with MPC outperforms Pensieve, an ABR based on deep reinforcement learning.
### End-to-end Characterization of Game Streaming Applications on Mobile Platforms
• Sandeepa Bhuyan
• Shulin Zhao
• Ziyu Ying
• Mahmut T. Kandemir
• Chita R. Das
With the advent of 5G, hosting high-quality game streaming applications on mobile devices has become a reality. To our knowledge, no prior study systematically investigates the < QoS, Energy > tuple on the end-to-end game streaming pipeline across the cloud, network, and edge devices to understand the individual contributions of the different pipeline stages. In this paper, we present a comprehensive performance and power analysis of the entire game streaming pipeline through extensive measurements with a high-end workstation mimicking the cloud end, an open-source platform (Moonlight-GameStreaming) emulating the edge device/mobile platform, and two network settings (WiFi and 5G). Our study shows that the rendering stage and the encoding stage at the cloud end are the bottlenecks for 4K streaming. While 5G is certainly more suitable for supporting enhanced video quality with 4K streaming, it is more expensive in terms of power consumption compared to WiFi. Further, the network interface and the decoder units in mobile devices need more energy-efficient design to support high quality games at a lower cost. These observations should help in designing more cost-effective future cloud gaming platforms.
### Dissecting Cloud Gaming Performance with DECAF
• Hassan Iqbal
• Ayesha Khalid
Cloud gaming platforms have witnessed tremendous growth over the past two years, with a number of large Internet companies including Amazon, Facebook, Google, Microsoft, and Nvidia publicly launching their own platforms. However, there is an absence of systematic performance measurement methodologies which can generally be applied. In this paper, we implement DECAF, a methodology to systematically analyze and dissect the performance of cloud gaming platforms across different game genres and game platforms. By applying DECAF, we measure the performance of Google Stadia, Amazon Luna, and Nvidia GeForceNow, and uncover a number of important findings such as processing delays in the cloud comprise majority of the total round trip delay (≈73.54%), the video streams delivered by these platforms are characterized by high variability of bitrate, frame rate, and resolution. Our work has important implications for cloud gaming platforms and opens the door for further research on measurement methodologies for cloud gaming.
### Toxicity in the Decentralized Web and the Potential for Model Sharing
• Haris Bin Zia
• Aravindh Raman
• Ignacio Castro
• Ishaku Hassan Anaobi
• Emiliano De Cristofaro
• Nishanth Sastry
• Gareth Tyson
The "Decentralised Web" (DW) is an evolving concept, which encompasses technologies aimed at providing greater transparency and openness on the web. The DW relies on independent servers (aka instances) that mesh together in a peer-to-peer fashion to deliver a range of services (e.g. micro-blogs, image sharing, video streaming). However, toxic content moderation in this decentralised context is challenging. This is because there is no central entity that can define toxicity, nor a large central pool of data that can be used to build universal classifiers. It is therefore unsurprising that there have been several high-profile cases of the DW being misused to coordinate and disseminate harmful material. Using a dataset of 9.9M posts from 117K users on Pleroma (a popular DW microblogging service), we quantify the presence of toxic content. We find that toxic content is prevalent and spreads rapidly between instances. We show that automating per-instance content moderation is challenging due to the lack of sufficient training data available and the effort required in labelling. We therefore propose and evaluate ModPair, a model sharing system that effectively detects toxic content, gaining an average per-instance macro-F1 score 0.89.
## SESSION: Session: Measurements and Security
### Understanding the Practices of Global Censorship through Accurate, End-to-End Measurements
• Lin Jin
• Shuai Hao
• Haining Wang
• Chase Cotton
It is challenging to conduct a large scale Internet censorship measurement, as it involves triggering censors through artificial requests and identifying abnormalities from corresponding responses. Due to the lack of ground truth on the expected responses from legitimate services, previous studies typically require a heavy, unscalable manual inspection to identify false positives while still leaving false negatives undetected. In this paper, we propose Disguiser, a novel framework that enables end-to-end measurement to accurately detect the censorship activities and reveal the censor deployment without manual efforts. The core of Disguiser is a control server that replies with a static payload to provide the ground truth of server responses. As such, we send requests from various types of vantage points across the world to our control server, and the censorship activities can be recognized if a vantage point receives a different response. In particular, we design and conduct a cache test to pre-exclude the vantage points that could be interfered by cache proxies along the network path. Then we perform application traceroute towards our control server to explore censors' behaviors and their deployment. With Disguiser, we conduct 58 million measurements from vantage points in 177 countries. We observe 292 thousand censorship activities that block DNS, HTTP, or HTTPS requests inside 122 countries, achieving a 10^-6 false positive rate and zero false negative rate. Furthermore, Disguiser reveals the censor deployment in 13 countries.
### Monetizing Spare Bandwidth: The Case of Distributed VPNs
• Yunming Xiao
• Matteo Varvello
• Aleksandar Kuzmanovic
### MalRadar: Demystifying Android Malware in the New Era
• Liu Wang
• Haoyu Wang
• Ren He
• Ran Tao
• Guozhu Meng
• Xiapu Luo
• Xuanzhe Liu
A reliable and up-to-date malware dataset is critical to evaluate the effectiveness of malware detection approaches. Although there are several widely-used malware benchmarks in our community (e.g., MalGenome, Drebin, Piggybacking and AMD, etc.), these benchmarks face several limitations including out-of-date, size, coverage, and reliability issues, etc. In this paper, we first make effort to create MalRadar, a growing and up-to-date Android malware dataset using the most reliable way, i.e., by collecting malware based on the analysis reports of security experts. We have crawled all the mobile security related reports released by ten leading security companies, and used an automated approach to extract and label the useful ones describing new Android malware and containing Indicators of Compromise (IoC) information. We have successfully compiled MalRadar, a dataset that contains 4,534 unique Android malware samples (including both apks and metadata) released from 2014 to April 2021 by the time of this paper, all of which were manually verified by security experts with detailed behavior analysis. Then we characterize the MalRadar dataset from malware distribution channels, app installation methods, malware activation, malicious behaviors and anti-analysis techniques. We further investigate the malware evolution over the last decade. At last, we measure the effectiveness of commercial anti-virus engines and malware detection techniques on detecting malware in MalRadar. Our dataset can be served as the representative Android malware benchmark in the new era, and our observations can positively contribute to the community and boost a series of studies on mobile security.
### Trade or Trick?: Detecting and Characterizing Scam Tokens on Uniswap Decentralized Exchange
• Pengcheng Xia
• Haoyu Wang
• Bingyu Gao
• Weihang Su
• Zhou Yu
• Xiapu Luo
• Chao Zhang
• Xusheng Xiao
• Guoai Xu
### Cerberus: The Power of Choices in Datacenter Topology Design - A Throughput Perspective
• Chen Griner
• Johannes Zerwas
• Andreas Blenk
• Stefan Schmid
• Chen Avin
The bandwidth and latency requirements of modern datacenter applications have led researchers to propose various topology designs using static, dynamic demand-oblivious (rotor), and/or dynamic demand-aware switches. However, given the diverse nature of datacenter traffic, there is little consensus about how these designs would fare against each other. In this work, we analyze the throughput of existing topology designs under different traffic patterns and study their unique advantages and potential costs in terms of bandwidth and latency "tax''. To overcome the identified inefficiencies, we propose Cerberus, a unified, two-layer leaf-spine optical datacenter design with three topology types. Cerberus systematically matches different traffic patterns with their most suitable topology type: e.g., latency-sensitive flows are transmitted via a static topology, all-to-all traffic via a rotor topology, and elephant flows via a demand-aware topology. We show analytically and in simulations that Cerberus can improve throughput significantly compared to alternative approaches and operate datacenters at higher loads while being throughput-proportional.
### Large-System Insensitivity of Zero-Waiting Load Balancing Algorithms
• Xin Liu
• Kang Gong
• Lei Ying
This paper studies the sensitivity (or insensitivity) of a class of load balancing algorithms that achieve asymptotic zero-waiting in the sub-Halfin-Whitt regime, named LB-zero. Most existing results on zero-waiting load balancing algorithms assume the service time distribution is exponential. This paper establishes the large-system insensitivity of LB-zero for jobs whose service time follows a Coxian distribution with a finite number of phases. This result suggests that LB-zero achieves asymptotic zero-waiting for a large class of service time distributions. To prove this result, this paper develops a new technique, called "Iterative State-Space Peeling'' (or ISSP for short). ISSP first identifies an iterative relation between the upper and lower bounds on the queue states and then proves that the system lives near the fixed point of the iterative bounds with a high probability. Based on ISSP, the steady-state distribution of the system is further analyzed by applying Stein's method in the neighborhood of the fixed point. ISSP, like state-space collapse in heavy-traffic analysis, is a general approach that may be used to study other complex stochastic systems.
### Mean Field and Refined Mean Field Approximations for Heterogeneous Systems: It Works!
• Sebastian Allmeier
• Nicolas Gast
Mean field approximation is a powerful technique to study the performance of large stochastic systems represented as n interacting objects. Applications include load balancing models, epidemic spreading, cache replacement policies, or large-scale data centers. Mean field approximation is asymptotically exact for systems composed of n homogeneous objects under mild conditions. In this paper, we study what happens when objects are heterogeneous. This can represent servers with different speeds or contents with different popularities. We define an interaction model that allows obtaining asymptotic convergence results for stochastic systems with heterogeneous object behavior and show that the error of the mean field approximation is of order O(1/n). More importantly, we show how to adapt the refined mean field approximation, developed by the authors of Gast et al. 2019, and show that the error of this approximation is reduced to O(1/n2). To illustrate the applicability of our result, we present two examples. The first addresses a list-based cache replacement model, RANDOM(m), which is an extension of the RANDOM policy. The second is a heterogeneous supermarket model. These examples show that the proposed approximations are computationally tractable and very accurate. For moderate system sizes (n ≈ 30) the refined mean field approximation tends to be more accurate than simulations for any reasonable simulation time.
## SESSION: Session: Optimization II
### Asymptotic Convergence Rate of Dropout on Shallow Linear Neural Networks
• Albert Senen-Cerda
• Jaron Sanders
We analyze the convergence rate of gradient flows on objective functions induced by Dropout and Dropconnect, when applying them to shallow linear Neural Networks (NNs) ---which can also be viewed as doing matrix factorization using a particular regularizer. Dropout algorithms such as these are thus regularization techniques that use {0,1} -valued random variables to filter weights during training in order to avoid coadaptation of features. By leveraging a recent result on nonconvex optimization and conducting a careful analysis of the set of minimizers as well as the Hessian of the loss function, we are able to obtain (i) a local convergence proof of the gradient flow and (ii) a bound on the convergence rate that depends on the data, the dropout probability, and the width of the NN. Finally, we compare this theoretical bound to numerical simulations, which are in qualitative agreement with the convergence bound and match it when starting sufficiently close to a minimizer.
### Robustness and Consistency in Linear Quadratic Control with Untrusted Predictions
• Tongxin Li
• Ruixiao Yang
• Guannan Qu
• Guanya Shi
• Chenkai Yu
• Steven Low
We study the problem of learning-augmented predictive linear quadratic control. Our goal is to design a controller that balances "consistency", which measures the competitive ratio when predictions are accurate, and "robustness", which bounds the competitive ratio when predictions are inaccurate. We propose a novel λ-confident controller and prove that it maintains a competitive ratio upper bound of 1 + min{O(λ²ε) + O((1-λ)²), O(1) + O(λ²)} where λ ∈ [0,1] is a trust parameter set based on the confidence in the predictions, and ε is the prediction error. Further, motivated by online learning methods, we design a self-tuning policy that adaptively learns the trust parameter λ with a competitive ratio that depends on ε and the variation of system perturbations and predictions. We show that its competitive ratio is bounded from above by 1 + O(ε)/(Θ(1) + Θ(ε)) + O(μ_Var) where μ_Var measures the variation of perturbations and predictions. It implies that by automatically adjusting the trust parameter online, the self-tuning scheme ensures a competitive ratio that does not scale up with the prediction error ε.
### Stationary Behavior of Constant Stepsize SGD Type Algorithms: An Asymptotic Characterization
• Zaiwei Chen
• Shancong Mou
• Siva Theja Maguluri
Stochastic approximation (SA) and stochastic gradient descent (SGD) algorithms are work-horses for modern machine learning algorithms. Their constant stepsize variants are preferred in practice due to fast convergence behavior. However, constant stepsize SA algorithms do not converge to the optimal solution, but instead have a stationary distribution, which in general cannot be analytically characterized. In this work, we study the asymptotic behavior of the appropriately scaled stationary distribution, in the limit when the constant stepsize goes to zero. Specifically, we consider the following three settings: (1) SGD algorithm with a smooth and strongly convex objective, (2) linear SA algorithm involving a Hurwitz matrix, and (3) nonlinear SA algorithm involving a contractive operator. When the iterate is scaled by 1/√α, where α is the constant stepsize, we show that the limiting scaled stationary distribution is a solution of an implicit equation. Under a uniqueness assumption (which can be removed in certain settings) on this equation, we further characterize the limiting distribution as a Gaussian distribution whose covariance matrix is the unique solution of an appropriate Lyapunov equation. For SA algorithms beyond these cases, our numerical experiments suggest that unlike central limit theorem type results: (1) the scaling factor need not be 1/√α, and (2) the limiting distribution need not be Gaussian. Based on the numerical study, we come up with a heuristic formula to determine the right scaling factor, and make a connection to the Euler-Maruyama discretization scheme for approximating stochastic differential equations.
### Learning To Maximize Welfare with a Reusable Resource
• Matthew Faw
• Constantine Caramanis
• Sanjay Shakkottai
## SESSION: Session: Miscellaneous
### Free2Shard: Adversary-resistant Distributed Resource Allocation for Blockchains
• Ranvir Rana
• Sreeram Kannan
• David Tse
• Pramod Viswanath
In this paper, we formulate and study a new, but basic, distributed resource allocation problem arising in scaling blockchain performance. While distributed resource allocation is a well-studied problem in networking, the blockchain setting additionally requires the solution to be resilient to adversarial behavior from a fraction of nodes. Scaling blockchain performance is a basic research topic; a plethora of solutions (under the umbrella of sharding) have been proposed in recent years. Although the various sharding solutions share a common thread (they cryptographically stitch together multiple parallel chains), architectural differences lead to differing resource allocation problems. In this paper we make three main contributions: (a) we categorize the different sharding proposals under a common architectural framework, allowing for the emergence of a new, uniformly improved, uni-consensus sharding architecture. (b) We formulate and exactly solve a core resource allocation problem in the uni-consensus sharding architecture -- our solution, Free2shard, is adversary-resistant and achieves optimal throughput. The key technical contribution is a mathematical connection to the classical work of Blackwell approachability in dynamic game theory. (c) We implement the sharding architecture atop a full-stack blockchain in 3000 lines of code in Rust -- we achieve a throughput of more than 250,000 transactions per second with 6 shards, a vast improvement over state-of-the-art.
### Age-Dependent Differential Privacy
• Meng Zhang
• Ermin Wei
• Randall Berry
• Jianwei Huang
The proliferation of real-time applications has motivated extensive research on analyzing and optimizing data freshness in the context of age of information. However, classical frameworks of privacy (e.g., differential privacy (DP)) have overlooked the impact of data freshness on privacy guarantees, and hence may lead to unnecessary accuracy loss when trying to achieve meaningful privacy guarantees in time-varying databases. In this work, we introduce age-dependent DP, taking into account the underlying stochastic nature of a time-varying database. In this new framework, we establish a connection between classical DP and age-dependent DP, based on which we characterize the impact of data staleness and temporal correlation on privacy guarantees. Our characterization demonstrates that aging, i.e., using stale data inputs and/or postponing the release of outputs, can be a new strategy to protect data privacy in addition to noise injection in the traditional DP framework. Furthermore, to generalize our results to a multi-query scenario, we present a sequential composition result for age-dependent DP. We then characterize and achieve the optimal tradeoffs between privacy risk and utility. Finally, case studies show that, when achieving a target of an arbitrarily small privacy risk in a single-query case, the approach of combining aging and noise injection can achieve a bounded accuracy loss, whereas using noise injection only (as in the DP benchmark) will lead to an unbounded accuracy loss.
### Unleashing the Power of Paying Multiplexing Only Once in Stochastic Network Calculus
• Anne Bouillard
• Paul Nikolaus
• Jens Schmitt
The stochastic network calculus (SNC) holds promise as a versatile and uniform framework to calculate probabilistic performance bounds in networks of queues. A great challenge to accurate bounds and efficient calculations are stochastic dependencies between flows due to resource sharing inside the network. However, by carefully utilizing the basic SNC concepts in the network analysis the necessity of taking these dependencies into account can be minimized. To that end, we unleash the power of the pay multiplexing only once principle (PMOO, known from the deterministic network calculus) in the SNC analysis. We choose an analytic combinatorics presentation of the results in order to ease complex calculations. In tree-reducible networks, a subclass of general feedforward networks, we obtain an effective analysis in terms of avoiding the need to take internal flow dependencies into account. In a comprehensive numerical evaluation, we demonstrate how this unleashed PMOO analysis can reduce the known gap between simulations and SNC calculations significantly, and how it favourably compares to state-of-the art SNC calculations in terms of accuracy and computational effort. Motivated by these promising results, we also consider general feedforward networks, when some flow dependencies have to be taken into account. To that end, the unleashed PMOO analysis is extended to the partially dependent case and a case study of a canonical topology, known as the diamond network, is provided, again displaying favourable results over the state of the art. |
# Named Pipes
A little-known way to connect with an Erlang node that requires no explicit distribution is through named pipes. This can be done by starting Erlang with run_erl, which wraps Erlang in a named pipe [5]:
There is another little-known way of connecting to an Erlang node: named pipes, which do not require explicitly specifying a node name. This can be done by starting Erlang with run_erl [5].
-------------------------------------------------------------------------------------
$ run_erl /tmp/erl_pipe /tmp/log_dir "erl"
-------------------------------------------------------------------------------------
The first argument is the name of the file that will act as the named pipe. The second one is where logs will be saved [6]. To connect to the node, you use the to_erl program:
The first argument specifies the file that acts as the named pipe, and the second specifies the directory where the logs are saved [6]. You can connect to the node using the to_erl program:
-------------------------------------------------------------------------------------
$ to_erl /tmp/erl_pipe
Attaching to /tmp/erl_pipe (^D to exit)
1>
-------------------------------------------------------------------------------------
And the shell is connected. Closing stdio (with ˆD) will disconnect from the shell while leaving it running.
Once connected, you can use Ctrl+D to disconnect from the remote node (this only disconnects; it does not terminate the remote node).
[5] "erl" is the command being run. Additional arguments can be added after it. For example "erl +K true" will turn kernel polling on.
[6] Using this method ends up calling fsync for each piece of output, which may give quite a performance hit if a lot of IO is taking place over standard output.
[Note 5]: "erl" is the command being run; you can also append other startup options after it, e.g. "erl +K true" turns kernel polling on.
[Note 6]: This method ends up calling fsync for each piece of output, so if a lot of data goes over standard output, performance can drop significantly.
# Brute-force string generator
I have created a brute-force algorithm class in C# and I was wondering if I could speed up the process of creating a new string. There are two methods inside the class: one returns the brute-force string and one returns the number of strings created.
public class BruteForce
{
public char[] CharList = new char[] { 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' };
public List<int> IntList = new List<int> { 0 };
public ulong IntPasswordsGenerated = 0; // number of strings generated so far (declaration reconstructed; the counter is referenced below)
public string GenerateString()
{
int Number = IntList.Count - 1;
IntPasswordsGenerated++; // reconstructed: advance the counter checked on the next line
if (IntPasswordsGenerated == 1) return CharList[0].ToString();
do
{
IntList[Number]++;
if (IntList[Number] == CharList.Length && Number == 0)
{
IntList[Number] = 0;
break;
}
else if (IntList[Number] == CharList.Length)
{
IntList[Number] = 0;
Number--;
continue;
}
else
{
break;
}
} while (true);
string BruteForceString = "";
foreach (int CurrentInt in IntList)
BruteForceString += CharList[CurrentInt];
return BruteForceString;
}
public ulong PasswordsGenerated() // reconstructed from the surrounding discussion: returns the number of strings created
{
return IntPasswordsGenerated;
}
}
What you have implemented is a back-to-basics number-system in base 62. Your IntList List is a mechanism of having the equivalent of "units", "tens", "hundreds", except you are in base 62 so it's "units", "sixty-twos", .....
When you increment a value, and it overflows, you then set it back to zero, and increment the next column instead.
In addition, you have a special case for the first time around, where you have to bypass the logic to ensure you have the right initial value returned.
So, there's a trick for the first time around that's easy to implement. The trick is to think of the array as holding the "next" value to return. Your code then becomes:
string BruteForceString = "";
foreach (int CurrentInt in IntList)
BruteForceString += CharList[CurrentInt];
// Now calculate the **next** password
......
return BruteForceString;
Note how your initial array already contains the "next" password when initialized. Then, you can convert that to a string, and then increment the array to be ready for the next call.
But, I want to suggest that you are doing it all horribly wrong... ;-)
What you should be doing is using a simple long value to store your next value, and then using integer division and modulo to work your base-62 system.
Additionally, your variable names are quite horrible.... but that's OK, we will get rid of them all except the worst....
private ulong IntPasswordsGenerated = 0;
public string GenerateString()
{
    ulong radix = (ulong)CharList.Length; // 62; renamed from "base", which is a reserved word in C#
    // get the current value, increment the next.
    ulong current = IntPasswordsGenerated++;
    string result = "";
    do
    {
        // peel off the least-significant base-62 digit and prepend its character
        result = CharList[(int)(current % radix)] + result;
        current /= radix;
    } while (current != 0);
    return result;
}
The magic is almost entirely in this line here:
ulong current = IntPasswordsGenerated++;
That takes a copy of the "next" password, and then afterwards increments what the next password will be.
Using division and modulo is likely a lot faster than the multiple loops required to check and validate the multiple array elements in your custom number system.
Finally, by taking a copy of the next password, and by keeping the password as a single ulong instance variable, it becomes relatively simple to lock on a small section of code and then make your method able to run in multiple threads... allowing you to brute force more passwords at once through parallel processing. All you need to do that is drop the ulong to a long, and then use the Interlocked.Increment() method to do a thread-safe, atomic increment (assuming you have a 64-bit machine).
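For illustration only (not from the original review, and in C rather than C#): the same "each caller atomically claims the next counter value" idea looks like this with C11 atomics; the names are made up for the example.

#include <stdatomic.h>

static atomic_ullong next_password = 0;

/* Returns a value that no other thread will ever receive, so each caller
 * can turn it into its own base-62 string without any locking. */
static unsigned long long claim_next(void)
{
    return atomic_fetch_add(&next_password, 1); /* yields the pre-increment value */
}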
You are always writing the type of variables inside the variables' names, for example: IntPasswordsGenerated. Remove them, as the compiler already takes care of types for you.
• I just added that because "PasswordsGenerated" was already used inside the class as a method – Sam Sep 7 '15 at 9:57
• @Sam the method is a getter, so I would rename it getPasswordsGenerated – Caridorc Sep 7 '15 at 10:09
• @Caridorc that's Java not C#, it should be a property: public ulong PasswordsGenerated { get; private set; } – Johnbot Sep 7 '15 at 14:36 |
buildcustom {buildmer} R Documentation
## Use buildmer to perform stepwise elimination using a custom fitting function
### Description
Use buildmer to perform stepwise elimination using a custom fitting function
### Usage
buildcustom(
formula,
data = NULL,
fit = function(p, formula) stop("'fit' not specified"),
crit = function(p, ref, alt) stop("'crit' not specified"),
elim = function(x) stop("'elim' not specified"),
REML = FALSE,
buildmerControl = buildmerControl()
)
### Arguments
formula: See the general documentation under buildmer-package
data: See the general documentation under buildmer-package
fit: A function taking two arguments, of which the first is the buildmer parameter list p and the second one is a formula. The function must return a single object, which is treated as a model object fitted via the provided formula. The function must return an error ('stop()') if the model does not converge
crit: A function taking one argument and returning a single value. The argument is the return value of the function passed in fit, and the returned value must be a numeric indicating the goodness of fit, where smaller is better (like AIC or BIC).
elim: A function taking one argument and returning a single value. The argument is the return value of the function passed in crit, and the returned value must be a logical indicating if the small model must be selected (return TRUE) or the large model (return FALSE)
REML: A logical indicating if the fitting function wishes to distinguish between fits differing in fixed effects (for which p$reml will be set to FALSE) and fits differing only in the random part (for which p$reml will be TRUE). Note that this ignores the usual semantics of buildmer's optional REML argument, because they are redundant: if you wish to force REML on or off, simply code it so in your custom fitting function.
buildmerControl: Control arguments for buildmer — see the general documentation under buildmerControl
### See Also
buildmer-package
### Examples
## Use \code{buildmer} to do stepwise linear discriminant analysis
library(buildmer)
migrant[,-1] <- scale(migrant[,-1])
flipfit <- function (p,formula) {
# The predictors must be entered as dependent variables in a MANOVA
# (i.e. the predictors must be flipped with the dependent variable)
Y <- model.matrix(formula,migrant)
m <- lm(Y ~ 0+migrant$changed)
# the model may error out when asking for the MANOVA
test <- try(anova(m))
if (inherits(test,'try-error')) test else m
}
crit.F <- function (p,a,b) {
# use whole-model F
pvals <- anova(b)$'Pr(>F)' # not valid for backward!
pvals[length(pvals)-1]
}
crit.Wilks <- function (p,a,b) {
if (is.null(a)) return(crit.F(p,a,b)) #not completely correct, but close as F approximates X2
Lambda <- anova(b,test='Wilks')$Wilks[1]
p <- length(coef(b))
n <- 1
m <- nrow(migrant)
Bartlett <- ((p-n+1)/2-m)*log(Lambda)
pchisq(Bartlett,n*p,lower.tail=FALSE)
}
# First, order the terms based on Wilks' Lambda |
# square that can be written as mean of as many pairs of squares as possible
If $b^2=\displaystyle\frac{a_1^2+c_1^2}2$ and $b^2=\displaystyle\frac{a_2^2+c_2^2}2$... and $b^2=\displaystyle\frac{a_n^2+c_n^2}2$, what is the largest possible value of $n$, or can $n$ be arbitrarily large?
e.g for $n=7$, $$325^2=\frac{49^2+457^2}2=\frac{65^2+455^2}2=\frac{115^2+445^2}2=\frac{175^2+425^2}2$$ $$=\frac{221^2+403^2}2=\frac{235^2+395^2}2=\frac{287^2+359^2}2,$$ and $$425^2=\frac{7^2+601^2}2=\frac{85^2+595^2}2=\frac{175^2+575^2}2=\frac{205^2+565^2}2$$ $$=\frac{289^2+527^2}2=\frac{329^2+503^2}2=\frac{355^2+485^2}2.$$
Taking norms of both sides of suitable factorizations in the ring of Gaussian integers $\mathbf{Z}[i]$ would explain these. $N(a+bi)=a^2+b^2$, $N(1+i)=2$, $N((a+bi)(c+di))=N(a+bi)N(c+di)$. Furthermore $p=N(a+bi)$ is solvable for any prime $p\equiv 1\pmod4.$ – Jyrki Lahtonen Dec 27 '11 at 8:18
Have you tried working over the complex numbers? Factoring $a^2 + b^2 = (a+bi)(a-bi)$ might be interesting, since you're actually looking for an arbitrary large possibility of factorizations into two complex conjugates of a number of the form $2n^2$. I believe there are many possibilities but I am not in shape, number-theoretically speaking, to work it out right now. – Patrick Da Silva Dec 27 '11 at 8:19
You are asking for an integer ($2b^2)$ that can be written as a sum of two squares in as many ways as possible, one of which is a sum of two equal squares. By the result that I cited in this answer, there is no bound to this number of ways. Concretely, for instance $2\times25^n$ can be written as a sum of two squares in $4(2n+1)$ ways; these include $4$ ways of the form $(\pm5^n)^2+(\pm5^n)^2$ that you don't want to count and the remaining $8n$ possibilities come in symmetry classes of $8$ (by signs and order of terms), for a total of $n$ classes.
In terms of your question, the square $(5^n)^2$ can be written as the average of two squares in $n$ non-equivalent ways. To find these expressions, take the Gaussian integers $1+\mathbf i$ and $2n$ copies of $2+\mathbf i$, conjugate $i$ of the latter for $0\leq i<n$ and multiply everything together; the resulting $n$ Gaussian integers are all of norm-squared equal to $2\times25^n$, and their real and imaginary parts provide $n$ non-equivalent pairs of numbers, the averages of whose squares is $25^n=(5^n)^2$.
This is not the most economic way to get lots of expressions; it would be better to combine distinct prime numbers congruent to $1$ modulo $4$, rather than to take powers of one of them namely $5$. This explains your examples $325=5^2\times13$ and $425=5^2\times17$.
Added: The general formula for the number of solutions for writing $N^2$ as the average of a set of two squares of distinct positive numbers, in terms of the prime factorization of $N$, is as follows: only the primes congruent to $1$ modulo $4$ contribute; multiply together for every nonzero multiplicity $m$ of such a prime the numbers $2m+1$, subtract $1$ from the the (odd) product so obtained, and divide by $2$ (which accounts for the ignored order). So for $N=325=5^2\times13$ and $N=425=5^2\times17$ one gets $\frac{5\times3-1}2=7$ solutions, as indicated. Another value with many solutions is $N=5\times13\times17=1105$, namely $\frac{3\times3\times3-1}2=13$ solutions.
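A quick computational check of these counts (not part of the original answer; plain C, with illustrative names):

#include <stdio.h>

/* Count pairs 0 < a < c with a^2 + c^2 = 2*b^2, i.e. the number of ways b^2
 * is the average of two distinct squares (order and signs ignored). */
static int count_pairs(long long b)
{
    long long target = 2 * b * b;
    long long a = 1, c = 2 * b; /* (2b)^2 > 2b^2, so this is a safe upper start */
    int count = 0;
    while (a < c) {
        long long s = a * a + c * c;
        if (s == target) { ++count; ++a; --c; }
        else if (s < target) ++a;
        else --c;
    }
    return count;
}

int main(void)
{
    printf("b = 325 : %d\n", count_pairs(325));  /* 7, matching the question */
    printf("b = 425 : %d\n", count_pairs(425));  /* 7 */
    printf("b = 1105: %d\n", count_pairs(1105)); /* 13, matching the formula above */
    return 0;
}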
If one wants to disqualify solutions with a nontrivial common factor, as Jyrki Lahtonen suggests (I wouldn't know why), then each appropriate prime only contributes a factor $2$ independently of its multiplicity rather than $2m+1$ (but subtracting $1$ is omitted). This is beacuse mixing a Gaussian integer in a product with its complex conjugate introduces an integer factor, which will be common to the real and imaginary parts. In this variant one retains only $\frac{2\times2}2=2$ solutions for $N\in\{325,425\}$, and only $\frac{2\times2\times2}2=4$ solutions for $N=1105$ (namely $(73,1561)$, $(367,1519)$, $(809,1337)$, $(1057,1151)$). Even with this restriction the number will still be unbounded as $N$ acquires more and more useful prime factors.
+1: A general formula for $n$ is probably out there. But it is a little bit trickier, when more than one conjugate pair of primes of $\mathbf{Z}[i]$ is involved, because you can allow a subset of them to be used in a way that the resulting factor is real. This shows in @Angela's list, where some pairs $(a,c)$ have a common factor 5 or 25. – Jyrki Lahtonen Dec 27 '11 at 8:57
From a beautiful theorem of Jacobi, the number of ways of writing an integer as a sum of two squares is equal to four times its number of divisors of the form $4n+1$ minus four times its number of divisors of the form $4n+3$. You can find an elementary proof of this result in Hardy & Wright, or at the end of my answer to this question if you are familiar with the basics of $L$-functions.
Hence if we choose $b$ to be, say, only divisible by primes of the form $4n+1$ then we can find arbitrarily many solutions to $2b^2=a^2+c^2$ with a fixed value of $b$.
Note: It can be easily shown that every solution of $a^2+c^2=2b^2$ in integers can be written as
$$(n^2-2mn-m^2)^2+(n^2+2mn-m^2)^2 = 2(m^2+n^2)^2$$
where $m,n$ are any integers. In particular, $b$ is itself always a sum of two squares, and we see that the number of solutions of $a^2+c^2=2b^2$ for a fixed value of $b$ is at most equal to the number of ways of writing $b$ as a sum of two squares.
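A one-line verification of this identity (not in the original answer): writing $A=n^2-m^2$ and $B=2mn$, the two squares are $(A-B)^2$ and $(A+B)^2$, and
$$(A-B)^2+(A+B)^2=2A^2+2B^2=2\left((n^2-m^2)^2+4m^2n^2\right)=2(m^2+n^2)^2.$$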
I'm somewhat confused by the reference to the result by Jacobi, which I did not know about. Using it seems all right if you are interested in counting divisors by their classes mod $4$, alternatingly, but when interested in expressions as sums of two squares, it seems rather inefficient. This is because suffices to just look at the prime factors, and only those of the form $4n+1$ (see my answer). So why would one use a result that requires enumerating all divisors instead? – Marc van Leeuwen Dec 27 '11 at 11:05
Dear Marc, I am not sure where your concerns lie. What do you mean by "inefficient"? Jacobi's theorem gives the exact number of representations as sums of two squares. It is not an algorithm, so I'm not sure whether it can be called "efficient" or "inefficient". Prime factors are one particular case of divisors, are they not? Your answer is perfectly fine and is in accordance with Jacobi's theorem. Regards, – Bruno Joyal Dec 27 '11 at 18:32
If you can describe the exact number of representations as sums of two squares either by just considering only the multiplicities of primes of the form $4n+1$ in $N$ (add one to each and multiply), or alternatively by having to count all odd divisors and doing an alternating summation, then the first method seems preferable. The number of divisors grows exponentially with the number of prime factors; also a multiplicative formula is preferable to a counting formula. I just can't see the point of using Jacobi's description unless the first one is unknown, which I cannot imagine. – Marc van Leeuwen Dec 28 '11 at 13:02 |
Solution: What sum of money lent out at simple interest at 9% p.a. for 3/2 years will produce the same interest as Rs 2250 lent at 6% p.a. for 5 years?
Here P = Rs. 2250, N = 5 years, R = 6%
SI = (P*R*N)/100
SI = (2250 * 6 * 5) / 100 = 675
Now, SI = Rs. 675, N = 3/2 years, R = 9%
SI = (P*R*N)/100
P = (SI * 100) / (R * N) = (675 * 100) / (9 * 3/2) = 5000
The sum of money is Rs. 5000.
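A small check of the two steps above, not part of the original page (C, illustrative names):

#include <stdio.h>

/* Simple interest: SI = P*R*N/100. */
static double simple_interest(double p, double r, double n) { return p * r * n / 100.0; }

int main(void)
{
    double si = simple_interest(2250, 6, 5); /* 675 */
    double p  = si * 100.0 / (9 * 1.5);      /* principal earning 675 at 9% over 3/2 years: 5000 */
    printf("SI = %.0f, required sum = Rs. %.0f\n", si, p);
    return 0;
}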
# Five years
Nakato Nobuki, a Japanese artist, wants to have $24,000.00 in his savings account at the end of five years. Mr. Nobuki deposits$1,500.00 annually into savings and has a balance of $8,000.00 today. What must the interest rate on the savings account be for Nobuki to achieve his goals? ### Correct answer: i = 12.451 % ### Step-by-step explanation: Did you find an error or inaccuracy? Feel free to write us. Thank you! Tips to related online calculators Our percentage calculator will help you quickly calculate various typical tasks with percentages. #### You need to know the following knowledge to solve this word math problem: ## Related math problems and questions: • Savings The depositor regularly wants to invest the same amount of money in the financial institution at the beginning of the year and wants to save 10,000 euros at the end of the tenth year. What amount should he deposit if the annual interest rate for the annua • Interest What is the annual interest rate on your account if we put$x and after $n days received$y?
• Suppose 3
Suppose that a couple invested Php 50 000 in an account when their child was born, to prepare for the child's college education. If the average interest rate is 4.4% compounded annually, a, Give an exponential model for the situation b, Will the money be
• Interest
Calculate how much you earn for $n years$x deposit if the interest rate is $p% and the interest period is a quarter. • Two years Roy deposited 50,000.00 into his account paying 4% annual interest compounded semi annually. How much is the interest after 2 years? • Annual interest A loan of 10 000 euro is to be repaid in annual payments over 10 years. Assuming a fixed 10% annual interest rate compounded annually, calculate: (a) the amount of each annual repayment (b) the total interest paid. • Compound interest Compound interest: Clara deposited CZK 100,000 in the bank with an annual interest rate of 1.5%. Both money and interest remain deposited in the bank. How many CZK will be in the bank after 3 years? • You take You take out Php 20 000 loan at 5% interest rate. If the interest is compounded annually, a. Give an exponential model for the situation b. How much Will you owe after 10 years? • Balance of account Theo had a balance of -$4 in his savings account. After making a deposit, he has $25 in his account. What is the overall change to his account? • Investment 1000$ is invested at 10% compound interest. What factor is the capital multiplied by each year? How much will be there after n=12 years?
• Deposit
Oh I total of $15,000 deposited into simple interest accounts the annual simple interest rate on one account at 6% the annual simple interest rate on the second account at 7% how much should be invested in each account so that the total interest earned is • Account operations My savings of php 90,000 in a bank earns 6% interest in a year. If i will deposit additional php 10,000 at the end of 6 months, how much money will be left if i withdraw php 25,000 after a year? • Future value Suppose you invested$1000 per quarter over a 15 years period. If money earns an anual rate of 6.5% compounded quarterly, how much would be available at the end of the time period? How much is the interest earn?
• Semiannually compound interest
If you deposit $5000 into an account paying 8.25% annual interest compounded semiannually, how long until there is$9350 in the account?
• Compound interest 4
3600 dollars is placed in an account with an annual interest rate of 9%. How much will be in the account after 25 years, to the nearest cent?
• If you 3
If you deposit $4500 at 5% annual interest compound quarterly, how much money will be in the account after 10 years? • How much 2 How much money would you need to deposit today at 5% annual interest compounded monthly to have$2000 in the account after 9 years? |
# P3610 [USACO17JAN]Cow Navigation奶牛导航
• Accepted: 25
• Submissions: 75
• Problem provider: FarmerJohn2
• Judging: cloud judge
• Tags: USACO 2017
• Difficulty: 普及+/提高 (intermediate)
• Time/memory limits: 1000ms / 128MB
## Problem Description
Bessie has gotten herself stuck on the wrong side of Farmer John's barn again, and since her vision is so poor, she needs your help navigating across the barn.
The barn is described by an $N \times N$ grid of square cells ($2 \leq N \leq 20$ ), some being empty and some containing impassable haybales. Bessie starts in the lower-left corner (cell 1,1) and wants to move to the upper-right corner (cell $N,N$ ). You can guide her by telling her a sequence of instructions, each of which is either "forward", "turn left 90 degrees", or "turn right 90 degrees". You want to issue the shortest sequence of instructions that will guide her to her destination. If you instruct Bessie to move off the grid (i.e., into the barn wall) or into a haybale, she will not move and will skip to the next command in your sequence.
Unfortunately, Bessie doesn't know if she starts out facing up (towards cell 1,2) or right (towards cell 2,1). You need to give the shortest sequence of directions that will guide her to the goal regardless of which case is true. Once she reaches the goal she will ignore further commands.
Bessie has gotten herself stuck on the wrong side of FJ's barn again. Because her eyesight is poor, she needs your help to find her way across.
The barn's floor plan is a square grid of cells, some of which are empty while the others are impassable haybales. Bessie starts in the lower-left corner (cell 1,1) and wants to make her way to the upper-right corner. You can guide her by telling her a sequence of instructions, each of which is "forward", "turn left 90 degrees", or "turn right 90 degrees". You need to find the shortest instruction sequence that gets her to the destination. If you instruct Bessie to move off the grid or into a haybale, she will not move and will skip to the next command in the sequence.
Unfortunately, Bessie does not know which direction she is initially facing (it may be up or right), so the sequence must lead her to the goal in either case.
## Input/Output Format
Input format:
The first line of input contains $N$ .
Each of the $N$ following lines contains a string of exactly $N$ characters, representing the barn. The first character of the last line is cell 1,1. The last character of the first line is cell N, N.
Each character will either be an H to represent a haybale or an E to represent an empty square.
It is guaranteed that cells 1,1 and $N,N$ will be empty, and furthermore it is guaranteed that there is a path of empty squares from cell 1,1 to cell $N, N$ .
Output format:
On a single line of output, output the length of the shortest sequence of directions that will guide Bessie to the goal, irrespective whether she starts facing up or right.
## Sample Input/Output
Sample input #1:
3
EHE
EEE
EEE
Sample output #1:
9
## Notes
Thanks to @lzyzz250 for the Chinese translation.
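One standard way to attack this task (not the official editorial) is a breadth-first search over joint states that track both possible starting orientations at once: each state stores the position and heading of an "up-facing" copy and a "right-facing" copy of Bessie, every instruction is applied to both copies, and a copy that has already reached the goal ignores further commands. The sketch below is a minimal, unofficial Python illustration of that idea; for the sample grid the expected output is 9.

```python
from collections import deque

def solve(grid):
    # grid[0] is the TOP input line, so cell (1,1) is grid[n-1][0] and cell (N,N) is grid[0][n-1]
    n = len(grid)
    start, goal = (n - 1, 0), (0, n - 1)
    dr = [-1, 0, 1, 0]          # 0=up, 1=right, 2=down, 3=left (row/col deltas)
    dc = [0, 1, 0, -1]

    def step(pos, d):
        r, c = pos
        nr, nc = r + dr[d], c + dc[d]
        if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] == 'E':
            return (nr, nc)
        return pos              # blocked by a wall or haybale: Bessie stays put

    init = (start, 0, start, 1)            # (pos_up, dir_up, pos_right, dir_right)
    dist = {init: 0}
    q = deque([init])
    while q:
        state = q.popleft()
        p1, d1, p2, d2 = state
        if p1 == goal and p2 == goal:
            return dist[state]
        for move in ('F', 'L', 'R'):
            np1, nd1, np2, nd2 = p1, d1, p2, d2
            if p1 != goal:                  # a copy at the goal ignores further commands
                if move == 'F':   np1 = step(p1, d1)
                elif move == 'L': nd1 = (d1 - 1) % 4
                else:             nd1 = (d1 + 1) % 4
            if p2 != goal:
                if move == 'F':   np2 = step(p2, d2)
                elif move == 'L': nd2 = (d2 - 1) % 4
                else:             nd2 = (d2 + 1) % 4
            nxt = (np1, nd1, np2, nd2)
            if nxt not in dist:
                dist[nxt] = dist[state] + 1
                q.append(nxt)
    return -1

print(solve(["EHE", "EEE", "EEE"]))   # sample from the statement
```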
# Current formula and form factor
I am currently struggling with the formula for an exact current in QFT, for a fermion with an incoming momentum $p$ and an outgoing momentum $p'$. My problem is to show whether or not a term of the form $\bar{u}(p')F(q^2)\,i\sigma^{\mu\nu}(p+p')_{\nu}\,u(p)$ is allowed, where $\sigma^{\mu\nu}=\frac{i}{2}[\gamma^{\mu},\gamma^{\nu}]$. I would like it to be forbidden by current conservation, but I can't prove it. Is my intuition wrong?
Your term is allowed, and in fact present in QED. It gives the anomalous magnetic form factor contribution to the current. In QED, the number $2F(0)$ is the anomalous magnetic moment of the electron. See Peskin/Schroeder pp. 186-188.
Thanks, managed to show that this term is proportional to $\bar{u}(p')\gamma^{\mu}u(p)$ by using the Dirac equation. – toot Mar 15 '12 at 17:56 |
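For reference, the identity alluded to in the last comment is the Gordon decomposition, which for on-shell spinors of mass $m$, with $q = p' - p$, relates the vector current to the $(p+p')^\mu$ and $\sigma^{\mu\nu}$ structures:

$$\bar{u}(p')\gamma^{\mu}u(p) \;=\; \frac{1}{2m}\,\bar{u}(p')\left[(p+p')^{\mu} + i\sigma^{\mu\nu}q_{\nu}\right]u(p)$$

This is the relation one derives "using the Dirac equation," as mentioned in the comment above; it is how the $(p+p')^\mu$-type structure gets reorganized into the usual form-factor decomposition.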
# Hierarchical Cluster Analysis validation
I have never used Hierarchical Cluster Analysis for inferential statistics before, but the dendrogram it produces provides a nice way to visualise my data. I applied HCA to my variables with the ClustOfVar R package. It broke my variables down into two separate clusters at the top of the dendrogram.
Now, I know that my data conforms with the two-component hypothesis, since I've done Principal Component Analysis, and it indeed yields these two separate components (two-component structure is also theoretically sound).
I want to present the dendrogram simply for the visualisation, but feel uncomfortable doing so without any validation statistics. Are there any basic validation criteria for HCA that do not require robust assumptions, since I'm not using the actual clusters for inference?
Or would it be simply wrong to present an HCA dendrogram without actually using HCA for analysis?
Thank you. |
# MS Thesis: Sleep and alertness in medical interns
This post is an abridged version of a consulting project which culminated in my master's thesis. Most of the verbiage is cut out to highlight the pieces I found most interesting.
TL;DR: Sleep and alertness were statistically comparable between the two shifts, and we could not statistically say that sleep mediated alertness. Nonetheless, the analysis was very thorough and a model example of extracting actionable insight from a variety of data streams, and solely for that it is worth skimming.
# Objective
Our primary objective is to identify whether any difference exists in sleep duration, response speed, or number of attentional lapses between medical interns on day or night shifts. An ancillary objective is to determine whether sleep mediates the effect of shift-type on alertness.
## Motivation
Fatigue contributes to accidents and errors in the workplace, which can be lethal in a medical setting. Two primary exposures to fatigue in the medical setting are sleep deprivation and shift-type (as in day-shift vs night-shift). To combat the increased risk of accidents in night-shift workers found in many recent studies, a regulation was passed in 2011 to limit the length of any shift to 16 hours. Even with this new regulation, there are still serious concerns with night shift work. Therefore my thesis sought to further study sleep and alertness in medical interns, comparing day-shifters against night-shifters.
# Methods
Sleep-wake activity of 49 medical interns on 2-week oncology and pulmonary rotations was measured continuously through wrist-worn actigraphy, and supplemented by daily diary entries. Alertness was measured daily through a brief psychomotor vigilance test (PVT). Generalized linear mixed models (GLMM) fit with inverse probability weights (IPW) were used to evaluate sleep and alertness between these two groups in the presence of missing data. Mediation analyses evaluated whether any existing differences in alertness could be attributed to sleep duration. Sensitivity analyses were used to gauge the influence of the inverse probability weights and generally the influence of missing data on our inference of shift-type.
## Study population
Our study population consists of medical interns primarily on rotation at the Hospital at the University of Pennsylvania’s (HUP’s) oncology or pulmonary department. Each intern was studied for two 2-week periods. One 2-week period was a day shift from 7am-7pm, and the other 2-week period was a night float shift from 7pm-7am. Our three study outcomes were sleep duration, mean reciprocal reaction time, and the number of attentional lapses, where the first measures sleep, and the latter two alertness.
## Outcomes
Sleep is measured continuously throughout the length of each rotation by an actigraph (a wrist-worn, watch-like accelerometer utilizing activity counts and ambient light to classify 1-minute epochs into sleep, wake, or missing periods). We expected missing data because participants were instructed to take off the actiwatch during impact sports, swimming or bathing, and relevant medical procedures. Sleep periods from sleep logs were used to impute missing actigraphy data on the minute-to-minute level.
Alertness outcomes were measured through a validated Psychomotor Vigilance Test (PVT). Mean reciprocal reaction time (MRRT) captures response speed, and is based on the measured reaction time to stimuli presented at random inter-stimulus intervals (2-5 seconds). Response speed is a reciprocal transform of reaction time, where higher values indicate better performance. Attentional lapses are defined as the number of reaction times greater than or equal to 355ms in a 3-min PVT assessment.
Our hypothesized relationship was that night-shifts may have an effect independent of sleep, but at least partly mediated by sleep. All other variables we adjusted for had no bearing on current shift but could have some relation to both of our outcomes.
## Statistical choices
Our primary endpoint was studied through a generalized linear mixed model to account for the repeated measurement structure of the data across all outcomes. Assuming continuous outcome variables have errors generated from a $$N(0,\sigma^2)$$ distribution, we model the mean of these outcomes through the following model: $g(E(Y_i|b))=g(\mu_i^b)=X_i^T\beta+Z_i^Tb$ where we model the subject-specific mean conditional on random effects $$b$$, g() is the link function, $$X_i$$ the subject-specific design matrix for the fixed effects, and $$Z_i$$ the subject-specific design matrix for the random effects $$\mathbf{b}$$, where $$b_i\sim N(0,D(\theta))$$. Exact maximum likelihood inference is feasible when g() is the identity link (so good for sleep duration and response speed) but not when g() is not the identity (i.e. for attentional lapses). This outcome requires an approximation of the integral, since we assume this count has data-generating distribution $$Poisson(\lambda)$$, where g() is the log link function. For this outcome, parameters $$(\beta,\theta)$$ are estimated through Gauss–Hermite quadrature, where we numerically evaluate the integral in $$l(\beta,\theta)$$ using a weighted sum over a set of predefined quadrature points: $L(\beta,\theta)=e^{l(\beta,\theta)}\propto|D|^{-\frac12}\int_{-\infty}^\infty exp \bigg\{\sum_{i=1}^ml_i(Y_i|b,\beta)-\frac12b^TD(\theta)^{-1}b\bigg\}db$ All analyses utilized shift type (day or night) as the primary exposure. Our sleep analysis included precision variables age, sex, and an indicator of whether the intern was on their day-off, while our alertness analyses further included indicators of whether interns consumed caffeine in the last 24 hours, felt fatigued, or were distracted during their PVT.
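As a rough illustration of this modeling setup (not the actual thesis code, and with hypothetical column names), a random-intercept linear mixed model for the continuous outcomes can be fit in Python with statsmodels; the Poisson model for lapse counts would analogously use a log-link GLMM fit by quadrature (e.g., R's lme4::glmer), which plain statsmodels does not expose as directly.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per intern-day
df = pd.read_csv("intern_days.csv")  # assumed columns: intern_id, sleep_min, shift, age, sex, day_off

# Random-intercept model for sleep duration (identity link, Gaussian errors)
model = smf.mixedlm(
    "sleep_min ~ shift + age + sex + day_off",  # fixed effects X*beta
    data=df,
    groups=df["intern_id"],                     # random intercept b_i per intern
)
result = model.fit(reml=True)
print(result.summary())
```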
# Results
Interns slept a statistically insignificant 2.4 minutes less (95% CI: 2.4±16.8min) on night shifts relative to day shifts. In contrast, a statistically significant change in attention-related performance was observed, with a decreased response speed of 0.13 1/sec (95% CI: -0.13±0.10 1/sec) and a multiplicative increase in the number of attentional lapses of 1.7 (95% CI: 0.49±0.21).
## Study population
A total of 62 medical interns were contacted, of which thirteen declined, leaving 49 interns. Defining noncompliance for a shift where less than 80% of Actiwatch data was collected, we further removed three individuals and 1 shift from an individual due to noncompliance, leaving us with 46 interns who completed 68 total shifts for an aggregate 848 days of observation. 22 interns completed day and night shifts, 17 only day- shifts, and 7 only night shifts.
All groups were relatively homogenous in their distribution of age, sex, and length of rotation, barring some small differences in sleep length during work days and off-days. One notable difference was that those who completed only night shifts had more than half of their available sleep data missing or coming from a sleep log, suggesting a relatively poorer measure of sleep duration for this group.
## Sleep
Cut-points for analyses involving sleep were identified by determining the exact point where the proportion of missing actigraphy data dropped below and rose above 20% in the minute-level plots. With a brief exception in day 8, missingness remained below 20% from cut-point 1 (7:26pm on day 1) to cut-point 2 (2:24pm on day 12). Coincidentally, this is around the time devices were issued and collected. All analyses were restricted from day 2-11 to permit all subjects the opportunity to collect sleep duration through both actigraphy and sleep logs.
Aggregating the sleep-wake data at the minute level revealed a cyclical pattern that was consistent throughout the length of the rotation: nearly all subjects were awake at 7am and 7pm (the times at which shifts are relieved), nearly all day-shifters were asleep shortly after midnight, and nearly all night-shifters shortly after noon.
Taking some representative interns who did both shifts, we observed little difference between day-shifts and night-shifts in their subject-specific sleep trajectories. Apart from the occasional spike in sleep duration near the middle of a rotation, sleep duration was relatively constant across day-shifts. One distinction between the two shift types was that night-shifters exhibited greater variability in their sleep duration.
Our alertness analysis was similarly restricted from day 2-11, with the additional caveat that subjects could not perform a PVT on a day-off since these measurements must be obtained in the hospital. Although the average response speed of night-shifters was slightly worse (-0.5 1/sec) than day-shifters, there existed a handful of observations on day-shifts with very poor response speed (0-3 1/sec). Similarly, night-shifters experienced an additional lapse on average relative to day-shifters, but both shifts contained a number of observations where the number of attentional lapses exceeded ten.
## Final results
An IPW-weighted random intercept model, adjusted for age, gender, and whether the intern was on their day-off or not determined that interns on night shifts slept a statistically insignificant 2.0 minutes less (95% CI: -19.2m to 14.4m). On average, a male intern of age 28 slept 6 hours, 47 minutes during a work-day and a statistically different duration of 8 hours, 38 minutes on a day-off (95% CI: 8h,14m to 9h,2m). Apart from day-off, all other variables had small effect sizes which were not significant.
Two separate IPW-weighted random intercept models for MRRT and lapses were fit and adjusted for age, gender, and whether the intern consumed caffeinated beverages or reported feeling sleepy or distracted. We observed a decrease in attention-related performance measured through response speed of 1.38 1/sec (95% CI: 0.43 1/sec to 2.32 1/sec) and a multiplicative increase in the number of attentional lapses of 1.7 (95% CI: 1.4 to 2.0). Statistical significance aside, the magnitude of the effect of night shift on attentional performance and lapses is modest.
# Conclusions
Relative to day shifts, night shifts are significantly associated with alertness. Evidence failed to suggest that sleep duration mediated this effect, or that sleep duration is associated with shift-type. Further sleep research may shed light on qualities of sleep not captured by duration, but instrumental in the effect of shift-type on alertness. We recommend increased vigilance of interns on night-shifts, especially during tasks which may compromise patient safety.
Our collection of objective measures for sleep and alertness and rigorous statistical analysis is motivated by the need for more detailed scientific research as to how sleep affects physical and mental performance. In general these findings agree with similar, previous studies in that unorthodox shifts are related to decreases in performance of operational tasks.
Our failure to find whether sleep mediates the effect of night-shift work on alertness leaves room for alternative explanations. One such thought is that it may not just be how much, but additionally how sleep is obtained on night-shifts which may influence alertness. Night-shift work enforces mandatory naps to recoup lost sleep, and so their average-daily sleep cycle should look different than that of day-shift work. Additionally, our lack of safety outcomes leaves some unresolved ambiguity on whether decreased alertness might also endanger physician and patient safety, or whether this risk may be mitigated by supervision and increased continuity of care.
Under the 2011 ACGME rules and on the basis of our evidence, sleep may be comparable between day-shift and night-shift work, and practically speaking alertness is as well.
# Appendix
## Mediation analysis
To determine whether sleep mediates any differences in alertness, we first estimate the effect of our exposure (shift-type) on our outcome (alertness), then we estimate the effect of our exposure on our mediator (sleep duration), then we re-estimate the effect of our exposure on our outcome, adjusted by our mediator. We conclude that sleep duration mediates this effect if the coefficient of our exposure is no longer statistically significant after adjustment. Our model of choice here is also GLMMs.
Our latter pair of models have already demonstrated an association between our exposure (shift-type) and outcome (alertness). Our next model in this procedure associates sleep duration (potential mediator) with alertness in the absence of shift-type. In this second step, results between the two outcomes were discordant, with sleep duration having a statistically insignificant effect of small magnitude on response speed (-30 1/sec, 95% CI: -10 1/sec to 75 1/sec), but a statistically significant effect on the count of lapses (1.2, 95% CI: 1.1 to 1.3). Our final model relating shift type to lapses in the presence of sleep duration retains all relevant and previous significance levels, although a small increase of 3.9% was noted in the parameter for shift-type. This would imply that in the presence of our mediator, the effect of shift-type on alertness is even more pronounced, which counters our mediation hypothesis.
Considering the small effect size and discordant analyses in the second step, and the change in our exposure in the opposite direction in the final step, it is doubtful whether sleep duration is likely to mediate the effect under study. We conclude our exploration of whether sleep may mediate the relation between sleep and alertness on the basis of insufficient evidence.
## Missing data
Inverse probability weighting (IPW) is commonly used to correct for the bias induced by complete case analysis in the presence of informative missing data. Essentially, the idea is to weight each individual by the inverse of their probability of being observed, so that individuals who are unlikely to be observed are upweighted by a larger magnitude, and those who are more common by a smaller magnitude. Weighting each observation by the inverse probability of being a complete case requires a missingness model. Let $$Y$$ represent the complete data matrix, $$Y_{obs}$$ and $$Y_{mis}$$ the observed and missing cases of $$Y$$ respectively, $$M$$ our missing data indicator, and $$\phi$$ the collection of unknown parameters completely specifying the missingness model. Sufficient predictors are included in $$Z$$ such that our assumption that the data is missing at random (MAR) is feasible: $f(M|Y,\phi)=f(M|Y_{obs},\phi) \ \ \forall \ Y_{mis},\phi$ In the longitudinal setting, provided that the conditional mean model and missingness model are correctly specified, we are guaranteed that the estimate $$\hat\beta$$ is consistent and asymptotically normal. A random-intercept, generalized linear mixed model with a logistic link was used for the missingness model, where the predictors were age, shift type, sex, and indicators of whether the intern was on a day-off, or on their first or last study day.
IPW is limited by one key drawback: large weights (i.e. small predicted probabilities), which may negatively affect inference, potentially yielding parameter estimates with inordinately large variances. One adjustment shown to improve the accuracy and precision of parameter estimates under a misspecified logistic regression model is weight-trimming which we apply. To assess the impact of large weights, we vary our truncation quantiles and assess their effect on the weights and final parameter estimates.
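A minimal sketch of the weighting step, assuming a hypothetical data frame with an `observed` indicator and the predictors named in the text (and ignoring, for brevity, the random intercept used in the actual missingness model):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("intern_days.csv")  # hypothetical columns: observed, age, sex, shift, day_off, first_day, last_day

# Logistic model for the probability of being a complete case (MAR assumed)
miss_model = smf.logit(
    "observed ~ age + sex + shift + day_off + first_day + last_day", data=df
).fit(disp=False)
p_obs = miss_model.predict(df)

# Inverse probability weights, applied to the observed rows in the outcome model
df["ipw"] = 1.0 / p_obs

# Weight trimming: cap weights at, e.g., the 99th percentile to limit variance inflation
cap = df.loc[df["observed"] == 1, "ipw"].quantile(0.99)
df["ipw_trimmed"] = np.minimum(df["ipw"], cap)
```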
Missingness occurred in both outcomes and covariates. Of the days considered in our analysis, 60 of 678 (8.8%) observations were missing for sleep duration, 205 of 602 (34.0%) for response speed and lapses, 206 of 602 (34.2%) for the caffeine indicator, and 392 of 602 (65%) for the fatigued and distracted indicators. Inverse probability weights were obtained through two separate logistic random-intercept models regressed on the sleep and alertness outcomes respectively (see table 3.3). Both models adjusted for age, gender, shift-type, and the sleep model additionally adjusted for whether the subject was on their day off.
Diagnostics revealed that our estimated weights were concentrated around 1 for our sleep outcome, with a maximum weight of 5.2 for sleep, whereas our alertness outcome had 71 weights larger than 3, the largest being 6.8. Although 3 is not terribly influential, we still assessed how the magnitude of these weights affected our final inference. To examine the sensitivity of our outcomes to the magnitude of our weights, we performed weight trimming at various truncation percentiles. We found that estimates and standard errors were insensitive to their presence, most likely because in including the fatigued and distracted indicators, some of the larger weights were excluded from our final analysis.
Does XeF4 have a permanent dipole moment? Of SiF4, SF4, XeF4, and BF3, the answer is SF4.

A molecule is polar when it has a net dipole moment, i.e., when its bond dipoles and lone pairs do not cancel, leaving partial positive and negative charges separated in space. For a pair of charges, the dipole moment is

$$\mu = \delta \times d$$

where δ is the partial charge and d the separation. A molecule can contain polar bonds and still have no net dipole moment if the bond dipoles cancel by symmetry: CO2 is the classic example, since each C=O bond is polar (oxygen is more electronegative than carbon), but the molecule is linear, so the two bond moments cancel and the net dipole moment is zero.

XeF4 has 36 valence electrons (8 from Xe plus 7 from each of the four F atoms) and a square planar geometry; the four Xe–F bond dipoles cancel, so XeF4 has no permanent dipole moment. SiF4 (tetrahedral) and BF3 (trigonal planar) are nonpolar for the same reason. SF4, by contrast, has a seesaw geometry with a lone pair on sulfur; sulfur (2.58) is less electronegative than fluorine (3.98), so each S–F bond has a dipole moment, and in the seesaw arrangement these bond dipoles do not cancel. SF4 therefore has a permanent dipole moment.
# penny: Extensible double-entry accounting system
[ bsd3, console, finance, library ]
Penny is a double-entry accounting system. It is inspired by, but incompatible with, John Wiegley's Ledger, which is available at http://ledger-cli.org/. Installing this package with cabal install will install the executable program and the necessary libraries.
• Penny is a double-entry accounting system. It uses traditional accounting terminology, such as the terms "Debit" and "Credit". If you need a refresher on the basics of double-entry accounting, pick up a used accounting textbook from your favorite bookseller (they can be had cheaply, for less than ten U.S. dollars including shipping) or check out http://www.principlesofaccounting.com/, a great free online text.
• Penny is based around Penny.Lincoln, a core library to represent transactions and postings and their components, such as their amounts and whether they are debits and credits. You can use Lincoln all by itself even if you don't use the other components of Penny, which you may find handy if you are a Haskell programmer. I wrote Penny because I wanted a precise library to represent my accounting data so I could analyze it programatically and verify its consistency.
• Penny's command line interface and its reports give you great flexibility to filter and sort postings. Each posting within a transaction may have its own flags assigned (e.g. to indicate whether the posting is cleared) and each posting may have infinite "tags" assigned to it, giving you another way to categorize your postings. For instance, you might have vacation related postings in several different accounts, but you can give them all a "vacation" tag.
• You can easily build a program to process downloads of Open Financial Exchange data from your financial institution. Your program will merge new transactions into your ledger automatically.
• Full Unicode support.
• Penny's reports have color baked in from the beginning. You do not have to use color, which is handy if you are sending output to a file or if, well, you just don't like color.
• Penny's reports automatically adjust themselves to the width of your screen. You can easily specify how much or how little data to see with command line options.
• Penny handles multiple commodities (for example, multiple currencies, stocks and bonds, tracking other assets, etc.) in an easy and transparent way that is consistent with double-entry accounting principles. It embraces the philosophy outlined in this tutorial on multiple commodity accounting: http://www.mscs.dal.ca/~selinger/accounting/tutorial.html.
• Penny stores amounts using only integers. This ensures the accuracy of your data, as using floating point values to represent money is a bad idea. Here is one explanation: http://stackoverflow.com/questions/3730019/why-not-use-double-or-float-to-represent-currency. The use of integer arithmetic also makes Penny simpler internally, as there is no need for arbitrary rounding to compensate for the bizarre and inaccurate results that sometimes arise from the use of floating-point values to represent currencies.
• Freely licensed under the MIT license. If you take this code, improve it, lock it up and make it proprietary, and sell it, AWESOME! I haven't lost anything because I still have my code and, what's more, then maybe I can buy your product and not have to maintain this one any more!
• Tested using QuickCheck. The tests are available in the Git repository that also contains the main library. Not everything is tested, but the tests that exist so far have already rooted out some strange corner-case bugs.
• Written in Haskell. Yes, I think Haskell is the best tool ever, but its compiler is not as commonly installed as compilers for C or C++, and non-Haskellers will probably find Penny to be more difficult to install than Ledger, as the latter is written in C++.
• Handling commodities requires that you set up multiple accounts; some might find this cumbersome.
• Young and not well tested yet.
• Runs only on Unix-like operating systems.
• Full Penny functionality is available without a Haskell compiler; you could even use a pre-compiled binary. However, Penny does not read configuration files at runtime; instead, to change the default settings, you will need to have GHC installed so that you can compile a custom binary.
• Can be slow and memory hungry with large data sets. I have a ledger file with about 28,000 lines. On my least capable machine (which has an Intel Core 2 Duo at 1.6 GHz) this takes about 1.4 seconds to parse. Not horrible but not instantaneous either. Generating a report about all these transactions can take about seven seconds and a little less than 300 MB of memory. I have eliminated all the obvious slowness from the code and attempted a rewrite of the parser, which made no difference; other ideas to speed up Penny with large data sets would involve substantial changes and this is not at the top of my list because the program is currently usable with relatively recent hardware.
Unfortunately running "cabal install" will not install the documentation, so you will need to find the downloaded archive (usually in "$HOME/.cabal/packages/hackage.haskell.org/penny") and unpack it to see the documentation. You will want to start by reading the README file, which will point you to additional documentation and how to install it if you wish.

## Flags

| Name | Description | Default | Type |
|------|-------------|---------|------|
| build-gibberish | Build the penny-gibberish executable | Disabled | Automatic |
| build-penny | Build the penny executable | Enabled | Automatic |
| build-selloff | Build the penny-selloff executable | Enabled | Automatic |
| build-diff | Build the penny-diff executable | Enabled | Automatic |
| build-reprint | Build the penny-reprint executable | Enabled | Automatic |
| build-reconcile | Build the penny-reconcile executable | Enabled | Automatic |
| debug | turns on some debugging options | Disabled | Automatic |
| test | enables QuickCheck tests | Disabled | Automatic |
| incabal | enables imports that only Cabal makes available | Enabled | Automatic |

Use -f <flag> to enable a flag, or -f -<flag> to disable that flag.

## Readme for penny-0.32.0.0

Welcome to Penny, double-entry accounting. Penny's web pages are at: http://massysett.github.com/penny http://hackage.haskell.org/package/penny http://github.com/massysett/penny

Versions that contain at least one odd number are development versions. They are not posted to Hackage. I try to keep the master branch in compilable shape. However, development versions may not pass all tests, and in particular they may have out of date or incomplete documentation.

Releases consist of code of reasonable quality. All of the groups in their release numbers are even.

Penny is licensed under the MIT license; see the LICENSE file.

To install the latest release, "cabal install penny" should work. To also build test executables, run "cabal install -ftest penny". That will give you two additional executables: penny-test, which when run will test a bunch of QuickCheck properties, and penny-gibberish, which prints a random, but valid, ledger file.

To install the manual pages and the documentation, run "sh install-docs". It will install the manual pages to $PREFIX/share/man and the other documentation to $PREFIX/share/doc/penny. By default $PREFIX is /usr/local; you can change this by editing the install-docs file and changing the "PREFIX" variable.
To remove the manual pages and the documentation, run "sh install-docs remove."
The first thing you will want to look at is the manual page penny-basics(7). Then you will want to examine the starter.pny file in the examples directory, which will show you how to write a ledger file. penny-suite(7) will then direct you to other documentation that may interest you.
Though I do use this program to maintain all my financial records, it is still relatively new and no one but me has tested it. Use at your own risk.
## Dependencies
cabal install will take care of all Haskell dependencies for you; however, there are also at least two C libraries you will need to install as Penny depends on other Haskell libraries that use these C libraries. You will need to make sure you have the "development" package installed if you use many Linux distributions; a few distributors, such as Arch, Slackware, and Gentoo, generally don't ship separate "development" packages so that won't apply to you. The C libraries are:
• pcre - http://www.pcre.org/ - on Debian GNU/Linux systems this package is called libpcre3-dev
• curses - on GNU systems this is known as ncurses, http://www.gnu.org/software/ncurses/ Perhaps other, non-GNU curses implementations will work as well; I do not know. On Debian GNU/Linux systems, install libncurses5-dev. |
# Uniformity statistics based on EMVA-1288
### Introduction
The European Machine Vision 1288 standard (EMVA 1288) is designed to test several key qualities of linear image sensors, including linearity, sensitivity (quantum efficiency), noise, dark current, nonuniformities, and spectral sensitivity. Imatest 2020.2 performs a subset of these measurements, including temporal noise and spatial nonuniformities in flat-field images. Results including Photo Response Nonuniformity and Dark Signal Nonuniformity (PRNU or DSNU for light and dark (“lens cap on”) images, respectively), can be calculated from batches of flat-field images in the Uniformity and Uniformity Interactive modules.
These measurements are derived from the EMVA 1288 standard and have similar numerical results, but they are not strictly EMVA 1288-compliant because the methodology is different. A sensor Dynamic Range measurement, is also designed to give results comparable to EMVA 1288, and again, it’s far from compliant.
Noise from a real image, shown with increased contrast; 4 display pixels per image pixel. Some 8×8 blocking from (low quality) JPEG processing is visible.
Image sensor noise is a random variation of pixel levels caused by a number of factors: photons (shot noise), thermal agitation (Johnson noise), dark current noise, and variations in the sensitivity and gain of pixels in the image sensor, sometimes called fixed pattern noise (which we will call spatial nonuniformity here; "nonuniformity" is the term preferred in the EMVA standard, §4).
Image sensor noise can be decomposed into two fundamental factors.
• Temporal noise, which is random noise that varies from image to image.
• Spatial noise or nonuniformity, caused by nonuniform pixel sensitivity and gain, which is consistent from image to image.
There are two ways of measuring temporal noise. Method 1 subtracts two images acquired under identical conditions, then divides by √2. Method 2 requires a large number of images. This method is used by the EMVA calculation because multiple images are also required to obtain spatial nonuniformity.
Calculating temporal noise and nonuniformity requires that multiple images be captured (L ≥ 16 is the minimum recommended in EMVA §8; 100-400 is better, but the Imatest calculation will work with as few as 4 images, which is good for testing only). The standard recommends capturing a set of light images whose mean pixel level (in linear files with no gamma applied) is 50% of saturation (designated by subscript 50 in the document; we prefer "light") and also a set of dark images ("lens cap on"; designated by subscript dark).
A note on EMVA 1288 equations: Imatest does not currently use photometric units, such as K = Digital Number (DN) per electron; η = quantum efficiency; μp = mean number of photons per pixel; μe = mean number of electrons per pixel (μe = η μp). Imatest results are derived only from electrical measurements.
The notation in §8 is confusing, sometimes inconsistent, and often difficult to follow. For example, EMVA equation (42), $$\langle y\rangle = \frac{1}{L}\sum_{l=0}^{L-1}y[l]$$ is not used after it is defined. The mean values μy.50 and μy.dark used in Eq. (46) are apparently derived from <y>.
Since there are many inconsistencies in the equations, and photometric units are often mixed with electrical or digital units, we are unable to follow the equations the standard exactly. The equations below represent actual Imatest calculations, which are intended to be equivalent to the standard measurements. We will try to follow some EMVA 1288 notation. μ represents the mean; s represents spatial standard deviations; σ represents temporal standard deviations.
Summary of the equations below: For a set of L captured images y[l], where y represents each pixel in the image, calculate the sum and sum of squares of the images (or more precisely, each pixel in the image).
The mean of each individual pixel in L captured images is
$$\displaystyle\mu_s = \frac{1}{L}\sum_{l=0}^{L-1}y[l]$$
Spatial nonuniformity s is the standard deviation of μs.
The temporal noise variance (power) of each individual pixel in the image is
$$\displaystyle\sigma_s^2 = \frac{1}{L-1}\sum_{l=0}^{L-1}\left(y[l] - \frac{1}{L}\sum_{l=0}^{L-1}y[l]\right)^2 = \frac{1}{L-1}\sum_{l=0}^{L-1}\left(y[l] - \mu_s \right)^2$$ EMVA (44)
This equation could be slow and tedious to evaluate, but there is a shortcut, described in the Wikipedia Variance page. In Wikipedia's notation, $$\operatorname{Var}(x) = E[(x-E[x])^2] = E[x^2]-E[x]^2$$. In the EMVA notation,
$$\displaystyle\sigma_s^2 = \frac{1}{L}\sum_{l=0}^{L-1}y^2[l]\ - \left(\frac{1}{L}\sum_{l=0}^{L-1}y[l] \right)^2$$
What this means in practice is that if you select L files, each file is read, then the pixel values are added to an array that contains the sum of pixel values, and the square of pixel values are added to an array that contains the sum of squares of pixel values. Individual images are not processed further. This process is fast and efficient.
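In NumPy terms (a schematic sketch, not Imatest's implementation), the running sums give both the mean (signal-averaged) image and the per-pixel temporal noise image; note that this shortcut uses the 1/L normalization of the second equation above, whereas EMVA Eq. (44) uses 1/(L-1):

```python
import numpy as np

def mean_and_temporal_noise(files, read_image):
    """Accumulate sum and sum-of-squares over L flat-field frames.

    read_image(path) is assumed to return a 2-D float array of linear pixel values.
    """
    s = s2 = None
    L = 0
    for f in files:
        y = read_image(f).astype(np.float64)
        if s is None:
            s, s2 = np.zeros_like(y), np.zeros_like(y)
        s += y          # running sum of pixel values
        s2 += y * y     # running sum of squared pixel values
        L += 1
    mu = s / L                                   # per-pixel mean image, mu_s
    var = s2 / L - mu**2                         # per-pixel temporal variance, sigma_s^2
    return mu, np.sqrt(np.maximum(var, 0.0))     # mean image and noise image sigma_s
```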
Observation: For an arbitrary image (not just flat field images specified by EMVA 1288), μs is the familiar signal-averaged mean signal image, whose SNR is improved by 3*log2(L) dB for L samples. Noise voltage σs also forms an image (the noise image), which is much less familiar than the signal image, but which can provide insight into image processing, for example for bilateral filtering, where sharpening/noise reduction, hence noise, varies over the image surface. Noise images are discussed in more detail in Measuring Temporal Noise.
The full EMVA 1288 Photo Response Nonuniformity (PRNU1288) measurement uses a set of light images, designated by the subscript 50 or light, and a set of dark images, designated by the subscript dark. When a set of images is run, results are temporarily stored in imatest-v2.ini. PRNU1288 is only calculated when both light and dark are available.
$$\displaystyle PRNU_{1288} = \frac{\sqrt{\langle s_{y.light}\rangle ^2 - \langle s_{y.dark}\rangle ^2 } } { \langle \mu_{y.light}\rangle - \langle \mu_{y.dark}\rangle}$$
where <…> denotes the mean taken over the image.
Using only the light set of images, we can define PRNUlight, which is very close to PRNU1288.
$$\displaystyle PRNU_{light} = \langle s_{y.light}\rangle / \langle \mu_{y.light}\rangle$$
Dark Signal Nonuniformity is $$DSNU = \langle s_{y.dark}\rangle$$
Temporal noise (reported as a single number: mean for light or dark images) is $$\sigma_y = \langle \sigma_s\rangle$$.
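Continuing the same sketch, the nonuniformity statistics follow from the spatial statistics of the light and dark mean images. This is only a schematic reading of the formulas above; in particular it omits the box (highpass) filtering step that EMVA 1288 recommends before computing the spatial standard deviations.

```python
import numpy as np

def emva_summary(mu_light, sigma_light, mu_dark, sigma_dark):
    """Nonuniformity and temporal-noise summaries from mean and noise images."""
    s_light, s_dark = mu_light.std(), mu_dark.std()     # spatial std dev of the mean images
    prnu_1288 = np.sqrt(max(s_light**2 - s_dark**2, 0.0)) / (mu_light.mean() - mu_dark.mean())
    prnu_light = s_light / mu_light.mean()              # light-only approximation
    dsnu = s_dark                                       # dark signal nonuniformity
    temporal_light = sigma_light.mean()                 # temporal noise, single-number summaries
    temporal_dark = sigma_dark.mean()
    return prnu_1288, prnu_light, dsnu, temporal_light, temporal_dark
```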
### Method
Photograph a flat field, which is typically a highly even light source, such as a non-flickering LED lightbox. For the bright images, if exposure control is available, choose an exposure so that linear images are at about 50% of the maximum level (this would be about 73% of maximum for a gamma = 2.2 color space).
For saved image files, open Uniformity or Uniformity Interactive, and select a group of L (light or dark) files. A confirmation window will open. Be sure to select Read n files for pixel-based temporal noise (N >= 4). For EMVA-1288.
File confirmation window. Be sure to select Read n files … for EMVA-1288.
A region selection window will open. Most of the time you’ll want to select the entire image (or use the same selection as the previous image, if you analyzed an image of the same size). Once you’ve made the selection, a progress bar will appear: reading multiple images goes surprisingly quickly.
For direct data acquisition, select the acquisition device with the Device Manager, which can be opened from the Settings dropdown menu in the Imatest main window, the Data tab in the box on the right, or the Uniformity Interactive Settings window. When you’ve made the selection, press Save .
Device Manager. HD USB camera (1920×1080 pixels) has been selected. Press Save when ready.
If possible, prior to acquiring the data, in the Settings dropdown menu, Signal averaging should be set to the desired number, and Calculate image^2 while averaging (which has the same function as Read n files … for EMVA-1288, above) should be checked. (This name may change.) It should be easy to repeat the data acquisition using the Acquire image (Read Image File) button on the bottom-left of the Uniformity Interactive window.
In the More settings window, EMVA 1288 should be set to EMVA 1288 calculations: Box filter. The 5×5 Box filter is a highpass filter, described in §C.5, designed to suppress low-frequency spatial variations.
The EMVA reset button on the right deletes EMVA 1288 batch results stored in imatest-v2.ini.
Photograph a light or dark flat field image. For direct data acquisition, use the Device manager to select the device. Open Uniformity Interactive. In the Settings dropdown menu, select Signal averaging, choose the number of images to average (must be ≥ 4; 128 preferred), and be sure Calculate image^2 while averaging is checked. Then acquire the image. For saved image files, select a batch of files, then be sure to select Read n files for pixel-based temporal noise (N >= 4). For EMVA-1288 in the confirmation window. In the Uniformity or Uniformity Interactive More settings window, select EMVA 1288 calculations, Box filter. For Uniformity, select the plots.
### Results
Several plots, selected in the Display dropdown window, contain EMVA 1288 results. These include Histograms, Accum. Histograms, V&H Spectrum, EMVA & Histogram, and Temporal noise image
EMVA & Histogram contains a summary of results.
EMVA & Histogram results. (Some results are visible when the window is scrolled down.)
When this plot is selected in Uniformity (the fixed module), the histogram (available in other plots) is omitted so the entire output can be seen.
EMVA results from Uniformity (not Interactive)
Some additional results are of interest.
Standard histograms — illustrated in EMVA §8.4, Figure 13. Can be calculated for light or dark images. Here are two examples, without and with the box filter, which is recommended by EMVA 1288 (§C.5) for removing low spatial frequency nonuniformities.
Histogram without Box (lowpass) filter
Histogram with Box (lowpass) filter
Accumulated histograms — illustrated in EMVA §8.4, Figure 14. We are not sure how to use them. Does not contain EMVA required results.
Accumulated Histogram (with box filter)
Horizontal and vertical spectrograms and profiles — described in EMVA §8.3 and 8.4. Combined here. These plots do not contain EMVA required results. Several smoothing settings are available.
Spectrograms and profiles
Temporal noise image — Temporal noise power (variance) σ2 and RMS voltage σ as defined in the equations in the green box (above) are calculated for each pixel, i.e., they are images. For the flat-field images used in the EMVA calculations, temporal noise images are not of much interest, but they are of great interest for test chart images because they show how image processing varies across the image field. (When bilateral filtering is applied, noise is typically larger near edges than in smooth areas.) See Temporal noise for more details. |
Question
A 75.0-kg cross-country skier is climbing a $3.0^\circ$ slope at a constant speed of 2.00 m/s and encounters air resistance of 25.0 N. (a) Find his power output for work done against the gravitational force and air resistance. (b) What average force does he exert backward on the snow to accomplish this? (c) If he continues to exert this force and to experience the same air resistance when he reaches a level area, how long will it take him to reach a velocity of 10.0 m/s?
Question by OpenStax is licensed under CC BY 4.0.
Final Answer
a) $127 \textrm{ W}$
b) $63.5 \textrm{ N}$
c) $15.6 \textrm{ s}$
Solution Video
# OpenStax College Physics Solution, Chapter 7, Problem 61 (Problems & Exercises) (3:55)
Video Transcript
This is College Physics Answers with Shaun Dychko. This skier is moving at a constant speed of two meters per second up the slope. They have a mass of 75 kilograms, and the slope is inclined at three degrees. We find the power output of the skier by dividing the work output by the time. The work output is the force that the skier exerts on the ground, which in turn exerts that force back on the skier; this force, labeled F, is multiplied by the distance over which it is applied, divided by time. We are not given the distance or the time, but we are given the speed of the skier, which is distance over time, so we can replace d over t with v. Power then becomes the force the skier applies multiplied by the speed.

So the problem becomes: how do we find the force? In the x direction, which is along the slope because the coordinate system is tilted so that positive x points up the slope, the net force is the force the skier applies forward, minus the friction force backward (given as 25 newtons), minus the component of gravity that acts in the negative x direction. That component of gravity, Fgx, is the force of gravity multiplied by sine theta, because in this triangle Fgx is the leg opposite the angle theta. The net force in the x direction is zero because the skier moves at constant speed, so there is no acceleration. Therefore F equals the friction force plus mg sine theta, where Fg has been replaced by mg and both of those terms have been moved to the right side by adding them to both sides. Substituting this expression for F into the power formula, the power output becomes (friction force plus mg sine theta), all times v. That is (25 newtons plus 75 kilograms times 9.8 newtons per kilogram times sine of three degrees) times two meters per second, which gives 127 watts.

To find the actual force in part (b): it is the friction force plus the x component of gravity, so 25 newtons plus 75 kilograms times 9.8 newtons per kilogram times sine of three degrees, which gives 63.5 newtons.

In part (c) we are asked how long it would take to reach a velocity of ten meters per second if the skier applied the same force on a level surface. In this case the net force is the applied force to the right minus the friction force to the left, and that equals mass times acceleration. Dividing both sides by m gives a equals the applied force minus the friction force, divided by the mass. This acceleration goes into the formula vf = vi + at, which we solve for t by subtracting vi from both sides and then dividing both sides by a. Time equals the final speed minus the initial speed divided by the acceleration, and since the acceleration is (F minus Fr) over m, we multiply by its reciprocal instead of dividing. So t = (vf minus vi) times m over (F minus Fr): ten meters per second minus two meters per second, times 75 kilograms, divided by 63.467 newtons minus 25 newtons, which gives 15.6 seconds to reach ten meters per second.
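The arithmetic in the transcript can be checked with a few lines of Python; this is just a numeric verification of the steps above, with the given values hard-coded.

```python
import math

m, g = 75.0, 9.80            # mass (kg), gravitational field strength (N/kg)
theta = math.radians(3.0)    # slope angle
f_air = 25.0                 # air resistance (N)
v = 2.00                     # climbing speed (m/s)

force = f_air + m * g * math.sin(theta)   # part (b): force exerted on the snow
power = force * v                         # part (a): P = F * v

a = (force - f_air) / m                   # level ground, same applied force
t = (10.0 - 2.00) / a                     # part (c): time to reach 10 m/s

print(round(power), round(force, 1), round(t, 1))   # ~127 W, ~63.5 N, ~15.6 s
```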
# Angiotensin converting enzyme inhibitors are a central part of the treatment of heart failure because they...
###### Question:
Angiotensin-converting enzyme (ACE) inhibitors are a central part of the treatment of heart failure because they have more than one action to address the pathological changes in this disorder. Which of the following pathological changes in heart failure is not addressed by ACE inhibitors?
a. changes in the structure of the left ventricle so that it dilates, hypertrophies, and uses energy less efficiently
b. reduced formation of cross bridges so that contractile force decreases
c. activation of the sympathetic nervous system that increases heart rate and preload
d. decreased renal blood flow that decreases oxygen supply to the kidney
#### Similar Solved Questions
##### Consider two large mixing tanks shown in the figure below. Suppose tank A initially contains 80 gallons of liquid and tank B initially contains 50 gallons of liquid; liquid with a salt concentration of 3 lbs/gal is pumped into tank A. The well-stirred liquid is pumped in and out of the tanks as shown in the figure. Construct a system of first-order differential equations for the number of pounds of salt, x(t) and y(t), in tanks A and B at time t, respectively. Do not solve the system. Express your so…
##### 12_ Classify the critical points of the following functions. f(z,y) =4+28+y3 3ry b f(v,y) = 36x 3t3 ~2y2 +y' f(s,y) = 2x3 +ry? + 5x2 +y2 d f(r,y) = x +3y1 _ 61 + 3r*y + 3y
##### Please, I need an answer today. For the circuit given below, compute the following: (I) the closed-loop gains of all the op amps; (II) the voltages V2, V3 and Vo; (III) write the names of amplifiers 1, 2 and 3.
##### Solve using the method of undetermined coefficients (D? 4)r = 1 + 65e' cos 2t
##### The sides of a square increase in length at a rate of 5 m/sec. a. At what rate is the area of the square changing when the sides are 15 m long? b. At what rate is the area of the square changing when the sides are 23 m long? The area of the square is changing at a rate of _____ when the sides are 15 m long. The area of the square is changing at a rate of _____ when the sides are 23 m long.
##### Predict the product(s) of the following reactions, noting stereochemistry where appropriate and indicating when a racemic mixture of enantiomers is formed.
##### Four students run up the stairs in the times shown. Which student has the largest power output?
##### Label the peaks of the acetanilide molecule.
##### Questions 1 & 2. Grignard reagents. Draw the major organic product from each of the following reaction sequences. Show how to make the following compound using…
##### As part of your POST-PRACTICAL ASSIGNMENT you will be asked to plot the pH against log([A⁻]/[HA]) to determine the pKa of imidazole. So please make a second copy of the table above to take home before you leave (a copy will be provided). Please provide some sample workings of your calculations for the previous table below.
##### In the cost function below, C(x) is the cost of producing x items. Find the average cost per item when the required number of items is produced. C(x) = 8.8x + 9,500. a. 200 items b. 2000 items c. 5000 items. What is the average cost per item when 200 items are produced? What is the average cost per item when 2000 items are produced? What is the average cost per item when 5000 items are produced?
##### Transactions for a Basic Business. You operate a sneaker store for 3 months, from February to April, out of your parents' garage. Post the business's transactions by entering either a positive or negative value in the appropriate yellow cells.
##### Structure.com/courses/162317/quizzes/625064/take Which of the following statements about suicide is correct? O Most suicide victims gave clear...
# Properties of logs question
1. Dec 3, 2008
### icystrike
1. The problem statement, all variables and given/known data
The question is this: show that the attached expression equals $$2\sqrt{3}$$, where e is the base of the natural logarithm.
2. Relevant equations
3. The attempt at a solution
Last edited: Dec 3, 2008
2. Dec 3, 2008
### lurflurf
Re: Logarithm
What have you tried?
Remember:
y^x = e^(x log(y))
3. Dec 3, 2008
### Integral
Staff Emeritus
Re: Logarithm
What do you know about the properties of logs?
4. Dec 3, 2008
### icystrike
Re: Logarithm
I suppose everything... I just can't show that it is 2√3.
5. Dec 3, 2008
### Integral
Staff Emeritus
Re: Logarithm
Do you have a text?
What are some of the properties of logs? They should be highlighted in boxes. RTFM
6. Dec 3, 2008
7. Dec 3, 2008
### Integral
Staff Emeritus
Re: Logarithm
Look at the product rule:
log(M·N) = log M + log N
Now consider log(M^2) = log(M·M) = log M + log M = 2 log M
This is justification for the rule that is not shown on that page:
log(M^N) = N log M
Now apply that to the exponents of your problem.
8. Dec 3, 2008
### icystrike
Re: Logarithm
But it is just the sum of exponential constants; we can't apply the logarithm rules except to the powers.
9. Dec 3, 2008
### Integral
Staff Emeritus
Re: Logarithm
Repeat in english please
10. Dec 3, 2008
### icystrike
Re: Logarithm
Oh... let me rephrase: the equation is just the sum of two exponential constants, so we can't apply any logarithm rules except to the power of each constant.
11. Dec 3, 2008
### Integral
Staff Emeritus
Re: Logarithm
You can apply the rules of logs to logs wherever they appear. Start by applying the rules to the logs which appear as exponents in your problem.
$$e^{x\ln y} = e^{\ln y^x}$$
12. Dec 3, 2008
### icystrike
Re: Logarithm
Yup, I applied it already:
3e^(ln(1/sqrt 3)) + e^(ln(sqrt 3))
13. Dec 3, 2008
### HallsofIvy
Staff Emeritus
Re: Logarithm
You need: a ln(b) = ln(b^a) and e^(ln x) = x.
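These two identities, and the simplified expression from post #12, can be checked numerically; a minimal sketch using Python's standard math module:

```python
import math

x, a, b = 1.7, 3.0, 2.5
assert math.isclose(math.exp(math.log(x)), x)            # e^(ln x) = x
assert math.isclose(a * math.log(b), math.log(b ** a))   # a ln(b) = ln(b^a)

# Expression from post #12: 3 e^(ln(1/sqrt 3)) + e^(ln(sqrt 3))
value = 3 * math.exp(math.log(1 / math.sqrt(3))) + math.exp(math.log(math.sqrt(3)))
print(value, 2 * math.sqrt(3))   # both print ~3.4641
```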
14. Dec 3, 2008
### Integral
Staff Emeritus
Re: Logarithm
now you are getting some where.
You do know that:
$$e^{\ln x} = x$$
15. Dec 3, 2008
### icystrike
Re: Logarithm
Cool, it works.
I have never seen this law before!
e^(ln x) = x
Thank you HallsofIvy and Integral.
Mind explaining this law e^(ln x) = x?
16. Dec 3, 2008
### Integral
Staff Emeritus
Re: Logarithm
On the page YOU linked, read the POWER RULE of logs.
Then understand that ln = log_e.
17. Dec 3, 2008
### icystrike
Re: Logarithm
Ya... I know about it:
e^(ln x) = x
18. Dec 3, 2008
### Sjorris
Re: Logarithm
The natural logarithm is the 'inverse' of the e-exponential. If e^x is f(x) and ln(x) = g(x), then you can rewrite e^(ln x) as f(g(x)); but g(x) = f^(-1)(x) (its inverse), so f(f^(-1)(x)) = x. Or qualitatively, applying a function (the exponential) to its inverse (the natural logarithm), or vice versa, returns its argument ('input'); they 'undo' each other.
I'm sure there is a more rigorous and mathematically correct proof out there though.
19. Dec 3, 2008
### icystrike
Re: Logarithm
I do agree with your proof up to the inverse portion, but what do you actually do to invert it? I know it is somehow an inverse law, but how do you actually go about doing it?
20. Dec 3, 2008
### icystrike
Re: Logarithm
Btw, you see, ln x means x = e^?, and e^x means e × e × e × … × e (x factors). I don't really see the relationship.