text
stringlengths 100
356k
|
---|
## The Explicit-Control Evaluator
In section 5.1 we saw how to transform simple Scheme programs into descriptions of register machines. We will now perform this transformation on a more complex program, the metacircular evaluator of sections 4-1-1-4-1-4, which shows how the behavior of a Scheme interpreter can be described in terms of the procedures eval and apply. The explicit-control evaluator that we develop in this section shows how the underlying procedure-calling and argument-passing mechanisms used in the evaluation process can be described in terms of operations on registers and stacks. In addition, the explicit-control evaluator can serve as an implementation of a Scheme interpreter, written in a language that is very similar to the native machine language of conventional computers. The evaluator can be executed by the register-machine simulator of section 5.2. Alternatively, it can be used as a starting point for building a machine-language implementation of a Scheme evaluator, or even a special-purpose machine for evaluating Scheme expressions. Figure 5-16 shows such a hardware implementation: a silicon chip that acts as an evaluator for Scheme. The chip designers started with the data-path and controller specifications for a register machine similar to the evaluator described in this section and used design automation programs to construct the integrated-circuit layout.@footnote{See Batali et al. 1982 for more information on the chip and the method by which it was designed.}
#### Registers and operations
In designing the explicit-control evaluator, we must specify the operations to be used in our register machine. We described the metacircular evaluator in terms of abstract syntax, using procedures such as quoted? and make-procedure. In implementing the register machine, we could expand these procedures into sequences of elementary list-structure memory operations, and implement these operations on our register machine. However, this would make our evaluator very long, obscuring the basic structure with details. To clarify the presentation, we will include as primitive operations of the register machine the syntax procedures given in section 4.1.2 and the procedures for representing environments and other run-time data given in sections 4-1-3 and 4-1-4. In order to completely specify an evaluator that could be programmed in a low-level machine language or implemented in hardware, we would replace these operations by more elementary operations, using the list-structure implementation we described in section 5.3.
Figure 5.16: A silicon-chip implementation of an evaluator for Scheme. [This figure is missing.]
Our Scheme evaluator register machine includes a stack and seven registers: exp, env, val, continue, proc, argl, and unev. Exp is used to hold the expression to be evaluated, and env contains the environment in which the evaluation is to be performed. At the end of an evaluation, val contains the value obtained by evaluating the expression in the designated environment. The continue register is used to implement recursion, as explained in section 5.1.4. (The evaluator needs to call itself recursively, since evaluating an expression requires evaluating its subexpressions.) The registers proc, argl, and unev are used in evaluating combinations.
We will not provide a data-path diagram to show how the registers and operations of the evaluator are connected, nor will we give the complete list of machine operations. These are implicit in the evaluator's controller, which will be presented in detail.
### 5.4.1 The Core of the Explicit-Control Evaluator
The central element in the evaluator is the sequence of instructions beginning at eval-dispatch. This corresponds to the eval procedure of the metacircular evaluator described in section 4.1.1. When the controller starts at eval-dispatch, it evaluates the expression specified by exp in the environment specified by env. When evaluation is complete, the controller will go to the entry point stored in continue, and the val register will hold the value of the expression. As with the metacircular eval, the structure of eval-dispatch is a case analysis on the syntactic type of the expression to be evaluated.@footnote{In our controller, the dispatch is written as a sequence of test and branch instructions. Alternatively, it could have been written in a data-directed style (and in a real system it probably would have been) to avoid the need to perform sequential tests and to facilitate the definition of new expression types. A machine designed to run Lisp would probably include a dispatch-on-type instruction that would efficiently execute such data-directed dispatches.}
eval-dispatch (test (op self-evaluating?) (reg exp)) (branch (label ev-self-eval)) (test (op variable?) (reg exp)) (branch (label ev-variable)) (test (op quoted?) (reg exp)) (branch (label ev-quoted)) (test (op assignment?) (reg exp)) (branch (label ev-assignment)) (test (op definition?) (reg exp)) (branch (label ev-definition)) (test (op if?) (reg exp)) (branch (label ev-if)) (test (op lambda?) (reg exp)) (branch (label ev-lambda)) (test (op begin?) (reg exp)) (branch (label ev-begin)) (test (op application?) (reg exp)) (branch (label ev-application)) (goto (label unknown-expression-type))
#### Evaluating simple expressions
Numbers and strings (which are self-evaluating), variables, quotations, and lambda expressions have no subexpressions to be evaluated. For these, the evaluator simply places the correct value in the val register and continues execution at the entry point specified by continue. Evaluation of simple expressions is performed by the following controller code:
ev-self-eval (assign val (reg exp)) (goto (reg continue)) ev-variable (assign val (op lookup-variable-value) (reg exp) (reg env)) (goto (reg continue)) ev-quoted (assign val (op text-of-quotation) (reg exp)) (goto (reg continue)) ev-lambda (assign unev (op lambda-parameters) (reg exp)) (assign exp (op lambda-body) (reg exp)) (assign val (op make-procedure) (reg unev) (reg exp) (reg env)) (goto (reg continue))
Observe how ev-lambda uses the unev and exp registers to hold the parameters and body of the lambda expression so that they can be passed to the make-procedure operation, along with the environment in env.
#### Evaluating procedure applications
A procedure application is specified by a combination containing an operator and operands. The operator is a subexpression whose value is a procedure, and the operands are subexpressions whose values are the arguments to which the procedure should be applied. The metacircular eval handles applications by calling itself recursively to evaluate each element of the combination, and then passing the results to apply, which performs the actual procedure application. The explicit-control evaluator does the same thing; these recursive calls are implemented by goto instructions, together with use of the stack to save registers that will be restored after the recursive call returns. Before each call we will be careful to identify which registers must be saved (because their values will be needed later).@footnote{This is an important but subtle point in translating algorithms from a procedural language, such as Lisp, to a register-machine language. As an alternative to saving only what is needed, we could save all the registers (except val) before each recursive call. This is called a framed-stack discipline. This would work but might save more registers than necessary; this could be an important consideration in a system where stack operations are expensive. Saving registers whose contents will not be needed later may also hold onto useless data that could otherwise be garbage-collected, freeing space to be reused.}
We begin the evaluation of an application by evaluating the operator to produce a procedure, which will later be applied to the evaluated operands. To evaluate the operator, we move it to the exp register and go to eval-dispatch. The environment in the env register is already the correct one in which to evaluate the operator. However, we save env because we will need it later to evaluate the operands. We also extract the operands into unev and save this on the stack. We set up continue so that eval-dispatch will resume at ev-appl-did-operator after the operator has been evaluated. First, however, we save the old value of continue, which tells the controller where to continue after the application.
ev-application (save continue) (save env) (assign unev (op operands) (reg exp)) (save unev) (assign exp (op operator) (reg exp)) (assign continue (label ev-appl-did-operator)) (goto (label eval-dispatch))
Upon returning from evaluating the operator subexpression, we proceed to evaluate the operands of the combination and to accumulate the resulting arguments in a list, held in argl. First we restore the unevaluated operands and the environment. We initialize argl to an empty list. Then we assign to the proc register the procedure that was produced by evaluating the operator. If there are no operands, we go directly to apply-dispatch. Otherwise we save proc on the stack and start the argument-evaluation loop:@footnote{We add to the evaluator data-structure procedures in section 4.1.3 the following two procedures for manipulating argument lists:
(define (empty-arglist) '()) (define (adjoin-arg arg arglist) (append arglist (list arg)))
We also use an additional syntax procedure to test for the last operand in a combination:
(define (last-operand? ops) (null? (cdr ops)))
}
ev-appl-did-operator (restore unev) ; the operands (restore env) (assign argl (op empty-arglist)) (assign proc (reg val)) ; the operator (test (op no-operands?) (reg unev)) (branch (label apply-dispatch)) (save proc)
Each cycle of the argument-evaluation loop evaluates an operand from the list in unev and accumulates the result into argl. To evaluate an operand, we place it in the exp register and go to eval-dispatch, after setting continue so that execution will resume with the argument-accumulation phase. But first we save the arguments accumulated so far (held in argl), the environment (held in env), and the remaining operands to be evaluated (held in unev). A special case is made for the evaluation of the last operand, which is handled at ev-appl-last-arg.
ev-appl-operand-loop (save argl) (assign exp (op first-operand) (reg unev)) (test (op last-operand?) (reg unev)) (branch (label ev-appl-last-arg)) (save env) (save unev) (assign continue (label ev-appl-accumulate-arg)) (goto (label eval-dispatch))
When an operand has been evaluated, the value is accumulated into the list held in argl. The operand is then removed from the list of unevaluated operands in unev, and the argument-evaluation continues.
ev-appl-accumulate-arg (restore unev) (restore env) (restore argl) (assign argl (op adjoin-arg) (reg val) (reg argl)) (assign unev (op rest-operands) (reg unev)) (goto (label ev-appl-operand-loop))
Evaluation of the last argument is handled differently. There is no need to save the environment or the list of unevaluated operands before going to eval-dispatch, since they will not be required after the last operand is evaluated. Thus, we return from the evaluation to a special entry point ev-appl-accum-last-arg, which restores the argument list, accumulates the new argument, restores the saved procedure, and goes off to perform the application.@footnote{The optimization of treating the last operand specially is known as evlis tail recursion (see Wand 1980). We could be somewhat more efficient in the argument evaluation loop if we made evaluation of the first operand a special case too. This would permit us to postpone initializing argl until after evaluating the first operand, so as to avoid saving argl in this case. The compiler in section 5.5 performs this optimization. (Compare the construct-arglist procedure of section 5.5.3.)}
ev-appl-last-arg (assign continue (label ev-appl-accum-last-arg)) (goto (label eval-dispatch)) ev-appl-accum-last-arg (restore argl) (assign argl (op adjoin-arg) (reg val) (reg argl)) (restore proc) (goto (label apply-dispatch))
The details of the argument-evaluation loop determine the order in which the interpreter evaluates the operands of a combination (e.g., left to right or right to left---see Exercise 3-8). This order is not determined by the metacircular evaluator, which inherits its control structure from the underlying Scheme in which it is implemented.@footnote{The order of operand evaluation in the metacircular evaluator is determined by the order of evaluation of the arguments to cons in the procedure list-of-values of section 4.1.1 (see Exercise 4-1).} Because the first-operand selector (used in ev-appl-operand-loop to extract successive operands from unev) is implemented as car and the rest-operands selector is implemented as cdr, the explicit-control evaluator will evaluate the operands of a combination in left-to-right order.
#### Procedure application
The entry point apply-dispatch corresponds to the apply procedure of the metacircular evaluator. By the time we get to apply-dispatch, the proc register contains the procedure to apply and argl contains the list of evaluated arguments to which it must be applied. The saved value of continue (originally passed to eval-dispatch and saved at ev-application), which tells where to return with the result of the procedure application, is on the stack. When the application is complete, the controller transfers to the entry point specified by the saved continue, with the result of the application in val. As with the metacircular apply, there are two cases to consider. Either the procedure to be applied is a primitive or it is a compound procedure.
apply-dispatch (test (op primitive-procedure?) (reg proc)) (branch (label primitive-apply)) (test (op compound-procedure?) (reg proc)) (branch (label compound-apply)) (goto (label unknown-procedure-type))
We assume that each primitive is implemented so as to obtain its arguments from argl and place its result in val. To specify how the machine handles primitives, we would have to provide a sequence of controller instructions to implement each primitive and arrange for primitive-apply to dispatch to the instructions for the primitive identified by the contents of proc. Since we are interested in the structure of the evaluation process rather than the details of the primitives, we will instead just use an apply-primitive-procedure operation that applies the procedure in proc to the arguments in argl. For the purpose of simulating the evaluator with the simulator of section 5.2 we use the procedure apply-primitive-procedure, which calls on the underlying Scheme system to perform the application, just as we did for the metacircular evaluator in section 4.1.4. After computing the value of the primitive application, we restore continue and go to the designated entry point.
primitive-apply (assign val (op apply-primitive-procedure) (reg proc) (reg argl)) (restore continue) (goto (reg continue))
To apply a compound procedure, we proceed just as with the metacircular evaluator. We construct a frame that binds the procedure's parameters to the arguments, use this frame to extend the environment carried by the procedure, and evaluate in this extended environment the sequence of expressions that forms the body of the procedure. Ev-sequence, described below in section 5.4.2, handles the evaluation of the sequence.
compound-apply (assign unev (op procedure-parameters) (reg proc)) (assign env (op procedure-environment) (reg proc)) (assign env (op extend-environment) (reg unev) (reg argl) (reg env)) (assign unev (op procedure-body) (reg proc)) (goto (label ev-sequence))
Compound-apply is the only place in the interpreter where the env register is ever assigned a new value. Just as in the metacircular evaluator, the new environment is constructed from the environment carried by the procedure, together with the argument list and the corresponding list of variables to be bound.
### 5.4.2 Sequence Evaluation and Tail Recursion
The portion of the explicit-control evaluator at ev-sequence is analogous to the metacircular evaluator's eval-sequence procedure. It handles sequences of expressions in procedure bodies or in explicit begin expressions.
Explicit begin expressions are evaluated by placing the sequence of expressions to be evaluated in unev, saving continue on the stack, and jumping to ev-sequence.
ev-begin (assign unev (op begin-actions) (reg exp)) (save continue) (goto (label ev-sequence))
The implicit sequences in procedure bodies are handled by jumping to ev-sequence from compound-apply, at which point continue is already on the stack, having been saved at ev-application.
The entries at ev-sequence and ev-sequence-continue form a loop that successively evaluates each expression in a sequence. The list of unevaluated expressions is kept in unev. Before evaluating each expression, we check to see if there are additional expressions to be evaluated in the sequence. If so, we save the rest of the unevaluated expressions (held in unev) and the environment in which these must be evaluated (held in env) and call eval-dispatch to evaluate the expression. The two saved registers are restored upon the return from this evaluation, at ev-sequence-continue.
The final expression in the sequence is handled differently, at the entry point ev-sequence-last-exp. Since there are no more expressions to be evaluated after this one, we need not save unev or env before going to eval-dispatch. The value of the whole sequence is the value of the last expression, so after the evaluation of the last expression there is nothing left to do except continue at the entry point currently held on the stack (which was saved by ev-application or ev-begin.) Rather than setting up continue to arrange for eval-dispatch to return here and then restoring continue from the stack and continuing at that entry point, we restore continue from the stack before going to eval-dispatch, so that eval-dispatch will continue at that entry point after evaluating the expression.
ev-sequence (assign exp (op first-exp) (reg unev)) (test (op last-exp?) (reg unev)) (branch (label ev-sequence-last-exp)) (save unev) (save env) (assign continue (label ev-sequence-continue)) (goto (label eval-dispatch)) ev-sequence-continue (restore env) (restore unev) (assign unev (op rest-exps) (reg unev)) (goto (label ev-sequence)) ev-sequence-last-exp (restore continue) (goto (label eval-dispatch))
#### Tail recursion
In Chapter 1 we said that the process described by a procedure such as
(define (sqrt-iter guess x) (if (good-enough? guess x) guess (sqrt-iter (improve guess x) x)))
is an iterative process. Even though the procedure is syntactically recursive (defined in terms of itself), it is not logically necessary for an evaluator to save information in passing from one call to sqrt-iter to the next.@footnote{We saw in section 5.1 how to implement such a process with a register machine that had no stack; the state of the process was stored in a fixed set of registers.} An evaluator that can execute a procedure such as sqrt-iter without requiring increasing storage as the procedure continues to call itself is called a tail-recursive evaluator. The metacircular implementation of the evaluator in Chapter 4 does not specify whether the evaluator is tail-recursive, because that evaluator inherits its mechanism for saving state from the underlying Scheme. With the explicit-control evaluator, however, we can trace through the evaluation process to see when procedure calls cause a net accumulation of information on the stack.
Our evaluator is tail-recursive, because in order to evaluate the final expression of a sequence we transfer directly to eval-dispatch without saving any information on the stack. Hence, evaluating the final expression in a sequence---even if it is a procedure call (as in sqrt-iter, where the if expression, which is the last expression in the procedure body, reduces to a call to sqrt-iter)---will not cause any information to be accumulated on the stack.@footnote{This implementation of tail recursion in ev-sequence is one variety of a well-known optimization technique used by many compilers. In compiling a procedure that ends with a procedure call, one can replace the call by a jump to the called procedure's entry point. Building this strategy into the interpreter, as we have done in this section, provides the optimization uniformly throughout the language.}
If we did not think to take advantage of the fact that it was unnecessary to save information in this case, we might have implemented eval-sequence by treating all the expressions in a sequence in the same way---saving the registers, evaluating the expression, returning to restore the registers, and repeating this until all the expressions have been evaluated:@footnote{We can define no-more-exps? as follows:
(define (no-more-exps? seq) (null? seq))
}
ev-sequence (test (op no-more-exps?) (reg unev)) (branch (label ev-sequence-end)) (assign exp (op first-exp) (reg unev)) (save unev) (save env) (assign continue (label ev-sequence-continue)) (goto (label eval-dispatch)) ev-sequence-continue (restore env) (restore unev) (assign unev (op rest-exps) (reg unev)) (goto (label ev-sequence)) ev-sequence-end (restore continue) (goto (reg continue))
This may seem like a minor change to our previous code for evaluation of a sequence: The only difference is that we go through the save-restore cycle for the last expression in a sequence as well as for the others. The interpreter will still give the same value for any expression. But this change is fatal to the tail-recursive implementation, because we must now return after evaluating the final expression in a sequence in order to undo the (useless) register saves. These extra saves will accumulate during a nest of procedure calls. Consequently, processes such as sqrt-iter will require space proportional to the number of iterations rather than requiring constant space. This difference can be significant. For example, with tail recursion, an infinite loop can be expressed using only the procedure-call mechanism:
(define (count n) (newline) (display n) (count (+ n 1)))
Without tail recursion, such a procedure would eventually run out of stack space, and expressing a true iteration would require some control mechanism other than procedure call.
### 5.4.3 Conditionals, Assignments, and Definitions
As with the metacircular evaluator, special forms are handled by selectively evaluating fragments of the expression. For an if expression, we must evaluate the predicate and decide, based on the value of predicate, whether to evaluate the consequent or the alternative.
Before evaluating the predicate, we save the if expression itself so that we can later extract the consequent or alternative. We also save the environment, which we will need later in order to evaluate the consequent or the alternative, and we save continue, which we will need later in order to return to the evaluation of the expression that is waiting for the value of the if.
ev-if (save exp) ; save expression for later (save env) (save continue) (assign continue (label ev-if-decide)) (assign exp (op if-predicate) (reg exp)) (goto (label eval-dispatch)) ; evaluate the predicate
When we return from evaluating the predicate, we test whether it was true or false and, depending on the result, place either the consequent or the alternative in exp before going to eval-dispatch. Notice that restoring env and continue here sets up eval-dispatch to have the correct environment and to continue at the right place to receive the value of the if expression.
ev-if-decide (restore continue) (restore env) (restore exp) (test (op true?) (reg val)) (branch (label ev-if-consequent)) ev-if-alternative (assign exp (op if-alternative) (reg exp)) (goto (label eval-dispatch)) ev-if-consequent (assign exp (op if-consequent) (reg exp)) (goto (label eval-dispatch))
#### Assignments and definitions
Assignments are handled by ev-assignment, which is reached from eval-dispatch with the assignment expression in exp. The code at ev-assignment first evaluates the value part of the expression and then installs the new value in the environment. Set-variable-value! is assumed to be available as a machine operation.
ev-assignment (assign unev (op assignment-variable) (reg exp)) (save unev) ; save variable for later (assign exp (op assignment-value) (reg exp)) (save env) (save continue) (assign continue (label ev-assignment-1)) (goto (label eval-dispatch)) ; evaluate the assignment value ev-assignment-1 (restore continue) (restore env) (restore unev) (perform (op set-variable-value!) (reg unev) (reg val) (reg env)) (assign val (const ok)) (goto (reg continue))
Definitions are handled in a similar way:
ev-definition (assign unev (op definition-variable) (reg exp)) (save unev) ; save variable for later (assign exp (op definition-value) (reg exp)) (save env) (save continue) (assign continue (label ev-definition-1)) (goto (label eval-dispatch)) ; evaluate the definition value ev-definition-1 (restore continue) (restore env) (restore unev) (perform (op define-variable!) (reg unev) (reg val) (reg env)) (assign val (const ok)) (goto (reg continue))
Exercise 5.23: Extend the evaluator to handle derived expressions such as cond, let, and so on (section 4.1.2). You may cheat'' and assume that the syntax transformers such as cond->if are available as machine operations.@footnote{This isn't really cheating. In an actual implementation built from scratch, we would use our explicit-control evaluator to interpret a Scheme program that performs source-level transformations like cond->if in a syntax phase that runs before execution.}
Exercise 5.24: Implement cond as a new basic special form without reducing it to if. You will have to construct a loop that tests the predicates of successive cond clauses until you find one that is true, and then use ev-sequence to evaluate the actions of the clause.
Exercise 5.25: Modify the evaluator so that it uses normal-order evaluation, based on the lazy evaluator of section 4.2.
### 5.4.4 Running the Evaluator
With the implementation of the explicit-control evaluator we come to the end of a development, begun in Chapter 1, in which we have explored successively more precise models of the evaluation process. We started with the relatively informal substitution model, then extended this in Chapter 3 to the environment model, which enabled us to deal with state and change. In the metacircular evaluator of Chapter 4, we used Scheme itself as a language for making more explicit the environment structure constructed during evaluation of an expression. Now, with register machines, we have taken a close look at the evaluator's mechanisms for storage management, argument passing, and control. At each new level of description, we have had to raise issues and resolve ambiguities that were not apparent at the previous, less precise treatment of evaluation. To understand the behavior of the explicit-control evaluator, we can simulate it and monitor its performance.
We will install a driver loop in our evaluator machine. This plays the role of the driver-loop procedure of section 4.1.4. The evaluator will repeatedly print a prompt, read an expression, evaluate the expression by going to eval-dispatch, and print the result. The following instructions form the beginning of the explicit-control evaluator's controller sequence:@footnote{We assume here that read and the various printing operations are available as primitive machine operations, which is useful for our simulation, but completely unrealistic in practice. These are actually extremely complex operations. In practice, they would be implemented using low-level input-output operations such as transferring single characters to and from a device.
To support the get-global-environment operation we define
(define the-global-environment (setup-environment)) (define (get-global-environment) the-global-environment)
}
read-eval-print-loop (perform (op initialize-stack)) (perform (op prompt-for-input) (const ";;; EC-Eval input:")) (assign exp (op read)) (assign env (op get-global-environment)) (assign continue (label print-result)) (goto (label eval-dispatch)) print-result (perform (op announce-output) (const ";;; EC-Eval value:")) (perform (op user-print) (reg val)) (goto (label read-eval-print-loop))
When we encounter an error in a procedure (such as the unknown procedure type error'' indicated at apply-dispatch), we print an error message and return to the driver loop.@footnote{There are other errors that we would like the interpreter to handle, but these are not so simple. See Exercise 5-30.}
unknown-expression-type (assign val (const unknown-expression-type-error)) (goto (label signal-error)) unknown-procedure-type (restore continue) ; clean up stack (from apply-dispatch) (assign val (const unknown-procedure-type-error)) (goto (label signal-error)) signal-error (perform (op user-print) (reg val)) (goto (label read-eval-print-loop))
For the purposes of the simulation, we initialize the stack each time through the driver loop, since it might not be empty after an error (such as an undefined variable) interrupts an evaluation.@footnote{We could perform the stack initialization only after errors, but doing it in the driver loop will be convenient for monitoring the evaluator's performance, as described below.}
If we combine all the code fragments presented in sections 5-4-1-5-4-4, we can create an evaluator machine model that we can run using the register-machine simulator of section 5.2.
(define eceval (make-machine '(exp env val proc argl continue unev) eceval-operations '( read-eval-print-loop <entire machine controller as given above> )))
We must define Scheme procedures to simulate the operations used as primitives by the evaluator. These are the same procedures we used for the metacircular evaluator in section 4.1, together with the few additional ones defined in footnotes throughout section 5.4.
(define eceval-operations (list (list 'self-evaluating? self-evaluating) <complete list of operations for eceval machine>))
Finally, we can initialize the global environment and run the evaluator:
(define the-global-environment (setup-environment)) (start eceval) ;;; EC-Eval input: (define (append x y) (if (null? x) y (cons (car x) (append (cdr x) y)))) ;;; EC-Eval value: ok ;;; EC-Eval input: (append '(a b c) '(d e f)) ;;; EC-Eval value: (a b c d e f)
Of course, evaluating expressions in this way will take much longer than if we had directly typed them into Scheme, because of the multiple levels of simulation involved. Our expressions are evaluated by the explicit-control-evaluator machine, which is being simulated by a Scheme program, which is itself being evaluated by the Scheme interpreter.
#### Monitoring the performance of the evaluator
Simulation can be a powerful tool to guide the implementation of evaluators. Simulations make it easy not only to explore variations of the register-machine design but also to monitor the performance of the simulated evaluator. For example, one important factor in performance is how efficiently the evaluator uses the stack. We can observe the number of stack operations required to evaluate various expressions by defining the evaluator register machine with the version of the simulator that collects statistics on stack use (section 5.2.4), and adding an instruction at the evaluator's print-result entry point to print the statistics:
print-result (perform (op print-stack-statistics)); added instruction (perform (op announce-output) (const ";;; EC-Eval value:")) ... ; same as before
Interactions with the evaluator now look like this:
;;; EC-Eval input: (define (factorial n) (if (= n 1) 1 (* (factorial (- n 1)) n))) (total-pushes = 3 maximum-depth = 3) ;;; EC-Eval value: ok ;;; EC-Eval input: (factorial 5) (total-pushes = 144 maximum-depth = 28) ;;; EC-Eval value: 120
Note that the driver loop of the evaluator reinitializes the stack at the start of each interaction, so that the statistics printed will refer only to stack operations used to evaluate the previous expression.
Exercise 5.26: Use the monitored stack to explore the tail-recursive property of the evaluator (section 5.4.2). Start the evaluator and define the iterative factorial procedure from section 1.2.1:
(define (factorial n) (define (iter product counter) (if (> counter n) product (iter (* counter product) (+ counter 1)))) (iter 1 1))
Run the procedure with some small values of n. Record the maximum stack depth and the number of pushes required to compute n! for each of these values.
• You will find that the maximum depth required to evaluate n! is independent of n. What is that depth?
• Determine from your data a formula in terms of n for the total number of push operations used in evaluating n! for any n >= 1. Note that the number of operations used is a linear function of n and is thus determined by two constants.
Exercise 5.27: For comparison with Exercise 5-26, explore the behavior of the following procedure for computing factorials recursively:
(define (factorial n) (if (= n 1) 1 (* (factorial (- n 1)) n)))
By running this procedure with the monitored stack, determine, as a function of n, the maximum depth of the stack and the total number of pushes used in evaluating n! for n >= 1. (Again, these functions will be linear.) Summarize your experiments by filling in the following table with the appropriate expressions in terms of n:
Maximum depth Number of pushes
Recursive
factorial
Iterative
factorial
The maximum depth is a measure of the amount of space used by the evaluator in carrying out the computation, and the number of pushes correlates well with the time required.
Exercise 5.28: Modify the definition of the evaluator by changing eval-sequence as described in section 5.4.2 so that the evaluator is no longer tail-recursive. Rerun your experiments from Exercise 5-26 and Exercise 5-27 to demonstrate that both versions of the factorial procedure now require space that grows linearly with their input.
Exercise 5.29: Monitor the stack operations in the tree-recursive Fibonacci computation:
(define (fib n) (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))
• Give a formula in terms of n for the maximum depth of the stack required to compute Fib(n) for n >= 2. Hint: In section 1.2.2 we argued that the space used by this process grows linearly with n.
• Give a formula for the total number of pushes used to compute Fib(n) for n >= 2. You should find that the number of pushes (which correlates well with the time used) grows exponentially with n. Hint: Let S(n) be the number of pushes used in computing Fib(n). You should be able to argue that there is a formula that expresses S(n) in terms of S(n - 1), S(n - 2), and some fixed overhead''constant k that is independent of n. Give the formula, and say what k is. Then show that S(n) can be expressed as a Fib(n + 1) + b and give the values of a and b.
Exercise 5.30: Our evaluator currently catches and signals only two kinds of errors---unknown expression types and unknown procedure types. Other errors will take us out of the evaluator read-eval-print loop. When we run the evaluator using the register-machine simulator, these errors are caught by the underlying Scheme system. This is analogous to the computer crashing when a user program makes an error.@footnote{Regrettably, this is the normal state of affairs in conventional compiler-based language systems such as C. In @acronym{UNIX}(tm) the system dumps core,'' and in @acronym{DOS}/Windows(tm) it becomes catatonic. The Macintosh(tm) displays a picture of an exploding bomb and offers you the opportunity to reboot the computer---if you're lucky.} It is a large project to make a real error system work, but it is well worth the effort to understand what is involved here.
• Errors that occur in the evaluation process, such as an attempt to access an unbound variable, could be caught by changing the lookup operation to make it return a distinguished condition code, which cannot be a possible value of any user variable. The evaluator can test for this condition code and then do what is necessary to go to signal-error. Find all of the places in the evaluator where such a change is necessary and fix them. This is lots of work.
• Much worse is the problem of handling errors that are signaled by applying primitive procedures, such as an attempt to divide by zero or an attempt to extract the car of a symbol. In a professionally written high-quality system, each primitive application is checked for safety as part of the primitive. For example, every call to car could first check that the argument is a pair. If the argument is not a pair, the application would return a distinguished condition code to the evaluator, which would then report the failure. We could arrange for this in our register-machine simulator by making each primitive procedure check for applicability and returning an appropriate distinguished condition code on failure. Then the primitive-apply code in the evaluator can check for the condition code and go to signal-error if necessary. Build this structure and make it work. This is a major project.
### Notes
Based on Structure and Interpretation of Computer Programs, a work at https://mitpress.mit.edu/sicp/.
|
# [web] CSS div positioning problem
This topic is 4356 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
OK, I am using CSS and in it I have defined:
#DirectoryMenu
{
position:absolute; top:0.25em;
max-width:16em;
border-style:double;
background-color:#FFCC99;
}
#Content
{
position:absolute; left:18em; top:0.25em;
}
Then in the body of my webpage I have
<body>
....a bunch of code....
</div>
<div id="Content">
.....a bunch of code.....
</div>
</body>
This works OK for now, but what I want to do is have the DirectoryMenu (whose width will dynamically adjust depending on the length of what is displayed in it)on the top left, and the Content beside it, running from the right side of the DirectoryMenu to the right side of the window. Currently I have set the Content section to an absolute left position because I can not figure out how to have it "float" over to the right side of the Directory Menu. I have tried adding "float:left;" to the Content, but then it floats over top of the DirectoryMenu, and if I try using relative positioning instead of absolute, then the Content appears underneith the DirectoryMenu. To see what it currently looks like go here. Any help would be appreciated. Thanks.
##### Share on other sites
Dont position either of them using absolute.
Make both of them float:left. And make the content div clear:right.
##### Share on other sites
Quote:
Original post by PhilMortonDont position either of them using absolute.Make both of them float:left. And make the content div clear:right.
I tried this but the Content section appeared below the DirectoryMenu, not beside it. I should mention that in my DirectoryMenu and Content sections there are breaks (br /); it is not one big object like a table or something. Does anyone else have any ideas? Thanks
##### Share on other sites
I'm still having this problem if anybody has a potential solution for me. Thanks.
##### Share on other sites
hmm, you could give this tutorial a go;
http://www.456bereastreet.com/lab/developing_with_web_standards/csslayout/2-col/
Or search for "two column layout div css" to find other tutorials on google...
##### Share on other sites
Something really basic would be:
<html><head><style type="text/css">#menu { float: left; width: 20%; border: 0.1em solid #000; padding: 0.5em;}#content { float: right; width: 70%; border: 0.1em solid #000; padding: 0.5em;}</style></head><body><div id="menu"> <ul> <li>Item 1</li> <li>Item 2</li> </ul></div><div id="content"><p>My Content</p><p>And other stuff like that</p></div></body></html>
If you want to include somthing like a footer box below both of them, just put another div below them with the style clear: all and it should stretch across the page.
##### Share on other sites
Awesome, thanks for the tutorials. They helped alot, but I am still having some trouble. I now have 4 div sections: Heading, DirectoryMenu, Pictures (content), and Container which holds all of the other divs. Here is how I have them defined now:
/* Background properties */body{ padding:0; margin:0; background-color:#FFFFFF; color:#000000;}/* Everything is contained within this div */#Container{ width:100%; margin:0 auto;}/* Heading properties */#Heading{ border-style:solid; border-color:#000000; border-width:thin; background-color:#FFFFCC; margin:0.25em; font-weight:bold; text-align:center; color:#0000CC; clear:both;}/* Directory Menu properties */#DirectoryMenu{ float:left; max-width:35%; border-style:double; background-color:#CCCCCC; padding:0.25em; margin:0.25em;}/* Pictures section properties */#Pictures{ margin:0.25em; padding:0.25em;}
and this is the format they are defined:
<div id="Container"> <div id="Heading">....</div> <div id="DirectoryMenu">....</div> <div id="Pictures">....</div></div>
Problem 1 - The Pictures section contains 2 tables; 1 for the Pictures, and another for the Pages (i.e. if there are 100 pictures and you are showing 10 pictures per page, you will need 10 pages). I want the Pictures section to extend to the right edge of the screen (i.e. the right edge of the Container div) so that the tables extend from the left edge of the DirectoryMenu to the right edge of the screen. I have tried making the tables widths 100%, but then it makes the tables width the size of the screen width, so it extends past the right edge of the screen quite a bit. If I don't specify a width in the tables (as it currently is) then the tables only extend as much is needed to display their content. How could I get the tables to extend to the right edge of the screen?
Problem 2 - The Pictures section does not remain a rectangle. That is, if the Pictures section extends past the DirectoryMenu section, then the Pictures section moves over to the left edge of the screen. Is there any way I could specify it not to do this?
To see the page as it currently is go here. Scroll all the way to the bottom to see Problem 2.
Any more help would be very appreciated. Thanks
##### Share on other sites
Instead of floating the directory menu left, you could float the pictures section right. That will solve #1 (because 100% table width will be picture container width) and #2.
##### Share on other sites
Quote:
Original post by SanderInstead of floating the directory menu left, you could float the pictures section right. That will solve #1 (because 100% table width will be picture container width) and #2.
I tried this and it seemed to correct the width problem, but it also pushed the Pictures section down underneith the DirectoryMenu section. How could I fix this problem so it stays beside the DirectoryMenu, not under it? Thanks
1. 1
2. 2
frob
16
3. 3
4. 4
5. 5
• 20
• 13
• 14
• 76
• 22
• ### Forum Statistics
• Total Topics
632139
• Total Posts
3004369
×
|
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Single Abrikosov vortices as quantized information bits
## Abstract
Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logics. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for creation of high-density digital cryoelectronics. In this work we provide a proof of concept for Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between 0 and 1 states, a short access time, a scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent for such a device due to the quantized nature of the vortex.
## Introduction
Growing power consumption in supercomputers is becoming an increasing problem. For an exaflop computer it is predicted to be on a 100-MW level1. Management of such a power would be very difficult for the present semiconductor-based technology. The corresponding maintenance costs exceed the cost of refrigeration to cryogenic temperatures. Therefore, efforts are taken for the development of superconducting computers2, which would not only drastically decrease the consumed power but could also greatly increase the operation speed3,4,5. Lack of suitable cryogenic random access memory (RAM) is the ‘main obstacle to the realization of high-performance computer systems and signal processors based on superconducting electronics’6.
Requirements for a cryogenic RAM are rather demanding2,6. Owing to a limited cooling power at low temperatures, it should be non-volatile with zero static consumption and extremely low access energy 10−18 J bit−1 and, of course, it should be fast and should have high density. Today the most developed superconducting RAM is based on a storage of a flux quantum Φ0 in a SQUID (superconducting quantum interference device) loop7. However, such a RAM has a major scalability problem6. Recently, several proposals have been made aiming to employ hybrid superconductor/ferromagnet (S/F) structures in RAM8,9,10,11,12. In this case the information is stored in magnetic layers, as in magnetic random access memory (MRAM)13. Read/write operation and limits of scalability for such a hybrid RAM remain to be studied.
Quantized Abrikosov vortex (AV) represents the smallest magnetic bit Φ0 in a superconductor with a characteristic size given by London penetration depth λ100 nm. The AV can be easily manipulated by short current pulses and can be detected in different ways14,15,16,17. Therefore, it is interesting to investigate a possibility of using a single AV as a memory bit18,19.
In this work we provide a proof of concept for Abrikosov vortex-based random access memory (AVRAM) memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. AVRAM cells are characterized by an infinite magnetoresistance (MR) between 0 and 1 states, a short access time, a scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent for such a device due to the quantized nature of the vortex.
## Results
### AVRAM with a Josephson spin-valve read-out
Here we demonstrate two prototypes of AVRAM cells with different geometries and read-out principles: Nanoscale Josephson spin valves (JSVs)20,21 and planar Josephson junctions17,22,23 with a vortex trap. Details of fabrication, parameters of the studied devices and experimental set-up can be found in Methods.
Figure 1a shows a sketch of a nanoscale JSV, in which junction barrier consists of a spin valve. Such a device allows simultaneous analysis of both the Josephson current Ic and the spin-valve MR20, which facilitate two ways of AV detection (Supplementary Fig. 1 and Supplementary Note 1). Figure 1b shows Ic(H) modulation for a JSV#1. Field, parallel to the junction plane, is swept from a sufficiently large negative value, at which one antivortex is trapped in a Nb electrode of the JSV. An abrupt jump in Ic, marked by a circle, occurs on exit of the antivortex17. Figure 1c shows resistances R(H) for the same JSV measured with different a.c. currents Ia.c.Ic for the field sweep from positive to negative. R(H) reflects Fraunhofer modulation of Ic(H). The resistance jumps on exit of the vortex. The field at which AV exits depends on applied current15,17.
Figure 1d shows spin-valve MR for the JSV#2 measured at large current Ia.c.>>Ic. Soft hysteresis around H=0 is due to rotation of magnetization of the two F layers from parallel to antiparallel states20, as marked by thick vertical arrows. At higher fields |H|>1 kOe another type of hysteresis with abrupt switching between discrete states appears. It is due to one-by-one entrance/exit of AVs into Nb electrodes15. Stray fields from AV offset the spin valve by ΔHΦ0/dSVw, where w is the width and dSV is the total thickness of the spin valve. For our nanoscale JSV (w=200 nm and dSV=30 nm) the offset ΔH1 kOe is remarkably large and easily measurable.
For a better control of the AV position we introduced an artificial vortex trap in Nb, as shown in Fig. 1e. This greatly improved the stability of write operation. Figure 1f demonstrates the controllable write and erase operation for this memory cell. The top plot shows a current train consisting of positive and negative pulses. The bottom plot shows the corresponding JSV resistance. The high/low resistance corresponds to states with/without AV. It is seen that the AV is introduced by positive and removed by negative pulses of small amplitudes 20 μA.
### AVRAM with a planar Josephson junction read-out
In Fig. 2 we analyse operation of another type of a memory cell based on a particularly simple planar structure. Figure 2a shows a scanning electron microscopy image and a sketch of the cell. It consists of a cross-shaped superconducting film with a vortex trap and two read-out planar Josephson junctions22. AV generates a 0−π phase shift in the junctions, which strongly affects the Ic(H) pattern17. This is demonstrated in Fig. 2b for one of the junctions. The second junctions shows similar behaviour (Supplementary Fig. 2 and Supplementary Note 2) confirming that the signal originates from AV in the trap and not elsewhere. Figure 2c represents R(H) for different field sweeps. Three distinct states are seen: 0 vortex free (black); +1 with a vortex (red); and −1 with an antivortex (blue curve). The planar structure has two important advantages: first, 0 and ±1 states are stable at H=0; and second, AV is introduced at a very low field 1 Oe due to a large demagnetization factor (flux-focusing effect)23 (field is applied perpendicular to the plain).
Figure 2d demonstrates write and erase operation with short pulses of different amplitudes. Measurements are done at H0.8 Oe to distinguish +1 and −1 states. It is seen that depending on the pulse height and direction we can either switch between 0 and +1 (left plot), 0 and −1 (middle plot), or −1 and +1 (right plot) states. Application of additional pulses in the same direction does not change the state of the device.
Figure 2e represents R(H) measured on simultaneous sweeping of magnetic field and applying current pulses. It is seen that a controllable switching between 0 and +1 states occurs in the whole field range, implying that external magnetic field is not needed and does not hinder operation of such AVRAM cell. Moreover, H=0 is one of the optimal points for having the largest MR.
Figure 2f demonstrates switching of the device, starting from the 0 state, for a pulse train with increasing current amplitude at H=0. It is seen that the vortex is introduced above a selection current I2.1 mA. RAM should be stable with respect to half-selection current, which is indicated by the dashed line in the top plot. It is seen that not only half-selection but also 12 other pulses between half-selection and selection do not cause switching of the memory state. Figure 2g demonstrates dependence of the final state on the pulse amplitude. The top plot corresponds to write operation starting from the 0 state. The bottom plot shows erase operation starting from the +1 state. At I<−1.6 mA the device switches to 0 state and at I<−2.0 mA, to −1 state. Abruptness of the threshold currents and absence of sub-threshold switching indicate excellent half-selection stability of both write and erase operations.
In Fig. 2h we analyse 0–1 write–erase operation at zero field. Top plot demonstrates high-endurance operation for almost 2 h (Supplementary Fig. 3 and Supplementary Note 3). Bottom plot shows a time sequence for the shortest (instrumentation limited) pulse duration of 17 μs. Both states are stable and non-volatile. Note that because R=0 in the 0 state, the device is characterized by infinite MR24. The remarkable reproducibility of switching is the consequence of quantized nature of the vortex.
## Discussion
Finally we would like to discuss perspectives and limitations of the AVRAM.
Scalability. The main problem of existing memory based on SQUID7 is the lack of scalability6: the required current IΦ0/L increases with decreasing the loop inductance L on miniaturization of the loop. In AVRAM the applied current density J should exceed the depinning current density Jp, which is a material constant. Therefore, threshold current will scale down with the cross-section of the device. Consequently, the AVRAM is scalable at least down to the vortex size λ100 nm. We demonstrated this on our JSV type nanostructures (Fig. 1). Vortex states exist even in smaller structures down to the coherence length ξ10 nm (ref. 25), but stabilization of the vortex state in so small structures would require significant magnetic fields, as already seen for our nanoscale JSV.
Speed. AV is moved by Lorentz force FL0J/c (per unit length of the vortex). For a film of width w>>λ the time required to move the vortex in, or out is t=w/2v, where v0J/ is the drift velocity of the vortex and η0Hc2/c2ρn is the viscosity of the vortex motion26. Thus, t=Hc2w/2cJρn. Taking the upper critical field Hc2=20 kOe, w=1 μm, Jp=106 A cm−2 and normal resistivity ρn=10−5 Ω cm, typical for sputtered Nb films27, we obtain t=1 ns. For mesoscopic structures wλ, entrance and exit of AV is not due to viscous motion, but due to switching between metastable states25,28,29. At the selection current a barrier between 0 and 1 states turns to zero and the switching time is limited by the relaxation time of the order parameter τr (ref. 30), which can be less than a nanosecond. Note that for planar junctions w should be compared with Pearl length λP2/d so that even few-μm-wide planar junctions with d<λ may be in the mesoscopic state23. Ultimately the operation time is limited by propagation of a current pulse 100 ps due to kinetic inductances of electrodes6.
Write/erase energy. Cryogenic RAM must have very low access energy. For the AV the minimum write/erase energy is equal to the work done by Lorentz force to move the vortex across the structure: E0I/2c. Even for a fairly high-pulse current I=1 mA we get an acceptably low E=10−18 J.
Thus, we demonstrated that AV can be used as an information bit. The two studied devices utilize different detectable quantities of the vortex: the stray field and the phase shift. Our JSVs demonstrate the possibility of scaling to sub-micron sizes and having very small write currents. The apparent disadvantage, however, is the required significant magnetic field: the smaller the structure the larger the field. Here planar junctions provide a great advantage because they can operate at zero field. The field scale in planar structures is reduced by the flux-focusing effect due to large demagnetization factor. We emphasize that excellent reproducibility, half-selection stability and high endurance are inherent for AVRAM due to quantized nature of the vortex, which prevents intermediate states. Finally, we note that similar devices can be used for doing basic Boolean operations. For example, AND or OR operations can be performed by sending current pulses into serial or parallel connection of two AVRAM cells, respectively. The pulse amplitude should be such that it causes switching only if maximum one, but not both, of the cells are in the high-resistance state. This should be possible due to excellent selectivity of switching, shown in Fig. 2g. Therefore, we conclude that such devices have a large potential for application in digital cryoelectronics.
## Methods
### Sample fabrication
Thin film heterostructures were deposited by magnetron sputtering. Films were first patterned into μm-size bridges by photolithography and ion etching and subsequently nano-patterned by focused ion beam (FIB). Nanoscale JSV devices are made by three-dimensional nanosculpturing with FIB17,20,21. Planar Nb/CuNi/Nb junctions were made by cutting Cu0.47Ni0.53/Nb (50/70 nm) double layers by FIB17,22,23. Vortex traps for both types of memory cells (a hole with a diameter 30–50 nm) were also made by FIB.
### Josephson spin-valve parameters
We studied several JSV devices with dissimilar F layers made of diluted (CuNi) or strong (Py, Co and Ni) ferromagnets: JSV#1 Nb/Cu/Py/Cu/Co/Cu/Nb (200/5/2/10/2/5/200 nm) with sizes 200 nm × 1.8 μm, JSV#2 Nb/Cu0.5Ni0.5/Cu/Cu0.4Ni0.6/Nb (200/10/10/10/200 nm) with sizes 200 nm × 1 μm and JSV#3 Nb/Ni/Cu/Cu0.4Ni0.6/Nb (200/7/10/10/200 nm) with sizes 135 × 530 nm2. All of them were showing similar behaviour.
### Experimental
Measurements were performed in closed-cycle cryostats. Critical currents were determined from current–voltage characteristics by finding the extreme currents within a voltage range of a certain threshold level. For measurements of resistance, a set of synchronized lock-in amplifiers built around a field-programmable gate array was used.
How to cite this article: Golod, T. et al. Single Abrikosov vortices as quantized information bits. Nat. Commun. 6:8628 doi: 10.1038/ncomms9628 (2015).
## References
1. Holmes, D. S., Ripple, A. L. & Manheimer, M. A. Energy-efficient superconducting computing—power budgets and requirements. IEEE Trans. Appl. Supercond. 23, 1701610 (2013).
2. IARPA, Office of the Director of National Intelligence, USA. Cryogenic Computing Complexity (C3) program (2013) http://www.iarpa.gov/index.php/research-programs/c3/baa.
3. Likharev, K. K. & Semenov, V. K. RSFQ logic/memory family: a new Josephson-junction technology for sub-terahertz-clock-frequency digital systems. IEEE Trans. Appl. Supercond. 1, 3–28 (1991).
4. Nagasawa, S et al. Design of a 16 kbit superconducting latching/SFQ hybrid RAM. Supercond. Sci. Technol. 12, 933–936 (1999).
5. Mukhanov, O. A. Energy-efficient single flux quantum technology. IEEE Trans. Appl. Supercond. 21, 760–769 (2011).
6. Ortlepp, T. & Van Duzer, T. Access time and power dissipation of a model 256-bit single flux quantum RAM. IEEE Trans. Appl. Supercond. 24, 1300307 (2014).
7. Nagasawa, S., Hinode, K., Satoh, T., Kitagawa, Y. & Hidaka, M. Design of all-dc-powered high-speed single flux quantum random access memory based on a pipeline structure for memory cell arrays. Supercond. Sci. Technol. 19, S325–S330 (2006).
8. Larkin, T. I. et al. Ferromagnetic Josephson switching device with high characteristic voltage. Appl. Phys. Lett. 100, 222601 (2012).
9. Naaman, O., Miller, D., Herr, A. Y. & Birge, N. O. Josephson magnetic memory cell system. Patent application WO2013180946 A1 (2013).
10. Zdravkov, V. I. et al. Memory effect and triplet pairing generation in the superconducting exchange biased Co/CoOx/Cu41Ni59/Nb/Cu41Ni59 layered heterostructure. Appl. Phys. Lett. 103, 062604 (2013).
11. Baek, B., Rippard, W. H., Benz, S. P., Russek, S. E. & Dresselhaus, P. D. Hybrid superconducting-magnetic memory device using competing order parameters. Nat. Commun. 5, 3888 (2014).
12. Nevirkovets, I. P., Chernyashevskyy, O., Prokopenko, G. V., Mukhanov, O. A. & Ketterson, J. B. Superconducting-ferromagnetic transistor. IEEE Trans. Appl. Supercond. 24, 1800506 (2014).
13. Gallagher, W. J. & Parkin, S. S. P. Development of the magnetic tunnel junction MRAM at IBM: from first junctions to a 16-Mb MRAM demonstrator chip. IBM J. Res. Dev. 50, 5–23 (2006).
14. Sok, J. & Finnemore, D. K. Thermal depinning of a single superconducting vortex in Nb. Phys. Rev. B 50, 12770–12773 (1994).
15. Rydh, A., Golod, T. & Krasnov, V. M. Field- and current controlled switching between vortex states in a mesoscopic superconductor. J. Phys. Conf. Ser. 153, 012027 (2009).
16. Milošević, M. V., Kanda, A., Hatsumi, S., Peeters, F. M. & Ootuka, Y. Local current injection into mesoscopic superconductors for the manipulation of quantum states. Phys. Rev. Lett. 103, 217003 (2009).
17. Golod, T., Rydh, A. & Krasnov, V. M. Detection of the phase shift from a single Abrikosov vortex. Phys. Rev. Lett. 104, 227003 (2010).
18. Uehara, S. & Nagata, K. Trapped vortex memory cells. Appl. Phys. Lett. 39, 992–993 (1981).
19. Miyahara, K., Mukaida, M. & Hohkawa, K. Abrikosov vortex memory. Appl. Phys. Lett. 47, 754–756 (1985).
20. Iovan, A., Golod, T. & Krasnov, V. M. Controllable generation of a spin-triplet supercurrent in a Josephson spin valve. Phys. Rev. B 90, 134514 (2014).
21. Banerjee, N., Robinson, J. W. A. & Blamire, M. G. Reversible control of spin-polarized supercurrents in ferromagnetic Josephson junctions. Nat. Commun. 5, 4771 (2014).
22. Krasnov, V. M. et al. Planar S-F-S Josephson junctions made by focused ion beam etching. Physica C 418, 16–22 (2005).
23. Boris, A. A. et al. Evidence for nonlocal electrodynamics in planar Josephson junctions. Phys. Rev. Lett. 111, 117002 (2013).
24. Miao, G. X., Ramos, A. V. & Moodera, J. S. Infinite magnetoresistance from the spin dependent proximity effect in symmetry driven bcc-Fe/V/Fe heteroepitaxial superconducting spin valves. Phys. Rev. Lett. 101, 137001 (2008).
25. Chibotaru, L. F. et al. Ginzburg-Landau description of confinement and quantization effects in mesoscopic superconductors. J. Math. Phys. 46, 095108 (2005).
26. Bardeen, J. & Stephen, M. J. Theory of the motion of vortices in superconductors. Phys. Rev. 140, A1197–A1207 (1965).
27. Krasnov, V. M., Oboznov, V. A. & Ryazanov, V. V. Anomalous temperature dependence of Hc1 in superconducting Nb/Cu multilayer. Physica C 196, 335–339 (1992).
28. Bezryadin, A., Ovchinnikov, Yu. N. & Pannetier, B. Nucleation of vortices inside open and blind microholes. Phys. Rev. B 53, 8553–8560 (1996).
29. Schweigert, V. A. & Peeters, F. M. Flux penetration and expulsion in thin superconducting disks. Phys. Rev. Lett. 83, 2409–2412 (1999).
30. Frahm, H., Ullah, S. & Dorsey, A. T. Flux dynamics and the growth of the superconducting phase. Phys. Rev. Lett. 66, 3067–3070 (1991).
## Acknowledgements
The work was supported by the Swedish Research Council and the Swedish Foundation for International Cooperation in Research and Higher Education. We are grateful to O. Mukhanov for valuable discussions.
## Author information
Authors
### Contributions
T.G. and A.I. fabricated the devices and carried out the measurements with input from V.M.K.; V.M.K. conceived the project and wrote the manuscript with input from T.G. and A.I.; all authors contributed to analysing and interpreting the data.
### Corresponding author
Correspondence to V. M. Krasnov.
## Ethics declarations
### Competing interests
The authors declare no competing financial interests.
## Supplementary information
### Supplementary Information
Supplementary Figures 1-3 and Supplementary Notes 1-3 (PDF 1445 kb)
## Rights and permissions
Reprints and Permissions
Golod, T., Iovan, A. & Krasnov, V. Single Abrikosov vortices as quantized information bits. Nat Commun 6, 8628 (2015). https://doi.org/10.1038/ncomms9628
• Accepted:
• Published:
• DOI: https://doi.org/10.1038/ncomms9628
• ### Demonstration of a superconducting diode-with-memory, operational at zero magnetic field with switchable nonreciprocity
• Taras Golod
Nature Communications (2022)
• ### Thermal superconducting quantum interference proximity transistor
• Federico Paolucci
• Francesco Giazotto
Nature Physics (2022)
• ### Tunable Magnetic Labyrinth for Abrikosov Vortices
• R. Divan
• W. K. Kwok
Journal of Superconductivity and Novel Magnetism (2022)
• ### Preliminary demonstration of a persistent Josephson phase-slip memory cell with topological protection
• Elia Strambini
• Francesco Giazotto
Nature Communications (2021)
• ### Mass of Abrikosov vortex in high-temperature superconductor YBa$$_2$$Cu$$_3$$O$$_{7-\delta }$$
• Roman Tesař
• Michal Šindler
• Jan Koláček
Scientific Reports (2021)
|
jQuery
# Select all elements except the current element
Published on May 28th, 2020
If you don't want the current element to be selected in an array of elements that belongs to same class or type, just use the .not() method like the example below:
$(".btn").click(function(){$(".btn").not(this).text('selected');
});
The above code will change the text for all buttons except the current element.
|
## sktjell Group Title WHAT IS The range of 3 √ x !?!?! PLEASE PLEASE PLAESE PLAESE I BEGGG I BEGGGGGGG! LOL 2 years ago 2 years ago
1. Kainui Group Title
What can x be? What numbers can you not take the square root of?
2. sktjell Group Title
O?
3. eigenschmeigen Group Title
$\sqrt{0} = 0$
4. Kainui Group Title
You can take the square root of 0, since 0*0=0. You can't divide by 0, don't forget that when determining range and domain.
5. sktjell Group Title
YES BUT LET ME SHOW U GUYS THE OPTIONS
6. Kainui Group Title
7. eigenschmeigen Group Title
$\sqrt{-1} \notin R$
8. sktjell Group Title
o sorry lol :)
9. sktjell Group Title
10. eigenschmeigen Group Title
what types of numbers can we not square root?
11. Kainui Group Title
Haha it's fine. So yes, eigenschmeigen is right, if you can guess what that means. Remember that a domain is all the numbers of x you can choose to satisfy an equation while the range is all the numbers you can have as a result of your domain.
12. sktjell Group Title
i attached a picture of the options can u guys check it out
13. Kainui Group Title
Numbers in these brackets [x,y] mean that they are included in the range or domain. Numbers in parenthesis (x,y) mean that they're not included, but every number between them is.
14. sktjell Group Title
can u guys help because he didnt help :(
15. imagreencat Group Title
Well, what do you think the answer is?
16. eigenschmeigen Group Title
@Kainui i personally think you were explaining very well
17. sktjell Group Title
C ?
18. eigenschmeigen Group Title
what happens if you put a negative number for x?
19. sktjell Group Title
@eigenschmeigen im just very slow in math like i dont get anything im sorry and i need to get this half a credit graduate with my class by june 3rd and its a whole bunch of quizes this invluded and im just stressing! :(((
20. eigenschmeigen Group Title
what happens if we put a negative number in for x?
21. eigenschmeigen Group Title
what do we get, for example, if i put in -16
22. sktjell Group Title
i believe the answer is A.
23. A.Avinash_Goutham Group Title
A....
24. eigenschmeigen Group Title
yes you would be correct
25. sktjell Group Title
thanks so much.
26. imagreencat Group Title
I hope you're not just guessing and you truly, genuinely understand the concept behind your answer.
|
Html For Loop Table, Middle Eastern Cooking Classes Perth, Floor Plan Small Kitchen Layouts, Volkswagen Touareg 2021 Price, 2010 Nissan Maxima Tpms Reset, Floor Plan Small Kitchen Layouts, Petra 3-piece White Kitchen Island With 2 Stools, Best Antifouling Paint Australia, " /> problem of interval estimation
Here there will be a … The t value with a 95% confidence and 24 degrees of freedom … To achieve 95% interval estimation for the mean boiling point with total length less than 1 degree, the student will have to take 23 measurements. 2. of significance is given in the following table: = 1350 hours, s = 100 hours, since the level of significance is (100-90)% =10% thus a is 0.1, hence the significant value at 10% is Z, statistical hypothesis: Null Hypothesis and Alternative Hypothesis, Level of Significants, Type of Errors, Testing Procedure : Large sample theory and test of significants for single mean, Hypothesis Testing: Solved Example Problems. Your 95% confidence interval for the mean length of walleye fingerlings in this fish hatchery pond is (The lower end of the interval is 7.5 – 0.45 = 7.05 inches; the upper end is 7.5 + 0.45 = 7.95 inches.) On the other hand, interval estimation uses sample data to calcul… with a standard deviation of 100 hours. Photo by torbakhopper Point Estimates A statistic (value obtained from sample) is used to estimate a parameter (value from the population). If the underlying distribution is merely … Substituting the appropriate values into the expression for ${m}$ and solving for n gives the calculation. Therefore, to solve many confidence interval problems, it suffices to write the problem in a format similar to a previously solved problem. In the first two problems we know the value of the population standard deviation. The critical value for a 95% confidence interval is 1.96, where 1 − 0.95 2 = 0.025. Answer: A. Assumption satisfied by: 1. For example, the interval arithmetic can tackle the estimation problem for nonlinear systems but leads to a conservative result … \ = 22.09 }$, Process Capability (Cp) & Process Performance (Pp). In a sample of 400 selected at random, a sample mean of 50 was obtained. confidence intervals and for testing of significance, the critical values. Point Estimation. With the same confidence level, what minimum sample size should it have so that the interval width has a maximum length of 1? This single value 55is The erratic behavior of the coverage probability of the standard Wald confidence interval has previously been remarked on in the literature (Blyth and Still, Agresti and Coull, Santner and others). It is true that estimation accuracy increases with large samples, but there is still no reason we should expect a point estimate from a given sample to be exactly equal to the population parameter it is supposed to estimate. 1. Plasmodium falciparum, the most dangerous type of malaria is caused by the most virulent species of the … Solves problems involving confidence interval estimation of the population proportion. \ = (101.82 - 0.81, 101.82 + 0.81) \\[7pt] For example, the interval arithmetic can tackle the estimation problem for nonlinear systems but leads to a conservative result … Take a sample, find x bar. Introduction to Statistical Methodology Interval Estimation is a standard normal random variable. Interval estimation . In this section consider interval estimation. If = 1 2 , then = (1 )=2. Solves problems involving confidence interval estimation of the population proportion ... Acn business plan templates help solve math problems step by step for free download descriptive essay on a relaxing place college personal statement essay format example review of related literature apa style … If we take repeated below. Let us. Get help with your Interval estimation homework. ... We started the tip with a problem statement that data analysts would like to estimate population parameters from sample statistics. s is a good approximation of σ; If we want stronger confidence in what … 2. All the elements of interest in a particular study form the population. As you see more examples, you will feel more confident about solving confidence interval problems. In that way, we could safely use the confidence interval estimation methods of Chapter 8. Determine the confidence interval with a confidence level of 97% for the average population. Collected from every element of the interval estimation methods usually establishes, respectively, the Pearson s... These confidence intervals in the following table: the mean summer income of a class business! And testing of n =100,000 was taken reducing the level of confidence is misinterpreted as implying.... The expression for$ { \sigma = 0.49 } $and solving for n the... That 's easy for you to understand interval is defined as: a. a point and... Species of the corresponding interval will decrease the length of 90 cm certainly a relevant and important related,... That tells how confident we are about the interval shows the precision with which we can estimate \theta! Frequentist statistics, probability statements can not be made about parameters namely, interval estimation of monotone of!, but is not considered here mainly due to space con-siderations the population. With a confidence interval with a standard deviation of 1.2 gms concerning the of! Level is 1 -α. estimate ( 1− a ) is the confidence interval problems parameters called. Often can not be a good point estimate and indeed one that is biased low patients. Of the components produced ensure acceptance by the general public [ citation ]. Substituting the appropriate values into the expression for$ { \sigma = 0.49 } $and solving for n the! Value is used as an estimate, the interval estimation uses sample data to calcul… interval estimates of parameters. ( 1− a ) is the confidence level is 1 -α. estimate that is low! This is because the word confidence is greater in # 2 than what it is either less 88. Suffices to write the problem of interval estimation, the interval, \hat! Parameter lies considering margin of error equal to 1.645, so the 90 % conÞdence interval is a that... The best estimate of the interval increases when the sample mean of 7.4 and a standard deviation of 1.6 in! Hand, interval estimation is an alternative statement is: the mean summer income of a parameter. Sta 9719 topic IV through available output measurements a margin of error =100,000 was.. Is equal to 1.645, so the 90 % conÞdence interval is illustrated below begin showing. Estimation questions that are explained in a particular study form the population intervals in the first two problem of interval estimation. −500 100000−1 =0.022 states through available output measurements such that the new group... To determine the approximate … point and interval estimation is to design a state to... And U. α/2 =1.645 average is found to be normally distributed, or 2 the literature there. Made about parameters latter produces a range of probable values for a population parameter consumer. Is because the word confidence is misinterpreted as implying probability been found previously so that the sampled said. Each year, with such a small sample, we can estimate$ \theta $be distributed... Itself to be most readily understood by the technique of confidence is greater in # 2 than it. Concept of theory of estimation namely R using, for example > qnorm ( 0.975 [. … Furthermore the problem of interval estimation ( aka confidence intervals in the table... Persistent than is appreciated as an estimate of the interval sample that not! The true mean length of 1 of significance is given in the previous there... Estimation and testing of n =100,000 was taken because the standard deviation 95 % confidence estimate. E. the width of the population who intend to buy two values enclosed by parentheses, as (. Following table: the calculation sampled households said they would buy an experimental product factor is confidence. This in R using, for example, suppose we want to estimate system states available! Corresponding interval will decrease 42 % of the interval estimation technique which is based point... Both the classical and Bayesian approaches to choosing Cˆ ( x ) interval width a. Lower and … Academia.edu is a range of values general, the estimate is a! On point estimate interval estimate an alternative statement is: the mean life time of light bulbs is expected lie. [ 1 ] 1.959964 for = 0:025 two important concepts readily understood the. To design a state estimator to estimate an alternative statement is: the breaking... Of 7.4 and a standard deviation of 1.6 cm in length one is the mean length of the.. … discussion of the interval shows the precision with which we can be about the … interval estimation point! We know the value of an unknown parameter population is normally distributed, 2. Deviation decreases as n increases, for example > qnorm ( 0.975 ) [ 1 ] for. Is desired for the proportion of households in the interval the components in the previous topic there an! Of success in a format similar to a previously solved problem and problem of interval estimation considerations, data can. ( ) is the length of all the elements of interest in a particular study form the population intend... Research papers 90 % conÞdence interval is a platform for academics to share research papers can be about the interval... E. the width of the probability of success in a sample of 100 measurements at breaking strength cotton... You to understand at least one million deaths around the world each year with... Which the mean income is between 380 and 420$ /week n = 10, we calculate! Of estimation namely with n = 10, we can be about interval... Sequestred infected erythrocytes in plasmodium falciparum malaria patients is discussed in this regard, credible intervals commonly. Types of estimation is used.There are two important concepts states through available output measurements to lie components. Deviation of 1.6 cm in length... we started the tip with a 95 % confidence interval the. 100000 −500 100000−1 =0.022 value with a 95 or 99 percent probability, statistics. The questionable … STA 9719 topic IV - ° topic IV we concluded that 95 % of population! Life time of light bulbs is expected to lie converging dynamical systems many practical control problems the literature there! Corresponding interval will decrease the length of the range of values suppose want! Dynamical systems most readily understood by the technique of confidence interval is 1.96, where 1 − α =0.90 and... Frequently the confidence interval is usually expressed by two values enclosed by parentheses as! Is to design a state estimator to estimate system states through available measurements. These two problems we know the value of the population produces a component of a with! The word confidence is greater in # 2 than what it is either less 88! Coverage properties of the interval increases when the sample mean ( μ.. Average is found to be 101.82, with ninety percent among African children the mean breaking strength of cotton.. Intervals in the neural networks, there are different methods concerning the problem of statistical practice, many of interval! Time, cost, and the questionable … STA 9719 topic IV R = (... Words, an estimate, the problem of interval estimation, e.g, when a single is. Confidence interval is 1.96, where 1 − 0.95 2 = 0.025 method! Two important concepts a standard deviation of 100 measurements at breaking strength of cotton.... Topic there was an in-depth discussion of point estimation, there are two important concepts solved. Lower and … Academia.edu is a range of numbers in which a population parameter by... Similar to a previously solved problem 500 100000 −500 100000−1 =0.022 disease may spend less time per night in population... B. a lower and … Academia.edu is a statistic used as an of! Answer: Ideally, with ninety percent among African children Savage Neyman, the problem of interval estimation questions are... Than what it is for # 1 this sample has a maximum length the. Width of the interval ) =2 calculation of confidence the neural networks, there problem of interval estimation two important concepts in cases... For n gives the calculation tested to assess the amount of time in IV! Smaller the interval correct this problem, but is not considered here mainly to! Lies considering margin of error to … this problem of interval estimation has been solved ( ) is called as point estimation income! = 0.025 population standard deviation corresponding interval will decrease \sigma = 0.49 } $solving... Cˆ ( x ) for academics to share research papers binomial distribution have.. Value while the latter produces a single value is used as an estimate of product. Business students the t-score associated with the same confidence level, what minimum sample size should have... The reported interval, the problem of interval estimation is two-fold a … estimate! Individuals sufferering from Alzheimer 's disease may spend less time per night in the interval estimation that. Solved problem Bayesian approaches to choosing Cˆ ( x ) of Alzheimer 's patients are tested to the. A previously solved problem hence we concluded that 95 % confidence interval that likely contains the true mean of! { \sigma = 0.49 }$ and solving for n gives the calculation of confidence a platform academics! Estimation technique which is based on the other hand, interval estimation to!, thebiaswouldvanishasn becomeslargebecause ( n − 1 ) =2 objective of estimation namely is. Or 99 percent probability, … statistics: estimation produces a component of a proportion! Problems, it suffices to write the problem of statistical practice, many of the population lies., namely, interval estimation of a population parameter given by a single value the.
|
# Resistors in parallel
Calculator and formulas for calculating resistors in parallel
## Calculate resistance parallel connection
When resistors are connected in parallel, the current is distributed over the individual resistors.
## Calculate total resistance
Exponents are not allowed. Enter all values in a suitable, equal unit of measurement. If you enter all values, e.g. in kOhm, the result is also displayed in kOhm.
For the calculation, enter the values of the individual resistors separated by a semicolon.
Example: 33; 12,1; 22
Resistors in parallel calculator
Enter the resistances separated by semicolonst Decimal places 0 1 2 3 4 6 8 10 Result Total resistance:
## Formulas for resistors in parallel
In order to calculate the total resistance of several resistors in parallel, their conductance values are added. The conductance is the reciprocal of the resistance. The formula for three resistors connected in parallel is:
$$\displaystyle \frac{1}{R_{ges}}=\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}$$
If the total resistance of two parallel resistors is to be calculated, the following formula can be used.
$$\displaystyle R_{ges}=\frac{R_1· R_2}{R_1 + R_2}$$
|
CaltechAUTHORS
A Caltech Library Service
# Items where Person is "Ramakrishnan-D"
Up a level
Export as ASCII CitationBibTeXDublin CoreEP3 XMLEndNoteHTML CitationJSONMETSObject IDsOpenURL ContextObjectRDF+N-TriplesRDF+N3RDF+XMLReferReference Manager (RIS)TSV
Group by: Date | Item Type | First Author | No Grouping
Jump to: 2018 | 2016 | 2015 | 2014 | 2012 | 2011 | 2010 | 2009 | 2007 | 2005 | 2004 | 2003 | 2000 | 1997 | 1994
Number of items: 26.
## 2018
Grayson, Daniel R. and Ramakrishnan, Dinakar (2018) Eisenstein Series of Weight One, q-Averages of the 0-Logarithm and Periods of Elliptic Curves. In: Geometry, Algebra, Number Theory, and Their Information Technology Applications. Springer Proceedings in Mathematics & Statistics. No.251. Springer , Cham, Switzerland, pp. 245-266. ISBN 978-3-319-97378-4. https://resolver.caltech.edu/CaltechAUTHORS:20190329-145231143
## 2016
Martin, Kimball and Ramakrishnan, Dinakar (2016) A comparison of automorphic and Artin L-series of GL(2)-type agreeing at degree one primes. In: Advances in the theory of automorphic forms and their L-functions. Contemporary mathematics. No.664. American Mathematical Society , Providence, RI, pp. 339-350. ISBN 978-1-4704-1709-3. https://resolver.caltech.edu/CaltechAUTHORS:20160707-080856277
## 2015
Ramakrishnan, Dinakar (2015) A mild Tchebotarev theorem for GL(n). Journal of Number Theory, 146 . pp. 519-533. ISSN 0022-314X. https://resolver.caltech.edu/CaltechAUTHORS:20141120-132636662
Dimitrov, Mladen and Ramakrishnan, Dinakar (2015) Arithmetic Quotients of the Complex Ball and a Conjecture of Lang. Documenta Mathematica, 20 . pp. 1185-1205. ISSN 1431-0635. https://resolver.caltech.edu/CaltechAUTHORS:20160108-082533307
Paranjape, Kapil and Ramakrishnan, Dinakar (2015) Modular forms and Calabi-Yau varieties. In: Arithmetic and Geometry. London Mathematical Society lecture note series. No.420. Cambridge University Press , Cambridge, pp. 351-372. ISBN 9781107462540. https://resolver.caltech.edu/CaltechAUTHORS:20180806-085816292
Ramakrishnan, Dinakar (2015) Recovering Cusp Forms on GL(2) from Symmetric Cubes. In: SCHOLAR—a Scientific Celebration Highlighting Open Lines of Arithmetic Research. Contemporary Mathematics. No.655. American Mathematical Society , Providence, RI. ISBN 978-1-4704-1457-3. https://resolver.caltech.edu/CaltechAUTHORS:20160211-083113741
## 2014
Ramakrishnan, Dinakar (2014) An exercise concerning the selfdual cusp forms on GL(3). Indian Journal of Pure and Applied Mathematics, 45 (5). pp. 777-785. ISSN 0019-5588. https://resolver.caltech.edu/CaltechAUTHORS:20150202-140155905
Dimitrov, Mladen and Ramakrishnan, Dinakar (2014) Compact arithmetic quotients of the complex 2-ball and a conjecture of Lang. . (Submitted) https://resolver.caltech.edu/CaltechAUTHORS:20140428-101450443
## 2012
Blasius, Don and Ramakrishnan, Dinakar and Varadarajan, V. S. (2012) In memoriam: Jonathan Rogawski. Pacific Journal of Mathematics, 260 (2). p. 257. ISSN 0030-8730. https://resolver.caltech.edu/CaltechAUTHORS:20130103-082012333
Krishnamurthy, M. (2012) Determination of cusp forms on GL(2) by coefficients restricted to quadratic subfields (with an appendix by Dipendra Prasad and Dinakar Ramakrishnan). Journal of Number Theory, 132 (6). pp. 1359-1384. ISSN 0022-314X. https://resolver.caltech.edu/CaltechAUTHORS:20180806-142617876
Prasad, Dipendra and Ramakrishnan, Dinakar (2012) Self-dual representations of division algebras and Weil groups: A contrast. American Journal of Mathematics, 134 (3). pp. 749-772. ISSN 0002-9327. https://resolver.caltech.edu/CaltechAUTHORS:20121113-123411958
Michel, Philippe and Ramakrishnan, Dinakar (2012) Consequences of the Gross–Zagier formulae: Stability of average L-values, subconvexity, and non-vanishing mod p. In: Number Theory, Analysis and Geometry: In Memory of Serge Lang. Springer , New York, pp. 437-459. ISBN 978-1-4614-1259-5. https://resolver.caltech.edu/CaltechAUTHORS:20120620-113738336
Goldfeld, Dorian and Jorgenson, Jay and Jones, Peter and Ramakrishnan, Dinakar and Ribet, Kenneth and Tate, John (2012) Number Theory, Analysis and Geometry: In Memory of Serge Lang. Springer , Boston, MA. ISBN 978-1-4614-1259-5. https://resolver.caltech.edu/CaltechAUTHORS:20180806-134204157
## 2011
Ramakrishnan, Dinakar (2011) Icosahedral Fibres of the Symmetric Cube and Algebraicity. In: On certain L-functions: conference in honor of Freydoon Shahidi on certain L-functions. Clay Mathematics Proceedings. No.13. American Mathematical Society , Providence, RI, pp. 483-499. ISBN 0-8218-5204-3. https://resolver.caltech.edu/CaltechAUTHORS:20110902-114949812
Arthur, James and Cogdell, James W. and Gelbart, Steve and Goldberg, David and Ramakrishnan, Dinakar and Yu, Jiu-Kang (2011) On certain L-functions: conference on certain L-functions in honor of Freydoon Shahidi. Clay Mathematics Proceedings. Vol.13. American Mathematical Society , Providence, RI. ISBN 978-0-8218-5204-0. https://resolver.caltech.edu/CaltechAUTHORS:20180807-133743522
## 2010
Dunfield, Nathan M. and Ramakrishnan, Dinakar (2010) Increasing the number of fibered faces of arithmetic hyperbolic 3-manifolds. American Journal of Mathematics, 132 (1). pp. 53-97. ISSN 0002-9327. https://resolver.caltech.edu/CaltechAUTHORS:20100218-101507964
## 2009
Murre, Jacob and Ramakrishnan, Dinakar (2009) Local Galois Symbols on E × E. In: Motives and algebraic cycles : a celebration in honour of Spencer J. Bloch. Fields Institute communications. Vol.56. American Mathematical Society , pp. 257-291. ISBN 978-0-8218-4494-6. https://resolver.caltech.edu/CaltechAUTHORS:20100621-111547961
Ramakrishnan, Dinakar (2009) Remarks on the Symmetric Powers of Cusp Forms on GL(2). Contemporary Mathematics, 488 . pp. 237-256. ISSN 0271-4132. https://resolver.caltech.edu/CaltechAUTHORS:20091130-134058409
## 2007
Ramakrishnan, Dinakar (2007) Irreducibility and Cuspidality. In: Representation theory and automorphic forms. Progress in Mathematics. No.255. Birkhäuser , Boston, pp. 1-27. ISBN 978-0-8176-4505-2. https://resolver.caltech.edu/CaltechAUTHORS:20100721-134623343
## 2005
Ramakrishnan, Dinakar and Rogawski, Jonathan (2005) Average values of modular L-series via the relative trace formula. Pure and Applied Mathematics Quarterly, 1 (4). pp. 701-735. ISSN 1558-8602. https://resolver.caltech.edu/CaltechAUTHORS:20110829-082741960
Paranjape, Kapil and Ramakrishnan, Dinakar (2005) Quotients of E^n by a_(n+1) and Calabi-Yau manifolds. In: Algebra and Number Theory: Proceedings of the Silver Jubilee Conference University of Hyderabad. Hindustan Book Agency , Gurgaon, pp. 90-98. ISBN 978-81-85931-57-9. https://resolver.caltech.edu/CaltechAUTHORS:20200221-155530980
## 2004
Ramakrishnan, Dinakar and Wang, Song (2004) A Cuspidality Criterion for the Functorial Product on GL(2) × GL(3) with a Cohomological Application. International Mathematics Research Notices, 2004 (27). pp. 1355-1394. ISSN 1073-7928. https://resolver.caltech.edu/CaltechAUTHORS:20111019-072552735
## 2003
Ramakrishnan, Dinakar and Wang, Song (2003) On the Exceptional Zeros of Rankin–Selberg L-Functions. Compositio Mathematica, 135 (2). pp. 211-244. ISSN 0010-437X. https://resolver.caltech.edu/CaltechAUTHORS:20111026-134851953
## 2000
Ramakrishnan, Dinakar (2000) Modularity of the Rankin-Selberg {$L$}-series, and multiplicity one for {${\rm SL}(2)$}. Annals of Mathematics, 152 (1). pp. 45-111. ISSN 0003-486X. https://resolver.caltech.edu/CaltechAUTHORS:RAMaom00
## 1997
Luo, Wenzhi and Ramakrishnan, Dinakar (1997) Determination of modular elliptic curves by Heegner points. Pacific Journal of Mathematics, 181 (3). pp. 251-258. ISSN 0030-8730. https://resolver.caltech.edu/CaltechAUTHORS:LUOpjm97
## 1994
Barthel, Laure and Ramakrishnan, Dinakar (1994) A nonvanishing result for twists of L-functions of GL(n). Duke Mathematical Journal, 74 (3). pp. 681-700. ISSN 0012-7094. https://resolver.caltech.edu/CaltechAUTHORS:20170408-155534575
This list was generated on Fri Aug 7 07:32:16 2020 PDT.
|
# string.lower() function in Lua programming
LuaServer Side ProgrammingProgramming
There are certain scenarios in our code that when we are working with the strings, we might want some string to be in lowercase, like consider a very basic case of an API which is using an authentication service, and the password for that authentication service should in lowercase, and in that case, we need to convert whatever the password the user enters into lowercase.
Converting a string to its lowercase in Lua is done by the string.lower() function.
## Syntax
string.lower(s)
In the above syntax, the identifier s denotes the string which we are trying to convert into its lowercase.
## Example
Let’s consider a very simple example of the same, where we will convert a string literal into its lowercase.
Consider the example shown below −
Live Demo
s = string.lower("ABC")
print(s)
## Output
abc
An important point to note about the string.lower() function is that it doesn’t modify the original string, it just does the modification to a copy of it and returns that copy. Let’s explore this particular case with the help of an example.
## Example
Consider the example shown below −
Live Demo
s = "ABC"
s1 = string.lower("ABC")
print(s)
print(s1)
## Output
ABC
abc
Also, if a string is already in its lowercase form then nothing will change.
## Example
Consider the example shown below −
Live Demo
s = "a"
string.lower("a")
print(s)
## Output
a
Updated on 19-Jul-2021 12:19:22
|
### Web: http://arxiv.org/abs/2112.13838
June 17, 2022, 1:12 a.m. | Joe Suk, Samory Kpotufe
In bandit with distribution shifts, one aims to automatically adapt to
unknown changes in reward distribution, and restart exploration when necessary.
While this problem has been studied for many years, a recent breakthrough of
Auer et al. (2018, 2019) provides the first adaptive procedure to guarantee an
optimal (dynamic) regret $\sqrt{LT}$, for $T$ rounds, and an unknown number $L$
of changes. However, while this rate is tight in the worst case, it remained
open whether faster rates are possible, without …
### Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
### Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
### Data Science Intern
@ NannyML | Remote
### Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
### Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
### Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY
|
# HiggsTools
Title Authors arxiv ref Node Supervisor ESRs
Elzbieta Richter-Was
HiggsTools Second Annual Meeting
Higgs Hunting 2015
Measurement of the cross section of high transverse momentum $Z\to b\bar{b}$ production in proton--proton collisions at $\sqrt{s}$=8TeV with the ATLAS Detector ATLAS Collaboration 1404.7042
TauSpinner: a tool for simulating CP effects in H to tau tau decays at LHC T. Przedzinski, E. Richter-Was, Z. Was 1406.1647
Measurement of the Higgs boson mass from the $H\to \gamma\gamma$ and $H\to ZZ^*\to 4\ell$ channels with the ATLAS detector using 25 fb$^{-1}$ of pp collision data ATLAS Collaboration 1406.3827
Measurement of the cross-section of high transverse momentum vector bosons reconstructed as single jets and studies of jet substructure in pp collisions at $\sqrt{s}$ = 7 TeV with the ATLAS detector ATLAS Collaboration 1407.0800
Search for the $b\bar{b}$ decay of the Standard Model Higgs boson in associated (W/Z)H production with the ATLAS detector ATLAS Collaboration 1409.6212
Measurement of the inclusive jet cross-section in proton-proton collisions at $\sqrt{s}$=7 TeV using 4.5 fb$^{−1}$ of data with the ATLAS detector ATLAS Collaboration 1410.8857
Evidence for the Higgs-boson Yukawa coupling to tau leptons with the ATLAS detector ATLAS Collaboration 1501.04943
Combined Measurement of the Higgs Boson Mass in pp Collisions at $\sqrt{s}$=7 and 8 TeV with the ATLAS and CMS Experiments ATLAS, CMS Collaborations ATLAS and CMS Collaborations 1503.07589
A search for $t\bar{t}$ resonances using lepton-plus-jets events in proton-proton collisions at $\sqrt{s}$=8 TeV with the ATLAS detector ATLAS Collaboration 1505.07018
Searches for Higgs boson pair production in the $hh\to bb\tau\tau$, $\gamma\gamma WW^∗$, $\gamma\gamma bb$, $bbbb$ channels with the ATLAS detector ATLAS Collaboration 1509.04670
Search for single production of vector-like quarks decaying into $Wb$ in $pp$ collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector ATLAS Collaboration 1602.05606
Search for single production of a vector-like quark via a heavy gluon in the 4b final state with the ATLAS detector in pp collisions at $\sqrt{s}=8$ TeV ATLAS Collaboration 1602.06034
Identification of high transverse momentum top quarks in $pp$ collisions at $\sqrt{s}= 8$ TeV with the ATLAS detector ATLAS Collaboration 1603.03127
The performance of the jet trigger for the ATLAS detector during 2011 data taking ATLAS Collaboration 1606.07759
Potential for optimizing the Higgs boson CP measurement in $H\to\tau\tau$ decays at the LHC including machine learning techniques R. Jozefowicz, E. Richter-Was, Z. Was 1608.02609
Deep learning approach to the Higgs boson CP measurement in H to tau tau decay and associated systematics E. Barberio, B. Le, E. Richter-Was, Z. Was, D. Zanzi, J. Zaremba 1706.07983
Evidence for the associated production of the Higgs boson and a top quark pair with the ATLAS detector ATLAS Collaboration 1712.08891
Search for the Standard Model Higgs boson produced in association with top quarks and decaying into a $b\bar b$ pair in pp collisions at $\sqrt{s}$= 13 TeV with the ATLAS detector ATLAS Collaboration 1712.08895
|
# statsmodels.stats.descriptivestats.sign_test¶
statsmodels.stats.descriptivestats.sign_test(samp, mu0=0)[source]
Signs test.
Parameters
samparray_like
1d array. The sample for which you want to perform the signs test.
mu0float
See Notes for the definition of the sign test. mu0 is 0 by default, but it is common to set it to the median.
Returns
M
p-value
Notes
The signs test returns
M = (N(+) - N(-))/2
where N(+) is the number of values above mu0, N(-) is the number of values below. Values equal to mu0 are discarded.
The p-value for M is calculated using the binomial distribution and can be interpreted the same as for a t-test. The test-statistic is distributed Binom(min(N(+), N(-)), n_trials, .5) where n_trials equals N(+) + N(-).
|
Saturday, September 06, 2008 ... //
John Dalton: birthday
John Dalton was born to a family of a weaver (and Quakers, independent Christians dissatisfied with the existing denominations) in Northwest England on September 6th, 1766.
He is known for insights on color blindness (Daltonism), atomic theory (that we will discuss in detail), chemistry, and meteorology (the Dalton minimum of solar activity is named after him, too). He also wrote a book on English grammar in 1801. ;-)
He died in 1844 after his third stroke.
As a kid, he was already helping his brother to run a Quaker school. Elihu Robinson, a competent Quaker experimental weather scientist, inspired John to study the weather. Dalton did quite a lot of original work in meteorology that was mostly ignored. Later, John Gough, a blind philosopher and polymath, also taught John Dalton a lot of other science and helped him to become a teacher of maths and natural sciences in Manchester.
Color blindness
He was actually the first man who scientifically talked about color blindness in 1794, in his paper for the Manchester "Lit & Phil" (not to be confused with Lit Crit!) society. It was a pioneering contribution but his theory was wrong, as he learned in his lifetime. He thought that the eyeball liquid was discolored in color-blind people.
His own eyeball was preserved and it turned out that he had a rare type of color blindness in which the "green" (medium wavelength) cones are completely missing. Imagine that you see #RR00BB instead of #RRGGBB in the red-green-blue notation. This disorder is called deuteroanopia: see his "rainbow". In most cases of color blindness, these "green" cones are not quite missing but they have a mutated pigment.
I won't talk about the Dalton minimum and sunspots here because the climate is not supposed to be the main focus of this text. Instead, let us look at:
Atomic theory
First, in 1800 when he became a secretary of the "Lit & Phil" society, he began to study pressure of liquids and gases. A few years before the official discoveries, he verbally (and vaguely) formulated Gay-Lussac's and/or Charles's law about the expansion of liquids.
In 1801, he stated his Dalton's law: partial pressures of individual gases should be added to get the total result. Note that this law, one that can be given experimental evidence, strongly suggests that the different gases may "overlap" i.e. they should occupy different parts of the volume. That is already a strong hint of the atomic theory.
More importantly, he formulated the law of multiple proportions: when chemical elements combine, the ratios of their "amounts" are given by rational numbers with small numerators and denominators. We have CO and CO2 but not CO_{1.8}. The "amounts" must be measured in moles: however, the actual term "mole" was only introduced in 1893, almost a century later.
For each compound, one mole corresponds to a different mass so we must choose different units in which the "amounts" are integer-valued. (But if you evaporate them, one "mole" always corresponds to the same volume under the same pressure and temperature. I don't quite understand whether Dalton was aware of this correct normalization: probably not.)
At any rate, in some proper units, the mixing ratios are rational. This strongly suggests a discrete underlying structure: atoms. He was not the first one to talk about this concept: Democritus and Lucretius did it millenia ago. But Dalton's picture was somewhat more modern. He also introduced a terminology for molecules that wasn't quite modern: the compounds with 2,3,4 atoms were called binary, ternary, quaternary compounds: see his picture of a new system of chemical philosophy.
He understood the following points properly:
1. Elements are made out of atoms
2. All atoms in the same element are identical
3. For different elements, the atoms are different and have different atomic weights
4. Atoms can combine to create compounds which have fixed rational ratios of the number of atoms
5. Atoms can't be created or destroyed: chemical reactions only regroup them
There was also one point that wasn't so good:
• When atoms only combine in one ratio, it must be binary (1:1)
He clearly assumed that things had to be "simple" and he has cut his throat by Occam's razor in this case ;-) because he had to believe that water was HO (not H2O) and ammonia was NH (not NH3). Well, Nature is obviously not the "simplest one" in this particular sense even if your chemical "philosophy" would lead someone to believe such a principle. In fact, when he determines that the ratios should be given by integers, it would be extremely stupid if Nature didn't prefer other integers beyond the number 1, at least in some cases. :-) Nature would be wasting its possibilities: there are many good reasons why the number of legs is 2 or 4 and not 1 all the time.
Of course, Dalton neglected nuclear physics in his principles: atoms of the same element may differ by the number of neutrons (isotopes) and other reactions, nuclear reactions, can also change the identity of the atoms. But we shouldn't ask for too much from a thinker in 1800.
Testability
I want to add some comments about the testability of ideas and the time-dependence of the amount and character of evidence in favor of the atomic theory. In the ancient Greece, comments about atoms were pure philosophy that was scientifically as good (or bad) as other conceivable philosophies.
But during Dalton's life, things were already very different. The law of partial pressures and especially the rational mixing ratios were extremely powerful weapons that could falsify or at least strongly disfavor many specific enough models that contradicted the existence of atoms. These laws surely didn't tell us everything we wanted to know about the structure of matter: however, they certainly told us something - a lot, in fact - and only scientific dilletantes could think otherwise.
Once the moles were introduced, people could verify new consistency checks about the atoms. Finally, the scientific enemies of the atoms evaporated in 1905 when Einstein correctly explained the Brownian motion (in work also independently pursued by Marian Smoluchowski in 1906). The alternative theories that would explain the chaotic motion of particles in liquids had to be extremely contrived, especially if they wanted to achieve the "distance goes like square root of time" power law for the random walk.
It had to be a random walk and its typical constants were pretty much universal so they had to prove some discrete collisions of basic and rather universal building blocks of all materials: the atoms and molecules. The coefficients told us something about the size of the atoms, too. All these things occurred long before we could actually "see" atoms just a few decades ago. The people who say that they must first see quarks or strings before they take the theory seriously don't tell us anything valuable about science: the only thing these people reveal is that they're narrow-minded, uncreative, slow idiots.
And that's the memo.
|
# Lower semicontinuity
Let $\Omega\subset\mathbb R^n$ be open and bounded. I consider a sequence $u_k:\Omega\to\mathbb R$ of smooth functions which converges uniformly to a function $u:\Omega\to\mathbb R$. Moreover, the sequence of gradients $\nabla u_k$ is uniformly bounded. Now I would like to show that $$\int_\Omega |\nabla u|~dx\leqslant \liminf_{n\to \infty}\int_{\Omega}|\nabla u_k|~dx.$$ References and counterexamples are very welcome.
EDIT: The integral on the lhs exists since the sequence is uniformly Lipschitz and $u$ has therefore (since it is Lipschitz) a gradient defined almost everywhere.
If the sequence of gradients is uniformly bounded in $L^{1+\epsilon}(\Omega)$, $\epsilon>0$, then $\nabla u_{k'}\rightharpoonup \nabla u$ in $L^{1+\epsilon}(\Omega)$ for a subsequence.
This implies $\nabla u_{k'}\rightharpoonup \nabla u$ in $L^{1}(\Omega)$, and the desired inequality is a consequence of weakly lower semicontinuity of norms.
• Thanks for your answer. In my case $\nabla u_k$ is bounded in $L^\infty$. Is then an analogous reasoning possible with weak* convergence instead of weak convergence? What I mean, do I get a subsequence $\nabla u_{k'}\stackrel{*}{\rightharpoonup}\nabla u$ in $L^\infty$ and can deduce from there the same conclusion? (Even if your answer is already enough of course).
• Weak-star convergence in $L^\infty$ implies weak convergence in $L^1$.Then using weak lower semicontinuity of the $L^1$-norm you obtain the same conclusion.
|
# It is so slow!
Tags:
Updated:
As a rule of thumb, the time reported as ‘yalmiptime’ in the output diagnostic should be around fractions of a second for small problems, and typically a fraction of the actual solution time for larger problems. If this is not the case, you probably have a problem with your installation.
As another rule of thumb, yalmiptest(sdpsettings(‘verbose’,0,’solver’,’sedumi’)) takes 2-5 seconds (including initial search for solvers) on my PC machines (2GHz and 3GHz).
Are you running over a network? This can be extremely detrimental for general performance in YALMIP. On some networked Linux version, YALMIP runs 10-100 times slower!
Another way to tackle strange performance issues is to set the field ‘cachesolvers’ in sdpsettings to 1. (One possible cause for poor performance of YALMIP is due to slow network functionality, making the command exist , which is used in optimize when detecting solvers, very slow.) This option can however cause strange behavior if you add or delete solvers, so it should only be used in very rare cases where you know that you have a ridiculously slow network access to your files.
Do you have a poor sparse matrix library in your installation? Some distributions seem to have extremely poor performance for sparse matrices (observed on Linux), which is used a lot in YALMIP. One way to test this is to run the script below. If the displayed result is substantially larger than 20, you have problems (and I have no idea how you can resolve this…)
d = sprandn(1e5,1,0.5);
tic;for i= 1:1000;d([2000:3000]);end;t1 = toc;
d = d';
tic;for i= 1:1000;d([2000:3000]);end;t2 = toc;t1/t2
|
# Integrals - the Substitution Rule with sin^n(x)
## Homework Statement
Given that n is a positive integer, prove ∫sin^n(x)dx=∫cos^n(x)dx from 0 -> pi/2
## Homework Equations
Perhaps sin^2(x)+cos^2(x)=1? Not sure.
## The Attempt at a Solution
I honestly don't even know where to start. Should I set u=sin(x) or cos(x)? Doesn't seem to get the right answer either way...
Related Calculus and Beyond Homework Help News on Phys.org
Simon Bridge
Homework Helper
You need to show that the area under the graph from 0 to 90deg is the same for sin and cosine to any power.
Your first step is to understand the problem - try sketching the graphs for a few powers and shading the area in question to see what you are up against.
There is a rule for integrating powers of sin and cos ... you can derive it from integration by parts. Knowing the rule, you can probably just do it algebraically.
Or you can try a substitution in one like $x=\frac{\pi}{2}-u$ and exploit the parity of the functions.
Sorry, to clarify, the hint says:
Use a trigonometric identity and substitution. Do not solve the definite integrals
Given this information, how would you recommend solving?
SammyS
Staff Emeritus
Homework Helper
Gold Member
## Homework Statement
Given that n is a positive integer, prove ∫sin^n(x)dx=∫cos^n(x)dx from 0 -> pi/2
## Homework Equations
Perhaps sin^2(x)+cos^2(x)=1? Not sure.
## The Attempt at a Solution
I honestly don't even know where to start. Should I set u=sin(x) or cos(x)? Doesn't seem to get the right answer either way...
The co-function identity states that $\displaystyle \cos(x) = \sin\left(\frac{\pi}{2}-x\right)\ .$
Thanks! I got so far:
∫cos^n(x)dx from 0 to pi/2
=∫sin^n(pi/2-x)dx from 0 to pi/2
=∫sin^n(-x)dx from -pi/2 to 0
=∫sin^n(x)dx from 0 to pi/2
Does this look right to you? Thanks for the hint!!
Simon Bridge
Homework Helper
Yeah, that's the idea - it was the second hint post #2 :)
You should comment each step in your actual answer, to explain what you are doing.
SammyS
Staff Emeritus
Homework Helper
Gold Member
Thanks! I got so far:
∫cos^n(x)dx from 0 to pi/2
=∫sin^n(pi/2-x)dx from 0 to pi/2
=∫sin^n(-x)dx from -pi/2 to 0
=∫sin^n(x)dx from 0 to pi/2
Does this look right to you? Thanks for the hint!!
Use the substitution, $u=\frac{\pi}{2}-x$ to show that $\displaystyle \int_{x=0}^{x=\pi/2}\sin^n(\frac{\pi}{2}-x)\,dx = \int_{u=\pi/2}^{u=0}-\sin(u)\,du\ .$
Thanks!
|
# The Category of Sets
The theory of sets was invented as a foundation for all of mathematics. The notion of sets and functions serves as a basis on which to build intuition about categories in general. This chapter gives examples of sets and functions and then discusses commutative diagrams. Ologs are then introduced, allowing us to use the language of category theory to speak about real world concepts. All this material is basic set theory, but it can also be taken as an investigation of the category of sets, which is denoted Set.
# 2.1 Sets and functions
People have always found it useful to put things into bins.
The study of sets is the study of things in bins.
## 2.1.1 Sets
You probably have an innate understanding of what a set is. We can think of a set X as a collection of elements xX, each of which is recognizable as being in X and such that for each pair of named elements x, x′ ∈ X we can tell if x = x′ or not.1 The set of pendulums is the collection of things we agree to call pendulums, each of which is recognizable as being a pendulum, and for any two people pointing at pendulums we can tell if they’re pointing at the same pendulum or not.
Notation 2.1.1.1. The symbol ∅ denotes the set with no elements (see Figure 2.1), which can also be written as { }. The symbol ℕ denotes the set of natural numbers:
The symbol ℤ denotes the set of integers, which contains both the natural numbers and their negatives,
If A and B are sets, we say that A is a subset of B, and write AB, if every element of A is an element of B. So we have ℕ ⊆ ℤ. Checking the definition, one sees that for any set A, we have (perhaps uninteresting) subsets ∅ ⊆ A and AA. We can use set-builder notation to denote subsets. For example, the set of even integers can be written {n ∈ ℤ | n is even}. The set of integers greater than 2 can be written in many ways, such as
$\begin{array}{ccccc}\left\{n\in ℤ|n>2\right\}& \text{or}& \left\{n\in ℕ|n>2\right\}& \text{or}& \left\{n\in ℕ|n⩾3\right\}\end{array}.$
The symbol ∃ means “there exists.” So we could write the set of even integers as
The symbol ∃! means “there exists a unique.” So the statement “∃!x ∈ ℝ such that x2 = 0” means that there is one and only one number whose square is 0. Finally, the symbol ∀ means “for all.” So the statement “∀m ∈ ℕ ∃n ∈ ℕ such that m < n” means that for every number there is a bigger one.
As you may have noticed in defining ℕ and ℤ in (2.1) and (2.2), we use the colon-equals notation “AXY Z” to mean something like “define A to be XY Z.” That is, a colon-equals declaration does not denote a fact of nature (like 2 + 2 = 4) but a choice of the writer.
We also often discuss a certain set with one element, denoted {☺}, as well as the familiar set of real numbers, ℝ, and some variants such as ℝ⩾0 ≔ {x ∈ ℝ | x ⩾ 0}.
Exercise 2.1.1.2.
Let A ≔ {1, 2, 3}. What are all the subsets of A? Hint: There are eight.
A set can have other sets as elements. For example, the set
$X≔\left\{\left\{1,2\right\},\left\{4\right\},\left\{1,3,6\right\}\right\}$
has three elements, each of which is a set.
## 2.1.2 Functions
If X and Y are sets, then a function f from X to Y, denoted f : XY, is a mapping that sends each element xX to an element of Y, denoted f(x) ∈ Y. We call X the domain of the function f, and we call Y the codomain of f.
Note that for every element xX, there is exactly one arrow emanating from x, but for an element yY, there can be several arrows pointing to y, or there can be no arrows pointing to y (see Figure 2.2).
Slogan 2.1.2.1.
Given a function f : XY, we think of X as a set of things, and Y as a set of bins. The function tells us in which bin to put each thing.
Application 2.1.2.2. In studying the mechanics of materials, one wishes to know how a material responds to tension. For example, a rubber band responds to tension differently than a spring does. To each material we can associate a force-extension curve, recording how much force the material carries when extended to various lengths. Once we fix a methodology for performing experiments, finding a material’s force-extension curve would ideally constitute a function from the set of materials to the set of curves.
Exercise 2.1.2.3.
Here is a simplified account of how the brain receives light. The eye contains about 100 million photoreceptor (PR) cells. Each connects to a retinal ganglion (RG) cell. No PR cell connects to two different RG cells, but usually many PR cells can attach to a single RG cell.
Let PR denote the set of photoreceptor cells, and let RG denote the set of retinal ganglion cells.
a. According to the above account, does the connection pattern constitute a function RGPR, a function PRRG, or neither one?
b. Would you guess that the connection pattern that exists between other areas of the brain are function-like? Justify your answer.
Example 2.1.2.4. Suppose that X is a set and X′ ⊆ X is a subset. Then we can consider the function X′ → X given by sending every element of X′ to “itself” as an element of X. For example, if X = {a, b, c, d, e, f} and X′ = {b, d, e}, then X′ ⊆ X. We turn that into the function X′ → X given by bb, dd, ee.2
As a matter of notation, we may sometimes say the following: Let X be a set, and let i : X′ ⊆ X be a subset. Here we are making clear that X′ is a subset of X, but that i is the name of the associated function.
Exercise 2.1.2.5.
Let f : ℕ → ℕ be the function that sends every natural number to its square, e.g., f(6) = 36. First fill in the blanks, then answer a question.
a. 2 ↦ ________
b. 0 ↦ ________
c. −2 ↦ ________
d. 5 ↦ ________
e. Consider the symbol → and the symbol ↦. What is the difference between how these two symbols are used so far in this book?
Given a function f : XY, the elements of Y that have at least one arrow pointing to them are said to be in the image of f; that is, we have
The image of a function f is always a subset of its codomain, im(f) ⊆ Y.
Exercise 2.1.2.6.
If f : XY is depicted by Figure 2.2, write its image, im(f) as a set.
Given a function f : XY and a function g : YZ, where the codomain of f is the same set as the domain of g (namely, Y), we say that f and g are composable
$X\stackrel{f}{\to }Y\stackrel{g}{\to }Z.$
The composition of f and g is denoted by gf : XZ. See Figure 2.3.
Slogan 2.1.2.7.
Given composable functions $X\stackrel{f}{\to }Y\stackrel{g}{\to }Z$, we have a way of putting every thing in X into a bin in Y, and we have a way of putting each bin from Y into a larger bin in Z. The composite, gf : XZ, is the resulting way that every thing in X is put into a bin in Z.
Exercise 2.1.2.8.
If AX is a subset, Example 2.1.2.4 showed how to think of it as a function i : AX. Given a function f : XY, we can compose $A\stackrel{i}{\to }X\stackrel{f}{\to }Y$ and get a function fi: AY. The image of this function is denoted
$f\left(A\right)≔\text{im}\left(f○i\right),$
see (2.3) for the definition of image.
Let X = Y ≔ ℤ, let A ≔ {−1, 0, 1, 2, 3} ⊆ X, and let f : XY be given by f(x) = x2. What is the image set f(A)?
Solution 2.1.2.8.
By definition of image (see (2.3), we have
Since A = {−1, 0, 1, 2, 3} and since i(a) = a for all aA, we have f(A) = {0, 1, 4, 9}. Note that an element of a set can only be in the set once; even though f(−1) = f(1) = 1, we need only mention 1 once in f(A). In other words, if a student has an answer such as {1, 0, 1, 4, 9}, this suggests a minor confusion.
Notation 2.1.2.9. Let X be a set and xX an element. There is a function {☺} → X that sends ☺ ↦ x. We say that this function represents xX. We may denote it x: {☺} → X.
Exercise 2.1.2.10.
Let X be a set, let xX be an element, and let x: {☺} → X be the function representing it. Given a function f : XY, what is fx?
Remark 2.1.2.11. Suppose given sets A, B, C and functions $A\stackrel{f}{\to }B\stackrel{g}{\to }C$. The classical order for writing their composition has been used so far, namely, gf : AC. For any element aA, we write gf(a) to mean g(f(a)). This means “do g to whatever results from doing f to a.”
However, there is another way to write this composition, called diagrammatic order. Instead of gf, we would write f; g : AC, meaning “do f, then do g.” Given an element aA, represented by a: {☺} → A, we have an element a; f; g.
Let X and Y be sets. We write HomSet(X, Y) to denote the set of functions XY.3 Note that two functions f, g : XY are equal if and only if for every element xX, we have f(x) = g(x).
Exercise 2.1.2.12.
Let A = {1, 2, 3, 4, 5} and B = {x, y}.
a. How many elements does HomSet(A, B) have?
b. How many elements does HomSet(B, A) have?
Exercise 2.1.2.13.
a. Find a set A such that for all sets X there is exactly one element in HomSet(X, A). Hint: Draw a picture of proposed A’s and X’s. How many dots should be in A?
b. Find a set B such that for all sets X there is exactly one element in HomSet(B, X).
Solution 2.1.2.13.
a. Here is one: A ≔ {☺}. (Here is another, A ≔ {48}, and another, A ≔ {a1}).
Why? We are trying to count the number of functions XA. Regardless of X and A, in order to give a function XA one must answer the question, Where do I send x? several times, once for each element xX. Each element of X is sent to an element in A. For example, if X = {1, 2, 3}, then one asks three questions: Where do I send 1? Where do I send 2? Where do I send 3? When A has only one element, there is only one place to send each x. A function X → {☺} would be written 1 ↦ ☺, 2 ↦ ☺, 3 ↦ ☺. There is only one such function, so HomSet(X, {☺}) has one element.
b. B = ∅ is the only possibility.
To give a function BX one must answer the question, Where do I send b? for each bB. Because B has no elements, no questions must be answered in order to provide such a function. There is one way to answer all the necessary questions, because doing so is immediate (“vacuously satisfied”). It is like commanding John to “assign a letter grade to every person who is over 14 feet tall.” John is finished with his job the moment the command is given, and there is only one way for him to finish the job. So HomSet(∅, X) has one element.
For any set X, we define the identity function on X, denoted
${\text{id}}_{X}:X\to X,$
to be the function such that for all xX, we have idX(x) = x.
Definition 2.1.2.14 (Isomorphism). Let X and Y be sets. A function f : XY is called an isomorphism, denoted f : $X\stackrel{\cong }{\to }Y$, if there exists a function g : YX such that gf = idX and fg = idY.
In this case we also say that f is invertible and that g is the inverse of f. If there exists an isomorphism $X\stackrel{\cong }{\to }Y$, we say that X and Y are isomorphic sets and may write XY.
Example 2.1.2.15. If X and Y are sets and f : XY is an isomorphism, then the analogue of Figure 2.2 will look like a perfect matching, more often called a one-to-one correspondence. That means that no two arrows will hit the same element of Y, and every element of Y will be in the image. For example, Figure 2.4 depicts an isomorphism $X\stackrel{\cong }{\to }Y$ between four element sets.
Application 2.1.2.16. There is an isomorphism between the set NucDNA of nucleotides found in DNA and the set NucRNA of nucleotides found in RNA. Indeed, both sets have four elements, so there are 24 different isomorphisms. But only one is useful in biology. Before we say which one it is, let us say there is also an isomorphism NucDNA ≅ {A, C, G, T} and an isomorphism NucRNA ≅ {A, C, G, U}, and we will use the letters as abbreviations for the nucleotides.
The convenient isomorphism ${\text{Nuc}}_{\text{DNA}}\stackrel{\cong }{\to }{\text{Nuc}}_{\text{RNA}}$ is that given by RNA transcription; it sends
(See also Application 5.1.2.21.) There is also an isomorphism ${\text{Nuc}}_{\text{DNA}}\stackrel{\cong }{\to }{\text{Nuc}}_{\text{DNA}}$ (the matching in the double helix), given by
Protein production can be modeled as a function from the set of 3-nucleotide sequences to the set of eukaryotic amino acids. However, it cannot be an isomorphism because there are 43 = 64 triplets of RNA nucleotides but only 21 eukaryotic amino acids.
Exercise 2.1.2.17.
Let n ∈ ℕ be a natural number, and let X be a set with exactly n elements.
a. How many isomorphisms are there from X to itself?
b. Does your formula from part (a) hold when n = 0?
Proposition 2.1.2.18. The following facts hold about isomorphism.
1. Any set A is isomorphic to itself; i.e., there exists an isomorphism $A\stackrel{\cong }{\to }A$.
2. For any sets A and B, if A is isomorphic to B, then B is isomorphic to A.
3. For any sets A, B, and C, if A is isomorphic to B, and B is isomorphic to C, then A is isomorphic to C.
Proof. 1. The identity function idA: AA is invertible; its inverse is idA because idA ○ idA = idA.
2. If f : AB is invertible with inverse g : BA, then g is an isomorphism with inverse f.
3. If f : AB and f′ : BC are each invertible with inverses g : BA and g′: CB, then the following calculations show that f′ ○ f is invertible with inverse gg′:
$\begin{array}{c}\left(f\prime ○f\right)○\left(g○g\prime \right)=f\prime ○\left(f○g\right)○g\prime =f\prime ○{\text{id}}_{B}○g\prime =f\prime ○g\prime ={\text{id}}_{C}\\ \left(g○g\prime \right)○\left(f\prime ○f\right)=g○\left(g\prime ○f\prime \right)○f=g○{\text{id}}_{B}○f=g○f={\text{id}}_{A}\end{array}$
Exercise 2.1.2.19.
Let A and B be these sets:
Note that the sets A and B are isomorphic. Suppose that f : B → {1, 2, 3, 4, 5} sends “Bob” to 1, sends ♣ to 3, and sends r8 to 4. Is there a canonical function A → {1, 2, 3, 4, 5} corresponding to f?4
Solution 2.1.2.19.
No. There are a lot of choices, and none is any more reasonable than any other, i.e., none are canonical. (In fact, there are six choices; do you see why?)
The point of this exercise is to illustrate that even if one knows that two sets are isomorphic, one cannot necessarily treat them as the same. To treat them as the same, one should have in hand a specified isomorphism g : $A\stackrel{\cong }{\to }B$, such as ar8, 7 ↦ “Bob”, Q ↦ ♣. Now, given f : B → {1, 2, 3, 4, 5}, there is a canonical function A → {1, 2, 3, 4, 5} corresponding to f, namely, fg.
Exercise 2.1.2.20.
Find a set A such that for any set X, there is an isomorphism of sets
$X\cong {\text{Hom}}_{\mathbf{\text{Set}}}\left(A,X\right).$
Hint: A function AX points each element of A to an element of X. When would there be the same number of ways to do that as there are elements of of X?
Solution 2.1.2.20.
Let A = {☺}. Then to point each element of A to an element of X, one must simply point ☺ to an element of X. The set of ways to do that can be put in one-to-one correspondence with the set of elements of X. For example, if X = {1, 2, 3}, then ☺ ↦ 3 is a function AX representing the element 3 ∈ X. See Notation 2.1.2.9.
Notation 2.1.2.21. For any natural number n ∈ ℕ, define a set
We call n the numeral set of size n. So, in particular, 2 = {1, 2}, 1 = {1}, and 0 = ∅.
Let A be any set. A function f : nA can be written as a length n sequence
We call this the sequence notation for f.
Exercise 2.1.2.22.
a. Let A = {a, b, c, d}. If f : 10A is given in sequence notation by (a, b, c, c, b, a, d, d, a, b), what is f(4)?
b. Let s: 7 → ℕ be given by s(i) = i2. Write s in sequence notation.
Solution 2.1.2.22.
a. c
b. (1, 4, 9, 16, 25, 36, 49)
Definition 2.1.2.23 (Cardinality of finite sets). Let A be a set and n ∈ ℕ a natural number. We say that A has cardinality n, denoted
$|A|=n,$
if there exists an isomorphism of sets An. If there exists some n ∈ ℕ such that A has cardinality n, then we say that A is finite. Otherwise, we say that A is infinite and write |A| ⩾ ∞.
Exercise 2.1.2.24.
a. Let A = {5, 6, 7}. What is |A|?
b. What is |{1, 1, 2, 3, 5}|?
c. What is |ℕ|?
d. What is |{n ∈ ℕ | n ⩽ 5}|?
We will see in Corollary 3.4.5.6 that for any m, n ∈ ℕ, there is an isomorphism mn if and only if m = n. So if we find that A has cardinality m and that A has cardinality n, then m = n.
Proposition 2.1.2.25. Let A and B be finite sets. If there is an isomorphism of sets f : AB, then the two sets have the same cardinality, |A| = |B|.
Proof. If f : AB is an isomorphism and Bn, then An because the composition of two isomorphisms is an isomorphism.
# 2.2 Commutative diagrams
At this point it is difficult to precisely define diagrams or commutative diagrams in general, but we can get a heuristic idea.5 Consider the following picture:
We say this is a diagram of sets if each of A, B, C is a set and each of f, g, h is a function. We say this diagram commutes if gf = h. In this case we refer to it as a commutative triangle of sets, or, more generally, as a commutative diagram of sets.
Application 2.2.1.1. In its most basic form, the central dogma of molecular biology is that DNA codes for RNA codes for protein. That is, there is a function from DNA triplets to RNA triplets and a function from RNA triplets to amino acids. But sometimes we just want to discuss the translation from DNA to amino acids, and this is the composite of the other two. The following commutative diagram is a picture of this fact
Consider the following picture:
We say this is a diagram of sets if each of A, B, C, D is a set and each of f, g, h, i is a function. We say this diagram commutes if gf = ih. In this case we refer to it as a commutative square of sets. More generally, it is a commutative diagram of sets.
Application 2.2.1.2. Given a physical system S, there may be two mathematical approaches f : SA and g : SB that can be applied to it. Either of those results in a prediction of the same sort, f′ : AP and g′ : BP. For example, in mechanics we can use either the Lagrangian approach or the Hamiltonian approach to predict future states. To say that the diagram
commutes would say that these approaches give the same result.
Note that diagram (2.6) is considered to be the same diagram as each of the following:
In all these we have h = gf, or in diagrammatic order, h = f; g.
# 2.3 Ologs
In this book I ground the mathematical ideas in applications whenever possible. To that end I introduce ologs, which serve as a bridge between mathematics and various conceptual landscapes. The following material is taken from Spivak and Kent [43], an introduction to ologs.
## 2.3.1 Types
A type is an abstract concept, a distinction the author has made. Each type is represented as a box containing a singular indefinite noun phrase. Each of the following four boxes is a type:
Each of the four boxes in (2.8) represents a type of thing, a whole class of things, and the label on that box is what one should call each example of that class. Thus ⌜a man⌝ does not represent a single man but the set of men, each example of which is called “a man.” Similarly, the bottom right box represents an abstract type of thing, which probably has more than a million examples, but the label on the box indicates the common name for each such example.
Typographical problems emerge when writing a text box in a line of text, e.g., the text box a man seems out of place, and the more in-line text boxes there are, the worse it gets. To remedy this, I denote types that occur in a line of text with corner symbols; e.g., I write ⌜a man⌝ instead of a man.
### 2.3.1.1 Types with compound structures
Many types have compound structures, i.e., they are composed of smaller units. Examples include
It is good practice to declare the variables in a compound type, as in the last two cases of (2.9). In other words, it is preferable to replace the first box in (2.9) with something like
so that the variables (m, w) are clear.
Rules of good practice 2.3.1.2. A type is presented as a text box. The text in that box should
(i) begin with the word a or an;
(ii) refer to a distinction made and recognizable by the olog’s author;
(iii) refer to a distinction for which instances can be documented;
(iv) be the common name that each instance of that distinction can be called; and
(v) declare all variables in a compound structure.
The first, second, third, and fourth rules ensure that the class of things represented by each box appears to the author to be a well defined set, and that the class is appropriately named. The fifth rule encourages good readability of arrows (see Section 2.3.2).
I do not always follow the rules of good practice throughout this book. I think of these rules being as followed “in the background,” but I have nicknamed various boxes. So ⌜Steve⌝ may stand as a nickname for ⌜a thing classified as Steve⌝ and ⌜arginine⌝ as a nickname for ⌜a molecule of arginine⌝. However, one should always be able to rename each type according to the rules of good practice.
## 2.3.2 Aspects
An aspect of a thing x is a way of viewing it, a particular way in which x can be regarded or measured. For example, a woman can be regarded as a person; hence “being a person” is an aspect of a woman. A molecule has a molecular mass (say in daltons), so “having a molecular mass” is an aspect of a molecule. In other words, when it comes to ologs, the word aspect simply means function. The domain A of the function f : AB is the thing we are measuring, and the codomain is the set of possible answers or results of the measurement.
So for the arrow in (2.10), the domain is the set of women (a set with perhaps 3 billion elements); the codomain is the set of persons (a set with perhaps 6 billion elements). We can imagine drawing an arrow from each dot in the “woman” set to a unique dot in the “person” set, just as in Figure 2.2. No woman points to two different people nor to zero people—each woman is exactly one person—so the rules for a function are satisfied. Let us now concentrate briefly on the arrow in (2.11). The domain is the set of molecules, the codomain is the set ℝ>0 of positive real numbers. We can imagine drawing an arrow from each dot in the “molecule” set to a single dot in the “positive real number” set. No molecule points to two different masses, nor can a molecule have no mass: each molecule has exactly one mass. Note, however, that two different molecules can point to the same mass.
### 2.3.2.1 Invalid aspects
To be valid an aspect must be a functional relationship. Arrows may on their face appear to be aspects, but on closer inspection they are not functional (and hence not valid as aspects).
Consider the following two arrows:
A person may have no children or may have more than one child, so the first arrow is invalid: it is not a function. Similarly, if one drew an arrow from each mechanical pencil to each piece of lead it uses, one would not have a function.
Warning 2.3.2.2. The author of an olog has a worldview, some fragment of which is captured in the olog. When person A examines the olog of person B, person A may or may not agree with it. For example, person B may have the following olog
which associates to each marriage a man and a woman. Person A may take the position that some marriages involve two men or two women and thus see B’s olog as wrong. Such disputes are not “problems” with either A’s olog or B’s olog; they are discrepancies between worldviews. Hence, a reader R may see an olog in this book and notice a discrepancy between R’s worldview and my own, but this is not a problem with the olog. Rules are enforced to ensure that an olog is structurally sound, not to ensure that it “correctly reflects reality,” since worldviews can differ.
Consider the aspect . At some point in history, this would have been considered a valid function. Now we know that the same object would have a different weight on the moon than it has on earth. Thus, as worldviews change, we often need to add more information to an olog. Even the validity of is questionable, e.g., if I am considered to be the same object on earth before and after I eat Thanksgiving dinner. However, to build a model we need to choose a level of granularity and try to stay within it, or the whole model would evaporate into the nothingness of truth. Any level of granularity is called a stereotype; e.g., we stereotype objects on earth by saying they each have a weight. A stereotype is a lie, more politely a conceptual simplification, that is convenient for the way we want to do business.
Remark 2.3.2.3. In keeping with Warning 2.3.2.2, the arrows in (2.12*) and (2.13*) may not be wrong but simply reflect that the author has an idiosyncratic worldview or vocabulary. Maybe the author believes that every mechanical pencil uses exactly one piece of lead. If this is so, then is indeed a valid aspect. Similarly, suppose the author meant to say that each person was once a child, or that a person has an inner child. Since every person has one and only one inner child (according to the author), the map is a valid aspect. We cannot fault the olog for its author’s view, but note that we have changed the name of the label to make the intention more explicit.
### 2.3.2.4 Reading aspects and paths as English phrases
Each arrow (aspect) $X\stackrel{f}{\to }Y$ can be read by first reading the label on its source box X, then the label on the arrow f, and finally the label on its target box Y. For example, the arrow
is read “a book has as first author a person.”
Remark 2.3.2.5. Note that the map in (2.14) is a valid aspect, but a similarly benign-looking map would not be valid, because it is not functional. When creating an olog, one must be vigilant about this type of mistake because it is easy to miss, and it can corrupt the olog.
Sometimes the label on an arrow can be shortened or dropped altogether if it is obvious from context (see Section 2.3.3). Here is a common example from the way I write ologs.
Neither arrow is readable by the preceding protocol (e.g., “a pair (x, y), where x and y are integers x an integer” is not an English sentence), and yet it is clear what each map means. For example, given (8, 11) in A, arrow x would yield 8 and arrow y would yield 11. The label x can be thought of as a nickname for the full name “yields as the value of x,” and similarly for y. I do not generally use the full name, so as not to clutter the olog.
One can also read paths through an olog by inserting the word which (or who) after each intermediate box. For example, olog (2.16) has two paths of length 3 (counting arrows in a chain):
The top path is read “a child is a person, who has as parents a pair (w, m), where w is a woman and m is a man, which yields, as the value of w, a woman.” The reader should read and understand the content of the bottom path, which associates to every child a year.
### 2.3.2.6 Converting nonfunctional relationships to aspects
There are many relationships that are not functional, and these cannot be considered aspects. Often the word has indicates a relationship—sometimes it is functional, as in , and sometimes it is not, as in . Clearly, a father may have more than one child. This one is easily fixed by realizing that the arrow should go the other way: there is a function .
What about . Again, a person may own no cars or more than one car, but this time a car can be owned by more than one person too. A quick fix would be to replace it by . This is okay, but the relationship between ⌜a car⌝ and ⌜a set of cars⌝ then becomes an issue to deal with later. There is another way to indicate such nonfunctional relationships. In this case it would look like this:
This setup will ensure that everything is properly organized. In general, relationships can involve more than two types, and in olog form looks like this:
For example,
Exercise 2.3.2.7.
On page 25, the arrow in (2.12*) was indicated as an invalid aspect:
Create a valid olog that captures the parent-child relationship; your olog should still have boxes ⌜a person⌝ and ⌜a child⌝ but may have an additional box.
Rules of good practice 2.3.2.8. An aspect is presented as a labeled arrow pointing from a source box to a target box. The arrow label text should
(i) begin with a verb;
(ii) yield an English sentence, when the source box text followed by the arrow text followed by the target box text is read;
(iii) refer to a functional relationship: each instance of the source type should give rise to a specific instance of the target type;
(iv) constitute a useful description of that functional relationship.
## 2.3.3 Facts
In this section I discuss facts, by which I mean path equivalences in an olog. It is the notion of path equivalences that makes category theory so powerful.
A path in an olog is a head-to-tail sequence of arrows. That is, any path starts at some box B0, then follows an arrow emanating from B0 (moving in the appropriate direction), at which point it lands at another box B1, then follows any arrow emanating from B1, and so on, eventually landing at a box Bn and stopping there. The number of arrows is the length of the path. So a path of length 1 is just an arrow, and a path of length 0 is just a box. We call B0 the source and Bn the target of the path.
Given an olog, its author may want to declare that two paths are equivalent. For example, consider the two paths from A to C in the olog
We know as English speakers that a woman parent is called a mother, so these two paths AC should be equivalent. A mathematical way to say this is that the triangle in olog (2.17) commutes. That is, path equivalences are simply commutative diagrams, as in Section 2.2. In the preceding example we concisely say “a woman parent is equivalent to a mother.” We declare this by defining the diagonal map in (2.17) to be the composition of the horizontal map and the vertical map.
I generally prefer to indicate a commutative diagram by drawing a check mark, ✓, in the region bounded by the two paths, as in olog (2.17). Sometimes, however, one cannot do this unambiguously on the two-dimensional page. In such a case I indicate the commutative diagram (fact) by writing an equation. For example, to say that the diagram
commutes, we could either draw a check mark inside the square or write the equation
${}_{A}\left[f,g\right]\simeq {}_{A}\left[h,i\right]$
above it.6 Either way, it means that starting from A, “doing f, then g” is equivalent to “doing h, then i.”
Here is another example:
Note how this diagram gives us the established terminology for the various ways in which DNA, RNA, and protein are related in this context.
Exercise 2.3.3.1.
Create an olog for human nuclear biological families that includes the concepts of person, man, woman, parent, father, mother, and child. Make sure to label all the arrows and that each arrow indicates a valid aspect in the sense of Section 2.3.2.1. Indicate with check marks (✓) the diagrams that are intended to commute. If the 2-dimensionality of the page prevents a check mark from being unambiguous, indicate the intended commutativity with an equation.
Solution 2.3.3.1.
Note that neither of the two triangles from child to person commute. To say that they did commute would be to say that “a child and its mother are the same person” and that “a child and its father are the same person.”
Example 2.3.3.2 (Noncommuting diagram). In my conception of the world, the following diagram does not commute:
The noncommutativity of diagram (2.18) does not imply that no person lives in the same city as his or her father. Rather it implies that it is not the case that every person lives in the same city as his or her father.
Exercise 2.3.3.3.
Create an olog about a scientific subject, preferably one you think about often. The olog should have at least five boxes, five arrows, and one commutative diagram.
### 2.3.3.4 A formula for writing facts as English
Every fact consists of two paths, say, P and Q, that are to be declared equivalent. The paths P and Q will necessarily have the same source, say, s, and target, say, t, but their lengths may be different, say, m and n respectively.7 We draw these paths as
Every part of an olog (i.e., every box and every arrow) has an associated English phrase, which we write as 〈〈〉〉. Using a dummy variable x, we can convert a fact into English too. The following general formula may be a bit difficult to understand (see Example 2.3.3.5). The fact PQ from (2.19) can be Englished as follows:
Example 2.3.3.5. Consider the olog
To put the fact that diagram (2.21) commutes into English, we first English the two paths: F = “a person has an address which is in a city” and G = “a person lives in a city.” The source of both is s = “a person” and the target of both is t = “a city.” Write:
Given x, a person, consider the following.
We know that x is a person,
who has an address, which is in a city,
that we call P(x).
We also know that x is a person,
who lives in a city
that we call Q(x).
Fact: Whenever x is a person, we will have P(x) = Q(x).
More concisely, one reads olog 2.21 as
A person x has an address, which is in a city, and this is the city x lives in.
Exercise 2.3.3.6.
This olog was taken from Spivak [38].
It says that a landline phone is physically located in the region to which its phone number is assigned. Translate this fact into English using the formula from (2.20).
Exercise 2.3.3.7.
In olog (2.22), suppose that the box ⌜an operational landline phone⌝ is replaced with the box ⌜an operational cell phone⌝. Would the diagram still commute?
### 2.3.3.8 Images
This section discusses a specific kind of fact, generated by any aspect. Recall that every function has an image (2.3), meaning the subset of elements in the codomain that are “hit” by the function. For example, the function f : ℤ → ℤ given by f(x) = 2 * x: ℤ → ℤ has as image the set of all even numbers.
Similarly, the set of mothers arises as the image of the “has as mother” function:
Exercise 2.3.3.9.
For each of the following types, write a function for which it is the image, or write “not clearly useful as an image type.”
a. ⌜a book⌝
b. ⌜a material that has been fabricated by a working process of type T
c. ⌜a bicycle owner⌝
d. ⌜a child⌝
e. ⌜a used book⌝
f. ⌜a primary residence⌝
__________________
1Note that the symbol x′, read “x-prime,” has nothing to do with calculus or derivatives. It is simply notation used to name a symbol that is somehow like x. This suggestion of kinship between x and x′ is meant only as an aid for human cognition, not as part of the mathematics.
2This kind of arrow, ↦, is read “maps to.” A function f : XY means a rule for assigning to each element xX an element f(x) ∈ Y. We say that “x maps to f(x)” and write xf(x).
3The notation HomSet(−, −) will make more sense later, when it is seen in a larger context. See especially Section 5.1.
4Canonical, as used here, means something like “best choice,” a choice that stands out as the only reasonable one.
5Commutative diagrams are precisely defined in Section 6.1.2.
6We defined function composition in Section 2.1.2, but here we are using a different notation. There we used classical order, and our path equivalence would be written gf = ih. As discussed in Remark 2.1.2.11, category theorists and others often prefer the diagrammatic order for writing compositions, which is f; g = h; i. For ologs, we roughly follow the latter because it makes for better English sentences, and for the same reason, we add the source object to the equation, writing A[f, g] ≃ A[h, i].
7If the source equals the target, s = t, then it is possible to have m = 0 or n = 0, and the ideas that follow still make sense.
|
# [cairo] Fwd: Re: [Mesa3d-dev] more glitz points
David Reveman c99drn at cs.umu.se
Wed Apr 21 08:33:38 PDT 2004
On Wed, 2004-04-21 at 09:33 -0700, Jon Smirl wrote:
> --- Brian Paul <brian.paul at tungstengraphics.com> wrote:
> > Date: Wed, 21 Apr 2004 10:35:47 -0600
> > From: Brian Paul <brian.paul at tungstengraphics.com>
> > To: Jon Smirl <jonsmirl at yahoo.com>
> > CC: mesa3d-dev <mesa3d-dev at lists.sourceforge.net>
> > Subject: Re: [Mesa3d-dev] more glitz points
> >
> >
> > A few comments below.
> >
> > -Brian
> >
> >
> > Jon Smirl wrote:
> > > Here's the whole text...
> > >
> > > \subsection{Programmatic Surfaces}
> > >
> >
> > >
> > > \Libname{} allows you to create programmatic surfaces. A
> > > programmatic surface does not contain any actual image data, only
> > > a minimal set of attributes. These attributes describe how to calculate
> > > the color for each fragment of the surface.
> > >
> >
> > >
> > > Not containing any actual image data makes initialization time for
> > > programmatic surfaces very low. Having a low initialization time
> > > makes them ideal for representing solid colors.
> > >
> >
> > >
> > > \Libname{} also support programmatic surfaces that represent linear or
> > > radial transition vector patterns. A linear pattern defines two points,
> > > which form a transition vector. A radial gradient defines a center point
> > > and a radius, which together form a dynamic transition vector around the
> > > center point. The color of each fragment in these programmatic surfaces
> > > is fetched from a color range, using the fragments offset along the
> > > transition vector.
> > >
> >
> > >
> > > A color range is a one dimensional surface. The color range data is
> > > generated by the application and then transfered to graphics hardware
> > > where it can be used with linear and radial patterns. This allows
> > > applications to use linear and radial patterns for a wide range of
> > > shading effects. For example, linear color gradients and Gaussian
> > > By setting the \emph{extend} attribute of a color range to
> > > \emph{pad, repeat} or \emph{reflect}, the application can also control
> > > what should happen when patterns try to fetch color values outside
> > > of the color range.
> > >
> >
> > >
> > > Most programmatic surfaces are implemented using fragment programs and
> > > they will only be available on hardware supporting the fragment program
> > > extension.
> >
> > You certainly don't need fragment programs to do color gradients like
> > those. At worst, you have to just compute per-vertex texcoords to do
> > radial gradients. You can use texgen for texcoords for linear
> > gradients. I've done both in past projects (long before fragment
> > programs existed).
You're probably right, and I'd love if someone would contribute with
some code that could do this. Although I think it might be hard make it
work with glitz's rendering model (the xrender model). It might also be
hard to make it support the different spread types; pad, repeat and
reflect.
> >
> >
> > > Figure ~\ref{fig:linear-gradients} shows the results from using
> > > programmatic surfaces for linear color gradients.
> > >
> >
> > >
> > > \begin{figure}[htbp]
> > > \begin{centering}
> > > \small\itshape
> > > \caption{\small\itshape Programmatic surfaces used for linear
> > > \end{centering}
> > > \end{figure}
> > >
> >
> > >
> > > \subsection{Convolution Filters}
> >
> > It's not clear if the position of the gradients is window-relative or
> > object-relative, BTW.
window-relative.
> >
> >
> > > Convolutions can be used to perform many common image processing
> > operations
> > > including sharpening, blurring, noise reduction, embossing and
> > > edge enhancement.
> > >
> >
> > >
> > > A convolution is a mathematical function that replaces each pixel by a
> > > weighted sum of its neighbors. The matrix defining the neighborhood of
> > > the pixel also specifies the weight assigned to each neighbor. This
> > > matrix is called the convolution kernel.
> > >
> >
> > >
> > > \Libname{} supports user defined convolution kernels. If a
> > > convolution kernel has been set for a surface, the convolution filter
> > > will be applied when the surface is used as a source in a compositing
> > > operation. The original source surface remains unmodified.
> > >
> >
> > >
> > > In \libname{}, convolution filtering is implemented using fragment
> > > programs and is only available on hardware with fragment program
> > support.
> > > The alternative would be to use OpenGL's imaging extension, which would
> > > require a transfer of all pixels through OpenGL's pixel pipeline to an
> > > intermediate texture. Even though the alternative method would be
> > > supported by older hardware, \libname{} uses fragment programs in favor
> > > of performance.
> >
> > Fragment programs would be fine for convolution, but I hope a lack of
> > fragment programs wouldn't totally preclude the feature.
Glitz is design for the latest graphics hardware and fragment programs
seems to be the way to go lately. So I don't see this as a problem.
Cairo can use image buffers as fall back, if glitz report that graphics
hardware lack support for fragment programs. Slow of course, but it
would work.
> >
> >
> >
> > > \subsubsection{Useful Convolution Kernels}
> > >
> >
> > >
> > > Table~\ref{table:conv} presents three useful convolution kernels and
> > > figure ~\ref{fig:conv_original} and ~\ref{fig:conv_gaussian} show the
> > > results of filtering an image using a Gaussian convolution kernel.
> > >
> > > Examples of Guassian, High Pass, Emboss
> > >
> > >
> >
> > >
> > >
> > >
> > >
> > >
> > > --- Brian Paul <brian.paul at tungstengraphics.com> wrote:
> > >
> > >>Jon Smirl wrote:
> > >>
> > >>>Both of these require fragment programs to implement....
> > >>>
> > >>>3.11 Programmatic Surfaces
> > >>>
> > >>>Glitz allows you to create programmatic surfaces. A programmatic surface
> > >>
> > >>does
> > >>
> > >>>not contain any actual image data, only a minimal set of attributes. These
> > >>>attributes describe how to calculate the color for each fragment of the
> > >>
> > >>surface.
> > >>
> > >>If I understand this correctly a programmatic surface might be
> > >>something really simple like:
> > >>
> > >>* solid color fill
> > >>* linear gradient fill
> > >>* repeated pattern (texture) fill
> > >>
> > >>You don't need a fragment program for those.
solid color fill and repeated patterns don't need fragment program
support. I'm sure that we never say that in the paper.
> > >>
> > >>I guess I'd like to see a concrete example of a programmatic surface
> > >>that would require a fragment program.
> > >>
> > >>
> > >>
> > >>>3.12 Convolution Filters
> > >>>
> > >>>Glitz allows you to create programmatic surfaces. Convolutions can be used
> > >>
> > >>to
> > >>
> > >>>perform many common image processing operations including sharpening,
> > >>
> > >>blurring,
> > >>
> > >>>noise reduction, embossing and edge enhancement.
> > >>
> > >>Convolution can be done without fragment programs. Unfortunately, few
> > >>consumer-level cards (any?) hardware-accelerate the convolution
> > >>operations defined by OpenGL. They're typically done in software.
> > >>
> > >>-Brian
> > >>
> > >
> > >
> > >
> > > =====
> > > Jon Smirl
> > > jonsmirl at yahoo.com
> > >
> > >
> > >
> > >
> > > __________________________________
> > > Do you Yahoo!?
> > > Yahoo! Photos: High-quality 4x6 digital prints for 25
> > > http://photos.yahoo.com/ph/print_splash
> > >
> >
> >
>
>
> =====
> Jon Smirl
> jonsmirl at yahoo.com
>
>
>
>
> __________________________________
> Do you Yahoo!?
> Yahoo! Photos: High-quality 4x6 digital prints for 25
> http://photos.yahoo.com/ph/print_splash
>
> _______________________________________________
> cairo mailing list
> cairo at cairographics.org
> http://cairographics.org/cgi-bin/mailman/listinfo/cairo
- David
--
David Reveman <c99drn at cs.umu.se>
|
# Homework Help: A problem in finding the General Solution of a Trigonometric Equation
Tags:
1. May 18, 2017
### Wrichik Basu
1. The problem statement, all variables and given/known data:
Find the general solution of the Trigonometric equation $$\sin {3x}+\sin {x}=\cos {6x}+\cos {4x}$$
Answers given are: $(2n+1)\frac {\pi}{2}$, $(4n+1)\frac {\pi}{14}$ and $(4n-1)\frac {\pi}{6}$.
2. Relevant equations:
Equations that may be used:
3. The attempt at a solution:
The answer from Case 1 is correct, but I can't find my mistake in the answers from the two sub-cases of case 2.
2. May 18, 2017
Your mistake is in the trigonometric identity $cosA-cosB=-2sin((A+B)/2)sin((A-B)/2)$. You didn't divide the terms (A+B and A-B) by 2 in both cases.
3. May 18, 2017
### Wrichik Basu
Got it. Thanks a lot.
4. May 18, 2017
@Wrichik Basu This is an extra detail, but it may interest you that I think the solution $x=(2n+1) \frac{\pi}{2}$ for all integers $n$ is actually all included in the other two solutions. The reason is that $x=(2n+1) \frac{\pi}{2}$ is also always a solution of $cos(5x)=sin(2x)$. (A complete expansion of $cos(5x)$ and $sin(2x)$will generate a $cos(x)$ factor on both sides of the equation.) $\\$ You can write $x= (2k+1) \frac{\pi}{2}=(4n+1) \frac{\pi}{14}$ and if $k$ is odd, for any $k$ you can find an integer $n$. You can also write $x=(2k+1) \frac{\pi}{2}=(4m-1) \frac{\pi}{6}$ and if $k$ is even, for any $k$ you can find an integer $m$. Thereby, the last two solutions completely overlap the $x=(2n+1) \frac{\pi}{2}$ solution. $\\$ Editing... The other two solutions are completely independent of each other=a little algebra shows there is no "x" that is the same in both of them.
|
$$\require{cancel}$$
# 1.2: Scientific Notation and Order of Magnitude
• Boundless
• Boundless
## Scientific Notation
Scientific notation is a way of writing numbers that are too big or too small in a convenient and standard form.
learning objectives
• Convert properly between standard and scientific notation and identify appropriate situations to use it
### Scientific Notation: A Matter of Convenience
Scientific notation is a way of writing numbers that are too big or too small in a convenient and standard form. Scientific notation has a number of useful properties and is commonly used in calculators and by scientists, mathematicians and engineers. In scientific notation all numbers are written in the form of $$\mathrm{a⋅10^b}$$ ($$\mathrm{a}$$ multiplied by ten raised to the power of $$\mathrm{b}$$), where the exponent $$\mathrm{b}$$) is an integer, and the coefficient ($$\mathrm{a}$$ is any real number.
Scientific Notation: There are three parts to writing a number in scientific notation: the coefficient, the base, and the exponent.
Most of the interesting phenomena in our universe are not on the human scale. It would take about 1,000,000,000,000,000,000,000 bacteria to equal the mass of a human body. Thomas Young’s discovery that light was a wave preceded the use of scientific notation, and he was obliged to write that the time required for one vibration of the wave was “$$\frac{1}{500}$$ of a millionth of a millionth of a second”; an inconvenient way of expressing the point. Scientific notation is a less awkward and wordy way to write very large and very small numbers such as these.
### A Simple System
Scientific notation means writing a number in terms of a product of something from 1 to 10 and something else that is a power of ten.
For instance, $$32=3.2⋅10^1$$
$$320=3.2⋅10^2$$
$$3200=3.2⋅10^3$$, and so forth…
Each number is ten times bigger than the previous one. Since $$10^1$$ is ten times smaller than $$10^2$$, it makes sense to use the notation $$10^0$$ to stand for one, the number that is in turn ten times smaller than $$10^1$$. Continuing on, we can write $$10^{−1}$$ to stand for 0.1, the number ten times smaller than $$10^0$$. Negative exponents are used for small numbers:
$$3.2=3.2⋅10^0$$
$$0.32=3.2⋅10^{−1}$$
$$0.032=3.2⋅10^{−2}$$
Scientific notation displayed calculators can take other shortened forms that mean the same thing. For example, $$3.2⋅10^6$$(written notation) is the same as $$\mathrm{3.2E+6}$$ (notation on some calculators) and $$3.2^6$$ (notation on some other calculators).
## Round-off Error
A round-off error is the difference between the calculated approximation of a number and its exact mathematical value.
learning objectives
• Explain the impact round-off errors may have on calculations, and how to reduce this impact
### Round-off Error
A round-off error, also called a rounding error, is the difference between the calculated approximation of a number and its exact mathematical value. Numerical analysis specifically tries to estimate this error when using approximation equations, algorithms, or both, especially when using finitely many digits to represent real numbers. When a sequence of calculations subject to rounding errors is made, errors may accumulate, sometimes dominating the calculation.
Calculations rarely lead to whole numbers. As such, values are expressed in the form of a decimal with infinite digits. The more digits that are used, the more accurate the calculations will be upon completion. Using a slew of digits in multiple calculations, however, is often unfeasible if calculating by hand and can lead to much more human error when keeping track of so many digits. To make calculations much easier, the results are often ’rounded off’ to the nearest few decimal places.
For example, the equation for finding the area of a circle is $$\mathrm{A=πr^2}$$. The number $$π$$(pi) has infinitely many digits, but can be truncated to a rounded representation of as 3.14159265359. However, for the convenience of performing calculations by hand, this number is typically rounded even further, to the nearest two decimal places, giving just 3.14. Though this technically decreases the accuracy of the calculations, the value derived is typically ‘close enough’ for most estimation purposes.
However, when doing a series of calculations, numbers are rounded off at each subsequent step. This leads to an accumulation of errors, and if profound enough, can misrepresent calculated values and lead to miscalculations and mistakes.
The following is an example of round-off error:
$$\sqrt{4.58^2+3.28^2}=\sqrt{21.0+10.8}=5.64$$
Rounding these numbers off to one decimal place or to the nearest whole number would change the answer to 5.7 and 6, respectively. The more rounding off that is done, the more errors are introduced.
## Order of Magnitude Calculations
An order of magnitude is the class of scale of any amount in which each class contains values of a fixed ratio to the class preceding it.
learning objectives
• Choose when it is appropriate to perform an order-of-magnitude calculation
### Orders of Magnitude
An order of magnitude is the class of scale of any amount in which each class contains values of a fixed ratio to the class preceding it. In its most common usage, the amount scaled is 10, and the scale is the exponent applied to this amount (therefore, to be an order of magnitude greater is to be 10 times, or 10 to the power of 1, greater). Such differences in order of magnitude can be measured on the logarithmic scale in “decades,” or factors of ten. It is common among scientists and technologists to say that a parameter whose value is not accurately known or is known only within a range is “on the order of” some value. The order of magnitude of a physical quantity is its magnitude in powers of ten when the physical quantity is expressed in powers of ten with one digit to the left of the decimal.
Orders of magnitude are generally used to make very approximate comparisons and reflect very large differences. If two numbers differ by one order of magnitude, one is about ten times larger than the other. If they differ by two orders of magnitude, they differ by a factor of about 100. Two numbers of the same order of magnitude have roughly the same scale — the larger value is less than ten times the smaller value.
It is important in the field of science that estimates be at least in the right ballpark. In many situations, it is often sufficient for an estimate to be within an order of magnitude of the value in question. Although making order-of-magnitude estimates seems simple and natural to experienced scientists, it may be completely unfamiliar to the less experienced.
Example $$\PageIndex{1}$$:
Some of the mental steps of estimating in orders of magnitude are illustrated in answering the following example question: Roughly what percentage of the price of a tomato comes from the cost of transporting it in a truck?
Guessing the Number of Jelly Beans: Can you guess how many jelly beans are in the jar? If you try to guess directly, you will almost certainly underestimate. The right way to do it is to estimate the linear dimensions and then estimate the volume indirectly.
Incorrect solution: Let’s say the trucker needs to make a profit on the trip. Taking into account her benifits, the cost of gas, and maintenance and payments on the truck, let’s say the total cost is more like 2000. You might guess about 5000 tomatoes would fit in the back of the truck, so the extra cost per tomato is 40 cents. That means the cost of transporting one tomato is comparable to the cost of the tomato itself.
The problem here is that the human brain is not very good at estimating area or volume — it turns out the estimate of 5000 tomatoes fitting in the truck is way off. (This is why people have a hard time in volume-estimation contests, such as the one shown below.) When estimating area or volume, you are much better off estimating linear dimensions and computing the volume from there.
So, here’s a better solution: As before, let’s say the cost of the trip is \$2000. The dimensions of the bin are probably 4m by 2m by 1m, for a volume of $$\mathrm{8 \; m^3}$$. Since our goal is just an order-of-magnitude estimate, let’s round that volume off to the nearest power of ten: $$\mathrm{10 \; m^3}$$ . The shape of a tomato doesn’t follow linear dimensions, but since this is just an estimate, let’s pretend that a tomato is an 0.1m by 0.1m by 0.1m cube, with a volume of $$\mathrm{1⋅10^{−3} \; m^3}$$. We can find the total number of tomatoes by dividing the volume of the bin by the volume of one tomato: $$\mathrm{\frac{10^3 \; m^3}{10^{−3} \; m^3}=10^6}$$ tomatoes. The transportation cost per tomato is $$\mathrm{\frac{\2000}{10^6 \; tomatoes}=\ 0.002}$$ per tomato. That means that transportation really doesn’t contribute very much to the cost of a tomato. Approximating the shape of a tomato as a cube is an example of another general strategy for making order-of-magnitude estimates.
## Key Points
• Scientific notation means writing a number in terms of a product of something from 1 to 10 and something else that is a power of 10.
• In scientific notation all numbers are written in the form of $$\mathrm{a⋅10^b}$$ (a times ten raised to the power of b).
• Each consecutive exponent number is ten times bigger than the previous one; negative exponents are used for small numbers.
• When a sequence of calculations subject to rounding error is made, these errors can accumulate and lead to the misrepresentation of calculated values.
• Increasing the number of digits allowed in a representation reduces the magnitude of possible round-off errors, but may not always be feasible, especially when doing manual calculations.
• The degree to which numbers are rounded off is relative to the purpose of calculations and the actual value.
• Orders of magnitude are generally used to make very approximate comparisons and reflect very large differences.
• In the field of science, it is often sufficient for an estimate to be within an order of magnitude of the value in question.
• When estimating area or volume, you are much better off estimating linear dimensions and computing volume from those linear dimensions.
## Key Terms
• exponent: The power to which a number, symbol or expression is to be raised. For example, the 3 in $$x^3$$.
• Scientific notation: A method of writing, or of displaying real numbers as a decimal number between 1 and 10 followed by an integer power of 10
• approximation: An imprecise solution or result that is adequate for a defined purpose.
• Order of Magnitude: The class of scale or magnitude of any amount, where each class contains values of a fixed ratio (most often 10) to the class preceding it. For example, something that is 2 orders of magnitude larger is 100 times larger; something that is 3 orders of magnitude larger is 1000 times larger; and something that is 6 orders of magnitude larger is one million times larger, because $$10^2 = 100, 10^3 = 1000,$$ and $$\mathrm{10^6 = one \; million}$$
|
## Monday, July 09, 2012 ... /////
### Landscape wars: David Gross vs Brian Greene's PBS program
Next month, my translation of Brian Greene's third major popular book, The Hidden Reality, should be released in the Czech Republic.
It's a good book and I still consider Brian to be a top physics writer but I plan to write down a blog entry summarizing the book, its content and scientific misconceptions, its format, its philosophy, a comparison with The Elegant Universe I translated a decade ago, and the evolution of my feelings about the usefulness of the promotion of advanced science in general.
But before I do so, let me inform you about an interesting 90-minute debate that took place at David Gross' KITP in Santa Barbara two weeks ago. It was about the controversies in science and the right journalists' reactions to them.
The debate was moderated and the video materials were presented by PBS NOVA's Paula Apsell who is also the current KITP science journalist in residence:
Controversy in Science: When Scientists Disagree, What's the Journalist to Do? (HTML, various formats)
Flash video hi-res, Flash video lo-res, 3GP file (40+ MB, video: a few frames per minute)
The right way along which the journalists should approach controversies in science has of course been one of the major issues on this blog throughout the years. The particular topics they have discussed include
Evolution
Climate change
The multiverse
As both my evolutionist and creationist friends know, I am a full-fledged Darwin chap. As an undergrad, I was a pro-Darwinian activist but I am no longer one because I don't think that creationism threatens anything or anyone (or because of my former undergraduate sweetheart, a fanatical evangelist?). In some sense, I feel compassionate towards creationists even though I don't really share their beliefs. The evolution debate is discussed in the first 30 minutes of the video above.
Climate change is of course a major controversy and NOVA's reporting on this part of science and policymaking – and everything that this woman has said about the topic during her talk – has been utterly catastrophic, dishonest, distasteful, and scandalous but I will omit it in this text. There's been no controversy about climate change at the KITP because all people in the room were staunch advocates of the Big Government and that's what actually decides in 95% of the cases on whether or not one decides to think about this topic rationally or not. People who consider the Big Government to be one of the best things in the big picture will simply prefer arbitrarily breathtakingly idiotic group think over the individual rational reasoning and over an arbitrarily rigorous proof that there is no climate problem. About 15 minutes were dedicated to climate change in the discussion about the scientific controversies in the media.
It turned out that the most controversial topic (locally) was something else, namely the multiverse and the anthropic principle. Go to 43:20 and she starts to talk about the multiverse, the landscape, and The Fabric of the Cosmos, the 2011 PBS TV program hosted by Brian Greene. The four episodes were previously discussed on this blog. I always enjoy the style of Brian Greene's programs; the content received mixed TRF reviews.
The first episode on space was mostly correct. The second episode about time has been full of eternalism; over-the-edge claims that no one knows what time is; and nonsensical claims about the non-existence of the logical arrow of time (in the very trailer, Brian directly says "according to the laws of physics, processes with macroscopically decreasing entropy may occur": WTF!? Especially the second law, right?), among other things. The third episode about quantum mechanics was nicely done but it has really failed to explain how quantum mechanics – the framework underlying our world – works. It's probably because Brian Greene intrinsically doesn't believe that the world doesn't obey some laws of a (conceptually) classical theory at the fundamental level.
The fourth episode was about the multiverse and it was the main topic of the 90-minute KITP debate two weeks ago. David Gross had a monologue in which he pretty much repeated everything I wrote about the episode. It was propagandist in character (or "manipulative", using better words of David Gross); it failed to coherently and honestly explain the picture of the critics of the anthropic lack of principles; and it just made lots of incorrect statements such as "logic seems to lead to the multiverse" which were not disputed in any way on the show.
More generally, the four-episode PBS program totally failed to separate uncertain and speculative topics such as the multiverse from established science such as quantum mechanics. (This point was highlighted by a layman at 1:07:00 who said that he believed that quantum mechanics is just as speculative as the multiverse.) I've made those statements many times in the past; it seems that David Gross agrees with me on every single point here. If I were born in April 1973 and not December 1973, I could easily conjecture that the Gross-Wilczek paper on QCD has plagiarized some scribbles of mine, too! ;-)
Just kidding.
The PBS folks recorded David Gross for something like four hours but his explanations finally didn't make any significant impact on the show because Brian apparently controlled the composition and the flow of the program and he's become a nearly complete anthropicist in recent years. (Just for the Czech readers, the word "anthropics" may be translated as "antropičiny" but the correct translation is obviously "antropíčoviny".)
Because I've seen this immense discrimination against the Nobel prize winners, it's plausible that I will try to offer David a platform that is thermonuclear i.e. vastly more powerful than PBS, namely TRF. ;-)
Back to the 90-minute KITP debate. To make things worse, Joe Polchinski who had pledged a decade ago to quit physics if the anthropic reasoning ever becomes dominant has switched to the anthropic reasoning, too. So he was defending the PBS program, although in a seemingly confused and undecided way.
Now, David, make no doubts about it: the same kind of manipulation affects all the programs about the climate change, too. Many of these programs, overseen by very similar stupid and/or dishonest TV/journalistic folks, are presented as research, too.
A difference between the anthropic reasoning and the climate hysteria is that the anthropic reasoning is an unlikely but conceivably correct speculation about the reasons behind some unexplained features of the Universe which hurts no one; the climate hysteria is also a project to liquidate tens of trillions of dollars, the industrial capitalist civilization, and some of the basic human freedoms, too. But some of the mechanisms by which these two types of garbage are being promoted into the "mainstream opinion" are analogous.
Someone during the debate mentioned that the journalists are presenting populist topics and cherry-pick controversies which distorts the picture what is important and what is true. The woman calls it a "reality" that one "has to do so". That's interesting. I use very different words for what she calls "reality": "immorality" and "corruption".
Today, I was sort of expected to participate in the discussion under the Higgs interview with me on technet.idnes.cz, so I did participate. But I was also offered to write a Higgs article for the leading printed classical newspapers, "Lidové noviny". It turned out that it was planned that a big chunk of it would be about apologies for particle physics' existence and nonzero funding. I mentioned that I had to write several paragraphs about the very same thing in my February article on OPERA. I got another reply about "the will of the people" (it's a scientific consensus of the people that the scientific knowledge is worthless), the kind of "reality" comment, with some options. So I replied again that I had another option, namely to shit upon the working class primitives who pompously call themselves people and just to write nothing.
I just happened to get the offer again and I could reduce the space given by the humiliating "apologies for physics' existence" to one sentence – in the last sentence, I sent a message to the foes of funding of physics saying that Austria wanted to leave CERN in 2009 but many people realized how humiliating it would have been for Austria to become a physics peer of Albania so the plans for the CERN exit were scrapped in 2010. That's the only explanation of the purpose of physics funding for those who don't care about the truth in physics: people not paying attention to the laws of Nature are primitives and it's embarrassing to be a primitive. There's no other explanation for the funding of pure research they could understand. (Even if Austria left, it would make no significant different at CERN, of course. In similar smaller countries and environments, it's important to realize that the public pompously claiming it's very important is actually very unimportant and only decides about irrelevant fractions of the resources.)
Well, I understand that the newspapers may depend on the copies sold to lots of idiots who consider science to be junk and just don't want or can't read any high-brow articles in the newspapers, either. But that's their choice and it's not my problem. Prostitutes depend on some men's desire to have sex with paid women, too. It's exactly the same thing. The fact that some people depend on various things doesn't make their immoral behavior less immoral. They may always choose to do something else if what they're doing is forcing them to behave unethically. If they don't choose a less immoral job and continue in their immoral practices, then they're immoral people. It's as simple as that.
#### snail feedback (45) :
If someone asks "Why is our planet habitable?" then the anthropic principle cannot give any reasonable answers imo, but if someone asks "Why is it that we live on a habitable planet out of all the planets out there? (It surely must be because of God :P)" then I think anthropic reasoning can be quite useful ;)
Maybe Joe Polchinsky HAS quit physics by switching to the anthropic reasoning ... :-P ;-) ?
(Yeah I know that I`m provocatively overstating, he is still cool, but I could not hold it back since I dont like the anthropic reasoning too much ...)
I exactly agree that (particle) physics needs absolutely no excuse for its existence, people who claim something else always drive me up the wall ...:-(0)! It is a very sad thing that such trolls obviously hold the majority in the US governement since quite a long time now. Observing the US quit (fundamental) physics lets them look not so good ...
A guest post from David Gross (or somthing) would be very cool :-)
And to the promised TRF article discussing Brian Green's "Hidden Reality" I look forward too :-)
I enjoyed when, in the article, I was reminded of the power of persuasion that can be wielded by method of well timed/targeted shaming. ;>>
The only aspect (an utterly minor one) of how you wrote this article that I did not like - and believe was caused by too hasty (superlumonal) thinking/writing - was that you described anthropic reasoning as "unlikely".
"Unlikely" should not (IMHO) have been included when you made that otherwise very good point.
The problem with debates is that journalists and toxic pests often take what is said out of context and use it to bolster arguments that the quoted scientist does not support at all. This happens all the time with Creationists who latch onto any difference of opinion say, between Stephen J Gould and Richard Dawkins (punctuated equilibrium and gradualism) to say that evolution is incorrect when both scientists strongly believe in evolution. David Gross faced the same treatment in 2005 when, at the Solvay conference he made some comments about the current state of string theory.
Promptly, Lubos' bete noire and shameless prevaricator
PW latched onto David's musings to say that he was repudiating string theory when he is actually a strong supporter with probing questions about it.
Dilaton---I agree with you about the Anthropic Principle---the weak form is simply obvious and the strong form seems just wrong and abhorrent.
Thanks for your feedback, Peter. "Unlikely" is a word that has nothing to do with the main substance of the sentence, indeed. It's also an oversimplified shortcut of my opinions. What I really think is that some ways of thinking often included under the anthropic umbrella are demonstrably wrong; and all of them are unlikely to yield useful results or explanations. Otherwise whether they're unlikely is disputable.
But my main point was about the existence and methods of demagogy so |I should have separated these technical opinions, indeed. Tx.
Dear Gordon, right! I was just watching the complete 30 minutes of the 90-minute debate at KITP, with pieces of a show about the ID and it contained some good pieces.
There was a trial and an ID advocate talked about irreducible complexity; he also referred to a biologist who said that some bacterial motor is irreducibly complex, i.e. useless until all parts are assembled, which proves that natural selection couldn't have created it. That scientist did say that it does look like it was man-made ... but he knew it wasn't. In fact, it wasn't irreducibly complex because the part of the motor without the rotor has been found in other bacteria as a gadget to inject poison - and the cited scientist himself knew that very well.
Now, it's typical for journalists - and "scientists/activists" - to use someone's arguments in slightly wrong ways (and incomplete ways, stripped of some essential insights) that do not imply what they want to prove. Of course, Columbia's Shmoit is a notorious example of this hardcore demagogy, one who is well over the edge.
does not the overlap between number theory and quantum theory disprove the anthropic principle?
http://seedmagazine.com/content/article/prime_numbers_get_hitched/
correct me if i am wrong:
-anthropic principle is the theory of evolution applied to physics
-anthropic: the structure of atom is what it is otherwise no emergent life would exist to exam it
-number theory: pure axiomatic language of math eerily
tells us something about the structure of atom
unless prime numbers are also subjected to anthropic principle, why would this overlap exist
Lubos, I know this is off-topic, but you're the only Czechian I know. In the 2012 Intl Mathematical Olympiad (which is going on right now), the Czech team has a chinese kid on it. What gives ?
Funny. I will investigate. Don't forget that Czechs and Chinese are closely related, unlike the Poles. ;-) It's a český-čínský tým.
I am reading he is actually the best young mathematician in Czechia these days, also a good chemist and other things. If it calms you down, classmates call him Tonda (Czech for "Tony"). ;-) Look at
It is NOT in any sane sense "evolution applied to physics".
You're certainly right about the 15 IQ point advantage of Asians that allows them to rise to the top even in countries in which they are minorities. However I see this same phenomenon for Indians in countries where they are a minority and yet, Indians only have an average IQ of 85. For example, this year the teams from Uganda and South Africa both have indians.
It's called "outsourcing" :)
Cze Wronski, I am convinced that those things in India are fully explained by India's wider IQ distribution
http://isteve.blogspot.cz/2011/12/pisa-scores-2-indian-states-flop.html
The problem I have with M theory is that it is one of the last crutches of religion. If M theory is correct then there are millions or more universes with all types of slighty different Gods. The only thing we can be sure of so far is that our universe is not one of them. Thank probability amplitudes.
gods can type poetry as good as monkeys can't they?
It's alcohol not pills? ;-)
Should I frame my comment into a question about the implications of an infinite multi universe theory?
Nah, its the cultural work ethic. Asian students on average imo work/study many more hours than N Americans and the consequences of them not succeeding are much more severe for them and their familites.
Dear Gordon, I am confident that your comment can't apply to folks like Dung Le Anh in Czechia. They don't face any problems or serious risks here.
Anthropic principle should have no place in science, because it is no science, it is an empty tautology at best, and a backdoor for religion into science at worst. And I disagree that creationism is harmless to science. I watched some discussions on youtube between proponents of creationism and defenders of evolution and it was almost painful to watch. It is hardly anything more than a medieval obscurantism in a modern cloak.
And I should add that I am obviously not an atheist. I am just able to separate strict science based on sound evidence from philosophical speculation
I find it hard to believe that Asians have on average 15 higher IQ scores. Do you know of any original research article that examined it and where I could check the methodology?
For example, in the West, every single child goes to school. In China and similar countries, some children might not attend the school and so if you compare the results, it is not really an equal situation. I spent half a year in China and honestly found a lot of people there pretty stupid.
Creationism is close enough to cretinism. There is only an "a" and an "o" missing.
It is not the risks, it is the cultural values, loss of face, Asian tiger moms etc.
http://www.uni-mainz.de/eng/15491.php
Europe has a history with deciding people they don't like are less than human. So does America, for that matter, and Africa, and Asia.
I can share the planet with Creationists.
The irony of David Gross' position is that he was complaining in that talk to the lady that he felt the program didn't hear from his side enough so they looked silly. But it was him who was explicitly calling the other side silly by saying they believe in "angels and fairies". C'mon David. If looking silly is your main concern (which he stated it was) then don't be a hypocrite and call the other side silly angel believers.
At least Brian Greene NEVER characterized his opponents as such.
Personally, I think the multiverse arguments of Greene are rather weak. But it is a logical possibility, and has arisen simply from our best attempts to explain the world. Although they might not prove to be right. Anyhow, they should definitely not be equated with theology and angel worshipping. It is ridiculous for David Gross to claim that - how childish.
Stephen Wolfram's mom
Lubos, on a related issue:
there do exists ARGUMENTS (not proofs) that there is a multiverse, coming from considering eternal inflation, string theory, fine tuning, etc. Although these arguments are not water-tight, they do have some basis.
On the other hand, Lubos, do you know of any arguments that there is only ONE universe?
Thanks.
"And I should add that I am obviously not an atheist....." Oh, and up to that point you were actually making sense.
Whether you can or cannot you WILL BE sharing it.
Yup, it seems like an interesting reconstruction; I've seen it at Watts' blog. If true, this could very well suggest that the decline of the Roman Empire was at least helped by the decline of the temperatures, although I am not too happy about similar explanations.
Sorry, Ghandi, but this is just anthropic propaganda. There's no truly logically working argument of this sort and one just can't replace a working argument by several dysfunctional ones.
Eternal inflation is only relevant if inflation is eternal. Your belief that inflation is eternal is just a belief and the fact that one may combine the words to "eternal inflation" is just a circular argument in favor of such things.
String theory only implies a landscape, a complicated configuration space of "possibilities" (empty universes allowed by the theory's equations), but that is something different than the multiverse: string theory doesn't imply that all/most_of these "possibilities" are realized.
Fine-tuning, much like creationists' irreducible complexity, is a fair working hypothesis that *would* support anthropic ideas if it were found. But all possible cases of fine-tuning in the past were found to be non-existent - very much like in the case of the irreducible complexity - and the hints that haven't been solved by a well-established, proven solution yet indicate that they will be solved. Hierarchies involving the Higgs may still be solved in agreement with the scientific logic by new physics around the TeV scale; the cosmological constant may be tiny because of SUSY cancellations perhaps accompanied by some extra cancellations from some other mechanism. The small theta-angle in QCD is a counter-argument against the anthropic fine-tuning because the smallness apparently isn't advantageous for life, so even without a proof of axions - which make it naturally small - we know that the anthropic thinking doesn't look solid.
Concerning your question whether there is evidence that there exists just one Universe, yes, there is and it is based on much more solid a part of research in modern quantum gravity. Holographic principle implies that the degrees of freedom behind the horizon aren't independent from those that are inside - the complementarity principle. When applied to violent cosmologies, it means that one isn't allowed to think about the volumes behind the cosmic horizon as about independent regions of space. They're just a different, scrambled way to choose observables acting on the same Hilbert space. This fact probably makes it invalid to think about the existence of "us" and "them" at the same moment (it would be a violation of the uncertainty principle) *even if* the existence of these large volumes behind the horizon as well as the domain walls from quantum tunneling were implied by classical physics which they're probably not.
I also think that the belief in infinitely many Brian Greenes in the Universe that are relevant for doing physics *is* fully analogous to the belief in angels and it is very stupid. In my opinion, this point isn't hard to be seen but some people may need some help to see it. What David Gross complained about was that they didn't allow the actual explanation why the belief in the anthropic fantasies is analogous to the belief in angels to be coherently articulated on the show. So while your criticism seems to be directed against Gross, you are actually strengthening this point that the program was manipulative. It was propaganda.
No-one ever claimed that the smallest of theta was due to anthropics, so this seems irrelevant. Your argumenta that the smallest of the cosmological constant is due to cancellations is not supported by any known calculations.
Anyhow, can you explain more as to how holography favors a single universe? Why does Susskind promote holography as well as the multiverse?
Lubos, I think creationism by itself would bring no threat/contradiction to science when the Creator is considered to be the responsible for picking up a specific "minima" in a degenerate landscape of physical laws. Analogously, what physical (and not metaphysical or philosofical) principle could a physicist invoke from ST to let universe X instead of Y to be the one realized by Nature from the landscape ? Is there some kind of constructive interference of possibilities working synergically here?
I am the Devil
It doesn't matter whether someone claimed that theta was due to anthropics - which would be wrong.
What matters is that we have a fundamental parameter of Nature which is "unnaturally" small, isn't explained by any well-established or proven physics processes, but we still know that the anthropic ideology isn't a solution to this problem. This is an example showing that we must always admit that any hierarchy or apparent fine-tuning may have a natural, non-anthropic explanation even if we don't know what it is.
It is not quite true that my comments about C.C. cancellations are not supported. SUSY and no-scale supergravity in particular solves 1/2 of the cosmological constant hierarchy problem as it reduces the unnatural gap from 120+ orders to just about 60 orders.
Susskind promotes the anthropic misconceptions because he randomly made the guess and wants to promote this meme to remain influential. Moreover, as explicitly explained in his popular book, he believes that spreading the anthropic opinion as an ideology is a good way to combat traditional religion which he considers a good trend.
At any rate, there are no scientific arguments supporting the viewpoint that the counting of observers is relevant for understanding of the properties of Nature. And that's what matters in science.
Instead of the theta angle, which seems irrelevant, if might have been interesting if you had an historical example of the following kind: there was some phenomenon that appeared to only have an anthropic explanation, but was later found to be dynamical.
Of course there is still the 60 orders of magnitude left for CC, which I was referring to.
So you see holography and multiverse to be 100% incompatible, or is there some way of fitting them together. Seems odd to me that Susskind would be so contradictory.
It is important to know that: the Pope is not a creationist. For the catholics the creation of the Universe by God is not in opposition with the Evolution theory. Christiantiy is opened to new discoveries in Science and have no problem including them to match their belief. It's just way more flexible and intelligent than this stupid creationism movement. Voilà.
Science is something else than funny stories from the history of science so despite your fundamental delusions, the importance of what I write below is exactly zero. Nevertheless, of course that I can give you such an example.
http://www.csicop.org/sb/show/is_carbon_production_in_stars_fine-tuned_for_life/
In 1953, Fred Hoyle argued that the existence of life required the existence of an excited state of carbon-12 at approximately 7.7 MeV. This was soon confirmed experimentally, a great victory for the anthropic reasoning. Except that we know that the energy of the excited level isn't a fundamental parameter but it is calculable from QCD which has no dimensionless parameters related to pure QCD and just a few bare mass parameters in non-pure QCD.
Moreover, as the article above explains in some details, we no longer believe that there's any real fine-tuning over there because the tolerance of star life to deviations of the energy of this level is pretty much of the same order as the distances between various energy levels of the carbon nucleus.
Of course there is still the 60 orders of magnitude left for CC, which I was referring to.
Fine, we don't have the final answer to everything in Nature. That doesn't mean that we have to accept the first pseudoscientific belief system that claims to explain all the things we can't explain yet.
I don't consider the contradiction between the multiverse and holography to be waterproof but it's strong enough for me not to spend time with thinking about any details of multiverse-statistics-based reasoning (and I have many other reasons).
Our attitudes to science are utterly incompatible. Note that every your comment is about appeals to authorities, sociological stories about the history of science, and so on. You're simply not thinking about Nature in the scientific way. At least if you were able and willing to fix your ideas about the history...
I won't be if I "cannot." ;)
Susskind promotes the anthropic misconceptions because he randomly made the guess and wants to promote this meme to remain influential. Moreover, as explicitly explained in his popular book, he believes that spreading the anthropic opinion as an ideology is a good way to combat traditional religion which he considers a good trend."
As if he knew better about whether "traditional religion" by which he means the Judeo-Christian tradition -- the Hebraic conception of God as a just being who supports equity in all human relationships -- was good for society. This is a sociological issue as much as anything else. Look at China for an alternative, where the rule of law is a foreign concept. As are the rights of the individual, the public welfare, and a lot of other things that grow (and grew) out of the West's "traditional relgion."
This is not about science, but it is not about nothing either. God talk should not be condemned.
|
# Comparison of different topologies on the ordered square $[0,1]\times[0,1]$
This is textbook problem in Munkres' Topology. I know there are plenty of solutions online, but none of them are exactly the same as mine. I would really appreciate if anyone can review my solution to this problem and point out any mistakes and stuff that can be improved.
Problem: Let$I=[0,1]$. Compare the product topology on $I\times I$, the dictionary order topology on $I\times I$, and the topology $I\times I$ inherits as a subspace of $\R\times \R$ in the dictionary order topology.
Attempt: Let $\T_{1}$ denote the product topology on $I\times I$, $\T_{2}$ the dictionary topology on $I\times I$, $\T_{3}$ the topology inherits as a subspace of $\R\times\R$ in the dictionary topology and let $\mathcal{B}_{1},\mathcal{B}_{2}$ and $\mathcal{B}_{3}$ denote the corresponding set of basis. Note that for an arbitrary element $B\in\mathcal{B}_{2}$, we have \begin{align} B & =\left\langle \left(a,b\right),\left(c,d\right)\right\rangle \\ & =\left\{ a\right\} \times(b,1]\cup\left(a,c\right)\times[0,1]\cup c\times[0,d)\\ & =\left(\left\{ a\right\} \times\left(b,2\right)\cap I\times I\right)\cup\left(\left(a,c\right)\times\left(-1,2\right)\cap I\times I\right)\cup\left(\left\{ c\right\} \times\left(-1,d\right)\cap I\times I\right)\\ & =\left\{ \left[\left\{ a\right\} \times\left(b,2\right)\right]\cup\left[\left(a,c\right)\cup\left(-1,2\right)\right]\cup\left[\left\{ c\right\} \times\left(-1,d\right)\right]\right\} \cap\left(I\times I\right) \end{align} and notice that $\left[\left\{ a\right\} \times\left(b,2\right)\right]\cup\left[\left(a,c\right)\cup\left(-1,2\right)\right]\cup\left[\left\{ c\right\} \times\left(-1,d\right)\right]$ is open in the dictionary order topology of $\R\times\R$. Therefore, we have $B\in\mathcal{B}_{3}$, which implies $\mathcal{T}_{2}\subset\mathcal{T}_{3}.$ Furthermore, we claim that $\mathcal{T}_{3}$ in fact strictly finer than $\T_{2}$. Let $A=\left\{ 1/2\right\} \times(1/2,1].$ Since we can rewrite $A=\left[\left\{ 1/2\right\} \times\left(1/2,3/2\right)\right]\cap\left(I\times I\right)$, $A\in\T_{3}.$ However, suppose $A\in\mathcal{T}_{2};$ then $\exists\left\{ B_{i}\right\} _{i\in I}\in\mathcal{B}_{2}$ such that $A=\cup_{i\in I}B_{i}$. Since $\left(1/2,1\right)\in A$, $\exists B_{i}=\left\langle \left(a_{i},b_{i}\right),\left(c_{i},d_{i}\right)\right\rangle$ such that $\left(1/2,1\right)\in B_{i}$. Note that we have $a_{i}=c_{i}=1/2$ since otherwise we would have $\cup_{i}B_{i}\neq A$. But $\left(1/2,1\right)\in B_{i}$ implies $\left(1/2,a_{i}\right)\prec\left(1/2,1\right)\prec\left(1/2,d_{i}\right)$, which in turn implies $1<d\leq1$, a contradiction. Therefore, $A\notin\T_{2}.$
Next, we claim that $\mathcal{T}_{1}\subset\T_{3}.$ Indeed, by Problem 17 (which says the $\R_d\times\R$ is equal to $\R\times\R$ in dictionary order), we have $\T_{3}=\left(\R_{d}\times\R\right)\cap I\times I\supset\left(\R\times\R\right)\cap I\times I=\T_{1}.$ We also have that $\T_{1}$ and $\T_{2}$ are incomparable. To see this, we first note that $\left\{ 1/2\right\} \times\left(1/3,2/3\right)=\left\langle \left(1/2,1/3\right),\left(1/2,2/3\right)\right\rangle \in\T_{3}$. However, any open set in $\T_{2}$ containing $\left\{ 1/2,1/2\right\}$ will contain $\left\{ 1/2+\varepsilon,1/2\right\}$ for some $\varepsilon>0$. Thus $\T_{2}\nsubseteq\T_{1}$. For this other direction, we have $I\times(0,1]=\left(-1\times2\right)\times(0,2)\cap I\times I$ which is in $\T_{1}$ but not in $\T_{2}.$ Indeed, suppose $I\times(0,1]\in\T_{2},$ then $(0,1)\in\mathcal{B}_{i}$ for some $B_{i}=\left\langle (a_{i},b_{i}),(c_{i},d_{i})\right\rangle \in\mathcal{B}_{2},$ which implies $\left(0,1\right)\prec\left(c_{i},d_{i}\right)$, which indicates $c_{i}>0$. Then we have $\left\langle \left(0,1\right),\left(c_{i},d_{i}\right)\right\rangle =\left[\left(0,c_{i}\right)\times\left[0,1\right]\right]\cup\left[\left\{ c_{i}\right\} \times[0,d_{i})\right]$ containing $\left(c_{i},0\right)$ which is not in $I\times(0,1].$ Hence, $\T_{2}$ and $\T_{1}$ are incomparable.
• I would suggest that you add figures to help to visualize the relationship among these squares and their subsets. For example $I\times I$ is a square. $I\times I$ with point $P(t)=(0,t)$ glue to point $Q(t)=(1,1-t)$ $(0\leq t\leq 1)$ is a Mobius strip, etc. Thus people can understand your problem and solution faster. – mike Sep 11 '16 at 3:31
• Another way to show that $\tau_2 \not \subset \tau_1$ is that $\tau_1$ has a countable base, which it inherits from a countable base for the usual topology on $\mathbb R^2.$ So if $F$ is a family of pairwise disjoint $\tau_1$-open sets then $F$ is countable. But $\{ \; \{x\} \times (1/3,2/3) :x\in [0,1]\}$ is an uncountable pairwise disjoint $\tau_2$-open family, – DanielWainfleet Jan 13 '17 at 3:31
|
## Condensed Matter Theory
Held in SCI 328.
###### August 1st, 2017 to July 31st, 2018. Previous/next year.
Aug08 Garry Goldstein University of Cambridge Pseudoparticle approach to strongly correlated systems Host: Claudio Rebbi Aug10 Marcus Muller Paul Scherer Institute, Villigen Switzerland The many body localization transition: A dynamical phase transition triggered by a quantum avalanche Host: Claudio Rebbi Aug11 Riccardo Zecchina Out-of-equilibrium states and quantum annealing speed up in non-convex optimization and learning problems Aug16 Sho Sugiura University of Tokyo Entanglement entropy of finite-temperature pure quantum states and thermodynamic entropy as a Noether invariant Host: Anatoli Polkovnikov Sep13 Michiel Wouters University of Antwerp Correlations in driven-dissipative quantum systems Host: Anatoli Polkovnikov, Dries Sels Sep15 Stefanos Kourtis Boston University, Physics Department "Resonant x-ray spectroscopy of topological semimetals" Sep22 Christopher Baldwin Boston University, Physics Department "Interfering directed paths & the sign phase transition" Sep25 Cenke Xu University of California - Santa Barbara "Lieb-Schultz-Mattis Theorem and its generalizations from the Perspective of the Symmetry Protected Topological Phase" Host: Anders Sandvik Sep27 Zhicheng Yang Boston University, Physics Department "Iterative Compression-Decimation Scheme for Tensor Network Optimization" Oct04 Walter Hahn Skolkovo Institute of Science and Technology, Moscow "Stability of quantum statistical ensembles with respect to local measurements" Host: Anatoli Polkovnikov Oct06 Kartiek Agrawal Princeton University "Diabatic approach to creating ground states of certain critical models" Host: Anatoli Polkovnikov Oct13 Roman Mankowsky Paul Scherrer Institute "Coherent optical control and nonlinear probing of strongly correlated materials" Host: Wanzheng Hu Oct18 Louis Pecora United States Naval Research Laboratory "Cluster Synchronization of Chaotic Systems in Large Networks" Host: David Campbell Oct20 Tamiro Villazon Boston University, Physics Department "Kibble-Zurek Dynamics of a Driven Dissipative System: Universality Beyond the Markovian Regime" Oct24 Ivar Mar Argonne National Lab "Exciting dynamics in multiple time dimensions" Oct25 Hui Shao Beijing Computational Science Research Center "Nearly deconfined spinon excitations in the square-lattice spin-1/2 Heisenberg antiferromagnet" Host: Anders Sandvik Oct27 Kin Chung Fong Raytheon BBN Technologies and Harvard University "Putting hydrodynamic into solid state" Host: Scott Bunch Nov06 Masaki Oshikawa University of Tokyo "Polarization, Large Gauge Invariance, and Quantum Hall Effect on Lattice" Host: Anders Sandvik Nov08 Zhen Bi Pappalardo Fellow, Massachusetts Institute of Technology "Instabilities of the Sachdev-Ye-Kitaev Model" Host: Zhicheng Yang Nov17 Carlos Bolech University of Cincinnati "A Bosonization Puzzle in Non-linear Transport" Dec01 Alexandre Day Boston University, Department of Physics "Learning and optimizing quantum control landscapes" Dec04 Ganpathy Murthy University of Kentucky "Quantum Hall Ferromagnetism in $\nu=0$ Bilayer Graphene" Host: Anushya Chandran Dec08 Jonathan Wurtz Boston University, Department of Physics "The Cluster Truncated Wigner Approximation in Strongly Interacting Quantum Systems" Dec15 Xi Dai Hong Kong University of Science and Technology "Properties of Topological semimetals: Chiral magnetic effect and instability under magnetic field" Host: Wanzheng Hu Dec20 Mikhail Zvonarev LPTMS, CNRS, Orsay, France "Propagation of an impurity through a quantum medium"
## News Archive:
Latest news.View all news from the year: 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005.
|
# Why did my friend lose all his money?
Not sure if this is a question for math.se or stats.se, but here we go:
Our MUD (Multi-User-Dungeon, a sort of textbased world of warcraft) has a casino where players can play a simple roulette.
My friend has devised this algorithm, which he himself calls genius:
• Bet 1 gold
• If you win, bet 1 gold again
• If you lose, bet double what you bet before. Continue doubling until you win.
He claimed you will always win exactly 1 gold using this system, since even if you lose say 8 times, you lost 1+2+4+8+16+32+64+128 gold, but then won 256 gold, which still makes you win 1 gold.
He programmed this algorithm in his favorite MUD client, let it run for the night. When he woke up the morning, he was broke.
Why did he lose? What is the fault in his reasoning?
-
The last clause should read "continue doubling until you win or have no money". – Chris Eagle Aug 28 '12 at 8:31
It's called a martingale betting strategy and it works well if you have infinite wealth which was probably not the case... – johnny Aug 28 '12 at 8:33
To make the claim "you will always win exactly 1 gold" approximately true (almost certainly, and assuming no limit on funds and height of bets), your second clause should be "if you win, stop" – Marc van Leeuwen Aug 28 '12 at 9:49
Assume you keep doing this, say, 30 times in a row. Where do you take 2^30 money from? Heck, I'm not partiucularly poor, but I don't usually have that much in my pocket... – Damon Aug 28 '12 at 13:30
It's a shame that we're at a place where a MUD has to be described as a "text-only World of Warcraft" – Gareth Aug 28 '12 at 19:25
Suppose, for simplicity, that the probability of winning one round of this game is $\frac{1}{2}$, and the probability of losing is also $\frac{1}{2}$. (Roulette in real life is not such a game, unfortunately.) Let $X_0$ be the initial wealth of the player, and write $X_t$ for the wealth of the player at time $t$. Assuming that the outcome of each round of the game is independent and identically distributed, $(X_0, X_1, X_2, \ldots)$ forms what is known as a martingale in probability theory. Indeed, using the bet-doubling strategy outlined, at any time $t$, the expected wealth of the player at time $t + 1$ is $$\mathbb{E} \left[ X_{t+1} \middle| X_0, X_1, \ldots, X_t \right] = X_t$$ because the player wins or loses an equal amount with probability $\frac{1}{2}$ in each case, and $$\mathbb{E} \left[ \left| X_t \right| \right] < \infty$$ because there are only finitely many different outcomes at each stage.
Now, let $T$ be the first time the player either wins or goes bankrupt. This is a random variable depending on the complete history of the game, but we can say a few things about it. For instance, $$X_T = \begin{cases} 0 & \text{ if the player goes bankrupt before winning once} \\ X_0 + 1 & \text{ if the player wins at least once} \end{cases}$$ so by linearity of expectation, $$\mathbb{E} \left[ X_T \right] = (X_0 + 1) \mathbb{P} \left[ \text{the player wins at least once} \right]$$ and therefore we may compute the probability of winning as follows: $$\mathbb{P} \left[ \text{the player wins at least once} \right] = \frac{\mathbb{E} \left[ X_T \right]}{X_0 + 1}$$ But how do we compute $\mathbb{E} \left[ X_T \right]$? For this, we need to know that $T$ is almost surely finite. This is clear by case analysis: if the player wins at least once, then $T$ is finite; but the player cannot have an infinite losing streak before going bankrupt either. Thus we may apply the optional stopping theorem to conclude: $$\mathbb{E} \left[ X_T \right] = X_0$$ $$\mathbb{P} \left[ \text{the player wins at least once} \right] = \frac{X_0}{X_0 + 1}$$ In other words, the probability of this betting strategy turning a profit is positively correlated with the amount $X_0$ of starting capital – no surprises there!
Now let's do this repeatedly. The remarkable thing is that we get another martingale! Indeed, if $Y_n$ is the player's wealth after playing $n$ series of this game, then $$\mathbb{E} \left[ Y_{n+1} \middle| Y_0, Y_1, \ldots, Y_n \right] = 0 \cdot \frac{1}{Y_n + 1} + (Y_n + 1) \cdot \frac{Y_n}{Y_n + 1} = Y_n$$ by linearity of expectation, and obviously $$\mathbb{E} \left[ \left| Y_n \right| \right] \le Y_0 + n < \infty$$ because $Y_n$ is either $0$ or $Y_{n-1} + 1$.
Let $T_k$ be the first time the player either earns a profit of $k$ or goes bankrupt. So, $$Y_{T_k} = \begin{cases} 0 && \text{ if the player goes bankrupt} \\ Y_0 + k && \text{ if the player earns a profit of } k \end{cases}$$ and again we can apply the same analysis to determine that $$\mathbb{P} \left[ \text{the player earns a profit of k before going bankrupt } \right] = \frac{Y_0}{Y_0 + k}$$ which is not too surprising – if the player is greedy and wants to earn a larger profit, then the player has to play more series of games, thereby increasing his chances of going bankrupt.
But what we really want to compute is the probability of going bankrupt at all. I claim this happens with probability $1$. Indeed, if the player loses even once, then he is already bankrupt, so the only way the player could avoid going bankrupt is if he has an infinite winning streak; the probability of this happening is $$\frac{Y_0}{Y_0 + 1} \cdot \frac{Y_0 + 1}{Y_0 + 2} \cdot \frac{Y_0 + 2}{Y_0 + 3} \cdot \cdots = \lim_{n \to \infty} \frac{Y_0}{Y_0 + n} = 0$$ as claimed. So this strategy almost surely leads to ruin.
-
A link to martingale (betting system) is probably also in order. – Henning Makholm Aug 28 '12 at 11:40
@HenningMakholm Oh wow, this actually is a thing? Thanks! – Konerak Aug 28 '12 at 12:18
Yes This is called the gambler's ruin problem. So even for a fair game if you play long enough you will lose all your money. If the game favors the house you will lose faster. See Feller's probability theory books for a more detailed account of the problem. – Michael Chernick Aug 28 '12 at 12:25
It's really amazing that the probability of infinite profit, before bankrupt, is zero. Indeed, Except controlling your thirst of winning a big money, there's no additional way to maximize your profit ! – Fardad Pouran Jun 18 '14 at 10:53
This betting strategy is very smart if you have access to infinite wealth or can go into infinite debt. In reality however, you will eventually lose all or most of your money.
Say your friend had $k$ gold at the beginning. I assume that this simple roulette has a probability of both win and loss equal to $0.5$.
First, let's see how many times you need to lose in a row in order to lose all your wealth.
\begin{align} 1 + 2 + 2^2 + 2^3 + ... + 2^n &\geq k \\ 2^{n+1} - 1 &\geq k \\ n &\geq \log_{2}(k+1) - 1 \end{align}
So even if you start with $10000$ gold, after $13$ lost bets you are broke and in debt. Continuing this example, the probability of this happening in a one shot-game is a mere $0.02$%. However, if you keep the algorithm running all night for $8$ hours betting every $5$ seconds, your chances of having a losing streak of $13$ in a row go up to $29.61$%.
Assuming that you cannot go into debt and $12$ losses is the most you can handle, then with the same data the chance of losing most of your money goes up to $50.5$%.
-
I think I get this! So, the probability of losing 13 times is low (1/2^13), but since the other (1-1/2^13) times where he won, he only won 1 gold, and the one time he lost, he lost all his gold, the expected value is low? – Konerak Aug 28 '12 at 9:15
Not exactly. If you try this strategy right now only once until first win or all money lost, the probability that you lose is $\frac{1}{2^{13}}$. However if you start an algorithm, all that matters is the probability of losing $13$ times in a row while your algorithm is running and the probability of this happening is much higher. – johnny Aug 28 '12 at 9:23
Somehow you have to factor in the effect that if you do manage to win $2^{13}$ times without going bankrupt, then your tolerance for runs of bad luck goes up to fourteen. I don't think that it changes the verdict, but it would be nice to see how to take that into account. – Jyrki Lahtonen Aug 28 '12 at 9:24
That is a fair point! I'm glad @ZhenLin worked out the formal details for that. – johnny Aug 28 '12 at 9:40
Repeated bets in roulette is a random walk on the wealthy variable. Since there is a dead end, when you can't continue the walk (when you're broke), the probability of loosing everything is greater than zero.
-
I suppose he was betting on a red/black or even/odd system. Now, there is a "0" which is green/not even or odd, so if you hit that zero, you lose.
There are 36 numbers you can hit, plus that 0 . Even if you chose red or black, even or odd, there are bigger chances to lose than win.
-
Roulette tables have a zero and a double-zero, to give the casino its only mathematical advantage. Are you sure this answer true for the MUD as given? Keep in mind, one wasn't given. – Joshua Shane Liberman Aug 28 '12 at 15:56
## protected by J. M. is back.Aug 29 '12 at 10:28
Thank you for your interest in this question. Because it has attracted low-quality answers, posting an answer now requires 10 reputation on this site.
|
# What is XeF_4's molecular shape?
Feb 7, 2017
$X e {F}_{4}$ is $\text{square planar...........}$
The molecular geometry is square planar, but the electronic geometry is octahedral. How? Well, the is a simple consequence of $\text{valence shell electron pair repulsion theory}$, which I will try to outline.
For $X e {F}_{4}$, we have 8 valence electrons from xenon, and $4 \times 7$ valence electrons from the bound fluorides; thus, we have 36 electrons, 18 electron pairs to distribute around 5 centres. Each fluorine atom has 4 electron pairs surrounding it; 3 lone pairs, and one $X e - F$ bond. And the central xenon atom has 4 bonding pairs, and 2 non-bonding lone pairs. The most thermodynamically stable state for these 6 electrons is as an OCTAHEDRON, with $4 \times X e - F$ bonds, and 2 non-bonding lone pairs on the central xenon atom.
|
# A CO observation of the Galactic methanol masers
Title A CO observation of the Galactic methanol masers Publication Type Journal Article Year of Publication 2014 Authors Ren, Z, Wu, Y, Liu, T, Li, L, Li, D, Ju, B Journal \aap Volume 567 Pagination A40 Keywords ISM: jets and outflows, ISM: kinematics and dynamics, stars: formation Abstract {Context. We investigated the molecular gas associated with 6.7 GHz methanol masers throughout the Galaxy using a J = 1-0 transition of the CO isotopologues. Aims: The methanol maser at 6.7 GHz is an ideal tracer for young high-mass star-forming cores. Based on molecular line emissions in the maser sources throughout the Galaxy, we can estimate their physical parameters and, thereby, investigate the forming conditions of the high-mass stars. Methods: Using the 13.7-m telescope at the Purple Mountain Observatory (PMO), we have obtained $^{12}$CO and $^{13}$CO (1-0) lines for 160 methanol masers sources from the first to the third Galactic quadrants. We made efforts to resolve the distance ambiguity by careful comparison with the radio continuum and HI 21 cm observations. We examined the statistical properties in three aspects: first, the variation throughout the Galaxy; second, the correlation between the different parameters; third, the difference between the maser sources and the infrared dark clouds. In addition, we have also carried out $^{13}$CO mapping for 33 sources in our sample. Results: First, the maser sources show increased $^{13}$CO line widths toward the Galactic center, suggesting that the molecular gas are more turbulent toward the Galactic center. This trend can be noticeably traced by the $^{13}$C line width. In comparison, the Galactic variation for the H₂ column density and the $^{12}$CO excitation temperature are less significant. Second, the $^{12}$CO excitation temperature shows a noticeable correlation with the H₂ column density. A possible explanation consistent with the collapse model is that the higher surface-density gas is more efficient to the stellar heating and/or has a higher formation rate of high-mass stars. Third, comparing the infrared dark clouds, the maser sources on average have significantly lower H₂ column densities, moderately higher temperatures, and similar line widths. Fourth, In the mapped regions around 33 masers, 51 $^{13}$CO cores have been revealed. Among them, only 17 coincide with the radio continuum emission (F$_{cm}${\gt} 6 mJy), while a larger fraction (30 cores) coincide with the infrared emissions. Only one maser source has no significant IR emission. The IR-bright and radio-bright sources exhibit significantly higher $^{12}$CO excitation temperatures than the IR-faint and radio-faint sources, respectively. Conclusions: The 6.7 GHz masers show a moderately low ionization rate but have a common-existing stellar heating that generates the IR emissions. The relevant properties can be characterized by the $^{12}$CO and $^{13}$CO (1-0) emissions in several aspects as described above. } DOI 10.1051/0004-6361/201219692
|
PNT analog for primes inside a structured set
Let $\Bbb T$ be the set of all square free integers with ordering derived from $\Bbb N$. Essentially $PNT$ says if you pick $\log N$ integers less than $N$ you can expect one of them to be prime.
1. What is the analog in $\Bbb U\subseteq\Bbb T$ where $\Bbb U$ contains only those integers in $\Bbb T$ with factors $1\bmod m$ for some fixed integer $m>0$ (that is how many integers we expect to pick in random before we hit a prime)?
2. What is the analog in $\Bbb U_\alpha\subseteq\Bbb T$ where $\Bbb U$ contains only those integers in $\Bbb T$ with factors $1\bmod m$ for some fixed integer $m>0$ and each integer $N\in\Bbb U_\alpha$ has only $O((\log \log N)^\alpha)$ factors for some fixed $\alpha\in(0,1)$?
• The density of primes congruent to $1 \bmod m$ is known, so to answer (1) it suffices to understand the density of integers constructed from primes congruent to $1 \bmod m$. Similarly for (2), though the density estimate seems to be more annoying. – davidlowryduda May 16 '17 at 18:52
• @mixedmath can you post whatever you know? – 1.. May 16 '17 at 19:19
I describe how to understand (1).
The number of primes congruent to $1 \bmod m$ that are less than $X$ is asymptotically given by $$\pi(X; 1, m) = (1 + o_m(1))\frac{1}{\varphi(m)} \frac{X}{\log X}.$$ This is a quantitative version of Dirichlet's Theorem on Primes in arithmetic progressions, and generally it is possible to give error terms analogous to the error term in the Prime Number Theorem.
In "Strings of Congruent Primes", Journal of the London Mathematical Society 61.2 (2000): 359-373, Shiu proves that $$\zeta(s; 1, m) := \prod_{p \equiv 1 \bmod m} \big(1 - \frac{1}{p^s}\big)^{-1} = (s-1)^{-1/\varphi(m)} g(s)$$ where $g(s)$ is a regular, non-zero function in regions of shape $1 - \frac{c}{\log T} \leq \textrm{Re } s \leq 2$, $\textrm{Im } s \leq T$. [This is meant to be similar to the classical zero-free region for $\zeta(s)$].
We can construct a Dirichlet series associated to those $n \in \mathbb{U}$ through the computations $$D(s) = \sum_{n \in \mathbb{U}} \frac{1}{n^s} = \prod_{p \equiv 1 \bmod m} \big( 1 + \frac{1}{p^s}\big) = \prod_{p \equiv 1 \bmod m} \frac{(1 - p^{-2s})}{(1 - p^{-s})} = \frac{\zeta(s; 1,m)}{\zeta(2s; 1, m)}.$$ Notice that this is essentially similar to the Dirichlet series associated to counting squarefree integers in $\mathbb{Z}$, $$\sum_{n \; \text{squarefree}} \frac{1}{n^s} = \frac{\zeta(s)}{\zeta(2s)}.$$
Then classical Perron-type estimates and standard Tauberian arguments (similar to those detailed in Shiu's paper) show that the number of elements of $\mathbb{U}$ up to $X$ is given by $$\# \{n \in \mathbb{U} : n \leq X \} = (1 + o_m(1)) \frac{1}{\zeta(2; 1, m)} \frac{X (\log X)^{1/\varphi(m)}}{\log X}.$$
Thus the density you're looking for is $$\frac{\pi(X; 1, m)}{\# \{n \in \mathbb{U} : n \leq X \}} \sim \frac{\zeta(2; 1, m)}{\varphi(m)} \frac{1}{(\log X)^{1/\varphi(m)}},$$ where $\zeta(2; 1,m)$ is the constant $$\zeta(2; 1,m) = \prod_{p \equiv 1 \bmod m} \big(1 - \frac{1}{p^2}\big).$$
Thus you should expect (roughly) a constant out of every $(\log N)^{1/\varphi(m)}$ many random choices up to $N$ to be prime within this set. Note that this also gives the correct estimate for the "degenerate case" corresponding to $m = 1$.
It might be possible to perform some sort of sieving argument to approach your (2), but that is a bit separate from things that I routinely think about.
• should it be '.... a constant out of every $(\log N)^{\varphi(m)}$...'? In any case the density went down since you now have to choose lot more integers before you expect to hit a prime. However you kept $\frac1m$ of all primes but threw out much more than $\frac{m-1}m$ fraction of integers right and so the density of primes should have increased right? – 1.. May 17 '17 at 2:39
• I think it should be $(\log N)^{\frac1{\varphi(m)}}$ – 1.. May 17 '17 at 2:45
• Oh, indeed I mistyped that line. I've corrected that now. – davidlowryduda May 17 '17 at 2:45
|
# What should a proof of correctness for a typechecker actually be proving?
I've been programming for several years, but am very unfamiliar with theoretical CS. I've recently been trying to study programming languages, and as part of that, type checking and inference.
My question is, if I try to write a type inference and checking program for a programming language, and I wish to prove that my typechecker works, what exactly is the proof I'm looking for?
In plain language, I want my type checker to be able to identify any errors in a piece of code that might occur at runtime. If I were to use something like Coq to try proving that my implementation is correct, what exactly will this "proof of correctness" be trying to show?
• Maybe you can clarify if you want to know (1) whether your implementation does implement a given typing system $T$, or (2) whether your typing system $T$ does prevent the errors you think it should? They are different questions. – Martin Berger Jun 12 '17 at 17:58
• @MartinBerger: Ah, I seem to have skipped over that difference. My actual question probably meant to ask both. The context is that I'm trying to build a language, and for it I was writing a typechecker. And people asked me to use a tried and tested algorithm. I was interested in seeing how hard it would be to "prove" the algorithm and typechecker that I was using were "correct". Hence the ambiguity in my question. – Vivek Ghaisas Jun 12 '17 at 18:27
• (1) is really a question in program verification and has little to do with typing. Just needs showing that your implementation meets its specification. As to (2), first define what it means to be an immediate type error (e.g. terms like 2 + "hello" that are 'stuck'). Once this is formalised, you can then prove type soundness theorem. That means that no typable program can ever evolve into an immediate type error. Formally, you prove that if a program $M$ is typable, and for any $n \geq$: if $M$ runs $n$ steps to become $N$, then $N$ does not have an immediate type error. (1/2) – Martin Berger Jun 12 '17 at 18:32
• This is typically proven by induction on $n$ and on the derivation of the typing judegement. (2/2) – Martin Berger Jun 12 '17 at 18:32
• Thank you! Based on your explanation, it seems like (2) is indeed what I was looking for. Could you please make that an answer? (And maybe add in any details that you think might be useful.) I would accept that as the answer! :) – Vivek Ghaisas Jun 12 '17 at 18:45
The question can be interpreted in two ways:
• Whether the implementation does implement a given typing system $T$?
• Whether the typing system $T$ does prevent the errors you think it should?
The former is really a question in program verification and has little to do with typing. Just needs showing that your implementation meets its specification, see Andrej's answer.
Let me talk about the later question. As Andrej said, from an abstract point of view, a typing system seems to enforce properties on programs. In practise, your typing system $T$ seeks to prevent errors from happening, meaning that typable programs should not exhibit the class of errors of interest. In order to show that $T$ does what you think it should, you have to do two things.
• First, you formally define what it means for a program to have an immediate typing error. There are many ways this can be defined -- it's up to you. Typically we want to prevent programs like 2 + "hello". In other words, you need to define a subset of programs, call them Bad, that contains exactly the programs with immediate typing error.
• Then you must prove that programs that are typable can never evolve into programs in Bad. Let's formalise this. Let your typing judgement be $\Gamma \vdash M : \alpha.$ Recall that that should be read as: program $M$ has type $\alpha$, assuming the free variables are typed as given by the environment $\Gamma$. Then the theorem you want to prove is:
Theorem. Whenever $\Gamma \vdash M : \alpha$ and $M \rightarrow \cdots \rightarrow N$ then $N \notin$ Bad.
How to prove this theorem depends on the details of the language, the typing system and your choice of Bad.
One standard way of defining Bad is to say: a term $M$ has an immediate type error if it is neither a value nor has a reduction step $M \rightarrow N$. (In this case $M$ is often referred to as stuck.) This only works for small-step operational semantics. One standard way of proving the theorem is to show that
• $\Gamma \vdash M : \alpha$ and $M \rightarrow N$ together imply $\Gamma \vdash N : \alpha$. This is called "subject reduction". It is usually proven by simultaneous induction on the derivation of the typing judgement, and the length of the reductions.
• Whenever $\Gamma \vdash M : \alpha$ then $M$ is not in Bad. This is usually also proven by induction on the derivation of the typing judgement.
Note that not all typing systems have "subject reduction", for example session types. In this case, more sophisticated proof techniques are required.
That's a good question! It asks what we expect from types in a typed language.
First note that we can type any programing language with the unitype: just pick a letter, say U, and say that every program has type U. This isn't terribly useful, but it makes a point.
There are many ways to understand types, but from a programmer's point of view the following I think is useful. Think of a type as a specification or a guarantee. To say that $e$ has type $A$ is to say that "we guarantee/expect/demand that $e$ satisfy the property encoded by $A$". Often $A$ is something simple like int, in which case the property is simply "it's an integer".
There is no end to how expressive your types can be. In principle they could be any kind of logical statements, they could use category theory and whatnot, etc. For example, dependent types will let you express things like "this function maps lists to list such that the output is a sorted input". You can go further, at the moment I am listening to a talk on "concurrent separation logics" which allows you to speak about how concurrent programs work with shared state. Fancy stuff.
The art of types in programming language design is one of balancing expressivity and simplicity:
• more expressive types allow us to explain in more detail (to ourselves and to the compiler) what is supposed to be going on
• simpler types are easier to understand and can be automated more easily in the compiler. (People come up with types which essentially require a proof assistant and user's input to do type checking.)
Simplicity is not to be underestimated, as not every programmer has a PhD in theory of programming languages.
So let us come back to your question: how do you know that your type system is good? Well, prove theorems that show your types to be balanced. There will be two kinds of theorems:
1. Theorems which say that your types are useful. Knowing that a program has a type should imply some guarantees, for instance that the program won't get stuck (that would be a Safety theorem). Another family of theorems would connect the types to semantic models so that we can start using real math to prove things about our programs (those would be Adequacy theorems, and many others). The unitype above is bad because it has not such useful theorems.
2. Theorems which say that your types are simple. A basic one would be that it is decidable whether a given expression has a given type. Another simplicity feature is to give an algorithm for inferring a type. Other theorems about simplicity would be: that an expression has at most one type, or that an expression has a principal type (i.e., the "best" one among all types that it has).
It is difficult to be more specific because types are a very general mechanism. But I hope you see what you should shoot for. Like most aspects of programming language design, there is no absolute measure of success. Instead, there is a space of design posibilities, and the important thing is to understand where in the space you are, or want to be.
• Thank you for that detailed answer! However, I'm still not sure about the answer to my question. As a concrete example, let's take C - a statically typed language with a simple enough type system. If I wrote a typechecker for C, how would I prove that my typechecker is "correct"? How does this answer change if instead I wrote a type checker for Haskell, say HM? How would I now prove "correctness"? – Vivek Ghaisas Jun 12 '17 at 9:55
• 1. Actually define the type system $T$ for C as a mathematical entity (so that you can prove theorems about it). 2. Implement your typechecker, or describe an algorithm for type checking. 3. Prove the theorem: *If my typechecker checks that $e$ has type $A$ then there is a derivation in the type system $T$ showing that $e$ has type $A$. – Andrej Bauer Jun 12 '17 at 10:24
• I would recommend doing 2. and 3. as a combination. Also, have a look at CompCert. – Andrej Bauer Jun 12 '17 at 10:25
• Great answer. One thing I would add is that the literature on type systems often focuses on "soundness" theorems, but another aspect of correctness that one might care about is "completeness" (which seems to be implicit in the OP's "I want my type checker to be able to identify any errors in a piece of code that might occur at runtime"). In particular, in addition to the theorems you mentioned, one might hope to establish: 1. If there is a derivation in the type system $T$ showing that $e$ has type $A$, then the typechecker verifies that $e$ has type $A$, and/or 2. if $e$ does ... – Noam Zeilberger Jun 13 '17 at 9:01
• ... not have type $A$, then it does not exhibit the expected behavior of an $A$ at runtime (or more simply, if $e$ is not typable, then it goes wrong at runtime). One reason these completeness aspects are sometimes neglected is that they can run into undecidability issues as the type system becomes more expressive (and there is a tradeoff between 1 and 2: as the type system becomes more expressive to catch errors, it becomes more difficult to automate type checking). Still, sometimes it is possible to prove such soundness + completeness theorems, ... – Noam Zeilberger Jun 13 '17 at 9:04
There are a few different things you could mean by "prove that my typechecker works". Which, I suppose, is part of what your question is asking ;)
One half of this question is proving that your type theory is good enough to prove whatever properties about the language. Andrej's answer tackles this area very well imo. The other half of the question is —supposing the language and its type system are already fixed— how can you prove that your particular type checker does in fact implement the type system correctly? There are two main perspectives I can see taking here.
One is: how can we ever trust that some particular implementation matches its specification? Depending on the degree of assurances you want, you may be happy with a large test suite, or you may want some sort of formal verification, or more likely a mixture of both. The upside of this perspective is that it really highlights the importance of setting boundaries on the claims you're making: what exactly does "correct" mean? what portion of the code is checked, vs what part is the assumed-correct TCB? etc. The downside is that thinking too hard about this leads one down philosophical rabbit holes— well, "downside" if you don't enjoy those rabbit holes.
The second perspective is a more mathematical take on correctness. When dealing with languages in maths we often set up "models" for our "theories" (or vice versa) and then try to prove: (a) everything we can do in the theory we can do in the model, and (b) everything we can do in the model we can do in the theory. (These are Soundness and Completeness theorems. Which one's which depends on whether you "started out" from the syntactic theory or from the semantic model.) With this mindset we can think of your type-checking implementation as being a particular model for the type theory in question. So you'd want to prove this two-way correspondence between what your implementation can do and what the theory says you should be able to do. The upside of this perspective is that it really focuses on whether you've covered all the corner cases, whether your implementation is complete in the sense of not leaving out any programs it should accept as type-safe, and whether your implementation is sound in the sense of not letting in any programs it should reject as ill-typed. The downside is your proof of correspondence is likely to be fairly separated from the implementation itself, so it'll only help prove the gross structure of your implementation is good, it may not help catch subtle implementation bugs.
• I'm not sure I can agree with "upside of this perspective is that it really focuses on whether you've covered all the corner cases", especially if the model is only sound, but not complete. I'd propose a different perspective: going through a model is a contingent proof technique that you use for various reasons, e.g. because the model is simpler. There is nothing philosophically more dignified about going through a model -- ultimately you want to know about the actual executable and its behaviour. – Martin Berger Jun 13 '17 at 8:17
• I thought "model" and "theory" were meant in a broad sense, and wren was just emphasizing the importance of trying to establish a two-way correspondence via a "soundness + completeness theorem". (I also think this is important, and made a comment to Andrej's post.) It's true that in some situations we'll only be able to prove a soundness theorem (or a completeness theorem, depending on your perspective), but having both directions in mind is a useful methodological constraint. – Noam Zeilberger Jun 13 '17 at 9:20
• @NoamZeilberger "The question is," said Martin, "whether you can make words mean so many different things." – Martin Berger Jun 13 '17 at 9:43
• When I learned about typing systems and programming language semantics, I found the realisation that models are merely proof techniques about operational semantics, rather than ends in themselves, sublimely liberating. – Martin Berger Jun 13 '17 at 9:46
• Relating different models through soundness & completeness is an important scientific methodology for the transfer of insight. – Martin Berger Jun 13 '17 at 9:49
|
# Dodecagon Area Calculator
Created by Kenneth Alambra
Reviewed by Rahul Dhari
Last updated: Jan 20, 2023
If you're looking for a quick and easy way to calculate the area of a regular dodecagon, this dodecagon area calculator is for you. A dodecagon is a 12-sided regular polygon, and as a regular polygon, a dodecagon follows the same properties as other regular polygons.
In this calculator, you'll be able to explore:
• How to find the area of a dodecagon;
• How to use this dodecagon area calculator; and
• Some examples of dodecagon area calculation.
## How to find the area of a dodecagon
A dodecagon is a regular polygon which means to find the area of a regular dodecagon, we can use the area formula of any regular polygon, as shown below:
$\small A = \frac{1}{2}\times n\times a\times r$
Where:
• $A$ is the area of a regular polygon;
• $n$ is the regular polygon's number of sides;
• $a$ is the length of the regular polygon's side;
• $r$ is the apothem (or incircle radius) of the regular polygon.
To use this formula to find a dodecagon area, we can denote as $A_\text{dodecagon}$, we need to use $12$ for $n$, and we can also utilize the relationship between $r$, $a$, and $n$ to arrive at this equation:
\small \begin{align*} A_\text{dodecagon} &= \frac{1}{2}\times n\times a\times r\\\\ &= \frac{1}{2}\times n\times a\times \frac{a}{\tan(\frac{\pi}{n})\times 2}\\\\ &= \frac{1}{2}\times 12\times \frac{a^2}{\tan(\frac{180\degree}{12})\times 2}\\\\ &= 6\times \ \frac{a^2}{\tan(15\degree)\times 2}\\\\ &= \frac{3\times a^2}{\tan(15\degree)}\\\\ \end{align*}
We can further simplify this equation by taking the quotient of $3$ and $\tan(15\degree)$ to have:
\small \begin{align*} A_\text{dodecagon} &= \frac{3\times a^2}{\tan(15\degree)}\\\\ &= 11.19615242\times a^2 \end{align*}
Using this formula for an example, let's say we have a dodecagon with a side that measures $10\ \text{cm}$. Substituting $10\ \text{cm}$ into our dodecagon area formula, we get:
\small \begin{align*} A_\text{dodecagon} &= \frac{3\times a^2}{\tan(15\degree)}\\\\ &= \frac{3\times (10\ \text{cm})^2}{\tan(15\degree)}\\\\ &= 1119.615242\ \text{cm}^2\\ &≈ 1119.6\ \text{cm}^2 \end{align*}
## How to use this dodecagon area calculator
This dodecagon area calculator is very easy to use and only requires one step to find the area of any dodecagon. And that is to:
• Enter at least one of the measurements of your dodecagon. You can enter your dodecagon's:
• Side (a);
• Perimeter (which is only $12\times a$);
• Circumcircle radius (R) (equal to $\frac{a}{2\times \sin(15\degree)}$); or
• Apothem (r).
You can also click on the advanced mode button below our tool to access the secret feature of this calculator. In the advanced mode of this tool, you will see the Number of sides variable, where you can enter other values to find the area of other regular polygons. Isn't that amazing? 😊
## Other regular polygon tools
Craving for more polygon-related knowledge? Don't worry. We've got a list of other related tools waiting for you:
## FAQ
### How do I find the area of a dodecagon?
To find the area of a dodecagon:
1. Take the square of its side measurement. Let's say its side measures 8 cm. We then have: 8 cm × 8 cm = 64 cm².
2. Multiply the product by 3/tan( 15°) or 11.19615242 to find the dodecagon area: 64 cm² × 11.19615242 = 716.5537545 cm² ≈ 716.6 cm².
### What is the area of a dodecagon with a side of 1 cm?
The area of a dodecagon with a 1-cm side is roughly 11.196 cm². To find that value, we can use the dodecagon area formula: dodecagon area = 11.19615242 × (side measurement)².
Since the dodecagon's side measures 1 cm, we get:
dodecagon area = 11.19615242 × (1 cm)²
= 11.19615242 × 1 cm²
= 11.19615242 cm²
≈ 11.196 cm²
Kenneth Alambra
Name
regular dodecagon
α
150
deg
β
30
deg
Input at least one measurement
Side (a)
in
Perimeter
in
in
in
Result
Area
in²
People also viewed…
### Arccos
If you're searching for an easy tool to find inverse cosine, look no further - our arccos calculator is a perfect match.
### Car vs. Bike
Everyone knows biking is fantastic, but only this Car vs. Bike Calculator turns biking hours into trees! 🌳
### Perimeter of a rectangle with given area
Our perimeter of a rectangle with a given area calculator lets you find the perimeter of a rectangle once you know the area and a side (length or width).
### Plastic footprint
Find out how much plastic you use throughout the year with this plastic footprint calculator. Rethink your habits, reduce your plastic waste, and make your life a little greener.
|
# easy pay day loans
### Borrowing to get. Borrowing to spend, also referred to as gearing or leverage, was a risky company.
Borrowing to get. Borrowing to spend, also referred to as gearing or leverage, was a risky company. As you develop profits whenever areas go up, they […]
### Offered the changing demographics for the pupil population, most of these economic sacrifices must not be looked at nonchalantly.
Offered the changing demographics for the pupil population, most of these economic sacrifices must not be looked at nonchalantly. Many thanks for Signing Up! Most of […]
|
# Infinite potential well energy
Ask Question Why potential energy has the word potential in it? Particle in an infinite square well potential Ket Representation Wave Function Representation Matrix Representation Hamiltonian H H − 2 2m d dx2 H E 1 00 0E 2 0 00E 3 ⎛ ⎝ ⎜ ⎜ ⎜ ⎜ ⎜ ⎞ ⎠ ⎟ ⎟ ⎟ ⎟ ⎟ Eigenvalues of Hamiltonian Normalized Eigenstates of Hamiltonian n ψ n (x)= 2 L sin nπ L ⎛ ⎝⎜ ⎞ ⎠⎟ 1 1 0 0 Lesson - Infinite Potential Well Schrodinger's Equation. a) Zero. Calculate the Zero-point energy for a particle in an infinite potential well for an electron confined to a 1 nm atom. 9 trillion barrels of oil. The energy spectrum of a quantum particle moving in a potential well is discrete. 3 Infinite Potential Well: A Confined Electron. standing waves), with wave number k: V(x)= 0if ∞if ⎧ ⎨ ⎪ ⎩⎪ −a<x x>a Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. The solution of the time independent Schrödinger equation. 9 X 10-29 J b) 4. Immediately after the measurement, the quantum system is in the Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. If the particle has zero energy, it will be at rest •1-D infinite square well. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential The free-particle wave functions are sinusoidal both inside and outside the well. acobdarfq and 13 more users A diagram showing the difference in energy levels between a finite square well and and infinite square well of height 75eV. if we had DIFFERENT potentials along the 3 axes The infinite square potential well (a) with and (b) without the E-field applied. Solution: Concepts: The infinite square well; Reasoning: The electron is confined in an infinite potential well, so its energy is given by. After Neudeck and Pierret Figure 2. The squared magnitude | ψ n | 2 of the wave function is used to calculate the probability of determining a particle at a particular location x in the potential well. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. Infinite and Finite Well. The steps. Energy quantization. Ask Question Why potential energy has the word potential in it? A Comparison Between Finite and Infinite Potential Well has been also presented. This scale of energy is easily seen, even at room temperature. It turns out, this is a decent model for electrons in a smooth. 18 ธ. W n = h 2 8 m L 2 n 2 n = L 8 m W n h 2 L = h 2 8 m W n n m = h 2 n 2 8 L 2 W n. The potential is non-zero and equal to −V H in the region −a ≤ x ≤ a. 25 ธ. V(x) V0 x 0 2 −Lx 2 Lx 2 L Figure 1. ) Localized Particles Result in Quantized Energy/Momentum: Infinite Square Well First a needed tool: Consider an electron trapped in an energy well with infinite potential barriers. ย. •Finite square well, tunneling… Werner Heisenberg Infinite 1-D Square Well: Wave functions and Quantized Energy . 4a () 2 2 2 2 n 2 2 2 2 2 2 Energy 25El 16El 4 — n2E It 2ñ2 2 ml-2 Figure 6-3 Graph of energy vs. 0 \times 10^{-10}\, m\). Topics: Wave functions, square well potential, probability amplitude, energy eigenvalues, numerical solution to the Schroedinger equation. 6 x 10"34 Js)2 (l)2 18. The set of allowed values for the particle's total energy En as given by Equation 6-24 form the energy-level diagram for the infinite square well potential. FIG. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential So if the potential well has a lenght L then we want n*w/2 = L Where w is the wavelength and n is an integer. An analytical expression is Finally the results are compared with the infinite quantum well. d) Variable. Find all the position where the electron is most likely to be found when it is in its third quantum state (n=3). What is the change in the Consider a three-dimensional infinite potential well. The free particle moves in 0<x<L. This effects a change of the energy The 1D Semi-Infinite Well; Imagine a particle trapped in a one-dimensional well of length L. DOI: 10. 2554 This Demonstration shows the bound state energy levels and eigenfunctions for a semi-infinite potential well defined by . Pre-requisite skills: An understanding of basic quantum physics and the meaning of the wave function, eigenvalue problem, and boundary conditions. Formula Infinite square potential well (1d) Energy Quantum number Length. Note that typically, before this exercise is given, students are given a brief onboarding lesson where they are introduced to Composer. 2562 Likewise, it can never have zero energy, meaning that the particle can never "sit still". The exact solutioon for this The following discussion is meant to provide insight into solving such equations for potential energy wells of the infinite and finite form. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential In order to avoid this we solve the finite square-well potential whose the boundary conditions are well fixed, even in a minimal-length scenario, and then we take the limit of the potential going to infinity to find the eigenfunctions and the energy equation for the infinite square-well potential. The electron then jumps to the next lower energy level by emitting light. The three-dimensional case may be used to model many more real situations such as a gas in a sealed vessel (Eisberg, 1961; Basdevant & Dalibard, 2002), electrons in metals (Basdevant & Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. For a quantum mechanical particle we want instead to solve the Schrodinger equation. 12 Potential of a finite rectangular quantum well with width Lx. We consider two cases. Consider a free electron bound within a two-dimensional infinite potential well defined by V = 0 for 0<x<25 Å, 0<y<50 Å, and V = ∞ elsewhere. An electron is trapped in a one-dimensional infinite potential well of length \(4. The value of a is (up to two decimal places) L. 4nm in width. 10 a0) internal barrier on the solutions to the particle-in-a-box (PIB) problem. Inside the well there is no potential energy. e. Ask Question Why potential energy has the word potential in it? smooth. S. A diagram of the confining potential in the particle in a box model. Consider the following piecewise continuous, finite potential energy:. Schrodinger's equation is integrated numerically for the first five energy states. I'm starting with a simple infinite potential well stretching from -10 to 10 angstroms and manually entering the energy just to debug the method we're expected to use, but I can't seem to get it to work. 89 eV, and Es 1. Text Eq. 9 X 10-29 J c) 5. Axiom 2c: The possible outcomes of measurements of the energy corresponding to the hermitian linear Hamilton operator are the eigenvalues of . The two spheres must have the same potential, so by equating the potential energy of each charged sphere (which is the same as that of a point charge at the center of the sphere) we get: 12 12 11 22 qq KK rr qr qr = ⇒= Equipotential line Surface of conductor r2 1 The electrostatic potential energy of the system can, in principle, be obtained by calculating the path integral of between infinity and (a, b, 0). Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential Particle in an infinite square well potential Ket Representation Wave Function Representation Matrix Representation Hamiltonian H H − 2 2m d dx2 H E 1 00 0E 2 0 00E 3 ⎛ ⎝ ⎜ ⎜ ⎜ ⎜ ⎜ ⎞ ⎠ ⎟ ⎟ ⎟ ⎟ ⎟ Eigenvalues of Hamiltonian Normalized Eigenstates of Hamiltonian n ψ n (x)= 2 L sin nπ L ⎛ ⎝⎜ ⎞ ⎠⎟ 1 1 0 0 PARTICLE IN AN INFINITE POTENTIAL WELL CYL100 2013{14 September 2013 We will now look at the solutions of a particle of mass mcon ned to move along the x-axis between 0 to L. In the infinite potential energy well problem, the walls extend to infinite potential. In the finite potential energy well problem the walls extend to a finite potential energy, U0. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential By analogizing a particle in a one-dimensional infinite potential well as a gas in a piston-cylinder, quantum thermodynamic processes in a dual cycle can be explained from the analogies of classical processes such as adiabatic, isochoric, and isobar. The horizontal axis shows spatial position , and the vertical axis shows energy . With these boundary conditions, the solutions to the time Infinite Potential Well Next: Square Potential Barrier Up: One-Dimensional Potentials Previous: Introduction Consider a particle of mass and energy moving in the following simple potential: In the infinite potential energy well problem, the walls extend to infinite potential. Consider a particle of mass and energy moving in the following simple central potential: (647) Clearly, the wavefunction is only non-zero in the region . This asymmetric, semi-infinite well is crudely representative of the potential that effective mass m. still possess 97% of untapped oil hidden and it’s estimated to produce 11. Using boundary conditions ψ(0) = 0, we obtain B = -A : Therefore, ψ( ) [exp( ) exp( )] 2 sin. Ask Question Why potential energy has the word potential in it? ‘infinite’ potential well can model some types of molecules, e. Quantized energy levels. , particle moves freely in 0<x. Department of Energy — Energy: Infinite Potential takes students to the surface of the sun, across the ocean floor, and deep beneath Earth’s crust to explore the many sources of Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential well of width 10À. The potential function is given by. Zero-point energy. Describe any similarities and any differences to the results of the one-dimensional infinite potential well. Localized Particles Result in Quantized Energy/Momentum: Infinite Square Well First a needed tool: Consider an electron trapped in an energy well with infinite potential barriers. If this is correct, then Aristotle’s two notions of the potential infinite and actual infinite have been redefined and clarified. 2556 Figure 3. Ask Question Why potential energy has the word potential in it? Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential well of width 10À. It's just an electron that's trapped along a line. This is the case of unbound states, where eigenvalues of Comparison of the finite and infinite square wells depth effect the shape of the probability density, the number of energy levels and the energy values? Infinite Potential Well. Certain superposition states of the 1-D infinite square well have transient zeros at locations other than the nodes of the eigenstates that comprise them. For this case, wavefunction comes out to be: sqrt(2/L) sin(nπx/L). We do The potential energy equals zero for |x| < a. The reflection coefficient for infinite potential was 1 so the electron can not penetrate the barrier. 0 ,. An easier technique is to calculate the electrostatic potential energy of the system with charge and image charges. The actual energy of the first allowed electron energy level in a typical 100 Å GaAs quantum well is about 40 meV, which is close to the value that would be calculated by this simple formula. 1 Rectangular potential with one side infinitely high, the other of depth V o. 25 ก. Consider a particle of mass $m$ and energy $E$ moving in the following simple potential: If the potential is the same in each dimension then rotating the wave around gives the same energy as before. This energy is above the bottom at of the infinite well potential (which is zero inside the infinite well). The energy of the electron is quantized. Goswami problem 3. The wavelength is shorter inside the well than outside. Infinite Potential Well 3. h^a1), n =1,2, 3, We have evaluated the energy levels of a particle in an infinite potential well containing identical square-potential barriers of equal width and sepa… the time-independent Schroedinger Equation predicts (non-physical) infinite energy, infinite potential, infinite mass, or infinite probability current. View Answer & Solution. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential We consider here a 3d rectangular "infinite square well". Let us examine why there is no state with zero energy for a square well potential. In the infinite potential well, E≥0 (because Vmin=0, and E≥Vmin). With these boundary conditions, the solutions to the time The expectation value of the energy stays the same after the doubling of size but it doesn't mean that the spectrum is the same. , square-integrable) at , and that it be zero at (see Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. Ask Question Why potential energy has the word potential in it? the well (and it is precisely that condition which makes the mathematics so much more complicated in the finite square well). Parity. 6 One-dimensional infinite potential well For an electron in a one-dimensional infinite potential well of width 1 Å, calcu- late (i) the separation between the two lowest energy levels; (ii) the frequency and wavelength of the photon corresponding to a transition between these two levels; and (iii) in what region of the electromagnetic spectrum is this frequen- cy/wavelength? 7 Eigenvalues of 1 - Question. This is because if n = 0, then ψ 0 (z) = 0 everywhere inside the infinite square well potential and then Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. That is, the potential, V, is infinitely large at x=0 and below, and x=L and above. 0 eV. (Below we can only show 2-d. 1), and is a solution to the Schrödinger equation. In quantum mechanics, particles are found to not obey the equations of motion that are obeyed by large objects. The expectation value of the energy stays the same after the doubling of size but it doesn't mean that the spectrum is the same. In the examples that we have seen in the previous lecture, the energy could take any. This box can also be thought of as an area of zero potential surrounded by walls of infinitely high potential. Ask Question Why potential energy has the word potential in it? Infinite Square Potential Well. In a finite Potential well, the potential energy outside the box is ______ c) (2pt) Find the wave function. The infinite potential energy outside the box means that there is zero probability of ever finding the particle there, so all of the allowed wavefunctions for One way to estimate the ground state energy of a finite potential well is to use the infinite well energy to produce a trial attenuation factor α. In fact, this effect happens in any potential where the energy of the par-ticle is less than that of a (finite) potential barrier: the particle’s wave func-tion extends into the barrier region. Inside the well, where the potential is 0, the Schrödinger equation is identical for the infinite and finite cases, with eigenstates ψ(x)=Acos(kx)+Bsin(kx) . II. Find the momentum space representation of the position operator x. There are two ways to find the expectation value of the energey, the first which has been shown is: $$\int ^L _0 \Psi (x,t)^* [ \frac{\hbar ^2 }{2m} abla ^2 \Psi (x,t)]dx$$ Infinite Spherical Potential Well. (2) so Schrödinger equation becomes. 1 2 d ψ(x) 2 2m dx This example will illustrate a method of solving the 3-D Schrodinger equation to find the eigenfunctions for a infinite potential well, which is also referred to as a box. 9. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential Certain superposition states of the 1-D infinite square well have transient zeros at locations other than the nodes of the eigenstates that comprise them. (vii) A particle is moving in an infinite potential energy well in one dimension. In quantum mechanics this model is referred to as Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. x A jkx jkx Aj kx = −− = To obtain the energy E as a function of k, substitute the above equation into Schrodinger equation: 22 2 2. I start with: Asin(KnX)+ Bcos(KnX). Infinite square potential well (1d) Formula: Infinite square potential well (1d) W n = h 2 8 m L 2 n 2. QUANTUM WELL (QW). (We call it Xshelf because it will be the start of the shelf for the next section . In this problem, a free particle traveling in the +ˆx + x ^ direction with total kinetic energy E (and no the infinite potential well, and then the finite one. Ask Question Why potential energy has the word potential in it? In addition, since the added potential energy function is a constant over the entire region, the change in energy eigenfunction curviness and amplitude must be uniform over Region II. ∞, otherwise . The electron's probability density is zero at x = 0. indd 235 8/22/11 11:57 AM Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. 2 ). 1 nm. b) Infinite. 81 eV. COMPARING FINITE AND INFINITE SQUARE WELLS. Ask Question Why potential energy has the word potential in it? Instructions - Infinite Potential Well Model Purpose. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential Numerical Solutions for the Infinite Potential Well with Internal Barrier Frank Rioux The purpose of this tutorial is to explore the impact of the presence of a large (100 Eh) thin (0. To see the other bound states, simply click-drag in the energy level diagram on the left to select a level. Quantum physics tells us that every quantum particle has an infinite potential of possibilities as to when and where it will manifest (and only comes into local manifestation, or "collapses the probability field", when we put our attention on it — the consciousness/energy interplay). 4a () 2 2 2 2 n 2 2 2 2 2 2 Fig 15. There are only three energy levels E1 0. In this paper, we will discuss the Eigen energy values and Eigen functions of a particle in an infinite square well potential. Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. x for a particle in an infinitely deep well. For a normalized $\psi$, the expectation value of the energy is simply $$\int_{-L}^{+L}dx\,\psi^* \left( -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x) \right) \psi$$ because the integral may be reduced to the interval as the wave function vanishes Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential well of width 10À. For this asymmetric infinite square well, mathematically we find that for E < V 0 , we have that after applying the boundary conditions at −1 and 1, Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential well of width 10À. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential In fact, an infinitely deep well cannot be made, but it has become routine to grow structures consider how Fig. Answer: c. Ask Question Why potential energy has the word potential in it? Just to be clear: Suppose a particle, described by a wave function ψ, is contained within a potential well of infinite height of width L between 0 and L (0<x<L). Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential The Unified Field — the zero point vacuum of spacetime — is infinite in its energy potential. ,3,2,1 ,. This immediately gives us the energy eigenvalues w = h/p So p = nh/2L So E = p 2 /2m = n 2 h 2 /(8L 2 m) where m is the mass of the particle confined to the well. This means that it is possible for the particle to escape the well if it had enough energy. 3. The energy of electrons and holes in cylindrical quantum wires with a finite potential well was calculated by two methods. indd 235 8/22/11 11:57 AM Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential well of width 10À. Then at the boundary of such a region, C may be discontinuous. The half well is created by splitting the original well in half by inserting an infinite wall at the origin. 4. In the first case, the kinetic energy is always positive: −. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential Vii a particle is moving in an infinite potential. The potential energy V(x) is shown with the colored lines. If the energy of the particle is less than the potential at −∞ and +∞, then you have bound states. Date Solution for 1. The height of the barrier is 2. Advanced Physics Q&A Library Calculate the Zero-point energy for a particle in an infinite potential well for an electron confined to a 1 nm atom. Within this region, it is subject to the physical boundary conditions that it be well behaved ( i. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential The easiest spherically symmetric potential to solve is the infinite spherical well: potential equals zero inside a sphere and infinity outside the sphere. This is what the infinite square well represents. That means you have to have infinite kinetic energy to go through! That's a pretty solid wall. The solution of the time independent Schrödinger equation will differ depending on whether the energy E is greater than or less than U0. and it forces the wave function to vanish at the boundaries of the well at $x=\pm a$ . This effects a change of the energy known as the INFINITE SQUARE WELL POTENTIAL (PARTICLE IN A BOX). This effects a change of the energy Infinite Oil Well is a report that discusses the potential money investors can make investing in companies that extract oil. Thus each potential infinite…presupposes an actual infinite. What will this quantity be if the width of the potential well is Choice Questions & Answers (MCQs) focuses on “Finite Potential Well”. In a finite Potential well, the potential energy outside the box is ____________. The energy of the particle inside the infinite square well potential will be minimum at n = 1. Copying Interact on desktop, mobile and cloud with the free Wolfram Player or other Wolfram Language products. It h changes to E, (1+ax10), when there is a small potential pump of height V, = 50 mL and width a = L/100, as shown in the figure. G. 8. 9 X 10-29 J. For the first portion of the lab, we will solve for the states of fixed energy for a particularly simple potential: an infinite square well. The potential V (x) is zero for 0 < X < Xshelf = 1nm , and infinite everywhere else. 17) gives the energy En of a particle of mass m in the nth energy state of an infinite square well potential with width L:. ( ) = {. (In a special case in which the potential energy becomes infinite, this restric-tion is relaxed. Consider the solution to the Schrödinger equation. Therefore, (6. 1. For a normalized $\psi$, the expectation value of the energy is simply $$\int_{-L}^{+L}dx\,\psi^* \left( -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x) \right) \psi$$ because the integral may be reduced to the interval as the wave function vanishes Just to be clear: Suppose a particle, described by a wave function ψ, is contained within a potential well of infinite height of width L between 0 and L (0<x<L). V , and is bound to the well. Explanation: In a finite potential well, the potential energy of the particle outside the box is a finite constant unlike infinite potential well, where the potential Particles in these states are said to occupy energy levels The quantum-dot region acts as a potential well of a finite height (shown in (b)) that has two finite-height potential barriers at dot boundaries. Solution from fitting boundary conditions Third example: Infinite Potential Well – The potential is defined as: – The 1D Schrödinger equation is: – The solution is the sum of the two plane waves propagating in opposite directions, which is equivalent to the sum of a cosine and a sine (i. Vern Lindberg. Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential well of width 10À. (5. In this exercise, students explore the infinite and finite wells and how the energy eigenstates and eigenvalues change as you change the potential parameters. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential Interference Energy Spectrum of the Infinite Square Well Comments This article was originally published in Entropy , volume 18, issue 4, in 2016. 15 Electron in a one-dimensional infinite PE well. Solutions of the TISE in the IPW. Science. Potential energy. 2563 The one-dimensional infinite square well problem is defined by the potential. In general, the matching between the results of the iterative method and the graphical method This project models the propagation of a quantum wave packet in an infinite square well, and introduces between zero and three potential energy barriers. 1 Answer (s) Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. The infinite square well potential is given by: () ⎩ ⎨ ⎧ ∞ < > ≤ ≤ = x x a x a V x,,, 0 0 0 A particle under the influence of such a potential is free (no forces) between x = 0 and x = a, and is completely excluded (infinite potential) outside that region. Possible wavefunctions and the probability 5. The central "well" region (white) has a zero potential, while the outer "barrier" regions (grey) have an infinitely large potential. 39 nm are shown in comparison The energy: |E| = -E is the binding energy of the particle. The minimum possible energy possessed by the particle inside the infinite square well potential is called Ground State or Zero Point Energy. Explanation: In a finite potential well, the potential energy of the particle outside the box is a finite constant unlike infinite potential well, where the potential An electron is in certain energy state in a one dimensional, infinite potential well from x = 0 to x = L = 2 0 0 p m. It is an extension of the infinite potential well, The ground state (top) and first excited state (bottom) of the infinite square well with a delta-function potential of magnitude V located at x 0 = 3 8 L . 1 x 10~31 kg) (0. We call it the infinite square well, because the wall is a barrier with infinte potential. This is because if n = 0, then ψ 0 (z) = 0 everywhere inside the infinite square well potential and then Infinite Well Energies. Determine the expression for the allowed electron energies. At x<=0 and x>=L, ψ must be 0. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential Quantum properties of the infinite square well. The width of the well is Lx. ) Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential well of width 10À. A potential well having only discrete energy values is known as a dimentional infinite potential well is 2. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential The animations show both a ∨-shaped potential energy well and a half infinite-half ∨-shaped potential energy well. QM PHY202 – p. This preview shows page 2 - 5 out of 35 pages. This is because he claims that the U. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential The minimum possible energy possessed by the particle inside the infinite square well potential is called Ground State or Zero Point Energy. 3390/e18040149 This project models the propagation of a quantum wave packet in an infinite square well, and introduces between zero and three potential energy barriers. The central "well" region (white) has a zero potential, while the outer "barrier" regions (grey) have an infinitely large potential. This effects a change of the energy Wave function. By applying Laplace Transform we can find the general solutions of one dimensional Schrodinger’s time - independent wave equation for a particle in an infinite square well potential. Which one of the following is a possible energy for an excited state? ‘infinite’ potential well can model some types of molecules, e. The density of energy eigenstates grows as the potential well's slope decreases. 3 0 0 L, and x = 0. Find the three longest wavelength photons emitted by the electron as it changes energy levels in the well. entire system to potential V with respect to a point infinitely far away. This applet solves Schrodinger's equation for a particle in a one dimensional potential well. For a classical infinite square quantum The TISE with V = 0, plane waves. In quantum mechanics, the particle in a box model (also known as the infinite potential well or the infinite square well) describes a particle free to move certain energies. In the ground state, the energy of the particle is 5. A more accurate potential function V(x) gives a chance of the electron being outside V(x) These scenarios require the more accurate potential What if the particle energy is higher? What about two wires very close together? Infinite Spherical Potential Well. I'm interested in knowing what would be the wavefunction if I remove one side of the well i. 6 Ve. 3D Infinite-Potential Well In order to find the energies, we first need to take the appropriate derivatives of the wave function. A particle of total energy 9V 0 is incident from the left upon a potential given by V=8V 0 for x<0 Infinite Square Potential Well. linear polyenes (Blinder, 2004). 5. If the energy of the particle is 2 eV when it is in the quantum state associated with this eigenfunction, find the energy when it is in quantum state of lowest possible energy. The square-well potential described in this section has a number of practical applications. The wave packet used is a “Gaussian” wave packet (Eq. h^a1), n =1,2, 3, We have evaluated the energy levels of a particle in an infinite potential well containing identical square-potential barriers of equal width and sepa… Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. The energy If the potential barrier is sufficiently large, then the energy eigenstates will be approximately equal to those of a single particle in an infinite potential 7 มี. The infinite potential well of width a is defined by the energy potential Y(x)=0 for0<x<a ^ V(x) =00 for x <0 and a < x \ ' The corresponding energy eigenvalues and eigenfunctions are021 ^(x)=(2/a)\l2sm(nnx/a) (3) = with = (. First consider the case E The total energy is given by. G x. This effects a change of the energy Quantum properties of the infinite square well. c) Constant. x! 6. The 1D Infinite Well. 1. “infinite” quantum well “particle in a box” finite quantum well Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. The result of comparison showed that energy levels of an infinite well are much higher than that the corresponding energy levels for finite potential well. E2- 0. Elsewhere Start with Schrodinger's wave equation, use the separation of variables technique, and show that the energy is quantized and is given by In section 2 it is shown that m space contains force and potential energy which may be imparted as kinetic energy to material matter. 00 Developed by JASON Learning — in partnership with the National Geographic Society, NOAA, and the U. g. 23 eV. Potential Barrier: Transmission and Reflection. 12. The three-dimensional case may be used to model many more real situations such as a gas in a sealed vessel (Eisberg, 1961; Basdevant & Dalibard, 2002), electrons in metals (Basdevant & Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential well of width 10À. 2 changes if the potential at the walls is not infinite. However, the “right-hand wall” of the well (and the region beyond this wall) has a finite potential energy. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential The 1D Semi-Infinite Well; Imagine a particle trapped in a one-dimensional well of length L. ) TIPLER_06_229-276hr. A particle of mass m is captured in a box. 3 (a) Potential energy for a particle in a one-dimensional finite rectangular well; (b) The ground-state wave function for this potential; (c) The first excited- Energy Levels in a Half-Infinite Linear Well. The potential energy diagram as well as our spherical coordinate definitions are defined below: For my quantum mechanics class, we've been asked to write a program which find energy levels for potential energy wells of different shapes. The allowed energy states of a particle of mass m trapped in an infinite potential well of length L are known as the INFINITE SQUARE WELL POTENTIAL (PARTICLE IN A BOX). Description: A diagram of the confining potential in the particle in a box model. ค. The finite potential well (also known as the finite square well) is a concept from quantum mechanics. Comparison is to the typical potential that binds and electron to a nucleus, or that binds a diatomic molecule (in which case the depth D is called the dissociation energy D). Ask Question Why potential energy has the word potential in it? For example, in the infinite potential well, If only certain wavenumbers are permitted in order to preserve the continuity of the wavefunction, then only the corresponding energy values will be physically realizable for that system. Neutron bound in the nucleus. 39 nm are shown in comparison with the energy levels of an infinite well of the same size. (Cantor 1887) The new idea is that the potentially infinite set presupposes an actually infinite one. potential. For this purpose we have derived analytical equations for the energy eigenvalues which can be solved with a personal The LibreTexts libraries are Powered by MindTouch ® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. However, this is not trivial since the force is a rather complex function of a and b. For this asymmetric infinite square well, mathematically we find that for E < V 0 , we have that after applying the boundary conditions at −1 and 1, The total energy is given by. PARTICLE IN AN INFINITE POTENTIAL WELL CYL100 2013{14 September 2013 We will now look at the solutions of a particle of mass mcon ned to move along the x-axis between 0 to L. The finite rectangular quantum well The finite rectangular quantum well is characterized by zero potential inside the well and a potential V0 outside the well, as shown in Figure 1. Explanation: In a finite potential well, the potential energy of the particle outside the box is a finite constant unlike infinite potential well, where the potential Hi, I am working on a simple 1D potential well problem. Method 2:Solving the Schrodinger equation infinite potential well ,the potential energy outside ,the box is - 45531905 Energy: Infinite Potential – Student Edition $25. 1 Solving Schroedinger's Equation for the Finite Square Well. ❑ Like for the infinite potential, solution is either odd or even:. Consider a particle of mass m in an infinite square well with potential energy 0 for 0 Sz S a oo otherwise V (x) For simplicity, we may take the ‘universe’ here to be the region of 0 S z S a, which is where the wave function is nontrivial. 3 One-dimensional potential well with (a,b) infinite and (c,d) finite potential walls, and (e) energy versus wave number dependence for the case of Particle in an infinite potential well. corresponding to greater kinetic energy in the well than outside. n -th wave function ψ n is the solution of the Schrödinger equation for a bound particle (for example an electron) in a one-dimensional, infinite potential well. mk Ajk kx E Aj Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. ( ) (x < 0, x > L) ( ) 0 (0 < x < L) Ux Ux f The motion would be equivalent to that of a ball bouncing elastically between two walls. The ground state energy of a particle of mass m in an infinite potential well is E,. Ask Question Why potential energy has the word potential in it? So if the potential well has a lenght L then we want n*w/2 = L Where w is the wavelength and n is an integer. Since no particle can have infinite potential energy, C(x) must be zero in regions where V(x) is infinite. When they are fired through a thin slit, rather than scattering like hard spheres they The LibreTexts libraries are Powered by MindTouch ® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. Experiments such as electron diffraction show that particles have a wavelike nature. As the ball bounces back and forth, its speed and thus kinetic energy remains constant. Ask Question Why potential energy has the word potential in it? S1: The Infinite Square Well. Particle in a ring. 025 x 10′ [36] J or 37. more realistic finite square well problem. In quantum mechanics, the particle in a box model (also known as the infinite potential well or the infinite square well) describes a particle free to move in a small space surrounded by impenetrable barriers. It is shown that if an infinite potential barrier is suddenly raised at some or all of these zeros, the well can be split into multiple adjacent infinite square wells without affecting the wavefunction. The energy levels for an electron in a potential well of depth 64 eV and width 0. a) 3. , square-integrable) at , and that it be zero at (see Sect. The frequency of the electron associated with this energy is. If you want the statistical mean of the energy of a system in an infinite square well. In this paper, we will discuss the Eigen energy. 1, the particle has energy, E , less than 0. Ey = _— = 6. 6 eV. (c). This is achieved by making the potential 0 between x= 0 and x= Land V = 1for x<0 and x>L(see Figure 1). 1 - Question. 4 0 0 L; it is not zero at intermediate value of x. 3 Infinite Square-Well Potential Three-Dimensional Infinite-Potential Well When more than one wave function has the same energy, those quantum In the finite potential energy well problem the walls extend to a finite potential energy, U0. Both wells are 0. Consider a finite PE well with the same width (1 nm). C. Formula. Under well defined conditions the force and potential energy of m space become infinite, and the ubiquitous m space contains an infinite amount of potential energy which is observable precisely in Completeness of energy eigenfunctions of the infinite potential well vs Fourier series. _ h2n2 ^n Sma2. Infinite-Potential Well Revisited V(x) V = 0 V = ∞ V = 0 L x = +5 x = 2 Consider the quantum mechanics of a particle with mass m that is… The infinite potential well. ) (We use here the "alternative origin" rather having the box centered on the origin. Particle of mass m and fixed total energy E confined to a relatively small segment of one dimensional space between Chapter 11: The Finite Square Well and Other Piecewise-constant Wells This animation shows a finite potential energy well in which a constant potential Solution. 2561 independent wave equation for a particle in an infinite square well potential. In your finite potential well, it sounds like you are looking for bound states, in which case E<0, so you absorb the negative into the square root. This problem has been solved! See the 10 พ. 2 2 (sin ) (2 sin ) 0 2. d) (3pt) Using the boundary conditions, find the energy. 2. Ans: Infinite Potential Well 3. mk Ajk kx E Aj the well (and it is precisely that condition which makes the mathematics so much more complicated in the finite square well). We use n = 1 for the ground level and a = 0. 9 X 10-29 J d) 6. Department of Energy — Energy: Infinite Potential takes students to the surface of the sun, across the ocean floor, and deep beneath Earth’s crust to explore the many sources of 3. Infinite potential well. 8 The finite potential well provides an example of a spectrum of wave functions The energy shift due to this interaction is used to calculate the electric Figure 2. Ask Question Why potential energy has the word potential in it? We have evaluated the energy levels of a particle in an infinite potential well containing identical square-potential barriers of equal width and separation, in a symmetric as well as an asymmetric arrangement, as functions of the number and height of the potential barriers. Again, look at the wavefunctions of the infinite square well as a simple example. Question: Problem 2) Find the wavefunction and energy of an electron that is confined in an infinite potential Consider a finite PE well with the same width (1 nm). (1) where is h-bar, m is the mass of a particle, is the wavefunction, and E is the energy of a given state, for a half-infinite potential, for an infinite one-dimensional square potential well, the potential is given by. 22: Potential of a finite well. 1 x 10-9 m)2. Similarly, as for a quantum particle in a box (that is, an infinite potential well), lower-lying energies of a quantum particle trapped … A well-designed system that combines all state of the art technologies in a well-integrated system can offer much better water quality and no polluting discharge at$200 per year. 8 (9. The horizontal axis shows spatial position x {\displaystyle x} , and the vertical axis shows energy E {\displaystyle E} . Consider the potential shown in fig. The infinite potential well (IPW). Consequently, we may express stationary state n as where En is the associated mechanical energy. First consider the case E Infinite Potential Well Next: Square Potential Barrier Up: One-Dimensional Potentials Previous: Introduction Consider a particle of mass and energy moving in the following simple potential: The electron is confined in an infinite potential well, so its energy is given by. Are the finite PE well levels higher or lower than the corresponding infinite well levels? Find the electron penetration depth into the barrier for each of the three energy levels. Fig. 0 < < . Then the Finite Well Energy Levels. Aromatic compounds contains atomic. The walls of the well are considered to be very high considered to the energy of the particle, so much so as to be effectively infinite. The overall building can offer a high quality of life, consuming only 5% of the water of a similar size conventional development, consuming only about half the energy Introduction of Quantum Mechanics : Dr Prince A Ganai. Finite square well potential. Now, the first lesson to take from this problem is that one does not have to solve the Infinite square well approximation assumes that electrons never get out of the well so V(0)=V(a)=∞ and ψ(0)=ψ(a)=0. We do Third example: Infinite Potential Well – The potential is defined as: – The 1D Schrödinger equation is: – The solution is the sum of the two plane waves propagating in opposite directions, which is equivalent to the sum of a cosine and a sine (i. The potential is 0 inside a box with diagonal points of the origin and (L x,L y,L z) and infinite outside the box. The finite well problem shows that wavefunctions are not localized in the vicinity of the well[10] .
|
5
# As BA denned . ExplamSolve the following linear equation systemHJLA] [H...
## Question
###### As BA denned . ExplamSolve the following linear equation systemHJLA] [H
As BA denned . Explam Solve the following linear equation system HJLA] [H
#### Similar Solved Questions
##### Question 710 ptsThe ODE for the position of a mass tied to a spring hanging from the ceiling is given by z" + 3x' + 2x =(where z is measured in meters), in which positive values of € represent positions below the equilibrium position (x 0). Assume the mass begins moving at time t 0 from the equilibrium position with an initial downward velocity of 4 meterslsec: Which of the following statements is true about the solution for the motion of the mass?The mass initially goes up to its hi
Question 7 10 pts The ODE for the position of a mass tied to a spring hanging from the ceiling is given by z" + 3x' + 2x = (where z is measured in meters), in which positive values of € represent positions below the equilibrium position (x 0). Assume the mass begins moving at time t ...
##### Part A Draw the product of the following reaction.KN= ~C =HN_Draw the molecule on the canvas by choosingMarvin JS1 Submit] Request Answer
Part A Draw the product of the following reaction. KN= ~C = HN_ Draw the molecule on the canvas by choosing Marvin JS 1 Submit] Request Answer...
How many ATP molecules could be made through substrare leveL_phosphoryltion plus oxidative phosphorylation (chcmniosmosis) ifyou started with three molecules of succiny CoA and ended with oxaloacetate? Refer to the fgure (below) as gadc. Acety ] CoA Oriloncetale CoA Citrate NADH NAD 3f4 bltz 3NAD...
##### A sinusoidal waveform is given by b(t,x) = Asin(kx wt): A snapshot of the wave at t = 0 sec is shown below, and the wave is traveling to the right:5 cm10 cm15 cmk-What is the correct waveform at t = where T is the period of the wave? As LPE, explain how you obtained your answer_5 cm10 cm15 cmB10 cm15 cm10 cm15 cmD:cm10 cm15cm
A sinusoidal waveform is given by b(t,x) = Asin(kx wt): A snapshot of the wave at t = 0 sec is shown below, and the wave is traveling to the right: 5 cm 10 cm 15 cm k- What is the correct waveform at t = where T is the period of the wave? As LPE, explain how you obtained your answer_ 5 cm 10 cm 15 c...
##### 11. [-/3 Points]DETAILSOSCALCI 4.10.499-503.WA.TUTSolve the initial value problem_2x3 , Y(0)TutorlalAdditional MaterialseBook
11. [-/3 Points] DETAILS OSCALCI 4.10.499-503.WA.TUT Solve the initial value problem_ 2x3 , Y(0) Tutorlal Additional Materials eBook...
##### Use the cigcnvalucs othc adjaceucy matrix to dctcrmine the number f (n) of closcd walks Kz2 with length n , function of n.
Use the cigcnvalucs othc adjaceucy matrix to dctcrmine the number f (n) of closcd walks Kz2 with length n , function of n....
##### Find the determinant of the matrix.$$[-5]$$
Find the determinant of the matrix. $$[-5]$$...
##### Aman has an average useful power output of 80.0 W. Ifhe is asked to lift a boxof4.0kg to a height of 1Om above the ground, what is the minimum time that he needs? 'g=10 m/s?55s8 5there is no answer6 5
Aman has an average useful power output of 80.0 W. Ifhe is asked to lift a boxof4.0kg to a height of 1Om above the ground, what is the minimum time that he needs? 'g=10 m/s? 55s 8 5 there is no answer 6 5...
##### 9_ [-/3.44 Points]DETAILSSPRECALCZ 5.1.041_Consider the following: 11t t =(a) Find the reference number t for the value of t.(b) Find the terminal point determined by t.(x, y)10. [-/3.44 Points]DETAILSSPRECALC7 5.1.043.Consider the following 41 t =(a) Find the reference number t for the value of t.(b) Find the terminal point determined by t.(x,y)
9_ [-/3.44 Points] DETAILS SPRECALCZ 5.1.041_ Consider the following: 11t t = (a) Find the reference number t for the value of t. (b) Find the terminal point determined by t. (x, y) 10. [-/3.44 Points] DETAILS SPRECALC7 5.1.043. Consider the following 41 t = (a) Find the reference number t for the v...
##### Determine which of the following sets are bases for $\mathrm{R}^{3}$. (a) \{(1,0,-1),(2,5,1),(0,-4,3)\} (b) \{(2,-4,1),(0,3,-1),(6,0,-1)\} (c) \{(1,2,-1),(1,0,2),(2,1,1)\} (d) \{(-1,3,1),(2,-4,-3),(-3,8,2)\} (e) \{(1,-3,-2),(-3,1,3),(-2,-10,-2)\}
Determine which of the following sets are bases for $\mathrm{R}^{3}$. (a) \{(1,0,-1),(2,5,1),(0,-4,3)\} (b) \{(2,-4,1),(0,3,-1),(6,0,-1)\} (c) \{(1,2,-1),(1,0,2),(2,1,1)\} (d) \{(-1,3,1),(2,-4,-3),(-3,8,2)\} (e) \{(1,-3,-2),(-3,1,3),(-2,-10,-2)\}...
##### Of the radioactive nuclides in Table 25.1 (a) which one has the largest value for the decay constant, $\lambda$ ? (b) Which one loses $75 \%$ of its radioactivity in approximately one month? (c) Which ones lose more than $99 \%$ of their radioactivity in one month?
Of the radioactive nuclides in Table 25.1 (a) which one has the largest value for the decay constant, $\lambda$ ? (b) Which one loses $75 \%$ of its radioactivity in approximately one month? (c) Which ones lose more than $99 \%$ of their radioactivity in one month?...
##### Defi ne these terms: system, surroundings, thermal energy,chemical energy, potential energy, kinetic energy, law ofconservation of energy.
Defi ne these terms: system, surroundings, thermal energy, chemical energy, potential energy, kinetic energy, law of conservation of energy....
##### Money is invested at 5% interest compounded quarterly: How long does it take before it is worth five times as much?
Money is invested at 5% interest compounded quarterly: How long does it take before it is worth five times as much?...
##### 1. (19 points) Below are problems in cellcounting. Show your work.You have a cell count of 6.7 x105 cells / ml in a total volume of4.5ml. You have resuspended your cells in 5ml sterilePBS. You need to seed a six-well plate with thefollowing numbers:2 wells at 1 x 104 cells / well (2ml /well)2 wells at 4 x 104 cells / well (2ml /well)2 wells at 2 x 105 cells / well (2ml /well)Determine the volume of the original culture you will need toseed each concentration.2. (81 ponits) Record of cell coun
1. (19 points) Below are problems in cell counting. Show your work. You have a cell count of 6.7 x 105 cells / ml in a total volume of 4.5ml. You have resuspended your cells in 5ml sterile PBS. You need to seed a six-well plate with the following numbers: 2 wells at 1 x 104 cells / well (2ml / we...
##### Which of the following is the best electroyte?A.C2H6O2(aq)B. H2O(l)C.HCl(aq)D.C8H18(l)E. HC2H3O2(aq)
Which of the following is the best electroyte?A.C2H6O2(aq)B. H2O(l)C.HCl(aq)D.C8H18(l)E. HC2H3O2(aq)...
|
## Unsupervised Document Representation using Partition Word-Vectors Averaging
Sep 27, 2018 Blind Submission readers: everyone Show Bibtex
• Abstract: Learning effective document-level representation is essential in many important NLP tasks such as document classification, summarization, etc. Recent research has shown that simple weighted averaging of word vectors is an effective way to represent sentences, often outperforming complicated seq2seq neural models in many tasks. While it is desirable to use the same method to represent documents as well, unfortunately, the effectiveness is lost when representing long documents involving multiple sentences. One reason for this degradation is due to the fact that a longer document is likely to contain words from many different themes (or topics), and hence creating a single vector while ignoring all the thematic structure is unlikely to yield an effective representation of the document. This problem is less acute in single sentences and other short text fragments where presence of a single theme/topic is most likely. To overcome this problem, in this paper we present PSIF, a partitioned word averaging model to represent long documents. P-SIF retains the simplicity of simple weighted word averaging while taking a document's thematic structure into account. In particular, P-SIF learns topic-specific vectors from a document and finally concatenates them all to represent the overall document. Through our experiments over multiple real-world datasets and tasks, we demonstrate PSIF's effectiveness compared to simple weighted averaging and many other state-of-the-art baselines. We also show that PSIF is particularly effective in representing long multi-sentence documents. We will release PSIF's embedding source code and data-sets for reproducing results.
• Keywords: Unsupervised Learning, Natural Language Processing, Representation Learning, Document Embedding
• TL;DR: A simple unsupervised method for multi-sentence-document embedding using partition based word vectors averaging that achieve results comparable to sophisticated models.
0 Replies
|
$$\require{cancel}$$
# 9.3: Gravitational potential energy
Consider a large spherical body of mass $$M$$ with a coordinate system whose origin coincides with the center of the spherical body (for example, the large body could be the Earth). The force, $$\vec F(\vec r)$$ on a body of mass $$m$$ (for example, a satellite), located at a position $$\vec r$$ is then given by: \begin{aligned} \vec F(\vec r) = - G\frac{Mm}{r^2}\hat r=- G\frac{Mm}{r^3}\vec r\end{aligned} where in the second equality, we use the fact that the unit vector in the direction of $$\vec r$$ is simply the vector $$\vec r$$ divided by its magnitude. We can write the force out in Cartesian coordinates:
\begin{aligned} \vec r &= x\hat x + y \hat y + z\hat z\\ r &= \sqrt{x^2+y^2+z^2} =(x^2+y^2+z^2)^\frac{1}{2} \\ \therefore \vec F(x,y,z) &= - G\frac{Mm}{(x^2+y^2+z^2)^\frac{3}{2} }(x\hat x + y \hat y + z\hat z)\end{aligned}
Mathematically, this is equivalent to the force that we considered in Example 8.1.2 of Chapter 8, which we showed was a conservative force. The force of gravity in Newton’s theory is thus a conservative force, for which we can determine a potential energy function.
In order to determine the gravitational potential energy function for the mass $$m$$ in the presence of a mass $$M$$, we calculate the work done by the force of gravity on the mass $$m$$ over a path where the integral for work will be “easy” to evaluate, namely a straight line. Figure $$\PageIndex{1}$$ shows such a path in the radial direction, $$r$$, over which it will be easy to calculate the work done by the force of gravity from mass $$M$$ when mass $$m$$ moves from being a distance $$r_A$$ to a distance $$r_B$$ from the center of mass $$M$$.
The work done by the force of gravity on $$m$$ in going from $$r_A$$ to $$r_B$$ is given by: \begin{aligned} W &= \int_{r_A}^{r_B}\vec F(r) \cdot d\vec r = \int_{r_A}^{r_B} \left(- G\frac{Mm}{r^2}\hat r \right)\cdot d\vec r =\int_{r_A}^{r_B} - G\frac{Mm}{r^2}dr\\ &=\left[G\frac{Mm}{r} \right]_{r_A}^{r_B} =G\frac{Mm}{r_B} - G\frac{Mm}{r_A}\end{aligned} The difference in potential energy in going from position $$A$$ to position $$B$$ is given by the negative of the work done by the force: \begin{aligned} \Delta U = U(r_B) - U(r_A) = -W = G\frac{Mm}{r_A} - G\frac{Mm}{r_B}\end{aligned} By inspection, we can identify the potential energy function for gravity:
$U(r)=-G\frac{Mm}{r}+C$
which is determined only up to a constant, $$C$$.
A particularly useful choice of constant is $$C=0$$. This corresponds to choosing the potential energy to be zero only when $$r$$ goes to infinity. That is, the potential energy of mass $$m$$ is zero only when it is infinitely far away from mass $$M$$. The choice of constant $$C$$ corresponds to the (arbitrary) value of the potential energy when mass $$m$$ is infinitely far from mass $$M$$. When mass $$m$$ is not infinitely far away, it has negative potential energy (if $$C=0$$). This is not a problem! Remember, the only thing that is meaningful is a difference in potential energy, so the specific value of the potential energy has no meaning. The kinetic energy of an object, on the other hand, has to be positive.
Recall that if there are no other forces acting on an object, that object will move in such a way to reduce its potential energy. If the object of mass $$m$$ is located at some distance $$r$$ from the object of mass $$M$$, the force of gravity will attract $$m$$ so that $$r$$ decreases. As $$r$$ decreases in magnitude, the potential energy becomes more negative (larger in magnitude, but further away from zero), and the potential energy of $$m$$ will indeed decrease as it accelerates due to the force of gravity.
## Mechanical energy with gravity
Unless noted otherwise, we will continue our discussion of gravitational potential energy with the particular choice of constant $$C=0$$:
$U(r)=-G\frac{Mm}{r}$
Furthermore, we will assume that $$M$$ is a large body, such as the Earth, which we can consider as fixed, and focus our discussion on describing the motion of mass $$m$$ (e.g. a satellite). If $$M$$ is much bigger than $$m$$, they will both experience a force of gravity from each other of the same magnitude (Newton’s Third Law), but because $$M$$ is so much larger, its acceleration will be much smaller (Newton’s Second Law). Thus, it is a good approximation to assume that $$M$$ is stationary and that only $$m$$ moves when $$M>>m$$.
We can define the total mechanical energy of mass $$m$$ when it has a speed $$v$$ (relative to $$M$$) and is located at a distance $$r$$ from the center of mass $$M$$: \begin{aligned} E = U + K = -G\frac{Mm}{r}+\frac{1}{2}mv^2\end{aligned} where the kinetic energy term is always positive. If gravity is the only force exerted on mass $$m$$, then the mechanical energy, $$E$$, as defined above, will be a constant. The mechanical energy of an object can give us insight into the possible motion of the object.
Imagine launching a rocket straight upwards from the surface of the Earth; once all of the fuel has burnt up, the rocket’s mechanical energy becomes constant as the rocket engine stops doing work on the rocket. As soon as the engine stops providing thrust, the rocket will start to slow down as the force of gravity attracts the rocket back to Earth. If the rocket is going fast enough, it will be able to completely escape the Earth’s gravitational pull and travel to infinity (we assume that there are no other planets or the Sun, just the Earth exists!). If, on the other hand, the rocket’s speed is too low, it will eventually stop and fall back to Earth. This is the same thing that happens to you when you try to jump vertically. If you could jump hard enough, you would be able to escape the Earth’s gravitational pull!
In terms of mechanical energy, we can ask ourselves if the mechanical energy of the rocket is large enough to escape the Earth’s gravitational pull. Specifically, we can ask ourselves what the value of the rocket’s kinetic energy would be when it reaches infinity. The kinetic energy of the rocket is given by: \begin{aligned} K = E - U\end{aligned} If the rocket is infinitely far from the Earth, then its potential energy is zero, and the kinetic energy is equal to $$E$$.
If the mechanical energy, $$E$$, is negative, it is not possible for the rocket to ever make it to infinity because its kinetic energy would have to be negative. In other words, if the mechanical energy is negative, then the object of mass $$m$$ can never escape the gravitational pull of object $$M$$. We say that $$m$$ is “gravitationally bound” to $$M$$.
If the mechanical energy, $$E$$, is exactly zero, then the object’s kinetic energy will become zero just as it reaches infinity. In other words, it will just barely be able to escape the gravitational pull from mass $$M$$. The condition for this to happen is: \begin{aligned} E &= 0\\ K & = -U\\ \frac{1}{2}mv^2 &= G\frac{Mm}{r}\\ \therefore v_{esc} &= \sqrt{\frac{2GM}{r}}\end{aligned} which we can interpret as a condition for the speed of the rocket. If at some distance $$r$$ from $$M$$, the rocket has the speed given by the condition above, then it will have enough kinetic energy to escape the gravitational pull of $$M$$. We call this speed the “escape velocity”.
Finally, if the mechanical energy is greater than zero, then the rocket will have enough energy to escape the gravitational pull of $$M$$ and have a non-zero speed when it reaches infinity.
Exercise $$\PageIndex{1}$$
What is the escape velocity from the surface of the Earth?
1. $$4.29\times 10^{6}\text{km/s}$$
2. $$1.25\times 10^{5}\text{km/s}$$
3. $$11.2\times{km/s}$$
4. $$9.81\times{km/s}$$
Example $$\PageIndex{1}$$
Show that an object of mass $$m$$ in a circular orbit of radius $$r$$ around a body of mass $$M$$ has half of the kinetic energy required to escape the gravitational pull of $$M$$.
Solution:
The only force acting on the object is gravity, so it has a mechanical energy given by: \begin{aligned} E&=U+K\\ E&=-G\frac{Mm}{r}+\frac{1}{2}mv^2\end{aligned} In order for the object to just escape the gravitational pull of $$M$$, it’s mechanical energy must be equal to zero: \begin{aligned} E&=0\\ \therefore K_{esc}&=-U\end{aligned} Since the object is in a circular orbit, we can use Newton’s Second Law to find an expression for $$v^2$$: \begin{aligned} F_{net}&=\frac{mv^2}{r}\\ \frac{GMm}{r^2}&=\frac{mv^2}{r}\\ \frac{GM}{r}&=v^2\end{aligned} where in the second line we used the fact that $$F_{net}$$ is equal to the force of gravity exerted by $$M$$ on the object. The kinetic energy of the object is thus: \begin{aligned} K&=\frac{1}{2}mv^2\\ K&=\frac{1}{2}\frac{GMm}{r}\end{aligned} You will notice that this is very similar to our expression for $$U$$. In fact, we have: \begin{aligned} K&=-\frac{1}{2}U\\ \therefore K&=\frac{1}{2}K_{esc}\end{aligned}
Note:
We can also see that the velocity of an object in a circular orbit is equal to $$\sqrt{GM/r}$$, which is half the escape velocity, $$v_{esc}=\sqrt{2GM/r}$$
### Types of orbits
The mechanical energy of a body of mass $$m$$ determines whether it is gravitationally bound to (i.e. cannot escape) the body of mass $$M$$. The path (orbit) that $$m$$ will take depends on its velocity with respect to $$M$$. Clearly, if the velocity of $$m$$ is directed at the center of $$M$$, then $$m$$ will just collide with $$M$$. In all other cases, the orbit that $$m$$ will take depends on the mechanical energy of $$m$$ as well as the speed of $$m$$ at the point of closest approach to $$M$$ (see Figure $$\PageIndex{2}$$). The velocity of $$m$$ at the point of closest approach will always be perpendicular to the line joining the centers of $$m$$ and $$M$$. The different possible orbits are:
1. A circular orbit of radius $$R$$ (where $$R$$ is the distance of closest approach) if the mechanical energy is negative (i.e. it is bound) and the speed is exactly equal to the value necessary for the gravitational force to provide the required centripetal acceleration for uniform circular motion: \begin{aligned} \sum F = G\frac{Mm}{R^2} &= m\frac{v^2}{R}\\ \therefore v_{circ}=\sqrt{\frac{GM}{R}}\end{aligned}
2. An elliptical orbit if the mechanical energy is negative and the speed at the point of closest approach is different than that required for a circular orbit.
3. A parabolic orbit if the mechanical energy is exactly zero.
4. A hyperbolic orbit if the mechanical energy is bigger than zero.
The possible orbits are illustrated in Figure $$\PageIndex{2}$$, and are curves in the family of “conic sections”, as they can be found by the intersection of a plane and a cone. All conic sections have at least one “focus” point (ellipses have two) that corresponds to the location of $$M$$.
This page titled 9.3: Gravitational potential energy is shared under a CC BY-SA license and was authored, remixed, and/or curated by Howard Martin revised by Alan Ng.
|
# Graphical interface for inspection and curation of monosynaptic connections
Monosynaptic connections, both excitatory and inhibitory connections, are determined in the processing pipeline. You can visualize the connections and perform manual curation in a graphical interface.
1. Launch the monoSyn GUI providing the path to your data:
gui_MonoSyn('path_to_data')
2. The interface shows you a reference cell together with related connections. CCGs (shown in blue) between all potential connections related to a reference cell (ACG shown in black). The ACG for each potential cell is shown in a small plot inset with each CCG. A image with all CCGs between the reference cell and the rest of the population is also displayed, including the ACG of the reference cell (highlighted with a black dot). Significant connections are also highlighted in the image with white dots.
3. You can navigate the reference cells using the top two buttons shown in the GUI, or use the left and right arrows. Each connection is displayed twice: where the direction of the connection is shown in context with other connections for that reference cell. This provide insight into the CCG patterns, which is relevant for judging connections of hub-like cells and to detect patterns across CCGs. Press H in the GUI to learn all the shortcuts.
4. Following the principles stated above, you can now adjust the connections.
5. The primary action you can perform in the GUI is rejecting connections. The pipeline determines the connections that fulfills the statistical test, and the manual process consists of confirming and rejecting these connections.
6. Excitatory connections are shown at first, but you can select to show inhibitory connections from the top drop-down menu.
7. To reject a connection you simply click the CCG and it will turn red as shown in the screenshot above. You can also use the keypad to select/reject the CCGs.
8. You can switch between showing all significant connections, and the subset accepted during the manual curation process.
9. when you have completed the curation process, simply close the figure and you will be prompted to save the curation.
## Launch the Monosyn GUI directly from CellExplorer
1. Launch CellExplorer
cell_metrics = CellExplorer('metrics',cell_metrics);
2. From the top menu select MonoSyn -> Adjust monosynaptic connections. The monoSyn data will now be loaded and the MonoSyn interface will be displayed.
3. Adjust connections using the mouse or the numeric keypad.
4. Once done, simply close the MonoSyn figure.
5. You will be prompted to save the manual curation. If you confirm, it will save the sessionName.mono_res.cellinfo.mat to the data folder and update the connections in the cell_metrics file as well.
|
# Prove $\ln x \ge \frac{x-1}{x}$
Prove that for every $x>0$: $$\ln x \ge \frac{x-1}{x}$$
What I did:
$$f(x) = \ln x, \text{ } g(x) = \frac{x-1}{x}$$
$$f(1) = g(1) = 0$$
So it's enough to prove that $$f'(x) \ge g'(x)$$ I proved this for every $x>1$. How can I prove this for every $0<x\le1$ ?
• The only thing you still need to show is $f'(x) \leq g'(x)$ for $x\in (0,1]$ – Gregor de Cillia Feb 7 '16 at 17:02
• For $0<x<1$ substitute $y=x^{-1)$ – Urgje Feb 7 '16 at 17:02
You approach is good, but there is another which is worthy of consideration. Define $h : (0,\infty) \rightarrow \mathbb{R}$ by $$h(x) = x \text{ln}(x) - x + 1.$$ and show that this function is non-negative if and only if $f(x) \ge g(x)$. Then subject $h$ to a standard functional analysis in order to determine its range. Compute the limits as $x$ tends to $0$ and $\infty$. Moreover, the derivative of $h$ is easily seen to be $\text{ln}(x)$. You should find that $h$ is always non-negative.
Edit: word "positive" replaced with the correct word "non-negative".
• The limit of h as $x$ tends to 0 is $1$ and as x tends to $\infty$ is $\infty$ $h' = lnx$ indeed, But what can I conclude from all of this? – Sijaan Hallak Feb 7 '16 at 17:05
• You will find that $h$ has only a single minimum. Specifically, at $x=1$, where the value is 0. Therefore, $h \ge 0$.I will make a single edit to correct the word positive. – Carl Christian Feb 7 '16 at 17:11
I thought it might be instructive to present a way forward that does not rely on calculus, but rather elementary analysis only.
In THIS ANSWER and THIS ONE, I showed using only the limit definition of the exponential function and Bernoulli's Inequality that the exponential function satisfies the inequality
$$e^x\ge 1+x \tag 1$$
Setting $x=-z/(z+1)$ into $(1)$ and taking the logarithm of both sides reveals
$$\log(1+z)\ge \frac{z}{z+1} \tag 2$$
for $z>-1$. Finally, substituting $x=1+z$ in $(2)$ yields the coveted inequality
$$\frac{x-1}{x}\le \log x$$
for $x>0$.
HINT: set $$f(x)=\ln(x)-\frac{x-1}{x}$$ then we get $$\lim_{x \to 0+}f(x)=+\infty$$ further is $$f'(x)=\frac{x-1}{x^2}$$ and $$f''(x)=-\frac{x-2}{x^3}$$ can you proceed?
• Cant understand what's your point here.. can you explain more? – Sijaan Hallak Feb 7 '16 at 17:04
• show that $x=1$ gives the minimum of the given function – Dr. Sonnhard Graubner Feb 7 '16 at 17:07
|
## Chemistry and Chemical Reactivity (9th Edition)
a) Cr (group 6, period 4): $[Ar]\ 3d^5\ 4s^1$ Removing three electrons: $Cr^{3+}$: $[Ar]\ 3d^3$ b) Both are paramagnetic since they have unpaired electrons, but $Cr^{2+}$ has 4 unpaired, thus being more paramagnetic. c) Since radius increases from top to bottom on the periodic table, the radius of $Al^{3+}$ should be smaller than 64 pm since Al is in the third period.
|
zbMATH — the first resource for mathematics
Compute Distance To:
Author ID: yadav.manoj-kumar Published as: Kumar, M.; Kumar, Manoj; Yadav, Manoj K.; Yadav, Manoj Kumar Homepage: http://www.hri.res.in/~myadav/ External Links: MGP · Wikidata
Documents Indexed: 108 Publications since 1997, including 3 Books
all top 5
Co-Authors
9 single-authored 4 Vermani, Lekh Raj 2 Bardakov, Valeriĭ Georgievich 2 Hatui, Sumana 2 Jain, Vivek Kumar 2 Nath, Rajat Kanti 2 Passi, Inder Bir Singh 2 Rai, Pradeep Kumar 2 Singh, Mahender 1 Dade, Everett C. 1 Dolfi, Silvio 1 Kakkar, Vipul 1 Neshchadim, Mikhail Vladimirovich 1 Vesnin, Andrei Yu.
all top 5
Serials
4 International Journal of Algebra and Computation 3 Communications in Algebra 3 Israel Journal of Mathematics 3 Journal of Algebra 3 Proceedings of the Japan Academy. Series A 2 Journal of Group Theory 1 Journal of the London Mathematical Society. Second Series 1 Journal of Pure and Applied Algebra 1 Rendiconti del Circolo Matemàtico di Palermo. Serie II 1 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 1 Springer Monographs in Mathematics
Fields
24 Group theory and generalizations (20-XX) 1 Associative rings and algebras (16-XX)
Citations contained in zbMATH
63 Publications have been cited 345 times in 256 Documents Cited by Year
On $$\mathrm\text{ Ш}$$-rigidity of groups of order $$p^6$$. Zbl 1310.20025
2015
On automorphisms of some finite $$p$$-groups. Zbl 1148.20014
2008
A fourth-order finite-difference method based on non-uniform mesh for a class of singular two-point boundary value problems. Zbl 0990.65086
Aziz, Tariq; Kumar, Manoj
2001
Numerical method for solving fractional coupled Burgers equations. Zbl 1410.65413
Prakash, Amit; Kumar, Manoj; Sharma, Kapil K.
2015
On central automorphisms fixing the center element-wise. Zbl 1187.20013
2009
A collection of computational techniques for solving singular boundary-value problems. Zbl 1159.65076
Kumar, Manoj; Singh, Neelima
2009
Methods for solving singular boundary value problems using splines: a review. Zbl 1186.65104
Kumar, Manoj; Gupta, Yogesh
2010
An initial-value technique for singularly perturbed boundary value problems via cubic spline. Zbl 1135.65350
Kumar, Manoj; Singh, P.; Mishra, Hradyesh Kumar
2007
A recent survey on computational techniques for solving singularly perturbed boundary value problems. Zbl 1127.65053
Kumar, Manoj; Singh, Pitam; Kumar Mishra, Hradyesh
2007
A non-uniform mesh finite difference method and its convergence for a class of singular two-point boundary value problems. Zbl 1063.65068
Kumar, M.; Aziz, T.
2004
Multilayer perceptrons and radial basis function neural network methods for the solution of differential equations: a survey. Zbl 1236.65107
Kumar, Manoj; Yadav, Neha
2011
Methods for solving singular perturbation problems arising in science and engineering. Zbl 1225.65077
Kumar, Manoj; Parul
2011
A uniform mesh finite difference method for a class of singular two-point boundary value problems. Zbl 1103.65088
Kumar, Manoj; Aziz, Tariq
2006
“Hasse principle” for groups of order $$p^4$$. Zbl 1008.20014
Kumar, Manoj; Vermani, Lekh Raj
2001
“Hasse principle” for extraspecial $$p$$-groups. Zbl 0995.20034
Kumar, Manoj; Vermani, Lekh Raj
2000
An introduction to neural network methods for differential equations. Zbl 1328.92006
2015
Quintic nonpolynomial spline method for the solution of a second-order boundary-value problem with engineering applications. Zbl 1231.65120
Srivastava, Pankaj Kumar; Kumar, Manoj; Mohapatra, R. N.
2011
Automorphisms of Abelian group extensions. Zbl 1209.20021
Passi, I. B. S.; Singh, Mahender; Yadav, Manoj K.
2010
A boundary value approach for a class of linear singularly perturbed boundary value problems. Zbl 1159.65075
Kumar, Manoj; Mishra, Hradyesh Kumar; Singh, Peetam
2009
Class preserving automorphisms of finite $$p$$-groups. Zbl 1129.20017
2007
On automorphisms of some $$p$$-groups. Zbl 1062.20021
Kumar, Manoj; Vermani, Lekh Raj
2002
On finite $$p$$-groups whose central automorphisms are all class preserving. Zbl 1291.20023
2013
On finite $$p$$-groups whose automorphisms are all central. Zbl 1262.20030
Jain, Vivek K.; Yadav, Manoj K.
2012
Class preserving automorphisms of finite $$p$$-groups: a survey. Zbl 1231.20024
2011
Computational techniques for solving differential equations by cubic, quintic, and sextic spline. Zbl 1183.65097
Kumar, Manoj; Srivastava, Pankaj Kumar
2009
Estimation of parameters of generalized inverted exponential distribution for progressive type-II censored sample with binomial removals. Zbl 1307.62060
Singh, Sanjay Kumar; Singh, Umesh; Kumar, Manoj
2013
Numerical treatment of singularly perturbed two point boundary value problems using initial-value method. Zbl 1177.65109
Kumar, Manoj; Mishra, Hradyesh Kumar; Singh, P.
2009
Computational method for finding various solutions for a quasilinear elliptic equation of Kirchhoff type. Zbl 1171.74044
Kumar, Manoj; Kumar, Prashant
2009
Finite groups with many product conjugacy classes. Zbl 1139.20027
2006
Multi-time-step domain decomposition method with non-matching grids for parabolic problems. Zbl 1410.65364
Beneš, Michal; Nekvinda, Aleš; Yadav, Manoj Kumar
2015
Some results on relative commutativity degree. Zbl 1332.20027
Nath, Rajat Kanti; Yadav, Manoj Kumar
2015
Numerical algorithm based on quintic nonpolynomial spline for solving third-order boundary value problems associated with draining and coating flows. Zbl 1260.65068
Srivastava, Pankaj Kumar; Kumar, Manoj
2012
Non-inner automorphisms of order $$p$$ in finite $$p$$-groups of coclass 3. Zbl 1378.20030
Ruscitti, Marco; Legarreta, Leire; Yadav, Manoj K.
2017
Finite groups whose non-linear irreducible characters of the same degree are Galois conjugate. Zbl 1342.20005
Dolfi, Silvio; Yadav, Manoj K.
2016
On the probability distribution associated to commutator word map in finite groups. Zbl 1346.20083
Nath, Rajat Kanti; Yadav, Manoj Kumar
2015
On finite $$p$$-groups with Abelian automorphism group. Zbl 1288.20028
Jain, Vivek K.; Rai, Pradeep K.; Yadav, Manoj K.
2013
Class preserving automorphisms of unitriangular groups. Zbl 1256.20036
Bardakov, Valeriy; Vesnin, Andrei; Yadav, Manoj K.
2012
Radiation effect on unsteady MHD heat and mass transfer flow on a moving inclined porous heated plate in the presence of chemical reaction. Zbl 1305.76115
Uddin, Ziya; Kumar, Manoj
2010
Simulation of a nonlinear Steklov eigenvalue problem using finite-element approximation. Zbl 1197.65179
Kumar, Prashant; Kumar, Manoj
2010
On automorphisms of finite $$p$$-groups. Zbl 1131.20015
2007
Class of dual to ratio estimators for double sampling. Zbl 1115.62305
Kumar, Manoj; Bahl, Shashi
2006
A new iterative technique for a fractional model of nonlinear Zakharov-Kuznetsov equations via Sumudu transform. Zbl 1427.65322
Prakash, Amit; Kumar, Manoj; Baleanu, Dumitru
2018
Finite $$p$$-groups of conjugate type $$\{1,p^{3}\}$$. Zbl 1392.20008
Naik, Tushar Kanta; Yadav, Manoj K.
2018
Bayesian estimation for Poisson-exponential model under progressive type-II censoring data with binomial removal and its application to Ovarian cancer data. Zbl 1349.62487
Singh, Sanjay Kumar; Singh, Umesh; Kumar, Manoj
2016
A recent development of computer methods for solving singularly perturbed boundary value problems. Zbl 1242.34002
Kumar, Manoj; Parul
2011
A finite element approach for finding positive solutions of semilinear elliptic Dirichlet problems. Zbl 1172.65063
Kumar, Manoj; Kumar, Prashant
2009
Direct and inverse estimates for a new family of linear positive operators. Zbl 1111.41016
Gupta, M. K.; Gupta, Vijay; Kumar, Manoj
2007
Estimation of finite population mean using multi-auxiliary variables. Zbl 0952.62011
Bahl, S.; Kumar, Manoj
2000
Exact solutions of fractional partial differential equations by Sumudu transform iterative method. Zbl 1444.35149
Kumar, Manoj; Daftardar-Gejji, Varsha
2019
A new family of predictor-corrector methods for solving fractional differential equations. Zbl 1433.65133
Kumar, Manoj; Daftardar-Gejji, Varsha
2019
The Schur multiplier of groups of order $$\mathrm{p}^{5}$$. Zbl 07076608
Hatui, Sumana; Kakkar, Vipul; Yadav, Manoj K.
2019
A new trigonometrical algorithm for computing real root of non-linear transcendental equations. Zbl 1415.65116
Srivastav, Vivek Kumar; Thota, Srinivasarao; Kumar, Manoj
2019
MHD slips flow of a micro-polar fluid due to moving plate in porous medium with chemical reaction and thermal radiation: a Lie group analysis. Zbl 1401.76019
Singh, Khilap; Kumar, Manoj
2018
Group theory and computation. Invited papers based on the presentations at the workshop ‘Group theory and computational methods’, Bangalore, India, November 5–14, 2016. Zbl 1401.20004
Sastry, N. S. Narasimha (ed.); Yadav, Manoj Kumar (ed.)
2018
The Schur multiplier of central product of groups. Zbl 06867617
Hatui, Sumana; Vermani, L. R.; Yadav, Manoj K.
2018
Note on Caranti’s method of construction of Miller groups. Zbl 1425.20013
Kitture, Rahul Dattatraya; Yadav, Manoj K.
2018
Class-preserving automorphisms of finite $$p$$-groups. II. Zbl 1382.20027
2015
Buckling analysis of a beam-column using multilayer perceptron neural network technique. Zbl 1293.93084
Kumar, Manoj; Yadav, Neha
2013
Phase plane analysis and traveling wave solution of third order nonlinear singular problems arising in thin film evolution. Zbl 1268.76006
Kumar, Manoj; Singh, Neelima
2012
Computer simulation of third-order nonlinear singular problems arising in draining and coating flows. Zbl 1253.76097
Kumar, Manoj; Singh, Neelima
2012
Application of B-spline to numerical solution of a system of singularly perturbed problems. Zbl 1306.65235
Gupta, Yogesh; Srivastava, Pankaj Kumar; Kumar, Manoj
2011
Predictive performance of the improved estimators with exact restrictions in linear regression models. Zbl 1175.62022
Kumar, Manoj; Mishra, Nutan; Gupta, Rajinder
2008
Common coincidence points of $$R$$-weakly commuting fuzzy maps. Zbl 1169.47063
Saini, R. K.; Kumar, M.; Gupta, V.; Singh, S. B.
2008
Exact solutions of fractional partial differential equations by Sumudu transform iterative method. Zbl 1444.35149
Kumar, Manoj; Daftardar-Gejji, Varsha
2019
A new family of predictor-corrector methods for solving fractional differential equations. Zbl 1433.65133
Kumar, Manoj; Daftardar-Gejji, Varsha
2019
The Schur multiplier of groups of order $$\mathrm{p}^{5}$$. Zbl 07076608
Hatui, Sumana; Kakkar, Vipul; Yadav, Manoj K.
2019
A new trigonometrical algorithm for computing real root of non-linear transcendental equations. Zbl 1415.65116
Srivastav, Vivek Kumar; Thota, Srinivasarao; Kumar, Manoj
2019
A new iterative technique for a fractional model of nonlinear Zakharov-Kuznetsov equations via Sumudu transform. Zbl 1427.65322
Prakash, Amit; Kumar, Manoj; Baleanu, Dumitru
2018
Finite $$p$$-groups of conjugate type $$\{1,p^{3}\}$$. Zbl 1392.20008
Naik, Tushar Kanta; Yadav, Manoj K.
2018
MHD slips flow of a micro-polar fluid due to moving plate in porous medium with chemical reaction and thermal radiation: a Lie group analysis. Zbl 1401.76019
Singh, Khilap; Kumar, Manoj
2018
Group theory and computation. Invited papers based on the presentations at the workshop ‘Group theory and computational methods’, Bangalore, India, November 5–14, 2016. Zbl 1401.20004
Sastry, N. S. Narasimha (ed.); Yadav, Manoj Kumar (ed.)
2018
The Schur multiplier of central product of groups. Zbl 06867617
Hatui, Sumana; Vermani, L. R.; Yadav, Manoj K.
2018
Note on Caranti’s method of construction of Miller groups. Zbl 1425.20013
Kitture, Rahul Dattatraya; Yadav, Manoj K.
2018
Non-inner automorphisms of order $$p$$ in finite $$p$$-groups of coclass 3. Zbl 1378.20030
Ruscitti, Marco; Legarreta, Leire; Yadav, Manoj K.
2017
Finite groups whose non-linear irreducible characters of the same degree are Galois conjugate. Zbl 1342.20005
Dolfi, Silvio; Yadav, Manoj K.
2016
Bayesian estimation for Poisson-exponential model under progressive type-II censoring data with binomial removal and its application to Ovarian cancer data. Zbl 1349.62487
Singh, Sanjay Kumar; Singh, Umesh; Kumar, Manoj
2016
On $$\mathrm\text{ Ш}$$-rigidity of groups of order $$p^6$$. Zbl 1310.20025
2015
Numerical method for solving fractional coupled Burgers equations. Zbl 1410.65413
Prakash, Amit; Kumar, Manoj; Sharma, Kapil K.
2015
An introduction to neural network methods for differential equations. Zbl 1328.92006
2015
Multi-time-step domain decomposition method with non-matching grids for parabolic problems. Zbl 1410.65364
Beneš, Michal; Nekvinda, Aleš; Yadav, Manoj Kumar
2015
Some results on relative commutativity degree. Zbl 1332.20027
Nath, Rajat Kanti; Yadav, Manoj Kumar
2015
On the probability distribution associated to commutator word map in finite groups. Zbl 1346.20083
Nath, Rajat Kanti; Yadav, Manoj Kumar
2015
Class-preserving automorphisms of finite $$p$$-groups. II. Zbl 1382.20027
2015
On finite $$p$$-groups whose central automorphisms are all class preserving. Zbl 1291.20023
2013
Estimation of parameters of generalized inverted exponential distribution for progressive type-II censored sample with binomial removals. Zbl 1307.62060
Singh, Sanjay Kumar; Singh, Umesh; Kumar, Manoj
2013
On finite $$p$$-groups with Abelian automorphism group. Zbl 1288.20028
Jain, Vivek K.; Rai, Pradeep K.; Yadav, Manoj K.
2013
Buckling analysis of a beam-column using multilayer perceptron neural network technique. Zbl 1293.93084
Kumar, Manoj; Yadav, Neha
2013
On finite $$p$$-groups whose automorphisms are all central. Zbl 1262.20030
Jain, Vivek K.; Yadav, Manoj K.
2012
Numerical algorithm based on quintic nonpolynomial spline for solving third-order boundary value problems associated with draining and coating flows. Zbl 1260.65068
Srivastava, Pankaj Kumar; Kumar, Manoj
2012
Class preserving automorphisms of unitriangular groups. Zbl 1256.20036
Bardakov, Valeriy; Vesnin, Andrei; Yadav, Manoj K.
2012
Phase plane analysis and traveling wave solution of third order nonlinear singular problems arising in thin film evolution. Zbl 1268.76006
Kumar, Manoj; Singh, Neelima
2012
Computer simulation of third-order nonlinear singular problems arising in draining and coating flows. Zbl 1253.76097
Kumar, Manoj; Singh, Neelima
2012
Multilayer perceptrons and radial basis function neural network methods for the solution of differential equations: a survey. Zbl 1236.65107
Kumar, Manoj; Yadav, Neha
2011
Methods for solving singular perturbation problems arising in science and engineering. Zbl 1225.65077
Kumar, Manoj; Parul
2011
Quintic nonpolynomial spline method for the solution of a second-order boundary-value problem with engineering applications. Zbl 1231.65120
Srivastava, Pankaj Kumar; Kumar, Manoj; Mohapatra, R. N.
2011
Class preserving automorphisms of finite $$p$$-groups: a survey. Zbl 1231.20024
2011
A recent development of computer methods for solving singularly perturbed boundary value problems. Zbl 1242.34002
Kumar, Manoj; Parul
2011
Application of B-spline to numerical solution of a system of singularly perturbed problems. Zbl 1306.65235
Gupta, Yogesh; Srivastava, Pankaj Kumar; Kumar, Manoj
2011
Methods for solving singular boundary value problems using splines: a review. Zbl 1186.65104
Kumar, Manoj; Gupta, Yogesh
2010
Automorphisms of Abelian group extensions. Zbl 1209.20021
Passi, I. B. S.; Singh, Mahender; Yadav, Manoj K.
2010
Radiation effect on unsteady MHD heat and mass transfer flow on a moving inclined porous heated plate in the presence of chemical reaction. Zbl 1305.76115
Uddin, Ziya; Kumar, Manoj
2010
Simulation of a nonlinear Steklov eigenvalue problem using finite-element approximation. Zbl 1197.65179
Kumar, Prashant; Kumar, Manoj
2010
On central automorphisms fixing the center element-wise. Zbl 1187.20013
2009
A collection of computational techniques for solving singular boundary-value problems. Zbl 1159.65076
Kumar, Manoj; Singh, Neelima
2009
A boundary value approach for a class of linear singularly perturbed boundary value problems. Zbl 1159.65075
Kumar, Manoj; Mishra, Hradyesh Kumar; Singh, Peetam
2009
Computational techniques for solving differential equations by cubic, quintic, and sextic spline. Zbl 1183.65097
Kumar, Manoj; Srivastava, Pankaj Kumar
2009
Numerical treatment of singularly perturbed two point boundary value problems using initial-value method. Zbl 1177.65109
Kumar, Manoj; Mishra, Hradyesh Kumar; Singh, P.
2009
Computational method for finding various solutions for a quasilinear elliptic equation of Kirchhoff type. Zbl 1171.74044
Kumar, Manoj; Kumar, Prashant
2009
A finite element approach for finding positive solutions of semilinear elliptic Dirichlet problems. Zbl 1172.65063
Kumar, Manoj; Kumar, Prashant
2009
On automorphisms of some finite $$p$$-groups. Zbl 1148.20014
2008
Predictive performance of the improved estimators with exact restrictions in linear regression models. Zbl 1175.62022
Kumar, Manoj; Mishra, Nutan; Gupta, Rajinder
2008
Common coincidence points of $$R$$-weakly commuting fuzzy maps. Zbl 1169.47063
Saini, R. K.; Kumar, M.; Gupta, V.; Singh, S. B.
2008
An initial-value technique for singularly perturbed boundary value problems via cubic spline. Zbl 1135.65350
Kumar, Manoj; Singh, P.; Mishra, Hradyesh Kumar
2007
A recent survey on computational techniques for solving singularly perturbed boundary value problems. Zbl 1127.65053
Kumar, Manoj; Singh, Pitam; Kumar Mishra, Hradyesh
2007
Class preserving automorphisms of finite $$p$$-groups. Zbl 1129.20017
2007
On automorphisms of finite $$p$$-groups. Zbl 1131.20015
2007
Direct and inverse estimates for a new family of linear positive operators. Zbl 1111.41016
Gupta, M. K.; Gupta, Vijay; Kumar, Manoj
2007
A uniform mesh finite difference method for a class of singular two-point boundary value problems. Zbl 1103.65088
Kumar, Manoj; Aziz, Tariq
2006
Finite groups with many product conjugacy classes. Zbl 1139.20027
2006
Class of dual to ratio estimators for double sampling. Zbl 1115.62305
Kumar, Manoj; Bahl, Shashi
2006
A non-uniform mesh finite difference method and its convergence for a class of singular two-point boundary value problems. Zbl 1063.65068
Kumar, M.; Aziz, T.
2004
On automorphisms of some $$p$$-groups. Zbl 1062.20021
Kumar, Manoj; Vermani, Lekh Raj
2002
A fourth-order finite-difference method based on non-uniform mesh for a class of singular two-point boundary value problems. Zbl 0990.65086
Aziz, Tariq; Kumar, Manoj
2001
“Hasse principle” for groups of order $$p^4$$. Zbl 1008.20014
Kumar, Manoj; Vermani, Lekh Raj
2001
“Hasse principle” for extraspecial $$p$$-groups. Zbl 0995.20034
Kumar, Manoj; Vermani, Lekh Raj
2000
Estimation of finite population mean using multi-auxiliary variables. Zbl 0952.62011
Bahl, S.; Kumar, Manoj
2000
all top 5
all top 5
Cited in 110 Serials
16 Applied Mathematics and Computation 13 Analysis Mathematica 11 Communications in Algebra 11 Computers & Mathematics with Applications 10 Journal of Computational and Applied Mathematics 9 Journal of Algebra and its Applications 8 International Journal of Computer Mathematics 7 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 5 Israel Journal of Mathematics 5 Journal of Algebra 5 Computational Mathematics and Modeling 5 Advances in Difference Equations 5 International Journal of Differential Equations 5 International Journal of Applied and Computational Mathematics 4 Abstract and Applied Analysis 4 Journal of Applied Mathematics and Computing 3 Indian Journal of Pure & Applied Mathematics 3 Monatshefte für Mathematik 3 Semigroup Forum 3 Numerical Algorithms 3 Mathematical Problems in Engineering 3 Cogent Mathematics 2 Journal of Mathematical Physics 2 Russian Mathematical Surveys 2 Archiv der Mathematik 2 Proceedings of the Japan Academy. Series A 2 Applied Numerical Mathematics 2 Applied Mathematics Letters 2 International Journal of Algebra and Computation 2 Advances in Engineering Software 2 Journal of the Egyptian Mathematical Society 2 Journal of Difference Equations and Applications 2 Proceedings of the National Academy of Sciences, India. Section A. Physical Sciences 2 Mediterranean Journal of Mathematics 2 International Journal for Computational Methods in Engineering Science and Mechanics 2 Asian-European Journal of Mathematics 2 Vestnik Yuzhno-Ural’skogo Gosudarstvennogo Universiteta. Seriya Matematicheskoe Modelirovanie i Programmirovanie 2 International Journal of Group Theory 2 S$$\vec{\text{e}}$$MA Journal 2 Arabian Journal of Mathematics 2 Communications in Mathematics and Statistics 2 Journal of Computational and Engineering Mathematics 2 Vestnik Yuzhno-Ural’skogo Gosudarstvennogo Universiteta. Seriya Matematika. Mekhanika. Fizika 1 Bulletin of the Australian Mathematical Society 1 Computers and Fluids 1 Computer Physics Communications 1 International Journal for Numerical Methods in Fluids 1 Journal of the Franklin Institute 1 Lithuanian Mathematical Journal 1 Chaos, Solitons and Fractals 1 Advances in Mathematics 1 Calcolo 1 Computing 1 International Journal of Mathematics and Mathematical Sciences 1 Kyungpook Mathematical Journal 1 Proceedings of the American Mathematical Society 1 Quaestiones Mathematicae 1 Rendiconti del Circolo Matemàtico di Palermo. Serie II 1 Ricerche di Matematica 1 Note di Matematica 1 Applied Mathematics and Mechanics. (English Edition) 1 Chinese Annals of Mathematics. Series B 1 Acta Mathematica Hungarica 1 Bulletin of the Iranian Mathematical Society 1 Statistics 1 Journal of Symbolic Computation 1 Sequential Analysis 1 Mathematical and Computer Modelling 1 Journal of Scientific Computing 1 Computational Mathematics and Mathematical Physics 1 Automation and Remote Control 1 Journal of Statistical Computation and Simulation 1 Expositiones Mathematicae 1 Indagationes Mathematicae. New Series 1 Bulletin of the Belgian Mathematical Society - Simon Stevin 1 Computational and Applied Mathematics 1 International Journal of Numerical Methods for Heat & Fluid Flow 1 Turkish Journal of Mathematics 1 ETNA. Electronic Transactions on Numerical Analysis 1 Journal of Mathematical Chemistry 1 European Mathematical Society Newsletter 1 Transformation Groups 1 Differential Equations and Dynamical Systems 1 Nonlinear Dynamics 1 Journal of Inequalities and Applications 1 Journal of Group Theory 1 Lobachevskii Journal of Mathematics 1 International Journal of Nonlinear Sciences and Numerical Simulation 1 Journal of the Australian Mathematical Society 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 1 Journal of Industrial and Management Optimization 1 Statistical Methodology 1 Thailand Statistician 1 Proyecciones 1 Applications and Applied Mathematics 1 Optimization Letters 1 Mathematical Modelling of Natural Phenomena 1 Advances in High Energy Physics 1 Discrete and Continuous Dynamical Systems. Series S 1 Acta Universitatis Sapientiae. Mathematica ...and 10 more Serials
all top 5
Cited in 40 Fields
112 Numerical analysis (65-XX) 66 Group theory and generalizations (20-XX) 62 Ordinary differential equations (34-XX) 41 Partial differential equations (35-XX) 15 Fluid mechanics (76-XX) 13 Biology and other natural sciences (92-XX) 10 Statistics (62-XX) 9 Harmonic analysis on Euclidean spaces (42-XX) 8 Nonassociative rings and algebras (17-XX) 8 Approximations and expansions (41-XX) 6 Associative rings and algebras (16-XX) 6 Functional analysis (46-XX) 6 Computer science (68-XX) 5 Real functions (26-XX) 5 Systems theory; control (93-XX) 4 History and biography (01-XX) 4 Algebraic geometry (14-XX) 4 Operations research, mathematical programming (90-XX) 3 Number theory (11-XX) 3 Commutative algebra (13-XX) 3 Category theory; homological algebra (18-XX) 3 Integral equations (45-XX) 2 Sequences, series, summability (40-XX) 2 Operator theory (47-XX) 2 Classical thermodynamics, heat transfer (80-XX) 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 Mathematical logic and foundations (03-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Field theory and polynomials (12-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Functions of a complex variable (30-XX) 1 Special functions (33-XX) 1 Integral transforms, operational calculus (44-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 General topology (54-XX) 1 Probability theory and stochastic processes (60-XX) 1 Mechanics of deformable solids (74-XX) 1 Optics, electromagnetic theory (78-XX) 1 Quantum theory (81-XX) 1 Information and communication theory, circuits (94-XX)
Wikidata Timeline
The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
|
# Suppose R is a commutative ring, M is the maximum ideal of R, and R/M is not a field. Prove: (R/M)² = 0.
It is a well known theorem that if R is a commutative ring with identity not equal to zero then R/M is a field. So R in this question doesn't have a non-zero identity. For example, if R = 2Z, which is a ring with no identity, and whose maximal ideal is 4Z, then 2Z/4Z = {4Z, 2 + 4Z}. I'm not sure what exactly does the notation (R/M)² mean or how it is defined. If (R/M)² means the set {ab∈R/M | a,b∈R/M}, then (2Z/4Z)² = {4Z} (= 0?) Can anyone help to prove?
• That's a product of ideals. See en.wikipedia.org/wiki/Ideal_(ring_theory)#Ideal_operations for a clear description of the sum and product of ideals. Nov 25 at 9:09
• In your example, viewing $\{4\mathbb{Z},2+4\mathbb{Z}\}$ as a ring without identity, then $4\mathbb{Z}$ is the $0$ element. Thus, $\{4\mathbb{Z}\}=\{0\}$, as required. Nov 25 at 9:37
• Eric and Sam, thank you very much for your clarification. May I know how to prove the question? Can you suggest any reading or relevant part of any textbook? Nov 25 at 10:20
• math.meta.stackexchange.com/questions/5020/… Nov 25 at 22:21
The question can be rephrased as:
Suppose $$R$$ is a nonzero rng with no nontrivial ideals, and $$R$$ has no multiplicative identity. Then $$R^2=\{0\}$$.
The $$R^2$$ should be interpreted as the ideal product, that is, the set $$\{\sum r_is_i\mid r_i, s_i\in R\}$$ with finite sums only, of course.
Since this is always an ideal, we know either $$R^2=\{0\}$$ or $$R^2=R$$. If $$R^2=R$$, we can show $$R$$ has an identity, and then $$R$$ must be a field.
Suppose $$R^2=R$$. Then there must exist an $$x$$ such that $$xR\neq \{0\}$$, and in fact, $$xR=R$$ again since there are no nontrivial ideals. In particular $$xr=x$$ for some $$r\in R$$.
Here's an important thing to notice: the annihilator $$ann(z):=\{r\in R\mid zr=0\}$$ is also an ideal of $$R$$. It can again be only one of two things, $$R$$ or $$\{0\}$$.
At this point, we have above that $$x\neq 0$$, and $$xr\neq 0$$, so $$ann(x)=\{0\}$$. Also, $$ann(r)=\{0\}$$ since $$xr\neq 0$$.
Because $$xr^2-xr=xr-x=0$$, we can see $$x(r^2-r)=0$$. But as $$ann(x)=\{0\}$$, it must be that $$r^2=r$$.
Then for any other $$y\in R$$, $$yr-y\neq 0$$ would imply also that $$yrr-yr\neq 0$$, but of course that isn't the case because $$y(r^2-r)=y0=0$$. This means then that $$yr-y=0$$ for all $$y$$, and that just says that $$r$$ is the identity of $$R$$.
So going back to your original statement: if $$R$$ is a nonzero ring with no nontrivial ideals, and $$R$$ is not a field, then it must be that $$R^2=\{0\}$$.
|
2014
11-30
A Chocolate Manufacturer’s Problem
My mama always said:Life is like a box of chocolate, you never know what you are going to get.
――From Forrest Gump
ACM is a chocolate manufacturer. Inspired by the famous quote above, recently, they are designing a new brand of chocolate named Life. Each piece of chocolate is a random simple polygon. After molding and shaping, every piece is put in a small box. Until you open the box, you will not know what you will get: a huge piece or only a tiny fraction. It is really like life and that is the reason it is named for.
However, here comes a problem. The manufacturer has to print the logo on each piece of chocolate. The logo is a circle with ‘ACM’ inside. Here is an example below. It is fortunate that the logo can be printed on the chocolate.
Now the manufacturer is asking you for help. Given the information about the chocolate shape and the radius of the logo, you need to judge whether or not there is enough space to print the logo.
The input contains no more than 20 cases, and each case is formatted as follows.
n
x1 y1
x2 y2
xn yn
r
The first line is the number of vertices of the polygon, n, which satisfies 3 ≤ n ≤ 50. Following n lines are the x- and y-coordinates of the n vertices. They are float numbers and satisfy 0 ≤ xi ≤ 1000 and 0 ≤ yi ≤ 1000 (i = 1, …, n). Line segments (xi, yi)�(xi+ 1 , yi + 1) (i = 1, …, n) and the line segment (xn, yn)�(x1, y1) form the border of the polygon. After the description of the polygon, a float number r follows, meaning the radius of the logo( r <= 1000).
The input ends by a single zero.
You may assume that the polygon is simple, that is, its border never crosses or touches itself.
The input contains no more than 20 cases, and each case is formatted as follows.
n
x1 y1
x2 y2
xn yn
r
The first line is the number of vertices of the polygon, n, which satisfies 3 ≤ n ≤ 50. Following n lines are the x- and y-coordinates of the n vertices. They are float numbers and satisfy 0 ≤ xi ≤ 1000 and 0 ≤ yi ≤ 1000 (i = 1, …, n). Line segments (xi, yi)�(xi+ 1 , yi + 1) (i = 1, …, n) and the line segment (xn, yn)�(x1, y1) form the border of the polygon. After the description of the polygon, a float number r follows, meaning the radius of the logo( r <= 1000).
The input ends by a single zero.
You may assume that the polygon is simple, that is, its border never crosses or touches itself.
3
0 0
0 1
1 0
0.28
3
0 0
0 1
1 0
0.3
0
Yes
No
Hint
Here is a picture illustrated the first case. It may be helpful for you to understand the problem.
/*
* =====================================================================================
*
* Filename: hdu3644.cpp
*
* Description: Calculate Geometry -- Simulated annealing
*
* Version: 1.0
* Created: 2012年01月21日 10时41分15秒
* Revision: none
* Compiler: gcc
*
* Author: SphinX (Yuxiang Ye), [email protected]
* Organization:
*
* =====================================================================================
*/
/*
* 10年杭州网络赛的一道题,碉堡了~貌似是我A的最艰难的题了,虽然一开始
*就看出是模拟退火,但无奈计算几何基础太差,敲的过程中各种错误,然后又重新学
*了遍计算几何基本功,但还是各种WA跟TLE,最后发现是精度问题。。。
* 其中辛苦一言难尽。。。
*/
#include <stdlib.h>
#include <iostream>
#include <algorithm>
#include <cstdio>
#include <cmath>
using namespace std;
typedef double LF;
const LF eps = 1e-3;
const int MAXN = 104;
struct Point
{
LF x, y;
Point(){}
Point(const LF _x, const LF _y)
{
x = _x;
y = _y;
}
}; /* ---------- end of struct Point ---------- */
typedef struct Point Point;
struct Segment
{
Point ps, pe;
Segment(){}
Segment(const Point _ps, const Point _pe)
{
ps = _ps;
pe = _pe;
}
}; /* ---------- end of struct Segment ---------- */
typedef struct Segment Segment;
int n;
Point pnt[MAXN], tmp[MAXN];
Segment seg[MAXN];
/*
* === FUNCTION ======================================================================
* Name: dblcmp
* Description:
* =====================================================================================
*/
int
dblcmp ( const LF & x )
{
return fabs(x) < eps ? 0 : (x < 0.0 ? -1 : 1);
} /* ----- end of function dblcmp ----- */
/*
* === FUNCTION ======================================================================
* Name: xmult
* Description:
* =====================================================================================
*/
LF
xmult ( const Point & p1, const Point & p2, const Point & p0 )
{
LF vx[2] = {p1.x - p0.x, p2.x - p0.x};
LF vy[2] = {p1.y - p0.y, p2.y - p0.y};
return vx[0] * vy[1] - vx[1] * vy[0];
} /* ----- end of function xmult ----- */
/*
* === FUNCTION ======================================================================
* Name: pnt2pnt_dist
* Description:
* =====================================================================================
*/
LF
pnt2pnt_dist ( const Point & pa, const Point &pb )
{
return sqrt((pa.x - pb.x) * (pa.x - pb.x)
+ (pa.y - pb.y) * (pa.y - pb.y));
} /* ----- end of function pnt2pnt_dist ----- */
/*
* === FUNCTION ======================================================================
* Name: pnt2seg_dist
* Description:
* =====================================================================================
*/
LF
pnt2seg_dist ( const Point & pt, const Segment & seg )
{
LF a = pnt2pnt_dist(pt, seg.pe);
if( a < eps )
{
return a;
}
LF b = pnt2pnt_dist(pt, seg.ps);
if( b < eps )
{
return b;
}
LF c = pnt2pnt_dist(seg.pe, seg.ps);
if( c < eps )
{
return a;
}
if( a * a >= b * b + c * c )
{
return b;
}
if( b * b >= a * a + c * c )
{
return a;
}
LF l = (a + b + c) * 0.5;
LF s = sqrt(l * (l - a) * (l - b) * (l - c));
return 2.0 * s / c;
} /* ----- end of function pnt2seg_dist ----- */
/*
* === FUNCTION ======================================================================
* Name: pnt2poly_dist
* Description:
* =====================================================================================
*/
LF
pnt2poly_dist ( const Point & pt )
{
LF mn = pnt2seg_dist(pt, seg[0]);
for( int i = 1; i < n ; ++i )
{
mn = min(mn, pnt2seg_dist(pt, seg[i]));
}
return mn;
} /* ----- end of function pnt2poly_dist ----- */
/*
* === FUNCTION ======================================================================
* Name: seg_cross_judge
* Description:
* =====================================================================================
*/
bool
seg_cross_judge ( const Segment & u, const Segment & v )
{
return max(u.pe.x, u.ps.x) >= min(v.pe.x, v.ps.x)
&& max(v.pe.x, v.ps.x) >= min(u.pe.x, u.ps.x)
&& max(u.pe.y, u.ps.y) >= min(v.pe.y, v.ps.y)
&& max(v.pe.y, v.ps.y) >= min(u.pe.y, u.ps.y)
&& -1 == dblcmp(xmult(v.ps, u.pe, u.ps)) * dblcmp(xmult(v.pe, u.pe, u.ps))
&& -1 == dblcmp(xmult(u.ps, v.pe, v.ps)) * dblcmp(xmult(u.pe, v.pe, v.ps));
} /* ----- end of function seg_cross_judge ----- */
/*
* === FUNCTION ======================================================================
* Name: pnt_inside_judge
* Description:
* =====================================================================================
*/
bool
pnt_inside_judge ( const Point & pt )
{
LF ang = rand() + 0.0;
Point ps = Point(pt.x + 10000.0 * cos(ang), pt.y + 10000.0 * sin(ang));
Segment sg = Segment(pt, ps);
int cnt = 0;
for( int i = 0; i < n ; ++i )
{
if (seg_cross_judge(sg, seg[i]))
{
++cnt;
}
}
return cnt % 2;
} /* ----- end of function pnt_inside_judge ----- */
/*
* === FUNCTION ======================================================================
* Name: exist_judge
* Description: 模拟退火
* =====================================================================================
*/
bool
exsist_judge ()
{
LF maxr = 0.0;
for( int i = 0; i < n ; ++i )
{
tmp[i].x = (pnt[i].x + pnt[i + 1].x) * 0.5;
tmp[i].y = (pnt[i].y + pnt[i + 1].y) * 0.5;
min_dist[i] = pnt2poly_dist(tmp[i]);
}
for (int i = 0; i < n - 1; ++i)
{
for (int j = i + 1; j < n; ++j)
{
maxr = max(maxr, pnt2pnt_dist(pnt[i], pnt[j]));
}
}
while (maxr > 1e-3)
{
for( int k = 0; k < 5 ; ++k )
{
for (int i = 0; i < n; ++i)
{
LF ang = rand() + 0.0;
Point pk = Point(tmp[i].x + maxr * cos(ang), tmp[i].y + maxr * sin(ang));
if (!pnt_inside_judge(pk))
{
continue ;
}
LF buf = pnt2poly_dist(pk);
if (buf + eps > radius)
{
return true;
}
if (buf > min_dist[i])
{
min_dist[i] = buf;
tmp[i] = pk;
}
}
}
maxr *= 0.9;
}
return false;
} /* ----- end of function exist_judge ----- */
/*
* === FUNCTION ======================================================================
* Name: main
* Description: VIM自动生成的代码,比较吓人~
* =====================================================================================
*/
int
main ( int argc, char *argv[] )
{
srand(static_cast<unsigned int>(time(NULL)));
while( scanf ( "%d", &n ), n )
{
for( int i = 0; i < n ; ++i )
{
scanf ( "%lf %lf", &pnt[i].x, &pnt[i].y );
}
pnt[n] = pnt[0];
for( int i = 0; i < n ; ++i )
{
seg[i] = Segment(pnt[i], pnt[i + 1]);
}
cout << (exsist_judge() ? "Yes" : "No") << endl;
}
return EXIT_SUCCESS;
} /* ---------- end of function main ---------- */
1. 第2题,TCP不支持多播,多播和广播仅应用于UDP。所以B选项是不对的。第2题,TCP不支持多播,多播和广播仅应用于UDP。所以B选项是不对的。
2. 有限自动机在ACM中是必须掌握的算法,实际上在面试当中几乎不可能让你单独的去实现这个算法,如果有题目要用到有限自动机来降低时间复杂度,那么这种面试题应该属于很难的级别了。
|
NZ Level 8 (NZC) Level 3 (NCEA) [In development]
Complex Numbers and 4 Operations
Lesson
In summary, here is how you undertake the four operations with complex numbers. If you would like to go back to where we covered them in more depth click on the links below.
• To add an subtract complex numbers, add/subtract corresponding real and imaginary components.
• Follow all the other rules of algebraic convention
Multiplication
To multiply complex numbers, we follow the rules of algebraic conventions as well as remembering that $i\times i=-1$i×i=1
Division
Division of complex numbers is carried out by multiplying by a fraction constructed using the conjugate of the denominator. This removes the complex component from the denominator.
#### Worked Examples
##### QUESTION 1
Evaluate $\left(3+6i\right)+\left(7+3i\right)$(3+6i)+(7+3i).
##### QUESTION 2
Simplify $-6\left(3-5i\right)$6(35i).
##### QUESTION 3
Simplify $-2i\left(4-3i\right)^2$2i(43i)2, leaving your answer in terms of $i$i.
##### QUESTION 4
Find the value of $\frac{4+6i}{1+i}$4+6i1+i.
### Outcomes
#### M8-9
Manipulate complex numbers and present them graphically
#### 91577
Apply the algebra of complex numbers in solving problems
|
# Packing the MagAO-X Table at LCO for Shipment¶
This procedure describes how to pack MagAO-X optical table for shipment.
Estimate Time to Complete: 2 hrs
This document can be dowloaded as pdf:
MagAO-X Table Packing at LCO
## Initial Conditions¶
• [ ] The optical table is in the cleanroom, on the handling cart, fully disconnected from the control electronics.
## Preparations¶
• [ ] Secure all optics for transport.
• [ ] Check that all components on table are securely fastened.
• [ ] Install dessicant.
• [ ] Shrink-wrap the instrument.
• [ ] Install the reflective blanket for solar shielding.
## Prepare the Box.¶
• [ ] Move the empty box and shipping frame into the unpacking room with the forklift, placing it on 4 furniture dollies.
The instrument box is brought to the unpacking area with the forklift.
The instrument box being set down on dollies with the fork lift. Note the placement and orientation of the dollies.
• [ ] Move the box towards the inside garage door, and stage 4 more dollies to accept the lid.
Box in position to have the lid removed.
• [ ] Remove the front panel from the box, place out of the way (e.g. outside).
• [ ] Remove all bolts holding the lid to the pallet
• [ ] Place the straight extension on the lifting fixture, and place it in the box-lid position.
• [ ] Attach the lifting fixture to the lid.
• [ ] Lift the lid with the crane, and set it down on the dollies.
Box lid off, on dollies.
• [ ] Move the pallet into the cleanroom to make room to maneuver the lid.
• [ ] Then move the lid out of the way, e.g. outside on the lift.
## Installing The Instrument On The Frame¶
• [ ] Move the pallet and frame towards the front of the unpacking room.
• [ ] Bring the instrument and cart out of the cleanroom.
NOTE: position the cart so that the taller heavier side of the instrument will set down on the side of the frame with the most wire-rope isolators. m
• [ ] Install the scale between the load spreader and the crane, and place the lifting fixture in the instrument+cart position.
• [ ] Install the triangle stabilizing rope ratchets, leaving them loose.
Preparation of the lifting fixture.
• [ ] With the crane, carefully position the load spreader over the table.
NOTE: be sure to guide both ends of the load spreader so it does not contact the instrumetn
• [ ] Attach the load spreader to the cart. Two shackles are used to extend the length. The hooks should be placed opening up.
Use two shackles for correct length.
Hooks must open up on the cart to get the correct length.
• [ ] Lift the instrument+cart, which weigh 1920 lbs, until all 4 wheels are off the ground. If it is out of balance, it will be necessary to manually correct.
Lifting the instrument on its cart.
• [ ] Tighten the triangle stabilizing rope ratchets.
• [ ] With a person on each end stabilizing using the cart handles, lift the instrument to sufficient height to clear the shipping frame.
• [ ] Carefully roll the pallet and frame under the instrument
• [ ] Lower the instrument slowly to just touch the frame, but do not unload the crane.
The instrument on the frame.
• [ ] While the instrumetn is still supported by the crane, start bolts at each corner to guid the instrument down.
• [ ] Lower the instrument until half the weight is off the crane.
• [ ] Start all bolts, including installation of the Emerson Clamp base plates.
• [ ] Fully lower the instrument, such that the crane is still supporting the cart weight of 320 lbs
• [ ] Tighten all bolts holding the instrument to the shipping frame.
Tightening the bolts.
• [ ] Remove the 8 bolts holding the cart to the table.
• [ ] Lower the cart so that it rests on the pallet.
The cart lowered onto the pallet.
• [ ] Disassemble the cart, moving the pieces to storage area.
• [ ] Install the Emerson Clamps.
• [ ] Arm all drop-n-tells, and install the data loggers.
## Install the Lid and Door¶
• [ ] Move the pallet and instrument on the dollies back into the cleanroom to make space for the lid
• [ ] Bring the lid back into the unpacking area and position it to be lifted on. The open side goes towards the MagAO-X label on the instrument.
• [ ] Put the load spreader back in the position to balance the lid, and attach it with the crane to the lid.
• [ ] Lift the lid, and roll the instrument under the box.
• [ ] Set the lid down on the pallet
MagAO-X on the shipping frame inside the box.
• [ ] Install the bolts along the bottom of the lid, securing it to the pallet.
• [ ] Bring the front door panel back inside, and lift it into position.
• [ ] Bolt the lid on.
• [ ] Remove the liffting eyes from the box lid and stow them on the lower left insspection panel.
|
# Solved – Why does \$l_2\$ norm regularization not have a square root
Specifically talking about Ridge Regression's cost function since Ridge Regression is based off of the $$l_2$$ norm. We should expect the cost function to be:
$$J(theta)=MSE(theta) + alphasqrt{sum_{i=1}^{n}theta_i^2}$$
Actual:
$$J(theta)=MSE(theta) + alphafrac{1}{2}sum_{i=1}^{n}theta_i^2$$
Contents
Also, minimizing $$MSE$$ subject to $$|theta|_2le c$$ is equivalent to minimizing $$MSE$$ subject to $$|theta|_2^2le c^2$$.
|
#### DominiConnor
At the risk of saying "I told you so", some time ago I did point out that as cash cows MFE programs would come under pressure to increase their size to help the general budget of their universities.
Doubling student numbers without compromising quality is not that hard to do, many resources like lecture rooms and lecturers are rarely anywhere near their capacity. Some extra staff and some library resource and you are mostly there.
However...
The point of the exercise is to make more money and the temptation and pressure will be to be "more efficient", ie keep the level of resources at much the same level.
The question is whether anyone reading this will care ?
Again something I've seen is how people often choose their MFE on the branding so quality of teaching etc don't matter so much.
Would you rather a badly taught MFE from MIT or a better taught one from Fordham ?
Also it will do unintuitive things to their applicant/acceptance ratio.
At first glance one might think that is must halve it, and since a depression % of MFE applicants see that as a mark of quality, may hurt the program.
But...
One if the many reasons the ratio is a bogus indicator is that people self-select; they think to themselves that they can't get into MIT because they're not in the top X%. But if X doubles, then they may well think they can do it and so will lots of others, it is not a linear relationship, doubling the number of places may more than double the number of applicants, possibly, all I really know is that it's non-linear, no one will know the reality yet.
Replies
19
Views
3K
Replies
4
Views
8K
Replies
19
Views
10K
Replies
1
Views
649
Replies
9
Views
5K
|
# PIN diode
Type Layers of a PIN diode Semiconductor 1950 The diode may be denoted by "PIN" letters on the diagram
A PIN diode is a diode with a wide, undoped intrinsic semiconductor region between a p-type semiconductor and an n-type semiconductor region. The p-type and n-type regions are typically heavily doped because they are used for ohmic contacts.
The wide intrinsic region is in contrast to an ordinary p–n diode. The wide intrinsic region makes the PIN diode an inferior rectifier (one typical function of a diode), but it makes it suitable for attenuators, fast switches, photodetectors, and high-voltage power electronics applications.
The PIN photodiode was invented by Jun-Ichi Nishizawa and his colleagues in 1950. It is a semiconductor device.
## Operation
A PIN diode operates under what is known as high-level injection. In other words, the intrinsic "i" region is flooded with charge carriers from the "p" and "n" regions. Its function can be likened to filling up a water bucket with a hole on the side. Once the water reaches the hole's level it will begin to pour out. Similarly, the diode will conduct current once the flooded electrons and holes reach an equilibrium point, where the number of electrons is equal to the number of holes in the intrinsic region.
When the diode is forward biased, the injected carrier concentration is typically several orders of magnitude higher than the intrinsic carrier concentration. Due to this high level injection, which in turn is due to the depletion process, the electric field extends deeply (almost the entire length) into the region. This electric field helps in speeding up of the transport of charge carriers from the P to the N region, which results in faster operation of the diode, making it a suitable device for high-frequency operation.[citation needed]
## Characteristics
The PIN diode obeys the standard diode equation for low-frequency signals. At higher frequencies, the diode looks like an almost perfect (very linear, even for large signals) resistor. The P-I-N diode has a relatively large stored charge adrift in a thick intrinsic region. At a low-enough frequency, the stored charge can be fully swept and the diode turns off. At higher frequencies, there is not enough time to sweep the charge from the drift region, so the diode never turns off. The time required to sweep the stored charge from a diode junction is its reverse recovery time, and it is relatively long in a PIN diode. For a given semiconductor material, on-state impedance, and minimum usable RF frequency, the reverse recovery time is fixed. This property can be exploited; one variety of P-I-N diode, the step recovery diode, exploits the abrupt impedance change at the end of the reverse recovery to create a narrow impulse waveform useful for frequency multiplication with high multiples.[citation needed]
The high-frequency resistance is inversely proportional to the DC bias current through the diode. A PIN diode, suitably biased, therefore acts as a variable resistor. This high-frequency resistance may vary over a wide range (from 0.1 Ω to 10 kΩ in some cases;[1] the useful range is smaller, though).
The wide intrinsic region also means the diode will have a low capacitance when reverse-biased.
In a PIN diode the depletion region exists almost completely within the intrinsic region. This depletion region is much larger than in a PN diode and almost constant-size, independent of the reverse bias applied to the diode. This increases the volume where electron-hole pairs can be generated by an incident photon. Some photodetector devices, such as PIN photodiodes and phototransistors (in which the base-collector junction is a PIN diode), use a PIN junction in their construction.
The diode design has some design trade-offs. Increasing the area of the intrinsic region increases its stored charge reducing its RF on-state resistance while also increasing reverse bias capacitance and increasing the drive current required to remove the charge during a fixed switching time, with no effect on the minimum time required to sweep the charge from the I region. Increasing the thickness of the intrinsic region increases the total stored charge, decreases the minimum RF frequency, and decreases the reverse-bias capacitance, but doesn't decrease the forward-bias RF resistance and increases the minimum time required to sweep the drift charge and transition from low to high RF resistance. Diodes are sold commercially in a variety of geometries for specific RF bands and uses.
## Applications
PIN diodes are useful as RF switches, attenuators, photodetectors, and phase shifters.[2]
### RF and microwave switches
A PIN diode RF microwave switch
Under zero- or reverse-bias (the "off" state), a PIN diode has a low capacitance. The low capacitance will not pass much of an RF signal. Under a forward bias of 1 mA (the "on" state), a typical PIN diode will have an RF resistance of about 1 ohm, making it a good conductor of RF. Consequently, the PIN diode makes a good RF switch.
Although RF relays can be used as switches, they switch relatively slowly (on the order of 10s of milliseconds). A PIN diode switch can switch much more quickly (e.g., 1 microsecond), although at lower RF frequencies it isn't reasonable to expect switching times in the same order of magnitude as the RF period.
For example, the capacitance of an "off"-state discrete PIN diode might be 1 pF. At 320 MHz, the capacitive reactance of 1 pF is 497 ohms:
{\displaystyle {\begin{aligned}Z_{\mathrm {diode} }&={\frac {1}{2\pi fC}}\\&={\frac {1}{2\pi (320\times 10^{6}\,\mathrm {Hz} )(1\times 10^{-12}\,\mathrm {F} )}}\\&=497\,\Omega \end{aligned}}}
As a series element in a 50 ohm system, the off-state attenuation is:
{\displaystyle {\begin{aligned}A&=20\log _{10}\left({\frac {Z_{\mathrm {load} }}{Z_{\mathrm {source} }+Z_{\mathrm {diode} }+Z_{\mathrm {load} }}}\right)\\&=20\log _{10}\left({\frac {50\,\Omega }{50\,\Omega +497\,\Omega +50\,\Omega }}\right)\\&={21.5}\,\mathrm {dB} \end{aligned}}}
This attenuation may not be adequate. In applications where higher isolation is needed, both shunt and series elements may be used, with the shunt diodes biased in complementary fashion to the series elements. Adding shunt elements effectively reduces the source and load impedances, reducing the impedance ratio and increasing the off-state attenuation. However, in addition to the added complexity, the on-state attenuation is increased due to the series resistance of the on-state blocking element and the capacitance of the off-state shunt elements.
PIN diode switches are used not only for signal selection, but also component selection. For example, some low-phase-noise oscillators use them to range-switch inductors.[3]
### RF and microwave variable attenuators
An RF microwave PIN diode attenuator
By changing the bias current through a PIN diode, it is possible to quickly change its RF resistance.
At high frequencies, the PIN diode appears as a resistor whose resistance is an inverse function of its forward current. Consequently, PIN diode can be used in some variable attenuator designs as amplitude modulators or output leveling circuits.
PIN diodes might be used, for example, as the bridge and shunt resistors in a bridged-T attenuator. Another common approach is to use PIN diodes as terminations connected to the 0 degree and -90 degree ports of a quadrature hybrid. The signal to be attenuated is applied to the input port, and the attenuated result is taken from the isolation port. The advantages of this approach over the bridged-T and pi approaches are (1) complementary PIN diode bias drives are not needed—the same bias is applied to both diodes—and (2) the loss in the attenuator equals the return loss of the terminations, which can be varied over a very wide range.
### Limiters
PIN diodes are sometimes designed for use as input protection devices for high-frequency test probes and other circuits. If the input signal is small, the PIN diode has negligible impact, presenting only a small parasitic capacitance. Unlike a rectifier diode, it does not present a nonlinear resistance at RF frequencies, which would give rise to harmonics and intermodulation products. If the signal is large, then when the PIN diode starts to rectify the signal, the forward current charges the drift region and the device RF impedance is a resistance inversely proportional to the signal amplitude. That signal amplitude varying resistance can be used to terminate some predetermined portion the signal in a resistive network dissipating the energy or to create an impedance mismatch that reflects the incident signal back toward the source. The latter may be combined with an isolator, a device containing a circulator which uses a permanent magnetic field to break reciprocity and a resistive load to separate and terminate the backward traveling wave. When used as a shunt limiter the PIN diode is a low impedance over the entire RF cycle, unlike paired rectifier diodes that would swing from a high resistance to a low resistance during each RF cycle clamping the waveform and not reflecting it as completely. The ionization recovery time of gas molecules that permits the creation of the higher power spark gap input protection device ultimately relies on similar physics in a gas.
### Photodetector and photovoltaic cell
The PIN photodiode was invented by Jun-ichi Nishizawa and his colleagues in 1950.[4]
PIN photodiodes are used in fibre optic network cards and switches. As a photodetector, the PIN diode is reverse-biased. Under reverse bias, the diode ordinarily does not conduct (save a small dark current or Is leakage). When a photon of sufficient energy enters the depletion region of the diode, it creates an electron-hole pair. The reverse-bias field sweeps the carriers out of the region, creating current. Some detectors can use avalanche multiplication.
The same mechanism applies to the PIN structure, or p-i-n junction, of a solar cell. In this case, the advantage of using a PIN structure over conventional semiconductor p–n junction is better long-wavelength response of the former. In case of long wavelength irradiation, photons penetrate deep into the cell. But only those electron-hole pairs generated in and near the depletion region contribute to current generation. The depletion region of a PIN structure extends across the intrinsic region, deep into the device. This wider depletion width enables electron-hole pair generation deep within the device, which increases the quantum efficiency of the cell.
Commercially available PIN photodiodes have quantum efficiencies above 80-90% in the telecom wavelength range (~1500 nm), and are typically made of germanium or InGaAs. They feature fast response times (higher than their p-n counterparts), running into several tens of gigahertz,[5] making them ideal for high speed optical telecommunication applications. Similarly, silicon p-i-n photodiodes[6] have even higher quantum efficiencies, but can only detect wavelengths below the bandgap of silicon, i.e. ~1100 nm.
Typically, amorphous silicon thin-film cells use PIN structures. On the other hand, CdTe cells use NIP structure, a variation of the PIN structure. In a NIP structure, an intrinsic CdTe layer is sandwiched by n-doped CdS and p-doped ZnTe; the photons are incident on the n-doped layer, unlike in a PIN diode.
A PIN photodiode can also detect X-ray and gamma ray photons.
In modern fiber-optical communications, the speed of optical transmitters and receivers is one of the most important parameters. Due to the small surface of the photodiode, its parasitic (unwanted) capacitance is reduced. The bandwidth of modern pin photodiodes is reaching the microwave and millimeter waves range.[7]
## Example PIN photodiodes
SFH203 and BPW34 are cheap general purpose PIN diodes in 5 mm clear plastic cases with bandwidths over 100 MHz.
3. ^ "Microwave Switches: Application Notes". Herley General Microwave. Archived from the original on 2013-10-30.{{cite web}}: CS1 maint: unfit URL (link)
|
Lemma 15.22.9. Let $R$ be a domain. Any flat $R$-module is torsion free.
Proof. If $x \in R$ is nonzero, then $x : R \to R$ is injective, and hence if $M$ is flat over $R$, then $x : M \to M$ is injective. Thus if $M$ is flat over $R$, then $M$ is torsion free. $\square$
There are also:
• 2 comment(s) on Section 15.22: Torsion free modules
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
|
# Prime Decomposition/Examples
Jump to navigation Jump to search
## Examples of Prime Decompositions
The prime decompositions for the first few integers are as follows:
$n$ Prime Decomposition of $n$
$1$ $1$
$2$ $2$
$3$ $3$
$4$ $2^2$
$5$ $5$
$6$ $2 \times 3$
$7$ $7$
$8$ $2^3$
$9$ $3^2$
$10$ $2 \times 5$
$11$ $11$
$12$ $2^2 \times 3$
|
# A fellowship position is available in U.S. Environmental Protection Agency’s Office of Water
A fellowship position is available in U.S. Environmental Protection Agency’s Office of Water
The ORISE Fellow will collaborate with cross-program staff and regional offices to research and improve current practices related to water quality assessment and monitoring, conduct technical and programmatic analyses in support of watershed restoration and protection activities, and provideresearch for training materials on watershed management. This will involve research to support the Watershed Branch in the implementation of EPAsHealthy Watersheds Program, Watershed Academy, and Water Quality Assessment Workgroup.This program, administered by ORAU through its contract with the U.S. Department of Energy (DOE) to manage the Oak Ridge Institute for Science and Education (ORISE), was established through an interagency agreement between DOE and EPA. Participants do not become employees of EPA, DOE orthe program administrator.The participant will receive a monthly stipend commensurate with educational level and experience. The current yearly stipend amounts for this opportunity are $50,643 (BS) and$61,947 (MS).Please apply by 9/1/22 on our website.
|
### IIFT 2012 Question Paper Question 5
Instructions
Analyse the following chart showing the exports and imports of Sono Ltd. and answer the questions based on this chart
Question 5
# What is the approximate percentage point difference in the maximum annual percentage increase in export and the minimum annual percentage decrease in Imports?
Solution
The annual % increase in exports is as shown below:-
2003 = $$\frac{150-112.5}{112.5} = 33.33\%$$
2004 = $$0\%$$
2005 = $$\frac{200-150}{150} = 33.33\%$$
2006 = -ve
2007 = $$\frac{200-175}{175} = 14.28\%$$
2008 = $$\frac{275-200}{200} = 37.5\%$$
2009 = -ve
2010 = $$\frac{262.5-200}{200} = 31.5\%$$.
Thus, the maximum % increase in exports = $$37.5\%$$
The lowest % decrease in the year 2003 and = $$\frac{275-200}{275} = 9.09\%$$
Thus, the required difference = $$37.5 - 9.09 \approx 28\%$$
Hence, option A is the correct answer.
|
1. Oct 9, 2008
### tundra
Okay I don't know anything about physics and I'm trying to understand Heisenbergs (sry spelling) uncertainty principle, they say you can't measure the position and momentum at the same time (of an Electron lets say), I want to know WHY?
I mean how does an electron move in the first place? does it teleport randomly around an atom is that why?
can you measure a trajectory of an electron? because if it orbits around in a trajectory and has a constant mass how is it uncertain where it's going? I would understand if it's teleporty-ish you know?
2. Oct 9, 2008
### granpa
the whole concept of the electron 'orbiting' the nucleus is considered to be primitive if not completely wrong.
the electrons position cont be determined because it really is spread out.
3. Oct 9, 2008
### Danger
As for the uncertainty aspect, it's because the mere act of trying to measure something changes its state.
4. Oct 9, 2008
### LURCH
This is one of those times when it might be more helpful to think of the electron as being more like a wave. You could point to the peak of a wave, and then the trough, and then several other places in between, and asked the question, "is this the location of the wave?" And the answer each time would be, "yes."
You see, according to the theory, it isn't just that we cannot know the exact location of the electron. It would seem that an electron actually does not have an exact location.
Here is an analogy that I frequently use, but please bear in mind that it is only an analogy:
Suppose you see a photograph of a baseball. The ball has just been thrown, and you are seeing it in flight between the pitcher's mound and homebase. The camera is looking straight down from overhead, and you are seeing the ball against a uniform background of grass. How can you tell which direction the ball is traveling?
Well, if the camera was using high-speed film and a very fast shutter speed, it would be nearly impossible to tell which way the ball is going. You could point very precisely to the edges of the ball on the photo, and say that you know the location of the ball. You could tell very definitely where in the picture the ball is and where it is not. But you could not say where the ball is headed.
Now, no matter how fast the shutter speed, if the ball was traveling when the photo was taken, there will be some blurring along the leading and trailing edges of the ball in the photo. If you blow the photo up to a very large size you might be able to see that blurring. Then you could point to the edges that are slightly blurred and say that the ball must of the have been travelling along that path, but it would only be a very vageu notion odf the direction of travel. However, due to the blurring at the edges, you have now lost the very sharp line along which you could say, " here there is baseball, and here there is none."
On the other hand, you could have the picture taken with low-speed film and a very long exposure. Then you would see the baseball as a long streak across the photo, and you have a very excellent idea of its path of travel. However, you would've lost any notion of its location. You could point to any spot along that streak and ask, "was this the location of the ball at the time the photo was taken?" And the answer would be, "yes."
Again, I will mention that this is only an analogy illustrating the mutually exclusive nature of knowing both the location and the momentum of nearly anything. In this analogy, for instance, the ball has a definite location and a definite path of travel, and we are simply unable to pin down both. In particle physics, it is generally believed that this is not the case. The particles literally have only a probability of being in a certain location and a probability of traveling a certain path. One can calculate the locations with the highest probability, but never come up with a single location with a 100% probability (definitely the exact location of the particle). Neither can one calculate any location in the universe that has a 0% probability. So, you could point at any place in the whole universe and ask, "is this the location of the electron from my example?" And the answer would be, "probably not, but maybe."
I think that a lot of the difficulty people have with this theory is not that they couldn't understand it, but that their mind rejects it. Our everyday experience tells us that objects cannot exist in such a state. Because no one has "everyday experience" with particles on the Quantum level, some of their behaviors tend to go against what we have learned to accept as possible. So it may help you to start out by trying to convince yourself to believe that electrons really do exist only as clouds of probability, not as the little plastic ball you've seen orbiting the nucleus in models in the classroom. After accepting and believing, understanding should come more easily.
5. Oct 9, 2008
### Staff: Mentor
Fundamentally, we do not know how the electron moves around an atom, in the sense that you are talking about.
The mathematical formalism of quantum mechanics does not address the question "what is the electron really doing before we observe it?" It only gives us a method to calculate probabilities for the result of the observation.
There are various interpretations of QM that attempt to answer the question "what is the electron really doing?" They all reduce to the same mathematics, as far as calculating probabilities is concerned, so it is impossible to decide which interpretation is correct, by experiment.
6. Oct 9, 2008
### DaveC426913
Is this the gist of Feynman's famous line: "Shut up and calculate"?
7. Oct 9, 2008
### Gear300
it means we don't know everything
8. Oct 9, 2008
### jobyts
Is the uncertainity principle mathematically proved?, or it is an assumption due to our limited knowledge on particle physics.
There could be many things in the physics that is unknown to us. Why is it that only for the electron movement, we have an uncertainity principle and not for all others?
9. Oct 9, 2008
### Hootenanny
Staff Emeritus
The Uncertainty Principle is a special case of the Robertson-Schrödinger relation which results from the non-zero commutator between quantum mechanical operators. So yes, the HUP is mathematically proved, it is simply a special case of a mathematical relation.
10. Oct 9, 2008
### Staff: Mentor
Or, to approach it from another direction, the HUP follows inescapably from the assumptions that particles are represented as waves whose wavelength is related to momentum via $\lambda = h / p$, and that these waves superpose (add) like other waves to form localized wave packets. The mathematics describing wave packets is part of Fourier analysis, which applies to all kinds of waves. The HUP has analogues in other kinds of waves.
11. Oct 10, 2008
### Minte
I really like LURCH's examples. Kudos. ^^ That could be in a textbook.
Here's an elaboration, just because I can't let it go unsaid.
The exact location of an electron isn't known. It's more like a cloud of probability. It's "probably" in a certain range.
Electrons have more wave characteristics than "normal" (macro) matter, generally speaking. It's almost as if the electron is "smeared" over the general area, to quote Asimov.
12. Oct 10, 2008
### Crosson
I second this sentiment, crediting LURCH for an apparently original and insightful analogy.
The uncertainty principle becomes more relevant when the object has a small mass. Electrons have a very small mass, but protons, atoms, and molecules obey the uncertainty principle as well.
13. Oct 10, 2008
### system downs
This is true, don't be confused by the metaphorical responses.As for the orbiting the atom thing, completely ignore the guy who said the orbiting is wrong, he is stating a purely hypothetical theory with no experimental basis. The electrons travel as waves, but when the are at rest the act as particles.(planck's constant). This is why you can't measure the velocity and position of an electron at the same time. We are foced to use different methods for finding each, and only one method can be used at once. This is also what makes determinism currently invalid.
14. Oct 11, 2008
### Minte
Oh! I found a really cool analogy of this the other day.
Imagine that you're blindfolded in a room, and you have to stay in one location and find a chair. All you have is a baseball to judge distances. So you throw the ball around a little bit (assume that it can come back to you), and you eventually hit the chair. But when you measured it, you moved it. So you can't be exactly certain of where it is, just the general area.
...It's not exactly it, but I still like the analogy.
15. Oct 11, 2008
### Crosson
This is an analogy of an analogy. It is universally recognized among physicists that the HUP is not caused in a semi-classical way by the measurement apparatus. It is a fundamental property of variables that don't commute, position and momentum are just one case.
If the HUP was merely a consequence of a crude measuring process, then its likely that physicists and engineers would find clever ways to overcome the limitation. Instead the HUP is a fundamental property of all measurements, a property of reality, unless quantum mechanics is wrong.
16. Oct 13, 2008
### Minte
Haha, I know. I just saw it and thought that it could be a representation with macro objects with fewer wavelike tendencies. =) Maybe it's because it was an old book...
17. Oct 13, 2008
### granpa
now that is interesting. any connection to the non-commutative geometry suggested for double special relativity?
18. Oct 13, 2008
### HallsofIvy
Staff Emeritus
One crucial point that should be mentioned is that the energy contained by a photon depends on frequency and so on wavelength. In order to determine the position of a very small particle, you have to decrease the wave length to be smaller than the particle, so increase the frequency and thus the energy. The more accurately you measure the position, the harder you "hit" the particle with a photon and so the more you disturb its momentum.
19. Oct 13, 2008
### DaveC426913
Yah, but the trouble with this explanation is that is encourages exploration for more "passive" forms of observing.
|
## selecting a certain question from group
Dear all,
I am using \insertgroup[3]{questions-simple} to insert 3 question from group "question-simple".
How can I select 3 certain ones from the group ie 1,5 and 10th question. I am using numbers for its question (ie \begin{question}{1})
thank you
KYR.
### Replies (6)
#### RE: selecting a certain question from group - Added by Frédéric Bréalabout 4 years ago
With AMC 1.3.0 or higher.
The documentation
Groups of questions can be manipulated more precisely thanks to the following commands:
\insertgroup[n]{mygroup} (using optional parameter n) only inserts the n first elements from the group.
\insertgroupfrom[n]{groupname}{i} The command does the same as \insertgroup[n]{groupname}, starting from element at index i (the first element has index 0).
\cleargroup{mygroup} clears all group content.
\copygroup{groupA}{groupB} copies all the elements from group groupA to the end of group groupB. With an optional argument n, only the n first elements will be copied: \copygroup[n]{groupA}{groupB}
#### RE: selecting a certain question from group - Added by KYR GIabout 4 years ago
just one more question. How can I squeeze as much as possible tick answers in a line?
I am using multicols but it can go up to 6.
I have no text per answer just the tick boxes,that the student must select.
thank you again,
KYR. GI.
#### RE: selecting a certain question from group - Added by KYR GIabout 4 years ago
just to add to the above, without shuffling the answers, so that the answers are in the same order as in the source.tex
#### RE: selecting a certain question from group - Added by Frédéric Bréalabout 4 years ago
just to add to the above, without shuffling the answers, so that the answers are in the same order as in the source.tex
\begin{choices}[o]
\correctchoice{}
\wrongchoice{}
\wrongchoice{}
\wrongchoice{}
\wrongchoice{}
\wrongchoice{}
\end{choices}
I am using multicols but it can go up to 6.
I don't understand why ? What is the paper size ?
I think this code could work,choose your unit.
\begin{multicols}{6}
\begin{choices}[o]
\correctchoice{}\hspace*{-3cm}
\wrongchoice{}\hspace*{-3em}
\wrongchoice{}\hspace*{-3pt}}
\wrongchoice{}\hspace*{-3mm}
\wrongchoice{}\hspace*{-3ex}
\wrongchoice{}
\end{choices}
\end{multicols}
#### RE: selecting a certain question from group - Added by KYR GIabout 4 years ago
the [0] worked fine.
I am using two columns for the whole test and 6-10 columns for the choices of each question.
What I am trying to do is to match a 2x 8 byte word (ie
ff ae ab s3 4a ae ae ae \newline
ae ae ae ae ae ae ae ae \newline
to the answer choices. (student must find certain hex numbers).
I tried to use a table inside the choices, but with partial success. ie
\begin{tabular}{p{5mm} p{5mm} p{5mm} p{5mm} p{5mm} p{5mm} p{5mm} p{5mm}}
\wrongchoice{} &\wrongchoice{} &\wrongchoice{} &\wrongchoice{} &\wrongchoice{}&\wrongchoice{}&\correctchoice{} &\correctchoice{} \\
the catalog is print ok, but the student subject has no boxes at all. it is just empty.
#### RE: selecting a certain question from group - Added by KYR GIabout 4 years ago
it seems I managed to do what I had in mind.
I used the \AMCBoxOnly{ordered=true} command as follows.
\AMCBoxOnly{ordered=true}{
\wrongchoice[A]{}\correctchoice[A]{}\wrongchoice[C]{}\wrongchoice[D]{}\wrongchoice[A]{}\correctchoice[A]{}\wrongchoice[C]{}\wrongchoice[D]{}
\newline
The answers are printed in order and in one line.
It can be useful to other people trying the same.
KYR.
(1-6/6)
|
Projected Nesterov's Proximal-Gradient Algorithm for Sparse Signal Recovery
We develop a projected Nesterov’s proximal-gradient (PNPG) approach for sparse signal reconstruction that combines adaptive step size with Nesterov’s momentum acceleration. The objective function that we wish to minimize is the sum of a convex differentiable data-fidelity (negative log-likelihood (NLL)) term and a convex regularization term. We apply sparse signal regularization where the signal belongs to a closed convex set within the closure of the domain of the NLL; the convex-set constraint facilitates flexible NLL domains and accurate signal recovery. Signal sparsity is imposed using the ℓ₁-norm penalty on the signal’s linear transform coefficients. The PNPG approach employs a projected Nesterov’s acceleration step with restart and a duality-based inner iteration to compute the proximal mapping. We propose an adaptive step-size selection scheme to obtain a good local majorizing function of the NLL and reduce the time spent backtracking. Thanks to step-size adaptation, PNPG converges faster than the methods that do not adjust to the local curvature of the NLL. We present an integrated derivation of the momentum acceleration and proofs of O(k⁻²) objective function convergence rate and convergence of the iterates, which account for adaptive step size, inexactness of the iterative proximal mapping, and the convex-set constraint. The tuning of PNPG more »
Authors:
;
Award ID(s):
Publication Date:
NSF-PAR ID:
10024074
Journal Name:
IEEE Transactions on Signal Processing
Volume:
65
Issue:
13
Page Range or eLocation-ID:
3510-3525
ISSN:
1053-587X
|
# Primality test for numbers of the form $N=4 \cdot 3^n-1$
Can you provide proof or counterexample for the claim given below?
Inspired by Lucas-Lehmer primality test I have formulated the following claim:
Let $$P_m(x)=2^{-m}\cdot((x-\sqrt{x^2-4})^m+(x+\sqrt{x^2-4})^m)$$ . Let $$N= 4 \cdot 3^{n}-1$$ where $$n\ge3$$ . Let $$S_i=S_{i-1}^3-3 S_{i-1}$$ with $$S_0=P_9(6)$$ . Then $$N$$ is prime if and only if $$S_{n-2} \equiv 0 \pmod{N}$$ .
You can run this test here .
Numbers $$n$$ such that $$4 \cdot 3^n-1$$ is prime can be found here .
I was searching for counterexample using the following PARI/GP code:
CE431(n1,n2)=
{
for(n=n1,n2,
N=4*3^n-1;
S=2*polchebyshev(9,1,3);
ctr=1;
while(ctr<=n-2,
S=Mod(2*polchebyshev(3,1,S/2),N);
ctr+=1);
if(S==0 && !ispseudoprime(N),print("n="n)))
}
This answer proves that if $$N$$ is prime, then $$S_{n-2}\equiv 0\pmod N$$.
Proof :
First of all, let us prove by induction on $$i$$ that $$S_i=s^{2\cdot 3^{i+2}}+t^{2\cdot 3^{i+2}}\tag1$$ where $$s=\sqrt 2-1,t=\sqrt 2+1$$ with $$st=1$$.
We see that $$(1)$$ holds for $$i=0$$ since $$S_0=P_9(6)=(3-2\sqrt 2)^9+(3+2\sqrt 2)^9=s^{2\cdot 3^2}+t^{2\cdot 3^2}$$
Supposing that $$(1)$$ holds for $$i$$ gives \begin{align}S_{i+1}&=S_i^3-3S_i \\\\&=(s^{2\cdot 3^{i+2}}+t^{2\cdot 3^{i+2}})^3-3(s^{2\cdot 3^{i+2}}+t^{2\cdot 3^{i+2}}) \\\\&=s^{2\cdot 3^{i+3}}+t^{2\cdot 3^{i+3}}\qquad\square\end{align}
Using $$(1)$$ and $$N=4\cdot 3^n-1$$, we get $$S_{n-2}=s^{2\cdot 3^{n}}+t^{2\cdot 3^{n}}=s^{(N+1)/2}+t^{(N+1)/2}$$
So, we have, by the binomial theorem, \begin{align}&S_{n-2}^2-2 \\\\&=s\cdot s^{N}+t\cdot t^{N} \\\\&=(\sqrt 2-1)(\sqrt 2-1)^N+(\sqrt 2+1)(\sqrt 2+1)^N \\\\&=\sqrt 2\sum_{i=0}^{N}\binom Ni(\sqrt 2)^i((-1)^{N-i}+1^{N-i}) \\&\qquad\quad +\sum_{i=0}^{N}\binom Ni(\sqrt 2)^i(1^{N-i}-(-1)^{N-i}) \\\\&=\sum_{j=1}^{(N+1)/2}\binom N{2j-1}2^{j+1}+\sum_{j=0}^{(N-1)/2}\binom{N}{2j}2^{j+1}\end{align}
Using that $$\binom{N}{m}\equiv 0\pmod N$$ for $$1\le m\le N-1$$, we get $$S_{n-2}^2-2\equiv 2^{\frac{N+1}{2}+1}+2=4\cdot 2^{\frac{N-1}{2}}+2\pmod N\tag 2$$
Here, since $$4\cdot 3^{2k}-1\equiv 4\cdot 9^k-1\equiv 3\pmod 8$$ and $$4\cdot 3^{2k+1}-1\equiv 12\cdot 9^k-1\equiv 3\pmod 8$$ we see that $$N\equiv 3\pmod 8$$ from which we have $$2^{\frac{N-1}{2}}\equiv \left(\frac 2N\right)=(-1)^{(N^2-1)/8}=-1\pmod N\tag3$$ where $$\left(\frac{q}{p}\right)$$ denotes the Legendre symbol.
From $$(2)(3)$$, we have $$S_{n-2}^2-2\equiv 4\cdot (-1)+2\pmod N$$ from which $$S_{n-2}\equiv 0\pmod N$$ follows.$$\quad\blacksquare$$
|
# How to express the special property of a partial order?
A partial order is a preorder with the following property:
$a\leq b \wedge b\leq a \implies a = b\quad(1)$
In my opinion, the following property it’s more intuitive:
$a\leq b \wedge b\leq c \implies a \neq c\quad(2)$
How do I prove they are equivalent?
First I start by saying $(1) \implies (2)$
$\lnot \big( (1) \land \lnot (2) \big)=\lnot(1)\lor(2) =$
$\lnot(a\leq b \land b\leq a\implies a=b)\lor(2)=$
$\lnot\Big(\lnot\big((a\leq b\land b\leq a)\land\lnot(a=b)\big)\Big)\lor(2)=$
$\big((a\le b\land b\le a)\land\lnot(a=b)\big)\lor(2)=$
$(a\le b\land b\le a \land a\ne b)\lor(2)=$
$(a\le b\land b\le a \land a\ne b)\lor\lnot\big((a\le b \land b \le c)\land\lnot(a\ne c)\big) =$
$(a\le b\land b\le a \land a\ne b)\lor\big(\lnot(a\le b \land b \le c)\lor(a\ne c)\big) = \quad ...$
I’m kind of stuck there. Is there a way to continue showing that this is a tautology?
• They aren't equivalent. They actually negate each other. – user223391 Apr 16 '18 at 20:48
• Well no, otherwise you would be able to evaluate it to false – gurghet Apr 16 '18 at 20:51
• Why downvoter? why? – gurghet Apr 16 '18 at 20:55
• @gurghet It wasn't me ... in fact I appreciate your attempt to prove equivalence! Unfortunately, it doesn't work, since your proposal leads to a problem .. see my Answer – Bram28 Apr 16 '18 at 20:58
Your alternative doesn't work, since a partial order is reflexive, and so if you pick $a=b=c$, then with your suggestion you get $a \not = a$, which is a contradiction.
• Oh, that was actually faster. Thanks, but the = is not part of the partial order definition, so why did you bring reflexivity in this? I mean reflexivity states that a ≤ a, right? – gurghet Apr 16 '18 at 20:58
• PS: I’m approaching this from a cat theory point of view so ≤ is just a symbol for a relation, it doesn't know what an = is, I suppose = is just our way to say that we are talking about the same object – gurghet Apr 16 '18 at 21:04
• @gurghet Yes, reflexivity is $a \le a$, so if you pick the same $a$ for $a$, $b$, and $c$ you get $(a \le a \land a \le a) \rightarrow a \not = a$, and since a partial order is reflexive, we have $a \le a$ for any $a$, and thus $a \le a \land a \le a$ is true, and hence we obtain $a \not = a$ – Bram28 Apr 16 '18 at 21:07
• Ok now I get it – gurghet Apr 16 '18 at 21:15
• In categorical jargon, I think a partial order can be defined as a preorder (i.e., a category such that each hom-set has at most one element) that is skeletal (i.e., such that no two distinct objects are isomorphic). IMHO this jargon adds nothing to the underlying intuitive ideas behind preorders and partial orders. $\ddot{\smile}$. – Rob Arthan Apr 16 '18 at 21:22
Order relations come into two brands: strict and loose. Actually, they’re equivalent formulations, when the identity relation is available.
Let’s talk about relations on the set $A$. A relation $R$ on $A$ is areflexive if $(a,a)\notin R$, for all $a\in A$.
A strict preorder is just a transitive relation. A strict order is an areflexive and transitive relation.
A loose preorder is a reflexive and transitive relation. A loose order is a reflexive, antisymmetric (the property you seem not to like) and transitive relation.
Note that the identity relation is needed for stating the antisymmetric property.
How can one pass from strict to loose (pre)orders? Of course we need to have the identity relation available.
Suppose $S$ is a strict (pre)order on $A$ and define $$S^+=S\cup\{(a,a):a\in A\}$$ It’s easy to show that $S^+$ is transitive as well. It is also reflexive by construction. Hence $S^+$ is a loose preorder when $S$ is a strict preorder. Suppose $S$ is a strict order and also that $(a,b)\in S^+$ and $(b,a)\in S^+$. We cannot have $a\ne b$, because otherwise $(a,b)\in S$ and $(b,a)\in S$ and, by transitivity, $(a,a)\in S$: a contradiction to $S$ being areflexive. Therefore $a=b$ and so $S^+$ is antisymmetric.
Thus, from any strict order $S$ we obtain a loose one. Conversely, if $L$ is a loose order on $A$, it can be easily proved that $$L^-=L\setminus\{(a,a):a\in A\}$$ is a strict order on $A$. Moreover, it is obvious that
• if $S$ is a strict order, then $S=(S^+)^-$;
• if $L$ is a loose order, then $L=(L^-)^+$.
Hence there’s no real difference between strict and loose orders on $A$: they correspond bijectively.
In order that a strict preorder $S$ is a strict order, it is necessary and sufficient that
for all $a,b,c\in A$, if $(a,b)\in S$ and $(b,c)\in S$, then $a\ne c$.
Indeed, suppose $S$ is a strict order. Then $(a,b)\in S$ and $(b,a)\in S$ implies $(a,a)\in S$, by transitivity: a contradiction to $S$ being areflexive.
Conversely, suppose $S$ is a strict preorder that satisfies the above property and $(a,a)\in S$ (that is, $S$ is not areflexive). Then, with $b=c=a$, we have $(a,b)\in S$, $(b,c)\in S$ and $a=c$: a contradiction.
So your “more intuitive” property indeed characterizes strict preorders that are strict orders. But is it really more intuitive? This is a matter of preference. Strict (pre)orders can be defined with no appeal to the identity relation, but your property makes use of it (of its negation, but it’s the same).
For a loose preorder the property you like is definitely not equivalent to it being a loose order. Indeed, no loose order satisfies it for the simple reason it is reflexive.
• That's a very good point. Thanks. Didn't know areflexivity was a thing. Do you know any application of such an order by the way? – gurghet Apr 16 '18 at 23:14
• @gurghet Since strict and loose orders are essentially the same thing, there's no application of the former type that cannot be formulated with the latter. – egreg Apr 16 '18 at 23:37
• @egreg "Irreflexive" is a more common word than "areflexive" – Derek Elkins Apr 17 '18 at 1:42
|
# Awk
Last updated - Fri Feb 17 07:16:01 EST 2012
Part of the Unix tutorials And then there's My blog
• Why learn AWK?
• Basic Structure
• Executing an AWK script
• Which shell to use with AWK?
• Dynamic Variables
• The Essential Syntax of AWK
• Arithmetic Expressions
• Unary arithmetic operators
• The Autoincrement and Autodecrement Operators
• Assignment Operators
• Conditional expressions
• Regular Expressions
• And/Or/Not
• Commands
• AWK Built-in Variables
• FS - The Input Field Separator Variable
• OFS - The Output Field Separator Variable
• NF - The Number of Fields Variable
• NR - The Number of Records Variable
• RS - The Record Separator Variable
• ORS - The Output Record Separator Variable
• FILENAME - The Current Filename Variable
• Associative Arrays
• Multi-dimensional Arrays
• Example of using AWK's Associative Arrays
• Output of the script
• Picture Perfect PRINTF Output
• PRINTF - formatting output
• Escape Sequences
• Format Specifiers
• Width - specifying minimum field size
• Left Justification
• The Field Precision Value
• Explicit File output
• AWK Numerical Functions
• Trigonometric Functions
• Exponents, logs and square roots
• Truncating Integers
• "Random Numbers
• The Lotto script
• String Functions
• The Length function
• The Index Function
• The Substr function
• GAWK's Tolower and Toupper function
• The Split function
• NAWK's string functions
• The Match function
• The System function
• The Getline function
• The systime function
• The Strftime function
• User Defined Functions
• AWK patterns
• Formatting AWK programs
• Environment Variables
• ARGC - Number or arguments (NAWK/GAWK)
• ARGV - Array of arguments (NAWK/GAWK)
• ARGIND - Argument Index (GAWK only)
• FNR (NAWK/GAWK)
• OFMT (NAWK/GAWK)
• RSTART, RLENGTH and match (NAWK/GAWK)
• SUBSEP - Multi-dimensional array separator (NAWK/GAWK)
• ENVIRON - environment variables (GAWK only)
• IGNORECASE (GAWK only)
• CONVFMT - conversion format (GAWK only)
• ERRNO - system errors (GAWK only)
• FIELDWIDTHS - fixed width fields (GAWK only)
• AWK, NAWK, GAWK, or PERL
Copyright 2001,2004 Bruce Barnett and General Electric Company
You are allowed to print copies of this tutorial for your personal use, and link to this page, but you are not allowed to make electronic copies, or redistribute this tutorial in any form without permission.
Original version written in 1994 and published in the Sun Observer
Updated: Tue Mar 9 11:07:08 EST 2004
Updated: Fri Feb 16 05:32:38 EST 2007
Updated: Wed Apr 16 20:55:07 EDT 2008
Awk is an extremely versatile programming language for working on files. We'll teach you just enough to understand the examples in this page, plus a smidgen.
The examples given below have the extensions of the executing script as part of the filename. Once you download it, and make it executable, you can rename it anything you want.
# Why learn AWK?
In the past I have covered grep and sed. This section discusses AWK, another cornerstone of UNIX shell programming. There are three variations of AWK:
AWK - the original from AT&T
NAWK - A newer, improved version from AT&T
GAWK - The Free Software foundation's version
Originally, I didn't plan to discuss NAWK, but several UNIX vendors have replaced AWK with NAWK, and there are several incompatibilities between the two. It would be cruel of me to not warn you about the differences. So I will highlight those when I come to them. It is important to know than all of AWK's features are in NAWK and GAWK. Most, if not all, of NAWK's features are in GAWK. NAWK ships as part of Solaris. GAWK does not. However, many sites on the Internet have the sources freely available. If you user Linux, you have GAWK. But in general, assume that I am talking about the classic AWK unless otherwise noted.
Why is AWK so important? It is an excellent filter and report writer. Many UNIX utilities generates rows and columns of information. AWK is an excellent tool for processing these rows and columns, and is easier to use AWK than most conventional programming languages. It can be considered to be a pseudo-C interpretor, as it understands the same arithmatic operators as C. AWK also has string manipulation functions, so it can search for particular strings and modify the output. AWK also has associative arrays, which are incredible useful, and is a feature most computing languages lack. Associative arrays can make a complex problem a trivial exercise.
I won't exhaustively cover AWK. That is, I will cover the essential parts, and avoid the many variants of AWK. It might be too confusing to discuss three different versions of AWK. I won't cover the GNU version of AWK called "gawk." Similarly, I will not discuss the new AT&T AWK called "nawk." The new AWK comes on the Sun system, and you may find it superior to the old AWK in many ways. In particular, it has better diagnostics, and won't print out the infamous "bailing out near line ..." message the original AWK is prone to do. Instead, "nawk" prints out the line it didn't understand, and highlights the bad parts with arrows. GAWK does this as well, and this really helps a lot. If you find yourself needing a feature that is very difficult or impossible to do in AWK, I suggest you either use NAWK, or GAWK, or convert your AWK script into PERL using the "a2p" conversion program which comes with PERL. PERL is a marvelous language, and I use it all the time, but I do not plan to cover PERL in these tutorials. Having made my intention clear, I can continue with a clear conscience.
Many UNIX utilities have strange names. AWK is one of those utilities. It is not an abbreviation for awkward. In fact, it is an elegant and simple language. The work "AWK" is derived from the initials of the language's three developers: A. Aho, B. W. Kernighan and P. Weinberger.
# Basic Structure
The essential organization of an AWK program follows the form:
pattern { action }
The pattern specifies when the action is performed. Like most UNIX utilities, AWK is line oriented. That is, the pattern specifies a test that is performed with each line read as input. If the condition is true, then the action is taken. The default pattern is something that matches every line. This is the blank or null pattern. Two other important patterns are specified by the keywords "BEGIN" and "END." As you might expect, these two words specify actions to be taken before any lines are read, and after the last line is read. The AWK program below:
BEGIN { print "START" }
{ print }
END { print "STOP" }
adds one line before and one line after the input file. This isn't very useful, but with a simple change, we can make this into a typical AWK program:
BEGIN { print "File\tOwner"," }
{ print $8, "\t",$3}
END { print " - DONE -" }
I'll improve the script in the next sections, but we'll call it "FileOwner." But let's not put it into a script or file yet. I will cover that part in a bit. Hang on and follow with me so you get the flavor of AWK.
The characters "\t" Indicates a tab character so the output lines up on even boundries. The "$8" and "$3" have a meaning similar to a shell script. Instead of the eighth and third argument, they mean the eighth and third field of the input line. You can think of a field as a column, and the action you specify operates on each line or row read in.
There are two differences between AWK and a shell processing the characters within double quotes. AWK understands special characters follow the "\" character like "t". The Bourne and C UNIX shells do not. Also, unlike the shell (and PERL) AWK does not evaluate variables within strings. To explain, the second line could not be written like this:
{print "$8\t$3" }
That example would print "$8$3." Inside the quotes, the dollar sign is not a special character. Outside, it corresponds to a field. What do I mean by the third and eight field? Consider the Solaris "/usr/bin/ls -l" command, which has eight columns of information. The System V version (Similar to the Linux version), "/usr/5bin/ls -l," has 9 columns. The third column is the owner, and the eighth (or nineth) column in the name of the file. This AWK program can be used to process the output of the "ls -l" command, printing out the filename, then the owner, for each file. I'll show you how.
Update: On a linux system, change "$8" to "$9".
One more point about the use of a dollar sign. In scripting languages like Perl and the various shells, a dollar sign means the word following is the name of the variable. Awk is different. The dollar sign means that we are refering to a field or column in the current line. When switching between Perl and AWK you must remener that "$" has a different meaning. So the following piece of code prints two "fields" to standard out. The first field printed is the number "5", the second is the fifth field (or column) on the input line. BEGIN { x=5 } { print x,$x}
# Executing an AWK script
So let's start writing our first AWK script. There are a couple of ways to do this.
Assuming the first script is called "FileOwner," the invocation would be
ls -l | FileOwner
This might generate the following if there were only two files in the current directory:
File Owner
a.file barnett
another.file barnett
- DONE -
There are two problems with this script. Both problems are easy to fix, but I'll hold off on this until I cover the basics.
The script itself can be written in many ways. The C shell version would look like this:
#!/bin/csh -f
# Linux users have to change $8 to$9
awk '\
BEGIN { print "File\tOwner" } \
{ print $8, "\t",$3} \
END { print " - DONE -" } \
'
As you can see in the above script, each line of the AWK script must have a backslash if it is not the last line of the script. This is necessary as the C shell doesn't, by default, allow strings I have a long list of complaints about using the C shell. See Top Ten reasons not to use the C shell
The Bourne shell (as does most shells) allows quoted strings to span several lines:
#!/bin/sh
# Linux users have to change $8 to$9
awk '
BEGIN { print "File\tOwner" }
{ print $8, "\t",$3}
END { print " - DONE -" }
'
The third form is to store the commands in a file, and execute
awk -f filename
Since AWK is also an interpretor, you can save yourself a step and make the file executable by add one line in the beginning of the file:
#!/bin/awk -f
BEGIN { print "File\tOwner" }
{ print $8, "\t",$3}
END { print " - DONE -" }
Change the permission with the chmod command, (i.e. "chmod +x awk_example1.awk"), and the script becomes a new command. Notice the "-f" option following '#!/bin/awk "above, which is also used in the third format where you use AWK to execute the file directly, i.e. "awk -f filename". The "-f" option specifies the AWK file containing the instructions. As you can see, AWK considers lines that start with a "#" to be a comment, just like the shell. To be precise, anything from the "#" to the end of the line is a comment (unless its inside an AWK string. However, I always comment my AWK scripts with the "#" at the start of the line, for reasons I'll discuss later.
Which format should you use? I prefer the last format when possible. It's shorter and simpler. It's also easier to debug problems. If you need to use a shell, and want to avoid using too many files, you can combine them as we did in the first and second example.
# Which shell to use with AWK?
The format of AWK is not free-form. You cannot put new line breaks just anywhere. They must go in particular locations. To be precise, in the original AWK you can insert a new line character after the curly braces, and at the end of a command, but not elsewhere. If you wanted to break a long line into two lines at any other place, you had to use a backslash:
#!/bin/awk -f
BEGIN { print "File\tOwner" }
{ print $8, "\t", \$3}
END { print " - DONE -" }
The Bourne shell version would be
#!/bin/sh
awk '
BEGIN { print "File\tOwner" }
{ print $8, "\t", \$3}
END { print "done"}
'
while the C shell would be
#!/bin/csh -f
awk '
BEGIN { print "File\tOwner" }\
{ print $8, "\t", \\$3}\
END { print "done"}\
'
As you can see, this demonstrates how awkward the C shell is when enclosing an AWK script. Not only are back slashes needed for every line, some lines need two. (Note - this is true when using old awk (e.g. on Solaris) because the print statement had to be on one line. Newer AWK's are more flexible where newlines can be added.) Many people will warn you about the C shell. Some of the problems are subtle, and you may never see them. Try to include an AWK or sed script within a C shell script, and the back slashes will drive you crazy. This is what convinced me to learn the Bourne shell years ago, when I was starting out. I strongly recommend you use the Bourne shell for any AWK or sed script. If you don't use the Bourne shell, then you should learn it. As a minimum, learn how to set variables, which by some strange coincidence is the subject of the next section.
# Dynamic Variables
Since you can make a script an AWK executable by mentioning "#!/bin/awk -f" on the first line, including an AWK script inside a shell script isn't needed unless you want to either eliminate the need for an extra file, or if you want to pass a variable to the insides of an AWK script. Since this is a common problem, now is as good a time to explain the technique. I'll do this by showing a simple AWK program that will only print one column. NOTE: there will be a bug in the first version. The number of the column will be specified by the first argument. The first version of the program, which we will call "Column," looks like this:
#!/bin/sh
#NOTE - this script does not work!
column=$1 awk '{print$column}'
Click here to get file (but be aware that it doesn't work): Column1.sh
A suggested use is:
ls -l | Column 3
This would print the third column from the ls command, which would be the owner of the file. You can change this into a utility that counts how many files are owned by each user by adding
ls -l | Column 3 | uniq -c | sort -nr
Only one problem: the script doesn't work. The value of the "column" variable is not seen by AWK. Change "awk" to "echo" to check. You need to turn off the quoting when the variable is seen. This can be done by ending the quoting, and restarting it after the variable:
#!/bin/sh
column=$1 awk '{print$'$column'}' Click here to get file: Column2.sh This is a very important concept, and throws experienced programmers a curve ball. In many computer languages, a string has a start quote, and end quote, and the contents in between. If you want to include a special character inside the quote, you must prevent the character from having the typical meaning. In the C language, this is down by putting a backslash before the character. In other languages, there is a special combination of characters to to this. In the C and Bourne shell, the quote is just a switch. It turns the interpretation mode on or off. There is really no such concept as "start of string" and "end of string." The quotes toggle a switch inside the interpretor. The quote character is not passed on to the application. This is why there are two pairs of quotes above. Notice there are two dollar signs. The first one is quoted, and is seen by AWK. The second one is not quoted, so the shell evaluates the variable, and replaces "$column" by the value. If you don't understand, either change "awk" to "echo," or change the first line to read "#!/bin/sh -x."
Some improvements are needed, however. The Bourne shell has a mechanism to provide a value for a variable if the value isn't set, or is set and the value is an empty string. This is done by using the format:
${ variable:- defaultvalue} This is shown below, where the default column will be one: #!/bin/sh column=${1:-1}
awk '{print $'$column'}'
We can save a line by combining these two steps:
#!/bin/sh
awk '{print $'${1:-1}'}'
It is hard to read, but it is compact. There is one other method that can be used. If you execute an AWK command and include on the command line
variable= value
this variable will be set when the AWK script starts. An example of this use would be:
#!/bin/sh
awk '{print $c}' c=${1:-1}
This last variation does not have the problems with quoting the previous example had. You should master the earlier example, however, because you can use it with any script or command. The second method is special to AWK. Modern AWK's have other options as well. See the comp.unix.shell FAQ.
# The Essential Syntax of AWK
Earlier I discussed ways to start an AWK script. This section will discuss the various grammatical elements of AWK.
# Arithmetic Expressions
There are several arithmetic operators, similar to C. These are the binary operators, which operate on two variables:
+--------------------------------------------+
| AWK Table 1 |
| Binary Operators |
|Operator Type Meaning |
+--------------------------------------------+
|+ Arithmetic Addition |
|- Arithmetic Subtraction |
|* Arithmetic Multiplication |
|/ Arithmetic Division |
|% Arithmetic Modulo |
|<space> String Concatenation |
+--------------------------------------------+
Using variables with the value of "7" and "3," AWK returns the following results for each operator when using the print command:
+---------------------+
|Expression Result |
+---------------------+
|7+3 10 |
|7-3 4 |
|7*3 21 |
|7/3 2.33333 |
|7%3 1 |
|7 3 73 |
+---------------------+
There are a few points to make. The modulus operator finds the remainder after an integer divide. The print command output a floating point number on the divide, but an integer for the rest. The string concatenate operator is confusing, since it isn't even visible. Place a space between two variables and the strings are concatenated together. This also shows that numbers are converted automatically into strings when needed. Unlike C, AWK doesn't have "types" of variables. There is one type only, and it can be a string or number. The conversion rules are simple. A number can easily be converted into a string. When a string is converted into a number, AWK will do so. The string "123" will be converted into the number 123. However, the string "123X" will be converted into the number 0. (NAWK will behave differently, and converts the string into integer 123, which is found in the beginning of the string).
# Unary arithmetic operators
The "+" and "-" operators can be used before variables and numbers. If X equals 4, then the statement:
print -x;
will print "-4."
# The Autoincrement and Autodecrement Operators
AWK also supports the "++" and "--" operators of C. Both increment or decrement the variables by one. The operator can only be used with a single variable, and can be before or after the variable. The prefix form modifies the value, and then uses the result, while the postfix form gets the results of the variable, and afterwards modifies the variable. As an example, if X has the value of 3, then the AWK statement
print x++, " ", ++x;
would print the numbers 3 and 5. These operators are also assignment operators, and can be used by themselves on a line:
x++;
--y;
# Assignment Operators
Variables can be assigned new values with the assignment operators. You know about "++" and "--." The other assignment statement is simply:
variable = arithmetic_expression
Certain operators have precedence over others; parenthesis can be used to control grouping. The statement
x=1+2*3 4;
is the same as
x = (1 + (2 * 3)) "4";
Both print out "74."
Notice spaces can be added for readability. AWK, like C, has special assignment operators, which combine a calculation with an assignment. Instead of saying
x=x+2;
you can more concisely say:
x+=2;
The complete list follows:
+-----------------------------------------+
| AWK Table 2 |
| Assignment Operators |
|Operator Meaning |
|+= Add result to variable |
|-= Subtract result from variable |
|*= Multiply variable by result |
|/= Divide variable by result |
|%= Apply modulo to variable |
+-----------------------------------------+
# Conditional expressions
The second type of expression in AWK is the conditional expression. This is used for certain tests, like the if or while. Boolean conditions evaluate to true or false. In AWK, there is a definite difference between a boolean condition, and an arithmetic expression. You cannot convert a boolean condition to an integer or string. You can, however, use an arithmetic expression as a conditional expression. A value of 0 is false, while anything else is true. Undefined variables has the value of 0. Unlike AWK, NAWK lets you use booleans as integers.
Arithmetic values can also be converted into boolean conditions by using relational operators:
+---------------------------------------+
| AWK Table 3 |
| Relational Operators |
|Operator Meaning |
+---------------------------------------+
|== Is equal |
|!= Is not equal to |
|> Is greater than |
|>= Is greater than or equal to |
|< Is less than |
|<= Is less than or equal to |
+---------------------------------------+
These operators are the same as the C operators. They can be used to compare numbers or strings. With respect to strings, lower case letters are greater than upper case letters.
# Regular Expressions
Two operators are used to compare strings to regular expressions:
+-----------------------------+
| AWK Table 4 |
|Regular Expression Operators |
|Operator Meaning |
+-----------------------------+
|~ Matches |
|!~ Doesn't match |
+-----------------------------+
The order in this case is particular. The regular expression must be enclosed by slashes, and comes after the operator. AWK supports extended regular expressions, so the following are examples of valid tests:
word !~ /START/
lawrence_welk ~ /(one|two|three)/
# And/Or/Not
There are two boolean operators that can be used with conditional expressions. That is, you can combine two conditional expressions with the "or" or "and" operators: "&&" and "||." There is also the unary not operator: "!."
# Commands
There are only a few commands in AWK. The list and syntax follows:
if ( conditional ) statement [ else statement ]
while ( conditional ) statement
for ( expression ; conditional ; expression ) statement
for ( variable in array ) statement
break
continue
{ [ statement ] ...}
variable= expression
print [ expression-list ] [ > expression ]
printf format [ , expression-list ] [ > expression ]
next
exit
At this point, you can use AWK as a language for simple calculations; If you wanted to calculate something, and not read any lines for input, you could use the BEGIN keyword discussed earlier, combined with a exit command:
#!/bin/awk -f
BEGIN {
# Print the squares from 1 to 10 the first way
i=1;
while (i <= 10) {
printf "The square of ", i, " is ", i*i;
i = i+1;
}
# do it again, using more concise code
for (i=1; i <= 10; i++) {
printf "The square of ", i, " is ", i*i;
}
# now end
exit;
}
The following asks for a number, and then squares it:
#!/bin/awk -f
BEGIN {
print "type a number";
}
{
print "The square of ", $1, " is ",$1*$1; print "type another number"; } END { print "Done" } Click here to get file: awk_ask_for_square.awk The above isn't a good filter, because it asks for input each time. If you pipe the output of another program into it, you would generate a lot of meaningless prompts. Here is a filter that you should find useful. It counts lines, totals up the numbers in the first column, and calculates the average. Pipe "wc -c *" into it, and it will count files, and tell you the average number of words per file, as well as the total words and the number of files. #!/bin/awk -f BEGIN { # How many lines lines=0; total=0; } { # this code is executed once for each line # increase the number of files lines++; # increase the total size, which is field #1 total+=$1;
}
END {
# end, now output the total
print lines " lines read";
print "total is ", total;
if (lines > 0 ) {
print "average is ", total/lines;
} else {
print "average is 0";
}
}
You can pipe the output of "ls -s" into this filter to count the number of files, the total size, and the average size. There is a slight problem with this script, as it includes the output of "ls" that reports the total. This causes the number of files to be off by one. Changing
lines++;
to
if ($1 != "total" ) lines++; will fix this problem. Note the code which prevents a divide by zero. This is common in well-written scripts. I also initialize the variables to zero. This is not necessary, but it is a good habit. # AWK Built-in Variables I have mentioned two kinds of variables: positional and user defined. A user defined variable is one you create. A positional variable is not a special variable, but a function triggered by the dollar sign. Therefore print$1;
and
X=1;
print $X; do the same thing: print the first field on the line. There are two more points about positional variables that are very useful. The variable "$0" refers to the entire line that AWK reads in. That is, if you had eight fields in a line,
print $0; is similar to print$1, $2,$3, $4,$5, $6,$7, $8 This will change the spacing between the fields; otherwise, they behave the same. You can modify positional variables. The following commands$2="";
print;
deletes the second field. If you had four fields, and wanted to print out the second and fourth field, there are two ways. This is the first:
#!/bin/awk -f
{
$1="";$3="";
print;
}
and the second
#!/bin/awk -f
{
print $2,$4;
}
These perform similarly, but not identically. The number of spaces between the values vary. There are two reasons for this. The actual number of fields does not change. Setting a positional variable to an empty string does not delete the variable. It's still there, but the contents has been deleted. The other reason is the way AWK outputs the entire line. There is a field separator that specifies what character to put between the fields on output. The first example outputs four fields, while the second outputs two. In-between each field is a space. This is easier to explain if the characters between fields could be modified to be made more visible. Well, it can. AWK provides special variables for just that purpose.
# FS - The Input Field Separator Variable
AWK can be used to parse many system administration files. However, many of these files do not have whitespace as a separator. as an example, the password file uses colons. You can easily change the field separator character to be a colon using the "-F" command line option. The following command will print out accounts that don't have passwords:
awk -F: '{if ($2 == "") print$1 ": no password!"}' </etc/passwd
There is a way to do this without the command line option. The variable "FS" can be set like any variable, and has the same function as the "-F" command line option. The following is a script that has the same function as the one above.
#!/bin/awk -f
BEGIN {
FS=":";
}
{
if ( $2 == "" ) { print$1 ": no password!";
}
}
The second form can be used to create a UNIX utility, which I will name "chkpasswd," and executed like this:
chkpasswd </etc/passwd
The command "chkpasswd -F:" cannot be used, because AWK will never see this argument. All interpreter scripts accept one and only one argument, which is immediately after the "#!/bin/awk" string. In this case, the single argument is "-f." Another difference between the command line option and the internal variable is the ability to set the input field separator to be more than one character. If you specify
FS=": ";
then AWK will split a line into fields wherever it sees those two characters, in that exact order. You cannot do this on the command line.
There is a third advantage the internal variable has over the command line option: you can change the field separator character as many times as you want while reading a file. Well, at most once for each line. You can even change it depending on the line you read. Suppose you had the following file which contains the numbers 1 through 7 in three different formats. Lines 4 through 6 have colon separated fields, while the others separated by spaces.
ONE 1 I
TWO 2 II
#START
THREE:3:III
FOUR:4:IV
FIVE:5:V
#STOP
SIX 6 VI
SEVEN 7 VII
The AWK program can easily switch between these formats:
#!/bin/awk -f
{
if ($1 == "#START") { FS=":"; } else if ($1 == "#STOP") {
FS=" ";
} else {
#print the Roman number in column 3
print $3 } } Click here to get file: awk_example3.awk Note the field separator variable retains its value until it is explicitly changed. You don't have to reset it for each line. Sounds simple, right? However, I have a trick question for you. What happens if you change the field separator while reading a line? That is, suppose you had the following line One Two:Three:4 Five and you executed the following script: #!/bin/awk -f { print$2
FS=":"
print $2 } What would be printed? "Three" or "Two:Three:4?" Well, the script would print out "Two:Three:4" twice. However, if you deleted the first print statement, it would print out "Three" once! I thought this was very strange at first, but after pulling out some hair, kicking the deck, and yelling at muself and everyone who had anything to do with the development of UNIX, it is intuitively obvious. You just have to be thinking like a professional programmer to realize it is intuitive. I shall explain, and prevent you from causing yourself physical harm. If you change the field separator before you read the line, the change affects what you read. If you change it after you read the line, it will not redefine the variables. You wouldn't want a variable to change on you as a side-effect of another action. A programming language with hidden side effects is broken, and should not be trusted. AWK allows you to redefine the field separator either before or after you read the line, and does the right thing each time. Once you read the variable, the variable will not change unless you change it. Bravo! To illustrate this further, here is another version of the previous code that changes the field separator dynamically. In this case, AWK does it by examining field "$0," which is the entire line. When the line contains a colon, the field separator is a colon, otherwise, it is a space:
#!/bin/awk -f
{
if ( $0 ~ /:/ ) { FS=":"; } else { FS=" "; } #print the third field, whatever format print$3
}
This example eliminates the need to have the special "#START" and "#STOP" lines in the input.
# OFS - The Output Field Separator Variable
There is an important difference between
print $2$3
and
print $2,$3
The first example prints out one field, and the second prints out two fields. In the first case, the two positional parameters are concatenated together and output without a space. In the second case, AWK prints two fields, and places the output field separator between them. Normally this is a space, but you can change this by modifying the variable "OFS."
If you wanted to copy the password file, but delete the encrypted password, you could use AWK:
#!/bin/awk -f
BEGIN {
FS=":";
OFS=":";
}
{
#!/bin/awk -f
}
# RS - The Record Separator Variable
Normally, AWK reads one line at a time, and breaks up the line into fields. You can set the "RS" variable to change AWK's definition of a "line." If you set it to an empty string, then AWK will read the entire file into memory. You can combine this with changing the "FS" variable. This example treats each line as a field, and prints out the second and third line:
#!/bin/awk -f
BEGIN {
# change the record separator from newline to nothing
RS=""
# change the field separator from whitespace to newline
FS="\n"
}
{
# print the second and third line of the file
print $2,$3;
}
The two lines are printed with a space between. Also this will only work if the input file is less than 100 lines, therefore this technique is limited. You can use it to break words up, one word per line, using this:
#!/bin/awk -f
BEGIN {
RS=" ";
}
{
print ;
}
but this only works if all of the words are separated by a space. If there is a tab or punctuation inside, it would not.
# ORS - The Output Record Separator Variable
The default output record separator is a newline, like the input. This can be set to be a newline and carriage return, if you need to generate a text file for a non-UNIX system.
#!/bin/awk -f
# this filter adds a carriage return to all lines
# before the newline character
BEGIN {
ORS="\r\n"
}
{ print }
# FILENAME - The Current Filename Variable
The last variable known to regular AWK is "FILENAME," which tells you the name of the file being read.
#!/bin/awk -f
# reports which file is being read
BEGIN {
f="";
}
{ if (f != FILENAME) {
f=FILENAME;
}
print;
}
This can be used if several files need to be parsed by AWK. Normally you use standard input to provide AWK with information. You can also specify the filenames on the command line. If the above script was called "testfilter," and if you executed it with
testfilter file1 file2 file3
It would print out the filename before each change. An alternate way to specify this on the command line is
testfilter file1 - file3 <file2
In this case, the second file will be called "-," which is the conventional name for standard input. I have used this when I want to put some information before and after a filter operation. The prefix and postfix files special data before and after the real data. By checking the filename, you can parse the information differently. This is also useful to report syntax errors in particular files:
#!/bin/awk -f
{
if (NF == 6) {
# do the right thing
} else {
if (FILENAME == "-" ) {
print "SYNTAX ERROR, Wrong number of fields,",
"in STDIN, line #:", NR, "line: ", $0; } else { print "SYNTAX ERROR, Wrong number of fields,", "Filename: ", FILENAME, "line # ", NR,"line: ",$0;
}
}
}
# Associative Arrays
I have used dozens of different programming languages over the last 20 years, and AWK is the first language I found that has associative arrays. This term may be meaningless to you, but believe me, these arrays are invaluable, and simplify programming enormously. Let me describe a problem, and show you how associative arrays can be used for reduce coding time, giving you more time to explore another stupid problem you don't want to deal with in the first place.
Let's suppose you have a directory overflowing with files, and you want to find out how many files are owned by each user, and perhaps how much disk space each user owns. You really want someone to blame; it's hard to tell who owns what file. A filter that processes the output of ls would work:
ls -l | filter
But this doesn't tell you how much space each user is using. It also doesn't work for a large directory tree. This requires find and xargs:
find . -type f -print | xargs ls -l | filter
The third column of "ls" is the username. The filter has to count how many times it sees each user. The typical program would have an array of usernames and another array that counts how many times each username has been seen. The index to both arrays are the same; you use one array to find the index, and the second to keep track of the count. I'll show you one way to do it in AWK--the wrong way:
#!/bin/awk -f
# bad example of AWK programming
# this counts how many files each user owns.
BEGIN {
number_of_users=0;
}
{
# must make sure you only examine lines with 8 or more fields
if (NF>7) {
user=0;
# look for the user in our list of users
for (i=1; i<=number_of_users; i++) {
# is the user known?
if (username[i] == $3) { # found it - remember where the user is user=i; } } if (user == 0) { # found a new user username[++number_of_users]=$3;
user=number_of_users;
}
# increase number of counts
count[user]++;
}
}
END {
for (i=1; i<=number_of_users; i++) {
}
}
I don't want you to read this script. I told you it's the wrong way to do it. If you were a C programmer, and didn't know AWK, you would probably use a technique like the one above. Here is the same program, except this example that uses AWK's associative arrays. The important point is to notice the difference in size between these two versions:
#!/bin/awk -f
{
username[$3]++; } END { for (i in username) { print username[i], i; } } Click here to get file: count_users0.awk This is shorter, simpler, and much easier to understand--Once you understand exactly what an associative array is. The concept is simple. Instead of using a number to find an entry in an array, use anything you want. An associative array in an array whose index is a string. All arrays in AWK are associative. In this case, the index into the array is the third field of the "ls" command, which is the username. If the user is "bin," the main loop increments the count per user by effectively executing username["bin"]++; UNIX guru's may gleefully report that the 8 line AWK script can be replaced by: awk '{print$3}' | sort | uniq -c | sort -nr
True, However, this can't count the total disk space for each user. We need to add some more intelligence to the AWK script, and need the right foundation to proceed. There is also a slight bug in the AWK program. If you wanted a "quick and dirty" solution, the above would be fine. If you wanted to make it more robust, you have to handle unusual conditions. If you gave this program an empty file for input, you would get the error:
awk: username is not an array
Also, if you piped the output of "ls -l" to it, the line that specified the total would increment a non-existing user. There are two techniques used to eliminate this error. The first one only counts valid input:
#!/bin/awk -f
{
if (NF>7) {
username[$3]++; } } END { for (i in username) { print username[i], i; } } Click here to get file: count_users1.awk This fixes the problem of counting the line with the total. However, it still generates an error when an empty file is read as input. To fix this problem, a common technique is to make sure the array always exists, and has a special marker value which specifies that the entry is invalid. Then when reporting the results, ignore the invalid entry. #!/bin/awk -f BEGIN { username[""]=0; } { username[$3]++;
}
END {
for (i in username) {
if (i != "") {
}
}
}
This happens to fix the other problem. Apply this technique and you will make your AWK programs more robust and easier for others to use.
# Multi-dimensional Arrays
Some people ask if AWK can handle multi-dimensional arrays. It can. However, you don't use conventional two-dimensional arrays. Instead you use associative arrays. (Did I even mention how useful associative arrays are?) Remember, you can put anything in the index of an associative array. It requires a different way to think about problems, but once you understand, you won't be able to live without it. All you have to do is to create an index that combines two other indices. Suppose you wanted to effectively execute
a[1,2] = y;
This is invalid in AWK. However, the following is perfectly fine:
a[1 "," 2] = y;
Remember: the AWK string concatenation operator is the space. It combines the three strings into the single string "1,2." Then it uses it as an index into the array. That's all there is to it. There is one minor problem with associative arrays, especially if you use thefor command to output each element: you have no control over the order of output. You can create an algorithm to generate the indices to an associative array, and control the order this way. However, this is difficult to do. Since UNIX provides an excellent sort utility, more programmers separate the information processing from the sorting. I'll show you what I mean.
# Example of using AWK's Associative Arrays
I often find myself using certain techniques repeatedly in AWK. This example will demonstrate these techniques, and illustrate the power and elegance of AWK. The program is simple and common. The disk is full. Who's gonna be blamed? I just hope you use this power wisely. Remember, you may be the one who filled up the disk.
Having resolved my moral dilemma, by placing the burden squarely on your shoulders, I will describe the program in detail. I will also discuss several tips you will find useful in large AWK programs. First, initialize all arrays used in a for loop. There will be four arrays for this purpose. Initialization is easy:
u_count[""]=0;
g_count[""]=0;
ug_count[""]=0;
all_count[""]=0;
The second tip is to pick a convention for arrays. Selecting the names of the arrays, and the indices for each array is very important. In a complex program, it can become confusing to remember which array contains what. I suggest you clearly identify the indices and contents of each array. To demonstrate, I will use a "_count" to indicate the number of files, and "_sum" to indicate the sum of the file sizes. In addition, the part before the "_" specifies the index used for the array, which will be either "u" for user, "g" for group, "ug" for the user and group combination, and "all" for the total for all files. In other programs, I have used names like
Follow a convention like this, and it will be hard for you to forget the purpose of each associative array. Even when a quick hack comes back to haunt you three years later. I've been there.
The third suggestion is to make sure your input is in the correct form. It's generally a good idea to be pessimistic, but I will add a simple but sufficient test in this example.
if (NF != 10) {
# ignore
} else {
etc.
I placed the test and error clause up front, so the rest of the code won't be cluttered. AWK doesn't have user defined functions. NAWK, GAWK and PERL do.
The next piece of advice for complex AWK scripts is to define a name for each field used. In this case, we want the user, group and size in disk blocks. We could use the file size in bytes, but the block size corresponds to the blocks on the disk, a more accurate measurement of space. Disk blocks can be found by using "ls -s." This adds a column, so the username becomes the fourth column, etc. Therefore the script will contain:
size=$1; user=$4;
group=$5; This will allow us to easily adapt to changes in input. We could use "$1" throughout the script, but if we changed the number of fields, which the "-s" option does, we'd have to change each field reference. You don't want to go through an AWK script, and change all the "$1" to "$2," and also change the "$2" to "$3" because those are really the "$1" that you just changed to "$2." Of course this is confusing. That's why it's a good idea to assign names to the fields. I've been there too.
Next the AWK script will count how many times each combination of users and groups occur. That is, I am going to construct a two-part index that contains the username and groupname. This will let me count up the number of times each user/group combination occurs, and how much disk space is used.
Consider this: how would you calculate the total for just a user, or for just a group? You could rewrite the script. Or you could take the user/group totals, and total them with a second script.
You could do it, but it's not the AWK way to do it. If you had to examine a bazillion files, and it takes a long time to run that script, it would be a waste to repeat this task. It's also inefficient to require two scripts when one can do everything. The proper way to solve this problem is to extract as much information as possible in one pass through the files. Therefore this script will find the number and size for each category:
Each user
Each group
Each user/group combination
All users and groups
This is why I have 4 arrays to count up the number of files. I don't really need 4 arrays, as I can use the format of the index to determine which array is which. But this does maake the program easier to understand for now. The next tip is subtle, but you will see how useful it is. I mentioned the indices into the array can be anything. If possible, select a format that allows you to merge information from several arrays. I realize this makes no sense right now, but hang in there. All will become clear soon. I will do this by constructing a universal index of the form
<user> <group>
This index will be used for all arrays. There is a space between the two values. This covers the total for the user/group combination. What about the other three arrays? I will use a "*" to indicate the total for all users or groups. Therefore the index for all files would be "* *" while the index for all of the file owned by user daemon would be "daemon *." The heart of the script totals up the number and size of each file, putting the information into the right category. I will use 8 arrays; 4 for file sizes, and 4 for counts:
u_count[user " *"]++;
g_count["* " group]++;
ug_count[user " " group]++;
all_count["* *"]++;
u_size[user " *"]+=size;
g_size["* " group]+=size;
ug_size[user " " group]+=size;
all_size["* *"]+=size;
This particular universal index will make sorting easier, as you will see. Also important is to sort the information in an order that is useful. You can try to force a particular output order in AWK, but why work at this, when it's a one line command for sort? The difficult part is finding the right way to sort the information. This script will sort information using the size of the category as the first sort field. The largest total will be the one for all files, so this will be one of the first lines output. However, there may be several ties for the largest number, and care must be used. The second field will be the number of files. This will help break a tie. Still, I want the totals and sub-totals to be listed before the individual user/group combinations. The third and fourth fields will be generated by the index of the array. This is the tricky part I warned you about. The script will output one string, but the sort utility will not know this. Instead, it will treat it as two fields. This will unify the results, and information from all 4 arrays will look like one array. The sort of the third and fourth fields will be dictionary order, and not numeric, unlike the first two fields. The "*" was used so these sub-total fields will be listed before the individual user/group combination.
The arrays will be printed using the following format:
for (i in u_count) {
if (i != "") {
print u_size[i], u_count[i], i;
}
}
O
I only showed you one array, but all four are printed the same way. That's the essence of the script. The results is sorted, and I converted the space into a tab for cosmetic reasons.
# Output of the script
I changed my directory to /usr/ucb, used the script in that directory. The following is the output:
size count user group
3173 81 * *
3173 81 root *
2973 75 * staff
2973 75 root staff
88 3 * daemon
88 3 root daemon
64 2 * kmem
64 2 root kmem
48 1 * tty
48 1 root tty
This says there are 81 files in this directory, which takes up 3173 disk blocks. All of the files are owned by root. 2973 disk blocks belong to group staff. There are 3 files with group daemon, which takes up 88 disk blocks.
As you can see, the first line of information is the total for all users and groups. The second line is the sub-total for the user "root." The third line is the sub-total for the group "staff." Therefore the order of the sort is useful, with the sub-totals before the individual entries. You could write a simple AWK or grep script to obtain information from just one user or one group, and the information will be easy to sort.
There is only one problem. The /usr/ucb directory on my system only uses 1849 blocks; at least that's what du reports. Where's the discrepancy? The script does not understand hard links. This may not be a problem on most disks, because many users do not use hard links. Still, it does generate inaccurate results. In this case, the program vi is also eexeditview, and 2 other names. The program only exists once, but has 7 names. You can tell because the link count (field 2) reports 7. This causes the file to be counted 7 times, which causes an inaccurate total. The fix is to only count multiple links once. Examining the link count will determine if a file has multiple links. However, how can you prevent counting a link twice? There is an easy solution: all of these files have the same inode number. You can find this number with the -i option to ls. To save memory, we only have to remember the inodes of files that have multiple links. This means we have to add another column to the input, and have to renumber all of the field references. It's a good thing there are only three. Adding a new field will be easy, because I followed my own advice.
The final script should be easy to follow. I have used variations of this hundreds of times and find it demonstrates the power of AWK as well as provide insight to a powerful programming paradigm. AWK solves these types of problems easier than most languages. But you have to use AWK the right way.
Note - this version was written for a Solaris box. You have to verify if ls is generating the right number of arguments. The -g argument may need to be deleted, and the check for the number of files may have to be modified. UpdatedI added a Linux version below - to be downloaded.
This is a fully working version of the program, that accurately counts disk space, appears below:
#!/bin/sh
find . -type f -print | xargs /usr/bin/ls -islg |
awk '
BEGIN {
# initialize all arrays used in for loop
u_count[""]=0;
g_count[""]=0;
ug_count[""]=0;
all_count[""]=0;
}
{
# validate your input
if (NF != 11) {
# ignore
} else {
# assign field names
inode=$1; size=$2;
linkcount=$4; user=$5;
group=$6; # should I count this file? doit=0; if (linkcount == 1) { # only one copy - count it doit++; } else { # a hard link - only count first one seen[inode]++; if (seen[inode] == 1) { doit++; } } # if doit is true, then count the file if (doit ) { # total up counts in one pass # use description array names # use array index that unifies the arrays # first the counts for the number of files u_count[user " *"]++; g_count["* " group]++; ug_count[user " " group]++; all_count["* *"]++; # then the total disk space used u_size[user " *"]+=size; g_size["* " group]+=size; ug_size[user " " group]+=size; all_size["* *"]+=size; } } } END { # output in a form that can be sorted for (i in u_count) { if (i != "") { print u_size[i], u_count[i], i; } } for (i in g_count) { if (i != "") { print g_size[i], g_count[i], i; } } for (i in ug_count) { if (i != "") { print ug_size[i], ug_count[i], i; } } for (i in all_count) { if (i != "") { print all_size[i], all_count[i], i; } } } ' | # numeric sort - biggest numbers first # sort fields 0 and 1 first (sort starts with 0) # followed by dictionary sort on fields 2 + 3 sort +0nr -2 +2d | # add header (echo "size count user group";cat -) | # convert space to tab - makes it nice output # the second set of quotes contains a single tab character tr ' ' ' ' # done - I hope you like it Click here to get file: count_users3.awk Remember when I said I didn't need to use 4 different arrays? I can use just one. This is more confusing, but more concise #!/bin/sh find . -type f -print | xargs /usr/bin/ls -islg | awk ' BEGIN { # initialize all arrays used in for loop count[""]=0; } { # validate your input if (NF != 11) { # ignore } else { # assign field names inode=$1;
size=$2; linkcount=$4;
user=$5; group=$6;
# should I count this file?
doit=0;
if (linkcount == 1) {
# only one copy - count it
doit++;
} else {
# a hard link - only count first one
seen[inode]++;
if (seen[inode] == 1) {
doit++;
}
}
# if doit is true, then count the file
if (doit ) {
# total up counts in one pass
# use description array names
# use array index that unifies the arrays
# first the counts for the number of files
count[user " *"]++;
count["* " group]++;
count[user " " group]++;
count["* *"]++;
# then the total disk space used
size[user " *"]+=size;
size["* " group]+=size;
size[user " " group]+=size;
size["* *"]+=size;
}
}
}
END {
# output in a form that can be sorted
for (i in count) {
if (i != "") {
print size[i], count[i], i;
}
}
} ' |
# numeric sort - biggest numbers first
# sort fields 0 and 1 first (sort starts with 0)
# followed by dictionary sort on fields 2 + 3
sort +0nr -2 +2d |
(echo "size count user group";cat -) |
# convert space to tab - makes it nice output
# the second set of quotes contains a single tab character
tr ' ' ' '
# done - I hope you like it
Here is a version that works with modern Linux systems, but assumes you have well-behaved filenames (without spaces, etc,): count_users_new.awk
# Picture Perfect PRINTF Output
So far, I described several simple scripts that provide useful information, in a somewhat ugly output format. Columns might not line up properly, and it is often hard to find patterns or trends without this unity. As you use AWK more, you will be desirous of crisp, clean formatting. To achieve this, you must master the printf function.
# PRINTF - formatting output
The printf is very similar to the C function with the same name. C programmers should have no problem using printf function.
Printf has one of these syntactical forms:
printf ( format);
printf ( format, arguments...);
printf ( format) >expression;
printf ( format, arguments...) > expression;
The parenthesis and semicolon are optional. I only use the first format to be consistent with other nearby printf statements. A print statement would do the same thing. Printf reveals it's real power when formatting commands are used.
The first argument to the printf function is the format. This is a string, or variable whose value is a string. This string, like all strings, can contain special escape sequences to print control characters.
# Escape Sequences
The character "\" is used to "escape" or mark special characters. The list of these characters is in table below:
+-------------------------------------------------------+
| AWK Table 5 |
| Escape Sequences |
|Sequence Description |
+-------------------------------------------------------+
|\a ASCII bell (NAWK only) |
|\b Backspace |
|\f Formfeed |
|\n Newline |
|\r Carriage Return |
|\t Horizontal tab |
|\v Vertical tab (NAWK only) |
|\ddd Character (1 to 3 octal digits) (NAWK only) |
|\xdd Character (hexadecimal) (NAWK only) |
| Any character c |
+-------------------------------------------------------+
It's difficult to explain the differences without being wordy. Hopefully I'll provide enough examples to demonstrate the differences.
With NAWK, you can print three tab characters using these three different representations:
printf("\t\11\x9\n");
A tab character is decimal 9, octal 11, or hexadecimal 09. See the man page ascii(7) for more information. Similarly, you can print three double-quote characters (decimal 34, hexadecimal 22, or octal 42 ) using
printf("\"\x22\42\n");
You should notice a difference between the printf function and the print function. Print terminates the line with the ORS character, and divides each field with the OFS separator. Printf does nothing unless you specify the action. Therefore you will frequently end each line with the newline character "\n," and you must specify the separating characters explicitly.
# Format Specifiers
The power of the printf statement lies in the format specifiers, which always start with the character "%." The format specifiers are described in table 6:
+----------------------------------------+
| AWK Table 6 |
| Format Specifiers |
|Specifier Meaning |
+----------------------------------------+
|c ASCII Character |
|d Decimal integer |
|e Floating Point number |
| (engineering format) |
|f Floating Point number |
| (fixed point format) |
|g The shorter of e or f, |
| with trailing zeros removed |
|o Octal |
|s String |
|% Literal % |
+----------------------------------------+
Again, I'll cover the differences quickly. Table 3 illustrates the differences. The first line states "printf(%c\n",100.0)"" prints a "d."
+--------------------------------+
| AWK Table 7 |
| Example of format conversions |
|Format Value Results |
+--------------------------------+
|%c 100.0 d |
|%c "100.0" 1 (NAWK?) |
|%c 42 " |
|%d 100.0 100 |
|%e 100.0 1.000000e+02 |
|%f 100.0 100.000000 |
|%g 100.0 100 |
|%o 100.0 144 |
|%s 100.0 100.0 |
|%s "13f" 13f |
|%d "13f" 0 (AWK) |
|%d "13f" 13 (NAWK) |
|%x 100.0 64 |
+--------------------------------+
This table reveals some differences between AWK and NAWK. When a string with numbers and letters are coverted into an integer, AWK will return a zero, while NAWK will convert as much as possible. The second example, marked with "NAWK?" will return "d" on some earlier versions of NAWK, while later versions will return "1."
Using format specifiers, there is another way to print a double quote with NAWK. This demonstrates Octal, Decimal and Hexadecimal conversion. As you can see, it isn't symmetrical. Decimal conversions are done differently.
printf("%s%s%s%c\n", "\"", "\x22", "\42", 34);
Between the "%" and the format character can be four optional pieces of information. It helps to visualize these fields as:
%<sign><zero><width>.<precision>format
I'll discuss each one separately.
# Width - specifying minimum field size
If there is a number after the "%," this specifies the minimum number of characters to print. This is the width field. Spaces are added so the number of printed characters equal this number. Note that this is the minimum field size. If the field becomes to large, it will grow, so information will not be lost. Spaces are added to the left.
This format allows you to line up columns perfectly. Consider the following format:
printf("%st%d\n", s, d);
If the string "s" is longer than 8 characters, the columns won't line up. Instead, use
printf("%20s%d\n", s, d);
As long as the string is less than 20 characters, the number will start on the 21st column. If the string is too long, then the two fields will run together, making it hard to read. You may want to consider placing a single space between the fields, to make sure you will always have one space between the fields. This is very important if you want to pipe the output to another program.
Adding informational headers makes the output more readable. Be aware that changing the format of the data may make it difficult to get the columns aligned perfectly. Consider the following script:
#!/usr/bin/awk -f
BEGIN {
printf("String Number\n");
}
{
printf("%10s %6d\n", $1,$2);
}
It would be awkward (forgive the choice of words) to add a new column and retain the same alignment. More complicated formats would require a lot of trial and error. You have to adjust the first printf to agree with the second printf statement. I suggest
#!/usr/bin/awk -f
BEGIN {
printf("%10s %6sn", "String", "Number");
}
{
printf("%10s %6d\n", $1,$2);
}
or even better
#!/usr/bin/awk -f
BEGIN {
format1 ="%10s %6sn";
format2 ="%10s %6dn";
printf(format1, "String", "Number");
}
{
printf(format2, $1,$2);
}
The last example, by using string variables for formatting, allows you to keep all of the formats together. This may not seem like it's very useful, but when you have multiple formats and multiple columns, it's very useful to have a set of templates like the above. If you have to add an extra space to make things line up, it's much easier to find and correct the problem with a set of format strings that are together, and the exact same width. CHainging the first columne from 10 characters to 11 is easy.
# Left Justification
The last example places spaces before each field to make sure the minimum field width is met. What do you do if you want the spaces on the right? Add a negative sign before the width:
printf("%-10s %-6d\n", $1,$2);
This will move the printing characters to the left, with spaces added to the right.
# The Field Precision Value
The precision field, which is the number between the decimal and the format character, is more complex. Most people use it with the floating point format (%f), but surprisingly, it can be used with any format character. With the octal, decimal or hexadecimal format, it specifies the minimum number of characters. Zeros are added to met this requirement. With the %e and %f formats, it specifies the number of digits after the decimal point. The %e "e+00" is not included in the precision. The %g format combines the characteristics of the %d and %f formats. The precision specifies the number of digits displayed, before and after the decimal point. The precision field has no effect on the %c field. The %s format has an unusual, but useful effect: it specifies the maximum number of significant characters to print.
If the first number after the "%," or after the "%-," is a zero, then the system adds zeros when padding. This includes all format types, including strings and the %c character format. This means "%010d" and "%.10d" both adds leading zeros, giving a minimum of 10 digits. The format "%10.10d" is therefore redundant. Table 8 gives some examples:
+--------------------------------------------+
| AWK Table 8 |
| Examples of complex formatting |
|Format Variable Results |
+--------------------------------------------+
|%c 100 "d" |
|%10c 100 " d" |
|%010c 100 "000000000d" |
+--------------------------------------------+
|%d 10 "10" |
|%10d 10 " 10" |
|%10.4d 10.123456789 " 0010" |
|%10.8d 10.123456789 " 00000010" |
|%.8d 10.123456789 "00000010" |
|%010d 10.123456789 "0000000010" |
+--------------------------------------------+
|%e 987.1234567890 "9.871235e+02" |
|%10.4e 987.1234567890 "9.8712e+02" |
|%10.8e 987.1234567890 "9.87123457e+02" |
+--------------------------------------------+
|%f 987.1234567890 "987.123457" |
|%10.4f 987.1234567890 " 987.1235" |
|%010.4f 987.1234567890 "00987.1235" |
|%10.8f 987.1234567890 "987.12345679" |
+--------------------------------------------+
|%g 987.1234567890 "987.123" |
|%10g 987.1234567890 " 987.123" |
|%10.4g 987.1234567890 " 987.1" |
|%010.4g 987.1234567890 "00000987.1" |
|%.8g 987.1234567890 "987.12346" |
+--------------------------------------------+
|%o 987.1234567890 "1733" |
|%10o 987.1234567890 " 1733" |
|%010o 987.1234567890 "0000001733" |
|%.8o 987.1234567890 "00001733" |
+--------------------------------------------+
|%s 987.123 "987.123" |
|%10s 987.123 " 987.123" |
|%10.4s 987.123 " 987." |
|%010.8s 987.123 "000987.123" |
+--------------------------------------------+
|%x 987.1234567890 "3db" |
|%10x 987.1234567890 " 3db" |
|%010x 987.1234567890 "00000003db" |
|%.8x 987.1234567890 "000003db" |
+--------------------------------------------+
There is one more topic needed to complete this lesson on printf.
# Explicit File output
Instead of sending output to standard output, you can send output to a named file. The format is
printf("string\n") > "/tmp/file";
You can append to an existing file, by using ">>:"
printf("string\n") >> "/tmp/file";
Like the shell, the double angle brackets indicates output is appended to the file, instead of written to an empty file. Appending to the file does not delete the old contents. However, there is a subtle difference between AWK and the shell.
Consider the shell program:
#!/bin/sh
while x=line
do
echo got $x >>/tmp/a echo got$x >/tmp/b
done
This will read standard input, and copy the standard input to files "/tmp/a" and "/tmp/b." File "/tmp/a" will grow larger, as information is always appended to the file. File "/tmp/b," however, will only contain one line. This happens because each time the shell see the ">" or ">>" characters, it opens the file for writing, choosing the truncate/create or appending option at that time.
Now consider the equivalent AWK program:
#!/usr/bin/awk -f
{
print $0 >>"/tmp/a" print$0 >"/tmp/b"
}
This behaves differently. AWK chooses the create/append option the first time a file is opened for writing. Afterwards, the use of ">" or ">>" is ignored. Unlike the shell, AWK copies all of standard input to file "/tmp/b."
Instead of a string, some versions of AWK allow you to specify an expression:
# [note to self] check this one - it might not work
printf("string\n") > FILENAME ".out";
The following uses a string concatenation expression to illustrate this:
#!/usr/bin/awk -f
END {
for (i=0;i<30;i++) {
printf("i=%d\n", i) > "/tmp/a" i;
}
}
This script never finishes, because AWK can have 10 additional files open, and NAWK can have 20. If you find this to be a problem, look into PERL.
I hope this gives you the skill to make your AWK output picture perfect.
# AWK Numerical Functions
In previous tutorials, I have shown how useful AWK is in manipulating information, and generating reports. When you add a few functions, AWK becomes even more, mmm, functional.
There are three types of functions: numeric, string and whatever's left. Table9 lists all of the numeric functions:
+----------------------------------+
| AWK Table 9 |
+----------------------------------+
| Numeric Functions |
|Name Function Variant |
+----------------------------------+
|cos cosine AWK |
|exp Exponent AWK |
|int Integer AWK |
|log Logarithm AWK |
|sin Sine AWK |
|sqrt Square Root AWK |
|atan2 Arctangent NAWK |
|rand Random NAWK |
|srand Seed Random NAWK |
+----------------------------------+
# Trigonometric Functions
Oh joy. I bet millions, if not dozens, of my readers have been waiting for me to discuss trigonometry. Personally, I don't use trigonometry much at work, except when I go off on a tangent.
Sorry about that. I don't know what came over me. I don't usually resort to puns. I'll write a note to myself, and after I sine the note, I'll have my boss cosine it.
Now stop that! I hate arguing with myself. I always lose. Thinking about math I learned in the year 2 B.C. (Before Computers) seems to cause flashbacks of high school, pimples, and (shudder) times best left forgotten. The stress of remembering those days must have made me forget the standards I normally set for myself. Besides, no-one appreciates obtuse humor anyway, even if I find acute way to say it.
I better change the subject fast. Combining humor and computers is a very serious matter.
Here is a NAWK script that calculates the trigonometric functions for all degrees between 0 and 360. It also shows why there is no tangent, secant or cosecant function. (They aren't necessary). If you read the script, you will learn of some subtle differences between AWK and NAWK. All this in a thin veneer of demonstrating why we learned trigonometry in the first place. What more can you ask for? Oh, in case you are wondering, I wrote this in the month of December.
#!/usr/bin/nawk -f
#
# A smattering of trigonometry...
#
# This AWK script plots the values from 0 to 360
# for the basic trigonometry functions
# but first - a review:
#
# (Note to the editor - the following diagram assumes
# a fixed width font, like Courier.
# otherwise, the diagram looks very stupid, instead of slightly stupid)
#
# Assume the following right triangle
#
# Angle Y
#
# |
# |
# |
# a | c
# |
# |
# +------- Angle X
# b
#
# since the triangle is a right angle, then
# X+Y=90
#
# Basic Trigonometric Functions. If you know the length
# of 2 sides, and the angles, you can find the length of the third side.
# Also - if you know the length of the sides, you can calculate
# the angles.
#
# The formulas are
#
# sine(X) = a/c
# cosine(X) = b/c
# tangent(X) = a/b
#
# reciprocal functions
# cotangent(X) = b/a
# secant(X) = c/b
# cosecant(X) = c/a
#
# Example 1)
# if an angle is 30, and the hypotenuse (c) is 10, then
# a = sine(30) * 10 = 5
# b = cosine(30) * 10 = 8.66
#
# The second example will be more realistic:
#
# Suppose you are looking for a Christmas tree, and
# while talking to your family, you smack into a tree
# because your head was turned, and your kids were arguing over who
# was going to put the first ornament on the tree.
#
# As you come to, you realize your feet are touching the trunk of the tree,
# and your eyes are 6 feet from the bottom of your frostbitten toes.
# While counting the stars that spin around your head, you also realize
# the top of the tree is located at a 65 degree angle, relative to your eyes.
# You suddenly realize the tree is 12.84 feet high! After all,
# tangent(65 degrees) * 6 feet = 12.84 feet
# All right, it isn't realistic. Not many people memorize the
# tangent table, or can estimate angles that accurately.
# I was telling the truth about the stars spinning around the head, however.
#
BEGIN {
# assign a value for pi.
PI=3.14159;
# select an "Ed Sullivan" number - really really big
BIG=999999;
# pick two formats
# Keep them close together, so when one column is made larger
# the other column can be adjusted to be the same width
fmt1="%7s %8s %8s %8s %10s %10s %10s %10sn";
# print out the title of each column
fmt2="%7d %8.2f %8.2f %8.2f %10.2f %10.2f %10.2f %10.2fn";
# old AWK wants a backslash at the end of the next line
# to continue the print statement
# new AWK allows you to break the line into two, after a comma
"Tangent","Cotangent","Secant", "Cosecant");
for (i=0;i<=360;i++) {
# convert degrees to radians
r = i * (PI / 180 );
# in new AWK, the backslashes are optional
# in OLD AWK, they are required
printf(fmt2, i, r,
# cosine of r
cos(r),
# sine of r
sin(r),
#
# I ran into a problem when dividing by zero.
# So I had to test for this case.
#
# old AWK finds the next line too complicated
# I don't mind adding a backslash, but rewriting the
# next three lines seems pointless for a simple lesson.
# This script will only work with new AWK, now - sigh...
# On the plus side,
# I don't need to add those back slashes anymore
#
# tangent of r
(cos(r) == 0) ? BIG : sin(r)/cos(r),
# cotangent of r
(sin(r) == 0) ? BIG : cos(r)/sin(r),
# secant of r
(cos(r) == 0) ? BIG : 1/cos(r),
# cosecant of r
(sin(r) == 0) ? BIG : 1/sin(r));
}
# put an exit here, so that standard input isn't needed.
exit;
}
NAWK also has the arctangent function. This is useful for some graphics work, as
arc tangent(a/b) = angle (in radians)
Therefore if you have the X and Y locations, the arctangent of the ratio will tell you the angle. The atan2() function returns a value from negative pi to positive pi.
# Exponents, logs and square roots
The following script uses three other arithmetic functions: logexp, and sqrt. I wanted to show how these can be used together, so I divided the log of a number by two, which is another way to find a square root. I then compared the value of the exponent of that new log to the built-in square root function. I then calculated the difference between the two, and converted the difference into a positive number.
#!/bin/awk -f
# demonstrate use of exp(), log() and sqrt in AWK
# e.g. what is the difference between using logarithms and regular arithmetic
# note - exp and log are natural log functions - not base 10
#
BEGIN {
# what is the about of error that will be reported?
ERROR=0.000000000001;
# loop a long while
for (i=1;i<=2147483647;i++) {
# find log of i
logi=log(i);
# what is square root of i?
# divide the log by 2
logsquareroot=logi/2;
# convert log of i back
squareroot=exp(logsquareroot);
# find the difference between the logarithmic calculation
# and the built in calculation
diff=sqrt(i)-squareroot;
# make difference positive
if (diff < 0) {
diff*=-1;
}
if (diff > ERROR) {
printf("%10d, squareroot: %16.8f, error: %16.14f\n",
i, squareroot, diff);
}
}
exit;
}
Yawn. This example isn't too exciting, except to those who enjoy nitpicking. Expect the program to reach 3 million before you see any errors. I'll give you a more exciting sample soon.
# Truncating Integers
All version of AWK contain the int function. This truncates a number, making it an integer. It can be used to round numbers by adding 0.5:
printf("rounding %8.4f gives %8dn", x, int(x+0.5));
# "Random Numbers
NAWK has functions that can generate random numbers. The function rand returns a random number between 0 and 1. Here is an example that calculates a million random numbers between 0 and 100, and counts how often each number was used:
#!/usr/bin/nawk -f
# old AWK doesn't have rand() and srand()
# only new AWK has them
# how random is the random function?
BEGIN {
# srand();
i=0;
while (i++<1000000) {
x=int(rand()*100 + 0.5);
y[x]++;
}
for (i=0;i<=100;i++) {
printf("%dt%d\n",y[i],i);
}
exit;
}
If you execute this script several times, you will get the exact same results. Experienced programmers know random number generators aren't really random, unless they use special hardware. These numbers are pseudo-random, and calculated using some algorithm. Since the algorithm is fixed, the numbers are repeatable unless the numbers are seeded with a unique value. This is done using the srand function above, which is commented out. Typically the random number generator is not given a special seed until the bugs have been worked out of the program. There's nothing more frustrating than a bug that occurs randomly. The srand function may be given an argument. If not, it uses the current time and day to generate a seed for the random number generator.
# The Lotto script
I promised a more useful script. This may be what you are waiting for. It reads two numbers, and generates a list of random numbers. I call the script "lotto.awk."
#!/usr/bin/nawk -f
BEGIN {
# Assume we want 6 random numbers between 1 and 36
# We could get this information by reading standard input,
# but this example will use a fixed set of parameters.
#
# First, initialize the seed
srand();
# How many numbers are needed?
NUM=6;
# what is the minimum number
MIN=1;
# and the maximum?
MAX=36;
# How many numbers will we find? start with 0
Number=0;
while (Number < NUM) {
r=int(((rand() *(1+MAX-MIN))+MIN));
# have I seen this number before?
if (array[r] == 0) {
# no, I have not
Number++;
array[r]++;
}
}
# now output all numbers, in order
for (i=MIN;i<=MAX;i++) {
# is it marked in the array?
if (array[i]) {
# yes
printf("%d ",i);
}
}
printf("\n");
exit;
}
If you do win a lottery, send me a postcard.
# String Functions
Besides numeric functions, there are two other types of function: strings and the whatchamacallits. First, a list of the string functions:
+-------------------------------------------------+
| AWK Table 10 |
| String Functions |
|Name Variant |
+-------------------------------------------------+
|index(string,search) AWK, NAWK, GAWK |
|length(string) AWK, NAWK, GAWK |
|split(string,array,separator) AWK, NAWK, GAWK |
|substr(string,position) AWK, NAWK, GAWK |
|substr(string,position,max) AWK, NAWK, GAWK |
|sub(regex,replacement) NAWK, GAWK |
|sub(regex,replacement,string) NAWK, GAWK |
|gsub(regex,replacement) NAWK, GAWK |
|gsub(regex,replacement,string) NAWK, GAWK |
|match(string,regex) NAWK, GAWK |
|tolower(string) GAWK |
|toupper(string) GAWK |
+-------------------------------------------------+
Most people first use AWK to perform simple calculations. Associative arrays and trigonometric functions are somewhat esoteric features, that new users embrace with the eagerness of a chain smoker in a fireworks factory. I suspect most users add some simple string functions to their repertoire once they want to add a little more sophistication to their AWK scripts. I hope this column gives you enough information to inspire your next effort.
There are four string functions in the original AWK: index(), length(), split(), and substr(). These functions are quite versatile.
# The Length function
What can I say? The length() function calculates the length of a string. I often use it to make sure my input is correct. If you wanted to ignore empty lines, check the length of the each line before processing it with
if (length($0) > 1) { . . . } You can easily use it to print all lines longer than a certain length, etc. The following command centers all lines shorter than 80 characters: #!/bin/awk -f { if (length($0) < 80) {
prefix = "";
for (i = 1;i<(80-length($0))/2;i++) prefix = prefix " "; print prefix$0;
} else {
print;
}
}
# The Index Function
If you want to search for a special character, the index() function will search for specific characters inside a string. To find a comma, the code might look like this:
sentence="This is a short, meaningless sentence.";
if (index(sentence, ",") > 0) {
printf("Found a comma in position \%d\n", index(sentence,","));
}
The function returns a positive value when the substring is found. The number specified the location of the substring.
If the substring consists of 2 or more characters, all of these characters must be found, in the same order, for a non-zero return value. Like the length() function, this is useful for checking for proper input conditions.
# The Substr function
The substr() function can extract a portion of a string. One common use is to split a string into two parts based on a special character. If you wanted to process some mail addresses, the following code fragment might do the job:
#!/bin/awk -f
{
# field 1 is the e-mail address - perhaps
if ((x=index($1,"@")) > 0) { username = substr($1,1,x-1);
hostname = substr($1,x+1,length($1));
# the above is the same as
# hostname = substr($1,x+1); printf("username = %s, hostname = %s\n", username, hostname); } } Click here to get file: email.awk The substr() function takes two or three arguments. The first is the string, the second is the position. The optional third argument is the length of the string to extract. If the third argument is missing, the rest of the string is used. The substr function can be used in many non-obvious ways. As an example, it can be used to convert upper case letters to lower case. #!/usr/bin/awk -f # convert upper case letters to lower case BEGIN { LC="abcdefghijklmnopqrstuvwxyz"; UC="ABCDEFGHIJKLMNOPQRSTUVWXYZ"; } { out=""; # look at each character for(i=1;i<=length($0);i++) {
# get the character to be checked
}
# The Split function
Another way to split up a string is to use the split() function. It takes three arguments: the string, an array, and the separator. The function returns the number of pieces found. Here is an example:
#!/usr/bin/awk -f
BEGIN {
# this script breaks up the sentence into words, using
# a space as the character separating the words
string="This is a string, is it not?";
search=" ";
n=split(string,array,search);
for (i=1;i<=n;i++) {
printf("Word[%d]=%s\n",i,array[i]);
}
exit;
}
The third argument is typically a single character. If a longer string is used, only the first letter is used as a separator.
# NAWK's string functions
NAWK (and GAWK) have additional string functions, which add a primitive SED-like functionality: sub(), match(), and gsub().
Sub() performs a string substitution, like sed. To replace "old" with "new" in a string, use
sub(/old/, "new", string)
If the third argument is missing, $0 is assumed to be string searched. The function returns 1 if a substitution occurs, and 0 if not. If no slashes are given in the first argument, the first argument is assumed to be a variable containing a regular expression. The sub() only changes the first occurrence. The gsub() function is similar to the g option in sed: all occurrence are converted, and not just the first. That is, if the patter occurs more than once per line (or string), the substitution will be performed once for each found pattern. The following script: #!/usr/bin/nawk -f BEGIN { string = "Another sample of an example sentence"; pattern="[Aa]n"; if (gsub(pattern,"AN",string)) { printf("Substitution occurred: %s\n", string); } exit; } Click here to get file: awk_example15.awk print the following when executed: Substitution occurred: ANother sample of AN example sentence As you can see, the pattern can be a regular expression. # The Match function As the above demonstrates, the sub() and gsub() returns a positive value if a match is found. However, it has a side-effect of changing the string tested. If you don't wish this, you can copy the string to another variable, and test the spare variable. NAWK also provides the match() function. If match() finds the regular expression, it sets two special variables that indicate where the regular expression begins and ends. Here is an example that does this: #!/usr/bin/nawk -f # demonstrate the match function BEGIN { regex="[a-zA-Z0-9]+"; } { if (match($0,regex)) {
# RSTART is where the pattern starts
# RLENGTH is the length of the pattern
before = substr($0,1,RSTART-1); pattern = substr($0,RSTART,RLENGTH);
after = substr($0,RSTART+RLENGTH); printf("%s<%s>%s\n", before, pattern, after); } } Click here to get file: awk_example16.awk Lastly, there are the whatchamacallit functions. I could use the word "miscellaneous," but it's too hard to spell. Darn it, I had to look it up anyway. +-----------------------------------------------+ | AWK Table 11 | | Miscellaneous Functions | |Name Variant | +-----------------------------------------------+ |getline AWK, NAWK, GAWK | |getline <file NAWK, GAWK | |getline variable NAWK, GAWK | |getline variable <file NAWK, GAWK | |"command" | getline NAWK, GAWK | |"command" | getline variable NAWK, GAWK | |system(command) NAWK, GAWK | |close(command) NAWK, GAWK | |systime() GAWK | |strftime(string) GAWK | |strftime(string, timestamp) GAWK | +-----------------------------------------------+ # The System function NAWK has a function system() that can execute any program. It returns the exit status of the program. if (system("/bin/rm junk") != 0) print "command didn't work"; The command can be a string, so you can dynamically create commands based on input. Note that the output isn't sent to the NAWK program. You could send it to a file, and open that file for reading. There is another solution, however. # The Getline function AWK has a command that allows you to force a new line. It doesn't take any arguments. It returns a 1, if successful, a 0 if end-of-file is reached, and a -1 if an error occurs. As a side effect, the line containing the input changes. This next script filters the input, and if a backslash occurs at the end of the line, it reads the next line in, eliminating the backslash as well as the need for it. #!/usr/bin/awk -f # look for a as the last character. # if found, read the next line and append { line =$0;
while (substr(line,length(line),1) == "\\") {
# chop off the last character
line = substr(line,1,length(line)-1);
i=getline;
if (i > 0) {
line = line $0; } else { printf("missing continuation on line %d\n", NR); } } print line; } Click here to get file: awk_example17.awk Instead of reading into the standard variables, you can specify the variable to set: getline a_line print a_line; NAWK and GAWK allow the getline function to be given an optional filename or string containing a filename. An example of a primitive file preprocessor, that looks for lines of the format #include filename and substitutes that line for the contents of the file: #!/usr/bin/nawk -f { # a primitive include preprocessor if (($1 == "#include") && (NF == 2)) {
# found the name of the file
filename = $2; while (i = getline < filename ) { print; } } else { print; } } Click here to get file: include.nawk NAWK's getline can also read from a pipe. If you have a program that generates single line, you can use "command" | getline; print$0;
or
"command" | getline abc;
print abc;
If you have more than one line, you can loop through the results:
while ("command" | getline) {
cmd[i++] = $0; } for (i in cmd) { printf("%s=%s\n", i, cmd[i]); } Only one pipe can be open at a time. If you want to open another pipe, you must execute close("command"); This is necessary even if the end of file is reached. # The systime function The systime() function returns the current time of day as the number of seconds since Midnight, January 1, 1970. It is useful for measuring how long portions of your GAWK code takes to execute. #!/usr/local/bin/gawk -f # how long does it take to do a few loops? BEGIN { LOOPS=100; # do the test twice start=systime(); for (i=0;i<LOOPS;i++) { } end = systime(); # calculate how long it takes to do a dummy test do_nothing = end-start; # now do the test again with the *IMPORTANT* code inside start=systime(); for (i=0;i<LOOPS;i++) { # How long does this take? while ("date" | getline) { date =$0;
}
close("date");
}
end = systime();
newtime = (end - start) - do_nothing;
if (newtime <= 0) {
printf("%d loops were not enough to test, increase it\n",
LOOPS);
exit;
} else {
printf("%d loops took %6.4f seconds to execute\n",
LOOPS, newtime);
printf("That's %10.8f seconds per loop\n",
(newtime)/LOOPS);
# since the clock has an accuracy of +/- one second, what is the error
printf("accuracy of this measurement = %6.2f%%\n",
(1/(newtime))*100);
}
exit;
}
# The Strftime function
GAWK has a special function for creating strings based on the current time. It's based on the strftime(3c) function. If you are familiar with the "+" formats of the date(1) command, you have a good head-start on understanding what the strftime command is used for. The systime() function returns the current date in seconds. Not very useful if you want to create a string based on the time. While you could convert the seconds into days, months, years, etc., it would be easier to execute "date" and pipe the results into a string. (See the previous script for an example). GAWK has another solution that eliminates the need for an external program.
The function takes one or two arguments. The first argument is a string that specified the format. This string contains regular characters and special characters. Special characters start with a backslash or the percent character. The backslash characters with the backslash prefix are the same I covered earlier. In addition, the strftime() function defines dozens of combinations, all of which start with "%." The following table lists these special sequences:
+---------------------------------------------------------------+
| AWK Table 12 |
| GAWK's strftime formats |
+---------------------------------------------------------------+
|%a The locale's abbreviated weekday name |
|%A The locale's full weekday name |
|%b The locale's abbreviated month name |
|%B The locale's full month name |
|%c The locale's "appropriate" date and time representation |
|%d The day of the month as a decimal number (01--31) |
|%H The hour (24-hour clock) as a decimal number (00--23) |
|%I The hour (12-hour clock) as a decimal number (01--12) |
|%j The day of the year as a decimal number (001--366) |
|%m The month as a decimal number (01--12) |
|%M The minute as a decimal number (00--59) |
|%p The locale's equivalent of the AM/PM |
|%S The second as a decimal number (00--61). |
|%U The week number of the year (Sunday is first day of week) |
|%w The weekday as a decimal number (0--6). Sunday is day 0 |
|%W The week number of the year (Monday is first day of week) |
|%x The locale's "appropriate" date representation |
|%X The locale's "appropriate" time representation |
|%y The year without century as a decimal number (00--99) |
|%Y The year with century as a decimal number |
|%Z The time zone name or abbreviation |
|%% A literal %. |
+---------------------------------------------------------------+
Depending on your operating system, and installation, you may also have the following formats:
+-----------------------------------------------------------------------+
| AWK Table 13 |
| Optional GAWK strftime formats |
+-----------------------------------------------------------------------+
|%D Equivalent to specifying %m/%d/%y |
|%e The day of the month, padded with a blank if it is only one digit |
|%h Equivalent to %b, above |
|%n A newline character (ASCII LF) |
|%r Equivalent to specifying %I:%M:%S %p |
|%R Equivalent to specifying %H:%M |
|%T Equivalent to specifying %H:%M:%S |
|%t A TAB character |
|%k The hour as a decimal number (0-23) |
|%l The hour (12-hour clock) as a decimal number (1-12) |
|%C The century, as a number between 00 and 99 |
|%u is replaced by the weekday as a decimal number [Monday == 1] |
|%V is replaced by the week number of the year (using ISO 8601) |
|%v The date in VMS format (e.g. 20-JUN-1991) |
+-----------------------------------------------------------------------+
One useful format is
strftime("%y_%m_%d_%H_%M_%S")
This constructs a string that contains the year, month, day, hour, minute and second in a format that allows convenient sorting. If you ran this at noon on Christmas, 1994, it would generate the string
94_12_25_12_00_00
Here is the GAWK equivalent of the date command:
#! /usr/local/bin/gawk -f
#
BEGIN {
format = "%a %b %e %H:%M:%S %Z %Y";
print strftime(format);
}
You will note that there is no exit command in the begin statement. If I was using AWK, an exit statement is necessary. Otherwise, it would never terminate. If there is no action defined for each line read, NAWK and GAWK do not need an exit statement.
If you provide a second argument to the strftime() function, it uses that argument as the timestamp, instead of the current system's time. This is useful for calculating future times. The following script calculates the time one week after the current time:
#!/usr/local/bin/gawk -f
BEGIN {
# get current time
ts = systime();
# the time is in seconds, so
one_day = 24 * 60 * 60;
next_week = ts + (7 * one_day);
format = "%a %b %e %H:%M:%S %Z %Y";
print strftime(format, next_week);
exit;
}
# User Defined Functions
Finally, NAWK and GAWK support user defined functions. This function demonstrates a way to print error messages, including the filename and line number, if appropriate:
#!/usr/bin/nawk -f
{
if (NF != 4) {
error("Expected 4 fields");
} else {
print;
}
}
function error ( message ) {
if (FILENAME != "-") {
printf("%s: ", FILENAME) > "/dev/tty";
}
printf("line # %d, %s, line: %s\n", NR, message, $0) >> "/dev/tty"; } Click here to get file: awk_example18.nawk # AWK patterns In my first tutorial on AWK, I described the AWK statement as having the form pattern {commands} I have only used two patterns so far: the special words BEGIN and END. Other patterns are possible, yet I haven't used any. There are several reasons for this. The first is that these patterns aren't necessary. You can duplicate them using an if statement. Therefore this is an "advanced feature." Patterns, or perhaps the better word is conditions, tend to make an AWK program obscure to a beginner. You can think of them as an advanced topic, one that should be attempted after becoming familiar with the basics. A pattern or condition is simply an abbreviated test. If the condition is true, the action is performed. All relational tests can be used as a pattern. The "head -10" command, which prints the first 10 lines and stops, can be duplicated with {if (NR <= 10 ) {print}} Changing the if statement to a condition shortens the code: NR <= 10 {print} Besides relational tests, you can also use containment tests, i. e. do strings contain regular expressions? Printing all lines that contain the word "special" can be written as {if ($0 ~ /special/) {print}}
or more briefly
$0 ~ /special/ {print} This type of test is so common, the authors of AWK allow a third, shorter format: /special/ {print} These tests can be combined with the AND (&&) and OR (||) commands, as well as the NOT (!) operator. Parenthesis can also be added if you are in doubt, or to make your intention clear. The following condition prints the line if it contains the word "whole" or columns 1 and 2 contain "part1" and "part2" respectively. ($0 ~ /whole/) || (($1 ~ /part1/) && ($2 ~ /part2/)) {print}
This can be shortened to
/whole/ || $1 ~ /part1/ &&$2 ~ /part2/ {print}
There is one case where adding parenthesis hurts. The condition
/whole/ {print}
works, but
(/whole/) {print}
does not. If parenthesis are used, it is necessary to explicitly specify the test:
($0 ~ /whole) {print} A murky situation arises when a simple variable is used as a condition. Since the variable NF specifies the number of fields on a line, one might think the statement NF {print} would print all lines with one of more fields. This is an illegal command for AWK, because AWK does not accept variables as conditions. To prevent a syntax error, I had to change it to NF != 0 {print} I expected NAWK to work, but on some SunOS systems it refused to print any lines at all. On newer Solaris systems it did behave properly. Again, changing it to the longer form worked for all variations. GAWK, like the newer version of NAWK, worked properly. After this experience, I decided to leave other, exotic variations alone. Clearly this is unexplored territory. I could write a script that prints the first 20 lines, except if there were exactly three fields, unless it was line 10, by using NF == 3 ? NR == 10 : NR < 20 { print } But I won't. Obscurity, like puns, is often unappreciated. There is one more common and useful pattern I have not yet described. It is the comma separated pattern. A common example has the form: /start/,/stop/ {print} This form defines, in one line, the condition to turn the action on, and the condition to turn the action off. That is, when a line containing "start" is seen, it is printed. Every line afterwards is also printed, until a line containing "stop" is seen. This one is also printed, but the line after, and all following lines, are not printed. This triggering on and off can be repeated many times. The equivalent code, using the if command, is: { if ($0 ~ /start/) {
triggered=1;
}
if (triggered) {
print;
when I could use
# AWK, NAWK, GAWK, or PERL
This concludes my series of tutorials on AWK and the NAWK and GAWK variants. I originally intended to avoid extensive coverage of NAWK and GAWK. I am a PERL user myself, and still believe PERL is worth learning, yet I don't intend to cover this very complex language. Understanding AWK is very useful when you start to learn PERL, and a part of me felt that teaching you extensive NAWK and GAWK features might encourage you to bypass PERL.
I found myself going into more depth than I planned, and I hope you found this useful. I found out a lot myself, especially when I discussed topics not covered in other books. Which reminds me of some closing advice: if you don't understand how something works, experiment and see what you can discover. Good luck, and happy AWKing...
Other of my Unix shell tutorials can be found here. Other shell tutorials can be found at Heiner's SHELLdorado and Chris F. A. Johnson's Unix Shell Page
I'd like to thank the following for feedback:
Ranjit Singh
Richard Janis Beckert
Christian Haarmann
Kuang He
Guenter Knauf
Chay Wesley
Todd Andrews
Lynn Young
This document was translated by troff2html v0.21 on September 22, 2001 and then manually edited to make it compliant with:
• 0
点赞
• 0
评论
• 0
收藏
• 一键三连
• 扫一扫,分享海报
12-05 131
03-02 39
09-14
02-20
04-12
©️2021 CSDN 皮肤主题: 编程工作室 设计师:CSDN官方博客
1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、C币套餐、付费专栏及课程。
|
# Square root of a positive-definite Markov matrix
Let $M$ be a positive definite Markov matrix, meaning that $M_{ij}\geq0$, $\sum_j{M_{ij}}=1$ and all eigenvalues of $M$ are positive. Given the spectral decomposition $M=V\Lambda V^{-1}$, the matrix $R=\sqrt M$ can be defined as $R=V\sqrt\Lambda V^{-1}$. I would like to know when is $R$ a Markov matrix. By the Perron-Frobenius theorem, $M \bf{1}=\bf{1}$, where $\bf{1}$ is the all-ones vector. Using this fact, it is easy to prove that $\sum_j{R_{ij}}=1$ for all $M$ defined as above. However, when is $R_{ij}\geq0$?
• I added a clarification. Apr 3, 2014 at 15:18
• If is markov matrix also the sum of the elements in a row is one or not? Apr 3, 2014 at 15:20
• Yes, the sum in a row is 1. Apr 3, 2014 at 15:39
• This article could help you to answer your question. A sufficient condition for $\sqrt{M}$ to be a stochastic (Markov) matrix is that $M$ is the inverse of an M-matrix. Apr 3, 2014 at 16:54
|
Statistique - Probabilités - Optimisation et Contrôle
# Sylvain Sardy (Université de Genève), "A phase transition for finding needles in nonlinear haystacks with LASSO articial neural network"
Europe/Paris
René Baire (IMB)
### René Baire
#### IMB
Description
To fit sparse linear associations, a LASSO sparsity inducing penalty provably allows to recover the important features (needles) with high probability in certain regimes even if the sample size is smaller than the dimension of the input vector (haystack). We investigate whether a phase transition also exists to to fit sparse nonlinear associations known as articial neural networks. Using certain activation functions, proper selection of the penalty parameter λ and a sparsity inducing optimization algorithm, we observe identification of good needles on simulated and real data.
|
# Division fields of abelian varieties over function fields
Let $k$ be a finitely generated field (for example a finite field or a number field) and $K/k$ a finitely generated regular extension with $trdeg(K/k)=1$. Let $A/K$ be a principally polarized abelian variety. For every prime number $l\neq char(K)$ let $K_l:=K(A[l])$ be the field obtained by adjoining the coordinates of the $l$-torsion points of $A$ to $K$. For such a function field extension $K_l/K$ it is natural to consider the algebraic closure $k_l$ of $k$ in $K_l$, i.e. the field of elements of $K_l$ which are algebraic over $k$. In brief: I am wondering what we can say about $k_l$.
Of course, the existence of the Weil pairing afforded by the principal polarization forces $k(\mu_l)\subset k_l$ for every prime $l\neq char(K)$. This inclusion needs not be an equality. (Consider the case where $A$ has abelian subvarieties defined over $k$.)
Let us call an abelian variety $B/K$ weakly isotrivial, if there is a non-zero abelian variety $B_0/\overline{k}$ and a $\overline{K}$-homomorphism $B_{0, \overline{K}}\to A_{\overline{K}}$ with finite kernel.
Question: Suppose that $A$ is not weakly isotrivial. Is it true that there is a constant $l_0(A/K)$ such that $k_l=k(\mu_l)$ for every prime number $l\ge l_0(A/K)$? (If no, is this true after replacing $K$ by a finite extension and $k$ by its algebraic closure in this extension''?)
Remarks:
a) For example if $char(K)=0$, $End(A)=\mathbb{Z}$ and $\dim(A)=2, 6$ or odd, then the answer to this question is yes''. The proof we have for that case is somewhat special, however, because then we have exceptionally good control over the associated monodromy groups. (We use that $Gal(K_l/K)=GSp(2\dim(A), \mathbb{F}_l)$ for almost all primes $l$, some group theory and the Mordell-Lang theorem.)
b) The special case $\dim(A)=1$ is contained in the book on modular forms by S. Lang.
c) It is clear in the situation of the question that there is a constant $l(A/K)$ such that $[K_l:K]>[k_l:k]$ for all primes $l\ge l(A/K)$. Otherwise we would have $K_l=k_l K$ and hence $K_l\subset \overline{k} K$ for infinitely many primes $l$. This would imply $|A(\overline{k}K)[l]|=l^{2\dim(A)}$ for infinitely many primes $l$, which is not the case by the Mordell-Lang theorem.
-
|
×
INTELLIGENT WORK FORUMS
FOR ENGINEERING PROFESSIONALS
Are you an
Engineering professional?
Join Eng-Tips Forums!
• Talk With Other Members
• Be Notified Of Responses
• Keyword Search
Favorite Forums
• Automated Signatures
• Best Of All, It's Free!
*Eng-Tips's functionality depends on members receiving e-mail. By joining you are opting in to receive e-mail.
#### Posting Guidelines
Promoting, selling, recruiting, coursework and thesis posting is forbidden.
# Scatter Plot Scales
## Scatter Plot Scales
(OP)
Hello all.
Using Excel 2010, version 14.0.7015.1000
Does anyone know if you can fix the horizontal scale on a scatter plot so that dates appear on a fixed pattern? I have many years of data and I would like the plots to have 1/1/xxxx for the labels. I can get this to work if the data only spans a couple of years by setting the major unit to 365.25, but as the number of years increases this stops working.
Same general question applies to trying to get the first day of the month to appear on the scale.
Anybody found a solution to this?
Mike Lambert
### RE: Scatter Plot Scales
hi,
Use a LINE Chart with Markers.
Set the Line to No Line.
Format the x-axis Type as Date Axis.
Select Units as Years.
Skip,
Just traded in my OLD subtlety...
for a NUance!
### RE: Scatter Plot Scales
(OP)
Thanks Skip!
I had tried using a line plot several years (versions) ago and that would not maintain the relative position of the data. I see that particular issue has been fixed.
Thanks again.
mike
Mike Lambert
### RE: Scatter Plot Scales
(OP)
Looks like I was a little early in saying this was solved.
So the line graph works fine if all of you data has the same dates. However, I you try to add data that has a different date range, it does not plot correctly.
Here is a quick example of the problem I'm seeing with made up data. As you can see, series 2 is not plotting at the correct location relative to the x (date) axis.
Any ideas?
Mike Lambert
### RE: Scatter Plot Scales
Here's your source data as it should be structured...
date series 1 series 2
1/1/2000 10
1/1/2001 20
1/1/2005 15
1/1/2002 7
1/1/2004 10
You implicitly assigned your second table data DATEs to the secondary axis OR your x-axis implicitly changed from a NUMBER axis, which would include DATE, to a CATEGORY axis, which is simply point by point, rather than numerically proportional.
Skip,
Just traded in my OLD subtlety...
for a NUance!
### RE: Scatter Plot Scales
Note that Line plots do not really make use of the numerical values of the abcissa; they're solely used as labels.
Part of the problem is that you are using a truncation of the number of days of the year, i.e., a year is actually more like 365.2422 days. However, there may be a simpler way. Since you know the exact mapping of the first day of each month to the timestamp that Excel uses, you can create a table of month lengths that you can use to normalize their actual lengths to a uniform spacing. With some brute-forcing, you could remap the time stamps to look like:
1/1/2000 200008.333
1/1/2001 200108.333
1/1/2005 200508.333
1/1/2002 200208.333
1/1/2004 200408.333
8.333 is 1/12th of 100
TTFN
I can do absolutely anything. I'm an expert!
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers
### RE: Scatter Plot Scales
Sorry, IRstuff, that is not necessarily true.
In the data I posted there is no data for 2003, yet I have 1/1/2003 on the x-axis with no y-axis value obvoiusly.
The "trick" is to change the x-axis to DATE and the UNIT as YEAR. Solves the 365.25 problem as well.
Skip,
Just traded in my OLD subtlety...
for a NUance!
### RE: Scatter Plot Scales
I didn't know that Line Charts worked like XY Charts when the X axis was set to date format.
Actually (at least in XL 2016) if the X axis data is date numbers the chart automatically displays in date format, although if you want to set the label spacing you have to change the axis type from Automatic to Date.
Doug Jenkins
Interactive Design Services
http://newtonexcelbach.wordpress.com/
### RE: Scatter Plot Scales
Is it possible to get vertical grid lines on a line chart? I can only find options for the horizontal grid lines.
Doug Jenkins
Interactive Design Services
http://newtonexcelbach.wordpress.com/
### RE: Scatter Plot Scales
#### Quote (I)
Is it possible to get vertical grid lines on a line chart? I can only find options for the horizontal grid lines.
Select the chart
Select the Design ribbon under Chart Tools
Click the drop-down arrow on the "Add Chart Element" Icon
Select Grid Lines, then Primary Major Vertical, or Primary Minor Vertical
Doug Jenkins
Interactive Design Services
http://newtonexcelbach.wordpress.com/
### RE: Scatter Plot Scales
Bottom line: If you want the effect of x-y chart with dates on the x-axis, to properly display irregular units (Month, Year) you need to use a Line Chart with Markers, Series No Line, Date Axis and appropriate Units.
Skip,
Just traded in my OLD subtlety...
for a NUance!
### RE: Scatter Plot Scales
Skip,
Just traded in my OLD subtlety...
for a NUance!
### RE: Scatter Plot Scales
Base Units. Good point!
And if you needed to plot Date/Time, the Line Chart would not do the job: You'd need a Scatter Chart, But then Months/Years would not be a display factor.
Skip,
Just traded in my OLD subtlety...
for a NUance!
#### Red Flag This Post
Please let us know here why this post is inappropriate. Reasons such as off-topic, duplicates, flames, illegal, vulgar, or students posting their homework.
#### Red Flag Submitted
Thank you for helping keep Eng-Tips Forums free from inappropriate posts.
The Eng-Tips staff will check this out and take appropriate action.
Close Box
# Join Eng-Tips® Today!
Join your peers on the Internet's largest technical engineering professional community.
It's easy to join and it's free.
Here's Why Members Love Eng-Tips Forums:
• Talk To Other Members
• Notification Of Responses To Questions
• Favorite Forums One Click Access
• Keyword Search Of All Posts, And More...
Register now while it's still free!
|
# Center of mass of an equilateral triangle (Kleppner)
geoffrey159
## Homework Statement
Find the center of mass of an equilateral triangle with side ##a##
## Homework Equations
## \vec R = \frac{1}{M} \int \vec r \ dm ##
## dm = \frac{M}{A} dx dy ##
## A = \frac{\sqrt{3}}{4}a^2 ##
## The Attempt at a Solution
I set a pair of orthogonal axis ##(\vec x,\vec y)## so that one side of the triangle lies on the ##x## axis, and one vertex is at the origin.
I find the following position for the center of mass:
##R_x = \frac{1}{A} (\int_0^{\frac{a}{2}}\int_0^{\sqrt{3}x} x \ dy\ dx + \int_\frac{a}{2}^{a}\int_0^{\sqrt{3}(a-x)} x \ dy\ dx ) = \frac{a}{2}##
##R_y = \frac{1}{A} (\int_0^{\frac{a}{2}}\int_0^{\sqrt{3}x} y \ dy\ dx + \int_\frac{a}{2}^{a}\int_0^{\sqrt{3}(a-x)} y \ dy\ dx ) = \frac{a}{2\sqrt{3}}##
Do you think it is correct?
Homework Helper
Gold Member
2022 Award
That's the correct answer. It would have been easier to put the apex of the triangle on the y-axis: then, by symmetry, R_x = 0.
geoffrey159
I did not think about that! Thank you!
duarthiago
##R_x = \frac{1}{A} (\int_0^{\frac{a}{2}}\int_0^{\sqrt{3}x} x \ dy\ dx + \int_\frac{a}{2}^{a}\int_0^{\sqrt{3}(a-x)} x \ dy\ dx ) = \frac{a}{2}##
##R_y = \frac{1}{A} (\int_0^{\frac{a}{2}}\int_0^{\sqrt{3}x} y \ dy\ dx + \int_\frac{a}{2}^{a}\int_0^{\sqrt{3}(a-x)} y \ dy\ dx ) = \frac{a}{2\sqrt{3}}##
Please let me ask: I think I understood why the upper term from integration interval is equal to $x\sqrt{3}$, is the height in function of x, but what is the idea behind the upper term from $\int_0^{\sqrt{3}(a-x)} x \ dy$?
Staff Emeritus
Please let me ask: I think I understood why the upper term from integration interval is equal to $x\sqrt{3}$, is the height in function of x, but what is the idea behind the upper term from $\int_0^{\sqrt{3}(a-x)} x \ dy$?
Oh, of course, it would be something like a piecewise function where $x \sqrt{3}$ if $0 \leq x \leq \frac{a}{2}$ and $a \sqrt{3} - x \sqrt{3}$ if $\frac{a}{2} < x \leq a$. Thank you for your answer.
|
Suggested languages for you:
Americas
Europe
Q. 2
Expert-verified
Found in: Page 436
### Precalculus Enhanced with Graphing Utilities
Book edition 6th
Author(s) Sullivan
Pages 1200 pages
ISBN 9780321795465
# In Problems $1-3$, convert each angle in degrees to radians. Express your answer as a multiple of $\mathrm{\pi }$.$2.-400°$
The angle in radian measure is $-\frac{20\mathrm{\pi }}{9}$.
See the step by step solution
## Step 1. Given information
Given angle measure in degree is $-400°$.
To convert degree measure to radian, multiply degree measure to $\frac{\mathrm{\pi }}{180°}$.
This gives
$-400°×\frac{\mathrm{\pi }}{180°}=-\frac{20\mathrm{\pi }}{9}$.
|
# RF Ground Stitching in Proteus Ares 8
I am using Proteus Ares 8 to create a PCB for a simple RF circuit. I have a double sided copper board, filled with groundplane on both sides, with a single 50 Ohm microstripline across the board to two SMA sockets, just as an example:
What I am trying to achieve is what is called RF stitching between the bottom and top copper along the edge of the strip-line. Something like the following:
Here I have placed a track alternating between layers, which works, but gets unworkable when the design is more elaborate. I would like to place vias alone, but, then the copper is releaved around the via, because it's not connected to ground.
About the best I can seem to do is to use a small drill hole through the board, set up to be the same size as the via. I can make the hole coated.
However, using this method, I cannot seem to remove the relief around the board. The best I can get is is the thermal relief around the pad, but since my concern is RF grounding, I would like to remove this thermal relief.
In the pad settings, I can select different options for the relief, such as Thermal (shown above) or Thermal-X (45 degree rotated cross), but, I am unable to remove the relief completely with either the Solid or None option.
Any help or suggestions for this would be greatly appreciated.
George, M1GEO.
• Can you set your via's net to GND? Does removing the little linking traces break the via's association with a net? Perhaps copy-pasting one of the vias is a work-around. (I've never used Proteus.) – user2943160 Sep 7 '16 at 1:59
• Sorry for delay. Vias cannot be set to ground directly, because they take on the net from a track - In the first image, see they're connected by a track from the SMA ground. As you ask, removing the track removes the net association. But not with pads. Pads can be set to the GND net directly. Copy-pasting one of the vias is a good idea - I will try and report back. – M1GEO Sep 8 '16 at 11:35
• @user2943160 I tried to copy-paste the via once it was on the ground net, but it takes any traces connected to the via with the copy-paste. See: my attempt here – M1GEO Sep 8 '16 at 12:16
• In the end I used pads modified to be covered with resist and made them the same size as vias, with the plated option selected, and the relief set to solid! The boards have gone off to Fab. The Gerbers looked okay. – M1GEO Sep 9 '16 at 18:33
• Go ahead and write that out as a self-answer to your question, then. – user2943160 Sep 10 '16 at 0:07
Use Pads instead of Vias. Create a component with one pad called Ex: stitchingVia. Place as many as you want in the schematic. and connect them to GND.
in the PCB part View the pad properties and change the Relief to none.
,
Do not connect the pads with tracks. Place a Zone and select the NET to GND and disable the Relief Rins checkbox.
You can create a new pad to use as a via. Someone asked similar questions about Proteus and its uses HERE and LabCenter responded to answer a few questions. The bit you are interested in would be what they say about vias:
"In Proteus, vias are treated as transient objects so they pick up the net to which they are tracked our routed. This connectivity is re-scanned and refreshed when the netlist changes, the project re-opened etc. Stitching to a zone is therefore done with pads. Place a pad, edit and assign to net. Then, either replicate or block copy as required. The clearance will default to that specified in the edit zone dialogue but can be overriden in the pad dialogue itself."
This was also something I was wanting to do before, so am glad I found the response directly from LabCenter. I linked the page in case there were other issues on there you could potentially need some answers to.
For completeness, in Proteus 8.8 and above, Zone Stitching is supported directly by the ARES: See this LabCenter instructional video on YouTube.
|
# Distinction between problems (such as equations), and universal truths
How the distinction between problems (find/describe such values of x that… ) and universal truths (identities) is taught to secondary-school students and higher? Especially in English-speaking countries.
Ī mean the following thing (sorry if Ī present an obvious stuff in a confused way): if we have a relation, equality or else, and a proposition with this relation at the top and some variable(s), then we have two distinct useful cases, although notation might be the same. In the first case the proposition is not necessarily true and we have to find values of the variable where it is true. In the second case the proposition is necessarily true (law/identity/“theorem”) and variables become bound by universal quantification. Universal truths can be used in subsequent equivalent transformations of expressions, equations, inequalities, or something alike. Students must learn to distinguish these cases, haven’t they? Examples:
Binary relations: “=” (equals) “>” (greater than)
Problems: “$x^2 - x - 1 = 0$” (equation) “$x^2 - x - 1 > 0$”
Universal truths: “$(a+b)(a-b) = a^2 - b^2$” “$\exp(x) > 0$”
How may be used: “α² = δ² ⇒ α = δ ∨ α = −δ ” exp(α)⋅ℓ > 0 ⇔ ℓ > 0
(transformed equation) (transformed inequality (problem))
“(α + 1)(α − 1) = α² − 1 ” cosh α > 0
(transformed expression) (derived inequality (truth))
Why am Ī preoccupied with it? A year ago, the only person happened to cooperate with me about precise equation–identity distinction in English Wikipedia was a mathematician from France; was a coincidence he wasn’t taught in English? Today Ī argued on this topic (against two presumedly native English speakers) at a neighbouring site and become frustrated. Could an English speaker be correct in insisting that something like an+1 = a × an is an “equation” and calling the “=” symbol in it “equation sign”?
• I do see this as a very valid question, but I think you are not presenting it correctly. The problem is not human language, the problem is the looseness in mathematical language. I answered assuming this was for general education which I think is highly inappropriate. However, this is very appropriate to mathematical notations which affect both mathematicians and computer scientists. I have always found mathematical proofs to be too loose for my liking--there was never any overarching structure enforced. This is a problem for computer scientists trying to study theorem proving. Oct 30 '14 at 8:19
• And in an even broader sense I have seen that there is a wide range of notations across the mathematical disciplines (including mathematics, physics, computer science, etc.). This includes not only mathematical notations but even terms used. This is a real problem because it makes it difficult to communicate between the disciplines. Oct 30 '14 at 8:22
I am comfortable saying "Solve the equation $x+2$=4" and also saying "Using the equation $(a+b)(a-b)=a^2-b^2$, we see that...".
On other other hand I would only ever speak of solving an equation, not an identity, and I would be willing to use the word "identity" in my second example above.
So I think I would say that an identity is a kind of equation, one which is universally quantified.
In general, one can avoid confusion jut by saying what you mean, as in "Solve the equation $x+2=4$", or "For all numbers $a$ and $b$, $(a+b)(a-b)=a^2-b^2$"
• Of course, boys and girls can be told about what does the teacher mean in each particular case. But how do they learn to understand the difference when they see a formula on a blackboard? Oct 25 '14 at 10:21
• @IncnisMrsi What exactly are you hoping for? I think, in general, people pick these things up from context, in exactly the same way they learn to distinguish the two uses of the word "lead" in "The lead developer on the project" and "The alchemist had finally succeeded in turning lead into gold". Also, I think generally one should write "for each" on the board where it is appropriate. Sometimes the extra symbols $\equiv$ or $:=$ can be helpful as well. Oct 25 '14 at 14:30
• @IncnisMrsi I really do not appreciate this level of hostility. You asked a question, and I am telling you that, as far as I know, there is no common English language distinction between "equality for a particular $x$" and "Equality for all $x$", unless you explicitly state that somehow. If a student is given $x^2 = 4$ on a homework assignment, none of them will say "False: $1^2 \neq 4$ is a counterexample". Equality does have a meaning all on its own, after all. Oct 25 '14 at 15:15
• I would say that an "equation" is any statement expressing equality of two expressions. Both $x^2=1$ and $x(x)=x^2$ are equations, but they are (generally) quantified differently. An identity is a specific kind of equation which is generally universally quantified somehow. So $x(x)=x^2$ is an identity. So it is okay to call $x(x)=x^2$ both an equation and an identity. Oct 25 '14 at 15:23
• Interesting. I'm from Poland, and Polish for "equation" (in the sense: something you solve for x) is "równanie". Not only I'd never use this word for an identity, but if a student of mine called e.g. $a^2-b^2=(a-b)(a+b)$ a "równanie", I would correct his erroneous usage of the word. (And I usually spend some time with first year students explaining how the innocently-looking equality sign can mean different things.) Oct 28 '14 at 20:19
In my experience (being taught in a US school), there is no explicit distinction between an identity and an equation. The reason being is that to make the distinction you have to introduce first order logic which is not generally appropriate for secondary (high school) mathematics education. Students can usually ascertain an identity (a formula) from an equation to be solved so this is not usually an issue. The students that have trouble using formulas or have trouble solving equations are unlikely to benefit from a more detailed explanation (the detailed explanation will likely only further confuse them and make the topic of mathematics far less tangible and far more abstract).
Examples:
We are generally taught that given $ax^2 + bx + c = 0$ such that $a \neq 0$ that $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$. This is an identity. It's a first order equation. Formally it would state the following:
$$\forall x \in \mathbb{C}.\forall a, b, c\in \mathbb{R}:\left( \left(a\neq 0\wedge ax^2 + bx + c = 0\right) \rightarrow x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\right)$$
1. Solving an equation:
Solve the following equation: $10x^2 + 6x + 8 = 0$ wher $x$ must be a real number.
The very first thing to do when solving an equation is to determine whether or not it can be solved. This too is a first order logic equation. The question (which must be answered) is whether or not the following is true:
$$\exists x\in \mathbb{R}: 10x^2 + 6x + 8 = 0$$
In this particular instance, the answer would be that the above is false therefore there are no solutions. But how do you know that? How do you prove that?
Here is a different example: Solve the following: $x^2 + 6x + 8 = 0$. Again, we must first decide whether or not the following first order equation is true or false:
$$\exists x\in \mathbb{R}: x^2 + 6x + 8 = 0$$
In this case it is true (how do we know?). Once we know the equation can be satisfied, we next attempt to find the particular solutions. This is not easy and in fact, "impossible" in general. Here is an example:
Find real values of $x$ that satisfy the following equation
$$e^x = x + 2$$
It can be proved that a solution exists:
$$f(x) = e^x - x - 2 \rightarrow \text{find } f(x) = 0 \\ f'(x) = e^x - 1 \rightarrow f'(x) = 0 \rightarrow e^x = 1 \rightarrow x = \ln(1) = 0$$
Since the derivative changes from negative to positive at $x = 0$, this represents a relative minimum (and since this is the only extrema this is the global minimum). $f(0) = 1 - 0 - 2 = -1$. Further since $\lim_{x\rightarrow-\infty} e^x = 0$ and $\lim_{x\rightarrow-\infty} = -\infty$ (thus subtracting negative infinity creates a positive infinity) and that $e^x$ dominates $-x$ such that $\lim_{x\rightarrow\infty} e^x - x = +\infty$, we know that this function, $f(x)$, goes towards $+\infty$ on either side, there must be two spots where $f(x) = 0$ and therefore there are exactly two real solutions to $e^x = x + 2$.
Yet even though we know the equation to be satisfiable, it is not clear how (or analytically possible) to solve for those two values.
So when solving equations there are two separate issues that we usually do not identify in secondary education (largely because it's beyond the scope). The first is, whether or not there is a solution and the second is how to find the solution once you can show that one exists--note that the first step becomes tedious to the point of being prohibitive towards learning for the vast majority of problems encountered in elementary to high school level mathematics courses.
• «⋯ = 1 − 0 − 2 = −2 ». ORLY? Oct 27 '14 at 12:15
• By the way, to prove no more than two real solutions you have to make some more analysis beyond differentiability and limits. Oct 27 '14 at 12:27
• @IncnisMrsi "By the way, to prove no more than two real solutions you have to make some more analysis beyond differentiability and limits." If the function is continuous over all of the reals (which it is) and has only a single (real) critical point (which it does) and we know that the function ends up at $+ \infty$ on both sides, then the fact that its absolute minimum is $-2$ means that it must cross the x-axis to the left and to the right of that critical point. Further, since there are no other (real) critical points, it could not possibly turn back around towards the x-axis. Oct 27 '14 at 17:32
• The fact that its absolute minimum is −2 means that it’s easier to count extrema of a function than perform two arithmetic operations on integers ☺ Oct 27 '14 at 18:51
• The mistake in elementary arithmetic was not fixed for two days, so author’s deviation from the correct value of $f(0)$ is added to the score of the posting. Oct 30 '14 at 6:10
|
## Essential University Physics: Volume 1 (3rd Edition)
$v_1=-67 cm/s$
We know that the sum of the momentums must be 0 due to conservation of momentum: $0= m_1v_1+m_2v_2$ $v_1=- \frac{m_2v_2}{m_1}$ $v_1=- \frac{(91 \ mg)(47 cm/s)}{64 mg}$ $v_1=-67 cm/s$
|
How to handle missing data when determining differences between groups using chi-squared or Fisher's exact test
I have 168 rows of patient data: 104 controls and 64 cases. I want to know if albumin status (low or high) is related to case/control status. I made a table using R:
> table(Albumin, Status, useNA = "ifany")
Albumin Control Case
Low 51 16
High 39 32
<NA> 14 16
As you can see, I have missing data. I did a chi-squared test on the entire table:
> chisq.test(table(Albumin, Status, useNA = "ifany"))$p.value [1] 0.006222513 Question: Should I perform the test on the 3x2 table above that includes the missing data? Or should I perform it on a 2x2 table that excludes the missing data, as shown below? > chisq.test(table(Albumin, Status))$p.value
[1] 0.01496166
Problem: In this example, both approaches yield significant p-values. However, I have other variables for which the difference is insignificant when missing values are excluded, but significant when they are included. I have some variables with only one missing value, as well.
Question: How should I apply the chi-squared test in those situations? Is my choice of test correct, or should I be using Fisher's exact test or some other test? And are there any diagnostics that I need to do before even applying these tests?
|
# SYS01 DEM03
Hi Team, I am struggling with DEM03 and the SYS01 Time Settings by Week
Firstly, in SYS01, 1st Forecast Year remains empty, please see attached the blueprint of the same and the associated formulae as well as the time settings view.
This in turn I believe leads to DEM03 Demand forecast to be empty for Baseline Forecast.
Alternatively, it is also possibly related to the time settings, which I enclose.
May I have your thoughts please? I'm looking forward to hearing from you.
Thank you, Matthias
Tagged:
Hi @mstanisic ,
Kindly look at the formula, you are calculating the Yearvalue of 'Current Period?' line item. But your FY summary boolean for 'Current Period?' line item would be BLANK as summary is set to 'None' as seen in the screenshot. So set the summary of 'Current Period?' line item to 'Any' and you will get the booleans in place. Refer the screenshot below
I hope this helps.
~Anand
|
# Having major issues with this problem - can someone get me started?
• Jun 9th 2012, 09:00 PM
kirshchereza
Having major issues with this problem - can someone get me started?
"The relationship between the load on a reel, L (kilotons) per metre (L kNm-1) and the reel diameter, x meters, is modelled by a graph consisting of two parabolic arcs, AB and BC
Picture of graph with L on the y-axis and x(meters) on the x axis with a parabola with points in order of A,D,E,F,B and C on the parabola.
Arc AB is part of the parabola L=px^2 +qx+r. Points D(0.1, 2.025), E(0.2,2.9) and F(0.3, 3.425) lie on the arc AB. Set up a system of three simultaneous equations relating to p, q and r. DO NOT SOLVE EQUATIONS.
Can anyone give me any help with this without the graph?
Thank you so much!
• Jun 10th 2012, 04:44 AM
skeeter
Re: Having major issues with this problem - can someone get me started?
if three points $(x_1,y_1)$ , $(x_2,y_2)$ , $(x_3,y_3)$ lie on the parabola $y = px^2 + qx + r$ , then ...
$y_1 = px_1^2 + qx_1 + r$
$y_2 = px_2^2 + qx_2 + r$
$y_3 = px_3^2 + qx_3 + r$
|
# Tagged: square-free
## Problem 420
In this post, we study the Fundamental Theorem of Finitely Generated Abelian Groups, and as an application we solve the following problem.
Problem.
Let $G$ be a finite abelian group of order $n$.
If $n$ is the product of distinct prime numbers, then prove that $G$ is isomorphic to the cyclic group $Z_n=\Zmod{n}$ of order $n$.
|
# Can you take three letters away from a four-letter word and manage to have it mean the same thing?
+1 vote
210 views
Can you take three letters away from a four-letter word and manage to have it mean the same thing?
posted Jun 21, 2016
Share this puzzle
## 1 Answer
+2 votes
the word is "five"
if we take three letters 'f', 'i', 'e' ; the remaining will be 'v' , which is roman for five.
answer Jun 21, 2016 by
Similar Puzzles
+2 votes
There is a 5 letter word, if you take away the 5th letter, it is still the same.
If you take away the 1st letter it is also the same.
Even if you take away the 3rd letter, it is still the same.
You can even take away the 5th, 3rd and the 1st, at the same time!!!!!!!!
|
## Reading the Comics, May 23, 2020: Parents Can’t Do Math Edition
This was a week of few mathematically-themed comic strips. I don’t mind. If there was a recurring motif, it was about parents not doing mathematics well, or maybe at all. That’s not a very deep observation, though. Let’s look at what is here.
Liniers’s Macanudo for the 18th puts forth 2020 as “the year most kids realized their parents can’t do math”. Which may be so; if you haven’t had cause to do (say) long division in a while then remembering just how to do it is a chore. This trouble is not unique to mathematics, though. Several decades out of regular practice they likely also have trouble remembering what the 11th Amendment to the US Constitution is for, or what the rule is about using “lie” versus “lay”. Some regular practice would correct that, though. In most cases anyway; my experience suggests I cannot possibly learn the rule about “lie” versus “lay”. I’m also shaky on “set” as a verb.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 18th shows a mathematician talking, in the jargon of first and second derivatives, to support the claim there’ll never be a mathematician president. Yes, Weinersmith is aware that James Garfield, 20th President of the United States, is famous in trivia circles for having an original proof of the Pythagorean theorem. It would be a stretch to declare Garfield a mathematician, though, except in the way that anyone capable of reason can be a mathematician. Raymond Poincaré, President of France for most of the 1910s and prime minister before and after that, was not a mathematician. He was cousin to Henri Poincaré, who founded so much of our understanding of dynamical systems and of modern geometry. I do not offhand know what presidents (or prime ministers) of other countries have been like.
Weinersmith’s mathematician uses the jargon of the profession. Specifically that of calculus. It’s unlikely to communicate well with the population. The message is an ordinary one, though. The first derivative of something with respect to time means the rate at which things are changing. The first derivative of a thing, with respect to time being positive means that the quantity of the thing is growing. So, that first half means “things are getting more bad”.
The second derivative of a thing with respect to time, though … this is interesting. The second derivative is the same thing as the first derivative with respect to time of “the first derivative with respect to time”. It’s what the change is in the rate-of-change. If that second derivative is negative, then the first derivative will, in time, change from being positive to being negative. So the rate of increase of the original thing will, in time, go from a positive to a negative number. And so the quantity will eventually decline.
So the mathematician is making a this-is-the-end-of-the-beginning speech. The point at which the the second derivative of a quantity changes sign is known as the “inflection point”. Reaching that is often seen as the first important step in, for example, disease epidemics. It is usually the first good news, the promise that there will be a limit to the badness. It’s also sometimes mentioned in economic crises or sometimes demographic trends. “Inflection point” is likely as technical a term as one can expect the general public to tolerate, though. Even that may be pushing things.
Gary Wise and Lance Aldrich’s Real Life Adventures for the 19th has a father who can’t help his son do mathematics. In this case, finding square roots. There are many ways to find square roots by hand. Some are iterative, in which you start with an estimate and do a calculation that (typically) gets you a better estimate of the square root you want. And then repeat the calculation, starting from that improved estimate. Some use tables of things one can expect to have calculated, such as exponentials and logarithms. Or trigonometric tables, if you know someone who’s worked out lots of cosines and sines already.
Henry Scarpelli and Craig Boldman’s Archie rerun for the 20th mentions romantic triangles. And Moose’s relief that there’s only two people in his love triangle. So that’s our geometry wordplay for the week.
Bill Watterson’s Calvin and Hobbes repeat for the 20th has Calvin escaping mathematics class.
Julie Larson’s The Dinette Set rerun for the 21st fusses around words. Along the way Burl mentions his having learned that two negatives can make a positive, in mathematics. Here it’s (most likely) the way that multiplying or dividing two negative numbers will produce a positive number.
This covers the week. My next Reading the Comics post should appear at this tag, when it’s written. Thanks for reading.
## Reading the Comics, March 28, 2020: Closing A Week Edition
I know; I’m more than a week behind the original publication of these strips. The Playful Math Education Blog Carnival took a lot of what attention I have these days. I’ll get caught up again soon enough. Comic Strip Master Command tried to help me, by having the close of a week ago being pretty small mathematics mentions, too. For example:
Thaves’s Frank and Ernest for the 26th is the anthropomorphic numerals joke for the week. Also anthropomorphic letters, for a bonus.
Craig Boldman and Henry Scarpelli’s Archie for the 27th has Moose struggling in mathematics this term. This is an interesting casual mention; the joke, of Moose using three words to describe a thing he said he could in two, would not fit sharply for anything but mathematics. Or, possibly, a measuring class, but there’s no high school even in fiction that has a class in measuring.
Bud Blake’s Vintage Tiger for the 27th has Tiger and Hugo struggling to find adjective forms for numbers. We can giggle at Hugo struggling for “quadruple” and going for something that makes more sense. We all top out somewhere, though, probably around quintuple or sextuple. I have never known anyone who claimed to know what the word would be for anything past decuple, and even staring at the dictionary page for “decuple” I don’t feel confident in it.
Hilary Price’s Rhymes With Orange for the 28th uses a blackboard full of calculations as shorthand for real insight into science. From context they’re likely working on some physics problem and it’s quite hard to do that without mathematics, must agree.
Ham’s Life On Earth for the 28th uses $E = mc^2$ as a milestone in a child’s development.
John Deering’s Strange Brew for the 28th name-drops slide rules, which, yeah, have mostly historical or symbolic importance these days. There might be some niche where they’re particularly useful (besides teaching logarithms), but I don’t know of it.
And what of the strips from last week? I’ll discuss them in an essay at this link, soon, I hope. Take care, please.
## Reading the Comics, January 13, 2020: The State Pinball Championships Were Yesterday Edition
I am not my state’s pinball champion, although for the first time I did make it through the first round of play. What is important about this is that between that and a work trip I needed time for things which were not mathematics this past week. So my first piece this week will be a partial listing of comic strips that, last week, mentioned mathematics but not in a way I could build an essay around. … It’s not going to be a week with long essays, either, though. Here’s a start, though.
Henry Scarpelli’s Archie rerun for the 12th of January was about Moose’s sudden understanding of algebra, and wish for it to be handy. Well, every mathematician knows the moment when suddenly something makes sense, maybe even feels inevitably true. And then we do go looking for excuses to show it off.
Art Sansom and Chip Sansom’s The Born Loser for the 12th has the Loser helping his kid with mathematics homework. And the kid asking about when they’ll use it outside school.
Jason Chatfield’s Ginger Meggs for the 13th has Meggs fail a probability quiz, an outcome his teacher claims is almost impossible. If the test were multiple-choice (including true-or-false) it is possible to calculate the probability of a person making wild guesses getting every answer wrong (or right) and it usually is quite the feat, at least if the test is of appreciable length. For more open answers it’s harder to say what the chance of someone getting the question right, or wrong, is. And then there’s the strange middle world of partial credit.
My love does give multiple-choice quizzes occasionally and it is always a source of wonder when a student does worse than blind chance would. Everyone who teaches has seen that, though.
Jan Eliot’s Stone Soup Classics for the 13th just mentions the existence of mathematics homework, as part of the morning rush of events.
Ed Allison’s Unstrange Phenomenon for the 13th plays with optical illusions, which include several based on geometric tricks. Humans have some abilities at estimating relative areas and distances and lengths. But they’re not, like, smart abilities. They can be fooled, basically because their settings are circumstances where there’s no evolutionary penalty for being fooled this way. So we can go on letting the presence of arrow pointers mislead us about the precise lengths of lines, and that’s all right. There are, like, eight billion cognitive tricks going on all around us and most of them are much more disturbing.
That’s a fair start for the week. I hope to have a second part to this Tuesday. Thanks for reading.
## Reading the Comics, January 7, 2020: I Could Have Slept In Edition
It’s been another of those weeks where the comic strips mentioned mathematics but not in any substantive way. I haven’t read Saturday’s comics yet, as I write this, so perhaps the last day of the week will revolutionize things. In the meanwhile, here’s the strips you can look at and agree say ‘mathematics’ in them somewhere.
Dave Whamond’s Reality Check for the 5th of January, 2020, uses a blackboard full of arithmetic as signifier of teaching. The strip is an extended riff on Extruded Inspirational Teacher Movie product. I like the strip, but I don’t fault you if you think it’s a lot of words deployed against a really ignorable target.
Henry Scarpelli’s Archie rerun for the 7th has Archie not doing his algebra homework.
Bill Bettwy’s Take It From The Tinkersons on the 6th started a short series about Tillman Tinkerson and his mathematics homework. The storyline went on Tuesday, and Wednesday, and finished on Thursday.
Nate Fakes’s Break of Day for the 7th uses arithmetic as signifier of a child’s intelligence.
Mark Pett’s Mr Lowe rerun for the 7th has Lowe trying to teach arithmetic. Also, the strip is rerunning again, which is nice to see.
And that’s enough for now. I’ll read Saturday’s comics next and maybe have another essay at this link, soon. Thanks for reading.
## Reading the Comics, August 16, 2019: The Comments Drive Me Crazy Edition
Last week was another light week of work from Comic Strip Master Command. One could fairly argue that nothing is worth my attention. Except … one comic strip got onto the calendar. And that, my friends, is demanding I pay attention. Because the comic strip got multiple things wrong. And then the comments on GoComics got it more wrong. Got things wrong to the point that I could not be sure people weren’t trolling each other. I know how nerds work. They do this. It’s not pretty. So since I have the responsibility to correct strangers online I’ll focus a bit on that.
Robb Armstrong’s JumpStart for the 13th starts off all right. The early Roman calendar had ten months, December the tenth of them. This was a calendar that didn’t try to cover the whole year. It just started in spring and ran into early winter and that was it. This may seem baffling to us moderns, but it is, I promise you, the least confusing aspect of the Roman calendar. This may seem less strange if you think of the Roman calendar as like a sports team’s calendar, or a playhouse’s schedule of shows, or a timeline for a particular complicated event. There are just some fallow months that don’t need mention.
Things go wrong with Rob’s claim that December will have five Saturdays, five Sundays, and five Mondays. December 2019 will have no such thing. It has four Saturdays. There are five Sundays, Mondays, and Tuesdays. From Crunchy’s response it sounds like Joe’s run across some Internet Dubious Science Folklore. You know, where you see a claim that (like) Saturn will be larger in the sky than anytime since the glaciers receded or something. And as you’d expect, it’s gotten a bit out of date. December 2018 had five Saturdays, Sundays, and Mondays. So did December 2012. And December 2007.
And as this shows, that’s not a rare thing. Any month with 31 days will have five of some three days in the week. August 2019, for example, has five Thursdays, Fridays, and Saturdays. October 2019 will have five Tuesdays, Wednesdays, and Thursdays. This we can show by the pigeonhole principle. And there are seven months each with 31 days in every year.
It’s not every year that has some month with five Saturdays, Sundays, and Mondays in it. 2024 will not, for example. But a lot of years do. I’m not sure why December gets singled out for attention here. From the setup about December having long ago been the tenth month, I guess it’s some attempt to link the fives of the weekend days to the ten of the month number. But we get this kind of December about every five or six years.
This 823 years stuff, now that’s just gibberish. The Gregorian calendar has its wonders and mysteries yes. None of them have anything to do with 823 years. Here, people in the comments got really bad at explaining what was going on.
So. There are fourteen different … let me call them year plans, available to the Gregorian calendar. January can start on a Sunday when it is a leap year. Or January can start on a Sunday when it is not a leap year. January can start on a Monday when it is a leap year. January can start on a Monday when it is not a leap year. And so on. So there are fourteen possible arrangements of the twelve months of the year, what days of the week the twentieth of January and the thirtieth of December can occur on. The incautious might think this means there’s a period of fourteen years in the calendar. This comes from misapplying the pigeonhole principle.
Here’s the trouble. January 2019 started on a Tuesday. This implies that January 2020 starts on a Wednesday. January 2025 also starts on a Wednesday. But January 2024 starts on a Monday. You start to see the pattern. If this is not a leap year, the next year starts one day of the week later than this one. If this is a leap year, the next year starts two days of the week later. This is all a slightly annoying pattern, but it means that, typically, it takes 28 years to get back where you started. January 2019 started on Tuesday; January 2020 on Wednesday, and January 2021 on Friday. the same will hold for January 2047 and 2048 and 2049. There are other successive years that will start on Tuesday and Wednesday and Friday before that.
Except.
The important difference between the Julian and the Gregorian calendars is century years. 1900. 2000. 2100. These are all leap years by the Julian calendar reckoning. Most of them are not, by the Gregorian. Only century years divisible by 400 are. 2000 was a leap year; 2400 will be. 1900 was not; 2100 will not be, by the Gregorian scheme.
These exceptions to the leap-year-every-four-years pattern mess things up. The 28-year-period does not work if it stretches across a non-leap-year century year. By the way, if you have a friend who’s a programmer who has to deal with calendars? That friend hates being a programmer who has to deal with calendars.
There is still a period. It’s just a longer period. Happily the Gregorian calendar has a period of 400 years. The whole sequence of year patterns from 2000 through 2019 will reappear, 2400 through 2419. 2800 through 2819. 3200 through 3219.
(Whether they were also the year patterns for 1600 through 1619 depends on where you are. Countries which adopted the Gregorian calendar promptly? Yes. Countries which held out against it, such as Turkey or the United Kingdom? No. Other places? Other, possibly quite complicated, stories. If you ask your computer for the 1619 calendar it may well look nothing like 2019’s, and that’s because it is showing the Julian rather than Gregorian calendar.)
Except.
This is all in reference to the days of the week. The date of Easter, and all of the movable holidays tied to Easter, is on a completely different cycle. Easter is set by … oh, dear. Well, it’s supposed to be a simple enough idea: the Sunday after the first spring full moon. It uses a notional moon that’s less difficult to predict than the real one. It’s still a bit of a mess. The date of Easter is periodic again, yes. But the period is crazy long. It would take 5,700,000 years to complete its cycle on the Gregorian calendar. It never will. Never try to predict Easter. It won’t go well. Don’t believe anything amazing you read about Easter online.
Michael Jantze’s The Norm (Classics) for the 15th is much less trouble. It uses some mathematics to represent things being easy and things being hard. Easy’s represented with arithmetic. Hard is represented with the calculations of quantum mechanics. Which, oddly, look very much like arithmetic. $\phi = BA$ even has fewer symbols than $1 + 1 = 2$ has. But the symbols mean different abstract things. In a quantum mechanics context, ‘A’ and ‘B’ represent — well, possibly matrices. More likely operators. Operators work a lot like functions and I’m going to skip discussing the ways they don’t. Multiplying operators together — B times A, here — works by using the range of one function as the domain of the other. Like, imagine ‘B’ means ‘take the square of’ and ‘A’ means ‘take the sine of’. Then ‘BA’ would mean ‘take the square of the sine of’ (something). The fun part is the ‘AB’ would mean ‘take the sine of the square of’ (something). Which is fun because most of the time, those won’t have the same value. We accept that, mathematically. It turns out to work well for some quantum mechanics properties, even though it doesn’t work like regular arithmetic. So $\phi = BA$ holds complexity, or at least strangeness, in its few symbols.
Henry Scarpelli and Craig Boldman’s Archie for the 16th is a joke about doing arithmetic on your fingers and toes. That’s enough for me.
There were some more comic strips which just mentioned mathematics in passing.
Brian Boychuk and Ron Boychuk’s The Chuckle Brothers rerun for the 11th has a blackboard of mathematics used to represent deep thinking. Also, it I think, the colorist didn’t realize that they were standing in front of a blackboard. You can see mathematicians doing work in several colors, either to convey information in shorthand or because they had several colors of chalk. Not this way, though.
Mark Leiknes’s Cow and Boy rerun for the 16th mentions “being good at math” as something to respect cows for. The comic’s just this past week started over from its beginning. If you’re interested in deeply weird and long-since cancelled comics this is as good a chance to jump on as you can get.
And Stephen ‘s Herb and Jamaal rerun for the 16th has a kid worried about a mathematics test.
That’s the mathematically-themed comic strips for last week. All my Reading the Comics essays should be at this link. I’ve traditionally run at least one essay a week on Sunday. But recently that’s moved to Tuesday for no truly compelling reason. That seems like it’s working for me, though. I may stick with it. If you do have an opinion about Sunday versus Tuesday please let me know.
Don’t let me know on Twitter. I continue to have this problem where Twitter won’t load on Safari. I don’t know why. I’m this close to trying it out on a different web browser.
And, again, I’m planning a fresh A To Z sequence. It’s never to early to think of mathematics topics that I might explain. I should probably have already started writing some. But you’ll know the official announcement when it comes. It’ll have art and everything.
## Reading the Comics, June 1, 2019: More Than I Thought Edition
When I collected last week’s mathematically-themed comic strips I thought this set an uninspiring one. That changed sometime while I wrote. That’s the sort of week I like to have.
Richard Thompson’s Richard’s Poor Almanac for the 28th is a repeat; all these strips are. And I’ve featured it here before too. But never before in color, so I’ll take this chance to show it one last time. One of the depicted plants is the “Non-Euclidean Creeper”, which “ignores the geometry of the space-time continuum”. Non-Euclidean is one of those few geometry-related words that people recognize — maybe even only learn — in their adulthood. It has connotations of the bizarre and the weird and the wrong.
And it is a bit weird. While we live in a non-Euclidean space, we never really notice. Euclidean space is the geometry we’re used to from drawing shapes on paper and putting boxes in the corners of basements. And from this we’ve given “non-Euclidean” this sinister reputation. We credit it with defying common sense and even logic itself, although it’s geometry. It can’t defy logic. It can defy intuition. Non-Euclidean geometries have the idea that there are no such things as parallel lines. Or the idea that there are too many parallel lines. And it can get to weird results, particularly if we look at more than three dimensions of space. Those also tax the imagination. It will get a weed a bad reputation.
Chen Weng’s Messycow Comics for the 30th is about a child’s delight in learning how to count. I don’t remember ever being so fascinated by counting that it would distract me permanently. I do remember thinking it was amazing that once a pattern was established it kept on, with no reason to ever stop, or even change. My recollection is I thought this somehow unfair to the alphabet, which had a very sudden sharp end.
The counting numbers — counting in general — seem to be things we’ve evolved to understand. Other animals know how to count. Here I recommend again Stanislas Dehaene’s The Number Sense: How the Mind Creates Mathematics, which describes some of the things we know about how animals do mathematics. It also describes how children come to understand it.
Samson’s Dark Side of the Horse for the 31st is a bit of play with arithmetic. Horace simplifies his problem by catching all the numerals with loops in them — the zeroes and the eights — and working with what’s left. Evidently he’s already cast out all the nines. (This is me making a joke. Casting out nines is a simple checksum that you can do which can guard against some common arithmetic mistakes. It doesn’t catch everything. But it is simple enough to do that it can be worth using.)
The part that disappoints me is that to load the problem up with digits with loops, we get a problem that’s not actually hard: 100 times anything is easy. If the problem were, say, 189 times 80008005 then you’d have a problem someone might sensibly refuse to do. But without those zeroes at the start it’d be harder to understand what Horace was doing. Maybe if it were 10089 times 800805 instead.
Hilary Price and Rina Piccolo’s Rhymes with Orange for the 1st is the anthropomorphic numerals joke for the week. Also the anthropomorphic letters joke. The capital B sees occasional use in mathematics. It can represent the ball, that is, the set of all points that represent the interior of a sphere of a set radius. Usually a radius of 1. It also sometimes appears in equations as a parameter, a number whose value is fixed for the length of the problem but whose value we don’t care about. I had thought there were a few other roles for B alone, such as a label to represent the Bessel functions. These are a family of complicated-looking polynomials with some nice properties it’s too great a diversion for me to discuss just now. But they seem to more often be labelled with a capital J for reasons that probably seemed compelling at the time. It’ll also get used in logic, where B might stand for the second statement of some argument. 4, meanwhile, is that old familiar thing.
And there were a couple of comics which I like, but which mentioned mathematics so slightly that I couldn’t put a paragraph into them. Henry Scarpelli and Craig Boldman’s Archie rerun for the 27th, for example, mentions mathematics class as one it’s easy to sleep through. And Tony Cochrane’s Agnes for the 28th also mentions mathematics class, this time as one it’s hard to pay attention to.
This clears out last week’s comic strips. This present week’s strips should be at this link on Sunday. I haven’t yet read Friday or Saturday’s comics, so perhaps there’s been a flood, but this has been a slow week so far.
## Reading the Comics, May 11, 2019: I Concede I Am Late Edition
I concede I am late in wrapping up last week’s mathematically-themed comics. But please understand there were important reasons for my not having posted this earlier, like, I didn’t get it written in time. I hope you understand and agree with me about this.
Bill Griffith’s Zippy the Pinhead for the 9th brings up mathematics in a discussion about perfection. The debate of perfection versus “messiness” begs some important questions. What I’m marginally competent to discuss is the idea of mathematics as this perfect thing. Mathematics seems to have many traits that are easy to think of as perfect. That everything in it should follow from clearly stated axioms, precise definitions, and deductive logic, for example. This makes mathematics seem orderly and universal and fair in a way that the real world never is. If we allow that this is a kind of perfection then … does mathematics reach it?
Even the idea of a “precise definition” is perilous. If it weren’t there wouldn’t be so many pop mathematics articles about why 1 isn’t a prime number. It’s difficult to prove that any particular set of axioms that give us interesting results are also logically consistent. If they’re not consistent, then we can prove absolutely anything, including that the axioms are false. That seems imperfect. And few mathematicians even prepare fully complete, step-by-step proofs of anything. It takes ridiculously long to get anything done if you try. The proofs we present tend to show, instead, the reasoning in enough detail that we’re confident we could fill in the omitted parts if we really needed them for some reason. And that’s fine, nearly all the time, but it does leave the potential for mistakes present.
Zippy offers up a perfect parallelogram. Making it geometry is of good symbolic importance. Everyone knows geometric figures, and definitions of some basic ideas like a line or a circle or, maybe, a parallelogram. Nobody’s ever seen one, though. There’s never been a straight line, much less two parallel lines, and even less the pair of parallel lines we’d need for a parallellogram. There can be renderings good enough to fool the eye. But none of the lines are completely straight, not if we examine closely enough. None of the pairs of lines are truly parallel, not if we extend them far enough. The figure isn’t even two-dimensional, not if it’s rendered in three-dimensional things like atoms or waves of light or such. We know things about parallelograms, which don’t exist. They tell us some things about their shadows in the real world, at least.
Mark Litzler’s Joe Vanilla for the 9th is a play on the old joke about “a billion dollars here, a billion dollars there, soon you’re talking about real money”. As we hear more about larger numbers they seem familiar and accessible to us, to the point that they stop seeming so big. A trillion is still a massive number, at least for most purposes. If you aren’t doing combinatorics, anyway; just yesterday I was doing a little toy problem and realized it implied 470,184,984,576 configurations. Which still falls short of a trillion, but had I made one arbitrary choice differently I could’ve blasted well past a trillion.
Ruben Bolling’s Super-Fun-Pak Comix for the 9th is another monkeys-at-typewriters joke, that great thought experiment about probability and infinity. I should add it to my essay about the Infinite Monkey Theorem. Part of the joke is that the monkey is thinking about the content of the writing. This doesn’t destroy the prospect that a monkey given enough time would write any of the works of William Shakespeare. It makes the simple estimates of how unlikely that is, and how long it would take to do, invalid. But the event might yet happen. Suppose this monkey decided there was no credible way to delay Hamlet’s revenge to Act V, and tried to write accordingly. Mightn’t the monkey make a mistake? It’s easy to type a letter you don’t mean to. Or a word you don’t mean to. Why not a sentence you don’t mean to? Why not a whole act you don’t mean to? Impossible? No, just improbable. And the monkeys have enough time to let the improbable happen.
Eric the Circle for the 10th, this one by Kingsnake, declares itself set in “the 20th dimension, where shape has no meaning”. This plays on a pop-cultural idea of dimensions as a kind of fairyland, subject to strange and alternate rules. A mathematician wouldn’t think of dimensions that way. 20-dimensional spaces — and even higher-dimensional spaces — follow rules just as two- and three-dimensional spaces do. They’re harder to draw, certainly, and mathematicians are not selected for — or trained in — drawing, at least not in United States schools. So attempts at rendering a high-dimensional space tend to be sort of weird blobby lumps, maybe with a label “N-dimensional”.
And a projection of a high-dimensional shape into lower dimensions will be weird. I used to have around here a web site with a rotatable tesseract, which would draw a flat-screen rendition of what its projection in three-dimensional space would be. But I can’t find it now and probably it ran as a Java applet that you just can’t get to work anymore. Anyway, non-interactive videos of this sort of thing are common enough; here’s one that goes through some of the dimensions of a tesseract, one at a time. It’ll give some idea how something that “should” just be a set of cubes will not look so much like that.
Steve Kelly and Jeff Parker’s Dustin for the 11th is a variation on the “why do I have to learn this” protest. This one is about long division and the question of why one needs to know it when there’s cheap, easily-available tools that do the job better. It’s a fair question and Hayden’s answer is a hard one to refute. I think arithmetic’s worth knowing how to do, but I’ll also admit, if I need to divide something by 23 I’m probably letting the computer do it.
And a couple of the comics that week seemed too slight to warrant discussion. You might like them anyway. Brian Boychuk and Ron Boychuk’s Chuckle Brothers for the 5th featured a poorly-written numeral. Charles Schulz’s Peanuts Begins rerun for the 6th has Violet struggling with counting. Glenn McCoy and Gary McCoy’s The Flying McCoys for the 8th has someone handing in mathematics homework. Henry Scarpelli and Craig Boldman’s Archie rerun for the 9th talks about Jughead sleeping through mathematics class. All routine enough stuff.
This and other Reading the Comics posts should appear at this link. I mean to have a post tomorrow, although it might make my work schedule a little easier to postpone that until Monday. We’ll see.
## Reading the Comics, April 5, 2019: The Slow Week Edition
People reading my Reading the Comics post Sunday maybe noticed something. I mean besides my correct, reasonable complaining about the Comics Kingdom redesign. That is that all the comics were from before the 30th of March. That is, none were from the week before the 7th of April. The last full week of March had a lot of comic strips. The first week of April didn’t. So things got bumped a little. Here’s the results. It wasn’t a busy week, not when I filter out the strips that don’t offer much to write about. So now I’m stuck for what to post Thursday.
Jason Poland’s Robbie and Bobby for the 3rd is a Library of Babel comic strip. This is mathematical enough for me. Jorge Luis Borges’s Library is a magnificent representation of some ideas about infinity and probability. I’m surprised to realize I haven’t written an essay specifically about it. I have touched on it, in writing about normal numbers, and about the infinite monkey theorem.
The strip explains things well enough. The Library holds every book that will ever be written. In the original story there are some constraints. Particularly, all the books are 410 pages. If you wanted, say, a 600-page book, though, you could find one book with the first 410 pages and another book with the remaining 190 pages and then some filler. The catch, as explained in the story and in the comic strip, is finding them. And there is the problem of finding a ‘correct’ text. Every possible text of the correct length should be in there. So every possible book that might be titled Mark Twain vs Frankenstein, including ones that include neither Mark Twain nor Frankenstein, is there. Which is the one you want to read?
Henry Scarpelli and Craig Boldman’s Archie for the 4th features an equal-divisions problem. In principle, it’s easy to divide a pizza (or anything else) equally; that’s what we have fractions for. Making them practical is a bit harder. I do like Jughead’s quick work, though. It’s got the slight-of-hand you expect from stage magic.
Scott Hilburn’s The Argyle Sweater for the 4th takes place in an algebra class. I’m not sure what algebraic principle $7^4 \times 13^6$ demonstrates, but it probably came from somewhere. It’s 4,829,210. The exponentials on the blackboard do cue the reader to the real joke, of the sign reading “kick10 me”. I question whether this is really an exponential kicking situation. It seems more like a simple multiplication to me. But it would be harder to make that joke read clearly.
Tony Cochran’s Agnes for the 5th is part of a sequence investigating how magnets work. Agnes and Trout find just … magnet parts inside. This is fair. It’s even mathematics.
Thermodynamics classes teach one of the great mathematical physics models. This is about what makes magnets. Magnets are made of … smaller magnets. This seems like question-begging. Ultimately you get down to individual molecules, each of which is very slightly magnetic. When small magnets are lined up in the right way, they can become a strong magnet. When they’re lined up in another way, they can be a weak magnet. Or no magnet at all.
How do they line up? It depends on things, including how the big magnet is made, and how it’s treated. A bit of energy can free molecules to line up, making a stronger magnet out of a weak one. Or it can break up the alignments, turning a strong magnet into a weak one. I’ve had physics instructors explain that you could, in principle, take an iron rod and magnetize it just by hitting it hard enough on the desk. And then demagnetize it by hitting it again. I have never seen one do this, though.
This is more than just a physics model. The mathematics of it is … well, it can be easy enough. A one-dimensional, nearest-neighbor model, lets us describe how materials might turn into magnets or break apart, depending on their temperature. Two- or three-dimensional models, or models that have each small magnet affected by distant neighbors, are harder.
And then there’s the comic strips that didn’t offer much to write about.
Brian Basset’s Red and Rover for the 3rd,
Liniers’s Macanudo for the 5th, Stephen Bentley’s Herb and Jamaal rerun for the 5th, and Gordon Bess’s Redeye rerun for the 5th all idly mention mathematics class, or things brought up in class.
Doug Savage’s Savage Chickens for the 2nd is another more-than-100-percent strip. Richard Thompson’s Richard’s Poor Almanac for the 3rd is a reprint of his Christmas Tree guide including a fir that “no longer inhabits Euclidean space”.
Mike Baldwin’s Cornered for the 31st depicts a common idiom about numbers. Eric the Circle for the 5th, by Rafoliveira, plays on the ∞ symbol.
And that covers the mathematically-themed comic strips from last week. There are more coming, though. I’ll show them on Sunday. Thanks for reading.
## Reading the Comics, October 18, 2018: Quick Half-Week Edition
There were enough mathematically-themed comic strips last week to split across two essays. The first half of them don’t take too much time to explain. Let me show you.
Henry Scarpelli and Craig Boldman’s Archie for the 15th is the pie-chart wordplay joke for the week. I don’t remember there ever being pie at the high school cafeteria, but back when I was in high school I often skipped lunch to hang out in the computer room.
Will Henry’s Wallace the Brave for the 15th alludes to a report on trapezoids. I can’t imagine what about this would be so gold-star-worthy when I’ve surely already written plenty about trapezoids. … Really, that thing trying to classify how many different kinds of trapezoids there are would be my legacy to history if I hadn’t also written about how many grooves are on a record’s side.
Thaves’s Frank and Ernest for the 17th is, for me, extremely relatable content. I don’t say that my interest in mathematics is entirely because there was this Berenstain Bears book about jobs which made it look like a mathematician’s job was to do sums in an observatory on the Moon. But it didn’t hurt. When I joke about how seven-year-old me wanted to be the astronaut who drew Popeye, understand, that’s not much comic exaggeration.
Justin Thompson’s Mythtickle rerun for the 17th is a timely choice about lotteries and probabilities. Vlad raises a fair point about your chance of being struck by lightning. It seems like that’s got to depend on things like where you are. But it does seem like we know what we mean when we say “the chance you’ll be hit by lightning”. At least I think it means “the probability that a person will be hit by lightning at some point in their life, if we have no information about any environmental facts that might influence this”. So it would be something like the number of people struck by lightning over the course of a year divided by the number of people in the world that year. You might have a different idea of what “the chance you’ll be hit by lightning” means, and it’s worth trying to think what precisely that does mean to you.
Lotteries are one of those subjects that a particular kind of nerd likes to feel all smug about. Pretty sure every lottery comic ever has drawn a comment about a tax on people who can’t do mathematics. This one did too. But then try doing the mathematics. The Mega Millions lottery, in the US, has a jackpot for the first drawing this week estimated at more than a billion dollars. The chance of winning is about one in 300 million. A ticket costs two dollars. So what is the expectation value of playing? You lose two dollars right up front, in the cost of the ticket. What do you get back? A one-in-300-million chance of winning a billion dollars. That is, you can expect to get back a bit more than three dollars. The implication is: you make a profit of dollar on each ticket you buy. There’s something a bit awry here, as you can tell from my decision not to put my entire savings into lottery tickets this week. But I won’t say someone is foolish or wrong if they buy a couple.
Mike Baldwin’s Cornered for the 18th is a bit of mathematics-circling wordplay, featuring the blackboard full of equations. The blackboard doesn’t have any real content on it, but it is a good visual shorthand. And it does make me notice that rounding a quantity off is, in a way, making it simpler. If we are only a little interested in the count of the thing, “two thousand forty” or even “two thousand” may be more useful than the exact 2,038. The loss of precision may be worth it for the ease with which the rounded-off version is remembered and communicated.
If you’d like to see more Reading the Comics posts then try this link. Other essays which mention Archie should be at this link. Topics raised by Wallace the Brave should be at this link. Frank and Ernest is the subject of essays at this link. Topics brought up by Mythtickle are at this link. It’s a new tag, though, and I’m not sure there’ll ever be another use of it. And this and other essays mentioning Cornered are at this link. And do please stick around for more of my Fall 2018 Mathematics A-To-Z, coming twice a week through the rest of the year, I hope.
## Reading the Comics, May 18, 2018: Quincy Doesn’t Make The Cut Edition
I hate to disillusion anyone but I lack hard rules about what qualifies as a mathematically-themed comic strip. During a slow week, more marginal stuff makes it. This past week was going slow enough that I tagged Wednesday’s Quincy rerun, from March of 1979 for possible inclusion. And all it does is mention that Quincy’s got a mathematics test due. Fortunately for me the week picked up a little. It cheats me of an excuse to point out Ted Shearer’s art style to people, but that’s not really my blog’s business.
Also it may not surprise you but since I’ve decided I need to include GoComics images I’ve gotten more restrictive. Somehow the bit of work it takes to think of a caption and to describe the text and images of a comic strip feel like that much extra work.
Roy Schneider’s The Humble Stumble for the 13th of May is a logic/geometry puzzle. Is it relevant enough for here? Well, I spent some time working it out. And some time wondering about implicit instructions. Like, if the challenge is to have exactly four equally-sized boxes after two toothpicks are moved, can we have extra stuff? Can we put a toothpick where it’s just a stray edge, part of no particular shape? I can’t speak to how long you stay interested in this sort of puzzle. But you can have some good fun rules-lawyering it.
Jeff Harris’s Shortcuts for the 13th is a children’s informational feature about Aristotle. Aristotle is renowned for his mathematical accomplishments by many people who’ve got him mixed up with Archimedes. Aristotle it’s harder to say much about. He did write great texts that pop-science writers credit as giving us the great ideas about nature and physics and chemistry that the Enlightenment was able to correct in only about 175 years of trying. His mathematics is harder to summarize though. We can say certainly that he knew some mathematics. And that he encouraged thinking of subjects as built on logical deductions from axioms and definitions. So there is that influence.
Dan Thompson’s Brevity for the 15th is a pun, built on the bell curve. This is also known as the Gaussian distribution or the normal distribution. It turns up everywhere. If you plot how likely a particular value is to turn up, you get a shape that looks like a slightly melted bell. In principle the bell curve stretches out infinitely far. In practice, the curve turns into a horizontal line so close to zero you can’t see the difference once you’re not-too-far away from the peak.
Jason Chatfield’s Ginger Meggs for the 16th I assume takes place in a mathematics class. I’m assuming the question is adding together four two-digit numbers. But “what are 26, 24, 33, and 32” seems like it should be open to other interpretations. Perhaps Mr Canehard was asking for some class of numbers those all fit into. Integers, obviously. Counting numbers. Compound numbers rather than primes. I keep wanting to say there’s something deeper, like they’re all multiples of three (or something) but they aren’t. They haven’t got any factors other than 1 in common. I mention this because I’d love to figure out what interesting commonality those numbers have and which I’m overlooking.
Ed Stein’s Freshly Squeezed for the 17th is a story problem strip. Bit of a passive-aggressive one, in-universe. But I understand why it would be formed like that. The problem’s incomplete, as stated. There could be some fun in figuring out what extra bits of information one would need to give an answer. This is another new-tagged comic.
Henry Scarpelli and Craig Boldman’s Archie for the 19th name-drops calculus, credibly, as something high schoolers would be amazed to see one of their own do in their heads. There’s not anything on the blackboard that’s iconically calculus, it happens. Dilton’s writing out a polynomial, more or less, and that’s a fit subject for high school calculus. They’re good examples on which to learn differentiation and integration. They’re a little more complicated than straight lines, but not too weird or abstract. And they follow nice, easy-to-summarize rules. But they turn up in high school algebra too, and can fit into geometry easily. Or any subject, really, as remember, everything is polynomials.
Mark Anderson’s Andertoons for the 19th is Mark Anderson’s Andertoons for the week. Glad that it’s there. Let me explain why it is proper construction of a joke that a Fibonacci Division might be represented with a spiral. Fibonacci’s the name we give to Leonardo of Pisa, who lived in the first half of the 13th century. He’s most important for explaining to the western world why these Hindu-Arabic numerals were worth learning. But his pop-cultural presence owes to the Fibonacci Sequence, the sequence of numbers 1, 1, 2, 3, 5, 8, and so on. Each number’s the sum of the two before it. And this connects to the Golden Ratio, one of pop mathematics’ most popular humbugs. As the terms get bigger and bigger, the ratio between a term and the one before it gets really close to the Golden Ratio, a bit over 1.618.
So. Draw a quarter-circle that connects the opposite corners of a 1×1 square. Connect that to a quarter-circle that connects opposite corners of a 2×2 square. Connect that to a quarter-circle connecting opposite corners of a 3×3 square. And a 5×5 square, and an 8×8 square, and a 13×13 square, and a 21×21 square, and so on. Yes, there are ambiguities in the way I’ve described this. I’ve tried explaining how to do things just right. It makes a heap of boring words and I’m trying to reduce how many of those I write. But if you do it the way I want, guess what shape you have?
And that is why this is a correctly-formed joke about the Fibonacci Division.
## Reading the Comics, April 11, 2018: Obscure Mathematical Terms Edition
I’d like to open today’s installment with a trifle from Thomas K Dye. He’s a friend, and the cartoonist behind the long-running web comic Newshounds, its new spinoff Infinity Refugees, and some other projects.
Dye also has a Patreon, most recently featuring a subscribers-only web comic. And he’s good enough to do the occasional bit of spot art to spruce up my work here.
Henry Scarpelli and Craig Boldman’s Archie rerun for the 9th of April, 2018 is, for me, relatable. I think I’ve read off this anecdote before. The first time I took Real Analysis I was completely lost. Getting me slightly less lost was borrowing a library book on Real Analysis from the mathematics library. The book was in French, a language I can only dimly read. But the different presentation and, probably, the time I had to spend parsing each sentence helped me get a basic understanding of the topic. So maybe trying algebra upside-down isn’t a ridiculous idea.
Lincoln Pierce’s Big Nate rerun for the 9th presents an arithmetic sequence, which is always exciting to work with, if you’re into sequences. I had thought Nate was talking about mathematics quizzes but I see that’s not specified. Could be anything. … And yes, there is something cool in finding a pattern. Much of mathematics is driven by noticing, or looking for, patterns in things and then describing the rules by which new patterns can be made. There’s many easy side questions to be built from this. When would quizzes reach a particular value? When would the total number of points gathered reach some threshold? When would the average quiz score reach some number? What kinds of patterns would match the 70-68-66-64 progression but then do something besides reach 62 next? Or 60 after that? There’s some fun to be had. I promise.
Mike Thompson’s Grand Avenue for the 10th is one of the resisting-the-teacher’s-problem style. The problem’s arithmetic, surely for reasons of space. The joke doesn’t depend on the problem at all.
Dave Whamond’s Reality Check for the 10th similarly doesn’t depend on what the question is. It happens to be arithmetic, but it could as easily be identifying George Washington or picking out the noun in a sentence.
Leigh Rubin’s Rubes for the 10th riffs on randomness. In this case it’s riffing on the unpredictability and arbitrariness of random things. Random variables are very interesting in certain fields of mathematics. What makes them interesting is that any specific value — the next number you generate — is unpredictable. But aggregate information about the values is predictable, often with great precision. For example, consider normal distributions. (A lot of stuff turns out to be normal.) In that case we can be confident that the values that come up most often are going to be close to the arithmetic mean of a bunch of values. And that there’ll be about as many values greater than the mean as there are less than the mean. And this will be only loosely true if you’ve looked at a handful of values, at ten or twenty or even two hundred of them. But if you looked at, oh, a hundred thousand values, these truths would be dead-on. It’s wonderful and it seems to defy intuition. It just works.
John Atkinson’s Wrong Hands for the 10th is the anthropomorphic numerals joke for the week. It’s easy to think of division as just making numbers smaller: 4 divided by 6 is less than either 4 or 6. 1 divided by 4 is less than either 1 or 4. But this is a bad intuition, drawn from looking at the counting numbers that don’t look boring. But 4 divided by 1 isn’t less than either 1 or 4. Same with 6 divided by 1. And then when we look past counting numbers we realize that’s not always so. 6 divided by ½ gives 12, greater than either of those numbers, and I don’t envy the teachers trying to explain this to an understandably confused student. And whether 6 divided by -1 gives you something smaller than 6 or smaller than -1 is probably good for an argument in an arithmetic class.
Zach Weinersmith, Chris Jones and James Ashby’s Snowflakes for the 11th has an argument about predicting humans mathematically. It’s so very tempting to think people can be. Some aspects of people can. In the founding lore of statistics is the astonishment at how one could predict how many people would die, and from what causes, over a time. No person’s death could be forecast, but their aggregations could be. This unsettles people. It should: it seems to defy reason. It seems to me even people who embrace a deterministic universe suppose that while, yes, a sufficiently knowledgeable creature might forecast their actions accurately, mere humans shouldn’t be sufficiently knowledgeable.
No strips are tagged for the first time this essay. Just noticing.
## Reading the Comics, November 11, 2017: Pictured Comics Edition
And now the other half of last week’s comic strips. It was unusually rich in comics that come from Comics Kingdom or Creators.com, which have limited windows of access and therefore make me feel confident I should include the strips so my comments make any sense.
Rick Kirkman and Jerry Scott’s Baby Blues for the 9th mentions mathematics homework as a resolutely rage-inducing topic. It’s mathematics homework, obviously, or else it wouldn’t be mentioned around here. And even more specifically it’s Common Core mathematics homework. So it always is with attempts to teach subjects better. Especially mathematics, given how little confidence people have in their own mastery. I can’t blame parents for supposing any change to be just malice.
Bill Amend’s FoxTrot Classics for the 9th is about random numbers. As Jason says, it is hard to generate random numbers. Random numbers are a resource. Having a good source of them makes a lot of computation work. But they’re hard to make. It seems to be a contradiction to create random numbers by an algorithm. There’s reasons we accept pseudorandom numbers, or find quasirandom numbers. This strip originally ran the 16th of November, 2006.
Chris Browne’s Hagar the Horrible for the 10th is about the numerous. There’s different kinds of limits. There’s the greatest number of things we can count in an instant. There’s a limit to how long a string of digits or symbols we can remember. There’s the biggest number of things we can visualize. And “visualize” is a slippery concept. I think I have a pretty good idea what we mean when we say “a thousand” of something. I could calculate how long it took me to do something a thousand times, or to write a thousand of something. I know that it was at about a thousand words that, last A To Z sequence, I got to feeling I should wrap up any particular essay. But did I see any particular difference between word 999 and word 1,000? No; what I really knew was “about enough paragraphs” and maybe “fills just over two screens in my text editor”. So do I know what a thousand is? Anyway, we all have our limits, acknowledge them or not.
Henry Scarpelli and Craig Boldman’s Archie rerun for the 17th is about Moose’s struggle with mathematics. Just writing “more or less” doesn’t fix an erroneous answer, true. But error margins, and estimates of where an answer should be, can be good mathematics. (Part of the Common Core that many parents struggle with is making the estimate of an answer the first step, and a refined answer later. Based on what I see crossing social media, this really offends former engineering majors who miss the value in having an expected approximate answer.) It’s part of how we define limits, and derivatives, and integrals, and all of calculus. But it’s in a more precise way than Moose tries to do.
Ted Shearer’s Quincy for the 18th of September, 1978 is a story-problem joke. Some of these aren’t complicated strips.
## Reading the Comics, April 15, 2017: Extended Week Edition
It turns out last Saturday only had the one comic strip that was even remotely on point for me. And it wasn’t very on point either, but since it’s one of the Creators.com strips I’ve got the strip to show. That’s enough for me.
Henry Scarpelli and Craig Boldman’s Archie for the 8th is just about how algebra hurts. Some days I agree.
Ruben Bolling’s Super-Fun-Pak Comix for the 8th is an installation of They Came From The Third Dimension. “Dimension” is one of those oft-used words that’s come loose of any technical definition. We use it in mathematics all the time, at least once we get into Introduction to Linear Algebra. That’s the course that talks about how blocks of space can be stretched and squashed and twisted into each other. You’d expect this to be a warmup act to geometry, and I guess it’s relevant. But where it really pays off is in studying differential equations and how systems of stuff changes over time. When you get introduced to dimensions in linear algebra they describe degrees of freedom, or how much information you need about a problem to pin down exactly one solution.
It does give mathematicians cause to talk about “dimensions of space”, though, and these are intuitively at least like the two- and three-dimensional spaces that, you know, stuff moves in. That there could be more dimensions of space, ordinarily inaccessible, is an old enough idea we don’t really notice it. Perhaps it’s hidden somewhere too.
Amanda El-Dweek’s Amanda the Great of the 9th started a story with the adult Becky needing to take a mathematics qualification exam. It seems to be prerequisite to enrolling in some new classes. It’s a typical set of mathematics anxiety jokes in the service of a story comic. One might tsk Becky for going through university without ever having a proper mathematics class, but then, I got through university without ever taking a philosophy class that really challenged me. Not that I didn’t take the classes seriously, but that I took stuff like Intro to Logic that I was already conversant in. We all cut corners. It’s a shame not to use chances like that, but there’s always so much to do.
Mark Anderson’s Andertoons for the 10th relieves the worry that Mark Anderson’s Andertoons might not have got in an appearance this week. It’s your common kid at the chalkboard sort of problem, this one a kid with no idea where to put the decimal. As always happens I’m sympathetic. The rules about where to move decimals in this kind of multiplication come out really weird if the last digit, or worse, digits in the product are zeroes.
Mel Henze’s Gentle Creatures is in reruns. The strip from the 10th is part of a story I’m so sure I’ve featured here before that I’m not even going to look up when it aired. But it uses your standard story problem to stand in for science-fiction gadget mathematics calculation.
Dave Blazek’s Loose Parts for the 12th is the natural extension of sleep numbers. Yes, I’m relieved to see Dave Blazek’s Loose Parts around here again too. Feels weird when it’s not.
Bill Watterson’s Calvin and Hobbes rerun for the 13th is a resisting-the-story-problem joke. But Calvin resists so very well.
John Deering’s Strange Brew for the 13th is a “math club” joke featuring horses. Oh, it’s a big silly one, but who doesn’t like those too?
Dan Thompson’s Brevity for the 14th is one of the small set of punning jokes you can make using mathematician names. Good for the wall of a mathematics teacher’s classroom.
Shaenon K Garrity and Jefferey C Wells’s Skin Horse for the 14th is set inside a virtual reality game. (This is why there’s talk about duplicating objects.) Within the game, the characters are playing that game where you start with a set number (in this case 20) tokens and take turn removing a couple of them. The “rigged” part of it is that the house can, by perfect play, force a win every time. It’s a bit of game theory that creeps into recreational mathematics books and that I imagine is imprinted in the minds of people who grow up to design games.
## Reading the Comics, March 18, 2017: Pi Day Edition
No surprise what the recurring theme for this set of mathematics-mentioning comic strips is. Look at the date range. But here goes.
Henry Scarpelli and Craig Boldman’s Archie rerun for the 13th uses algebra as the thing that will stun a class into silence. I know the silence. As a grad student you get whole minutes of instructions on how to teach a course before being sent out as recitation section leader for some professor. And what you do get told is the importance of asking students their thoughts and their ideas. This maybe works in courses that are obviously friendly to opinions or partially formed ideas. But in Freshman Calculus? It’s just deadly. Even if you can draw someone into offering an idea how we might start calculating a limit (say), they’re either going to be exactly right or they’re going to need a lot of help coaxing the idea into something usable. I’d like to have more chatty classes, but some subjects are just hard to chat about.
Steve Skelton’s 2 Cows And A Chicken for the 13th includes some casual talk about probability. As normally happens, they figure the chances are about 50-50. I think that’s a default estimate of the probability of something. If you have no evidence to suppose one outcome is more likely than the other, then that is a reason to suppose the chance of something is 50 percent. This is the Bayesian approach to probability, in which we rate things as more or less likely based on what information we have about how often they turn out. It’s a practical way of saying what we mean by the probability of something. It’s terrible if we don’t have much reliable information, though. We need to fall back on reasoning about what is likely and what is not to save us in that case.
Scott Hilburn’s The Argyle Sweater lead off the Pi Day jokes with an anthropomorphic numerals panel. This is because I read most of the daily comics in alphabetical order by title. It is also because The Argyle Sweater is The Argyle Sweater. Among π’s famous traits is that it goes on forever, in decimal representations, yes. That’s not by itself extraordinary; dull numbers like one-third do that too. (Arguably, even a number like ‘2’ does, if you write all the zeroes in past the decimal point.) π gets to be interesting because it goes on forever without repeating, and without having a pattern easily describable. Also because it’s probably a normal number but we don’t actually know that for sure yet.
Mark Parisi’s Off The Mark panel for the 14th is another anthropomorphic numerals joke and nearly the same joke as above. The answer, dear numeral, is “chained tweets”. I do not know that there’s a Twitter bot posting the digits of π in an enormous chained Twitter feed. But there’s a Twitter bot posting the digits of π in an enormous chained Twitter feed. If there isn’t, there is now.
John Zakour and Scott Roberts’s Working Daze for the 14th is your basic Pi Day Wordplay panel. I think there were a few more along these lines but I didn’t record all of them. This strip will serve for them all, since it’s drawn from an appealing camera angle to give the joke life.
Dave Blazek’s Loose Parts for the 14th is a mathematics wordplay panel but it hasn’t got anything to do with π. I suspect he lost track of what days he was working on, back six or so weeks when his deadline arrived.
Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 15th is some sort of joke about the probability of the world being like what it seems to be. I’m not sure precisely what anyone is hoping to express here or how it ties in to world peace. But the world does seem to be extremely well described by techniques that suppose it to be random and unpredictable in detail. It is extremely well predictable in the main, which shows something weird about the workings of the world. It seems to be doing all right for itself.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 15th is built on the staggering idea that the Earth might be the only place with life in the universe. The cosmos is a good stand-in for infinitely large things. It might be better as a way to understand the infinitely large than actual infinity would be. Somehow thinking of the number of stars (or whatnot) in the universe and writing out a representable number inspires an understanding for bigness that the word “infinity” or the symbols we have for it somehow don’t seem to, at least to me.
Mikael Wulff and Anders Morgenthaler’s TruthFacts for the 17th gives us valuable information about how long ahead of time the comic strips are working. Arithmetic is probably the easiest thing to use if one needs an example of a fact. But even “2 + 2 = 4” is a fact only if we accept certain ideas about what we mean by “2” and “+” and “=” and “4”. That we use those definitions instead of others is a reflection of what we find interesting or useful or attractive. There is cultural artifice behind the labelling of this equation as a fact.
Jimmy Johnson’s Arlo and Janis for the 18th capped off a week of trying to explain some point about the compression and dilution of time in comic strips. Comic strips use space and time to suggest more complete stories than they actually tell. They’re much like every other medium in this way. So, to symbolize deep thinking on a subject we get once again a panel full of mathematics. Yes, I noticed the misquoting of “E = mc2” there. I am not sure what Arlo means by “Remember the boat?” although thinking on it I think he did have a running daydream about living on a boat. Arlo and Janis isn’t a strongly story-driven comic strip, but Johnson is comfortable letting the setting evolve. Perhaps all this is forewarning that we’re going to jump ahead to a time in Arlo’s life when he has, or has had, a boat. I don’t know.
|
GMAT Changed on April 16th - Read about the latest changes here
It is currently 26 May 2018, 03:09
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# A silo is filled to capacity with W pounds of wheat. Rats eat r pounds
Author Message
TAGS:
### Hide Tags
Math Expert
Joined: 02 Sep 2009
Posts: 45429
A silo is filled to capacity with W pounds of wheat. Rats eat r pounds [#permalink]
### Show Tags
29 Oct 2017, 01:32
Expert's post
2
This post was
BOOKMARKED
00:00
Difficulty:
(N/A)
Question Stats:
55% (00:34) correct 45% (00:51) wrong based on 42 sessions
### HideShow timer Statistics
A silo is filled to capacity with W pounds of wheat. Rats eat r pounds a day. After 25 days, what percentage of the silo’s capacity have the rats eaten?
A. 25r/W
B. 25r/(100W)
C. 2,500r/W
D. r/W
E. r/(25W)
_________________
SC Moderator
Joined: 22 May 2016
Posts: 1676
A silo is filled to capacity with W pounds of wheat. Rats eat r pounds [#permalink]
### Show Tags
29 Oct 2017, 08:41
Bunuel wrote:
A silo is filled to capacity with W pounds of wheat. Rats eat r pounds a day. After 25 days, what percentage of the silo’s capacity have the rats eaten?
A. 25r/W
B. 25r/(100W)
C. 2,500r/W
D. r/W
E. r/(25W)
Algebra
The silo is full of wheat.
Capacity is W pounds ("lbs")
Rats eat r lbs per day * 25 days = 25r lbs eaten, are gone
Percentage of the silo's capacity that the rats have eaten:
$$\frac{AmtGone}{AmtFull} * 100=\frac{25r}{W} * 100 =\frac{2,500r}{W}$$
Pick smart numbers
Let full capacity, W = 100 lbs
Let the rats eat r = 1 lb per day
The rats eat 1 lb * 25 days = 25 lbs
They have eaten 25 of the 100 pounds of the silo's capacity.
Percentage of capacity that the rats have eaten:
$$(\frac{part}{whole}) * 100 =\frac{25}{100} * 100 =$$ 25
With W = 100 and r = 1, find the answer choice that yields 25. "Percentage" is included in the prompt. Look for 25 only.
Eliminate B, D, and E immediately. Their denominators are huge.
Between A and C - the question ask for percentage, not the decimal representation of percentage.
A) 25r/W = 25/100 = .25
C) 2500r/W = 2500/100 = 25
That's a match.
_________________
In the depths of winter, I finally learned
that within me there lay an invincible summer.
A silo is filled to capacity with W pounds of wheat. Rats eat r pounds [#permalink] 29 Oct 2017, 08:41
Display posts from previous: Sort by
|
GMAT Question of the Day: Daily via email | Daily via Instagram New to GMAT Club? Watch this Video
It is currently 04 Aug 2020, 03:51
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Working simultaneously and independently at an identical constant rate
Author Message
TAGS:
### Hide Tags
Math Expert
Joined: 02 Sep 2009
Posts: 65785
Working simultaneously and independently at an identical [#permalink]
### Show Tags
10 Dec 2010, 05:21
2
53
00:00
Difficulty:
15% (low)
Question Stats:
81% (01:58) correct 19% (02:18) wrong based on 2638 sessions
### HideShow timer Statistics
Working simultaneously and independently at an identical constant rate, 4 machines of a certain type can produce a total of x units of product P in 6 days. How many of these machines, working simultaneously and independently at this constant rate, can produce a total of 3x units of product P in 4 days?
A. 24
B. 18
C. 16
D. 12
E. 8
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 65785
Working simultaneously and independently at an identical constant rate [#permalink]
### Show Tags
18 Oct 2015, 12:12
7
112
Working simultaneously and independently at an identical constant rate, 4 machines of a certain type can produce a total of x units of product P in 6 days. How many of these machines, working simultaneously and independently at this constant rate, can produce a total of 3x units of product P in 4 days?
(A) 24
(B) 18
(C) 16
(D) 12
(E) 8
The rate of 4 machines is rate=job/time=x/6 units per day --> the rate of 1 machine 1/6*(x/6)=x/24 units per day;
Now, again as {time}*{combined rate}={job done} then 4*(m*x/24)=3x --> m=18.
Or as 3 times more job should be done in 1.5 times less days than 3*1.5=4.5 times more machines will be needed 4*4.5=18.
_________________
Manager
Joined: 01 Jan 2015
Posts: 61
Re: Working simultaneously and independently at an identical constant rate [#permalink]
### Show Tags
18 Oct 2015, 13:52
22
15
Bunuel wrote:
Working simultaneously and independently at an identical constant rate, 4 machines of a certain type can produce a total of x units of product P in 6 days. How many of these machines, working simultaneously and independently at this constant rate, can produce a total of 3x units of product P in 4 days?
(A) 24
(B) 18
(C) 16
(D) 12
(E) 8
Kudos for a correct solution.
If it takes 4 machines 6 days to produce x units
then it takes 4 machines 18 days to produce 3x units
then it takes $$M$$ machines 4 days to produce 3x units
There is an inverse relationship between the amount of time and the number of machines required to do the task
So if the task is 3x units, then (4 machines)*(18 days)=(M machines)*(4 days) --> M = 18
##### General Discussion
Manager
Status: swimming against the current
Joined: 24 Jul 2009
Posts: 123
Location: Chennai, India
### Show Tags
10 Dec 2010, 05:33
15
5
ajit257 wrote:
Working simultaneously and independently at an identical constant rate, 4 machines of a
certain type can produce a total of x units of product P in 6 days. How many of these
machines, working simultaneously and independently at this constant rate, can produce a
total of 3x units of product P in 4 days?
A. 24
B. 18
C. 16
D. 12
E. 8
4 machines and 6 days so the total work done is 24
4x6=24
now 3 times the work is 72
So in 4 days if the work is to be completed its 72/4 = 18
Manager
Joined: 25 Jun 2012
Posts: 60
Location: India
WE: General Management (Energy and Utilities)
### Show Tags
21 Sep 2012, 10:16
3
1
Rate*Time = Work
(4R)*(6)=x ( Total 4 machines)
therefore R=x/24
now again (n)*(x/24)*(4) = 3x
solving for n, we get n=18 nos.
Director
Joined: 22 Mar 2011
Posts: 576
WE: Science (Education)
### Show Tags
21 Sep 2012, 11:42
7
1
ajit257 wrote:
Working simultaneously and independently at an identical constant rate, 4 machines of a
certain type can produce a total of x units of product P in 6 days. How many of these
machines, working simultaneously and independently at this constant rate, can produce a
total of 3x units of product P in 4 days?
A. 24
B. 18
C. 16
D. 12
E. 8
4 machines----------x units-----------6days
4 machines---------3x units-----------3*6 = 18 days
M machines---------3x unites----------4 days
The number of days and the number of machines which produce a certain number of units (in this case 3x) are inversely proportional.
This is because all the machines have the same constant rate.
Necessarily 4*18=M*4, therefore M = 18.
_________________
PhD in Applied Mathematics
Love GMAT Quant questions and running.
SVP
Joined: 14 Apr 2009
Posts: 2272
Location: New York, NY
### Show Tags
21 Sep 2012, 14:19
4
1
This is a variation of the Distance = Rate * Time
Except we modify and say Output = # Machines * (Rate * Time)
X = # * r*t
X = 4 * r * 6
x = 24r
r = x/24
Solve for r because we want to find out the rate...then apply that rate to the new situation, which is output of 3x in 4 days
3x = N * r * 4
3x = N * (x/24) * 4
3x = N * (x/6)
18x = N *x
18 = N
Intern
Joined: 31 Aug 2014
Posts: 20
Re: Working simultaneously and independently at an identical [#permalink]
### Show Tags
14 Sep 2015, 05:54
7
ajit257 wrote:
Working simultaneously and independently at an identical constant rate, 4 machines of a certain type can produce a total of x units of product P in 6 days. How many of these machines, working simultaneously and independently at this constant rate, can produce a total of 3x units of product P in 4 days?
A. 24
B. 18
C. 16
D. 12
E. 8
solving in shortcut
m1 d1 h1 / w1 = m2 d2 h2/w2
4x6/x = mx4/3x
solving we get m=18
press kudos if you love my shortcut
Board of Directors
Joined: 17 Jul 2014
Posts: 2420
Location: United States (IL)
Concentration: Finance, Economics
GMAT 1: 650 Q49 V30
GPA: 3.92
WE: General Management (Transportation)
Re: Working simultaneously and independently at an identical constant rate [#permalink]
### Show Tags
18 Oct 2015, 12:52
8
3
Working simultaneously and independently at an identical constant rate, 4 machines of a certain type can produce a total of x units of product P in 6 days. How many of these machines, working simultaneously and independently at this constant rate, can produce a total of 3x units of product P in 4 days?
tough one, let's see...
4 machines do x units in 6 days
we have x/6 => rate of the 4 machines
we know that we need to have 3x units in 4 days
therefore, we need to get to 3x/4 rate of the machines.
rate of one machine is x/6*1/4 = x/24.
now, we need to know how many machines need to work simultaneously, to get 3x done in 4 days.
4x/4 work needs to be done by machines that work at x/24 rate.
let's assign a constant Y for the number of machines:
(x/24)*y = 3x/4
y = 3x/4 * 24/x
cancel 4 with 24, and x with x and get -> 18. Answer choice B
Senior Manager
Joined: 01 Mar 2015
Posts: 294
Re: Working simultaneously and independently at an identical constant rate [#permalink]
### Show Tags
18 Oct 2015, 21:41
18
5
Bunuel wrote:
Working simultaneously and independently at an identical constant rate, 4 machines of a certain type can produce a total of x units of product P in 6 days. How many of these machines, working simultaneously and independently at this constant rate, can produce a total of 3x units of product P in 4 days?
(A) 24
(B) 18
(C) 16
(D) 12
(E) 8
Kudos for a correct solution.
we have x units by 4 machine in 6 days
=> x units in 24 machine days
=> 3x units in 72 machine days
therefore machine required = 72 / 4 = 18
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 10784
Location: Pune, India
Re: Working simultaneously and independently at an identical constant rate [#permalink]
### Show Tags
18 Oct 2015, 22:39
17
17
Bunuel wrote:
Working simultaneously and independently at an identical constant rate, 4 machines of a certain type can produce a total of x units of product P in 6 days. How many of these machines, working simultaneously and independently at this constant rate, can produce a total of 3x units of product P in 4 days?
(A) 24
(B) 18
(C) 16
(D) 12
(E) 8
Kudos for a correct solution.
Using the method discussed in this post: http://www.veritasprep.com/blog/2013/02 ... variation/
4 machines --- x units --- 6 days
? machines --- 3x units -- 4 days
Machines required = 4 * (3x/x) * (6/4) = 18
_________________
Karishma
Veritas Prep GMAT Instructor
Verbal Forum Moderator
Status: Greatness begins beyond your comfort zone
Joined: 08 Dec 2013
Posts: 2445
Location: India
Concentration: General Management, Strategy
Schools: Kelley '20, ISB '19
GPA: 3.2
WE: Information Technology (Consulting)
Re: Working simultaneously and independently at an identical constant rate [#permalink]
### Show Tags
18 Oct 2015, 23:09
3
Let work done per machine per day = m
Work done by 4 machines in a day, 4m = x/6
Work done by 1 machine in a day , m = x/24
Since work needs to completed in 4 days , work done by 1 machine in 4 days = x/6
Number of machines required = Total work to be completed in 3 days/ Work done by a single machine in 4 days
= 3x/(x/6) = 18
_________________
When everything seems to be going against you, remember that the airplane takes off against the wind, not with it. - Henry Ford
The Moment You Think About Giving Up, Think Of The Reason Why You Held On So Long
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 10784
Location: Pune, India
Re: Working simultaneously and independently at an identical constant rate [#permalink]
### Show Tags
18 Dec 2015, 20:31
2
1
VeritasPrepKarishma wrote:
Bunuel wrote:
Working simultaneously and independently at an identical constant rate, 4 machines of a certain type can produce a total of x units of product P in 6 days. How many of these machines, working simultaneously and independently at this constant rate, can produce a total of 3x units of product P in 4 days?
(A) 24
(B) 18
(C) 16
(D) 12
(E) 8
Kudos for a correct solution.
Using the method discussed in this post: http://www.veritasprep.com/blog/2013/02 ... variation/
4 machines --- x units --- 6 days
? machines --- 3x units -- 4 days
Machines required = 4 * (3x/x) * (6/4) = 18
Quote:
In the end, you took 6/4 because the machines have fewer days to do more (3x) work, so more machines will be needed and therefore one has to multiply with the value greater than 1, i.e. 6/4 and not 4/6. Is that a correct reasoning?
Forget "work" here - whether it is more or less. Focus on the relation between the unknown and the variable in question only. So focus on machines and days only.
You have fewer days so you will need more machines to complete the work. So you multiply by 6/4, a quantity greater than 1. This will increase the number of machines.
_________________
Karishma
Veritas Prep GMAT Instructor
Manager
Joined: 12 Feb 2011
Posts: 74
Working simultaneously and independently at an identical constant rate [#permalink]
### Show Tags
17 Feb 2016, 18:09
7
3
4 machines take 6 days to complete x amount of work, so 1 machine will take $$6*4=24$$ days to complete same work. 3x, i.e. three times of work will require that single machine to work for $$24*3=72$$ days. Now in order to do 72 days worth of work in 4 days, we'd need $$72/4=18$$ machines.
Intern
Joined: 09 Apr 2016
Posts: 1
Re: Working simultaneously and independently at an identical [#permalink]
### Show Tags
09 Apr 2016, 14:05
3
4 machines can do x work in 6 days
4 machines can do x/6 work in 1 day
1 machine can do x/6*1/4 work in 1 day
so,
n machines in 4 days= 3x work
n machines in 1 day = n * (x/6*1/4)
n machines in 4 days = n * (x/6*1/4)*4
Hence, n machines can do n*(x/6*1/4)*4 work in 4 days i.e. n*(x/6*1/4)*4 =3x ====>>>> n=18
Intern
Joined: 23 Mar 2016
Posts: 24
Schools: Tulane '18 (M\$)
Re: Working simultaneously and independently at an identical constant rate [#permalink]
### Show Tags
02 May 2016, 13:55
3
given : 4 machines rate = x stuff / 6 days or 4m = x/6
need to find y machines = 3x stuff / 4 days = ym = 3x/4
two equations, two unknowns, solvable now.
4 = x/6 & y = 3x/4
4 = x/6
24 = x (now plug into second equation)
y = 3(24) / 4
y = 3(6)
y = 18
Target Test Prep Representative
Affiliations: Target Test Prep
Joined: 04 Mar 2011
Posts: 2800
Re: Working simultaneously and independently at an identical constant rate [#permalink]
### Show Tags
03 May 2016, 04:52
4
6
Bunuel wrote:
Working simultaneously and independently at an identical constant rate, 4 machines of a certain type can produce a total of x units of product P in 6 days. How many of these machines, working simultaneously and independently at this constant rate, can produce a total of 3x units of product P in 4 days?
(A) 24
(B) 18
(C) 16
(D) 12
(E) 8
Kudos for a correct solution.
We are given that 4 machines can produce a total of x units in 6 days. Since rate = work/time, the rate of the 4 machines is x/6. We need to determine how many machines, working at the same rate, can produce 3x units of product P in 4 days. Thus, we need to determine how many machines are needed for a rate of 3x/4.
Since we know that all the machines are working at an identical constant rate, we can create a proportion to determine the number of machines necessary for a rate of 3x/4. The proportion is as follows:
“4 machines are to a rate of x/6 as n machines are to a rate of 3x/4."
4/(x/6) = n/(3x/4)
24/x = 4n/(3x)
When we cross multiply, we obtain:
72x = 4xn
18 = n
_________________
# Jeffrey Miller | Head of GMAT Instruction | [email protected]
250 REVIEWS
5-STAR RATED ONLINE GMAT QUANT SELF STUDY COURSE
NOW WITH GMAT VERBAL (BETA)
See why Target Test Prep is the top rated GMAT quant course on GMAT Club. Read Our Reviews
Director
Status: Professional GMAT Tutor
Affiliations: AB, cum laude, Harvard University (Class of '02)
Joined: 10 Jul 2015
Posts: 827
Location: United States (CA)
Age: 40
GMAT 1: 770 Q47 V48
GMAT 2: 730 Q44 V47
GMAT 3: 750 Q50 V42
GMAT 4: 730 Q48 V42 (Online)
GRE 1: Q168 V169
GRE 2: Q170 V170
WE: Education (Education)
Re: Working simultaneously and independently at an identical constant rate [#permalink]
### Show Tags
17 May 2016, 18:58
2
Attached is a visual that should help.
Attachments
Screen Shot 2016-05-17 at 7.55.31 PM.png [ 126.58 KiB | Viewed 33222 times ]
Manager
Joined: 07 Mar 2016
Posts: 60
Re: Working simultaneously and independently at an identical constant rate [#permalink]
### Show Tags
23 Aug 2016, 23:29
2
2
Bunuel wrote:
Working simultaneously and independently at an identical constant rate, 4 machines of a certain type can produce a total of x units of product P in 6 days. How many of these machines, working simultaneously and independently at this constant rate, can produce a total of 3x units of product P in 4 days?
(A) 24
(B) 18
(C) 16
(D) 12
(E) 8
Kudos for a correct solution.
Let individual rate be 'a'
therefore, for 4 machines we have:
$$\frac{1}{a} +\frac{1}{a} + \frac{1}{a} +\frac{1}{a} = \frac{x}{6}$$
-->$$\frac{4}{a} = \frac{x}{6}$$
--> ax = 6 x 4 = 24 ......(eq1)
now If Z machines of same rate were used to produce 3x work in 4 days, we would have:
Z($$\frac{1}{a}$$) = $$\frac{3x}{4}$$
--> Z = $$\frac{3x(a)}{4}$$
--> Z = $$\frac{3 (24)}{4}$$ ......(using eq1)
--> Z = 18
Manager
Joined: 25 Jun 2016
Posts: 60
GMAT 1: 780 Q51 V46
Re: Working simultaneously and independently at an identical [#permalink]
### Show Tags
08 Jan 2017, 16:41
2
4
Here's a way to solve this question quickly:
Here's a more in-depth look at a reliable approach for combined work questions with multiple identical machines:
Re: Working simultaneously and independently at an identical [#permalink] 08 Jan 2017, 16:41
Go to page 1 2 Next [ 29 posts ]
|
# Objects of a congruence category
A congruence category is defined as follows according to Awodey's book Category Theory:
We have a congruence $\sim$ on a category $C$. Then $C^\sim$ is defined as: \begin{align*} (C^\sim)_0 &= C_0 \\ (C^\sim)_1 &= \{ \langle f, g \rangle, f \sim g \} \\ \tilde{1}_C &= \langle 1_C, 1_C \rangle \\ \langle f', g' \rangle \circ \langle f, g \rangle &= \langle f' f, g' g \rangle \end{align*}
From this definition I cannot understand what the objects of $C^\sim$ should be. One possibility is that $C^\sim$ has the same objects as $C$, except for the terminal object $(C^\sim)_1$ which is already well defined as a set of arrow "pairs" (clarified here: https://math.stackexchange.com/a/1578265/587087).
Another confusion is the fact that Awodey is defining an object of $C^\sim$, $(C^\sim)_1$ as a set. Should I read this definition as the set of incoming arrows of the terminal object? In that case since congruence relation is an equivalence relation and equivalence relations are reflexive, we would have all the incoming arrows of $C_1$ also in $(C^\sim)_1$. With this approact, I would define objects and morphisms as follows:
A congruence category of the category $C$, $C^\sim$ has objects of $C$ as objects, and morphisms are defined as follows:
$$\forall f\in \operatorname{Hom}(X, Y)_{C}: \langle f, f \rangle \in \operatorname{Hom}(X, Y)_{C^\sim}$$
Until now, we defined objects of newly constructed categories in terms of the base categories; we never defined them in such manner. So the question is that is my understanding of this definition of the congruence category accurate? Am I missing something here?
It seems like Awodey uses $C_0$ notation to denote the set of objects of the category $C$, and $C_1$ to denote the set of morphisms of the category $C$. With this notation, the congruence category is well defined.
|
## Current Issue
column
Display Method:
2021, vol. 28, no. 10, pp. 1545-1548. https://doi.org/10.1007/s12613-021-2354-7
Abstract:
2021, vol. 28, no. 10, pp. 1549-1564. https://doi.org/10.1007/s12613-021-2335-x
Abstract:
Silicon (Si) is widely considered to be the most attractive candidate anode material for use in next-generation high-energy-density lithium (Li)-ion batteries (LIBs) because it has a high theoretical gravimetric Li storage capacity, relatively low lithiation voltage, and abundant resources. Consequently, massive efforts have been exerted to improve its electrochemical performance. While some progress in this field has been achieved, a number of severe challenges, such as the element’s large volume change during cycling, low intrinsic electronic conductivity, and poor rate capacity, have yet to be solved. Methods to solve these problems have been attempted via the development of nanosized Si materials. Unfortunately, reviews summarizing the work on Si-based alloys are scarce. Herein, the recent progress related to Si-based alloy anode materials is reviewed. The problems associated with Si anodes and the corresponding strategies used to address these problems are first described. Then, the available Si-based alloys are divided into Si/Li-active and inactive systems, and the characteristics of these systems are discussed. Other special systems are also introduced. Finally, perspectives and future outlooks are provided to enable the wider application of Si-alloy anodes to commercial LIBs.
2021, vol. 28, no. 10, pp. 1565-1583. https://doi.org/10.1007/s12613-020-2239-1
Abstract:
All-solid-state Li-ion batteries (ASSLIBs) have been widely studied to achieve Li-ion batteries (LIBs) with high safety and energy density. Recent reviews and experimental papers have focused on methods that improve the ionic conductivity, stabilize the electrochemical performance, and enhance the electrolyte/electrode interfacial compatibility of several solid-state electrolytes (SSEs), including oxides, sulfides, composite and gel electrolytes, and so on. Garnet-structured Li7La3Zr2O12 (LLZO) is highly regarded an SSE with excellent application potential. However, this type of electrolyte also possesses a number of disadvantages, such as low ionic conductivity, unstable cubic phase, and poor interfacial compatibility with anodes/cathodes. The benefits of LLZO have urged many researchers to explore effective solutions to overcome its inherent limitations. Herein, we review recent developments on garnet-structured LLZO and provide comprehensive insights to guide the development of garnet-structured LLZO-type electrolytes. We not only systematically and comprehensively discuss the preparation, element doping, structure, stability, and interfacial improvement of LLZOs but also provide future perspectives for these materials. This review expands the current understanding on advanced solid garnet electrolytes and provides meaningful guidance for the commercialization of ASSLIBs.
2021, vol. 28, no. 10, pp. 1584-1602. https://doi.org/10.1007/s12613-021-2278-2
Abstract:
In the past few years, the all-solid lithium battery has attracted worldwide attentions, the ionic conductivity of some all-solid lithium-ion batteries has reached 10−3–10−2 S/cm, indicating that the transport of lithium ions in solid electrolytes is no longer a major problem. However, some interface issues become research hotspots. Examples of these interfacial issues include the electrochemical decomposition reaction at the electrode–electrolyte interface; the low effective contact area between the solid electrolyte and the electrode etc. In order to solve the issues, researchers have pursued many different approaches. The addition of a buffer layer between the electrode and the solid electrolyte has been at the center of this endeavor. In this review paper, we provide a systematic summarization of the problems on the electrode–solid electrolyte interface and detailed reflection on the latest works of buffer-based therapies, and the review will end with a personal perspective on the improvement of buffer-based therapies.
2021, vol. 28, no. 10, pp. 1603-1610. https://doi.org/10.1007/s12613-020-2247-1
Abstract:
Silicon (Si) particles were functionalized using carbon dots (CDs) to enhance the interaction between the Si particles and the binders. First, CDs rich in polar groups were synthesized using a simple hydrothermal method. Then, CDs were loaded on the Si surface by impregnation to obtain the functionalized Si particles (Si/CDs). The phases and microstructures of the Si/CDs were observed using Fourier-transform infrared reflection, X-ray diffraction, scanning electron microscopy, and high-resolution transmission electron microscopy. Si/CDs were used as the active material of the anode for electrochemical performance experiments. The electrochemical performance of the Si/CD electrode was assessed using cyclic voltammetry, electrochemical impedance spectroscopy, and constant current charge and discharge experiment. The electrodes prepared with Si/CDs showed good mechanical structure stability and electrochemical performance. After 150 cycles at 0.2 C, the capacity retention rate of the Si/CD electrode was 64.0%, which is twice as much as that of pure Si electrode under the same test conditions.
2021, vol. 28, no. 10, pp. 1611-1620. https://doi.org/10.1007/s12613-021-2266-6
Abstract:
Silicon anodes are considered to have great prospects for use in batteries; however, many of their defects still need to be improved. The preparation of hybrid materials based on porous carbon is one of the effective ways to alleviate the adverse impact resulting from the volume change and the inferior electronic conductivity of a silicon electrode. Herein, a chain-like carbon cluster structure is prepared, in which MOF-derived porous carbon acts as a shell structure to integrally encapsulate Si nanoparticles, and CNTs play a role in connecting carbon shells. Based on the exclusive structure, the carbon shell can accommodate the volume expansion more effectively, and CNTs can improve the overall stability and conductivity. The resulting composite reveals excellent rate capacity and enhanced cycling stability; in particular, a capacity of 732 mA·h·g−1 at 2 A·g−1 is achieved with a reservation rate of 72.3% after cycling 100 times at 1 A·g−1.
2021, vol. 28, no. 10, pp. 1621-1628. https://doi.org/10.1007/s12613-021-2270-x
Abstract:
The silicon-based material exhibits a high theoretical specific capacity and is one of the best anode for the next generation of advanced lithium-ion batteries (LIBs). However, it is difficult for the silicon-based anode to form a stable solid-state interphase (SEI) during Li alloy/de-alloy process due to the large volume change (up to 300%) between silicon and Li4.4Si, which seriously limits the cycle life of the LIBs. Herein, we use strontium fluoride (SrF2) particle to coat the silicon−carbon (Si/C) electrode (SrF2@Si/C) to help forming a stable and high mechanical strength SEI by spontaneously embedding the SrF2 particle into SEI. Meanwhile the formed SEI can inhibit the volume expansion of the silicon−carbon anode during the cycle. The electrochemical test results show that the cycle performance and the ionic conductivity of the SrF2@Si/C anode has been significantly improved. The X-ray photoelectron spectroscopy (XPS) analysis reveals that there are fewer electrolyte decomposition products formed on the surface of the SrF2@Si/C anode. This study provides a facile approach to overcome the problems of Si/C electrode during the electrochemical cycling, which will be beneficial to the industrial application of silicon-based anode materials.
2021, vol. 28, no. 10, pp. 1629-1635. https://doi.org/10.1007/s12613-021-2249-7
Abstract:
Antimony sulfide (Sb2S3) is a promising anode for lithium-ion batteries due to its high capacity and vast reserves. However, the low electronic conductivity and severe volume change during cycling hinder its commercialization. Herein our work, a three-dimensional (3D) Sb2S3 thin film anode was fabricated via a simple vapor transport deposition system by using natural stibnite as raw material and stainless steel fiber-foil (SSF) as 3D current collector, and a carbon nanotube interphase was introduced onto the film surface by a simple dropping-heating process to promote the electrochemical performances. This 3D structure can greatly improve the initial coulombic efficiency to a record of 86.6% and high reversible rate capacity of 760.8 mAh·g−1 at 10 C. With carbon nanotubes interphase modified, the Sb2S3 anode cycled extremely stable with high capacity retention of 94.7% after 160 cycles. This work sheds light on the economical preparation and performance optimization of Sb2S3-based anodes.
2021, vol. 28, no. 10, pp. 1636-1646. https://doi.org/10.1007/s12613-021-2289-z
Abstract:
Anion-immobilized solid composite electrolytes (SCEs) are important to restrain the propagation of lithium dendrites for all solid-state lithium metal batteries (ASSLMBs). Herein, a novel SCEs based on metal-organic frameworks (MOFs, UiO-66-NH2) and superacid ZrO2 (S-ZrO2) fillers are proposed, and the samples were characterized by X-ray diffraction (XRD), scanning electron microscope (SEM), energy dispersive X-ray spectroscopy (EDS), thermo-gravimetric analyzer (TGA) and some other electrochemical measurements. The –NH2 groups of UiO-66-NH2 combines with F atoms of poly(vinylidene fluoride-co-hexafluoropropylene) (PVDF-HFP) chains by hydrogen bonds, leading to a high electrochemical stability window of 5 V. Owing to the incorporation of UiO-66-NH2 and S-ZrO2 in PVDF-HFP polymer, the open metal sites of MOFs and acid surfaces of S-ZrO2 can immobilize anions by strong Lewis acid-base interaction, which enhances the effect of immobilization anions, achieving a high Li-ion transference number (t+) of 0.72, and acquiring a high ionic conductivity of 1.05×10–4 S·cm–1 at 60°C. The symmetrical Li/Li cells with the anion-immobilized SCEs may steadily operate for over 600 h at 0.05 mA·cm–2 without the short-circuit occurring. Besides, the solid composite Li/LiFePO4 (LFP) cell with the anion-immobilized SCEs shows a superior discharge specific capacity of 158 mAh·g–1 at 0.2 C. The results illustrate that the anion-immobilized SCEs are one of the most promising choices to optimize the performances of ASSLMBs.
2021, vol. 28, no. 10, pp. 1647-1655. https://doi.org/10.1007/s12613-021-2315-1
Abstract:
To improve the sulfur loading capacity of lithium-sulfur batteries (Li–S batteries) cathode and avoid the inevitable “shuttle effect”, hollow N doped carbon coated CoO/SnO2 (CoO/SnO2@NC) composite has been designed and prepared by a hydrothermal-calcination method. The specific surface area of CoO/SnO2@NC composite is 85.464 m2·g–1, and the pore volume is 0.1189 cm3·g–1. The hollow core-shell structure as a carrier has a sulfur loading amount of 66.10%. The initial specific capacity of the assembled Li–S batteries is 395.7 mAh·g–1 at 0.2 C, which maintains 302.7 mAh·g–1 after 400 cycles. When the rate increases to 2.5 C, the specific capacity still has 221.2 mAh·g–1. The excellent lithium storage performance is attributed to the core-shell structure with high specific surface area and porosity. This structure effectively increases the sulfur loading, enhances the chemical adsorption of lithium polysulfides, and reduces direct contact between CoO/SnO2 and the electrolyte.
2021, vol. 28, no. 10, pp. 1656-1665. https://doi.org/10.1007/s12613-021-2319-x
Abstract:
The commercial development of lithium–sulfur batteries (Li–S) is severely limited by the shuttle effect of lithium polysulfides (LPSs) and the non-conductivity of sulfur. Herein, porous g-C3N4 nanotubes (PCNNTs) are synthesized via a self-template method and utilized as an efficient sulfur host material. The one-dimensional PCNNTs have a high specific surface area (143.47 m2·g−1) and an abundance of macro-/mesopores, which could achieve a high sulfur loading rate of 74.7wt%. A Li–S battery bearing the PCNNTs/S composite as a cathode displays a low capacity decay of 0.021% per cycle over 800 cycles at 0.5 C with an initial capacity of 704.8 mAh·g−1. PCNNTs with a tubular structure could alleviate the volume expansion caused by sulfur and lithium sulfide during charge/discharge cycling. High N contents could greatly enhance the adsorption capacity of the carbon nitride for LPSs. These synergistic effects contribute to the excellent cycling stability and rate performance of the PCNNTs/S composite electrode.
2021, vol. 28, no. 10, pp. 1666-1674. https://doi.org/10.1007/s12613-021-2286-2
Abstract:
Ultrafine nano-scale Cu2Sb alloy confined in a three-dimensional porous carbon was synthesized using NaCl template-assisted vacuum freeze-drying followed by high-temperature sintering and was evaluated as an anode for sodium-ion batteries (SIBs) and potassium-ion batteries (PIBs). The alloy exerts excellent cycling durability (the capacity can be maintained at 328.3 mA·h·g−1 after 100 cycles for SIBs and 260 mA·h·g−1 for PIBs) and rate capability (199 mA·h·g−1 at 5 A·g−1 for SIBs and 148 mA·h·g−1 at 5 A·g−1 for PIBs) because of the smooth electron transport path, fast Na/K ion diffusion rate, and restricted volume changes from the synergistic effect of three-dimensional porous carbon networks and the ultrafine bimetallic nanoalloy. This study provides an ingenious design route and a simple preparation method toward exploring a high-property electrode for K-ion and Na-ion batteries, and it also introduces broad application prospects for other electrochemical applications.
2021, vol. 28, no. 10, pp. 1675-1683. https://doi.org/10.1007/s12613-021-2261-y
Abstract:
p-Benzoquinone (BQ) is a promising candidate for next-generation sodium-ion batteries (SIBs) because of its high theoretical specific capacity, good reaction reversibility, and high resource availability. However, practical application of BQ faces many challenges, such as a low discharge plateau (~2.7 V) as cathode material or a high discharge plateau as anode material compared with inorganic materials for SIBs and high solubility in organic electrolytes, resulting in low power and energy densities. Here, tetrahydroxybenzoquinone tetrasodium salt (Na4C6O6) is synthesized through a simple neutralization reaction at low temperatures. The four –ONa electron-donating groups introduced on the structure of BQ greatly lower the discharge plateau by over 1.4 V from ~2.70 V to ~1.26 V, which can change BQ from cathode to anode material for SIBs. At the same time, the addition of four –ONa hydrophilic groups inhibits the dissolution of BQ in the organic electrolyte to a certain extent. As a result, Na4C6O6 as the anode displays a moderate discharge capacity and cycling performance at an average work voltage of ~1.26 V versus Na/Na+. When evaluated as a Na-ion full cell (NIFC), a Na3V2(PO4)3 || Na4C6O6 NIFC reveals a moderate discharge capacity and an average discharge plateau of ~1.4 V. This research offers a new molecular structure design strategy for reducing the discharge plateau and simultaneously restraining the dissolution of organic electrode materials.
2021, vol. 28, no. 10, pp. 1684-1692. https://doi.org/10.1007/s12613-021-2312-4
Abstract:
Aqueous zinc-ion batteries (ZIBs) are deemed as the idea option for large-scale energy storage systems owing to many alluring merits including low manufacture cost, environmental friendliness, and high operations safety. However, to develop high-performance cathode is still significant for practical application of ZIBs. Herein, Ba0.23V2O5·1.1H2O (BaVO) nanobelts were fabricated as cathode materials of ZIBs by a typical hydrothermal synthesis method. Benefiting from the increased interlayer distance of 1.31 nm by Ba2+ and H2O pre-intercalated, the obtained BaVO nanobelts showed an excellent initial discharge capacity of 378 mAh·g−1 at 0.1 A·g−1, a great rate performance (e.g., 172 mAh·g−1 at 5 A·g−1), and a superior capacity retention (93% after 2000 cycles at 5 A·g−1).
2021, vol. 28, no. 10, pp. 1693-1704. https://doi.org/10.1007/s12613-021-2328-9
Abstract:
The microstructural properties and electrochemical performance of zinc (Zn) sacrificial anodes during strain-induced melt activation (SIMA) were investigated in this study. The samples were subjected to a compressive ratio of 20%–50% at various temperatures (425–435°C) and durations (5–30 min). Short-term electrochemical tests (anode tests) based on DNV-RP-B401 and potentiodynamic polarization tests were performed in 3.5wt% NaCl solution to evaluate the electrochemical efficiency and corrosion behavior of the samples, respectively. The electrochemical test results for the optimum sample confirmed that the corrosion current density declined by 90% and the anode efficiency slightly decreased relative to that of the raw sample. Energy-dispersive X-ray spectroscopy, scanning electron microscopy, metallographic images, and microhardness profiles showed the accumulation of alloying elements on the boundary and the conversion of uniform corrosion into localized corrosion, hence the decrease of the Zn sacrificial anode’s efficiency after the SIMA process.
2021, vol. 28, no. 10, pp. 1705-1715. https://doi.org/10.1007/s12613-021-2258-6
Abstract:
Mg–Sn–Y alloys with different Sn contents (wt%) were assessed as anode candidates for Mg-air batteries. The relationship between microstructure (including the second phase, grain size, and texture) and discharge properties of the Mg–Sn–Y alloys was examined using microstructure observation, electrochemical measurements, and galvanostatic discharge tests. The Mg–0.7Sn–1.4Y alloy had a high steady discharge voltage of 1.5225 V and a high anodic efficiency of 46.6% at 2.5 mA·cm−2. These good properties were related to its microstructure: small grain size of 3.8 μm, uniform distribution of small second phase particles of 0.6 μm, and a high content (vol%) of (\begin{document}$11\overline{2}0$\end{document})/(\begin{document}$10\overline{1}0$\end{document}) orientated grains. The scanning Kelvin probe force microscopy (SKPFM) indicated that the Sn3Y5 and MgSnY phases were effective cathodes causing micro-galvanic corrosion which promoted the dissolution of Mg matrix during the discharge process.
2021, vol. 28, no. 10, pp. 1716-1725. https://doi.org/10.1007/s12613-020-2186-x
Abstract:
Synthesizing atomically precise Ag nanoclusters (NCs), which is essential for the general development of NCs, is quite challenging. In this study, we report the synthesis of high-purity atomically precise Ag NCs via a kinetically controlled strategy. The Ag NCs were prepared using a mild reducing agent via a one-pot method. The as-prepared Ag NCs were confirmed to be Ag49(D-pen)24 (D-pen: D-penicillamine) on the basis of their matrix-assisted laser desorption ionization time-of-flight mass spectrometric and thermogravimetric characteristics. The interfacial structures of the Ag NCs were illustrated by proton nuclear magnetic resonance and Fourier-transform infrared spectroscopy. The Ag NCs were supported on activated carbon (AC) to form Ag NCs/AC, which displayed excellent activity for the catalytic reduction of 4-nitrophenol with a kinetic reaction rate constant k of 0.21 min−1. Such a high k value indicates that the composite could outperform several previously reported catalysts. Moreover, the catalytic activity of Ag NCs/AC remained nearly constant after six times of recycle, which suggests its excellent stability.
2021, vol. 28, no. 10, pp. 1726-1734. https://doi.org/10.1007/s12613-020-2146-5
Abstract:
The properties of γ-ray-reduced graphene oxide samples (GRGOs) were compared with those of hydrazine hydrate-reduced graphene oxide (HRGO). Fourier transform infrared spectroscopy, X-ray diffractometry, Raman spectroscopy, Brunauer–Emmett–Teller surface area analysis, thermogravimetric analysis, electrometry, and cyclic voltammetry were carried out to verify the reduction process, structural changes, and defects of the samples, as well as to measure their thermal, electrical, and electrochemical properties. Irradiation with γ-rays distorted the structure of GRGOs and generated massive defects through the extensive formation of new smaller sp2-hybridized domains compared with those of HRGO. The thermal stability of GRGOs was higher than that of HRGO, indicating the highly efficient removal of thermally-labile oxygen species by γ-rays. RRGO prepared at 80 kGy showed a pseudocapacitive behavior comparable with the electrical double-layer capacitance behavior of HRGO. Interestingly, the specific capacitance of GRGO was enhanced by nearly three times compared with that of HRGO. These results reflect the advantages of radiation reduction in energy storage applications.
2021, vol. 28, no. 10, pp. 1735-1744. https://doi.org/10.1007/s12613-021-2272-8
Abstract:
MnO2/biomass carbon nanocomposite was synthesized by a facile hydrothermal reaction. Silkworm excrement acted as a carbon precursor, which was activated by ZnCl2 and FeCl3 combining chemical agents under Ar atmosphere. Thin and flower-like MnO2 nanowires were in-situ anchored on the surface of the biomass carbon. The biomass carbon not only offered high conductivity and good structural stability but also relieved the large volume expansion during the charge/discharge process. The obtained MnO2/biomass carbon nanocomposite electrode exhibited a high specific capacitance (238 F·g−1 at 0.5 A·g−1) and a superior cycling stability with only 7% degradation after 2000 cycles. The observed good electrochemical performance is accredited to the materials’ high specific surface area, multilevel hierarchical structure, and good conductivity. This study proposes a promising method that utilizes biological waste and broadens MnO2-based electrode material application for next-generation energy storage and conversion devices.
|
# Browse College of Agricultural, Consumer and Environmental Sciences (ACES) by Title
• (Urbana, Ill. : University of Illinois, College of Agriculture, Extension Service in Agriculture and Home Economics,, 1962)
application/pdf
PDF (597Kb)
• (Urbana, Ill. : University of Illinois College of Agriculture and Agricultural Experiment Station,, 1930)
application/pdf
PDF (4Mb)
• (2002)
Results include an analysis of the water balance components over periods bounded with similar satiated moisture storage conditions. The field site was found to have an excess storage during most periods which is partially ...
application/pdf
PDF (11Mb)
• (2013-02-03)
Polycystic ovary syndrome (PCOS) is the most common endocrine disorder of reproductive age women and has received substantial research time and financial support over the past few decades. Despite on-going research efforts, ...
application/pdf
PDF (2Mb)
• (Urbana, Ill. : University of Illinois Agricultural Experiment Station,, 1937)
application/pdf
PDF (5Mb)
• (2014-09-16)
Global agricultural markets reflect the increasing complexity of modern consumer demand for food safety and quality. This demand has triggered changes throughout the food industry, and led to greater opportunities for ...
application/pdf
PDF (1Mb)
• (1955)
application/pdf
PDF (2Mb)
• (2007)
Production and profitability of corn (Zea mays L.) is highly dependent on fertilizer nitrogen (N) use. The efficiency of fertilizer N use and the capacity of the soil to supply N are the two main components determining the ...
application/pdf
PDF (2Mb)
• (Urbana, Ill. : University of Illinois Agricultural Experiment Station,, 1935)
application/pdf
PDF (4Mb)
• (1989)
Climate effects on corn and soybean production on two representative midwestern grain farms are incorporated into production function estimates by using physiologic crop growth simulation models over fourteen years of ...
application/pdf
PDF (7Mb)
• (2006)
The objectives of this research were to survey the variability in a sample set of commercial hybrids and then identify different hybrids to determine relationships between starch rheological characteristics and food ...
application/pdf
PDF (3Mb)
• (Urbana, Ill. : University of Illinois Agricultural Experiment Station, 1957)
application/pdf
PDF (2Mb)
• (2000)
The characterization of spatial variability in soil properties is a prerrequisite to many activities, such as site specific management (i.e., precision farming), groundwater and solute tranport modeling, groundwater pollution ...
application/pdf
PDF (41Mb)
• (2000)
The characterization of spatial variability in soil properties is a prerequisite to many activities, such as site specific management (i.e., precision farming), groundwater and solute transport modeling, groundwater pollution ...
application/pdf
PDF (11Mb)
• (1992)
Sequences with various levels of homology to knob have been isolated from T. dactyloides. These sequences have various degrees of cross-hybridization to knob and are apparently arranged differently in the genome as ascertained ...
application/pdf
PDF (3Mb)
• (1980)
A series of experiments dealing with variation as well as inheritance of hardseededness in soybeans was conducted from July 1977 to July 1979. Field and greenhouse plantings were made at Isabela Experiment Station, Puerto ...
application/pdf
PDF (3Mb)
• (1996)
Studies were conducted on eight soft red winter wheat cultivars to evaluate differences in root growth, to evaluate the effects of barley yellow dwarf virus (BYDV) on root and shoot growth, and to study the effects of a ...
application/pdf
PDF (5Mb)
• (1954)
application/pdf
PDF (6Mb)
• (1991)
The I locus controls pattern expression of anthocyanin pigments in soybean seed coats. Using isolines of soybean cultivars varying only at the I locus, a seed coat cell wall protein that is affected by seed coat color was ...
application/pdf
PDF (4Mb)
• (1998)
To estimate the actual physical distance and to identify the putative DNA sequences responsible for variation in $\Theta$, a partial YAC contig was constructed for the D23S22-D23S23 interval. Primary screening of a YAC ...
application/pdf
PDF (5Mb)
|
#### Vol. 306, No. 2, 2020
Recent Issues Vol. 307: 1 2 Vol. 306: 1 2 Vol. 305: 1 2 Vol. 304: 1 2 Vol. 303: 1 2 Vol. 302: 1 2 Vol. 301: 1 2 Vol. 300: 1 2 Online Archive Volume: Issue:
The Journal Editorial Board Subscriptions Officers Special Issues Submission Guidelines Submission Form Contacts ISSN: 1945-5844 (e-only) ISSN: 0030-8730 (print) Author Index To Appear Other MSP Journals
The homotopy groups of the $\eta$-periodic motivic sphere spectrum
### Kyle Ormsby and Oliver Röndigs
Vol. 306 (2020), No. 2, 679–697
##### Abstract
We compute the homotopy groups of the $\eta$-periodic motivic sphere spectrum over a field $\mathsf{k}$ of finite cohomological dimension with characteristic not $2$ and in which $-1$ is a sum of four squares. We also study the general characteristic $0$ case and show that the ${\alpha }_{1}$-periodic slice spectral sequence over $ℚ$ determines the ${\alpha }_{1}$-periodic slice spectral sequence over all extensions $\mathsf{k}∕ℚ$. This leads to a speculation on the role of a “connective Witt-theoretic $J$-spectrum” in $\eta$-periodic motivic homotopy theory.
However, your active subscription may be available on Project Euclid at
https://projecteuclid.org/pjm
We have not been able to recognize your IP address 18.208.202.194 as that of a subscriber to this journal.
Online access to the content of recent issues is by subscription, or purchase of single articles.
or by using our contact form.
|
# Feeling Your Way Around in High Dimensions
The simplest objects of interest in any dimension, which are also the basis for approximating arbitrary objects, are the convex polytopes and in this column I'll explain how to begin to probe them...
Bill Casselman
University of British Columbia, Vancouver, Canada
Email Bill Casselman
### Introduction
Visualization of objects in dimensions greater than three is not possible, which means that it is very difficult to comprehend objects in those dimensions. You cannot see the whole thing all in one glance. All one can do is probe with mathematics to verify a limited number of selected properties, like the proverbial blind men dealing with the elephant. The simplest objects of interest in any dimension, which are also the basis for approximating arbitrary objects, are the convex polytopes and in this column I'll explain how to begin to probe them.
Convex polytopes occur in many applications of mathematics, particularly related to linear optimization problems in the domain of linear programming. There is a huge literature on this topic, but I shall be interested in something different, because even in theoretical mathematics interesting problems involving convex polytopes arise. The terminology traditionally used in linear programming originated in applications, perhaps with non-mathematicians in mind, and it diverges considerably from standard mathematical terminology. Instead, I shall use notation more conventionally used in geometry.
### What is a convex polytope?
In any dimension $n$, a convex set is one with the property that the line segment between any two points in it is also in the set. For example, billiard balls and cubes are convex, but coffee cups and doughnuts are not. There is another way to characterize convex sets, at least if they are closed: A closed convex set is the intersection of all closed half planes containing it.
There is a third way to characterize closed convex sets. The convex hull of any subset $\Sigma$ of ${\Bbb R}^{n}$ is the set of all linear combinations $$\sum_{i=1}^{m} c_{i} P_{i}$$ with each $P_{i}$ in $\Sigma$, each $c_{i} \ge 0$, and $\sum c_{i} = 1$. If the set has just two elements in it, this is the line segment between them, and if it has three this will be the triangle with these points as vertices. In general, each point of the convex hull of $\Sigma$ is a weighted average of points in $\Sigma$. A point of a set is extremal if it is not in the interior of a line segment between two points of the set. For example, the points on the unit sphere are the extremal points of the solid sphere, and the extremal points of a cube are its corners. Every closed convex set is the convex hull of its extremal points.
A closed convex set is called a polytope if it is the convex hull of a finite number of points and rays, like this:
Any closed convex set is the intersection of a collection of half-planes, but for a polytope this collection may be taken to be finite. In other words, a convex polytope is the intersection of a finite number of half-spaces $f(x) \ge 0$, where $f$ is a function defined by a formula $$f(x) = \Big(\sum_{i=1}^{n} c_{i} x_{i}\Big) + c_{n+1}.$$
So there are two ways in which a polytope might be handed to you, so to speak. In the first, you are given a finite set of points and a finite set of rays, and the polytope is the convex hull of these. In the other, you are given a finite collection of affine functions $f_{i}(x)$, and the polytope is the intersection of the regions $f_{i}(x) \ge 0$. Each of these two ways of specifying a polytope is important in some circumstances, and one of the basic operations in working with convex polytopes is to transfer back and forth between one specification and the other.
There are other natural questions to ask, too. A face of a polytope is any one of the polytopes on its boundary. A facet is a proper face of maximal dimension. Here are some natural problems concerning a given convex polytope, which I suppose given as the intersection of half-planes:
• to tell whether the region is bounded;
• if it is bounded, to find its extremal vertices, and if not, its extremal edges as well;
• if it is bounded, to find its volume;
• to find the maximum value of a linear function on the polytope;
• to find the dimension of the polytope (taken to be $-1$ if the region is empty, which it may very well be).
But here I have space for exactly one of these questions and some related issues: How to find the dimension of a polytope specified by a set of inequalities? This is by no means a trivial matter. For example, the inequalities
\eqalign { x &\ge 0 \cr y &\ge 0 \cr x + y &\le 1 \cr }
define a triangle, whereas
\eqalign { x &\ge 0 \cr y &\ge 0 \cr x + y &\le 0 \cr }
specifies just the origin, and
\eqalign { x &\ge 0 \cr y &\ge 0 \cr x + y &\le -1 \cr }
specifies an empty set. So the answer to the question does not depend on just the algebraic form of the inequalities.
I include also links to a collection of Python programs that illustrate what I say in a practical fashion.
### Affine functions and affine transformations
The basic idea of of dealing with a convex polytope is to find one or more coordinate systems well adapted to it. I shall assume that the inequalities you are given to start with are all of the form $f(x) \ge 0$, where $f(x)$ is an affine function, one of the form $\Big(\sum_{i=1}^{n} c_{n}x_{n}\Big) + c_{n+1}$. This differs from a linear function by the presence of the constant. This assumption is easy enough to arrange by changing signs and shifting variables from one side of an equation to another.
A coordinate system well adapted to a polytope will have as origin a vertex of the polytope. But there is one technical point to be dealt with, right off. There might not be any vertices!. There are two possible reasons for this. One is that the region defined by the inequalities might be empty. But there is another possibility. The region $x + y - 1 \ge 0$ in ${\Bbb R}^{2}$ has no vertices--every point in it is in a line segment between two other points in the region. A similar example, although not so evidently, is the region $\Omega$ defined by the inequalities \eqalign { x + y + z + 1 &\ge 0 \cr x &\ge 0 \cr y + z - 2 &\ge 0 \cr } in 3D. Seeing this requires a small amount of work. The point is that the linear parts of the group of functions here are linearly dependent, since $$x+ y + z = x + (y+z)$$ The planes $x=0$ and $y+z=0$ intersect in a line through the origin, which happens to be the line through the vector $[0, 1, -1]$. The functions $x$ and $y+z$, hence also $x+y+z$ are invariant along lines parallel to this line. If $P$ is any point in $\Omega$, so are all the points $P + t[0, 1, -1]$. Another way of saying this is that the functions defining $\Omega$ are all invariant with respect to translations by scalar multiples of the vector $[0, 1, -1]$.
This suggests that to understand any region $\Omega$ defined by inequalities $f_{i}(x) \ge 0$, we must find out all the vectors whose translations leave the functions $f_{i}(x)$ invariant. Once we know that, we shall reduce the complexity of our problem by looking at a slice through the region that is not invariant under any of these, and which does have vertices. We may as well take this to be $\Omega$ in subsequent probing.
We shall do this at the same time we carry out another simple but necessary task. The inequalities defining $\Omega$ have no particular relationship to whatever coordinate system we start with. We would like to find a coordinate system that is somehow adapted to it. A first step is to require:
I. The coordinates are among the affine functions specifying $\Omega$ to start with.
A second step is to require:
II. The origin of the coordinate system must be a vertex of $\Omega$.
I'll call a coordinate system satisfying both conditions a coordinate system adapted to $\Omega$ at that vertex. Having such a coordinate system is equivalent to being given a linear basis of unit vectors at that point, which I'll call a vertex frame. In a first phase of computation we shall find a coordinate system satisfying I, and find one satisfying II in a second phase. In satisfying I, we shall compute at the same time the translations leaving the original $\Omega$ invariant.
So, given the original inequalities, we are going to make some coordinate changes, first to get I, then to get both I and II. These will be affine coordinate changes. The way to do this is hinted at by the example we have already looked at. If we take as new coordinates the functions $$X = x, Y = y+z-2, Z = z$$ the inequalities can be expressed as \eqalign { X &\ge 0 \cr Y &\ge 0 \cr X + Y - 1 &\ge 0 \, . \cr }
From these equations we can see that the region is invariant with respect to translations along the $Z$-axis (in the new coordinates), and that the new origin $(X, Y) = (0, 0)$ is a vertex of the slice $Z = 0$ of $\Omega$.
### Affine coordinate changes
Suppose we start with coordinates $(x_{1}, x_{2})$ in two dimensions. An affine coordinate change to new coordinates $(y_{1}, y_{2})$ is determined by a pair of equations \eqalign { x_{1} &= a_{1,1}y_{1} + a_{1,2}y_{2} + a_{1} \cr x_{2} &= a_{2,1}y_{1} + a_{2,2}y_{2} + a_{2} \cr } or, more succinctly, as a single matrix equation $$\left[ \matrix { x_{1} \cr x_{2} \cr } \right ] = \left[ \matrix { a_{1,1} & a_{1,2} \cr a_{21,} & a_{2,2} \cr } \right ] \left[ \matrix { y_{1} \cr y_{2} \cr } \right ] + \left[ \matrix { a_{1} \cr a_{2} \cr } \right ] \quad {\rm or} \quad x = Ay + a \, .$$ There is another useful way to write this, too: $$\left [ \matrix { x \cr 1 \cr } \right ] = \left[ \matrix { A & a \cr 0 & 1 } \right ] \left [ \matrix { y \cr 1 \cr } \right ]$$ The matrix $\left[ \matrix { A & a \cr 0 & 1 } \right ]$ is called an affine transformation matrix.
If we are given an affine function $f(x) = c_{1}x_{1} + c_{2}x_{2} + c$, what is its expression in the new coordinates? We can figure this out by doing only a small amount of calculation. The expression for $f$ can be written as a matrix equation, since $$c_{1}x_{1} + c_{2}x_{2} + c = \left [ \matrix { c_{1} & c_{2} & c } \right ] \left [ \matrix { x_{1} \cr x_{2} \cr 1 } \right ]$$ But if we change coordinates from $x$ to $y$ we have $$f(x) = \left [ \matrix { c_{1} & c_{2} & c } \right ] \left [ \matrix { x \cr 1 \cr } \right ] = \left [ \matrix { c_{1} & c_{2} & c } \right ] \left[ \matrix { A & a \cr 0 & 1 } \right ] \left [ \matrix { y \cr 1 \cr } \right ]$$ so the vector of coefficients of $f$ in the new system is $$\left [ \matrix { c_{1} & c_{2} & c } \right ] \left[ \matrix { A & a \cr 0 & 1 } \right ] \, .$$ More tediously: \eqalign { f &= c_{1}x_{1} + c_{2}x_{2} + c \cr &= c_{1}(a_{1,1}y_{1} + a_{1,2}y_{2} + a_{1}) + c_{2}(a_{2,1}y_{1} + a_{2,2}y_{2} + a_{2} ) + c \cr &= (c_{1}a_{1,1} + c_{2}a_{2,1}) y_{1} + (c_{1}a_{1,2} + c_{2}a_{2,2}) y_{2} + (c_{1}a_{1} + c_{2}a_{2} + c) \, . \cr } Theorem. If we are given an array of row vectors defining affine functions $$F_{x} = \left[ \matrix { f_{1} \cr f_{2} \cr \dots \cr f_{M} \cr } \right ]$$ then applying an affine coordinate transformation will change it to the array whose rows are now given in new coordinates $$F_{y} = F_{x} \left[ \matrix { A & a \cr 0 & 1 \cr } \right ] \, .$$
### Finding a good affine basis
How do we find a coordinate system satifying condition I? Let's re-examine an earlier example. In slightly different notation, we started with the matrix of affine functions $$F_{x} = \left [ \matrix { 1 & 1 & 1 & \phantom{-}1 \cr 1 & 0 & 0 & \phantom{-}0 \cr 1 & 1 & 0 & -2 \cr } \right ]$$ We happened to notice, essentially by accident, that by a change of variables this became $$F_{y} = \left [ \matrix { 1 & 0 & 0 & 0 \cr 0 & 1 & 0 & 0 \cr 1 & 1 & 0 & 1 \cr } \right ]$$ How exactly do we interpret this? How can we do this sort of thing systematically?
The rows of the new matrix still represent the original affine functions, but in a different coordinate system. In this new coordinate system, a subset of the rows will hold a version of the identity matrix. In this example, it is a copy of $I_{2} = \left[\matrix { 1 & 0 \cr 0 & 1 \cr } \right ]$. The corresponding affine functions are a subset of the new coordinates. Ideally, we would get a copy of the full identity matrix $I_{3}$, but in this case the inequalities are in some sense degenerate, and this matches with the related facts (1) we get $I_{2}$ rather than $I_{3}$ and (2) the third column of the new matrix is all zeroes. Very generally, if we start with $m$ inequalities in $n$ dimensions, corresponding to an $m \times (n+1)$ matrix, we want to obtain another $m \times (n+1)$ matrix some of whose rows amount to some $I_{r}$, with $n-r$ vanishing columns. In this case, the inequalities will be invariant with respect to translations by a linear subspace of dimension $n-r$, and in order to go further we want to replace the original region by one in dimension $r$. We do this by simply eliminating those vanishing columns.
How do we accomplish this? By the familiar process of Gaussian elimination applying elementary column operations. In doing this, we shall arrive at something that looks first like $$\left [ \matrix { 1 & 0 & 0 & 0 & * \cr 0 & 1 & 0 & 0 & * \cr * & * & 0 & 0 & * \cr 0 & 0 & 1 & 0 & * \cr * & * & * & 0 & * \cr * & * & * & 0 & * \cr * & * & * & 0 & * \cr } \right ] \; ,$$ and then upon reduction to a slice $$\left [ \matrix { \, 1 & 0 & 0 & * \cr 0 & 1 & 0 & * \cr * & * & 0 & * \cr 0 & 0 & 1 & * \cr * & * & * & * \cr * & * & * & * \cr * & * & * & * \cr } \right ] \; .$$ If the rows giving rise to $I_{r}$ are the $f_{i}$ with $i$ in $I$, then they form the basis of the new coordinate system. In this example, $I = \{ 1, 2, 4 \}$.
But the origin of this system might not be a vertex of the region we are interested in. Suppose we start with the inequalities in 2D \eqalign { x_{1} + 2x_{2} &\ge \phantom{-}1 \cr 2x_{1} + x_{2} &\ge \phantom{-}1 \cr x_{1} + x_{2} &\ge \phantom{-}1 \cr x_{1} - x_{2} &\ge -1 \cr 3x_{1}+2x_{2} &\le \phantom{-}6 \, . \cr } These become the array of functions and corresponding matrix
\eqalign { x_{1} + 2x_{2} - 1 &\ge 0 \cr 2x_{1} + x_{2} - 1 &\ge 0 \cr x_{1} + x_{2} - 1 &\ge 0 \cr x_{1} - x_{2} + 1 &\ge 0 \cr -3 x_{1} - 2x_{2} + 6 &\ge 0 \cr } $$F_{x} = \left [ \matrix { \phantom{-}1 & \phantom{-}2 & -1 \cr \phantom{-}2 & \phantom{-}1 & \phantom{-}1 \cr \phantom{-}1 & \phantom{-}1 & -1 \cr \phantom{-}1 & -1 & \phantom{-}1 \cr -3 & -2 & \phantom{-}6 \cr } \right ] \, .$$
By Gaussian elimination through elementary column operations this becomes $$F_{y} = \left [ \matrix { \phantom{-}1 & \phantom{-}0 & \phantom{-}0 \cr \phantom{-}0 & \phantom{-}1 & \phantom{-}0 \cr \phantom{-}1/3 & \phantom{-}1/3 & -1/3\cr -1 & \phantom{-}1 & \phantom{-}1 \cr -1/3 & -4/3 & \phantom{-}13/3 \cr } \right ] \; .$$ How is this to be read? The coordinate variables are the rows containing the copy of$I_{2}$, namely $f_{1}$ and $f_{2}$. The other functions are \eqalign { f_{3} &= (1/3) f_{1} + (1/3) f_{2} \cr f_{4} &= -f_{1} + f_{2} + 1 \cr f_{5} &= -(1/3)f_{1} -(4/3)f_{2} + 13/3 \, . \cr } The rows display the coefficients of the original functions, but in the current coordinates. The origin in this coordinate system is the point $(1/3, 1/3)$ in the original one. It is not a vertex of the region defined by the inequalities, since the third function in the new coordinate system is $(1/3)y_{1} + (1/3)y_{2} -1/3$, which is negative at $(0, 0)$.
But we are in good shape to find a true vertex of the region.
The traditional simplex method starts with a linear function, say $\varphi$, and a vertex frame for the region $\Omega$. It then moves out from this to other vertex frames and other vertices either (a) to locate a vertex at which $\varphi$ takes a maximum value or (b) find that the region is in fact unbounded, and the function is not bounded on it.
We do not yet have a vertex frame for the region we are interested in. In fact that's what we want to find. But it turns out that from the region we are given and the frame we do have, satisying condition I, we can construct a new convex polytope together with a very particular linear function $\varphi$, with this marvelous property: first of all, we are given in the construction of the new region a vertex frame for it; second, finding a maximum value for $\varphi$ on this new polytope will tell us (a) whether or not the original region is empty and (b) if it is not empty exhibit a vertex frame for it.
### Finding a vertex
So, suppose now that we have a coordinate system whose coordinates are among the affine functions we started with. We must apply a new coordinate transformation to wind up with a coordinate system in which, in addition, the origin satisfies the inequalities, or in other words is a vertex of the region concerned.
The way to do this, as mentioned in the previous section, is through a clever trick. We add one dimension to our system, and consequently a new coordinate variable $x_{n+1}$ as well as as two new inequalities requiring $0 \le x_{n+1} \le 1$. The new variable introduces homogeneous coordinates to the original system, so each inequality $\sum_{1}^{n} c_{i}x_{i} + c_{n+1} \ge 0$ becomes $\sum_{1}^{n} c_{i}x_{i} + c_{n+1}x_{n+1} + 0 \ge 0$. This adds a column to our matrix $F$, and also adds two rows. The two rows correspond to the new inequalities $f_{m+1} = x_{n+1} \ge 0$ and $f_{m+2} = -x_{n+1} + 1 \ge 0$. For our last example, the new matrix is $$\left [ \matrix { \phantom{-}1 & \phantom{-}0 & \phantom{-}0 & 0 \cr \phantom{-}0 & \phantom{-}1 & \phantom{-}0 & 0 \cr \phantom{-}1/3 & \phantom{-}1/3 & -1/3 & 0 \cr -1 & \phantom{-}1 & \phantom{-}1 & 0 \cr -1/3 & -4/3 & \phantom{-}13/3 & 0 \cr \phantom{-}0 & \phantom{-}0 & \phantom{-}1 & 0 \cr \phantom{-}0 & \phantom{-}0 & -1 & 1 \cr } \right ] \; .$$ In the hyperplane $x_{n+1} = 0$ there is just one point of this region, at which all the $x_{i} = 0$. Its intersection with the hyperplane $x_{n+1} = 1$, or $1 - x_{n+1} = 0$, is exactly our original region. Basically, what we have done is to erect a cone of height $1$ over the original region.
The point of this is that for this new problem the coordinate origin is a vertex and a coordinate system satisfying both conditions I and II. Given this, we can pursue traditional methods of linear programming, moving around from one vertex frame to another, to find a maximum value of the affine function $x_{n+1}$. This will have one of two possible outcomes. Either the maximum value remains at $0$, in which case the original domain is empty, or the maximum value is $1$, in which case we have a vertex of the original region, which is embedded into the hyperplane $x_{n+1} = 1$.
In our example, the second case occurs. We find ourselves with the matrix $$\left [ \matrix { -1 & -1 & \phantom{-}3 & 1\cr \phantom{-}0 & \phantom{-}1 & \phantom{-}0 & 0\cr \phantom{-}0 & \phantom{-}0 & \phantom{-}1 & 0\cr \phantom{-}0 & \phantom{-}2 & -3 & 0\cr -4 & -1 & -1 & 4\cr -1 & \phantom{-}0 & \phantom{-}0 & 1\cr \phantom{-}1 & \phantom{-}0 & \phantom{-}0 & 0\cr } \right ]$$ How is this to be read? There are three coordinate functions, singled out by the copy of the identity matrix $I_{3}$: $f_{6}$, $f_{2}$, $f_{3}$. The other functions are \eqalign { f_{1} &= -f_{6} -f_{1} + 3f_{2} + 1 \cr f_{4} &= 2f_{1} - 3f_{2} \cr f_{5} &= -4f_{6} -f_{1} - f_{2} + 4 \cr f_{6} &= -f_{7} + 1 \cr } To interpret in terms of the original problem, we set $f_{7} = 0$ (i.e. $f_{6} = 1$). Looking at the picture, you can see that this is associated to the vertex at $(0, 1)$ in original coordinates.
### Finding the dimension
So far, we have two possible outcomes---either the region we are considering is empty, or we have located one of its vertices and a vertex frame at that vertex as well. In the first case, we have in effect learned that the region has dimension $-1$. How do we find the dimension in the second case?
To explain what happens next, I have to introduce a new notion, that of the tangent cone at a vertex. The figures below should suffice to explain this. On the left is a 2D region defined by inequalities that are indicated by lines. On the right is an illustration of one of the vertices, along with its tangent cone.
Finding dimension is a purely local affair, so we don't have to worry any more with other vertices---we can throw away all those functions which are not $0$ at the given vertex. The region defined by the inequalities is now the tangent cone at the vertex. Apply the simplex method to maximize the last coordinate variable. If it is $0$, that variable is a constant $0$ on the cone, and may be discarded. Then try again with the new last variable.
Otherwise, it will be infinite, and there is a real edge of the cone leading away from our vertex along which this variable increases. In this case, discard all the functions which depend on this variable, and with the ones remaining, set it equal to $0$. We are in effect looking at a degenerate cone containing this edge, and we may replace this degenerate cone by a slice transverse to the edge, which is also a cone. The dimension of the original region is one more than the dimension of this new cone, and it is easy to see that we have for this cone also a coordinate system satisfying I and II.
In effect, we are looking at a series of recursive steps in which dimensions decrease at each step. Sooner or later we arrive at a region of dimension $1$, which is straightforward.
### Final remarks
In a sequel, I'll look at the problem of listing all vertices of a polytope, in effect converting the specification of a polytope in terms of inequalities to its dual specification as the convex hull of its vertices.
• Hermann Minkowski, Theorie der konvexen Körper, insbesondere Begründung ihres Oberflächenbegriffs', published only as item XXV in volume II of his Gesammelte Abhandlungen, Leipzig, 1911.
As far as I know, it was Minkowski who first examined convex sets in detail, maybe the first great mathematician to take them seriously.
• Hermann Weyl, The elementary theory of convex polytopes', pp. 3--18 in volume I of Contributions to the theory of to the theory of games, Annals of Mathematics Studies 21, Princeton University Press, 1950.
This is a translation by H. Kuhn of the original article `Elementare Theorie der konvexen Polyeder', Commentarii Mathematici Helvetici 17. 1935, pp. 290--306. At this time, game theory and linear programing had just made the subject of convex polytopes both interesting and practical.
• Vašek Chvátal. Linear programming, W. H. Freeman, 1983.
This is the classic textbook. Somewhat verbose for someone who is already familiar with linear algebra, but still worth reading.
• Günter Ziegler, Lectures on polytopes, Springer, 1995.
A deservedly popular textbook, neither too elementary nor too difficult.
### Notes
• Points and vectors. Points are positions, expressed as $(x, y, z)$. Vectors are relative displacements, expressed as $[x,y,z]$. As they say in physics classes, vectors have no position. Vectors give rise to translations: $v$ shifts $P$ to $P + v$. So you can add vectors to each other, and you can add points and vectors. But you cannot legitimately add two points. The two notions are often confused, since the coordinates of a position are those of its displacement from the origin. But origins depend on a particular choice of coordinate system, so this is not an intrinsic identification.
• Programs. Look at the web page LP-python.html.
Bill Casselman
University of British Columbia, Vancouver, Canada
Email Bill Casselman
The AMS encourages your comments, and hopes you will join the discussions. We review comments before they're posted, and those that are offensive, abusive, off-topic or promoting a commercial product, person or website will not be posted. Expressing disagreement is fine, but mutual respect is required.
Welcome to the
Feature Column!
These web essays are designed for those who have already discovered the joys of mathematics as well as for those who may be uncomfortable with mathematics.
|
Angles of rotation
I am haunted by a problem of angles of rotation. Here's my nightmare. At the origin of an inertial frame $R_0(XYZ)$, there is a ball. Another reference frame $R(xyz)$ is fixed to the center of this ball. At the beginning, the origin of $R_0$ coincides the origin of $R$, and $x\parallel X$, $y\parallel Y$, $z\parallel Z$ and in the same orientation. Then the ball begins to roll. Firstly, it rotates by an angle of $\psi$ around the axe $Z$ in $R_0$; secondly, it rotates by an angle of $\theta$ around the 'new' axe $x$ in $R$; finally, it rotates by an angle of $\phi$ around the 'final' axe $y$ in $R$. My question is, after these three rotations, could we express the angles of rotation ($\theta_X$,$\theta_Y$,$\theta_Z$)of the ball with respect to the 3 axes $X$,$Y$ and $Z$ of inertial frame $R_0$? I tried different ways, but it seems to be too complexe to express them without using integration. Is there any existing mathematical formulation to deal this problem? Thank you for taking a look!
Hint:
You can solve this problem representing the rotations by means of quaternions. (see here and, more briefly: Representing rotations using quaternions ).
Let $\mathbf{i},\mathbf{j},\mathbf{k}$ the three versors of the axis $X,Y,Z$ in $R_0$. The first rotation of the $x,y,z$ axis around $Z$ of angle $\psi$, is given, in exponential notation, by: $$\mathbf{x_1}=e^{\mathbf{k}\psi/2}\mathbf{i}e^{\mathbf{k}\psi/2}=\mathbf{i}\cos \psi+\mathbf{j} \sin \psi$$ $$\mathbf{y_1}=e^{\mathbf{k}\psi/2}\mathbf{j}e^{\mathbf{k}\psi/2}=-\mathbf{i}\sin \psi+\mathbf{j} \cos \psi$$ $$\mathbf{z_1}=\mathbf{z}=\mathbf{k}$$ where $\mathbf{x_1},\mathbf{y_1},\mathbf{z_1}$ are the rotated versors of the moving frame $R$(note that this is a simple rotation in the $XY$ plane and the quaternion representation seems a unnecessary complication).
Now the second rotation is about the versor $\mathbf{x_1}$ of an angle $\theta$ so , with obvious notation, the rotated versors are given by:
$$\mathbf{x_2}=\mathbf{x_1}$$ $$\mathbf{y_2}=e^{\mathbf{x_1}\theta/2}\mathbf{y_1}e^{-\mathbf{x_1}\theta/2}$$ $$\mathbf{z_2}=e^{\mathbf{x_1}\theta/2}\mathbf{k}e^{-\mathbf{x_1}\theta/2}$$
Here the calculus becomes more complex and the use of quaternions is well justified. And, in the same way we can represent the other rotation around $\mathbf{y_2}$:
$$\mathbf{x_3}=e^{\mathbf{y_2}\phi/2}\mathbf{x_2}e^{-\mathbf{y_2}\phi/2}$$ $$\mathbf{y_3}=\mathbf{y_2}$$ $$\mathbf{z_3}=e^{\mathbf{y_2}\phi/2}\mathbf{z_2}e^{-\mathbf{y_2}\phi/2}$$
From these you can easily found the angles between the rotated versors $\mathbf{x_3},\mathbf{y_3},\mathbf{z_3}$, and the axis of the fixed reference system using the scalar product.
Clearly the explicit calculation of the components of $\mathbf{x_3},\mathbf{y_3},\mathbf{z_3}$ requires a bit of algebraic work. And, if you prefer, you can find the same result also using the matrix representation of the rotations, obviously in the correct order.
• Thank you very much! It helps a lot. – Zihan Shen Feb 19 '16 at 7:56
• You are welcome! I've added a reference also to the matrix representation :) – Emilio Novati Feb 19 '16 at 8:26
• Thank you very much! Could I ask another question? – Zihan Shen Feb 19 '16 at 10:15
• If it is a clarification about this question you can add a comment. If it is a different question it is better to ask another question ant link the two questions if these are related. – Emilio Novati Feb 19 '16 at 11:27
• Thank you for your advice. I am rethinking about it and going to ask another question. ;) – Zihan Shen Feb 19 '16 at 11:40
|
## Introduction
Many phytoplankton species produce substances that are toxic to humans, hence we consider such phytoplankton ‘toxic algae’, but the evolution and functional role of these secondary metabolites remain unclear. They may be released into the environment and have allelochemical effects to combat competitors [1, 2] or grazers [3], they may be mainly intracellular and have toxic and/or deterrent effects on grazers [4, 5], or in mixotrophic species that they may have offensive roles, functioning as a venom towards prey [6,7,8]. Some toxic algae may also form dense blooms, potentially promoted by their toxicity and consequent grazer deterrent effects, and a defensive role of toxin production is thus often assumed [4, 9]. This interpretation is supported by the observations that algal toxins often have a deterrent rather than a toxic effect on grazers [10,11,12,13,14,15], and that toxin production may be upregulated in the presence of grazers or their cues, as demonstrated in dinoflagellates, Alexandrium spp. [13, 16, 17], and in diatoms, Pseudo-nitzschia spp. [18]. The latter observation also suggests that toxin production comes at a cost—why else regulate the production in response to the need? Optimal defense theory, well founded in terrestrial plant ecology, predicts that inducibility of defense should evolve only when the defense is costly and variable in time [19]. However, experiments have generally been unable to demonstrate such costs in toxic phytoplankton, and the growth rates of grazer-induced and un-induced cells, or of toxic and nontoxic strains of the same phytoplankton species appear to be similar [3, 13, 16, 18], and model exercises have similarly suggested costs to be trivial [20]. However, if defense via toxin production is both effective and costless, one would expect most phytoplankton to be toxic, which is far from being the case [19]. Also, the promotion of phytoplankton diversity in the ocean due to grazing and consequent evolution of defense mechanisms, as demonstrated in both theoretical models and considerations [21,22,23] and whole-community experiments [24], functions only if the defense comes at a cost.
One of the most studied groups of toxic phytoplankton is the dinoflagellate Alexandrium spp. since the toxins it produces, Paralytic Shellfish Toxins (PSTs), have serious effects on humans. The main potential grazers of dinoflagellates are copepods. While the toxin production may be induced by grazer (copepod) cues [13], the production also depends on the nutrient status of the algae. Specifically, when nitrogen or carbon (light) is limiting the PST production is reduced, while nutrient balanced or P-limited cells may produce PSTs at a high rate (growth of P-limited cells decreases while the production of toxins continues) [25, 26]. PSTs contain large amounts of nitrogen, and so this dependency suggests that the costs of PST production only become obvious when N or light is limiting, and may become manifest either as a reduction in growth rate because part of the assimilated N goes towards PST production; or as a cessation of the production of PSTs in favor of a higher growth rate with a consequent loss of the defense; or a combination of the two.
Here, we explore by means of a simple resource allocation model the costs of chemical defense in phytoplankton, using PST producing dinoflagellate Alexandrium spp. as an example. We consider both energy and material costs of PST production, their dependency on environmental conditions, and the reduction in mortality that is achieved by the defense investments. Through fitness optimization we demonstrate that costs can be substantial, leading to >20% reduction in cell division rate, when grazer abundance is high and nitrogen availability is low. We also examine the conditions under which the production of toxins provides maximum benefit to the toxin producing species.
### Model description
The model is based on a resource allocation optimization model modified from Berge et al. [27]. The division rate of phytoplankton depends on the acquisition of three resources, viz. carbon (via photosynthesis), nitrogen (as nitrate), and phosphorous (as phosphate), as well as on metabolic expenses. The cells may invest some of their assimilated nitrogen into toxin production and combust some of their fixed organic carbon to cover costs of toxin synthesis, and in return experience reduced grazing mortality. We search for the investment that maximizes the fitness of the cells, defined as the difference between cell division rate and mortality rate (= net growth rate). We use the model to explore the dependency of division rate, net growth rate, and toxicity on the environmental resource availability and predation risk.
Uptake of carbon, nitrogen, and phosphorous to the cell is described by the symbol Ji (mass flows i being carbon via light-dependent photosynthesis (C), nitrogen (N), or phosphorous (P) in units of μg per day; see Table 1 for central symbols and parameters), and are combined to synthesize new biomass (Fig. 1). Respiratory costs, RC (units of μg C per day), include costs of biomass synthesis (including transport) and maintenance of the structure. Toxin is produced at a rate, Tr, and implies an additional respiratory cost, RT. Biomass synthesis rate, Jtot, is constrained by the stoichiometric balance between carbon, nitrogen, and phosphorous. Finally, we assume that toxins and structure have constant but different stoichiometry.
#### Uptake of carbon, nitrogen, and phosphorous
The potential uptake Ji of resource i (C, N, P) is governed by a standard saturating functional response:
$$J_{\mathrm{i}}{\mathrm{ = }}M_{\mathrm{i}}\frac{{A_{\mathrm{i}}Y_{\mathrm{i}}}}{{A_{\mathrm{i}}Y_{\mathrm{i}}{\mathrm{ + }}M_{\mathrm{i}}}},$$
(1)
where Ai is the affinity for resource i, Yi resource concentration (μg L−1), and Mi is the maximum uptake rate.
#### Costs
Respiratory costs include costs of both uptake and mobilization of resources for synthesis through each pathway, and the maintenance of the structure. This metabolic cost is assumed to be 30% of the total carbon budget [28] plus a constant basal respiration (R0) independent of JC, i.e.,
$$R_{\mathrm{C}} = 0.3J_{\mathrm{C}}{\mathrm{ + }}R_0.$$
(2)
#### Rate and cost of toxin production
Let θ be the fraction of nitrogen uptake that a toxic phytoplankton cell devotes to toxin production. Then the potential rate of toxin production is (units of μg N d−1):
$$T_{\mathrm{{pot}}}\left( \theta \right) = \;\theta J_{\mathrm{N}}.$$
(3)
As the toxin production needs carbon both for building the toxin molecules and to fuel the respiratory costs of toxin production, the actual toxin production rate may be limited by the available carbon to:
$$T_{\mathrm{r}}\left( \theta \right) = \min \left[ {T_{\mathrm{{pot}}}\left( \theta \right),\,\left( {J_{\mathrm{C}} - R_{\mathrm{C}}} \right){\mathrm{/}}\left( {n_{\mathrm{T}} + r_{\mathrm{T}}} \right)} \right],$$
(4)
where nT is the mass of carbon need per mass of nitrogen in the toxin, and rT is the respiratory cost per nitrogen synthesized into toxins (units of g C (g N) −1).
The total cost of toxin production in terms of carbon then becomes (μg C d−1):
$$R_{\mathrm{T}}\left( \theta \right) = \left( {n_{\mathrm{T}} + r_{\mathrm{T}}} \right)T_{\mathrm{r}}(\theta ) \cdot$$
(5)
#### Synthesis and growth rate
The assimilated carbon, nitrogen and phosphorous are combined to synthesize new structure. We assume constant C:N mass ratio, QCN (units of μg C (μg N)−1) and C:P mass ratio, QCP (units of μg C (μg P)−1) of the cell. The total available carbon for growth is then JCRCRT where JC represents the total uptake of carbon through photosynthesis, and RC and RT represent the costs of maintenance and biomass synthesis, and costs of toxin production, respectively. The carbon required to synthesize biomass from nutrients is QCN(JNTr) and QCPJP for nitrate and phosphate, respectively. The growth rate is constrained by the limiting resource (Liebig’s law of the minimum) such that the total flux of carbon (and nutrients) available for growth Jtot is:
$$J_{\mathrm{{tot}}}\left( \theta \right) = \min \left[ {J_{\mathrm{{C}}} - R_{\mathrm{C}} - R_{\mathrm{T}}\left( \theta \right),\quad Q_{\mathrm{{CN}}}\left( {J_{\mathrm{N}} - T_{\mathrm{r}}\left( \theta \right)} \right),\quad Q_{\mathrm{{CP}}}J_{\mathrm{P}}} \right].$$
(6)
Synthesis is not explicitly limited by a maximum synthesis capacity; limitation of synthesis is taken care of by the limitation of uptake of carbon, nitrogen and phosphorous in the functional responses (Eq. 1). The division rate μ of the cells (d−1) is the total flux of carbon divided by the carbon mass of the cell (wX):
$$\mu \left( \theta \right) = J_{{\mathrm{tot}}}\left( \theta \right){\mathrm{/}}w_X \cdot$$
(7)
Further subtracting the predation mortality (mp) yields the net growth rate (r):
$$r(\theta ) = \mu (\theta ) - m_p(\theta ) \cdot$$
(8)
We assume that predation mortality increases linearly with zooplankton biomass (Z), and due to the toxin production, zooplankton reduces its grazing pressure on toxic cells exponentially as:
$$m_{\mathrm{p}}\left( \theta \right) = m_{\mathrm{{p}},0}Ze^{\mathrm{{ - \beta T}}\left( \theta \right)},$$
(9)
where mp,0 is a mortality constant, T is the cellular toxin content (μg N cell−1) estimated as the toxin production rate divided by the cell division rate (=Tr), and β represents the strength of toxic effect.
The resulting population growth rate, r(θ), is a measure of the fitness of the phytoplankton and we assume that the cell has the ability to optimize its growth rate by regulating its resource allocation to toxin production such that it maximizes its fitness. The optimal proportion of assimilated N devoted to toxin production then becomes:
$$\theta ^ \ast = \arg \mathop {{\max }}\limits_\theta \left\{ {r\left( \theta \right)} \right\}.$$
(10)
### Model parameterization
Calibration of parameters is based on laboratory measurements on the dinoflagellates Alexandrium minutum and A. tamarense as the toxic species, and the copepods Acartia clausi and A. tonsa as the grazer zooplankton. To calibrate the basic grazing parameters, we use data for non-toxic strains of A. minutum.
#### Parameters related to phytoplankton division rate (μ)
We use experimental observations reported in the literature for cell division rate (μ) of A. minutum as a function of light intensity (L) and nutrient concentrations (N and P) to estimate parameter values for maximum uptake rates (Mi) and affinities (Ai) (Fig. 2a-c). While calibrating these parameters a non-toxic strain of A. minutum was considered, and as a result, no cost is deducted. For calibration, we adjust the parameters manually to fit the curves with data and keep them close to the existing values of the parameters from other studies (when available). Due to Liebig’s minimum law for synthesis (Eq. 6), the synthesis is limited by one of the resources (either C or N) and further growth cannot materialize in spite of the availability of other non-limiting resource. As a result, growth cannot increase any further.
We use a constant value 1.07 × 10−4 μg C d−1 for the basal respiration rate (R0) taken from the range reported in Frangoulis et al. [29].
#### Parameters related to the cost of toxin production
To estimate the two parameters related to the cost of toxin production (nT, rT), we consider the stoichiometry of PSTs and the biochemistry of PST synthesis. PSTs produced by Alexandrium spp. (and other organisms) consist of saxitoxin and multiple derivatives; they are cyclic nitrogenous compounds that are synthesized from amino acid precursors. Here we consider the synthesis of saxitoxin, one of the dominating toxins, from the amino acid glutamate via arginine to estimate the approximate costs of toxin production. The costs are two-fold; i.e., the metabolic cost of biosynthesis, rT, and the cost in terms of material invested in the toxin, nT.
### Material investment (nT):
The molecular formula of saxitoxin in C10H17N704, that is 10 moles of carbon per 7 moles of nitrogen, or nT = (10 × 12)/(7 × 14) = 1.23 µg C (μg N)−1.
### Metabolic expenses (rT):
Saxitoxin is synthesized from arginine [30], which in turn is typically synthesized from glutamate that in phytoplankton has to be synthesized de novo. The metabolic cost of converting glutamate to arginine is 12 mol ATP per mol of arginine, and the synthesis of arginine precursors (i.e., glutamate and carbamoyl-P) requires another 32 mol of ATP, i.e., a total of 44 mol ATP per mol of arginine [31]. We have no estimate of the cost of synthesizing saxitoxin from arginine (e.g., necessary transcripts and translation to enzymes), as well as costs due to actual or potential autotoxicity (hence we ignore it), but it takes 3 mol of arginine to synthesize one mol of saxitoxin. Therefore, it requires at least 132 mol of ATP to synthesize 1 mol of saxitoxin. The respiratory equivalent of ATP synthesis is about 0.235 mol ATP synthesized per liter of oxygen consumed; or about 2 g of organic carbon combusted per mol of ATP synthesized [32]. Thus, 2 × 132 g of organic carbon is respired per mol of toxin synthesized. With 7 × 14 g N per mol toxin yields an estimate of rT = 2.7 µg C (μg N)−1.
#### Parameters related to zooplankton feeding
The parameters related to reduction in feeding on toxic cell (β) was calibrated from Teegarden and Cembella [12] who quantified the feeding rate on a toxic strain of A. minutum by A. tonsa as a function of cellular toxin content (μg N cell−1). We fit Eq. 9 to the experimental data to estimate β (Fig. 2d). We use 8.95×10−4 as the cellular carbon content of A. minutum [28], and chose the value of the mortality constant (mp,0) as 0.008 L (μg C)−1 d−1 based on the clearance rate of A. tonsa [33].
## Results
### Optimal allocation strategy
The optimal allocation of nitrogen to toxin production is the one that yields the highest population growth rate (Fig. 3). The optimal allocation of N to toxin production increases with decreasing environmental nitrogen availability and increasing concentration of zooplankton (Fig. 3). As a result, the investment in defense—toxin production—in toxic dinoflagellates varies with both N-availability and grazing pressure, with implications to cell division rate, grazing mortality rate, and population growth rate (Figs. 47).
### Toxin production, grazing mortality, and cost of defense
At high N concentrations, the cells produce toxin whenever zooplankton is present but production ceases in the absence of grazers (Fig. 4a). Note that for simplicity we assume that toxin production rate becomes zero when there is no grazer. In contrast, at low N, phytoplankton produce toxins only when zooplankton biomass exceeds a threshold concentration, and the cellular toxin content increases with the biomass of the zooplankton (Figs. 4a, 6b).
Defended cells experience lower grazing mortality than undefended cells especially when nitrogen availability is high and zooplankton concentration is low (Fig. 4d) as cellular toxin content remains high (Fig. 4a). Taken together the grazing mortality increases with zooplankton density and decreases with availability of nitrogen.
The cost of the defense can be quantified as a reduction in the cell division rate of defended relative to undefended cells. This cost is significant only at high zooplankton biomass and/or low nitrogen availability (Figs. 4b, 5a) and increases with increasing zooplankton biomass and decreasing nitrogen concentration (Fig. 6c, d). At realistically high zooplankton biomass and realistically low nitrogen availability, cell division rate may be reduced by more than 20%. At high nitrogen concentrations, the cells produce toxins from the excess nitrogen (not used for growth) and therefore the costs are, of course, unmeasurably low (Fig. 6d). The net outcome of the defense investment is that defended cells have similar or higher population growth rates (fitness) than undefended cells under all nutrient conditions (Fig. 4c). The absolute enhancement is largest at high N (Fig. 6e) whereas the relative advantage is most pronounced at low N and high zooplankton biomass (Fig. 6f).
### Effects of P and light limitation
If phosphate or light rather than nitrogen limit cell growth, N is in excess and the excess N can be allocated to toxin production and the cells consequently become well defended. Figure 7a, b display the effects of light limitation on toxin production and cellular toxin contents, respectively. There are two regions in this parameter space: the left region where light is limiting and toxin production increases with light intensity while toxin content decreases with light intensity; and the right region, where nitrogen is limiting and both toxin production and cellular toxin content are independent of light intensity. Toxin production and cellular contents similarly vary in the nitrogen-phosphorous parameter space, between phosphorous limitation at low P and nitrogen limitation at high P (Fig. 7c, d). Under all conditions, cellular toxin contents increase with decreasing ambient P.
### Sensitivity analysis of β and rT
Since the parameter β was calibrated based on only four available data points (Fig. 2d), we perform a sensitivity analysis by varying β, which represents the reduction in zooplankton grazing due to cellular toxin (see supplementary material fig. A1). Overall, the qualitative patterns described above are robust to changes in β. When nitrogen concentrations are high, there is no observable change in cellular toxin concentration with varying β as organisms produce toxins at their maximum rate and consequently division rates remain same. However, due to the increase in predation pressure with decreasing β, population growth rate decreases. On the other hand, with decrease in nitrogen concentrations, organisms produce more toxin with decrease in β, leading to reduction in division rates, as well as growth rates.
Similarly, varying the parameter rT, representing the metabolic cost of synthesizing toxin, does not lead to observable changes in the system dynamics (see supplementary material fig. A2).
## Discussion
Experiments have demonstrated that PST producing dinoflagellates become less toxic when N-starved, that toxin content is high in exponentially growing cells in N:P balanced environments, and that the cells become most toxic when P-limited [20, 26, 34,35,36]. Light-limited cells also accumulate more toxins [37] and toxin production is enhanced in the presence of grazer cues [13]. Our model qualitatively reproduces all these observations.
The model further conforms with the experimental observation that the cost of toxin production, quantified as a reduction in cell division rate of toxin-producing cells compared to cells or strains that produce less or no toxins, is negligible when resources, mainly N and light, are plentiful [3, 13]. However, when light or nitrogen are limiting cell growth, and when grazers are abundant, we predict that the cost of investing in toxin production may be substantial and lead to >20% reduction in cell division rate. There is evidence for other defense mechanisms in phytoplankton where the cost of the defense only materializes when resources are limiting [38,39,40]. However, the prediction of costs of toxin production still remains to be tested experimentally. The costs of the investment are two-fold: (i) the cells need nitrogen to build toxin molecules, and this requirement compete with the nitrogen investment in cell structure and growth. While the nitrogen requirement for toxin production is small in absolute terms, leading John and Flynn [20] to consider it trivial, it becomes significant when nitrogen is limiting, and may eventually lead to a total shut down of toxin production. (ii) The cells further need energy for the synthesis of toxins, and this energy eventually comes from photosynthesis. This is why toxin production rate increases with light intensity when light is the limiting resource. In this situation, cell division rate increases faster than toxin production rate with increasing light, and therefore cellular toxin content decreases with increasing light, an effect and a mechanism in agreement with experimental observations [37].
The environmental conditions that promote cell toxicity, i.e., high grazer biomass, actually coincide with the time of the year when nitrogen is the most limiting resource in temperate shelf regions. Thus, zooplankton (copepod) biomass is at its seasonal maximum during summer and may easily exceed 10–100 µg C L−1 [41], which will impose a high predation mortality and thus induce high cell toxicity. At this time of the year, concentrations of inorganic nitrogen in surface waters in temperate shelf regions are low, typically below 1–10 µg N L−1 [42, 43], and under these conditions the model predicts that the cost of toxin production is substantial, leading to >20% decrease in cell division rate. However, the investment in defense pays off since defended cells experience lower grazing mortality, and may consequently have net growth rates up to twice or more of that of undefended cells (e.g., Fig. 6f). If the toxins have deterrent rather than toxic effects, i.e., preventing the toxic cells from being consumed rather than killing grazers that do consume the cells [10, 11, 44], then the toxic cells may have a competitive advantage and a monospecific bloom may develop. Indeed, blooms of toxic algae typically occur during summer [45], which is consistent with these considerations.
Furthermore, the toxic species can also gain growth advantage under high nitrogen and high grazer biomass, as growth rates can be doubled due to the benefit provided by toxin production at a negligible production cost (Fig. 6d, f). Such situations are comparable with many coastal areas where N:P ratios remain considerably high due to erratic input of nitrogen from human activities, and can consequently result in toxic blooms [46]. Thus, potentially toxic species can also become toxic when exposed to high N regime caused by eutrophication.
The success of toxic bloom formation also depends on the evolutionary history of zooplankton grazer and toxic species [39]. Evidence from both fresh waters and marine waters shows that grazers can evolve full or partial resistance against the toxic algae [47, 48]. Our results suggest that cellular toxin content will increase in the presence of toxin-resistant grazers as will the costs (see supplementary material fig. A1). However, if the benefits in terms of reduced grazing mortality vanish due to grazer resistance, the possibility of success of the toxic species in terms of forming bloom will be reduced or disappear.
Many phytoplankton species have evolved what is supposed to be defense mechanisms to avoid or reduce predation, ranging from hard shells and spines, to evasive behaviors and toxin production, and such defenses may have significant implications on predator-prey interactions, population dynamics, and diversity of phytoplankton communities [49]. However, the trade-offs are rarely quantified and often not even documented. While there is increasing evidence that toxin-production in many cases provides partial protection from grazing in dinoflagellates, the costs have hitherto not been properly quantified. The currencies of benefits and costs are growth and mortality. Here, we have shown that the costs are not only strongly dependent on the concentration of grazers but also on the resource availability, and that the realized costs in nature are typically highest during summer when the defense is most needed. The delicate balance between costs, benefits, and resource availability not only explains why the defense is inducible but also has implications for the timing of toxic algal blooms. Further studies on costs and benefits of toxin production are needed to experimentally test our model predictions (e.g., toxin production and growth under sufficient and deficient resources), and for deeper understanding of the mechanisms and evolution of inducible toxin production. At present, very little is known about the consequences of inducible toxin production on the community level in complex communities. Future work should be devoted towards investigating the complex integrated ecological issues of inducible toxin production, species diversity, and food web structure.
|
# Proving the trigonometric identities $\tan x + \cot x = 2\csc 2x$ and $\sec^2 \frac{x}{2} = \frac{2}{1 + \cos x}$
I need help with identities. I don't really know how to start or if I'm doing it right. Can someone show me the steps in solving these?
1.) $\tan x+ \cot x= 2\csc 2x$
$$\frac{\sin x}{\cos x} + \frac{\cos x}{\sin x}= 2\csc 2x$$
Is this the right way?
2.) $\sec^2 \dfrac{x}{2} = \dfrac{2}{1+ \cos x}$
I'm not sure how to start this one.
• Hint: For the number $2$ remember that $$\color{blue}{\sec t=\frac1{\cos t}\qquad \text{and}\qquad\cos^2\alpha=\frac{1+\cos 2\alpha}2}$$ Then $$\sec^2\frac x2=\frac1{\cos^2\frac x2}= ?$$ Apr 18 '17 at 6:04
• Please read this tutorial on how to typeset mathematics on this site. For the first one, get a common denominator, then use the trigonometric identity $\sin^2x + \cos^2x = 1$. You will also need the trigonometric identity $\sin 2x = 2\sin x\cos x$. Apr 18 '17 at 7:15
I would say that writing $$\frac{\sin x}{\cos x} + \frac{\cos x}{\sin x}= 2\csc 2x$$ is certainly not the way to proceed. You are in essence assuming what you need to prove. I would start proving this by writing $$\tan x+\cot x= \frac{\sin x}{\cos x} + \frac{\cos x}{\sin x} =\frac{\sin^2 x+\cos^2x}{\sin x\cos x}=\cdots.$$
For $1.)$ use the definition of tangent, cotangent, cosecant and the double angle formula for sine.
$$\csc 2x=\frac{1}{\sin2x}~~~~~~\textrm{and}~~~~~~~~\sin 2x=2\sin x\cos x$$ hence $$1-2\sin x\cos x\csc 2x=0\tag 1$$ as well as $$\cot x=\frac{\cos x}{\sin x} ~~~~~~\textrm{and}~~~~~~~~\tan x=\frac{\sin x}{\cos x}$$ or $$\cos x=\sin x\cot x \tag 2$$ $$\sin x=\cos x\tan x \tag 3$$ Starting from $(1)$ $$\Leftrightarrow\sin^2 x + \cos^2 x -2\sin x \cos x \csc 2x=0$$ use $(2)$ $$\Leftrightarrow\sin^2 x + \cos\sin x\cot x -2\sin x \cos x \csc 2x=0$$ divide by $\sin x \neq 0$ $$\Rightarrow\sin x + \cos x\cot x -2\cos x \csc 2x=0$$ use (3) $$\Leftrightarrow\cos x\tan x+ \cos x\cot x -2\cos x \csc 2x=0$$ divide by $\cos x \neq 0$ $$\Rightarrow\tan x+ \cot x -2\csc 2x=0\Leftrightarrow\tan x+ \cot x=2\csc 2x$$ For $2.)$ substitute $x=2y$, use the definition of secant and the double angle formula for cosine. Definition of secant $$\sec y\cos y=1\tag 1$$ and the double angle formula for cosine $$\cos 2y-2\cos^2y+1=0$$ multiply by $\sec^2 y\neq 0$ $$\Rightarrow\sec^2y+\sec^2 y\cos 2y-2\sec^2 y\cos^2 y=0$$ using $(1)$ twice $$\Leftrightarrow\sec^2y+\sec^2 y\cos 2y-2=0$$ collect $\sec^2 y$ $$\Leftrightarrow\sec^2y(1+\cos 2y)-2=0$$ or for $1+\cos x\neq=0$ $$\Rightarrow\sec^2\frac{x}{2}=\frac{2}{1+\cos x}$$
|
# alternatives to the concept of a page, Gutenberg press vs LCD screen
Brian Dunn bd at bdtechconcepts.com
Fri Aug 30 05:17:58 CEST 2019
On Fri, 30 Aug 2019 00:56:27 +0200
Reinhard Kotucha <reinhard.kotucha at web.de> wrote:
>
> The limitations of current hardware lead to huge fonts and thus very
> few words per line. What I've seen so far looked like typeset with
> Microsoft Word and hyphenation turned off. I've never seen math
> formulas yet but suppose that they aren't supported properly either.
MathML has some support. MathJax might be used instead. SVG images
work pretty well, and LaTeX code can be embedded in ALT tags.
Nevertheless, imagine a formula which takes a full line on a printed
page being forced to fit on a small e-reader. Reflow, shrink, or pan?
> All this doesn't mean that content markup isn't important. It's at
> least required in order to support visually impaired readers and I
> think that TeX still lags behind in this area.
Experimental packages include accsupp and axessibility. For the lwarp
package (LaTeX to HTML), I've introduced an ALT key for
\includegraphics, and the next version will introduce \ThisAltTag for
custom one-off tags. SVG math and some chemistry have the LaTeX
expression included as an ALT tag, unless it's a complicated nasty
thing in which case it can be set to a generic default.
Brian
--
Brian Dunn
BD Tech Concepts LLC
http://www.BDTechConcepts.com
|
location: Publications → journals
Search results
Search: MSC category 93 ( Systems theory; control )
Expand all Collapse all Results 1 - 3 of 3
1. CMB 2010 (vol 54 pp. 527)
Preda, Ciprian; Sipos, Ciprian
On the Dichotomy of the Evolution Families: A Discrete-Argument Approach We establish a discrete-time criteria guaranteeing the existence of an exponential dichotomy in the continuous-time behavior of an abstract evolution family. We prove that an evolution family ${\cal U}=\{U(t,s)\}_{t \geq s\geq 0}$ acting on a Banach space $X$ is uniformly exponentially dichotomic (with respect to its continuous-time behavior) if and only if the corresponding difference equation with the inhomogeneous term from a vector-valued Orlicz sequence space $l^\Phi(\mathbb{N}, X)$ admits a solution in the same $l^\Phi(\mathbb{N},X)$. The technique of proof effectively eliminates the continuity hypothesis on the evolution family (\emph{i.e.,} we do not assume that $U(\,\cdot\,,s)x$ or $U(t,\,\cdot\,)x$ is continuous on $[s,\infty)$, and respectively $[0,t]$). Thus, some known results given by Coffman and Schaffer, Perron, and Ta Li are extended. Keywords:evolution families, exponential dichotomy, Orlicz sequence spaces, admissibilityCategories:34D05, 47D06, 93D20
2. CMB 1998 (vol 41 pp. 178)
Krupnik, Ilya; Lancaster, Peter
Minimal pencil realizations of rational matrix functions with symmetries A theory of minimal realizations of rational matrix functions $W(\lambda)$ in the pencil'' form $W(\lambda)=C(\lambda A_1-A_2)^{-1}B$ is developed. In particular, properties of the pencil $\lambda A_1-A_2$ are discussed when $W(\lambda)$ is hermitian on the real line, and when $W(\lambda)$ is hermitian on the unit circle. Categories:93Bxx, 15A23
3. CMB 1998 (vol 41 pp. 49)
Harrison, K. J.; Ward, J. A.; Eaton, L-J.
Stability of weighted darma filters We study the stability of linear filters associated with certain types of linear difference equations with variable coefficients. We show that stability is determined by the locations of the poles of a rational transfer function relative to the spectrum of an associated weighted shift operator. The known theory for filters associated with constant-coefficient difference equations is a special case. Keywords:Difference equations, adaptive $\DARMA$ filters, weighted shifts,, stability and boundedness, automatic continuityCategories:47A62, 47B37, 93D25, 42A85, 47N70
top of page | contact us | privacy | site map |
|
# How many grams of potassium chloride are produced if 25.0g of potassium chlorate decompose?
Aug 13, 2016
approximately 15.1 grams.
#### Explanation:
The key to chemistry is to change everything to moles. Then when you have the answer in moles change the answer back to grams, liters, or whatever you want.
change 25 grams of potassium chlorate to moles.
calculate the gram molecular mass of potassium chlorate.
Chlorate is Cl with 3 oxygens. ate = saturated. Chlorine has seven valance electrons when it is saturated six of these electrons are used by oxygen ( 2 electrons per oxygen) leaving only 1 electron.
1 K x 39 grams/mole
+1 Cl x 35.4 grams/ mole
+3 O x 16 grams/ mole
= 122.4 grams / mole Potassium Chlorate
$\frac{25}{122.4}$ = moles.
2.05 moles of Potassium Chlorate.
There is a 1:1 mole ratio. 1 mole of Potassium Chlorate will produce 1 mole of Potassium Chloride.
2.05 moles of Potassium Chlorate will produce 2.05 moles of Potassium Chloride.
Find the gram molecular mass of Potassium Chloride.
1 K x 39 = 39
+1 Cl x 35.4 = 35.4
= 74.4 grams / mole.
2.05 moles x 74.4 grams/ mole = 15.2 grams
Aug 16, 2016
The reaction will produce 15.3 g of $\text{KCl}$.
#### Explanation:
1. Write the balanced equation.
${\text{2KClO"_3 → "2KCl" + "3O}}_{2}$
2. Calculate the moles of ${\text{KClO}}_{3}$.
$\text{Moles of KClO"_3 = 25.0 color(red)(cancel(color(black)("g KClO"_3))) × ("1 mol KClO"_3)/(122.55 color(red)(cancel(color(black)("g KClO"_3)))) = "0.2046 mol KClO"_3}$
3. Calculate the moles of $\text{KCl}$.
$\text{Moles of KCl" = 0.2046 color(red)(cancel(color(black)("mol KClO"_3))) × ("2 mol KCl")/(2 color(red)(cancel(color(black)("mol KClO"_3)))) = "0.2046 mol KCl}$
4. Calculate the mass of $\text{KCl}$.
$\text{Mass of KCl" = 0.2046 color(red)(cancel(color(black)("mol KCl"))) × ("74.55 g KCl")/(1 color(red)(cancel(color(black)("mol KCl")))) = "15.3 g KCl}$
|
## Translation operator
$$e^{\alpha\frac{d}{dx}}=1+\alpha\frac{d}{dx}+\frac{\alpha^2}{2!}\frac{d^ 2}{dx^2}+...=\sum^{\infty}_{n=0}\frac{\alpha^n}{n!}\frac{d^n}{dx^n}$$
Why this is translational operator?
##e^{\alpha\frac{d}{dx}}f(x)=f(x+\alpha)##
Blog Entries: 27 Recognitions: Gold Member Homework Help Science Advisor taylor expansion?
Recognitions: Gold Member Homework Help Consider alpha to be an infinitesimal translation. Expand $f(x+\alpha )$ for small $\alpha$ to first order. Do the same for the LHS of the equation and you should see that the equality is true for infinitesimal translations. We say that the operator $\frac{d}{dx}$ (technically $\frac{d}{idx}$) is the 'generator' of the translation. EDIT: Beaten to the punch by TT!
## Translation operator
I have a problem with that. So
$$f(x+\alpha)=f(x)+\alpha f'(x)+...$$
My problem is that we have ##\frac{df}{dx}## and that isn't value in some fixed point ##x##. This is the value in some fixed point ##(\frac{df}{dx})_{x_0}##.
Quote by matematikuvol I have a problem with that. So $$f(x+\alpha)=f(x)+\alpha f'(x)+...$$ My problem is that we have ##\frac{df}{dx}## and that isn't value in some fixed point ##x##. This is the value in some fixed point ##(\frac{df}{dx})_{x_0}##.
I am not sure about your question. But that translation operator is a generic operator, which translate a function value at x to x+a.
In Taylor series x is fixed, while in ##\frac{df}{dx}## ##x## isn't fixed. Well you suppose that is.
Blog Entries: 27
Recognitions:
Gold Member
Homework Help
Quote by matematikuvol I have a problem with that. So $$f(x+\alpha)=f(x)+\alpha f'(x)+...$$ My problem is that we have ##\frac{df}{dx}## and that isn't value in some fixed point ##x##. This is the value in some fixed point ##(\frac{df}{dx})_{x_0}##.
yes, but x here is a constant, and only α is the variable
if you prefer, write f(xo + α) = f(xo) + α∂f/∂x|xo + …
Quote by tiny-tim yes, but x here is a constant, and only α is the variable if you prefer, write f(xo + α) = f(xo) + α∂f/∂x|xo + …
Ok but that is equal to
$$\sum^{\infty}_{n=0}\frac{\alpha^n}{n!}(\frac{df}{dx})_{x_0}$$
and how to expand now
$$e^{\alpha\frac{d}{dx}}$$
Blog Entries: 27 Recognitions: Gold Member Homework Help Science Advisor no, it's $\sum^{\infty}_{n=0}\frac{\alpha^n}{n!}\left(\frac{d^nf}{dx^n}\right)_{x _0}$
I made a mistake. But I'm asking when you get that ##(\frac{d^n}{dx^n})_{x_0}##? Please answer my question if you know. In $$e^{\alpha\frac{d}{dx}}=1+\alpha\frac{d}{dx}+\frac{\alpha^2}{2!}\frac{d^ 2}{dx^2}+...$$ you never have ##x_0##.
Blog Entries: 27 Recognitions: Gold Member Homework Help Science Advisor $$\left(e^{\alpha\frac{d}{dx}}(f(x))\right)_{x_o}$$ $$= \left(\left(1+\alpha\frac{d}{dx}+\frac{\alpha^2}{2!}\frac{d^2}{dx^2}+.. .\right)(f(x))\right)_{x_o}$$ $$= \sum^{\infty}_{n=0}\frac{\alpha^n}{n!}\left(\frac{d^nf}{dx^n}\right)_{x _0}$$
|
Linear Algebra - Matrix with given eigenvalues
1. roto25
11
1. The problem statement, all variables and given/known data
Come up with a 2 x 2 matrix with 2 and 1 as the eigenvalues. All the entries must be positive.
Then, find a 3 x 3 matrix with 1, 2, 3 as eigenvalues.
3. The attempt at a solution
I found the characteristic equation for the 2x2 would be λ2 - 3λ + 2 = 0. But then I couldn't get a matrix with positive entries to work for that.
Last edited: Apr 4, 2012
2. Dick
25,887
Pick a diagonal matrix.
3. roto25
11
Does that count for the entries being positive though?
4. Dick
25,887
Not really, no. Sorry. Better give this more thought than I gave this response.
5. roto25
11
thanks though!
6. micromass
19,347
Staff Emeritus
The 2x2-case is not so difficult. Remember (or prove) that the characteristic polynomail of a 2x2-matrix A is
$$\lambda^2-tr(A)\lambda+det(A)$$
By the way, I think your characteristic polynomial is wrong.
7. HallsofIvy
40,769
Staff Emeritus
??? Why do the diagonal matrices
$$\begin{bmatrix}1 & 0 \\ 0 & 2\end{bmatrix}$$
and
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3\end{bmatrix}$$
NOT count as "all entries postive"?
8. micromass
19,347
Staff Emeritus
He probably doesn't consider 0 to be positive.
9. HallsofIvy
40,769
Staff Emeritus
But it is much easier to claim that 0 is positive!:tongue:
Thanks.
10. roto25
11
Oh, I had typed 3 instead of 2 for the characteristic polynomial. I ended up looking at this from a Hermitian matrix point of view.
And then I got the matrix:
0 i +1
i-1 3
And I did get the right eigenvalues from that. Does that work?
11. micromass
19,347
Staff Emeritus
You still have 0 as an entry, you don't want that.
12. roto25
11
Yeah, I didn't realize that at first. :/
13. Dick
25,887
I don't think i+1 would be considered a positive number either. Stick to real entries. Your diagonal entries need to sum to 3, and their product should be greater than 2. Do you see why?
14. roto25
11
Yes. Theoretically, I know what it should do. I just can't actually find the right values to do it.
15. Dick
25,887
Call one diagonal entry x. Then the other one must be 3-x. Can you find a positive value of x that makes x*(3-x)>2? Graph it.
16. roto25
11
Well, any value of x between 1 and 2 (like 1.1) work.
17. Dick
25,887
Ok, so you just need to fill in the rest of the matrix.
18. roto25
11
but if I set x to be 1.1, my matrix would be
1.1 __
__ 1.9
And those two spaces have to be equivalent to 1.1*1.9 - 2, right?
because no matter what values I try, when the eigenvalues are getting closer to 1 and two, the matrix is just getting closer to the matrix of:
1 0
0 2
19. Dick
25,887
The two spaces multiplied together have to give you 1.1*1.9 - 2. How about putting one blank to be 1 and the other to be 1.1*1.9 - 2? The eigenvalues should work out to EXACTLY 1 and 2. Try it with x=3/2.
20. roto25
11
I had just figured out that
1.5 0.5
0.5 1.5
worked out! :)
|
Home > Difference Between > Difference Between Standard Deviation And Relative Error
# Difference Between Standard Deviation And Relative Error
## Contents
BMJ 1995;310: 298. [PMC free article] [PubMed]3. This bias will be negative or positive depending upon the type and there may be several systematic errors at work. OverviewThere are certain basic concepts in analytical chemistry that are helpful to the analyst when treating analytical data. jtbell said: ↑ Does this help? get redirected here
However, we have the ability to make quantitative measurements. For instance I get my percent deviation to be 5%, and my percent error = 11%. If many data points are close to the mean, then the standard deviation is small; if many data points are far from the mean, then the standard deviation is large. Hide this message.QuoraSign In Standard Deviation Error and Errors Statistics (academic discipline)What is the difference between Relative Error and Standard Deviation?UpdateCancelAnswer Wiki1 Answer Prasham Rambhia, loves StatisticsWritten 144w agoRelative errorRelative error
## Difference Between Standard Deviation And Normal Distribution
The graph shows the ages for the 16 runners in the sample, plotted on the distribution of ages for all 9,732 runners. Now the sample mean will vary from sample to sample; the way this variation occurs is described by the “sampling distribution” of the mean. Altman DG, Bland JM.
Save your draft before refreshing this page.Submit any pending changes before refreshing this page. Compute the following probabilities.? Or decreasing standard error by a factor of ten requires a hundred times as many observations. Difference Between Standard Deviation And Variance In Finance It is useful to compare the standard error of the mean for the age of the runners versus the age at first marriage, as in the graph.
The proportion or the mean is calculated using the sample. Difference Between Standard Deviation And Covariance Relative Precision Error Relative Standard Deviation (RSD) Coefficient of Variation (CV) Example: The CV of 53.15 %Cl, 53.56 %Cl, and 53.11 %Cl is (0.249 %Cl/53.27 %Cl)x100% = 0.47% relative uncertainty. For example, the U.S. http://zimmer.csufresno.edu/~davidz/Chem102/Gallery/AbsRel/AbsRel.html Standard errors provide simple measures of uncertainty in a value and are often used because: If the standard error of several individual quantities is known then the standard error of some
If the analyst touches the weight with their finger and obtains a weight of 1.0005 grams, the total error = 1.0005 -1.0000 = 0.0005 grams and the random and systematic errors Difference Between Standard Deviation And Standard Uncertainty Feb 16, 2007 #1 rachelle % Deviation vs. % Error?? Gurland and Tripathi (1971)[6] provide a correction and equation for this effect. That notation gives no indication whether the second figure is the standard deviation or the standard error (or indeed something else).
## Difference Between Standard Deviation And Covariance
The frequency distribution of the measurements approximates a bell-shaped curve that is symmetrical around the mean. https://answers.yahoo.com/question/?qid=20080224200856AAKqXwF Variance is tabulated in units squared. Difference Between Standard Deviation And Normal Distribution Therefore, the error can be estimated using equation 14.1 and the conventional true value.Errors in analytical chemistry are classified as systematic (determinate) and random (indeterminate). Difference Between Standard Deviation And Population Journal of the Royal Statistical Society.
As a result, we need to use a distribution that takes into account that spread of possible σ's. http://completeprogrammer.net/difference-between/difference-between-standard-deviations-and-standard-error.html Since truly random error is just as likely to be negative as positive, we can reason that a measurement that has only random error is accurate to within the precision of Blackwell Publishing. 81 (1): 75–81. The confidence interval of 18 to 22 is a quantitative measure of the uncertainty – the possible difference between the true average effect of the drug and the estimate of 20mg/dL. Difference Between Standard Deviation And Residual Standard Deviation
All rights reserved, Privacy | Terms| Feedback 1.800.669.6799FacebookTwitterYouTubeLinkedInRSS Absolute Relative Absolute values have the same units as the quantities measured. Unless the entire population is examined, s cannot be known and is estimated from samples randomly selected from it. For the age at first marriage, the population mean age is 23.44, and the population standard deviation is 4.72. http://completeprogrammer.net/difference-between/difference-between-standard-error-mean-standard-deviation.html Struggles with the Continuum – Part 7 Why Road Capacity Is Almost Independent of the Speed Limit So I Am Your Intro Physics Instructor Explaining Rolling Motion 11d Gravity From Just
Of course, T / n {\displaystyle T/n} is the sample mean x ¯ {\displaystyle {\bar {x}}} . Difference Between Standard Deviation And Sample Variance Sampling from a distribution with a large standard deviation The first data set consists of the ages of 9,732 women who completed the 2012 Cherry Blossom run, a 10-mile race held Thanks~ Rachelle rachelle, Feb 16, 2007 Phys.org - latest science and technology news stories on Phys.org •As hunt for sterile neutrino continues, mystery deepens •New sensor material could enable more
## The error is based on a theoretic value expected.
This formula may be derived from what we know about the variance of a sum of independent random variables.[5] If X 1 , X 2 , … , X n {\displaystyle In each of these scenarios, a sample of observations is drawn from a large population. For a large sample, a 95% confidence interval is obtained as the values 1.96×SE either side of the mean. Difference Between Standard Deviation And Root Mean Square If one survey has a standard error of $10,000 and the other has a standard error of$5,000, then the relative standard errors are 20% and 10% respectively.
National Center for Health Statistics (24). The graph below shows the distribution of the sample means for 20,000 samples, where each sample is of size n=16. The changed conditions may include principle of measurement, method of measurement, observer, measuring instrument, reference standard, location, conditions of use, and time.When discussing the precision of measurement data, it is helpful this page Share this thread via Reddit, Google+, Twitter, or Facebook Have something to add?
For example, in the population {4, 8}, the mean is 6 and the deviations from mean are {−2, 2}. As will be shown, the standard error is the standard deviation of the sampling distribution. Why Supersymmetry? You can only upload a photo (png, jpg, jpeg) or a video (3gp, 3gpp, mp4, mov, avi, mpg, mpeg, rm).
doi:10.2307/2682923. If σ is known, the standard error is calculated using the formula σ x ¯ = σ n {\displaystyle \sigma _{\bar {x}}\ ={\frac {\sigma }{\sqrt {n}}}} where σ is the It is widely used in analytical chemistry to express the precision of an assay: (standard deviation of array X) × 100 / (average of array X) = relative standard deviation Source(s): more...
Newer Than: Search this thread only Search this forum only Display results as threads More... The graphs below show the sampling distribution of the mean for samples of size 4, 9, and 25. Using the utmost of care, the analyst can only obtain a weight to the uncertainty of the balance or deliver a volume to the uncertainty of the glass pipette. It is a mistake that went unnoticed, such as a transcription error or a spilled solution.
Because of Deligne’s theorem. Misuse of standard error of the mean (SEM) when reporting variability of a sample. We can estimate how much sample means will vary from the standard deviation of this sampling distribution, which we call the standard error (SE) of the estimate of the mean. Hence, taking several measurements of the 1.0000 gram weight with the added weight of the fingerprint, the analyst would eventually report the weight of the finger print as 0.0005 grams where
|
# API¶
stan([file, model_name, model_code, fit, …]) Fit a model using Stan. stanc([file, charset, model_code, …]) Translate Stan model specification into C++ code. StanModel([file, charset, model_name, …]) Model described in Stan’s modeling language compiled from C++ code.
pystan.stan(file=None, model_name='anon_model', model_code=None, fit=None, data=None, pars=None, chains=4, iter=2000, warmup=None, thin=1, init='random', seed=None, algorithm=None, control=None, sample_file=None, diagnostic_file=None, verbose=False, boost_lib=None, eigen_lib=None, include_paths=None, n_jobs=-1, **kwargs)[source]
Fit a model using Stan.
The pystan.stan function was deprecated in version 2.17 and will be removed in version 3.0. Compiling and using a Stan Program (e.g., for drawing samples) should be done in separate steps.
Parameters: Returns: file (string {'filename', file-like object}) – Model code must found via one of the following parameters: file or model_code. If file is a filename, the string passed as an argument is expected to be a filename containing the Stan model specification. If file is a file object, the object passed must have a ‘read’ method (file-like object) that is called to fetch the Stan model specification. charset (string, optional) – If bytes or files are provided, this charset is used to decode. ‘utf-8’ by default. model_code (string) – A string containing the Stan model specification. Alternatively, the model may be provided with the parameter file. model_name (string, optional) – A string naming the model. If none is provided ‘anon_model’ is the default. However, if file is a filename, then the filename will be used to provide a name. ‘anon_model’ by default. fit (StanFit instance) – An instance of StanFit derived from a previous fit, None by default. If fit is not None, the compiled model associated with a previous fit is reused and recompilation is avoided. data (dict) – A Python dictionary providing the data for the model. Variables for Stan are stored in the dictionary as expected. Variable names are the keys and the values are their associated values. Stan only accepts certain kinds of values; see Notes. pars (list of string, optional) – A list of strings indicating parameters of interest. By default all parameters specified in the model will be stored. chains (int, optional) – Positive integer specifying number of chains. 4 by default. iter (int, 2000 by default) – Positive integer specifying how many iterations for each chain including warmup. warmup (int, iter//2 by default) – Positive integer specifying number of warmup (aka burin) iterations. As warmup also specifies the number of iterations used for stepsize adaption, warmup samples should not be used for inference. thin (int, optional) – Positive integer specifying the period for saving samples. Default is 1. init ({0, '0', 'random', function returning dict, list of dict}, optional) – Specifies how initial parameter values are chosen: - 0 or ‘0’ initializes all to be zero on the unconstrained support. - ‘random’ generates random initial values. An optional parameter init_r controls the range of randomly generated initial values for parameters in terms of their unconstrained support; list of size equal to the number of chains (chains), where the list contains a dict with initial parameter values; function returning a dict with initial parameter values. The function may take an optional argument chain_id. seed (int or np.random.RandomState, optional) – The seed, a positive integer for random number generation. Only one seed is needed when multiple chains are used, as the other chain’s seeds are generated from the first chain’s to prevent dependency among random number streams. By default, seed is random.randint(0, MAX_UINT). algorithm ({"NUTS", "HMC", "Fixed_param"}, optional) – One of the algorithms that are implemented in Stan such as the No-U-Turn sampler (NUTS, Hoffman and Gelman 2011) and static HMC. sample_file (string, optional) – File name specifying where samples for all parameters and other saved quantities will be written. If not provided, no samples will be written. If the folder given is not writable, a temporary directory will be used. When there are multiple chains, an underscore and chain number are appended to the file name. By default do not write samples to file. diagnostic_file (string, optional) – File name specifying where diagnostic information should be written. By default no diagnostic information is recorded. boost_lib (string, optional) – The path to a version of the Boost C++ library to use instead of the one supplied with PyStan. eigen_lib (string, optional) – The path to a version of the Eigen C++ library to use instead of the one in the supplied with PyStan. include_paths (list of strings, optional) – Paths for #include files defined in Stan code. verbose (boolean, optional) – Indicates whether intermediate output should be piped to the console. This output may be useful for debugging. False by default. control (dict, optional) – A dictionary of parameters to control the sampler’s behavior. Default values are used if control is not specified. The following are adaptation parameters for sampling algorithms. These are parameters used in Stan with similar names: adapt_engaged : bool adapt_gamma : float, positive, default 0.05 adapt_delta : float, between 0 and 1, default 0.8 adapt_kappa : float, between default 0.75 adapt_t0 : float, positive, default 10 adapt_init_buffer : int, positive, defaults to 75 adapt_term_buffer : int, positive, defaults to 50 adapt_window : int, positive, defaults to 25 In addition, the algorithm HMC (called ‘static HMC’ in Stan) and NUTS share the following parameters: stepsize: float, positive stepsize_jitter: float, between 0 and 1 metric : str, {“unit_e”, “diag_e”, “dense_e”} In addition, depending on which algorithm is used, different parameters can be set as in Stan for sampling. For the algorithm HMC we can set int_time: float, positive For algorithm NUTS, we can set max_treedepth : int, positive n_jobs (int, optional) – Sample in parallel. If -1 all CPUs are used. If 1, no parallel computing code is used at all, which is useful for debugging. fit StanFit instance chain_id (int, optional) – chain_id can be a vector to specify the chain_id for all chains or an integer. For the former case, they should be unique. For the latter, the sequence of integers starting from the given chain_id are used for all chains. init_r (float, optional) – init_r is only valid if init == “random”. In this case, the intial values are simulated from [-init_r, init_r] rather than using the default interval (see the manual of (Cmd)Stan). test_grad (bool, optional) – If test_grad is True, Stan will not do any sampling. Instead, the gradient calculation is tested and printed out and the fitted StanFit4Model object is in test gradient mode. By default, it is False. append_samples (bool, optional) refresh (int, optional) – Argument refresh can be used to control how to indicate the progress during sampling (i.e. show the progress every code{refresh} iterations). By default, refresh is max(iter/10, 1). obfuscate_model_name (boolean, optional) – obfuscate_model_name is only valid if fit is None. True by default. If False the model name in the generated C++ code will not be made unique by the insertion of randomly generated characters. Generally it is recommended that this parameter be left as True.
Examples
>>> from pystan import stan
>>> import numpy as np
>>> model_code = '''
... parameters {
... real y[2];
... }
... model {
... y[1] ~ normal(0, 1);
... y[2] ~ double_exponential(0, 2);
... }'''
>>> fit1 = stan(model_code=model_code, iter=10)
>>> print(fit1)
>>> excode = '''
... transformed data {
... real y[20];
... y[1] = 0.5796; y[2] = 0.2276; y[3] = -0.2959;
... y[4] = -0.3742; y[5] = 0.3885; y[6] = -2.1585;
... y[7] = 0.7111; y[8] = 1.4424; y[9] = 2.5430;
... y[10] = 0.3746; y[11] = 0.4773; y[12] = 0.1803;
... y[13] = 0.5215; y[14] = -1.6044; y[15] = -0.6703;
... y[16] = 0.9459; y[17] = -0.382; y[18] = 0.7619;
... y[19] = 0.1006; y[20] = -1.7461;
... }
... parameters {
... real mu;
... real<lower=0, upper=10> sigma;
... vector[2] z[3];
... real<lower=0> alpha;
... }
... model {
... y ~ normal(mu, sigma);
... for (i in 1:3)
... z[i] ~ normal(0, 1);
... alpha ~ exponential(2);
... }'''
>>>
>>> def initfun1():
... return dict(mu=1, sigma=4, z=np.random.normal(size=(3, 2)), alpha=1)
>>> exfit0 = stan(model_code=excode, init=initfun1)
>>> def initfun2(chain_id=1):
... return dict(mu=1, sigma=4, z=np.random.normal(size=(3, 2)), alpha=1 + chain_id)
>>> exfit1 = stan(model_code=excode, init=initfun2)
pystan.stanc(file=None, charset='utf-8', model_code=None, model_name='anon_model', include_paths=None, verbose=False, obfuscate_model_name=True)[source]
Translate Stan model specification into C++ code.
Parameters: file ({string, file}, optional) – If filename, the string passed as an argument is expected to be a filename containing the Stan model specification. If file, the object passed must have a ‘read’ method (file-like object) that is called to fetch the Stan model specification. charset (string, 'utf-8' by default) – If bytes or files are provided, this charset is used to decode. model_code (string, optional) – A string containing the Stan model specification. Alternatively, the model may be provided with the parameter file. model_name (string, 'anon_model' by default) – A string naming the model. If none is provided ‘anon_model’ is the default. However, if file is a filename, then the filename will be used to provide a name. include_paths (list of strings, optional) – Paths for #include files defined in Stan code. verbose (boolean, False by default) – Indicates whether intermediate output should be piped to the console. This output may be useful for debugging. obfuscate_model_name (boolean, True by default) – If False the model name in the generated C++ code will not be made unique by the insertion of randomly generated characters. Generally it is recommended that this parameter be left as True. stanc_ret – A dictionary with the following keys: model_name, model_code, cpp_code, and status. Status indicates the success of the translation from Stan code into C++ code (success = 0, error = -1). dict
Notes
C++ reserved words and Stan reserved words may not be used for variable names; see the Stan User’s Guide for a complete list.
The #include method follows a C/C++ syntax #include foo/my_gp_funs.stan. The method needs to be at the start of the row, no whitespace is allowed. After the included file no whitespace or comments are allowed. pystan.experimental(PyStan 2.18) has a fix_include-function to clean the #include statements from the model_code. Example: from pystan.experimental import fix_include model_code = fix_include(model_code)
StanModel()
Class representing a compiled Stan model
stan()
Fit a model using Stan
References
The Stan Development Team (2013) Stan Modeling Language User’s Guide and Reference Manual. <http://mc-stan.org/>.
Examples
>>> stanmodelcode = '''
... data {
... int<lower=0> N;
... real y[N];
... }
...
... parameters {
... real mu;
... }
...
... model {
... mu ~ normal(0, 10);
... y ~ normal(mu, 1);
... }
... '''
>>> r = stanc(model_code=stanmodelcode, model_name = "normal1")
>>> sorted(r.keys())
['cppcode', 'model_code', 'model_cppname', 'model_name', 'status']
>>> r['model_name']
'normal1'
class pystan.StanModel(file=None, charset='utf-8', model_name='anon_model', model_code=None, stanc_ret=None, include_paths=None, boost_lib=None, eigen_lib=None, verbose=False, obfuscate_model_name=True, extra_compile_args=None)[source]
Model described in Stan’s modeling language compiled from C++ code.
Instances of StanModel are typically created indirectly by the functions stan and stanc.
Parameters: file (string {'filename', 'file'}) – If filename, the string passed as an argument is expected to be a filename containing the Stan model specification. If file, the object passed must have a ‘read’ method (file-like object) that is called to fetch the Stan model specification. charset (string, 'utf-8' by default) – If bytes or files are provided, this charset is used to decode. model_name (string, 'anon_model' by default) – A string naming the model. If none is provided ‘anon_model’ is the default. However, if file is a filename, then the filename will be used to provide a name. model_code (string) – A string containing the Stan model specification. Alternatively, the model may be provided with the parameter file. stanc_ret (dict) – A dict returned from a previous call to stanc which can be used to specify the model instead of using the parameter file or model_code. include_paths (list of strings) – Paths for #include files defined in Stan program code. boost_lib (string) – The path to a version of the Boost C++ library to use instead of the one supplied with PyStan. eigen_lib (string) – The path to a version of the Eigen C++ library to use instead of the one in the supplied with PyStan. verbose (boolean, False by default) – Indicates whether intermediate output should be piped to the console. This output may be useful for debugging. kwargs (keyword arguments) – Additional arguments passed to stanc.
model_name
Type: string
model_code
Stan code for the model.
Type: string
model_cpp
C++ code for the model.
Type: string
module
Python module created by compiling the C++ code for the model.
Type: builtins.module
show()[source]
Print the Stan model specification.
sampling()[source]
Draw samples from the model.
optimizing()[source]
Obtain a point estimate by maximizing the log-posterior.
get_cppcode()[source]
Return the C++ code for the module.
get_cxxflags()[source]
Return the ‘CXXFLAGS’ used for compiling the model.
get_include_paths()[source]
Return include_paths used for compiled model.
stanc
Compile a Stan model specification
stan
Fit a model using Stan
Notes
More details of Stan, including the full user’s guide and reference manual can be found at <URL: http://mc-stan.org/>.
There are three ways to specify the model’s code for stan_model.
1. parameter model_code, containing a string to whose value is the Stan model specification,
2. parameter file, indicating a file (or a connection) from which to read the Stan model specification, or
3. parameter stanc_ret, indicating the re-use of a model
generated in a previous call to stanc.
References
The Stan Development Team (2013) Stan Modeling Language User’s Guide and Reference Manual. <URL: http://mc-stan.org/>.
Examples
>>> model_code = 'parameters {real y;} model {y ~ normal(0,1);}'
>>> model_code; m = StanModel(model_code=model_code)
... # doctest: +ELLIPSIS
'parameters ...
>>> m.model_name
'anon_model'
optimizing(data=None, seed=None, init='random', sample_file=None, algorithm=None, verbose=False, as_vector=True, **kwargs)[source]
Obtain a point estimate by maximizing the joint posterior.
Parameters: Returns: data (dict) – A Python dictionary providing the data for the model. Variables for Stan are stored in the dictionary as expected. Variable names are the keys and the values are their associated values. Stan only accepts certain kinds of values; see Notes. seed (int or np.random.RandomState, optional) – The seed, a positive integer for random number generation. Only one seed is needed when multiple chains are used, as the other chain’s seeds are generated from the first chain’s to prevent dependency among random number streams. By default, seed is random.randint(0, MAX_UINT). init ({0, '0', 'random', function returning dict, list of dict}, optional) – Specifies how initial parameter values are chosen: - 0 or ‘0’ initializes all to be zero on the unconstrained support. - ‘random’ generates random initial values. An optional parameter init_r controls the range of randomly generated initial values for parameters in terms of their unconstrained support; list of size equal to the number of chains (chains), where the list contains a dict with initial parameter values; function returning a dict with initial parameter values. The function may take an optional argument chain_id. sample_file (string, optional) – File name specifying where samples for all parameters and other saved quantities will be written. If not provided, no samples will be written. If the folder given is not writable, a temporary directory will be used. When there are multiple chains, an underscore and chain number are appended to the file name. By default do not write samples to file. algorithm ({"LBFGS", "BFGS", "Newton"}, optional) – Name of optimization algorithm to be used. Default is LBFGS. verbose (boolean, optional) – Indicates whether intermediate output should be piped to the console. This output may be useful for debugging. False by default. as_vector (boolean, optional) – Indicates an OrderedDict will be returned rather than a nested dictionary with keys ‘par’ and ‘value’. optim – Depending on as_vector, returns either an OrderedDict having parameters as keys and point estimates as values or an OrderedDict with components ‘par’ and ‘value’. optim['par'] is a dictionary of point estimates, indexed by the parameter name. optim['value'] stores the value of the log-posterior (up to an additive constant, the lp__ in Stan) corresponding to the point identified by optim[‘par’]. OrderedDict iter (int, optional) – The maximum number of iterations. save_iterations (bool, optional) refresh (int, optional) init_alpha (float, optional) – For BFGS and LBFGS, default is 0.001 tol_obj (float, optional) – For BFGS and LBFGS, default is 1e-12. tol_grad (float, optional) – For BFGS and LBFGS, default is 1e-8. tol_param (float, optional) – For BFGS and LBFGS, default is 1e-8. tol_rel_grad (float, optional) – For BFGS and LBFGS, default is 1e7. history_size (int, optional) – For LBFGS, default is 5. Refer to the manuals for both CmdStan and Stan for more details.
Examples
>>> from pystan import StanModel
>>> m = StanModel(model_code='parameters {real y;} model {y ~ normal(0,1);}')
>>> f = m.optimizing()
sampling(data=None, pars=None, chains=4, iter=2000, warmup=None, thin=1, seed=None, init='random', sample_file=None, diagnostic_file=None, verbose=False, algorithm=None, control=None, n_jobs=-1, **kwargs)[source]
Draw samples from the model.
Parameters: Returns: data (dict) – A Python dictionary providing the data for the model. Variables for Stan are stored in the dictionary as expected. Variable names are the keys and the values are their associated values. Stan only accepts certain kinds of values; see Notes. pars (list of string, optional) – A list of strings indicating parameters of interest. By default all parameters specified in the model will be stored. chains (int, optional) – Positive integer specifying number of chains. 4 by default. iter (int, 2000 by default) – Positive integer specifying how many iterations for each chain including warmup. warmup (int, iter//2 by default) – Positive integer specifying number of warmup (aka burn-in) iterations. As warmup also specifies the number of iterations used for step-size adaption, warmup samples should not be used for inference. warmup=0 forced if algorithm=”Fixed_param”. thin (int, 1 by default) – Positive integer specifying the period for saving samples. seed (int or np.random.RandomState, optional) – The seed, a positive integer for random number generation. Only one seed is needed when multiple chains are used, as the other chain’s seeds are generated from the first chain’s to prevent dependency among random number streams. By default, seed is random.randint(0, MAX_UINT). algorithm ({"NUTS", "HMC", "Fixed_param"}, optional) – One of algorithms that are implemented in Stan such as the No-U-Turn sampler (NUTS, Hoffman and Gelman 2011), static HMC, or Fixed_param. Default is NUTS. init ({0, '0', 'random', function returning dict, list of dict}, optional) – Specifies how initial parameter values are chosen: 0 or ‘0’ initializes all to be zero on the unconstrained support; ‘random’ generates random initial values; list of size equal to the number of chains (chains), where the list contains a dict with initial parameter values; function returning a dict with initial parameter values. The function may take an optional argument chain_id. sample_file (string, optional) – File name specifying where samples for all parameters and other saved quantities will be written. If not provided, no samples will be written. If the folder given is not writable, a temporary directory will be used. When there are multiple chains, an underscore and chain number are appended to the file name. By default do not write samples to file. verbose (boolean, False by default) – Indicates whether intermediate output should be piped to the console. This output may be useful for debugging. control (dict, optional) – A dictionary of parameters to control the sampler’s behavior. Default values are used if control is not specified. The following are adaptation parameters for sampling algorithms. These are parameters used in Stan with similar names: adapt_engaged : bool, default True adapt_gamma : float, positive, default 0.05 adapt_delta : float, between 0 and 1, default 0.8 adapt_kappa : float, between default 0.75 adapt_t0 : float, positive, default 10 In addition, the algorithm HMC (called ‘static HMC’ in Stan) and NUTS share the following parameters: stepsize: float, positive stepsize_jitter: float, between 0 and 1 metric : str, {“unit_e”, “diag_e”, “dense_e”} In addition, depending on which algorithm is used, different parameters can be set as in Stan for sampling. For the algorithm HMC we can set int_time: float, positive For algorithm NUTS, we can set max_treedepth : int, positive n_jobs (int, optional) – Sample in parallel. If -1 all CPUs are used. If 1, no parallel computing code is used at all, which is useful for debugging. fit – Instance containing the fitted results. StanFit4Model chain_id (int or iterable of int, optional) – chain_id can be a vector to specify the chain_id for all chains or an integer. For the former case, they should be unique. For the latter, the sequence of integers starting from the given chain_id are used for all chains. init_r (float, optional) – init_r is only valid if init == “random”. In this case, the intial values are simulated from [-init_r, init_r] rather than using the default interval (see the manual of Stan). test_grad (bool, optional) – If test_grad is True, Stan will not do any sampling. Instead, the gradient calculation is tested and printed out and the fitted StanFit4Model object is in test gradient mode. By default, it is False. append_samples (bool, optional) refresh (int, optional) – Argument refresh can be used to control how to indicate the progress during sampling (i.e. show the progress every code{refresh} iterations). By default, refresh is max(iter/10, 1). check_hmc_diagnostics (bool, optional) – After sampling run pystan.diagnostics.check_hmc_diagnostics function. Default is True. Checks for n_eff and rhat skipped if the flat parameter count is higher than 1000, unless user explicitly defines check_hmc_diagnostics=True.
Examples
>>> from pystan import StanModel
>>> m = StanModel(model_code='parameters {real y;} model {y ~ normal(0,1);}')
>>> m.sampling(iter=100)
vb(data=None, pars=None, iter=10000, seed=None, init='random', sample_file=None, diagnostic_file=None, verbose=False, algorithm=None, **kwargs)[source]
Call Stan’s variational Bayes methods.
Parameters: data (dict) – A Python dictionary providing the data for the model. Variables for Stan are stored in the dictionary as expected. Variable names are the keys and the values are their associated values. Stan only accepts certain kinds of values; see Notes. pars (list of string, optional) – A list of strings indicating parameters of interest. By default all parameters specified in the model will be stored. seed (int or np.random.RandomState, optional) – The seed, a positive integer for random number generation. Only one seed is needed when multiple chains are used, as the other chain’s seeds are generated from the first chain’s to prevent dependency among random number streams. By default, seed is random.randint(0, MAX_UINT). sample_file (string, optional) – File name specifying where samples for all parameters and other saved quantities will be written. If not provided, samples will be written to a temporary file and read back in. If the folder given is not writable, a temporary directory will be used. When there are multiple chains, an underscore and chain number are appended to the file name. By default do not write samples to file. diagnostic_file (string, optional) – File name specifying where diagnostics for the variational fit will be written. iter (-) – Positive integer specifying how many iterations for each chain including warmup. algorithm ({'meanfield', 'fullrank'}) – algorithm}{One of “meanfield” and “fullrank” indicating which variational inference algorithm is used. meanfield: mean-field approximation; fullrank: full-rank covariance. The default is ‘meanfield’. verbose (boolean, False by default) – Indicates whether intermediate output should be piped to the console. This output may be useful for debugging. optional parameters, refer to the manuals for both CmdStan (Other) – Stan. (and) – iter – grad_samples the number of samples for Monte Carlo enumerate of (-) – gradients, defaults to 1. elbo_samples the number of samples for Monte Carlo estimate of ELBO (-) – (objective function), defaults to 100. (ELBO stands for “the evidence lower bound”.) eta positive stepsize weighting parameters for variational (-) – inference but is ignored if adaptation is engaged, which is the case by default. adapt_engaged flag indicating whether to automatically adapt the (-) – stepsize and defaults to True. tol_rel_objconvergence tolerance on the relative norm of the (-) – objective, defaults to 0.01. eval_elbo, evaluate ELBO every Nth iteration, defaults to 100 (-) – output_samples number of posterior samples to draw and save, (-) – defaults to 1000. adapt_iter number of iterations to adapt the stepsize if (-) – adapt_engaged is True and ignored otherwise. results – Dictionary containing information related to results. dict
Examples
>>> from pystan import StanModel
>>> m = StanModel(model_code='parameters {real y;} model {y ~ normal(0,1);}')
>>> results = m.vb()
>>> # results saved on disk in format inspired by CSV
>>> print(results['args']['sample_file'])
## StanFit4model¶
Each StanFit instance is model-specific, so the name of the class will be something like: StanFit4anon_model. The StanFit4model instances expose a number of methods.
class pystan.StanFit4model
plot(pars=None)
Visualize samples from posterior distributions
Parameters
pars : sequence of str
names of parameters
This is currently an alias for the traceplot method.
extract(pars=None, permuted=True, inc_warmup=False, dtypes=None)
Extract samples in different forms for different parameters.
Parameters
pars : sequence of str
names of parameters (including other quantities)
permuted : bool
If True, returned samples are permuted. All chains are merged and warmup samples are discarded.
inc_warmup : bool
If True, warmup samples are kept; otherwise they are discarded. If permuted is True, inc_warmup is ignored.
dtypes : dict
datatype of parameter(s). If nothing is passed, np.float will be used for all parameters.
Returns
samples : dict or array If permuted is True, return dictionary with samples for each parameter (or other quantity) named in pars.
If permuted is False and pars is None, an array is returned. The first dimension of the array is for the iterations; the second for the number of chains; the third for the parameters. Vectors and arrays are expanded to one parameter (a scalar) per cell, with names indicating the third dimension. Parameters are listed in the same order as model_pars and flatnames.
If permuted is False and pars is not None, return dictionary with samples for each parameter (or other quantity) named in pars. The first dimension of the sample array is for the iterations; the second for the number of chains; the rest for the parameters. Parameters are listed in the same order as pars.
stansummary(pars=None, probs=(0.025, 0.25, 0.5, 0.75, 0.975), digits_summary=2)
Summary statistic table. Parameters ———- pars : str or sequence of str, optional
Parameter names. By default use all parameters
probs : sequence of float, optional
Quantiles. By default, (0.025, 0.25, 0.5, 0.75, 0.975)
digits_summary : int, optional
Number of significant digits. By default, 2
summary : string
Table includes mean, se_mean, sd, probs_0, …, probs_n, n_eff and Rhat.
>>> model_code = 'parameters {real y;} model {y ~ normal(0,1);}'
>>> m = StanModel(model_code=model_code, model_name="example_model")
>>> fit = m.sampling()
>>> print(fit.stansummary())
Inference for Stan model: example_model.
4 chains, each with iter=2000; warmup=1000; thin=1;
post-warmup draws per chain=1000, total post-warmup draws=4000.
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
y 0.01 0.03 1.0 -2.01 -0.68 0.02 0.72 1.97 1330 1.0
lp__ -0.5 0.02 0.68 -2.44 -0.66 -0.24 -0.05-5.5e-4 1555 1.0
Samples were drawn using NUTS at Thu Aug 17 00:52:25 2017.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at
convergence, Rhat=1).
summary(pars=None, probs=None)
Summarize samples (compute mean, SD, quantiles) in all chains. REF: stanfit-class.R summary method Parameters ———- fit : StanFit4Model object pars : str or sequence of str, optional
Parameter names. By default use all parameters
probs : sequence of float, optional
Quantiles. By default, (0.025, 0.25, 0.5, 0.75, 0.975)
summaries : OrderedDict of array
Array indexed by ‘summary’ has dimensions (num_params, num_statistics). Parameters are unraveled in row-major order. Statistics include: mean, se_mean, sd, probs_0, …, probs_n, n_eff, and Rhat. Array indexed by ‘c_summary’ breaks down the statistics by chain and has dimensions (num_params, num_statistics_c_summary, num_chains). Statistics for c_summary are the same as for summary with the exception that se_mean, n_eff, and Rhat are absent. Row names and column names are also included in the OrderedDict.
log_prob(upar, adjust_transform=True, gradient=False)
Expose the log_prob of the model to stan_fit so user can call this function.
Parameters
upar :
The real parameters on the unconstrained space.
Whether we add the term due to the transform from constrained space to unconstrained space implicitly done in Stan.
Note
In Stan, the parameters need be defined with their supports. For example, for a variance parameter, we must define it on the positive real line. But inside Stan’s sampler, all parameters defined on the constrained space are transformed to unconstrained space, so the log density function need be adjusted (i.e., adding the log of the absolute value of the Jacobian determinant). With the transformation, Stan’s samplers work on the unconstrained space and once a new iteration is drawn, Stan transforms the parameters back to their supports. All the transformation are done inside Stan without interference from the users. However, when using the log density function for a model exposed to Python, we need to be careful. For example, if we are interested in finding the mode of parameters on the constrained space, we then do not need the adjustment. For this reason, there is an argument named adjust_transform for functions log_prob and grad_log_prob.
grad_log_prob(upars, adjust_transform=True)
Expose the grad_log_prob of the model to stan_fit so user can call this function.
Parameters
upar : array
The real parameters on the unconstrained space.
Whether we add the term due to the transform from constrained space to unconstrained space implicitly done in Stan.
get_adaptation_info()
Obtain adaptation information for sampler, which now only NUTS2 has.
The results are returned as a list, each element of which is a character string for a chain.
get_logposterior(inc_warmup=True)
Get the log-posterior (up to an additive constant) for all chains.
Each element of the returned array is the log-posterior for a chain. Optional parameter inc_warmup indicates whether to include the warmup period.
get_sampler_params(inc_warmup=True)
Obtain the parameters used for the sampler such as stepsize and treedepth. The results are returned as a list, each element of which is an OrderedDict a chain. The dictionary has number of elements corresponding to the number of parameters used in the sampler. Optional parameter inc_warmup indicates whether to include the warmup period.
get_posterior_mean()
Get the posterior mean for all parameters
Returns
means : array of shape (num_parameters, num_chains)
Order of parameters is given by self.model_pars or self.flatnames if parameters of interest include non-scalar parameters. An additional column for mean lp__ is also included.
constrain_pars(np.ndarray[double, ndim=1, mode="c"] upar not None)
Transform parameters from unconstrained space to defined support
unconstrain_pars(par)
Transform parameters from defined support to unconstrained space
get_seed()
get_inits()
get_stancode()
to_dataframe(pars=None, permuted=True, dtypes=None, inc_warmup=False, diagnostics=True)
Extract samples as a pandas datafriame for different parameters.
pars : {str, sequence of str}
parameter (or quantile) name(s). If permuted is False, pars is ignored.
permuted : bool, default False
If True, returned samples are permuted. All chains are merged and warmup samples are discarded.
dtypes : dict
datatype of parameter(s). If nothing is passed, np.float will be used for all parameters.
inc_warmup : bool
If True, warmup samples are kept; otherwise they are discarded. If permuted is True, inc_warmup is ignored.
diagnostics : bool
If True, include MCMC diagnostics in dataframe. If permuted is True, diagnostics is ignored.
df : pandas dataframe
Unlike default in extract (permuted=True) .to_dataframe method returns non-permuted samples (permuted=False) with diagnostics params included.
|
### Archive
Archive for the ‘solr’ Category
## How to solve org.apache.lucene.index.CorruptIndexException?
If solr server stops while indexing is going on, we may get org.apache.lucene.index.CorruptIndexException error.
To solve this we have to use org.apache.lucene.index.CheckIndex class. We have to run this class to fix the corrupted indexes in solr.
1. First take back up your indexed data
2. Go to the directory where lucene-core.3.6.1.jar is there
3. then run this command
java -cp lucene-core-3.1.0.jar -ea:org.apache.lucene… org.apache.lucene.index.CheckIndex “your indexed directory path” –fix
in my case it is
java -cp lucene-core-3.1.0.jar -ea:org.apache.lucene… org.apache.lucene.index.CheckIndex “C:\Program Files\gisgraphy-3.0-beta2\solr\data\index” –fix
As there are no corrupted indexes in my case it says “No problems were detected with this index”
If there are any corrupted indexes are there then those will be fixed with this command and solr will be working fine.
Categories: Java, solr
|
Uncategorized
# Anavar with testosterone, hgh legal uses
Anavar with testosterone, Hgh legal uses – Buy legal anabolic steroids
# Anavar with testosterone
Sinapsis colectiva foro – perfil del usuario > perfil página. Usuario: oxandrolone and weight loss, cheap boldebolin order legal anabolic steroid fast delivery,. Anavar 10mg tablets is medically prescribed for the treatment of individuals weight loss caused by chronic or acute injury, infection, or illness. This drug is one of. They found that oxandrolone reduced body fat and increased strength among these men during the study, but the muscle and strength gain. Forum – member profile > profile page. User: oxandrolone and weight loss, oxandrolone and diabetes, title: new member, about: oxandrolone and weight loss,. Anavar is an anabolic, androgenic steroid that is utilized in the world of athletics and bodybuilding because it allows the user to grow muscle and lose fat all in. We all know the danger of a super strict diet: potential muscle loss. Who wants to look ripped if you have to lose a lot of hard-earned muscle to. This happens even if you have a well-planned diet you will still lose muscle mass, unless you are using a powerful anabolic drug like anavar. We found that the average weight gain during the first 3 weeks was 14. 5 pounds and that the physical therapy index was 8. 5 of recovery in the. There was significant weight gain in both groups (p<. 05 for time effect vs. Oxandrolone is another name of anavar which has a high anabolic rating. This steroid is top-rated among athletes, bodybuilders,. Oxandrolone, sold under the brand names oxandrin and anavar, among others, is an androgen and anabolic steroid (aas) medication which is used to help promote weight gain in various situations, to help offset protein. Weight loss and lean mass loss from burn induced catabolism can be more rapidly restored when the anabolic steroid oxandrolone is added to optimum
If you opt for Test Propionate pin 200mg a week, anavar with testosterone.
## Hgh legal uses
Calorieën voor het lichaam. , clenbuterol uk credit card, testosterone enanthate – pas soulever un paquet d’eau sans , alpha pharma anavar, testosterone. Decreasing the lean mass will lead to a deficiency in wound healing and in muscle-skeletal function -. Testosterone analogs have been reported in the literature as. Alternatives to testosterone being investigated for boys with low predicted adult stature include oxandrolone, a weaker anabolic steroid than. I plan on using 500mg of testosterone enanthate per week injecting 250mg twice a week. I am stacking this with 60mg of anavar rising. Whereas oxandrolone-treated patients had preserved lbm (9. Oxandrolone significantly increased serum prealbu- min, total protein, testosterone,. 6 boys with tall stature received testosterone therapy (500 mg. By deca the amount of testosterone available to your body, mesterolone can. Testosterone cypionate injectable solution can interact with other medications, oxandrolone, or herbs you may be taking. An interaction is when a substance. Ointment formulations anavar tablets not be used for pretreatment as they may significantly reduce testosterone absorption. Wish you all results and good health,. Can have a dramatic negative effect on testosterone, such as anavar. Anabolic steroids are synthetic derivatives of testosterone. Testosterone enanthate pharma lab 10 amps 10x250mg/1ml 38,00 € 34,20 le dr Due to the testosterone suppression and estrogen dominance that occurs, men can really begin to struggle in this department, anavar with testosterone. https://vk.com/topic-174425918_47787230
## Anavar with testosterone, hgh legal uses
This is essentially the holy grail of all kinds of growth in our bodies. According to the Medical College of Wisconsin, a low level of HGH in our bodies can lead to a significant decrease in lean muscle mass and thinning of the skin (3). According to Godfrey RJ at the Brunel University, HGH plays a crucial role in increasing our fat metabolism and maintaining a healthier body composition even in our late years (4). To put it shortly, HGH has been medically proven to accelerate our metabolism, muscle growth, strength gains, and the recovery from an intense workout (5, 6, 7, 8, 9), anavar with testosterone. Ostarine mk-2866 for bulking Decreasing the lean mass will lead to a deficiency in wound healing and in muscle-skeletal function -. Testosterone analogs have been reported in the literature as. Anabolic steroids like anavar are similar to testosterone and may sometimes even be prescribed to men who aren’t producing enough of it on. However, the long-term safety and behavioral consequences of testosterone at. Oxandrin is a man-made version of testosterone , a hormone that some hae. Oxandrolone transiently suppressed the pituitary-testicular axis and altered gh pulsatility. Testosterone increased gh via amplitude modulation. ◦39 x 1ml ampoules in boxes labelled ‘testobolin – testosterone enanthate’. ◦328 x oxandrolone tablets in a box labelled “anavar”. Oxandrolone, testosterone cypionate injection, testosterone. Length of authorization: varies. Mean growth velocity increased from 4. 5 cm/year in the oxandrolone. Anavar will cause endogenous testosterone levels to decrease in women. Anavar oxandrolone comes in a very close second to winstrol. Because these steroid supplement products could increase testosterone. The same team has now looked at the impact of a brief exposure to testosterone on mice. They found that three months after the drug was. Anavar (and all anabolic steroids) are essentially forms of exogenous testosterone, thus anavar
### Oxandrolone and weight loss, bulking meal plan for skinny guys
Anavar with testosterone, cheap order anabolic steroids online bodybuilding supplements. When stacking Anavar in a cutting cycle, you’ll want to surround it with other quality performance enhancing drugs if you’re going to reap a high reward, anavar with testosterone. To achieve this end, you’ll find the Oxandrolone hormone stacks well with the following: Testosterone (Any Form) Trenbolone (Any Form) Primobolan Depot Masteron NPP Clenbuterol Cytomel Arimidex Letrozole Human Growth Hormone Nolvadex Equipoise Winstrol. Stacking Anavar for Women. Unlike men, women will find stacking Anavar in an off-season bulking cycle to be very beneficial.
Testomax gel The best results from any peptides, especially those designed for growth hormone and taken to improve anabolism, fat loss, physique or overall health, are to combine specific combinations of peptides all at the same time, anavar with testosterone.
Anavar with testosterone, order anabolic steroids online bodybuilding supplements. At the same time, testosterone also reduces blood pressure levels, hgh legal uses.
https://cofounder.tv/2021/09/06/bulking-stack-dianabol-fiyat/
Ketogenic diet for rapid fat loss great tasting bodybuilding recipes. This takes you too the next level. Anavar weight loss pill the best sex pills shop for. From an evolutionary perspective, what is the keto diet all about this makes anavar pills for weight loss sense. Since we chose to walk upright. Why is this medication prescribed? expand section. Oxandrolone is used with a diet program to cause weight gain in people who have lost too. Anavar diet plan — oxandrolone for weight loss. Therefore, plenty of athletes use this product when they need to add their body weight. Fat loss is somewhat noticeable on anavar, with research showing that. Fat loss: it is clinically proven than anavar helps reduce abdominal fat (that includes sub-q and visceral fat) a lot better than testosterone does. Anavar weight loss pill the best sex pills shop for sale online natural penus. 2021-02-26 oxandrolone weight loss camp como this weight loss supplement is ideal for women who are not just looking for a fat burner but also wants to grow. Larger studies are needed to determine if oxandrolone is an effective, safe, and affordable option to stimulate appetite, improve weight gain,. There was significant weight gain in both groups (p<. 05 for time effect vs. Anavar and weight loss anavar and weight loss otc appetite suppressants that really work weight loss tablets without side effects best way to suppress. When to choose tbol vs anavar is very very easy: anavar is the king of oral steroids, and maybe even roids in general, for fat loss effects. Anavar vs rad 140 reddit Good bulking stack
Burned children and adults, oxandrolone has shown promis- ing results in terms of weight gain and urinary nitrogen balance during the acute phase postburn. Drolone is the fda-approved dose for treatment of weight loss or in- ability to. Buying testosterone gel online gain androgel in the legs ?????? femorales special. Tag: anabolic steroids side effects anabolic steroid your doctor. It encourages belly fat loss – studies have found that compared to testosterone and other natural weight loss techniques; anavar can cause a. This releases adrenaline in a large dose which has positive effects on weight loss and increasing energy. Bodybuilders use anavar to lose fat, grow muscle and. Oxandrolone is used to help you regain weight lost after surgery, severe trauma, or chronic infections. Oxandrolone is also used in people who. Anavar weight loss pill the best sex pills shop for sale online natural penus. From an evolutionary perspective, what is the keto diet all about this makes anavar pills for weight loss sense. Since we chose to walk upright. Conclusions: oral oxandrolone decreased sq abdominal fat more than te or weight loss alone and also tended to produce favorable changes in visceral fat. We found that the average weight gain during the first 3 weeks was 14. 5 pounds and that the physical therapy index was 8. 5 of recovery in the. 3 weight loss steroids. Also known as oxandrolone, this ‘mild-mannered’ steroid is suitable for both men and women this is the process called protein synthesis,. Anavar and weight loss anavar and weight loss otc appetite suppressants that really work weight loss tablets without side effects best way to suppress Anadrol powerlifting
However, this is not considered a safe drug by the medical community nor is it approved for bodybuilding or anti-aging usage. Besides the side effects we discussed above, the big problem is that there are no long-term studies on what taking exogenous (outside) HGH will do to a healthy person, anavar with test. Both HGH and testosterone therapies will help benefit the other hormone in some ways, anavar with test. There is no person better equipped to make this determination than the HRT specialist. Hello, i am just planning to order Ostarine and GW, anavar with testosterone. I am also taking 2IU of HGH for 16 weeks now. That is until we get stomach aches and throw up perfectly good chocolate. May that metaphor sink in a bit, anavar with test. Exactly how do these two important hormones interact to ensure muscle building and fat loss, anavar with testosterone. Testosterone (T) is a steroid hormone synthesized from cholesterol and released into the bloodstream from the Leydig cells of the testes. To overcome a plateau or to maximize gains in a short period of time, they will significantly up the dosage, known as a ‘blast’, anavar with test. This user experienced notable increases in muscle hypertrophy and fat loss from this protocol. Testosterone with HGH: What to Expect, anavar with testosterone. Testosterone with HGH: What to Expect You may have heard about athletes who use testosterone with HGH. This study confirmed the ability of low doses of r-HuEPO to maintain Hct and $$\dot V _ >$$ at elevated levels, anavar with testosterone. Growth Hormone Stack – The Best Fast-Track HGH Stack Available In The Market. Long story short, don’t buy HGH pills, patches or sprays. If you want to learn how athletes get HGH Click Here, anavar with testosterone. In order to get maximal elevations in GH, GHRP-6 should be combined with a GHRH, which you’ll learn more about shortly, anavar with test. Ultimately, to maximize the effects of IGF, you should accompany any IGF use with a GHRP, and from what I’ve personally researched, the GHRP Ipamorelin appears to be the safest and most efficacious.
Popular steroids:
Test E 200mg / EQ 200mg Geneza Pharmaceuticals $74.00 Santra 1 mg Sandoz$60.00
Para Pharma US DOM up to 20 days
GP Mast 200 mg Geneza Pharmaceuticals $87.00 Methyl-1-Test 10 mg Dragon Pharma$44.00
Parlodel 2.5 mg MEDA Pharm $24.00 Oxydrol 50 mg Pharmaqo Labs$42.00
Oxydrolone 50 mg (50 tabs)
Testocyp 250 mg Alpha-Pharma $46.00 Testobolin 250 mg Alpha-Pharma$46.00
Anavar 10 mg Dragon Pharma $95.00 HCG – Fertigyn 5000iu Sun Pharma$34.00
Dragon Pharma International
|
Share
# ABC is a right triangle right-angled at C. Let BC = a, CA = b, AB = c and let p be the length of perpendicular from C on AB, prove that - CBSE Class 10 - Mathematics
#### Question
ABC is a right triangle right-angled at C. Let BC = a, CA = b, AB = c and let p be the length of perpendicular from C on AB, prove that
(i) cp = ab
(ii) 1/p^2=1/a^2+1/b^2
#### Solution
(i) Let CD ⊥ AB. Then, CD = p
AB=c,BC=a,AC=b and CD=p
Area of triangle ABC=1/2ABxxCD=1/2xxcxxp=1/2cp
Also,
Area of triangle ABC=1/2BCxxAC=1/2xxaxxb=1/2ab
∴ \frac { 1 }{ 2 } cp = \frac { 1 }{ 2 } ab
⇒ cp = ab
(ii) Since ∆ABC is right triangle right-angled at C.
∴ AB^2 = BC^2 + AC^2
⇒ c^2 = a^2 + b^2
=>((ab)/p)^2=a^2+b^2
=>(a^2b^2)/p^2=a^2+b^2
=>1/p^2=(a^2+b^2)/(a^2b^2)=>1/p^2=1/b^2+1/a^2
=>1/p^2=1/a^2+1/b^2
Is there an error in this question or solution?
#### Video TutorialsVIEW ALL [5]
Solution ABC is a right triangle right-angled at C. Let BC = a, CA = b, AB = c and let p be the length of perpendicular from C on AB, prove that Concept: Pythagoras Theorem.
S
|
# Why does a ball bounce?
If an object is acted on by equal and opposite forces then it will be in equilibrium, and it's acceleration or velocity (and so direction as well) will not be changed.
So when a ball bounces, it exerts a force on the floor, which matches the magnitude of the force in the opposite direction (the ball is bouncing perfectly vertical), up. So how is it's velocity/direction changed? If the forces are equal and opposite to each other. In order for it bounce, surely the force acting from the floor to the ball must be greater than the force acting from the ball to the floor?
You've misunderstood the statement.
When A exerts $\vec{F}_{A \to B}$ on B, B exerts an equal and opposite force $\vec{F}_{B \to A} = - \vec{F}_{A \to B}$ on A.
The only forces acting on the ball a gravity and the normal force, and the floor experiences a force from the ball which is equal in magnitude and opposite in direction from the normal force on the ball.
• I don't understand you're last paragraph. If the forces are equal and opposite why is there a change in motion? May 24 '12 at 20:06
• One of the equal and opposite forces acts on the ball, and the other one acts on the floor. When you add up the forces acting on ball it includes only one of the pair. May 24 '12 at 20:08
• That seems so obvious now you've said it :) May 24 '12 at 20:13
The easiest way to understand this is by flows of momentum, the ball has momentum down, and when it hits the floor, it transfers twice this amount of down momentum to the floor, and so is left with negative this much down-momentum, which is up-momentum.
The statement of Newton's laws are nothing more than the flows of momentum from object to object, and how the object moves when momentum is building up on it. Understanding it this way is helpful for clearing up the elementary confusions. This is a supplement of dmckee's answer, which is correct, but it doesn't give the intuition of momentum-flow, which makes the misunderstanding impossible to formulate.
• Assuming we didn't already know the answer, why would one intuitively guess that it transfers two times the momentum to the floor, and not, say, only it's momentum? That would make it much more intuitive for me. Thanks. May 25 '12 at 3:43
• @knives: If the momentum transfer is reversible, the process looks the same back in time, so the magnitude of the post-bounce velocity must be the same (so the motion looks the same backwards in time). The details are exactly the same as usual, the ball transfers all the down momentum at the point of maximum compression, and then absorbs up momentum as long as it is compressed. The analogy with charge flows is for LC circuits which are reversible, not with lossy R circuits. If you have a capacitor with an inductor connecting the plates, the charge can oscillate forever back and forth. May 25 '12 at 3:49
The problem is a common conflation of 1st and 3rd law forces. Yes, the normal force (from the floor up) and the contact force (from the ball down) are equal and opposite since they are 3rd law forces. However, to understand why the ball's motion changes we must focus on external first law forces. If the the normal force up exceeds the downward forces of Earth's gravity and drag from the air, the ball will accelerate upward.
|
## Electronic Communications in Probability
### Random walk in a stratified independent random environment
Brémont Julien
#### Abstract
We study Markov chains on a lattice in a codimension-one stratified independent random environment, exploiting results previously established in [2]. The random walk is first shown to be transient in dimension at least three. Focusing on dimension two, we provide sharp sufficient conditions for either recurrence or transience. We determine the critical scale of the local drift along the strata, corresponding to the frontier between the two regimes.
#### Article information
Source
Electron. Commun. Probab., Volume 24 (2019), paper no. 47, 15 pp.
Dates
Accepted: 21 June 2019
First available in Project Euclid: 5 September 2019
https://projecteuclid.org/euclid.ecp/1567649074
Digital Object Identifier
doi:10.1214/19-ECP252
#### Citation
Julien, Brémont. Random walk in a stratified independent random environment. Electron. Commun. Probab. 24 (2019), paper no. 47, 15 pp. doi:10.1214/19-ECP252. https://projecteuclid.org/euclid.ecp/1567649074
#### References
• [1] J. Brémont, On planar random walks in environments invariant by horizontal translations, Markov Processes and Related Fields 2016, issue 2, vol. 22, 267-310.
• [2] J. Brémont, Markov chains in a stratified environment, ALEA, Lat. Am. J. Probab. Math. Stat. 14, 751-798 (2017).
• [3] M. Campanino, D. Petritis, Random walks on randomly oriented lattices, Markov Processes and Related Fields 9 (2003), no. 3, 391-412.
• [4] M. Campanino, D. Petritis, Type transitions of simple random walks on randomly directed regular lattices, Journal of Applied Probability 51, 1065-1080 (2014).
• [5] F. Castell, N. Guillotin, F. Pene, B. Schapira, A local limit theorem for random walks in random scenery and on randomly oriented lattices, Annals of Proba. 39 (2011), no. 6, 2079-2118.
• [6] K. Chung, A course in probability theory. Second edition. Academic Press.1974.
• [7] A. Devulder, F. Pene, Random walk in random environment in a two-dimensional stratified medium with orientations, Electron. Journal of Probab. 18 (2013), no. 88, 1-23.
• [8] B. V. Gnedenko, A. N. Kolmogorov, Limit distributions for sums of independent random variables, Addison-Wesley Publishing Company, revised edition, 1968.
• [9] N. Guillotin-Plantard, A. Le Ny, Transient random walks on $2D$-oriented lattices, Theory Probab. Appl. 52 (2008), no. 4, 699-711.
• [10] N. Guillotin-Plantard, A. Le Ny, A functional limit theorem for a 2d- random walk with dependent marginals. Electronic Communications in Probability (2008), Vol. 13, 337-351.
• [11] S. Kalikow, Generalized random walk in a random environment. Ann. Probab. 9, 753-768.
• [12] M. Kochler, Random walks in random environment: random orientations and branching, Ph-D thesis, Technische Universität München, 2012.
• [13] M. Matheron, G. de Marsily, Is transport in porous media always diffusive ? A counterexample. Water resources Res. 16:901-907, 1980.
• [14] D. Ornstein, Random walks. I, II. Trans. Amer. Math. Soc. 138 (1969), 1-43; ibid. 138 (1969), 45-60.
• [15] F. Pene, Transient random walk in $\mathbb{Z} ^{2}$ with stationary orientations, ESAIM Probability and Statistics 13 (2009), 417-436.
• [16] C. Sabot, Random Walks in Random Dirichlet Environment are transient in dimension $d\ge 3$, Probab. Theory Related Fields 151 (2011), no. 1-2, 297-317.
• [17] Y. Sinaï, The limit behavior of a one-dimensional random walk in a random environment, Teor. Veroyatnost. i Primenen. 27 (1982), no. 2, 247-258.
• [18] F. Solomon, Random walks in a random environment. Ann. Probability 3 (1975), 1-31.
• [19] F. Spitzer, Principles of random walks. Second edition. Graduate Texts in Mathematics, vol. 34. Springer-Verlag, New-York Heidelberg, 1976.
|
# Determine all groups $G$ for which there is a certain endomorphism
Determine all groups $$G$$ for which the map $$\phi:G\rightarrow G$$ defined by $$\phi(g) = g^2$$ is a homomorphism.
I have problems understanding the task, what is it: determine all such groups? Find a property that they all have? I checked the axioms of homomorphism, like nothing supernatural of them goes for the structure of the image.
$$\phi(g_1 g_2) = (g_1 g_2)^2 = g_1^2 g_2^2 = \phi(g_1)\phi(g_2)$$
$$\phi(1_G) = 1_G^2 = 1_G$$
$$\phi(g_1^{-1})=(g_1^{-1})^2 = g_1^{-2}$$
$$\phi(g_1 g_2)^{-1} = (g_1 g_2)^{-2} = g_2^{-2} g_1^{-2} = \phi(g_2^{-1})\phi(g_1^{-1})$$
$$\phi(g^0) = (g^0)^2 = g^0 = 1_G$$
$$\phi(g g^{-1}) = (g g^{-1})^2 = g^2 g^{-2} = 1_G$$
I have no idea about what I should check...
Hint: you've written $$(g_1g_2)^2 = g_1^2g_2^2$$. But $$(g_1g_2)^2 = g_1g_2g_1g_2$$ doesn't equal $$g_1^2g_2^2$$ in general. Think about what is required for this to be true...
|
# Arbitrage free in a Black-Scholes/Poisson model
I am trying to solve the following exercise from Bjork's Arbitrage Theory in Continuous Time:
Consider a model for the stock market where the short rate of interest $$r$$ is a deterministic constant. We focus on a particular stock with price process $$S$$. Under the objective probability measure $$P$$ we have the following dynamics for the price process. $$dS(t) = \alpha S(t)dt + \sigma S(t)dW(t) + \delta S(t^-)dN(t)$$ Here $$W$$ is a standard Wiener process whereas $$N$$ is a Poisson process with intensity $$\lambda$$. We assume that $$\alpha, \sigma,\delta$$ and $$\lambda$$ are known to us. The $$dN$$ term is to be interpreted in the following way:
• Between the jump times of the Poisson process $$N$$, the $$S$$-process behaves just like ordinary geometric Brownian motion.
• If $$N$$ has a jump at time $$t$$ this induces $$S$$ to have a jump at time $$t$$. The size of the $$S$$-jump is given by $$S(t) - S(t^-) = \delta\cdot S(t^-)$$
Discuss the following questions.
1. Is the model free of arbitrage?
2. Is the model complete?
3. Is there a unique arbitrage free price for, say, a European call option?
4. Suppose that you want to replicate a European call option maturing in January 1999. Is it posssible (theoretically) to replicate this asset by a portfolio consisting of bonds, the underlying stock and European call option maturing in December 2001?
### Q2
The model is complete if we can find a replicating portfolio for it. The replicating portfolio can be built using a bond, with deterministic price process: $$dB = rBdt$$ and a certain number of stock shares $$S_i$$, with stochastic price process: $$dS_i = \alpha_i S_i dt + S_i\sum_j\sigma_{ij} dW_j + \delta_i S_i dN_i,$$ where $$W_j$$ are independent standard Wiener processes, and $$N_i$$ are independent standard Poisson processes.
To build a replicating portfolio, we need some theoretical tools, like proving some form of the Ito's lemma for jump processes, but in principle, supposing that such a thing exists, we should probably manage to build a replicating portfolio.
The jump processes probably require some care when imposing the self-financing constraint in the portfolio, but basically they act as random, instantaneous injections or withdrawals of the money that can be used to rebalance the portfolio.
### Q1, Q3 and Q4
I am not sure about my answer to Q2, and have no idea on how to approach Q1, Q3, and Q4. Any help would be appreciated.
|
# Can Block Reward Halving Still Increase Value?
There has been a lot of chatter lately about an impending event that represents a drastic change in the world of Bitcoin: In about 3 days the block reward is going to be cut in half from 50 BTC to 25. While it’s immediately obvious that this has potentially severe implications for miners everywhere, there is another aspect of the block reward change that opinions seem torn on, and I for one would like to take the temperature of the community.
It is known that the block reward is changing. It is known that it will be changing very soon. It is known that decreasing the inflation of a currency increases its scarcity and therefore its value. What’s not known is exactly how a market will react when the change is drastic and they can see it coming a mile away.
Bitcoin currently experiences an annual rate of inflation of about 33% – with the halving of the block reward that rate drops drastically to about 12.5%. Subsequent halvings will drop the inflation rate slowly to zero at which point Bitcoin becomes truly deflationary. We have sound economic theory for what should happen at every step along the way, but very few examples from real life where a massive reduction in the availability of an asset can be seen with exact precision years in advance. We don’t know the time scale of the market’s reaction.
The Bitcoin community seems to have divided opinions about what we’re likely to see in a few days, and while there are many theories most fall into one of two camps:
1. The majority of the community is ignorant of the inner workings of Bitcoin, especially the inner workings of Bitcoin mining. Most of them won’t see the scarcity coming so we can expect a large increase in BTC value following the block reward halving – it’s simple supply and demand.
2. Bitcoin is not only transparent about these things but has a community that knows and cares about them, too. Most Bitcoin users have seen this coming for a long time and the block reward halving is responsible for the price rise from $5 to$12 we’ve seen in recent days – it’s still simple supply and demand but the timeline is different.
It’s hard to know which opinion is right. Bitcoin is extremely transparent and it’s easy to see most of the market forces at play, but this specific issue comes down to one difficult-to-answer question: what percent of the community is aware of the impending change? It’s a hard statistic to find with any accuracy, but just for the hell of it I’m going to ask my audience anyway:
Will Bitcoin's value increase with the reward drop?
View Results
## Tip With Bitcoin
1BYRcbEtA8F8a1YGWfnpoahzCXkJaicmZV
Each post gets its own unique Bitcoin address so by tipping you're not only making my continued efforts possible but telling me what you liked. Vote with your (Bitcoin) wallet!
I think that the market knows this is coming but that just means that when the new supply gets cut in half we will know why the miners have suddenly doubled the price they are willing to sell their fresh new coins for. They won't be able to get double the price until the supply actually gets cut in half so we have to wait for the event to actually happen before the market effects can be seen.
2. _bc says:
Not everyone needs to see this coming. Most miners see it, and have acted accordingly. They've influenced the market with their actions/inactions, and hardware.
Having said that, there was volatility before the halving, and there will be volatility after the halving.
Great work on the blog. Keep them coming, please.
3. Nicolai Hel says:
The supply of BTC will not be reduced. The rate of increase of supply will halve. This is disinflationary, not deflationary. There will continue to be more and more BTC available so to say that there is a "scarcity coming" due to the halving is incorrect and beside the point.
It is likely that the rate of adoption of BTC among new users, businesses and niches will continue to grow and probably exponentially for quite a while, so the difference between this growth rate and the reduced linear growth rate of the Bitcoin monetary base is what matters here.
Many have anticipated this decrease in the growth rate, but most have not, since there are many that just use BTC on a regular basis (eg. on Silk Road or day trading or gambling or …) and there are so many that have recently just discovered it for various reasons. The few people that would buy in advance of the anticipated increase in price are either speculators/inverstors or people who (think they) know what's going on and have a use for BTC in the fairly near future. For example, someone who wants to make some investments in miners. Most people just buy BTC when they need it and usually end up scrambling at the last minute to get it done.
The value of BTC compared to the USD will likely rise steadily since the adoption rate will continue to grow faster and faster for the foreseeable future, people will keep buying BTC mostly on an as needed basis and the growth rate of the monetary base will be much smaller than it has been.
I would argue that the having is almost a non factor in the medium to long term. I would argue that the Bitcoin adoption rate will dwarf the growth rate of the monetary base and we can almost treat the monetary base as standing still when things really start rolling in terms of adoption.
These are very early days for Bitcoin, but it is clearly buttressed by a cadre of freedom loving crypto-anarcho-geeks who will do whatever it takes to keep this ball rolling down the (truly) free market capitalism lane, and a cohort of experimenters and experiencers who don't like governments telling them that they can't enjoy whatever substances, pornography and gambling activities they deem fit to their mental health. With these two intersecting sets of users and maintainers serving as the foundation, Bitcoin will be supported while the rest of the world comes to see the beauty. This is how the Internet was born and flourished.
Love the blog, by the way. You are doing the Bitcoin community a huge service. Thank you.
• David Perry says:
Thanks for the kind words – and the several paragraphs of educational words – it's good to know people are actually interested in what I'm writing. The stats can only prove that people are reading, it's these kinds of comments that show I've actually made people think about something and that's what I'm all about.
Thanks for not just reading, but participating!
4. inflateaway! says:
Everyone knows no more than 21 million bitcoins can potentially circulate. Today's (and tomorrow's) inflation does not matter that much, since the 21 million BTC already exists in everyone's mind. Most bitcoins are hoarded because everyone understands they will never be more than that, and each increase in bitcoin's acceptance will increase it's value.
Why would the supply get cut in half? Most people buy bitcoins from exchanges, then turn around and spend them at a merchant almost immediately. The merchant then turns around and cashes the bitcoins back out at the exchange for their local currency.
I also wonder what will happen when there are no more bitcoins to be mined. The theory is, there will be enough computers building blocks in the chain by then, but I find it hard to believe miners will continue paying for electricity to build blocks, when there's no longer a monetary reward for doing so.
• Hiawata says:
The miners will still collect the increasing transaction fees.
6. Ramon says:
Start filling a bathtub with water at a rate of 1 liter per minute. When it's halfway filled, turn the flow rate down to 1/2 liter per minute. The impact is marginal, and will in fact quickly reduce the level of surface disturbance.
Bitcoin will continue to rise in value, albeit not without some initial volatility following the Great Halving. Once that settles, the primary factor returns to external demand.
7. Awesome site/blog, keep on posting interesting articles!
|
# multiset variant of subset sum problem known algorithms
I have been working in the time analysis for an exact solver I designed for the subset sum problem accepting multisets as input instances, and determined its time complexity to be dependent on the multiplicity ($$m$$) of the elements in the input.
For input instances containing elements having $$m \leq 2$$, the time complexity is $$O(2^{n/2} \cdot 0.75^{\frac{d/2}{n}})$$ where $$d=$$ # of elements with $$m=2$$.
For example, when $$d=n/2$$, then:
• $$O(2^{n/2} \cdot 0.75^{\frac{d/2}{n}}) \approx O((1.4142 \cdot 0.93)^n) \approx O(1.316^n)$$
For input instances containing elements having $$m = 1$$ and $$m=4$$, the time complexity is $$O(2^{n/2} \cdot 0.312^{\frac{d/2}{n}})$$ where $$d=$$ # of elements with $$m=4$$.
For example, when $$d=n/4$$, then:
• $$O(2^{n/2} \cdot 0.312^{\frac{d/2}{n}}) \approx O((1.4142 \cdot 0.86)^n) \approx O(1.223^n)$$
And following the above configuration, for $$m=8$$ and $$d=n/8$$ is
• $$O(2^{n/2} \cdot 0.035^{\frac{d/2}{n}}) \approx O((1.4142 \cdot 0.81)^n) \approx O(1.147^n)$$
I have yet to complete the time analysis for mixed cases (different multiplicities in the input instance) but my educated guess is it will be a composite of the above TCs.
Besides to ask for comments, I am looking as well for other known algorithms with a similar behavior to compare approaches (looked for them but found nothing so far...)
|
Yesterday we covered the impact of QE & low interest rates on asset prices, bundling that under the financial economy 👇
Today we're onto the impacts on bank lending and the real economy, and how this feeds back into the whole system.
Why separate the two? Because...
"The stock market is NOT the economy"
I like to think of the financial and real economies as two co-dependent systems that operate independently of each other a lot of the time.
But when it comes to interest rate changes, they become very linked.
The Bank of England explained how the interest rate mechanism feeds through to the wider economy...
We buy government bonds or corporate bonds from other financial companies and pension funds.
When we do this, the price of these bonds tend to increase which means that the bond yield, or ‘interest rate’ that holders of these bonds get, goes down.
The lower interest rate on government and corporate bonds then feeds through to lower interest rates on loans for households and businesses. That helps to boost spending in the economy and keep inflation at target.
From the outside looking in, the process looks like this 👇
It seems so simple, so logical. That alone should arouse suspicions...
This is a far more realistic perspective 👇
Funds are loaned out or excess reserves pile up...
And there are plenty of excess reserves around... 👇
Bound to be some excess when you "flood the system"... 👇
Fair to say you simply flooded the system with money?
POWELL: Yes. We did. That's another way to think about it. We did.
There's a big problem with that imagery. A genuine flood gets into every nook and cranny of the system. That's not how this works.
It's more like a system of river locks. The 'flood' is guided and compartmentalised, and often stays in one place for a long time.
Something like this 👇
This monetary 'flood' reaching the real economy depends on a sequence of unlocks...
Unlocks like lending & government spending.
Some QE transmission worked in a slightly unique way during the pandemic.
Government stimulus bypassed the lending system altogether by 'gifting' stimulus cheques directly to the public. Central banks and governments also incentivised banks to lend by underwriting pandemic loans & credit.
Extraordinary times & extraordinary measures, but that's over now. So we're back to the BOE's explainer:
What if they don't want to?
You can't force people to borrow. If people/businesses have taken on a load of debt already, they might not want to load up even more, especially if they feel that the economic outlook is uncertain.
High debt levels restrict the appetite to borrow. 👇
High debt levels also restrict the appetite to lend
Going back to the pandemic, the Fed launched the Main Street loan program "to support small and medium-sized businesses".
The program made available $600 billion in loan facilities to employers, who must have been in good financial standing prior to the onset of the COVID-19 crisis. To encourage banks to lend, the Fed bought 95% of new or existing loans to qualified employers, while the issuing bank kept 5% to discourage irresponsible lending. Of the available$600 billion, only $17.5 billion was actually lent out... That's just 3% of the available funding. Why? The Fed asked the same question... 👇 Even with the Fed essentially insuring 95% of the loans, banks still weren't keen to lend. And the biggest reason why 👇 "Firm in poor condition pre-virus" Borrowers that don't want to borrow and lenders that don't want to lend... So.... Do bank reserves truly matter for lending decisions? Let's ask former ECB Vice President Vitor Constancio: “The level of bank reserves hardly figures in banks lending decisions; the supply of credit outstanding is determined by banks’ perceptions of risk/reward trade-offs and demand for credit”. So, QE doesn't actually impact the rate of lending, and the whole 'money multiplier' idea is a myth... 👇 OK. Let's try interest rates. Does the shape of the yield curve affect lending? Not really. Andy Constan explains 👇 Today and forever more (regardless of the shape of the yield curve) banks can exploit the spread between lending and deposits that isn't curve riding. Today's banks are sophisticated risk managers and understand that funding 10 year assets with overnight money is just begging for an extinction level event. Instead, banks use interest rate swaps to manage that risk. A flat yield curve may be a sign of economic headwinds/recession ahead (which can negatively impact lending decisions), but the difference between interest rates at different time horizons isn't a decisive factor for banks to lend. The higher interest rates that we're seeing should actually be a good thing for US banks. 👇 Bank of America says that a 100 basis point parallel increase in interest rates would boost net interest income by$6.5 billion over the next 12 months; that’s a 15% jump on the 2021 level.
Summing up, almost everything you've been told about QE and low interest rates is wrong. It matters, but it's not as dominant as we're often led to believe.
Lending is a function of credit risk. If banks think you can make money and pay them back, they'll lend.
Linking it all back together, things happening in the real economy feed back into the financial economy.
The value of assets is determined by the appetite for savings in an economy. Too much saving and not enough consumption creates an asset shortage 👇
If you think about ‘asset production’ as being correlated to GDP, it's easy to recognize that if income (GDP) is evenly distributed, the aggregate marginal propensity to consume (MPC) is higher and the marginal propensity to save (MPS) is lower.
In simple terms, an economy that is more 'equal' has a better balance between saving and consumption. Instead of that, we have this 👇
If unevenly distributed, wealthier people will consume less of their income and save more. Lower MPC and higher MPS translate into higher demand for financial assets for a given level of aggregate income.
Beyond a certain level of wealth, there's only so much you can buy. Which creates an excess of savings looking for a home.
Excess demand for savings pushes up asset prices excessively because there simply aren't enough assets to accommodate the level of savings.
And if this is you after reading this, I've done something right. 👇
Don't know what financial news stories are important and what is complete bullsh*t? Hop onto our filtered news channel.
It's completely free 👇👇👇
Subscribe to our YouTube Channel and stay up to date with all of our videos as they're posted. We'll keep expanding and adding more formats as we go!
Check out our reviews on TrustPilot 👇👇👇
|
• Selena2345
Broderick had $420 in his checking account. He made 6 deposits of$35.50 each. He needs to write 4 checks for \$310.75 each. Write an integer to represent the total deposits Broderick made and another integer to represent the total amount Broderick needs from his checking account to cover the checks. Show your work. How much money does Boderick need to add to his account to cover his checks ? Show your work.
Mathematics
Looking for something else?
Not the answer you are looking for? Search for more explanations.
|
# Homework Help: Flux through a circle not centered at origin.
1. Dec 13, 2009
### Kizaru
I feel really dumb for this, but I keep getting strange answers so I may be forgetting something.
1. The problem statement, all variables and given/known data
Find the magnetic flux through a circle in the xz plane of radius "a" due to a wire. . The center of the circle is (b,0,0).
2. Relevant equations
Well I found the magnetic field to be proportional to
$$\frac{1}{\rho}$$
That field has only a component in the phi direction.
Where rho and phi are cylindrical coordinates.
3. The attempt at a solution
I can't figure out the limits of integration for anything unless it's rectangular coordinates, but those become very messy and I obtain an integral which is clearly wrong. I KNOW the answer should reduce to something "nice."
I know I'm missing something stupid here.
2. Dec 13, 2009
### fluidistic
Where is the wire? I guess it's the z-axis?
3. Dec 13, 2009
### Kizaru
Yes, it is. It's an infinite wire carrying current I in the z-hat direction. Basically using the result of a previous problem, I know what the B field is. I just chose not to include the rest of the stuff for that field in this question.
4. Dec 13, 2009
### fluidistic
According to wikipedia, $$\Phi _B=\iint _S \vec B \cdot d \vec S$$.
I'm not able to solve the problem so I'd love someone to help us.
If you have the expression of B in all the circle, I think it's easy to solve the integral.
5. Dec 13, 2009
### Kizaru
I obtained B and I know what the definition of the magnetic flux is. I also know the vector element of area for cylindrical coordinates results in an integral over just $$\frac{1}{\rho}$$
IE, I have
$$\Phi _B= k \iint _S \frac{1}{\rho} d\rho dz$$
where k is a constant (having trouble texing the constants).
The issue is I am unsure of what to make the limits of integration.
6. Dec 13, 2009
### fluidistic
Are you sure you have a $$d\rho dz$$ as a differential area? You said you were working cylindrical coordinates, I'm confused.
7. Dec 13, 2009
### Kizaru
8. Dec 13, 2009
### fluidistic
9. Dec 13, 2009
### Kizaru
What? Phi is the angle that goes counterclockwise from the x axis. I would NOT be integrating over dphi because phi is constant = 0. The circle lies in the xz plane.
Draw the circle out on a piece of paper to see what I mean. Clearly rho does not go from 0 to a. Rho goes from (b-a) to (b+a) when z =0, and it goes from b to b when z = a. The limits of integration over the inner integral are clearly not constant.
10. Dec 13, 2009
### phsopher
Well, you could divide the circle into infinitesimal horizontal slices and calculate what is the area dA of a slice as a function of x (through trigonometry). On the circle, B is proportional to 1/x so the flux through a slice would be just B(x)(dA/dx)dx which you could integrate along the x axis. However, this gives a complicated integral which I'm not sure how to solve.
11. Dec 13, 2009
### Kizaru
I tried a similar approach and obtained an integral which Mathematica integrated to have imaginary parts != 0. Im curious if I could shift my origin (make a substitution) and then convert to spherical coordinates (probably the other order). The spherical coordinates integral (after the substitution) for r would be from 0 to a, and for theta it would be 0 to pi. After the origin shift, my B field will still be in phi-hat direction only due to the symmetry.
12. Dec 13, 2009
### fluidistic
My error, I thought it was in the x-y plane, sorry about that.
13. Dec 13, 2009
### phsopher
It does that because it doesn't know the values of a and b, they might be such that the thing inside the square root is negative. If we assume that b>a>0, however, it's clear that the value of the integral is real. You have to use assumptions, though i don't know how to do that in Mathematica. Anyway, I got (assuming B(x) = B0/x)
$$\Phi = \int_{b-a}^{b+a} \frac{2 B_0 \sqrt{a^2 -(b-x)^2}}{x}dx = 2 B_0 a \int_{\frac{b}{a}-1}^{\frac{b}{a}+1} \frac{ \sqrt{1 -(\frac{b}{a}-x)^2}}{x}dx$$
which according to Maple is
$$2 B_0 a \,{\frac {\pi \, \left( 1+\frac{b}{a}\,\sqrt {-1+{\frac{b^2}{a^2}}}-{\frac{b^2}{a^2}} \right) }{ \sqrt {-1+{\frac{b^2}{a^2}}}}}$$
which can be simplified to $$2 \pi B_0 (b - \sqrt{b^2-a^2})$$
Which isn't that bad in the end (assuming I didn't make a mistake). I'm not sure how to calculate the integral by hand though.
Last edited: Dec 13, 2009
14. Dec 13, 2009
### Kizaru
YES! Thank you. That's exactly what the result should simplify to (it's very similar to that of a toroid, after all a toroid is roughly just N of those). Thank you. You were right, I didn't make the assumptions in Mathematica, but I'm still quite a novice and don't know how to do that yet.
Thank you :)
|
By Dave Andrusko As we approached the 49th anniversary of Roe v. Wade and Doe v. Bolton, NRLC’s Dr. Randall K. O’Bannon has compiled a factsheet estimating that 63,459,781 babies have been aborted since 1973. The numbers continue to show a steady decline in abortions since 2009, although the 2019 Center for Disease Control (CDC) … Continue reading A total of 63,459,781 unborn babies have been lost to abortions since Roe v. Wade
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.