#### [SOLVED] An enigmatic pattern in division graphs

Draw the numbers $$1,2,\dots,N$$ on a circle and draw a line from $$n$$ to $$m>n$$ when $$n$$ divides $$m$$. For larger $$N$$ some kind of stable structure emerges which remains perfectly in place for ever larger $$N$$, even though the points on the circle get ever closer, i.e. are moving. This really astonishes me, I wouldn't have guessed it. Can someone explain?

In its full beauty, the case $$N=1000$$ (cheating a bit by also adding lines from $$m$$ to $$n$$ when $$(m-N)\%N$$ divides $$(n-N)\%N$$, thus symmetrizing the picture):

Note that a similar phenomenon – stable asymptotic patterns, especially cardioids, nephroids, and so on – can be observed in modular multiplication graphs $$M:N$$, with a line drawn from $$n$$ to $$m$$ if $$M\cdot n \equiv m \pmod{N}$$.

For the graphs $$M:N$$, $$N > M$$, for small $$M$$:

But not for larger $$M$$:

For $$M:(3M-1)$$:

It would be interesting to understand how these two phenomena relate.

Note that one can create arbitrarily large division graphs with straightedge and compass alone, without ever explicitly checking whether a number $$n$$ divides another number $$m$$:

1. Create a regular $$2^n$$-gon.
2. Mark an initial corner $$C_1$$.
3. For each corner $$C_k$$ do the following:
   1. Set the radius $$r$$ of the compass to $$|C_1C_k|$$.
   2. Draw a circle around $$C_{k_0} = C_k$$ with radius $$r$$.
   3. Two other corners lie on this circle; pick the next one in counter-clockwise direction, $$C_{k_1}$$.
   4. If $$C_1$$ does not lie between $$C_{k_0}$$ and $$C_{k_1}$$ (in counter-clockwise direction) or equals $$C_{k_1}$$:
   5. Draw a line from $$C_k$$ to $$C_{k_1}$$.
   6. Let $$C_{k_0} = C_{k_1}$$ and go back to step 2.
   7. Else: Stop.

There are three equivalent ways to create the division graph for $$N$$ edge by edge:

1. For each $$n = 1,2,\dots,N$$: for each $$m\leq N$$, draw an edge between $$n$$ and $$m$$ when $$n$$ divides $$m$$.
2. For each $$n = 1,2,\dots,N$$: for each $$k = 1,2,\dots,N$$, draw an edge between $$n$$ and $$m = k\cdot n$$ when $$m \leq N$$.
3. For each $$k = 1,2,\dots,N$$: for each $$n = 1,2,\dots,N$$, draw an edge between $$n$$ and $$m = k\cdot n$$ when $$m \leq N$$.

#### @Hans-Peter Stricker 2019-01-14 09:21:10

Putting the pieces together, one may explain the pattern like this:

- The division graph for $$N$$ can be seen as the sum of the multiplication graphs $$G_N^k$$, $$k=2,3,\dots,N$$, with an edge from $$n$$ to $$m$$ when $$k\cdot n = m$$. When $$n > N/k$$ there is no line emanating from $$n$$. (This relates to step 7 in the geometric construction above.)
- The multiplication-modulo-$$N$$ graphs $$H_{N}^k$$ have a weaker condition: there is an edge from $$n$$ to $$m$$ when $$k\cdot n \equiv m \pmod{N}$$.
- So the division graph for $$N$$ is a proper subgraph of the sum of the multiplication graphs $$H_{N+1}^k$$.
- The multiplication graphs $$H_{N}^k$$ exhibit characteristic $$(k-1)$$-lobed patterns.
- These patterns are truncated in the graphs $$G_{N}^k$$ exactly at $$N/k$$.
- Overlaying the truncated patterns gives the pattern in question.

#### @Hans-Peter Stricker 2019-01-12 08:41:24

To add some visual sugar to Alex R's comment (thanks for it):
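For anyone who wants to reproduce these pictures, here is a minimal sketch (my own, using matplotlib; the function names are arbitrary) that builds the edge list with the second of the three equivalent enumerations above and draws both a division graph and a multiplication-modulo-$$N$$ graph $$M:N$$:

```python
# Minimal sketch: draw the division graph for N and a multiplication-modulo-N
# graph on a circle with matplotlib. Function names are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

def circle_points(N):
    """Place N evenly spaced points on the unit circle."""
    angles = 2 * np.pi * np.arange(N) / N
    return np.column_stack((np.cos(angles), np.sin(angles)))

def division_edges(N):
    """Edges (n, m) with n < m <= N and n | m (enumeration 2 from the text)."""
    edges = []
    for n in range(1, N + 1):
        for k in range(2, N // n + 1):   # m = k*n stays <= N, so no line leaves n once n > N/k
            edges.append((n, k * n))
    return edges

def multiplication_mod_edges(M, N):
    """Edges (n, m) with M*n == m (mod N), i.e. the graphs called M:N in the text."""
    return [(n, (M * n) % N) for n in range(N)]

def draw(edges, N, title):
    pts = circle_points(N)
    fig, ax = plt.subplots(figsize=(6, 6))
    for n, m in edges:
        p, q = pts[n % N], pts[m % N]
        ax.plot([p[0], q[0]], [p[1], q[1]], lw=0.2, color="black")
    ax.set_aspect("equal")
    ax.axis("off")
    ax.set_title(title)
    return fig

if __name__ == "__main__":
    N = 1000
    draw(division_edges(N), N, f"division graph, N={N}")
    draw(multiplication_mod_edges(2, N), N, f"2:{N}")   # the classic cardioid-like chord diagram
    plt.show()
```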
# How to limit the picture size taken from the IMAGE_CAPTURE intent

I use

    Intent intent = new Intent("android.media.action.IMAGE_CAPTURE");
    intent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(new File(getSomePath() + "temp.jpg")));
    //intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1);
    startActivityForResult(intent, 0);

to take a picture, and the saved picture comes back at full size. But I want a smaller picture, like 800*640, because I get different sizes on different devices and I don't need such a big picture.

I noticed that there is an EXTRA_SIZE_LIMIT in MediaStore, but I can't find out how to use it: what parameter should be set?

-

Finally I found that this is because different manufacturers customize their ROMs, including the Camera app, so the best way is not to call the default camera app; instead we can write an activity that uses android.hardware.Camera to take the picture. There are a lot of examples of this around the Internet, too.
Matthew wrote:

> $x_{n} = x_{n-1} + x_{n-2} + x_{n-3}$

Cool, the Tribonacci numbers! I discussed them in Week 1 of this course, and gave some homework about them - you can see solutions:

* [Quantization and Categorification](http://math.ucr.edu/home/baez/qg-winter2004/), Winter 2004.

This could lead you down an even deeper rabbit-hole than the one you're exploring now!

> How would you make a matrix to represent this recurrence?

This is going to take some work, so I will only give you the answer because you flattered me so much. (They say "flattery gets you nowhere," but they're lying.)

There's a famous way to convert higher-order linear ordinary differential equations into systems of first-order ones. For example, if we take

$$\frac{d^2}{d t^2} x(t) = 3 \frac{d}{dt} x(t) + 7 x(t)$$

and write $\dot{x}$ for $\frac{d}{dt} x(t)$, it becomes

$$\frac{d}{dt} \left(\begin{array}{c} \dot{x} \\ x \end{array}\right) = \left(\begin{array}{cc} 3 & 7 \\ 1 & 0 \end{array} \right) \left(\begin{array}{c} \dot{x} \\ x \end{array}\right) .$$

We can then solve the equation using matrix tricks. More to the point, a similar trick works for linear recurrences. So we can take

$$x_{n+2} = 3 x_{n+1} + 7 x_n$$

and write it as

$$\left(\begin{array}{c} x_{n+2} \\ x_{n+1}\end{array}\right) = \left(\begin{array}{cc} 3 & 7 \\ 1 & 0 \end{array} \right) \left(\begin{array}{c} x_{n+1} \\ x_n \end{array}\right) .$$

We can make it even more slick using an "evolution operator" $U$ that increments the "time" $n$ by one step. By definition

$$U \left(\begin{array}{c} x_{n+1} \\ x_{n}\end{array}\right) = \left(\begin{array}{c} x_{n+2} \\ x_{n+1}\end{array}\right) .$$

Then our earlier equation becomes

$$U \left(\begin{array}{c} x_{n+1} \\ x_{n}\end{array}\right) = \left(\begin{array}{cc} 3 & 7 \\ 1 & 0 \end{array} \right) \left(\begin{array}{c} x_{n+1} \\ x_n \end{array}\right) ,$$

but this just means

$$U = \left(\begin{array}{cc} 3 & 7 \\ 1 & 0 \end{array} \right) ,$$

so we know stuff like

$$\left(\begin{array}{c} x_{n+23} \\ x_{n+22}\end{array}\right) = U^{22} \left(\begin{array}{c} x_{n+1} \\ x_n \end{array}\right) .$$

I think the answer to your question is lurking in what I said. Note that I considered a second-order differential equation and a second-order recurrence, but the exact same idea works for the third-order one you are interested in. You just need a $3 \times 3$ matrix instead of a $2 \times 2$ one.
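A quick sketch of the third-order case in code, using the $3 \times 3$ evolution operator; the initial values $x_0 = x_1 = 0$, $x_2 = 1$ are my own assumption, since the thread does not fix them:

```python
# Evolution operator for the Tribonacci recurrence x_n = x_{n-1} + x_{n-2} + x_{n-3}.
# Plain Python integers, so there is no overflow for large n.

def mat_mul(A, B):
    """3x3 integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_pow(A, p):
    """Exponentiation by squaring."""
    R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
    while p:
        if p & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        p >>= 1
    return R

U = [[1, 1, 1],
     [1, 0, 0],
     [0, 1, 0]]   # U @ (x_{k+2}, x_{k+1}, x_k) = (x_{k+3}, x_{k+2}, x_{k+1})

def tribonacci(n):
    """x_n with the (assumed) initial values x_0 = 0, x_1 = 0, x_2 = 1."""
    if n <= 2:
        return (0, 0, 1)[n]
    top_row = mat_pow(U, n - 2)[0]       # first row of U^(n-2)
    # U^(n-2) applied to the state (x_2, x_1, x_0) = (1, 0, 0) yields (x_n, x_{n-1}, x_{n-2})
    return top_row[0]

print([tribonacci(n) for n in range(10)])  # [0, 0, 1, 1, 2, 4, 7, 13, 24, 44]
```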
# Non-Equilibrium Systems

### Combustion

When hydrocarbons undergo complete combustion in an excess of oxygen, carbon dioxide and water are formed. These reactions are strongly exothermic:

$$\text{CH}_4 (g) + 2\,\text{O}_2 (g) \rightarrow \text{CO}_2 (g) + 2\,\text{H}_2\text{O} (l)$$

The release of heat energy in this exothermic reaction drives the reaction in the forward direction. The reaction is spontaneous because the Gibbs free energy change is negative (ΔG < 0). The high activation energy of the reverse reaction makes it unlikely that the products will recombine to form reactants. Also, because combustion reactions occur in open systems, the products escape into the surroundings and therefore cannot recombine to produce reactant molecules.

### Photosynthesis

Photosynthesis occurs in the chloroplasts of green plant cells. Solar energy is absorbed by chlorophyll molecules inside the chloroplasts; chlorophyll selectively absorbs red and violet light in the visible spectrum. This energy is used to convert carbon dioxide and water into glucose and oxygen, as seen below:

$$6\,\text{CO}_2 (g) + 6\,\text{H}_2\text{O} (l) \rightarrow \text{C}_6\text{H}_{12}\text{O}_6 (s) + 6\,\text{O}_2 (g)$$

The Gibbs free energy change for photosynthesis is positive (ΔG > 0), so the reaction is non-spontaneous and requires solar energy to occur. The reaction products are lost from the system, which prevents the reverse reaction from occurring.
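The spontaneity claim for the combustion reaction above can be checked numerically. A minimal sketch, using approximate standard-state values at 298 K (these numbers are common textbook estimates, not part of the lesson):

```python
# Rough spontaneity check for CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l) at 298 K,
# using approximate standard values: ΔH ≈ -890 kJ/mol, ΔS ≈ -243 J/(mol·K).
delta_H = -890e3      # J/mol
delta_S = -243.0      # J/(mol*K)
T = 298.0             # K

delta_G = delta_H - T * delta_S
print(f"ΔG ≈ {delta_G / 1e3:.0f} kJ/mol")   # ≈ -818 kJ/mol: negative, so spontaneous
```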
# Countable sets, rational numbers, and which of the following are true?

Let $A$ be a subset of the real numbers containing all the rational numbers. Which of the following statements is true?

a. $A$ is countable.
b. If $A$ is uncountable, then $A=\mathbb R$.
c. If $A$ is open, then $A=\mathbb R$.
d. None of the above statements is true.

(a) can't be true, since $A$ containing all the rational numbers in no way implies that $A$ contains only the rational numbers. The confusion lies between (b) and (c).

-

Hint: What if $A = \mathbb{R}-\{\sqrt{2}\}$?

-

a. is false, since a subset of the real numbers containing the rationals can be all of the reals, which is uncountable.

b. Wrong. Take $\mathbb{R} - \{\sqrt{2}\}.$

c. Enumerate the rationals and enclose the $n$th one in an open interval of length $\epsilon/2^n$. This kills c.

Hence, d.

-

notice your example in b also eliminates c – clark Feb 4 '14 at 2:15

but the example kills it more spectacularly, with an open set of measure $\le\epsilon$. Bang. Dead. – ncmathsadist Feb 4 '14 at 2:24

to be honest, when reading the problem your solution is what came to my mind, since it is a clean measure-theoretic idea. But when I saw Habert's hint, I said wait a minute, this answers only (b)? And then realised, of course not, it answers (c) as well. So I commented just in case this happened to someone else. – clark Feb 4 '14 at 2:38
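To spell out the measure computation behind the answer to (c): if $q_1, q_2, \dots$ is an enumeration of the rationals and $I_n$ is an open interval of length $\epsilon/2^n$ centred at $q_n$, then $U=\bigcup_{n\ge 1} I_n$ is an open set containing every rational, yet its total length is at most

$$\sum_{n=1}^{\infty}\frac{\epsilon}{2^n}=\epsilon ,$$

so it cannot be all of $\mathbb{R}$, which has infinite measure.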
# tensiometer.chains_convergence¶ This file contains some functions to study convergence of the chains and to compare the two posteriors. tensiometer.chains_convergence.GR_test(chains, param_names=None)[source] Function performing the Gelman Rubin (GR) test (described in Gelman and Rubin 92 and Brooks and Gelman 98) on a list of MCSamples or on a single MCSamples with different sub-chains. This test compares the variation of the mean across a pool of chains with the expected variation of the mean under the pdf that is being sampled. If we define the covariance of the mean as: $C_{ij} \equiv {\rm Cov}_c({\rm Mean}_s(\theta))_{ij}$ and the mean covariance as: $M_{ij} \equiv {\rm Mean}_c[{\rm Cov}_s(\theta)_{ij}]$ then we seek to maximize: $R-1 = {\rm max_{\theta}}\frac{C_{ij} \theta^i \theta^j} {M_{ij}\theta^i \theta^j}$ where the subscript $$c$$ means that the statistics is computed across chains while the subscrit $$s$$ indicates that it is computed across samples. In this case the maximization is solved by finding the maximum eigenvalue of $$C M^{-1}$$. Parameters: chains – single or list of MCSamples param_names – names of the parameters involved in the test. By default uses all non-derived parameters. value of the GR test and corresponding parameter combination tensiometer.chains_convergence.GR_test_from_samples(samples, weights)[source] Lower level function to perform the Gelman Rubin (GR) test. This works on a list of samples from different chains and corresponding weights. Refer to tensiometer.chains_convergence.GR_test() for more details of what this function is doing. Parameters: samples – list of samples from different chains weights – weights of the samples for each chain value of the GR test and corresponding parameter combination tensiometer.chains_convergence.GRn_test(chains, n, theta0=None, param_names=None, feedback=0, optimizer='ParticleSwarm', **kwargs)[source] Multi dimensional higher order moments convergence test. Compares the variation of a given moment among the population of chains with the expected variation of that quantity from the samples pdf. We first build the $$k$$ order tensor of parameter differences around a point $$\tilde{\theta}$$: $Q^{(k)} \equiv Q_{i_1, \dots, i_k} \equiv (\theta_{i_1} -\tilde{\theta}_{i_1}) \cdots (\theta_{i_k} -\tilde{\theta}_{i_k})$ then we build the tensor encoding its covariance across chains $V_M = {\rm Var}_c (E_s [Q^{(k)}])$ which is a $$2k$$ and then build the second tensor encoding the mean in chain moment: $M_V = {\rm Mean}_c (E_s[(Q^{(k)}-E_s[Q^{(k)}]) \otimes(Q^{(k)}-E_s[Q^{(k)}])])$ where we have suppressed all indexes to not crowd the notation. Then we maximize over parameters: $R_n -1 \equiv {\rm max}_\theta \frac{V_M \theta^{2k}}{M_V \theta^{2k}}$ where $$\theta^{2k}$$ is the tensor product of $$\theta$$ for $$2k$$ times. Differently from the 2D case this problem has no solution in terms of eigenvalues of tensors so far and the solution is obtained by numerical minimization with the pymanopt library. Parameters: chains – single or list of MCSamples n – order of the moment theta0 – center of the moments. By default equal to the mean param_names – names of the parameters involved in the test. By default uses all non-derived parameters. feedback – level of feedback. 0=no feedback, >0 increasingly chatty optimizer – choice of optimization algorithm for pymanopt. Default is ParticleSwarm, other possibility is TrustRegions. kwargs – keyword arguments for the optimizer. 
value of the GR moment test and corresponding parameter combination tensiometer.chains_convergence.GRn_test_1D(chains, n, param_name, theta0=None)[source] One dimensional higher moments test. Compares the variation of a given moment among the population of chains with the expected variation of that quantity from the samples pdf. This test is defined by: $R_n(\theta_0)-1 = \frac{{\rm Var}_c ({\rm Mean}_s(\theta-\theta_0)^n)}{{\rm Mean}_c ({\rm Var}_s(\theta-\theta_0)^n) }$ where the subscript $$c$$ means that the statistics is computed across chains while the subscrit $$s$$ indicates that it is computed across samples. Parameters: chains – single or list of MCSamples n – order of the moment param_name – names of the parameter involved in the test. theta0 – center of the moments. By default equal to the mean. value of the GR moment test and corresponding parameter combination (an array with one since this works in 1D) tensiometer.chains_convergence.GRn_test_1D_samples(samples, weights, n, theta0=None)[source] Lower level function to compute the one dimensional higher moments test. This works on a list of samples from different chains and corresponding weights. Refer to tensiometer.chains_convergence.GRn_test_1D() for more details of what this function is doing. Parameters: samples – list of samples from different chains weights – weights of the samples for each chain n – order of the moment theta0 – center of the moments. By default equal to the mean. value of the GR moment test and corresponding parameter combination (an array with one since this works in 1D) tensiometer.chains_convergence.GRn_test_from_samples(samples, weights, n, theta0=None, feedback=0, optimizer='ParticleSwarm', **kwargs)[source] Lower level function to compute the multi dimensional higher moments test. This works on a list of samples from different chains and corresponding weights. Refer to tensiometer.chains_convergence.GRn_test() for more details of what this function is doing. Parameters: samples – list of samples from different chains weights – weights of the samples for each chain n – order of the moment theta0 – center of the moments. By default equal to the mean feedback – level of feedback. 0=no feedback, >0 increasingly chatty optimizer – choice of optimization algorithm for pymanopt. Default is ParticleSwarm, other possibility is TrustRegions. kwargs – keyword arguments for the optimizer. value of the GR moment test and corresponding parameter combination
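A hedged usage sketch for two of the entry points documented above. It assumes the chains are getdist MCSamples objects loaded from files with roots ./chains/chain_1 and ./chains/chain_2, and 'omegam' is a placeholder parameter name; none of these specifics come from the documentation itself:

```python
# Usage sketch for tensiometer.chains_convergence (file paths and the
# parameter name 'omegam' are placeholders, not part of the docs).
from getdist import loadMCSamples
from tensiometer import chains_convergence

chains = [loadMCSamples('./chains/chain_1'),
          loadMCSamples('./chains/chain_2')]

# Standard Gelman-Rubin test across the pool of chains.
# Returns the value of the test and the corresponding parameter combination.
R_minus_1, direction = chains_convergence.GR_test(chains)
print('R - 1 =', R_minus_1)

# One dimensional second-moment (n=2) version of the test for a single
# parameter, centered on the mean by default.
Rn_minus_1, direction_1d = chains_convergence.GRn_test_1D(chains, n=2, param_name='omegam')
print('R_2 - 1 =', Rn_minus_1)
```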
# MLIR Multi-Level IR Compiler Framework # Builtin Dialect The builtin dialect contains a core set of Attributes, Operations, and Types that have wide applicability across a very large number of domains and abstractions. Many of the components of this dialect are also instrumental in the implementation of the core IR. As such, this dialect is implicitly loaded in every MLIRContext, and available directly to all users of MLIR. Given the far-reaching nature of this dialect and the fact that MLIR is extensible by design, any potential additions are heavily scrutinized. ## Attributes ¶ ### AffineMapAttr ¶ An Attribute containing an AffineMap object Syntax: affine-map-attribute ::= affine_map < affine-map > Examples: affine_map<(d0) -> (d0)> affine_map<(d0, d1, d2) -> (d0, d1)> #### Parameters: ¶ ParameterC++ typeDescription valueAffineMap ### ArrayAttr ¶ A collection of other Attribute values Syntax: array-attribute ::= [ (attribute-value (, attribute-value)*)? ] An array attribute is an attribute that represents a collection of attribute values. Examples: [] [10, i32] [affine_map<(d0, d1, d2) -> (d0, d1)>, i32, "string attribute"] #### Parameters: ¶ ParameterC++ typeDescription value::llvm::ArrayRef<Attribute> ### DenseArrayBaseAttr ¶ A dense array of i8, i16, i32, i64, f32, or f64. A dense array attribute is an attribute that represents a dense array of primitive element types. Contrary to DenseIntOrFPElementsAttr this is a flat unidimensional array which does not have a storage optimization for splat. This allows to expose the raw array through a C++ API as ArrayRef<T>. This is the base class attribute, the actual access is intended to be managed through the subclasses DenseI8ArrayAttr, DenseI16ArrayAttr, DenseI32ArrayAttr, DenseI64ArrayAttr, DenseF32ArrayAttr, and DenseF64ArrayAttr. Syntax: dense-array-attribute ::= [ : (integer-type | float-type) tensor-literal ] Examples: [:i8] [:i32 10, 42] [:f64 42., 12.] when a specific subclass is used as argument of an operation, the declarative assembly will omit the type and print directly: [1, 2, 3] #### Parameters: ¶ ParameterC++ typeDescription typeShapedType elementTypeDenseArrayBaseAttr::EltType rawData::llvm::ArrayRef<char>64-bit aligned storage for dense array elements ### DenseIntOrFPElementsAttr ¶ An Attribute containing a dense multi-dimensional array of integer or floating-point values Syntax: tensor-literal ::= integer-literal | float-literal | bool-literal | [] | [tensor-literal (, tensor-literal)* ] dense-intorfloat-elements-attribute ::= dense < tensor-literal > : ( tensor-type | vector-type ) A dense int-or-float elements attribute is an elements attribute containing a densely packed vector or tensor of integer or floating-point values. The element type of this attribute is required to be either an IntegerType or a FloatType. Examples: // A splat tensor of integer values. dense<10> : tensor<2xi32> // A tensor of 2 float32 elements. dense<[10.0, 11.0]> : tensor<2xf32> #### Parameters: ¶ ParameterC++ typeDescription typeShapedType rawDataArrayRef<char> ### DenseResourceElementsAttr ¶ An Attribute containing a dense multi-dimensional array backed by a resource Syntax: dense-resource-elements-attribute ::= dense_resource < resource-handle > : shaped-type A dense resource elements attribute is an elements attribute backed by a handle to a builtin dialect resource containing a densely packed array of values. 
This class provides the low-level attribute, which should only be interacted with in very generic terms, actual access to the underlying resource data is intended to be managed through one of the subclasses, such as; DenseBoolResourceElementsAttr, DenseUI64ResourceElementsAttr, DenseI32ResourceElementsAttr, DenseF32ResourceElementsAttr, DenseF64ResourceElementsAttr, etc. Examples: // A tensor referencing a builtin dialect resource, resource_1, with two // unsigned i32 elements. dense_resource<resource_1> : tensor<2xui32> #### Parameters: ¶ ParameterC++ typeDescription typeShapedType rawHandleDenseResourceElementsHandle ### DenseStringElementsAttr ¶ An Attribute containing a dense multi-dimensional array of strings Syntax: dense-string-elements-attribute ::= dense < attribute-value > : ( tensor-type | vector-type ) A dense string elements attribute is an elements attribute containing a densely packed vector or tensor of string values. There are no restrictions placed on the element type of this attribute, enabling the use of dialect specific string types. Examples: // A splat tensor of strings. dense<"example"> : tensor<2x!foo.string> // A tensor of 2 string elements. dense<["example1", "example2"]> : tensor<2x!foo.string> #### Parameters: ¶ ParameterC++ typeDescription typeShapedType valueArrayRef<StringRef> ### DictionaryAttr ¶ An dictionary of named Attribute values Syntax: dictionary-attribute ::= { (attribute-entry (, attribute-entry)*)? } A dictionary attribute is an attribute that represents a sorted collection of named attribute values. The elements are sorted by name, and each name must be unique within the collection. Examples: {} {attr_name = "string attribute"} {int_attr = 10, "string attr name" = "string attribute"} #### Parameters: ¶ ParameterC++ typeDescription value::llvm::ArrayRef<NamedAttribute> ### FloatAttr ¶ An Attribute containing a floating-point value Syntax: float-attribute ::= (float-literal (: float-type)?) | (hexadecimal-literal : float-type) A float attribute is a literal attribute that represents a floating point value of the specified float type. It can be represented in the hexadecimal form where the hexadecimal value is interpreted as bits of the underlying binary representation. This form is useful for representing infinity and NaN floating point values. To avoid confusion with integer attributes, hexadecimal literals must be followed by a float type to define a float attribute. Examples: 42.0 // float attribute defaults to f64 type 42.0 : f32 // float attribute of f32 type 0x7C00 : f16 // positive infinity 0x7CFF : f16 // NaN (one of possible values) 42 : f32 // Error: expected integer type #### Parameters: ¶ ParameterC++ typeDescription type::mlir::Type value::llvm::APFloat ### IntegerAttr ¶ An Attribute containing a integer value Syntax: integer-attribute ::= (integer-literal ( : (index-type | integer-type) )?) | true | false An integer attribute is a literal attribute that represents an integral value of the specified integer or index type. i1 integer attributes are treated as boolean attributes, and use a unique assembly format of either true or false depending on the value. The default type for non-boolean integer attributes, if a type is not specified, is signless 64-bit integer. Examples: 10 : i32 10 // : i64 is implied here. true // A bool, i.e. i1, value. false // A bool, i.e. i1, value. 
#### Parameters: ¶ ParameterC++ typeDescription type::mlir::Type valueAPInt ### IntegerSetAttr ¶ An Attribute containing an IntegerSet object Syntax: integer-set-attribute ::= affine_set < integer-set > Examples: affine_set<(d0) : (d0 - 2 >= 0)> #### Parameters: ¶ ParameterC++ typeDescription valueIntegerSet ### OpaqueAttr ¶ An opaque representation of another Attribute Syntax: opaque-attribute ::= dialect-namespace < attr-data > Opaque attributes represent attributes of non-registered dialects. These are attribute represented in their raw string form, and can only usefully be tested for attribute equality. Examples: #dialect<"opaque attribute data"> #### Parameters: ¶ ParameterC++ typeDescription dialectNamespaceStringAttr attrData::llvm::StringRef type::mlir::Type ### SparseElementsAttr ¶ An opaque representation of a multi-dimensional array Syntax: sparse-elements-attribute ::= sparse < attribute-value , attribute-value > : ( tensor-type | vector-type ) A sparse elements attribute is an elements attribute that represents a sparse vector or tensor object. This is where very few of the elements are non-zero. The attribute uses COO (coordinate list) encoding to represent the sparse elements of the elements attribute. The indices are stored via a 2-D tensor of 64-bit integer elements with shape [N, ndims], which specifies the indices of the elements in the sparse tensor that contains non-zero values. The element values are stored via a 1-D tensor with shape [N], that supplies the corresponding values for the indices. Example: sparse<[[0, 0], [1, 2]], [1, 5]> : tensor<3x4xi32> // This represents the following tensor: /// [[1, 0, 0, 0], /// [0, 0, 5, 0], /// [0, 0, 0, 0]] #### Parameters: ¶ ParameterC++ typeDescription typeShapedType indicesDenseIntElementsAttr valuesDenseElementsAttr ### StringAttr ¶ An Attribute containing a string Syntax: string-attribute ::= string-literal (: type)? A string attribute is an attribute that represents a string literal value. Examples: "An important string" "string with a type" : !dialect.string #### Parameters: ¶ ParameterC++ typeDescription value::llvm::StringRef type::mlir::Type ### SymbolRefAttr ¶ An Attribute containing a symbolic reference to an Operation Syntax: symbol-ref-attribute ::= symbol-ref-id (:: symbol-ref-id)* A symbol reference attribute is a literal attribute that represents a named reference to an operation that is nested within an operation with the OpTrait::SymbolTable trait. As such, this reference is given meaning by the nearest parent operation containing the OpTrait::SymbolTable trait. It may optionally contain a set of nested references that further resolve to a symbol nested within a different symbol table. This attribute can only be held internally by array attributes, dictionary attributes(including the top-level operation attribute dictionary) as well as attributes exposing it via the SubElementAttrInterface interface. Symbol reference attributes nested in types are currently not supported. Rationale: Identifying accesses to global data is critical to enabling efficient multi-threaded compilation. Restricting global data access to occur through symbols and limiting the places that can legally hold a symbol reference simplifies reasoning about these data accesses. See Symbols And SymbolTables for more information. 
Examples: @flat_reference @parent_reference::@nested_reference #### Parameters: ¶ ParameterC++ typeDescription rootReferenceStringAttr nestedReferences::llvm::ArrayRef<FlatSymbolRefAttr> ### TypeAttr ¶ An Attribute containing a Type Syntax: type-attribute ::= type A type attribute is an attribute that represents a type object. Examples: i32 !dialect.type #### Parameters: ¶ ParameterC++ typeDescription valueType ### UnitAttr ¶ An Attribute value of unit type Syntax: unit-attribute ::= unit A unit attribute is an attribute that represents a value of unit type. The unit type allows only one value forming a singleton set. This attribute value is used to represent attributes that only have meaning from their existence. One example of such an attribute could be the swift.self attribute. This attribute indicates that a function parameter is the self/context parameter. It could be represented as a boolean attribute(true or false), but a value of false doesn’t really bring any value. The parameter either is the self/context or it isn’t. Examples: // A unit attribute defined with the unit value specifier. func.func @verbose_form() attributes {dialectName.unitAttr = unit} // A unit attribute in an attribute dictionary can also be defined without // the value specifier. func.func @simple_form() attributes {dialectName.unitAttr} ## Location Attributes ¶ A subset of the builtin attribute values correspond to source locations, that may be attached to Operations. ### CallSiteLoc ¶ A callsite source location Syntax: callsite-location ::= callsite ( location at location ) An instance of this location allows for representing a directed stack of location usages. This connects a location of a callee with the location of a caller. Example: loc(callsite("foo" at "mysource.cc":10:8)) #### Parameters: ¶ ParameterC++ typeDescription calleeLocation callerLocation ### FileLineColLoc ¶ A file:line:column source location Syntax: filelinecol-location ::= string-literal : integer-literal : integer-literal An instance of this location represents a tuple of file, line number, and column number. This is similar to the type of location that you get from most source languages. Example: loc("mysource.cc":10:8) #### Parameters: ¶ ParameterC++ typeDescription filenameStringAttr lineunsigned columnunsigned ### FusedLoc ¶ A tuple of other source locations Syntax: fused-location ::= fused fusion-metadata? [ location (location ,)* ] fusion-metadata ::= < attribute-value > An instance of a fused location represents a grouping of several other source locations, with optional metadata that describes the context of the fusion. There are many places within a compiler in which several constructs may be fused together, e.g. pattern rewriting, that normally result partial or even total loss of location information. With fused locations, this is a non-issue. Example: loc(fused["mysource.cc":10:8, "mysource.cc":22:8) loc(fused<"CSE">["mysource.cc":10:8, "mysource.cc":22:8]) #### Parameters: ¶ ParameterC++ typeDescription locations::llvm::ArrayRef<Location> metadataAttribute ### NameLoc ¶ A named source location Syntax: name-location ::= string-literal (( location ))? An instance of this location allows for attaching a name to a child location. This can be useful for representing the locations of variable, or node, definitions. 
Example: loc("CSE"("mysource.cc":10:8)) #### Parameters: ¶ ParameterC++ typeDescription nameStringAttr childLocLocation ### OpaqueLoc ¶ An opaque source location An instance of this location essentially contains a pointer to some data structure that is external to MLIR and an optional location that can be used if the first one is not suitable. Since it contains an external structure, only the optional location is used during serialization. #### Parameters: ¶ ParameterC++ typeDescription underlyingLocationuintptr_t underlyingTypeIDTypeID fallbackLocationLocation ### UnknownLoc ¶ An unspecified source location Syntax: unknown-location ::= ? Source location information is an extremely integral part of the MLIR infrastructure. As such, location information is always present in the IR, and must explicitly be set to unknown. Thus, an instance of the unknown location represents an unspecified source location. Example: loc(?) ## Operations ¶ ### builtin.module (::mlir::ModuleOp) ¶ A top level container operation Syntax: operation ::= builtin.module ($sym_name^)? attr-dict-with-keyword$bodyRegion A module represents a top-level container operation. It contains a single graph region containing a single block which can contain any operations and does not have a terminator. Operations within this region cannot implicitly capture values defined outside the module, i.e. Modules are IsolatedFromAbove. Modules have an optional symbol name which can be used to refer to them in operations. Example: module { func.func @foo() } Traits: AffineScope, HasOnlyGraphRegion, IsolatedFromAbove, NoRegionArguments, NoTerminator, SingleBlock, SymbolTable Interfaces: OpAsmOpInterface, RegionKindInterface, Symbol #### Attributes: ¶ AttributeMLIR TypeDescription sym_name::mlir::StringAttrstring attribute sym_visibility::mlir::StringAttrstring attribute ### builtin.unrealized_conversion_cast (::mlir::UnrealizedConversionCastOp) ¶ An unrealized conversion from one set of types to another Syntax: operation ::= builtin.unrealized_conversion_cast ($inputs^ : type($inputs))? to type(\$outputs) attr-dict An unrealized_conversion_cast operation represents an unrealized conversion from one set of types to another, that is used to enable the inter-mixing of different type systems. This operation should not be attributed any special representational or execution semantics, and is generally only intended to be used to satisfy the temporary intermixing of type systems during the conversion of one type system to another. This operation may produce results of arity 1-N, and accept as input operands of arity 0-N. Example: // An unrealized 0-1 conversion. These types of conversions are useful in // cases where a type is removed from the type system, but not all uses have // been converted. For example, imagine we have a tuple type that is // expanded to its element types. If only some uses of an empty tuple type // instance are converted we still need an instance of the tuple type, but // have no inputs to the unrealized conversion. %result = unrealized_conversion_cast to !bar.tuple_type<> // An unrealized 1-1 conversion. %result1 = unrealized_conversion_cast %operand : !foo.type to !bar.lowered_type // An unrealized 1-N conversion. %results2:2 = unrealized_conversion_cast %tuple_operand : !foo.tuple_type<!foo.type, !foo.type> to !foo.type, !foo.type // An unrealized N-1 conversion. 
%result3 = unrealized_conversion_cast %operand, %operand : !foo.type, !foo.type to !bar.tuple_type<!foo.type, !foo.type> Interfaces: CastOpInterface, NoSideEffect (MemoryEffectOpInterface) Effects: MemoryEffects::Effect{} #### Operands: ¶ OperandDescription inputsany type #### Results: ¶ ResultDescription outputsany type ## Types ¶ ### BFloat16Type ¶ bfloat16 floating-point type ### ComplexType ¶ Complex number with a parameterized element type Syntax: complex-type ::= complex < type > The value of complex type represents a complex number with a parameterized element type, which is composed of a real and imaginary value of that element type. The element must be a floating point or integer scalar type. Examples: complex<f32> complex<i32> #### Parameters: ¶ ParameterC++ typeDescription elementTypeType ### Float128Type ¶ 128-bit floating-point type ### Float16Type ¶ 16-bit floating-point type ### Float32Type ¶ 32-bit floating-point type ### Float64Type ¶ 64-bit floating-point type ### Float80Type ¶ 80-bit floating-point type ### FunctionType ¶ Map from a list of inputs to a list of results Syntax: // Function types may have multiple results. function-result-type ::= type-list-parens | non-function-type function-type ::= type-list-parens -> function-result-type The function type can be thought of as a function signature. It consists of a list of formal parameter types and a list of formal result types. #### Parameters: ¶ ParameterC++ typeDescription inputsArrayRef<Type> resultsArrayRef<Type> ### IndexType ¶ Integer-like type with unknown platform-dependent bit width Syntax: // Target word-sized integer. index-type ::= index The index type is a signless integer whose size is equal to the natural machine word of the target ( rationale ) and is used by the affine constructs in MLIR. Rationale: integers of platform-specific bit widths are practical to express sizes, dimensionalities and subscripts. ### IntegerType ¶ Integer type with arbitrary precision up to a fixed limit Syntax: // Sized integers like i1, i4, i8, i16, i32. signed-integer-type ::= si [1-9][0-9]* unsigned-integer-type ::= ui [1-9][0-9]* signless-integer-type ::= i [1-9][0-9]* integer-type ::= signed-integer-type | unsigned-integer-type | signless-integer-type Integer types have a designated bit width and may optionally have signedness semantics. Rationale: low precision integers (like i2, i4 etc) are useful for low-precision inference chips, and arbitrary precision integers are useful for hardware synthesis (where a 13 bit multiplier is a lot cheaper/smaller than a 16 bit one). #### Parameters: ¶ ParameterC++ typeDescription widthunsigned signednessSignednessSemantics ### MemRefType ¶ Shaped reference to a region of memory Syntax: memref-type ::= memref < dimension-list-ranked type (, layout-specification)? (, memory-space)? > stride-list ::= [ (dimension (, dimension)*)? ] strided-layout ::= offset: dimension , strides: stride-list layout-specification ::= semi-affine-map | strided-layout | attribute-value memory-space ::= attribute-value A memref type is a reference to a region of memory (similar to a buffer pointer, but more powerful). The buffer pointed to by a memref can be allocated, aliased and deallocated. A memref can be used to read and write data from/to the memory region which it references. Memref types use the same shape specifier as tensor types. Note that memref<f32>, memref<0 x f32>, memref<1 x 0 x f32>, and memref<0 x 1 x f32> are all different types. A memref is allowed to have an unknown rank (e.g. 
memref<*xf32>). The purpose of unranked memrefs is to allow external library functions to receive memref arguments of any rank without versioning the functions based on the rank. Other uses of this type are disallowed or will have undefined behavior. Are accepted as elements: • built-in integer types; • built-in index type; • built-in floating point types; • built-in vector types with elements of the above types; • another memref type; • any other type implementing MemRefElementTypeInterface. ##### Codegen of Unranked Memref ¶ Using unranked memref in codegen besides the case mentioned above is highly discouraged. Codegen is concerned with generating loop nests and specialized instructions for high-performance, unranked memref is concerned with hiding the rank and thus, the number of enclosing loops required to iterate over the data. However, if there is a need to code-gen unranked memref, one possible path is to cast into a static ranked type based on the dynamic rank. Another possible path is to emit a single while loop conditioned on a linear index and perform delinearization of the linear index to a dynamic array containing the (unranked) indices. While this is possible, it is expected to not be a good idea to perform this during codegen as the cost of the translations is expected to be prohibitive and optimizations at this level are not expected to be worthwhile. If expressiveness is the main concern, irrespective of performance, passing unranked memrefs to an external C++ library and implementing rank-agnostic logic there is expected to be significantly simpler. Unranked memrefs may provide expressiveness gains in the future and help bridge the gap with unranked tensors. Unranked memrefs will not be expected to be exposed to codegen but one may query the rank of an unranked memref (a special op will be needed for this purpose) and perform a switch and cast to a ranked memref as a prerequisite to codegen. Example: // With static ranks, we need a function for each possible argument type %A = alloc() : memref<16x32xf32> %B = alloc() : memref<16x32x64xf32> call @helper_2D(%A) : (memref<16x32xf32>)->() call @helper_3D(%B) : (memref<16x32x64xf32>)->() // With unknown rank, the functions can be unified under one unranked type %A = alloc() : memref<16x32xf32> %B = alloc() : memref<16x32x64xf32> // Remove rank info %A_u = memref_cast %A : memref<16x32xf32> -> memref<*xf32> %B_u = memref_cast %B : memref<16x32x64xf32> -> memref<*xf32> // call same function with dynamic ranks call @helper(%A_u) : (memref<*xf32>)->() call @helper(%B_u) : (memref<*xf32>)->() The core syntax and representation of a layout specification is a semi-affine map. Additionally, syntactic sugar is supported to make certain layout specifications more intuitive to read. For the moment, a memref supports parsing a strided form which is converted to a semi-affine map automatically. The memory space of a memref is specified by a target-specific attribute. It might be an integer value, string, dictionary or custom dialect attribute. The empty memory space (attribute is None) is target specific. The notionally dynamic value of a memref value includes the address of the buffer allocated, as well as the symbols referred to by the shape, layout map, and index maps. Examples of memref static type // Identity index/layout map #identity = affine_map<(d0, d1) -> (d0, d1)> // Column major layout. #col_major = affine_map<(d0, d1, d2) -> (d2, d1, d0)> // A 2-d tiled layout with tiles of size 128 x 256. 
#tiled_2d_128x256 = affine_map<(d0, d1) -> (d0 div 128, d1 div 256, d0 mod 128, d1 mod 256)> // A tiled data layout with non-constant tile sizes. #tiled_dynamic = affine_map<(d0, d1)[s0, s1] -> (d0 floordiv s0, d1 floordiv s1, d0 mod s0, d1 mod s1)> // A layout that yields a padding on two at either end of the minor dimension. #padded = affine_map<(d0, d1) -> (d0, (d1 + 2) floordiv 2, (d1 + 2) mod 2)> // The dimension list "16x32" defines the following 2D index space: // // { (i, j) : 0 <= i < 16, 0 <= j < 32 } // memref<16x32xf32, #identity> // The dimension list "16x4x?" defines the following 3D index space: // // { (i, j, k) : 0 <= i < 16, 0 <= j < 4, 0 <= k < N } // // where N is a symbol which represents the runtime value of the size of // the third dimension. // // %N here binds to the size of the third dimension. %A = alloc(%N) : memref<16x4x?xf32, #col_major> // A 2-d dynamic shaped memref that also has a dynamically sized tiled // layout. The memref index space is of size %M x %N, while %B1 and %B2 // bind to the symbols s0, s1 respectively of the layout map #tiled_dynamic. // Data tiles of size %B1 x %B2 in the logical space will be stored // contiguously in memory. The allocation size will be // (%M ceildiv %B1) * %B1 * (%N ceildiv %B2) * %B2 f32 elements. %T = alloc(%M, %N) [%B1, %B2] : memref<?x?xf32, #tiled_dynamic> // A memref that has a two-element padding at either end. The allocation // size will fit 16 * 64 float elements of data. %P = alloc() : memref<16x64xf32, #padded> // Affine map with symbol 's0' used as offset for the first dimension. #imapS = affine_map<(d0, d1) [s0] -> (d0 + s0, d1)> // Allocate memref and bind the following symbols: // '%n' is bound to the dynamic second dimension of the memref type. // '%o' is bound to the symbol 's0' in the affine map of the memref type. %n = ... %o = ... %A = alloc (%n)[%o] : <16x?xf32, #imapS> ##### Index Space ¶ A memref dimension list defines an index space within which the memref can be indexed to access data. ##### Index ¶ Data is accessed through a memref type using a multidimensional index into the multidimensional index space defined by the memref’s dimension list. Examples // Allocates a memref with 2D index space: // { (i, j) : 0 <= i < 16, 0 <= j < 32 } %A = alloc() : memref<16x32xf32, #imapA> // Loads data from memref '%A' using a 2D index: (%i, %j) %v = load %A[%i, %j] : memref<16x32xf32, #imapA> ##### Index Map ¶ An index map is a one-to-one semi-affine map that transforms a multidimensional index from one index space to another. For example, the following figure shows an index map which maps a 2-dimensional index from a 2x2 index space to a 3x3 index space, using symbols S0 and S1 as offsets. The number of domain dimensions and range dimensions of an index map can be different, but must match the number of dimensions of the input and output index spaces on which the map operates. The index space is always non-negative and integral. In addition, an index map must specify the size of each of its range dimensions onto which it maps. Index map symbols must be listed in order with symbols for dynamic dimension sizes first, followed by other required symbols. ##### Layout Map ¶ A layout map is a semi-affine map which encodes logical to physical index space mapping, by mapping input dimensions to their ordering from most-major (slowest varying) to most-minor (fastest varying). Therefore, an identity layout map corresponds to a row-major layout. 
Identity layout maps do not contribute to the MemRef type identification and are discarded on construction. That is, a type with an explicit identity map is memref<?x?xf32, (i,j)->(i,j)> is strictly the same as the one without layout maps, memref<?x?xf32>. Layout map examples: // MxN matrix stored in row major layout in memory: #layout_map_row_major = (i, j) -> (i, j) // MxN matrix stored in column major layout in memory: #layout_map_col_major = (i, j) -> (j, i) // MxN matrix stored in a 2-d blocked/tiled layout with 64x64 tiles. #layout_tiled = (i, j) -> (i floordiv 64, j floordiv 64, i mod 64, j mod 64) ##### Strided MemRef ¶ A memref may specify a strided layout as part of its type. A stride specification is a list of integer values that are either static or ? (dynamic case). Strides encode the distance, in number of elements, in (linear) memory between successive entries along a particular dimension. A stride specification is syntactic sugar for an equivalent strided memref representation with a single semi-affine map. For example, memref<42x16xf32, offset: 33, strides: [1, 64]> specifies a non-contiguous memory region of 42 by 16 f32 elements such that: 1. the minimal size of the enclosing memory region must be 33 + 42 * 1 + 16 * 64 = 1066 elements; 2. the address calculation for accessing element (i, j) computes 33 + i + 64 * j 3. the distance between two consecutive elements along the inner dimension is 1 element and the distance between two consecutive elements along the outer dimension is 64 elements. This corresponds to a column major view of the memory region and is internally represented as the type memref<42x16xf32, (i, j) -> (33 + i + 64 * j)>. The specification of strides must not alias: given an n-D strided memref, indices (i1, ..., in) and (j1, ..., jn) may not refer to the same memory address unless i1 == j1, ..., in == jn. Strided memrefs represent a view abstraction over preallocated data. They are constructed with special ops, yet to be introduced. Strided memrefs are a special subclass of memrefs with generic semi-affine map and correspond to a normalized memref descriptor when lowering to LLVM. #### Parameters: ¶ ParameterC++ typeDescription shape::llvm::ArrayRef<int64_t> elementTypeType layoutMemRefLayoutAttrInterface memorySpaceAttribute ### NoneType ¶ A unit type NoneType is a unit type, i.e. a type with exactly one possible value, where its value does not have a defined dynamic representation. ### OpaqueType ¶ Type of a non-registered dialect Syntax: opaque-type ::= opaque < type > Opaque types represent types of non-registered dialects. These are types represented in their raw string form, and can only usefully be tested for type equality. Examples: opaque<"llvm", "struct<(i32, float)>"> opaque<"pdl", "value"> #### Parameters: ¶ ParameterC++ typeDescription dialectNamespaceStringAttr typeData::llvm::StringRef ### RankedTensorType ¶ Multi-dimensional array with a fixed number of dimensions Syntax: tensor-type ::= tensor < dimension-list type (, encoding)? > dimension-list ::= (dimension x)* dimension ::= ? | decimal-literal encoding ::= attribute-value Values with tensor type represents aggregate N-dimensional data values, and have a known element type and a fixed rank with a list of dimensions. Each dimension may be a static non-negative decimal constant or be dynamically determined (indicated by ?). The runtime representation of the MLIR tensor type is intentionally abstracted - you cannot control layout or get a pointer to the data. 
For low level buffer access, MLIR has a memref type. This abstracted runtime representation holds both the tensor data values as well as information about the (potentially dynamic) shape of the tensor. The dim operation returns the size of a dimension from a value of tensor type. The encoding attribute provides additional information on the tensor. An empty attribute denotes a straightforward tensor without any specific structure. But particular properties, like sparsity or other specific characteristics of the data of the tensor can be encoded through this attribute. The semantics are defined by a type and attribute interface and must be respected by all passes that operate on tensor types. TODO: provide this interface, and document it further. Note: hexadecimal integer literals are not allowed in tensor type declarations to avoid confusion between 0xf32 and 0 x f32. Zero sizes are allowed in tensors and treated as other sizes, e.g., tensor<0 x 1 x i32> and tensor<1 x 0 x i32> are different types. Since zero sizes are not allowed in some other types, such tensors should be optimized away before lowering tensors to vectors. Examples: // Known rank but unknown dimensions. tensor<? x ? x ? x ? x f32> // Partially known dimensions. tensor<? x ? x 13 x ? x f32> // Full static shape. tensor<17 x 4 x 13 x 4 x f32> // Tensor with rank zero. Represents a scalar. tensor<f32> // Zero-element dimensions are allowed. tensor<0 x 42 x f32> // Zero-element tensor of f32 type (hexadecimal literals not allowed here). tensor<0xf32> // Tensor with an encoding attribute (where #ENCODING is a named alias). tensor<?x?xf64, #ENCODING> #### Parameters: ¶ ParameterC++ typeDescription shape::llvm::ArrayRef<int64_t> elementTypeType encodingAttribute ### TupleType ¶ Fixed-sized collection of other types Syntax: tuple-type ::= tuple < (type ( , type)*)? > The value of tuple type represents a fixed-size collection of elements, where each element may be of a different type. Rationale: Though this type is first class in the type system, MLIR provides no standard operations for operating on tuple types ( rationale). Examples: // Empty tuple. tuple<> // Single element tuple<f32> // Many elements. tuple<i32, f32, tensor<i1>, i5> #### Parameters: ¶ ParameterC++ typeDescription typesArrayRef<Type> ### UnrankedMemRefType ¶ Shaped reference, with unknown rank, to a region of memory Syntax: unranked-memref-type ::= memref <*x type (, memory-space)? > memory-space ::= attribute-value A memref type with an unknown rank (e.g. memref<*xf32>). The purpose of unranked memrefs is to allow external library functions to receive memref arguments of any rank without versioning the functions based on the rank. Other uses of this type are disallowed or will have undefined behavior. Examples: memref<*f32> // An unranked memref with a memory space of 10. memref<*f32, 10> #### Parameters: ¶ ParameterC++ typeDescription elementTypeType memorySpaceAttribute ### UnrankedTensorType ¶ Multi-dimensional array with unknown dimensions Syntax: tensor-type ::= tensor < * x type > An unranked tensor is a type of tensor in which the set of dimensions have unknown rank. See RankedTensorType for more information on tensor types. Examples: tensor<*f32> #### Parameters: ¶ ParameterC++ typeDescription elementTypeType ### VectorType ¶ Multi-dimensional SIMD vector type Syntax: vector-type ::= vector < vector-dim-list vector-element-type > vector-element-type ::= float-type | integer-type | index-type vector-dim-list := (static-dim-list x)? 
([ static-dim-list ] x)? static-dim-list ::= decimal-literal (x decimal-literal)* The vector type represents a SIMD style vector used by target-specific operation sets like AVX or SVE. While the most common use is for 1D vectors (e.g. vector<16 x f32>) we also support multidimensional registers on targets that support them (like TPUs). The dimensions of a vector type can be fixed-length, scalable, or a combination of the two. The scalable dimensions in a vector are indicated between square brackets ([ ]), and all fixed-length dimensions, if present, must precede the set of scalable dimensions. That is, a vector<2x[4]xf32> is valid, but vector<[4]x2xf32> is not. Vector shapes must be positive decimal integers. 0D vectors are allowed by omitting the dimension: vector<f32>. Note: hexadecimal integer literals are not allowed in vector type declarations, vector<0x42xi32> is invalid because it is interpreted as a 2D vector with shape (0, 42) and zero shapes are not allowed. Examples: // A 2D fixed-length vector of 3x42 i32 elements. vector<3x42xi32> // A 1D scalable-length vector that contains a multiple of 4 f32 elements. vector<[4]xf32> // A 2D scalable-length vector that contains a multiple of 2x8 i8 elements. vector<[2x8]xf32> // A 2D mixed fixed/scalable vector that contains 4 scalable vectors of 4 f32 elements. vector<4x[4]xf32> #### Parameters: ¶ ParameterC++ typeDescription shape::llvm::ArrayRef<int64_t> elementTypeType numScalableDimsunsigned ## MemRefElementTypeInterface (MemRefElementTypeInterface) ¶ Indication that this type can be used as element in memref types. Implementing this interface establishes a contract between this type and the memref type indicating that this type can be used as element of ranked or unranked memrefs. The type is expected to: • model an entity stored in memory; • have non-zero size. For example, scalar values such as integers can implement this interface, but indicator types such as void or unit should not. The interface currently has no methods and is used by types to opt into being memref elements. This may change in the future, in particular to require types to provide their size or alignment given a data layout. ## ShapedType (ShapedTypeInterface) ¶ This interface provides a common API for interacting with multi-dimensional container types. These types contain a shape and an element type. A shape is a list of sizes corresponding to the dimensions of the container. If the number of dimensions in the shape is unknown, the shape is “unranked”. If the number of dimensions is known, the shape “ranked”. The sizes of the dimensions of the shape must be positive, or kDynamicSize (in which case the size of the dimension is dynamic, or not statically known). ### Methods: ¶ #### cloneWith¶ ::mlir::ShapedType cloneWith(::llvm::Optional<::llvm::ArrayRef<int64_t>> shape, ::mlir::Type elementType); Returns a clone of this type with the given shape and element type. If a shape is not provided, the current shape of the type is used. NOTE: This method must be implemented by the user. #### getElementType¶ ::mlir::Type getElementType(); Returns the element type of this shaped type. NOTE: This method must be implemented by the user. #### hasRank¶ bool hasRank(); Returns if this type is ranked, i.e. it has a known number of dimensions. NOTE: This method must be implemented by the user. #### getShape¶ ::llvm::ArrayRef<int64_t> getShape(); Returns the shape of this type if it is ranked, otherwise asserts. NOTE: This method must be implemented by the user. 
## SubElementTypeInterface (SubElementTypeInterface) ¶ An interface used to query and manipulate sub-elements, such as sub-types and sub-attributes of a composite type. ### Methods: ¶ #### walkImmediateSubElements¶ void walkImmediateSubElements(llvm::function_ref<void(mlir::Attribute)> walkAttrsFn, llvm::function_ref<void(mlir::Type)> walkTypesFn); Walk all of the immediately nested sub-attributes and sub-types. This method does not recurse into sub elements. NOTE: This method must be implemented by the user. #### replaceImmediateSubElements¶ ::mlir::Type replaceImmediateSubElements(::llvm::ArrayRef<::mlir::Attribute> replAttrs, ::llvm::ArrayRef<::mlir::Type> replTypes); Replace the immediately nested sub-attributes and sub-types with those provided. The order of the provided elements is derived from the order of the elements returned by the callbacks of walkImmediateSubElements. The element at index 0 would replace the very first attribute given by walkImmediateSubElements. On success, the new instance with the values replaced is returned. If replacement fails, nullptr is returned. NOTE: This method must be implemented by the user.
## Calculating sound attenuation with rolloff

0

I was hoping someone could help me make sense of sound attenuation calculations, because the more I try to look into it the more confused I become.

From what I can tell, the main way to calculate attenuation is with the formula

Attenuation = Volume - 20*log(Distance/Maximum distance)

which makes sense. I also see that a lot of places mention a minimum distance, which I understand as the distance below which attenuation is always 1, that is, attenuation doesn't apply until after that point.

However, my problem comes when I tried to work out what rolloff does. From what I understand it just controls how fast the volume drops over distance, but this turned up an entirely different equation:

Attenuation = Minimum distance*(1/(1+Rolloff*(1-Distance)))

And this is incredibly confusing to me, as it's really unclear where, how, or even if these two equations interact. More confusing is that occasionally this formula returns a value > 1 at the minimum distance. I haven't seen any variation of the first equation which includes rolloff, which makes me more convinced that there's some connection between the two that I'm just not understanding.

I'm sorry if this is a stupid question and I've just massively overcomplicated this, but I'm very lost, and any further research seems impossible because I feel like I'm lacking the terminology to articulate what I'm looking for.

1 Are you talking about using an attenuator in an electrical circuit or the decrease in volume the further away you get from the source? – Timinycricket – 2021-01-20T06:53:02.217
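For reference, here is a sketch of the two distance-attenuation models that formulas like these usually come from: a dB-based logarithmic falloff and an inverse-distance model with a rolloff factor (the latter written in the OpenAL/FMOD convention). This is my reading of the formulas, not a statement about any particular engine:

```python
# Two common distance-attenuation models (the inverse-distance form follows the
# OpenAL/FMOD convention; individual engines may differ).
import math

def log_attenuation_db(distance, max_distance):
    """Level change in dB relative to the level at max_distance."""
    return -20.0 * math.log10(distance / max_distance)

def inverse_distance_gain(distance, min_distance, rolloff):
    """Linear gain in (0, 1]; equals 1 for any distance <= min_distance."""
    d = max(distance, min_distance)          # no attenuation inside the minimum distance
    return min_distance / (min_distance + rolloff * (d - min_distance))

for d in (1, 5, 10, 20, 50):
    print(d, round(inverse_distance_gain(d, min_distance=1.0, rolloff=1.0), 3))
# A rolloff > 1 makes the gain drop faster with distance; rolloff < 1 makes it drop more slowly.
```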
Research Papers: Internal Combustion Engines

# Instantaneous Engine Speed Time-Frequency Analysis for Onboard Misfire Detection and Cylinder Isolation in a V12 High-Performance Engine

Author and Article Information: Fabrizio Ponti, DIEM, University of Bologna, Bologna 40136, Italy

J. Eng. Gas Turbines Power 130(1), 012805 (Jan 09, 2008) (9 pages) doi:10.1115/1.2436563 History: Received June 19, 2006; Revised August 11, 2006; Published January 09, 2008

## Abstract

The diagnosis of a misfire event and the isolation of the cylinder in which the misfire took place are required by the onboard diagnostics (OBD) requirements over the whole operating range for all vehicles, whatever engine configuration they mount. This task is particularly challenging for engines with a high number of cylinders and for engine operating conditions that are characterized by high engine speed and low load. This is why much research has been devoted to this topic in recent years, developing different detection methodologies based on signals such as instantaneous engine speed, exhaust pressure, etc., both in time and frequency domains. This paper presents the development and the validation of a methodology for misfire detection based on the time-frequency analysis of the instantaneous engine speed signal. This signal contains information related to the misfire event, since a misfire occurrence is characterized by a sudden engine speed decrease and a subsequent damped torsional vibration. The identification of a specific pattern in the instantaneous engine speed frequency content, characteristic of the system under study, allows performing the desired misfire detection and cylinder isolation. Particular attention has been devoted to designing the methodology in order to avoid the possibility of false alarms caused by the excitation of this frequency pattern independently of a misfire occurrence. Although time-frequency analysis is usually considered a time-consuming operation not associated with onboard applications, the methodology proposed here has been modified and simplified to achieve the responsiveness required for its use directly onboard a vehicle. Experimental tests have been performed on a 5.7 l V12 spark-ignited engine run onboard a vehicle. The frequency characteristic of the engine-vehicle system is not the same as that observed when running the engine on a test bench, because of the different inertia and stiffness that the connection between the engine and the load presents in the two cases. This makes it impossible to test and validate the methodology proposed here only on a test bench, without running tests on the vehicle. Nevertheless, knowledge of the mechanical design of the engine and driveline makes it possible to determine the resonance frequencies of the system (the lowest one is always the most important for this work) before running tests on the vehicle. This allows saving time and reducing costs in developing the proposed approach.
Topics: Engines, Cylinders, Gears, Cycles

## Figures

Figure 1: Indicated torque mean value evaluation over 60 deg intervals, for a V12 engine with cylinder 1 misfiring at the beginning of the engine cycle
Figure 2: Difference between the indicated torque production in case of misfire in cylinder 1 and during normal operating conditions
Figure 3: Engine speed trend for a V12 engine with cylinder 1 misfiring at 0 deg crankshaft angle in fifth gear at 230 km/hr
Figure 4: Waterfall representation of the engine speed STFT analysis at 5000 rpm in fifth gear at 230 km/hr (one induced misfire in cylinder 10 every 30 cycles)
Figure 5: Engine speed variation for the transient test in third gear (one induced misfire every 50 cycles in cylinder 1)
Figure 6: Engine speed STFT analysis for the transient test in third gear (one induced misfire every 50 cycles in cylinder 1)
Figure 7: Amplitude of the harmonic at f* for the steady-state test at 5000 rpm in fifth gear at 230 km/hr (one induced misfire every 30 cycles in cylinder 10)
Figure 8: Amplitude of the harmonic at f* for the transient test in third gear (one induced misfire every 50 cycles in cylinder 1)
Figure 9: Engine speed waveform for a test conducted in second gear from 6400 rpm to 7600 rpm (from 135 km/hr to 160 km/hr)
Figure 10: St(2πf*) evaluation for the test shown in Fig. 9
Figure 11: LUi evaluation for the test shown in Fig. 9
Figure 12: Engine speed waveform for a test conducted in third gear at 4850 rpm (140 km/hr)
Figure 13: St(2πf*) evaluation for the test shown in Fig. 1
Figure 14: LUi evaluation for the test shown in Fig. 1
Figure 15: St(2πf*) evaluation for a test conducted at 2500 rpm in fifth gear
Figure 16: LUi evaluation for a test conducted at 2500 rpm in fifth gear
Figure 17: St(2πf*) evaluation for a test conducted at 2500 rpm in fifth gear
Figure 18: LUi evaluation for a test conducted at 2500 rpm in fifth gear
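The abstract describes tracking the amplitude of a characteristic frequency f* in a time-frequency (STFT) analysis of the instantaneous engine speed. The sketch below only illustrates that kind of computation and is not the authors' algorithm: the signal, sampling rate, and resonance frequency are all invented for the example.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                      # assumed sampling rate of the speed signal, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
f_star = 8.0                     # hypothetical driveline resonance frequency, Hz

# Synthetic "engine speed" signal: constant speed plus a short damped
# oscillation at f_star standing in for a misfire-induced torsional vibration.
speed = np.full_like(t, 5000.0)
hit = (t > 4.0)
speed += hit * 30.0 * np.exp(-(t - 4.0) * 2.0) * np.sin(2.0 * np.pi * f_star * (t - 4.0))

f, times, Z = stft(speed - speed.mean(), fs=fs, nperseg=512)
band = np.argmin(np.abs(f - f_star))          # frequency bin closest to f*
amplitude_at_fstar = np.abs(Z[band, :])       # track its amplitude over time

# The peak of this trace should land near the injected "misfire" at t = 4 s.
print(times[np.argmax(amplitude_at_fstar)])
```

A real onboard implementation would of course work crank-angle-synchronously and with a much cheaper transform, as the paper emphasizes; the point here is only the shape of the computation.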
# Using chessboard package, how to render “black” squares in solid gray?

In the LaTeX chessboard package, black squares are by default rendered with a hatch pattern. I want them in solid gray. My several attempts failed, and chessboard.pdf is not helpful here. The programming model is not clear, so I gave up. Need help. Thanks in advance.

\documentclass[preview]{standalone}
\usepackage[LSB,LSBC3,T1]{fontenc}
\usepackage{tikz}
\usepackage[skaknew]{skak,chessboard}
\begin{document}
\setchessboard{showmover=false}
\chessboard
\end{document}

\documentclass[preview]{standalone}
\usepackage[LSB,LSBC4,T1]{fontenc}
\usepackage{tikz}
\usepackage[skaknew]{skak,chessboard}
\begin{document}
\setboardfontcolors{
  % the specific color keys were lost from this answer as reproduced here;
  % they set the dark-square color to a lighter gray such as gray!30
}
\setchessboard{boardfontencoding=LSBC4,setfontcolors,showmover=false}
\chessboard
\end{document}

Code with a lighter gray. See \setboardfontcolors.

• Thanks! Difficult getting through the documentation. +1 – DJP Sep 10 '19 at 15:58
• Life is cool because of you guys at StackExchange, a million thanks. – userOne Sep 11 '19 at 7:29
• I will accept this as the answer. It gives control over the color. Also thanks to DJP for giving a quick solution. – userOne Sep 11 '19 at 7:43

The documentation on CTAN is here. I found something like what you wanted on page 34. I modified your code using boardfontencoding=LSBC4 to get the result below.

\documentclass[preview]{standalone}
\usepackage[LSB,LSBC4,T1]{fontenc}
\usepackage{tikz}
\usepackage[skaknew]{skak,chessboard}
\begin{document}
\setchessboard{boardfontencoding=LSBC4,setfontcolors,showmover=false}
\chessboard
\end{document}

• The solution works, but can I set a lighter shade of gray? Say gray!30. – userOne Sep 10 '19 at 15:27
• I tested color properties from the documentation and ran into transparency issues, and gave up. Strange, a font change is producing a different color. – userOne Sep 10 '19 at 15:38
• Sango found the fix to modify the gray color and posted it in their answer. – DJP Sep 10 '19 at 16:07
Center for Theoretical Physics of the Universe (CTPU)

# Neutrino Physics with Atomic/Molecular Processes

## by Dr. Minoru Tanaka (Osaka U)

Wednesday, 8 June 2016 (Asia/Seoul)

Description: The energy scale of atomic or molecular transitions is of order eV or less, which is closer to the neutrino mass scale suggested by the neutrino oscillation experiments. In this talk, I propose a new approach to clarifying unknown properties of neutrinos using atomic or molecular processes. After introducing the relevant process, radiative emission of a neutrino pair (RENP), I explain the macrocoherent amplification that is necessary to observe RENP. I also mention the possibility of detecting the cosmic neutrino background with RENP. In addition, some experimental results that exhibit macrocoherent amplification in a QED process are described.
Elements – s & p-block

## Elements of the s-block:

Elements of groups IA and IIA (groups 1 and 2), found on the left side of the periodic table, are called s-block elements. In these elements the last electron enters the outer s-orbital.

Characteristics:

1) Electronic configuration: The electronic configuration of the outermost shell is ns1 or ns2.

2) Valency: The valency of group IA elements is one and of group IIA elements is two.

3) Ionization potential: The ionization potential of s-block elements (except H) is low because of their large atomic radius; ionization potential is inversely proportional to atomic radius.

4) Atomic and ionic radius: The radius of s-block elements is large. The radius of a cation is always smaller than that of its neutral atom.

5) Properties of the cation: Elements of IA and IIA form cations of the type M+ and M2+ respectively. These cations are colorless and diamagnetic because all the electrons present are paired.

6) Standard reduction potential: s-block elements have negative standard reduction potentials, whereas the standard reduction potential of hydrogen is zero.

7) Reactivity: s-block elements are very reactive. They react with water and acids to liberate H2 gas.

8) Reducing power: The ionization potential and standard reduction potential of s-block elements are small, so these elements readily form cations by loss of electrons. Thus s-block elements are powerful reducing agents, and the alkali metals (IA) are more powerful reducing agents than the alkaline earth metals (IIA).

9) Nature of oxides and hydroxides: Oxides and hydroxides of the alkali and alkaline earth metals are basic, but BeO is amphoteric. Examples: Li2O, Na2O, CaO, MgO, LiOH, NaOH, Ca(OH)2, Mg(OH)2.

10) Nature of compounds: s-block elements form electrovalent (ionic) compounds; beryllium forms covalent compounds.

## Elements of the p-block:

Elements of groups IIIA, IVA, VA, VIA and VIIA (groups 13 to 17) are found on the right side of the periodic table. In these elements the last electron enters a p-orbital, so they are called p-block elements. The inert gases also belong to the p-block, but their properties differ markedly from those of the other p-block elements.

Characteristics:

1) Electronic configuration: The configuration of the outermost shell is ns2 np1–6.

2) Valency: The valency of group IIIA (group 13) elements is 3 (e.g. B, Al). Elements having 4, 5, 6 and 7 electrons in their outermost shell show valencies of 4, 3, 2 and 1 respectively.

3) Variable valency: Some p-block elements show variable valency, due either to the inert pair effect or to the presence of vacant d-orbitals. Examples: PCl3 & PCl5, SnCl2 & SnCl4, SO2 & SO3, Cl2O5 & Cl2O7, etc.

4) Ionization potential: The ionization potentials of p-block elements are higher than those of s-block elements because their atomic radii are smaller, and ionization potential is inversely proportional to atomic radius.

5) Atomic radius: Their atomic radii are smaller than those of s-block elements.

6) Metallic and nonmetallic character: The p-block contains metals, nonmetals and metalloids, so its elements show both metallic and nonmetallic properties, e.g. C, Si (nonmetals), Ge (metalloid), Sn & Pb (metals).

7) Reactivity: Most p-block elements are not very reactive, but some, such as O, S, P and the halogens, are very reactive.

8) Catenation property: The property of forming chains of identical atoms is called catenation. S8, P4, O3, etc. show catenation; the greatest catenation tendency is shown by carbon.
# Tag Info 8 The feature that you demand is called event location in Matlab ODE solvers pack, or rootfinding in SUNDIALS solvers suite terminology. Essentially this feature allows to stop integration exactly at the point where some vector function of free and dependent variables has a root. Namely, for system $$\frac{d\boldsymbol y}{dt} = f(t, \boldsymbol y), \quad \... 5 Hashing floating-point numbers can indeed lead to weird results, especially if the node positions can be perturbed by some small amount or if there are denormalized values. You included the Python tag, so I assume that's what you're using and that you have scipy. A quick and dirty solution would be to construct a kd-tree of the fine grid points, then for ... 5 From a performance view, you are always interested in preserving as much 'structure' in your grid as possible. Computations on a simplex- or a hexaedral mesh, where every cell looks like the next will be more performant, as you do not have to transform from local to global coordinates differently for each cell. Also, you do not have to save the cell ... 4 You achieve this with equidistribution to a mesh density function \rho(x). If you consider x as a continuous map from \xi \in [0,1] into your domain [0,L], then the statement x equidistributes \rho is equivalent to$$\int_0^{x(\xi)} \rho(x') \mathrm{d}x' = \xi \theta\,,$$where$$\theta = \int_0^L \rho(x) \mathrm{d}x\,.$$Not sure if this has a ... 4 In this situation, you need an integrator with dense output, which means it has a capability to calculate the solution to the ODE at any point in time between the time steps taken by the integrator. dop853 (from Hairer and Wanner's eighth-order Dormand-Prince integrator) has this capability, in addition to adaptive time-stepping. Likely what happens is that ... 4 Your numerical solution is probably just getting more accurate as you increase the number of grid points. Do you know or have you tried to derive the analytic (exact) solution for this problem? By looking at your plots, it seems like the exact solution has a shock (discontinuity) occurring at around x = 4.5, and the numerical method is resolving it with ... 3 Let me formulate my remark as an answer (and make it more precise): My experience and knowledge is that to use collocated grids with any standard (lower or higher order) methods for this problem, you must modify (stabilize) your Navier-Stokes equation. it means with some additional effort. This needs not to be necessary when using staggered grids. The ... 3 This is not a good way to do modern programming for many reasons. First of all, as you pointed out, this kind of code is hard to read and maintain. Secondly, this tends to be done in old versions of Fortran for reasons which are no longer a problem. Namely, old versions of Fortran didn't have ways to "bundle together" objects. If you made different arrays ... 3 In essence, you are asking whether you can enumerate the integer lattice sites within your domain from 1 to N in such a way that accessing the east/west/north/south neighbors of a location n requires accessing positions n_e,n_w,n_n,n_s so that the distance between index n and these four indices is minimal. This problem is equivalent to shuffling ... 
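The event-location (rootfinding) feature mentioned in the first answer of this group corresponds, in SciPy, to the events argument of solve_ivp. A minimal sketch follows; the ODE and the stopping threshold are invented purely for illustration.

```python
from scipy.integrate import solve_ivp

# dy/dt = -0.5 * y; stop the integration when y drops to 0.1.
def rhs(t, y):
    return [-0.5 * y[0]]

def hit_threshold(t, y):
    return y[0] - 0.1           # the integrator locates the root of this function

hit_threshold.terminal = True   # stop integrating at the event
hit_threshold.direction = -1    # only trigger on a decreasing crossing

sol = solve_ivp(rhs, t_span=(0.0, 50.0), y0=[1.0],
                events=hit_threshold, dense_output=True)

t_stop = sol.t_events[0][0]
print("integration stopped at t =", t_stop)   # analytically 2*ln(10) ~ 4.6
print("y there:", sol.sol(t_stop))
```

The same pattern (a scalar event function with `terminal` and `direction` attributes) replaces the exception-raising workaround described in one of the other answers.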
3 The solution is https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html From the documentation : ‘RK45’ or ‘RK23’ method for non-stiff problems and ‘Radau’ or ‘BDF’ for stiff problems The documentation taken from scipy: scipy.integrate.solve_ivp(fun, t_span, y0, method='RK45', t_eval=None, dense_output=False, events=None, ... 2 I ended up raising an exception in the deriv() function (and catching it in the loop) because it turned out to be the only reliable way to end integration in every single case and for every ODE. (While I considered this option already in my original question, it seems there is no other way. If anybody presents a better option I will gladly mark her/his ... 2 SRM is a standard protocol, so it is suppose to work for all storage implementations. The only things that differ are the protocol implementations. In addition to bestman2 and dcache srm tools, there is also another popular implementation, lcg-util (lcg-cp, lcg-ls...). I don't know why srmping and srm-ping do not work the same. To be honest, I have ... 2 Writing C++ code from the ground up for adaptive mesh refinement (as part of a PDE solver) is a relatively complicated endeavor and can easily involve thousands of lines of code for even simple problems. Based on the comments it sounds like you may be fairly new to programming (for example unsure how to implement a binary tree). Of course there is nothing ... 2 If your data are regularly spaced, you do not need an interpolation procedure, you can directly plot a contour through a contour or a pcolor command. That why I don't understand why you need to initially interpolate. Contouring algorithms often use marching cube method (and its variations) to render a contour plot. This kind of algorithm only work with a ... 2 A simpler answer, IMHO, is to solve your system using PyDSTool. It has built-in support for accurate event finding. In fact, it's easier to set up and more efficient (including faster) than Matlab's approach if you use the C-based solvers. I am the author of this package. A tutorial using events is given here. 2 It seems like you are inventing Barnes-Hut-type algorithm, which is a fundamental accelerated algorithm for n-body simulations. It follows a similar logic: you combine masses on a grid. But, it is done in a more elaborate fashion: when particles are close, you still use direct calculations (no combination), otherwise, you would lose accuracy. multipole ... 2 In the section about adaptive Methods Chapter 16. in "Chebyshev and Fourier Spectral Methods" from John P. Boyd several different coordinate transformations together with their application in different publications are presented. I have not used any of those transformation myself but they can serve as a starting point for your problem. y is the physical (... 2 Use the chain rule to get the derivative on the non-uniform grid, \frac{dy}{d\zeta}:$$ \begin{align} \frac{dy}{d\zeta} &= \frac{dy}{dx} \frac{dx}{d\zeta}\\ &= \frac{dy}{dx} \cdot 2 \zeta \\ &= \frac{dy}{dx} \cdot 2 \sqrt{x} \\ \end{align} $$So if you have a way of computing \frac{dy}{dx} on the uniform grid, you can simply scale it by 2\... 2 Let's assume the following heat transient heat transfer equation in 1D :$$ \frac{\partial T} {\partial t} = \alpha \frac{\partial^2 T}{\partial x^2} $$If we take its finite difference approximation using an explicit time scheme we obtain :$$ \frac{T_i^{t+\Delta t} - T_i^{t}}{\Delta t} = \frac{\alpha}{(\Delta x)^2} (T_{i-1}^{t}-2T_i^{t}+T_{i+1}^{t}) $... 
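The explicit finite-difference update quoted just above can be written out in a few lines. This is only a sketch: the grid size, diffusivity, boundary values, and step count are arbitrary, and the time step is chosen below the explicit stability limit Δt ≤ Δx²/(2α).

```python
import numpy as np

alpha = 1.0
nx = 51
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha          # below the explicit stability limit dx^2 / (2*alpha)

T = np.zeros(nx)
T[0], T[-1] = 1.0, 0.0            # fixed (Dirichlet) boundary values

for _ in range(2000):
    # FTCS update of the interior points; the right-hand side is evaluated
    # entirely from the old time level before the assignment happens.
    T[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[:-2] - 2.0 * T[1:-1] + T[2:])

print(T[::10])                    # approaches the linear steady state between 1 and 0
```

Doubling the spatial resolution quarters the allowed time step, which is exactly the CFL-type coupling between grid refinement and time step discussed in the neighbouring answers.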
2 You can probably speed things up a little bit by storing the array in 4x4 square subarrays so that each of them fit in cache line (64 bytes = 4x4 32-bit integers). This changes the probability distribution of number of memory accesses from [0,0,1] (mean number of accesses = 3) (for 1,2,or 3 accesses) to [1/4,1/2,1/4] (mean number of accesses = 2). Maybe ... 2 You've almost certainly got to reduce the timestep to maintain stability due to the CFL condition imposed by your explicit timestepping method choice. That being said, for benchmarking purposes, I'd be inclined to pose the problem as computing to a particular final time and want to compare that to the quality of the answers and times to solution on other ... 2 The best choice for a numerical grid is the one that will most accurately approximate the solution to your problem (without being too computationally expensive). But beyond that the specific features will depend heavily on the type of problem you are trying to solve. A grid might be aesthetically pleasing because it cleverly exploits some symmetry of the ... 2 The easiest thing is to ensure a consistent (although arbitrary) tie-breaking scheme. If your nodes/vertices indexed, this usually means preferring the split edge with the lowest index of its lowest vertex. If the edges share the same lowest index vertex, then check the other vertex and choose the one with the lowest index. So in your situation, imaging the ... 2 Assuming that the arrays are passed already sorted (which is a reasonable assumption since you are starting from two 1D grids), I have a solution which is$\mathcal{O}(n+m)$where$n$and$m$are the respective sizes of the arrays. I wrote the code in MATLAB/Octave, but should be easy to translate to Python. % Compare two arrays for overlaps % Test 1 - Easy %... 1 As pointed out in the comments, the CFL condition dictates how big timestep can be. Therefore as we progressively refine in space, we would be required to take smaller time steps if an explicit temporal discretization is employed. Regardless of the smaller time steps, it would still be acceptable to get a time-averaged quantity to compare across the various ... 1 I think the problem is that you've lost the topology upon the first rasterization: | 0 | 1 | 1 | 0.5 | 0 | could be =============== | 0 | 1 | 1 | 0.5 | 0 | or it could be ============ === | 0 | 1 | 1 | 0.5 | 0 | So the answer is no. Unless you demand that two objects cannot end in the same grid point, then you ... 1 I would second the opinion expressed in the context. For that small problem and limited usage, you don't need a library. Generation of a structured grid and retrieving points can be coded up in at most several screens of code. However, I would point out for you ViennaGrid library, which can be used and actually provides STL iterators. In addition, the ... 1 For scattered data at points with no structure, try inverse-distance-weighted-idw-interpolation-with-python. This combines scipy's fast K-d_trees with inverse-distance aka radial kernels:$\qquad interpol( x ) = \sum w_i \, f_i$, sum over say 10 points nearest$x\qquad \qquad \qquad \qquad w_i = {1 \over |x - x_i|^p}$normalized so$\sum w_i = 1\$ . It's ... 1 Your data is already in the right shape, then you don't need to create a meshgrid. See the code below import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import numpy as np I = np.array([ [10.55, 0., 0.], [0., 0., 0.01], [0.2, -0.1, 3.33], [0., 2.14, 0.], [0., 3.80, 0.], [9.02, 0., 0.]]) t = np.array([ [0., ... 
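Since the last answer notes its MATLAB/Octave overlap check is easy to translate to Python, here is a hedged translation of the same two-pointer idea (not the poster's actual code); it walks both sorted arrays once, so it runs in O(n+m).

```python
def sorted_overlap(a, b, tol=1e-12):
    """Return the values that appear in both sorted sequences a and b,
    comparing with an absolute tolerance. Runs in O(len(a) + len(b))."""
    i = j = 0
    common = []
    while i < len(a) and j < len(b):
        if abs(a[i] - b[j]) <= tol:
            common.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1          # advance the pointer behind the other one
        else:
            j += 1
    return common

# Test 1 - Easy
print(sorted_overlap([0.0, 0.5, 1.0, 1.5, 2.0], [0.5, 1.5, 2.5]))   # [0.5, 1.5]
```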
1 The dataset is huge. So I would like to be more memory-efficient. The main point to consider is the following: […] most of the cells will be “unactivated”, i.e most of the elements of the matrix will store an empty list (no signal). This sounds as if an array of lists (or variably sized arrays) is just fine. In this case, the crucial factor for memory ...
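One way to avoid paying for an empty list in every unactivated cell is to keep only the activated cells in a dictionary keyed by (row, column). The sketch below is just an illustration of that idea, not a drop-in replacement for any particular code.

```python
from collections import defaultdict

class SparseGrid:
    """Store per-cell signal lists only for cells that actually receive data."""

    def __init__(self):
        self._cells = defaultdict(list)   # (i, j) -> list of signals

    def add(self, i, j, signal):
        self._cells[(i, j)].append(signal)

    def get(self, i, j):
        # Unactivated cells cost no memory and simply read back as empty.
        return self._cells.get((i, j), [])

    def activated(self):
        return len(self._cells)

grid = SparseGrid()
grid.add(3, 7, 0.42)
grid.add(3, 7, 0.17)
print(grid.get(3, 7), grid.get(0, 0), grid.activated())   # [0.42, 0.17] [] 1
```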
340 US customary cups in US gallons Conversion

340 US customary cups is equivalent to 21.25 US gallons.[1]

Conversion formula

How to convert 340 US customary cups to US gallons?

We know (by definition) that: $1\ \mathrm{uscup} \approx 0.0625\ \mathrm{usgallon}$

We can set up a proportion to solve for the number of US gallons:

$$\frac{1\ \mathrm{uscup}}{340\ \mathrm{uscup}} \approx \frac{0.0625\ \mathrm{usgallon}}{x\ \mathrm{usgallon}}$$

Now, we cross-multiply to solve for our unknown $x$:

$$x\ \mathrm{usgallon} \approx \frac{340\ \mathrm{uscup}}{1\ \mathrm{uscup}} \times 0.0625\ \mathrm{usgallon} \;\to\; x\ \mathrm{usgallon} \approx 21.25\ \mathrm{usgallon}$$

Conclusion: $340\ \mathrm{uscup} \approx 21.25\ \mathrm{usgallon}$

Conversion in the opposite direction

The inverse of the conversion factor is that 1 US gallon is equal to 0.0470588235294118 times 340 US customary cups.

It can also be expressed as: 340 US customary cups is equal to $\frac{1}{0.0470588235294118}$ US gallons.

Approximation

An approximate numerical result would be: three hundred and forty US customary cups is about twenty-one point two five US gallons, or alternatively, a US gallon is about zero point zero five times three hundred and forty US customary cups.

Footnotes

[1] The precision is 15 significant digits (fourteen digits to the right of the decimal point). Results may contain small errors due to the use of floating point arithmetic.
## The message This year's Penn Reading Project book is Adam Bradley's Book of Rhymes: The Poetics of Hip Hop.  In my discussion group yesterday afternoon, several participants complained that some important things about the "poetics" of rap are lost in a purely textual presentation of the lyrics. One student observed that in pieces he knows, the rhythm is there in the written form — but the lyrics for pieces that he doesn't know seem flat and lifeless in comparison. There are good reasons that this is more true for the works of Melle Mel or Jay Z than for Elizabeth Barrett Browning or W.H. Auden, I think. One of the advantages of the weblog format is the combination of text, images, and audio or video clips, so for this morning's Breakfast Experiment™ I decided to present a small exploration of the "poetics of hip hop" in a multimedia — and somewhat quantitative — framework. This exercise will clarify why transcriptions of the lyrics, even with bold-face indications of stress, are missing an important dimension. The lines' scansion depends not only on the syllable sequence and on where the performer puts phrasal stresses, but also on the alignment of the syllables with the musical meter. This alignment is not automatic or always obvious — it has artistically-relevant degrees of freedom beyond those available in most other genres of text setting. For those whose appraisal of Bradley's book was (interpreting freely) "not enough vampires and car chases", this will probably make things worse — you have been warned. Let's look at a piece from the classical period, Grandmaster Flash and the Furious Five's 1982 "The Message": The background has a clear and steady beat. The metrical pattern that lines up with a line of the verse lasts about 2.4 seconds — one period of that pattern is shown below. (I've taken it to represent two bars of 4/4 time, though Jonathan Mayhew in the comments suggests that a more standard analysis would be transcribe it as one bar.) What matters to the analysis is that if we track this metrical pattern through the times when Melle Mel is rapping, we find that he subdivides it into (a maximum of) 16 minimal time units, as in this couplet from the first verse: The basic pattern of the musical meter is the usual binary hierarchy — to get the 16 units, the line is subdivided into 2, subdivided into 4, subdivided into 8, subdivided into 16. Above, I've used a couple of Audacity label tracks for the musical meter and for the alignment of the words with the metrical line divided into 8 time-units, which I've notated as 1 through 4 and then 1 through 4 again. As usual, it's the syllable onset — the place where the amplitude is rising most rapidly, known to generations of phoneticians as the "P Center" — that aligns with a given metrical "beat". Here's the same couplet presented in tabular form, with the off-beat positions (nominally eighth notes in the transcription I've assumed) labelled with "+": + 2 + 3 + 4 + 1 + 2 + 3 + 4 + 1 I can't take the smell can't take the noise got no mo- -ney to move out I guess I got no choice + 2 + 3 + 4 + 1 + 2 + 3 + 4 + 1 The relation between musical meter and poetic form is more complex here than in (some types of) traditional verse setting. 
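Given an alignment like the table above, the position-by-position syllable counts used later in the post reduce to a simple tally: tag each syllable with its metrical slot (1, 1+, 2, 2+, …) and count. The sketch below uses a made-up alignment purely to show the bookkeeping; producing the actual transcription is, of course, the hard part.

```python
from collections import Counter

# Each entry pairs a syllable with the metrical slot it lands on
# (slots named 1, 1+, 2, 2+, ... 4+ across a half-line). This alignment
# is invented for illustration, not taken from the recording.
alignment = [
    ("can't", "2+"), ("take", "3+"), ("the", "4"), ("smell", "4+"),
    ("can't", "1+"), ("take", "2"), ("the", "2+"), ("noise", "3"),
]

counts = Counter(slot for _, slot in alignment)
for slot in ["1", "1+", "2", "2+", "3", "3+", "4", "4+"]:
    print(f"{slot:>2}: {counts[slot]}")
```

Summing such tallies over all lines of a verse gives exactly the kind of syllables-per-position profile plotted below.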
Among other complexities, we need to deal with syncopations of the kind that I discussed in the post "Rock syncopations: Stress shifts or polyrhythms?", 11/26/2007 — thus in phrase "can't take the noise" in the first line of the couplet shown above, the syllables "can't" and "take" are aligned with the offbeats that I've labelled 2+ and 3+, rather than the nominally more prominent positions 3 and 4. Is this just a stylistic syncopation, or is it something more fundamental? Whatever the answer to this question, a kind of statistical shape emerges if we look at the overall syllable-to-musical-meter correspondence. Here's the whole first verse (of five) in the same format — four couplets, eight lines, 16 bars, 128 minimal time units: + 2 + 3 + 4 + 1 + 2 + 3 + 4 + 1 bro- -ken glass ev- -ry- where peo- -ple pis- -sing on the stairs you know they just don't care I can't take the smell can't take the noise got no mo- -ney to move out I guess I got no choice rats in the front room roa- -ches in the back jun- -kies in the al- -ley with a base ball bat I tried to get a- -way but I could- -n't get far cause a man with a tow truck re- -pos- -sessed my car + 2 + 3 + 4 + 1 + 2 + 3 + 4 + 1 If we just add up the number of syllables in this verse, meter position by meter position — even without taking account of stress — we get a plot like this: Except for position 2 in the first bar, which is clearly a kind of upbeat, an alternating binary strong-weak pattern does emerge. If we do the same thing for all five verses (74 lines in all) the pattern is confirmed: However, in the chorus, something very different is going on. In the opening four lines — treated as a double-length couplet, with rhymes at the end of each pair of lines — each 16-count line is divided as 3|3|3|4|3. Along with Melle Mel's delivery, this shift into a polyrhythm — with the background maintaining the straight 4|4|4|4 — is a sort of metrical icon of the mental instability he's talking about: + 2 + 3 + 4 + 1 + 2 + 3 + 4 + 1 don't push me cause I'm close to the edge I'm try- -ing not to ha ha it's like a jun- -gle some- -times it makes me won- -der how I keep from go- -ing un- -der + 2 + 3 + 4 + 1 + 2 + 3 + 4 + 1 Here's a graphical comparison of the syllable distribution in the chorus, compared with the syllable distribution in the first two verses: And as the piece develops, the shift of the ends of lines to a 3|2|3 pattern x     x   x     x x x x x x x x x x x 2 + 3 + 4 + 1 + 2 + becomes more and more common, Here's a comparison of syllable-counts-by-position in verses 1-2 (24 lines) with verse 5 (28 lines): For a good example of this effect, see the last two couplets: + 2 + 3 + 4 + 1 + 2 + 3 + 4 + 1 it was plain to see that your life was lost you was cold and your bo- -dy swung back and forth of how you lived so fast and died so young + 2 + 3 + 4 + 1 + 2 + 3 + 4 + 1 You could see this polyrhythmic infiltration as a metrical symbol of the personal instability that the lyrics talk about. As the genre develops, varied and complex polyrhythms become more common and are given a different interpretation, as in Eric B. and Rakim's 1987 "I Know You Got Soul": + 2 + 3 + 4 + 1 + 2 + 3 + 4 + 1 Ra- -kim 'll be- -gin when you make the mix I'll ex- -per- -i- -ment like a sci- -en- -tist + 2 + 3 + 4 + 1 + 2 + 3 + 4 + 1 But that's a story for another time. 1. ### Jonathan Mayhew said, August 27, 2013 @ 9:25 am Great post. The beats you label as 3s I take to be 1s. I'm curious why you don't hear those as the beginnings of the measure? 
Of course, I could be wrong. It wouldn't be the first time. [(myl) You might be right. But in this context, what really matters how the verses line up with the 16-unit two-bar sequence, and the choice I made has the lines ending in the 4-to-1 region (around the end of a bar) rather than in the 2-to-3 region (in the middle of a bar), which seemed more appropriate to me.] 2. ### Dimitri said, August 27, 2013 @ 9:39 am I agree with Jonathan. The famous — iconic — keyboard riff starts at the beginning of each measure. 3. ### Jonathan Mayhew said, August 27, 2013 @ 10:24 am Also, as I count it the tempo is about 100 beats per minute. There are about 25 bars in the first minute of the video, for example. You seem to be counting 8ths as quarters to get to 198bpm. [(myl) Whatever note denominations we use in a musical transcription, the 16-time-unit sequence that lines up with a line of the verse lasts about 2.4 seconds. If we take that to be two bars of 4/4 time, then that's about 2.4/8 = 0.3 seconds per quarter note, or about 60/0.3 = 200 quarter notes per minute. On that analysis (which is what I assumed), each of the 16 minimal-time-units is an eighth note. You're interpreting the same sequence as one bar of 4/4 time, I guess, in which case there are about 100 quarter notes per minute, and the minimal time unit of the 16-unit sequence is a sixteenth note. This doesn't matter to my analysis, since all that I care about is the alignments of the syllables with the 16-unit sequences. But if my (mis-?)assumptions about translation to musical notation confuse the picture, I apologize.] 4. ### Jonathan Mayhew said, August 27, 2013 @ 10:42 am Right. It makes no difference to your analysis. It would just be confusing since it diverges from the standard way in which musicians or knowledgeable listeners would normally count. http://www.beatport.com/track/the-message-original-mix/1589435 [(myl) OK, I've modified the presentation in a way that (I hope) relieves the confusion…] 5. ### Ralph Hickok said, August 27, 2013 @ 11:01 am Without regard to the musical accompaniment, I've long felt that rap/hip hop prosidically resembles skeltonics, but i have neither the technical knowledge nor the ability to do an analysis. [(myl) Maybe so, in content as well as in form — one example of Skelton's work: Youre key is mete for euery lok, Youre key is commen and hangyth owte; Youre key is redy, we nede not knok, Nor stand long wrestyng there aboute; Of youre doregate ye haue no doute: But one thyng is, that ye be lewde: Holde youre tong now, all beshrewde! To mastres Anne, that farly swete, That wonnes at the Key in Temmys strete. Or another: Say this, and say that, His hed is so fat, He wotteth neuer what Nor wherof he speketh; He cryeth and he creketh, He pryeth and he peketh, He chydes and he chatters, He prates and he patters, He clytters and he clatters, He medles and he smatters, He gloses and he flatters; Or yf he speake playne, Than he lacketh brayne, He is but a fole; Let hym go to scole, On a thre foted stole That he may downe syt, For he lacketh wyt; ] 6. ### J.W. Brewer said, August 27, 2013 @ 3:29 pm University administrators trying to appear hip-and-with-it-and-relevant by exposing freshman born circa 1995 to a text from the lost and irrelevant world of 1982 seems like something out of a David Lodge novel. Better to have them read Skelton. [(myl) Adam Bradley's book takes the story up through the time he sent the ms to the publisher, in 2008. 
I chose an old-timey example because the freshmen in my discussion group, born in 1994-5, were familiar with recent work but not with older stuff. As for getting 2500 Penn freshmen to read Skelton, good luck with that.] Back in the Sixities, I think the Uptight Grownups used to play a parlor game in which lyrics by the Beatles or Dylan or whoever the Young People made extravagant claims for the literary merits of could be solemnly intoned aloud with a po-faced affect, as if they were poems found in a dusty volume of Dickinson or Tennyson, in which context they would typically appear ridiculous. I expect that sometimes this was in fact because they actually were ridiculous as stand-alone texts – they worked in context only because of the musical setting or harmony or whatnot — but in others it was because the way the lyrics worked with the music meant they were sung with some sort of non-intuitive prosody or timing that was at odds with how you would just read them aloud left to your own devices, and thus worked better if you could figure out how to adjust your reading-aloud performance to the timing in the song. I do not think the Ur-rap of 1982 was particularly novel in this regard. [(myl) Jazz, blues, R&B, and rock have all had a similar — or even greater — amount of freedom in text setting, and so I'm sure that the effect I wrote about here applies to lyrics in those genres as well. For songs that are metrically more "square", there are obviously other reasons why being familiar with the music adds to the experience of reading or recalling the lyrics. Equally obviously, there are kinds of poetry that are created and interpreted mainly through the channel of text. This post was an attempt to illustrate one aspect of the difference.] 7. ### AntC said, August 27, 2013 @ 4:25 pm @J.W.B. Perhaps you're thinking of Peter Sellers reciting "A Hard Day's Night" po-faced? I found (and find) it riotously funny. 8. ### maidhc said, August 27, 2013 @ 4:41 pm A classic example of what J.W. Brewer is talking about is Steve Allen reading the lyrics to "Be-Bop-a-Lula". You can find it on YouTube. What I wasn't able to find on YouTube, though, is the answer. Phil Spector (I think it was) went on the Steve Allen Show and told Steve that he was missing the point because the point was the beat, illustrating his argument by banging loudly on the desk while singing the lyrics. Now that we have a historical perspective, I think Phil Spector (if he was the person) won that argument. There's a difference between strict poetic metre and song metre, because in poetic metre every syllable has to be accounted for, whereas in song metre it's the stresses that are important, while the number of unstressed syllables in between can vary. Although some poetry can work like that too. I'm a GET up in the MORN-ing I be- LIEVE I'll DUST my BROOM (although neither the Robert Johnson or the Elmore James version is that rhythmically simple, they both put the last stress on the upbeat) 9. ### maidhc said, August 27, 2013 @ 4:48 pm AntC: Peter Sellers did a few Beatles songs. The one I hear the most is the Germanic version of "She Loves You" — "Ja?" "Ja!". But the difference is that Peter Sellers was a friend of the Beatles, and they were great admirers of his work. They had the same producer, George Martin. Whereas Steve Allen was just being nasty. 10. 
### Asa said, August 27, 2013 @ 4:56 pm I read the first two comments (by Jonathan Mayhew and Dimitri) as pointing out a question not of measure length but of where the measure starts. I agree with what I take those first comments to be saying: that the standard way to parse this would be to start the measure on what you call beat 3. For example, in your first "tabular form" example, I definitely hear "can't" as beat 1, "smell" as beat 3, etc. I think a large part of what has me convinced of this is the snare drum hit that occurs every four beats. In the measure-long scheme of things, such a hit tends to fall on the off-beat (i.e., beat 3). [(myl) FWIW, there was also an issue about the right metronome marking, which came down to how long the bars are and how to notate the minimal time units. I rewrote the post so as to be as neutral as possible on that point. As for where the bar lines are, I think it's clearer to have the lines start and end about where bars start and end — the relative strength of the metrical positions. But you can renumber everything shifted over by two, if you want to.] 11. ### J.W. Brewer said, August 27, 2013 @ 6:45 pm I suppose we all develop strong idiosyncratic/extra-textual associations with particular texts depending on particular life circumstances in which we may have encountered them. For me, "The Message" is irrevocably "about" being with an overwhelmingly white and suburban crowd of high school students at a Model U.N. in Hershey, Pa. sometime in the winter of '82-'83 and trying to strike up a better acquaintance with a young lady representing West Germany (by way of some town in New Jersey), even though I strongly suspect that was not what the lyricist(s) had in mind. So I can see the pedagogical point of using a text that may have fallen into sufficient obscurity that students are hopefully coming to it with their minds a blank slate. (Although because of continuing Boomer cultural hegemony, Guitar Hero, and various other phenomena, it can be difficult to predict which hit songs of my childhood can in fact safely be considered obscure for Kids Today — indeed sometimes songs of my youth that only us obscurantist hipsters knew about subsequently became widely known to a mass audience because some hipster who grew up to work at an ad agency used them in a commercial.) 12. ### dainichi said, August 27, 2013 @ 6:54 pm Interesting post! I second @Jonathan Mayhew's view that the measures should be shifted by 2 beats. It might not make a difference to the result of the analysis, but it makes it much harder to follow for musicians who are used to 1s being 1s and 3s being 3s. Just some observations: In "broken glass everywhere", "-ry-" falls on (what you call) 4, not 3+ and "-where" falls on 4+, not 4. In "junkies", "-kies" falls on 3+, not 3. In "I'm trying not to lose my head", "not", "to" and "lose" fall on 4, 1+ and 3, not on 3+, 1 and 2+. 13. ### Layra said, August 28, 2013 @ 2:02 am While the exact numbering of the beats might not matter to the question of between-beat syncopation, it does have rhythmic implications in that the first beat of the measure is generally the strongest and hence acts as a rhythmic reference point. So the alignment of the lyrical lines with the musical measures becomes a coarser-grained form of syncopation and which beat is considered the start of the musical measure is rhythmically significant here. 
Thus in the interest of allowing the same charts to be used for analysis of lyrical-versus-musical alignment on the scale of beats rather than portions of beats, it would be helpful to indicate the beginning of the musical measure. 14. ### Jason said, August 28, 2013 @ 3:30 am University administrators trying to appear hip-and-with-it-and-relevant by exposing freshman born circa 1995 to a text from the lost and irrelevant world of 1982 seems like something out of a David Lodge novel. I'm not so sure about this. There was a Tupac Shakur-centric course at the University of Oslo last year, and in recent years, at University of Washington and UC Berkely (offered on and off since 1997.) They wouldn't be offering these kinds of courses if students weren't interested. Which reminds me.. what ever happened to the once-fecund field of Madonna Studies? (The artist, obviously.) I don't see much Gaga Studies. Presumably the successor to Madonna will be studied by the successor to Camille Paglia, but I don't know who that is. Hopefully no-one? 15. ### Lance said, August 28, 2013 @ 9:58 am But the difference is that Peter Sellers was a friend of the Beatles, and they were great admirers of his work. They had the same producer, George Martin. Whereas Steve Allen was just being nasty. Even more than that, Peter Sellers was being deliberately incongruous. Allen was presumably trying to point out that the lyrics seem banal when read as poetry; but no one expects the lyrics to A Hard Day's Night to sound right when done in the style of Laurence Olivier doing Shakespeare: http://www.youtube.com/watch?v=zLEMncv140s . 16. ### Joe said, August 28, 2013 @ 11:15 am The punctuation in Emily Dickinson's poetry (em-dashes, capitalizations, semi-colons) had been judged "idiosyncratic" because it did not seem "grammatical" or followed a particular writing convention. However when you try to set Dickinson's poetry to music (typcally in hymn meter) these "idiosyncrasies" start to make sense as rhythmic notations (eg, em-dashes tell you when to pause for a whole beat, captializations indicate stresses). This post reminds me of that insight. Because of this, Camille Paglia calls Dickinson "the first modernist master of syncopation and atonality". She calls Dickinson a bunch of othjer names, but I think this particluar one sticks. 17. ### J.W. Brewer said, August 28, 2013 @ 2:46 pm 18. ### Jonathan said, August 29, 2013 @ 8:05 am I love this post! But the first commenter is right. The snare hits show the third beat of the bar, not the downbeat. This mistake makes the whole analysis difficult to follow. 19. ### Collier said, August 29, 2013 @ 10:36 pm Derek Attridge's book "Poetic Rhythm" has a great chapter on this. He points out that scholars can't figure out quite how poetry in Old English should sound, because it's strong-stress verse, basically like rap, instead of metrical verse. Because no recordings exist, all we have is what's on the page, which is impossible to fully scan. He uses EPMD's "It's My Thing" and Tone Loc's "Wild Thing" as examples illustrating how difficult it is to map stressed verse on a page. Here's a link to part of the chapter, available in Google Books: http://bit.ly/14IAKBQ. 20. ### C. Scott Ananian said, September 5, 2013 @ 4:38 pm Count me in with the musicians. Shift things over two. For whatever reason, lyrics are typically set with a partial "pickup measure" so that the unstressed words precede "beat 1", which is the location of the first stress. 
Even if you look over at hymns or fiddle music, the melody crosses the "measure boundaries" in a way which, I guess, a nonmusical person (or expert linguist?) wouldn't expect. Musical notation has developed mechanisms to handle how this interacts with measure boundaries, eg repeats with alternate endings, dal signo (https://en.wikipedia.org/wiki/Dal_segno), etc. FWIW, Square dance singing calls also have a stereotyped "8 lines of 8 beats each" structure: https://www.youtube.com/watch?v=3_TqFOyYYmc
# Projectile motion of baseball

When you throw a baseball obliquely in the air, it shows projectile motion as in the picture. Suppose that the height of the ball $1$ second after you throw it is $25$ m. Then how many seconds will it take for the ball to reach the peak, from the moment you throw the ball? Gravitational acceleration is $10 \text{ m/s}^2$, and air resistance is negligible.
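A worked solution, assuming the ball leaves the hand at height $0$ at $t=0$ and considering only the vertical component: from $y(t) = v_y t - \tfrac{1}{2} g t^2$ and $y(1) = 25$ m we get $v_y - 5 = 25$, so $v_y = 30 \text{ m/s}$. The peak is reached when the vertical velocity vanishes, at $t_{\text{peak}} = v_y / g = 30/10 = 3$ seconds after the throw.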
# Does simulataneity contradicts the constancy of speed of light 1. Sep 8, 2011 ### universal_101 This is my first post here, Consider the typical simultaneity scenario: An observer O' right in the middle of two light signal sources A' & `B', all three has their clocks synchronized to each other. Now, A & B both emit a light signal simultaneously and then again one second later. After one second from receiving the first signal the observer starts moving(does not matter how fast) towards one of the sources. Now there are two contradicting statements as 1. According to simultaneity problem the second signal would be non-simultaneous w.r.t the moving observer O. 2. And as the speed of light signal is same in every inertial frame, this means w.r.t the moving observer O, speeds of both the second signals should be same. As the moment the signals were emitted the distance between the observer and the sources was same. This means same speed and same distance should result in same time to arrive, therefore the second event should also be simultaneous. 2. Sep 8, 2011 ### Staff: Mentor Have you tried to draw a spacetime diagram of your scenario? If you do, I think it may help you to see the error in your reasoning. However, I'll try to explain the error verbally. The fact that the distance between O and B (or O and A) is the same at time t = 1 as at time t = 0, does *not* imply that the signals emitted at time t = 1 should take the same time to arrive at O as the signals emitted at time t = 0. This is because O starts moving towards B and away from A at time t = 1, so he is moving towards the signal coming from B and away from the signal coming from A. So the B signal has to cover less distance to reach O (because O covers some of that distance himself, since he is moving) and the A signal has to cover more distance (because it has to cover the extra distance that O moved towards B). The key point is that it's not the instantaneous distance at the time of emission that matters, but the distance that the light actually has to cover, which must take into account the movement of O between the times of emission and reception. All the above analysis is done in the original frame (the one in which A, B, and O all start out at rest, and in which A and B remain at rest). However, once done, it makes clear that the B signal and the A signal (meaning the signals emitted at time t = 1 in the original frame) arrive at O at *different events*, which is the key point. That is, the B signal crosses O's worldline at a different point than the A signal does. Drawing a spacetime diagram makes this easy to see. This difference in arrival events is *invariant*; it must hold in any frame, because actual physical events, like the meeting of two worldlines (a light signal and an observer), are frame-independent. But if the two signals cross O's worldline at different points once O starts moving, then the signal emission events cannot be simultaneous to O, by definition (that is, they will not be simultaneous in O's moving frame). According to O, once O starts moving, B's signal is emitted *before* A's signal. This is what your statement #1 asserts, and it is correct. So to summarize: in the original frame, the difference in arrival times of the signals at O (B first, then A) is due to the difference in distance traveled; the signals are both emitted at the same time and travel at the same speed, but B's travels a shorter distance than A's does because O is moving toward B and away from A. 
But in the moving frame (O's moving frame), the difference in arrival times is due to B's signal being emitted before A's; both signals travel the same distance at the same speed, but B's is emitted before A's is, so it arrives first. So there is no contradiction; instead, your statement #2 is simply not correct, and when it is corrected it agrees with #1, that the events are not simultaneous to O when O is moving. Again, drawing a spacetime diagram (or better yet, two diagrams, one from the viewpoint of the original frame and one from O's moving frame) makes it easy to see how everything fits together. 3. Sep 8, 2011 ### universal_101 This is exactly what other people i talked with said, that in the frame of reference of the observer O the emitting of second signals are themselves non-simultaneous But, what about the synchronization of clocks, or better yet what if the observer starts moving at time t=1.5 instead of 1. This means when the second signals were emitted at t=1 they were perfectly simultaneous even in observer's frame of reference. 4. Sep 8, 2011 ### Staff: Mentor If the observer O is at rest with respect to A and B at time t = 0, but starts moving with respect to A and B at time t = 1 (both times with respect to A and B's mutual rest frame), then "the observer's frame of reference" does not name a single inertial frame. The A and B signals emitted at time t = 1 in their mutual rest frame are simultaneous in that frame (which is O's frame *before* he starts moving), but are *not* simultaneous in O's moving frame (after he starts moving). In other words, you can change whether a pair of events "look simultaneous" to you by changing your state of motion. That seems counterintuitive, I know, but it's a necessary consequence of simultaneity being frame-dependent. When you change your state of motion, you change which inertial frame is "at rest" for you, so you change which events look simultaneous to you. This is another reason why I like spacetime diagrams: they help you to focus on what does *not* change when you change your state of motion, which is where the real physics is. No actual physical result depends on whether or not two events look simultaneous to you. 5. Sep 8, 2011 ### universal_101 My thanks to you for giving me the time. And I guess, I understand it now Thanks again, 6. Sep 8, 2011 ### Staff: Mentor You don't sound quite sure. Feel free to ask more questions if you need to.
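For readers who like numbers alongside the spacetime diagrams, a quick numeric check of the scenario reproduces both descriptions above. The units and the observer's speed are chosen arbitrarily for the example: c = 1, the sources sit one light-second from O, and O moves toward B at 0.5c after t = 1.

```python
import math

L = 1.0          # distance from O to each source (light-seconds), c = 1
v = 0.5          # observer's speed toward B after t = 1 (arbitrary choice)
gamma = 1.0 / math.sqrt(1.0 - v**2)

# Emission events of the second signals in the original frame:
tA, xA = 1.0, -L      # source A
tB, xB = 1.0, +L      # source B

# Arrival times at the moving observer, who starts from x = 0 at t = 1:
#   B's signal:  L - (t - 1) = v (t - 1)  ->  t - 1 = L / (1 + v)
#   A's signal: -L + (t - 1) = v (t - 1)  ->  t - 1 = L / (1 - v)
t_arrive_B = 1.0 + L / (1.0 + v)
t_arrive_A = 1.0 + L / (1.0 - v)
print("arrival of B, A signals:", t_arrive_B, t_arrive_A)      # ~1.667 vs 3.0

# Lorentz-transform the emission events into the observer's moving frame:
tB_prime = gamma * (tB - v * xB)
tA_prime = gamma * (tA - v * xA)
print("emission times in moving frame (B, A):", tB_prime, tA_prime)  # B before A
```

The two print statements are the two bookkeepings described above: different arrival events in the original frame, and B emitted before A once the emission events are expressed in the moving frame.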
# Using Dnsmasq for local development on OS X This is a quick guide to installing Dnsmasq on OS X and using it to redirect development sites to your local machine. Posted by Thomas Sutton on October 23, 2013 Most web developers will be familiar with the process of updating your /etc/hosts file to direct traffic for coolproject.dev to 127.0.0.1. Most will also be familiar with the problems of this approach: • it requires a configuration change every time you add or remove a project; and Installing a local DNS server like Dnsmasq and configuring your system to use that server can make these configuration changes a thing of the past. In this post, I’ll run through the process of: 1. Installing Dnsmasq on OS X. 2. Configuring Dnsmasq to respond to all .dev requests with 127.0.0.1. 3. Configure OS X to send all .dev requests requests to Dnsmasq. Before we get started, I should give you a warning: these instructions show you how to install new system software and change your system configuration. Like all such changes, you should not proceed unless you are confident you have understood them and that you can reverse the changes if needed. ## Installing Dnsmasq Dnsmasq is a lightweight, easy to configure DNS forwarder and DHCP server […] is targeted at home networks[.] There are plenty of ways to install Dnsmasq but my favourite (on OS X) is to use the Homebrew package manager. Installing Homebrew is fairly simple but beyond my scope here. Once you have Homebrew installed, using it to install Dnsmasq is easy: # Update your homebrew installation brew up # Install dnsmasq brew install dnsmasq The installation process will output several commands that you can use to start Dnsmasq automatically with a default configuration. I used the following commands but you should use whichever commands brew tells you to: # Copy the default configuration file. cp $(brew list dnsmasq | grep /dnsmasq.conf.example$) /usr/local/etc/dnsmasq.conf # Copy the daemon configuration file into place. sudo cp $(brew list dnsmasq | grep /homebrew.mxcl.dnsmasq.plist$) /Library/LaunchDaemons/ # Start Dnsmasq automatically. sudo launchctl load /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist ## Configuring Dnsmasq Now that you have Dnsmasq installed and running, it’s time to configure it! The configuration file lives at /usr/local/etc/dnsmasq.conf by default, so open this file in your favourite editor. One the many, many things that Dnsmasq can do is compare DNS requests against a database of patterns and use these to determine the correct response. I use this functionality to match any request which ends in .dev and send 127.0.0.1 in response. The Dnsmasq configuration directive to do this is very simple: address=/dev/127.0.0.1 Insert this into your /usr/local/etc/dnsmasq.conf file (I put it near the example address=/double-click.net/127.0.0.1 entry just to keep them all together) and save the file. You may need to restart Dnsmasq to get it to recognise this change. Restarting Dnsmasq is the same as any other service running under launchd: sudo launchctl stop homebrew.mxcl.dnsmasq sudo launchctl start homebrew.mxcl.dnsmasq You can test Dnsmasq by sending it a DNS query using the dig utility. Pick a name ending in dev and use dig to query your new DNS server: dig testing.testing.one.two.three.dev @127.0.0.1 You should get back a response something like: ;; ANSWER SECTION: testing.testing.one.two.three.dev. 
0 IN A 127.0.0.1 ## Configuring OS X Now that you have a working DNS server you can configure your operating system to use it. There are two approaches to this: 1. Send all DNS queries to Dnsmasq. 2. Send only .dev queries to Dnsmasq. The first approach is easy – just change your DNS settings in System Preferences – but probably won’t work without additional changes to the Dnsmasq configuration. The second is a bit more tricky, but not much. Most UNIX-like operating systems have a configuration file called /etc/resolv.conf which controls the way DNS queries are performed, including the default server to use for DNS queries (this is the setting that gets set automatically when you connect to a network or change your DNS server/s in System Preferences). OS X also allows you to configure additional resolvers by creating configuration files in the /etc/resolver/ directory. This directory probably won’t exist on your system, so your first step should be to create it: sudo mkdir -p /etc/resolver Now you should create a new file in this directory for each resolver you want to configure. Each resolver corresponds – roughly and for our purposes – to a top-level domain like our dev. There a number of details you can configure for each resolver but I generally only bother with two: • the name of the resolver (which corresponds to the domain name to be resolved); and • the DNS server to be used. For more information about these files, see the resolver(5) manual page: man 5 resolver Create a new file with the same name as your new top-level domain (I’m using dev, recall) in the /etc/resolver/ directory and add a nameserver to it by running the following commands: sudo tee /etc/resolver/dev >/dev/null <<EOF nameserver 127.0.0.1 EOF Here dev is the top-level domain name that I’ve configured Dnsmasq to respond to and 127.0.0.1 is the IP address of the server to use. Once you’ve created this file, OS X will automatically read it and you’re done! ## Testing Testing you new configuration is easy; just use ping check that you can now resolve some DNS names in your new top-level domain: # Make sure you haven't broken your DNS. ping -c 1 iam.the.walrus.dev PING iam.the.walrus.dev (127.0.0.1): 56 data bytes You can now just make up new DNS names under .dev whenever you please. Congratulations!
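As an extra check, any program that resolves hostnames through the system libraries should see the same behaviour as ping; for example, a small Python sketch (the hostname is made up):

```python
import socket

# Any name under the configured top-level domain should resolve to 127.0.0.1
# once Dnsmasq and the /etc/resolver/dev file are in place.
addr = socket.gethostbyname("made.up.name.dev")
print(addr)    # expected: 127.0.0.1
```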
# Tag Info 47 My impression is that, by and large, traditional algebra is rather too specific for use in Computer Science. So Computer Scientists either use weaker (and, hence, more general) structures, or generalize the traditional structures so that they can fit them to their needs. We also use category theory a lot, which mathematicians don't think of as being part ... 40 A type $C$ has a logarithm to base $X$ of $P$ exactly when $C \cong P\to X$. That is, $C$ can be seen as a container of $X$ elements in positions given by $P$. Indeed, it's a matter of asking to what power $P$ we must raise $X$ to obtain $C$. It makes sense to work with $\mathop{log}F$ where $F$ is a functor, whenever the logarithm exists, meaning $\mathop{... 27 There have been recent developments in dependent type theory which relate type systems to homotopy types. This is now a relatively small field, but there is a lot of exciting work being done right now, and potentially a lot of low hanging fruit, most notably in porting results from algebraic topology and homological algebra and formalizing the notion of ... 24 Algebraic geometry is used heavily in algebraic complexity theory and in particular in geometric complexity theory. Representation theory is also crucial for the latter, but it's even more useful when combined with algebraic geometry and homological algebra. 23 My all-time favorite application of group theory in TCS is Barrington's Theorem. You can find an exposition of this theorem on the complexity blog, and Barrington's exposition in the comment section of that post. 18 Algebraic geometry Noether's Normalization Lemma (NNL) for explicit varieties is currently only known to be in$\mathsf{EXPSPACE}$(like general NNL), but is conjectured to be in$\mathsf{P}$(and is in$\mathsf{P}$assuming that PIT can be black-box derandomized). Update 4/18/18: It was recently shown that for the variety$\overline{\mathsf{VP}}$it is in$... 17 You could try the notes from Madhu Sudan's course: Algebra and Computation 16 There is not a canonical such category, for the same reason there is no canonical category of computations. However, there are large and useful algebraic structures on data structures. One of the more general such structures, which is still nevertheless useful, is the theory of combinatorial species. A species is a functor $F : B \to B$, where $B$ is the ... 15 Groups, rings, fields, and modules are everywhere in computational topology. See especially Carlsson and Zomorodian's work [ex: 1] on (multidimensional) persistent homology, which is all about graded modules over principal ideal domains. 15 Your knowledge of field theory would be useful in cryptography, while category theory is heavily used in the research on programming languages and typing systems, both of which are closely related to the foundations of mathematics. 14 Indeed, there is a different notion than isomorphism which is more useful in programming. It is called "behavioural equivalence" (sometimes called "observational equivalence") and it is established by giving a "simulation relation" between data structures rather than bijections. Algebraists came in and established an area called "algebraic data types" in ... 14 Here is a very nice, practical use: an algorithm for computing graph connectivity (from FOCS2011). 
To compute the s->t connectivity of a graph, the authors give an algorithm that assigns random vectors with entries drawn from a finite field to the out edges from s, then construct similar vectors for all of the edges in the graph by taking random linear ... 14 Finite automata in which the initial state is also the unique accepting state have the form $r^∗$, where $r$ is some regular expression. However, as J.-E. Pin points out below, the converse is not true: there are languages of the form $r^*$ which are not accepted by a DFA with a unique accepting state. Intuitively, given a sequence of states $q_0, \ldots, ... 12 [Computational complexity and category theory] seem like such natural pairs. Given the prominence of computational complexity as a research field, if they were such natural bedfellows, maybe somebody would have brought out the connection already? Wild speculation. Let me entertain the reader with thoughts about why a categorical rendering of ... 12 Universal algebra is an important tool in studying the complexity of constraint satisfaction problems. For example, the Dichotomy Conjecture states that, roughly speaking, a constraint satisfaction problem over a finite domain is either NP-complete or polynomial-time solvable. Note that by Ladner's theorem there are problems in NP which are not in P and ... 12 Lattices and fixed points are at the foundations of program analysis and verification. Though advanced results from lattice theory are rarely used because we are concerned with algorithmic issues such as computing and approximating fixed points, while research in lattice theory has a different focus (connections to topology, duality theory, etc). The initial ... 12 The order of permutation groups can be computed in polynomial-time. In fact, I believe even in$\mathsf{NC}$and also nearly linear Las Vegas time. See, e.g., the book by Seress. For reference, subgroups of$S_n$(and algorithms related thereto) are typically called "permutation groups" rather than merely "subgroups (of$S_n$)". So you can google "... 12 For NP, this seems hard to construct. In particular, if you can also sample (nearly) uniform elements from your group - which is true for many natural ways of constructing groups - then if an NP-complete language has a poly-time group action with few orbits, PH collapses. For, with this additional assumption about sampleability, the standard$\mathsf{coAM}$... 11 Here are two applications from a different part of TCS. Semirings are used for modelling annotations in databases (especially those needed for provenance), and often also for the valuation structures in valued constraint satisfaction. In both of these applications, individual values must be combined together in ways which lead naturally to a semiring ... 11 One possibily path into abstract algebra could be to look at it from point of view of cryptography, which is about algorithms on finite field. Fields are rings, and fields are also two groups coupled by simple laws. Field theory uses vector spaces in prominent position (Galois theory), so this angle should cover a lot of abstract algebra. The book A ... 11 There is a more general question on mathoverflow. I asked a similar question on CS.SE. jbapple provided a good answer. To quote "Necklaces, Convolutions, and X+Y", By Bremner et al. shows a$O\left(\frac{n^2(\lg \lg n)^3}{\lg^2 n}\right)$algorithm for this problem on the real RAM and a$O(n \sqrt{n})$algorithm in the nonuniform linear decision ... 
11 Field theory and algrebraic geometry would be useful in topics related to error correcting codes, both in the classical setting as well as in studying locally decodable codes and list decoding. I believe this goes back to work on the Reed-Solomon and Reed-Muller codes, which was then generalized to algebraic geometric codes. See for example, this book ... 11 It seems there is a paper answering this exact question, and even in the more general case of$\omega$-regular languages, but I cannot find an open-access version. If somebody finds a link without paywall it would be great. I requested the full-text on ResearchGate. Title: Which Finite Monoids are Syntactic Monoids of Rational omega-Languages. Authors: ... 11 In a more elementary way than Denis's answer, the following is extracted from Pippenger's "Theories of Computability", p.87, and immediate to check. Definition: Let$M$be a monoid, and$Y \subseteq M$. Define the congruence relation$\equiv_Y$over$M$by$x \equiv_Y y$iff$\big[\forall w, z \in M$,$wxz \in Y \Leftrightarrow wyz \in Y\big]$. Definition:... 11 From a programming language theory perspective, as opposed to the computability perspective other answers and comments have offered, C++ templates combined with concepts correspond to bounded polymorphism or constrained genericity. Concepts themselves correspond to the constraints or bounds placed on a type. A template is type-level function, parameterised ... 10 Rings, modules, and algebraic varieties are used in error correction and, more generally, coding theory. Specifically, there is an abstract error correcting scheme (algebraic-geometry codes) which generalizes Reed-Solomon codes and Chinese Remainder codes. The scheme is basically to take your messages to come from a ring R and encode it by taking its ... 10 An important subclass of this family is a sub-class of 0-reversible languages. A language is 0-reversible if the reversal of the minimal DFA for the language is also deterministic. The reversing operation is defined as swapping initial and final states, and inverting the edge relation of the DFA. This means that a 0-reversible language can have only one ... 10 The terminology rigid seems to be relatively new compared to the term disjunctive used in the late 70's (and probably before, I didn't check for earlier references). A subset$P$of a monoid$M$is disjunctive if and only if the syntactic congruence of$P$in$M$is the equality relation. Thus a monoid is the syntactic monoid of a language if and only if it ... 10 To my knowledge (which is definitely incomplete), there has been relatively little work on this, presumably because it requires assimilating two relatively intricate bodies of knowledge. However, little does not mean nonexistent. Thierry Coquand and his collaborators have written quite a few papers on the connections between commutative algebra and ... 9 You are working on your PhD. Saying "I am not well versed in$X$" is not an excuse. And if you're good, then saying "my advisor does not know$X$" is not an excuse either. You are using monoids where you should be using categories. Your monoid operations presupposes that you can combine any$\delta\$'s together. But does this really make sense, for example, ... Only top voted, non community-wiki answers of a minimum length are eligible
Marks the specified set of species as background species. Background species are handled differently in some plots and their abundance is automatically adjusted in addSpecies() to keep the community close to the Sheldon spectrum.

markBackground(object, species)

## Arguments

object: An object of class MizerParams or MizerSim.
species: Name or vector of names of the species to be designated as background species. By default this is set to all species.

## Value

An object of the same class as the object argument

## Examples

# \dontrun{
params <- newMultispeciesParams(NS_species_params_gears, inter)
#> Note: No h provided for some species, so using f0 and k_vb to calculate it.
#> Note: Because you have n != p, the default value is not very good.
#> Note: No ks column so calculating from critical feeding level.
#> Note: Using z0 = z0pre * w_inf ^ z0exp for missing z0 values.
#> Note: Using f0, h, lambda, kappa and the predation kernel to calculate gamma.
sim <- project(params, effort=1, t_max=20, t_save = 0.2, progress_bar = FALSE)
sim <- markBackground(sim, species = c("Sprat", "Sandeel", "N.pout", "Dab", "Saithe"))
plotSpectra(sim)
# }
# FQHESphereQuasiholesWithSpinTimeReversalSymmetryAndPairing

FQHESphereQuasiholesWithSpinTimeReversalSymmetryAndPairing deals with two copies of a single Landau level with opposite chiralities and superconducting regions at the poles. The Hamiltonian is projected into the subspace of Laughlin 1/3 quasiholes and reads

$H=\sum_{m,\sigma} V_{m,\sigma} c^\dagger_{m,\sigma} c_{m,\sigma} + \sum_{m} \Delta_{m} c^\dagger_{m,\uparrow} c^\dagger_{m,\downarrow}\;+\; h.c.$

The options are similar to FQHESphereFermionsWithSpinTimeReversalSymmetryAndPairing (more details can also be found in the documentation of FQHESphereFermionsWithSpin). Note that there is no option to fix the number of particles (for compatibility reasons, any output file still contains a _n_0_ string).

An interaction file has to be provided using the "--interaction-file" option. This file specifies the form of the pairing term (OneBodyPotentialPairing) as well as that of the confining potential (OneBodyPotentialUpUp and OneBodyPotentialDownDown). The number of coefficients in each of these lines has to match the number of orbitals. Unlike FQHESphereFermionsWithSpinTimeReversalSymmetryAndPairing, the two-body pseudopotentials should be left undefined, since these terms are only implemented through the projection onto the quasihole subspace. Finally, the average number of particles N_0 can be set through the option "--average-nbrparticles", with a charging energy term E_c (N - N_0)^2 whose amplitude E_c is fixed by the option "--charging-energy".

FQHESphereQuasiholesWithSpinTimeReversalSymmetryAndPairing does not generate the Hilbert space or matrix elements from scratch. Instead, one has to run FQHESphereQuasiholeMatrixElements with the same number of flux quanta first. The files containing the numbers of quasiholes for each spin (fermions_qh_states_k_1_r_2_nphi_*.dat), the pairing matrix elements in binary format (fermions_qh_k_1_r_2_n_*_nphi_*_lz_*_c_*.mat), and the one-body potential matrix elements (fermions_qh_k_1_r_2_n_*_nphi_*_lz_*_cdc_*.mat) should be present either in the directory where the program is run or in another directory, which should then be indicated using the --directory option. A typical command line would look like this:

$PATHTODIAGHAM/build/FQHE/src/Programs/FQHEOnSphere/FQHESphereQuasiholesWithSpinTimeReversalSymmetryAndPairing -l 6 --charging-energy 1 --average-nbrparticles 4 --interaction-file onebodypotentials_2s_6.dat --interaction-name test --use-lapack --full-diag 5000

If all the required files are located in the directory nphi_6, you should use

$PATHTODIAGHAM/build/FQHE/src/Programs/FQHEOnSphere/FQHESphereQuasiholesWithSpinTimeReversalSymmetryAndPairing -l 6 --charging-energy 1 --average-nbrparticles 4 --interaction-file onebodypotentials_2s_6.dat --interaction-name test --use-lapack --full-diag 5000 --directory nphi_6

As an example, here is the file content of onebodypotentials_2s_6.dat

OneBodyPotentialUpUp = 1.0 0 0 0 0 0 1.0
OneBodyPotentialDownDown = 1.0 0 0 0 0 0 1.0
OneBodyPotentialPairing = 0 1.0 0 0 0 1.0 0

FQHESphereQuasiholesWithSpinTimeReversalSymmetryAndPairing has full support for both lapack and scalapack. When the full spectrum is needed with moderate Hilbert space dimension (between 5000 and 60000), scalapack provides a clear speed boost over lapack. Note that by default, FQHESphereQuasiholesWithSpinTimeReversalSymmetryAndPairing only computes the positive angular momentum sectors. If the potentials are not symmetric, you should include the --force-negativelz option to compute all the angular momentum sectors.
## Cylinder geometry

Most of the FQHESphereQuasiholesWithSpinTimeReversalSymmetryAndPairing options are geometry agnostic. While the default geometry is the sphere, we can switch to the cylinder geometry thanks to a few options. First, we need to generate the data on such a geometry with FQHECylinderQuasiholeMatrixElements. For example, let us start with a cylinder whose perimeter is set to 6 magnetic lengths and with $N_\Phi=6$ flux quanta

$PATHTODIAGHAM/build/FQHE/src/Programs/FQHEOnTorus/FQHECylinderQuasiholeMatrixElements -l 6 --cylinder-perimeter 6.0

All the data will automatically be stored in nphi_6_cylinder_perimeter_6.000000. Now we just have to call

$PATHTODIAGHAM/build/FQHE/src/Programs/FQHEOnSphere/FQHESphereQuasiholesWithSpinTimeReversalSymmetryAndPairing -l 6 --charging-energy 1 --average-nbrparticles 4 --cylinder-perimeter 6 --use-cylinder --directory nphi_6_cylinder_perimeter_6.000000 --interaction-file onebodypotentials_2s_6.dat

to perform the same calculation as described for the sphere, but this time on a cylinder.

## Fixed number of particles

It is also possible to fix the total number of particles via the --nbr-particles option. For example, we can simulate a single layer by setting both the total spin and the number of particles to the same value

$PATHTODIAGHAM/build/FQHE/src/Programs/FQHEOnSphere/FQHESphereQuasiholesWithSpinTimeReversalSymmetryAndPairing -l 26 -s 7 --nbr-particles 7 --interaction-file confining_momentum_cylinder_perimeter_8.000000_2s_26_alphar_1_x0r_0.000000_v0r_1.000000_alphal_1_x0l_0.000000_v0l_0.000000.dat --use-cylinder --cylinder-perimeter 8.0 --directory nphi_26_n_7_cylinder_perimeter_8.000000 --use-lapack --force-negativelz --full-diag 8000 --interaction-name righlinearmomentum

Here we effectively have a single layer with 7 particles and 26 flux quanta on a cylinder with a perimeter of $L_y=8$; the linear confining potential can be generated by FQHECylinderConfiningPotentialCoefficients.

If we want to look at all the particle number sectors in a single run, we can use both --nbr-particles and --all-fixednbrparticles. In that case, FQHESphereQuasiholesWithSpinTimeReversalSymmetryAndPairing will consider all the particle number sectors from 0 (or 1 if $2S_z$ is odd) to the integer set by --nbr-particles, with an increment of two units. For example, if we want to compute the 30 lowest energy states of the problem of a symmetric confining potential without superconducting coupling, with $2S_z=0$ and for all particle numbers going from 0 to 16, we just have to use

$PATHTODIAGHAM/build/FQHE/src/Programs/FQHEOnSphere/FQHESphereQuasiholesWithSpinTimeReversalSymmetryAndPairing -l 21 --directory nphi_21_cylinder_perimeter_8.000000 --interaction-file confining_momentum_cylinder_perimeter_8.000000_2s_21_alphar_1_x0r_0.000000_v0r_1.000000_alphal_1_x0l_0.000000_v0l_1.000000.dat --interaction-name symlinearmomentum --use-lapack --full-diag 12000 -s 0 --use-cylinder --cylinder-perimeter 8.0 --memory 5000 -n 30 --block-lanczos --block-size 30 --force-reorthogonalize -S --processors 4 --eigenstate --nbr-lz 1 --nbr-particles 16 --all-fixednbrparticles

Most of the options are similar to the previous example, with the exception of --all-fixednbrparticles.

## Using an effective subspace

As with many DiagHam codes, the diagonalization can be performed in an effective subspace described by a series of binary vectors (as long as they belong to the same subspace defined by the various quantum numbers).
For example, we can first generate the eigenstates for the problem with only the confining potential, keep only the lowest energy states and then diagonalize the superconducting coupling within that effective Hilbert space. Nicely, getting the eigenstates for the problem with only the confining potential allows us to use particle number conservation. We can generate these states using the example mentioned previously. We then need to convert them to the full basis without particle number conservation using FQHESphereQuasiholesWithSpinTimeReversalSymmetryConvertStates (say in the $K_y=0$ sector). Then we create a text file (like basis.dat) that contains the list of states that we will use for the effective Hilbert space. This file is a single line that should look like

Basis= fermions_cylinder_perimeter_8.000000_su2_quasiholes_symlinearmomentum_cenergy_0.000000_n0_0.000000_pairing_fixedn_0_n_0_2s_21_sz_0_lz_0.0.vec fermions_cylinder_perimeter_8.000000_su2_quasiholes_symlinearmomentum_cenergy_0.000000_n0_0.000000_pairing_fixedn_2_n_0_2s_21_sz_0_lz_0.0.vec ...

The effective subspace basis is sorted in the order in which the vectors appear in this file. Thus, it is highly recommended to sort the effective subspace basis per particle number sector and, within each sector, from the smallest to the largest energy.

If we want to diagonalize the superconducting coupling acting only on the leftmost orbital, we create a simple interaction file confining_supra_0_2s_21.dat

OneBodyPotentialPairing = 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

Then we just run the following command

$PATHTODIAGHAM/build/FQHE/src/Programs/FQHEOnSphere/FQHESphereQuasiholesWithSpinTimeReversalSymmetryAndPairing -l 21 --directory nphi_21_cylinder_perimeter_8.000000 --interaction-file confining_supra_0_2s_21.dat --interaction-name supra_0_effective_30 --use-lapack --full-diag 12000 -s 0 --use-cylinder --cylinder-perimeter 8.0 --memory 16000 -S --processors 4 --nbr-lz 1 --use-hilbert basis.dat --export-binhamiltonian fermions_cylinder_perimeter_8.000000_su2_quasiholes_supra_0_effective_30_cenergy_0.000000_n0_0.000000_pairing_n_0_2s_21_sz_0_lz_0.mat

Note that we have asked to export the Hamiltonian restricted to the effective subspace to the binary file fermions_cylinder_perimeter_8.000000_su2_quasiholes_supra_0_effective_30_cenergy_0.000000_n0_0.000000_pairing_n_0_2s_21_sz_0_lz_0.mat. This file can be processed and diagonalized using the generic code GenericHamiltonianDiagonalization.
# Question about an eigenvalue problem: range space

## Main Question or Discussion Point

A theorem from Axler's Linear Algebra Done Right says that if 𝑇 is a linear operator on a complex finite dimensional vector space 𝑉, then there exists a basis 𝐵 for 𝑉 such that the matrix of 𝑇 with respect to the basis 𝐵 is upper triangular. In the proof, he defines U = range(T - 𝜆I) (as we have proved at least one eigenvalue must exist) and says that this is invariant under T.

First I want to understand what the range space is here. Suppose we are in 2D and I choose some basis consisting of an eigenvector (ev) and another free vector (v): (ev, v). Now this v can be anything, not necessarily an eigenvector. So there are many possibilities. Are all of them contained in U? U can have dimension 1, but all these possibilities are not linearly independent, so what exactly is U? Is it the set of all vectors not in the span of ev? Does this not conflict with U having dimension 1?

He then proves the invariance by saying that if u belongs to U then Tu = (T - 𝜆I)u + 𝜆u, and both (T - 𝜆I)u and 𝜆u belong to U. But if I had chosen a non-eigenvector for my basis, then T does not simply scale it, so it is mapped to some other span. Therefore I'm guessing U is a set. Any other comments about (T - 𝜆I)v would be appreciated. How should I think of it in the context of our original T? (I know the null space is spanned by the eigenvector.)
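A concrete low-dimensional illustration (my own, not taken from the thread or from Axler) may help pin down what U is: it is a subspace, not the set of vectors outside the eigenvector's span. On $V=\mathbb{C}^2$ take

$$T = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}, \qquad \lambda = 2, \qquad T - 2I = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}.$$

Then $U = \operatorname{range}(T-2I) = \operatorname{span}\{(1,1)^{\mathsf T}\}$ is a one-dimensional subspace, and it is invariant under $T$ because $T(1,1)^{\mathsf T} = (3,3)^{\mathsf T} = 3\,(1,1)^{\mathsf T}$; more generally, for any $u \in U$ we have $Tu = (T-2I)u + 2u$ with both terms in $U$.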
# On the number of positive integers $\leq x$ and free of prime factors $>y$ P. Erdös, J.H. van Lint
### Quantum Key Search for Ternary LWE

Iggy van Hoof, Elena Kirshanova, and Alexander May

##### Abstract

Ternary LWE, i.e., LWE with coefficients of the secret and the error vectors taken from $\{-1, 0, 1\}$, is a popular choice among NTRU-type cryptosystems and some signature schemes like BLISS and GLP. In this work we consider quantum combinatorial attacks on ternary LWE. Our algorithms are based on the quantum walk framework of Magniez-Nayak-Roland-Santha. At the heart of our algorithms is a combinatorial tool called the representation technique that appears in algorithms for the subset sum problem. This technique can also be applied to ternary LWE resulting in faster attacks. The focus of this work is quantum speed-ups for such representation-based attacks on LWE. When expressed in terms of the search space $\mathcal{S}$ for LWE keys, the asymptotic complexity of the representation attack drops from $\mathcal{S}^{0.24}$ (classical) down to $\mathcal{S}^{0.19}$ (quantum). This translates into noticeable attack speed-ups for concrete NTRU instantiations like NTRU-HRSS and NTRU Prime. Our algorithms do not undermine current security claims for NTRU or other ternary LWE based schemes, yet they can lay the ground for improvements of the combinatorial subroutines inside hybrid attacks on LWE.

Note: update 2021-12-10: fixed some typos and confusing variable names

Category: Public-key cryptography
Publication info: Published elsewhere. MINOR revision. PQCrypto 2021
Keywords: small secret LWE, representations, quantum random walk
Contact author(s): iggy hoof @ rub de, elena kirshanova @ rub de, alex may @ rub de
History: 2021-12-10: revised
Short URL: https://ia.cr/2021/865
License: CC BY

BibTeX

@misc{cryptoeprint:2021/865,
      author = {Iggy van Hoof and Elena Kirshanova and Alexander May},
      title = {Quantum Key Search for Ternary LWE},
      howpublished = {Cryptology ePrint Archive, Paper 2021/865},
      year = {2021},
      note = {\url{https://eprint.iacr.org/2021/865}},
      url = {https://eprint.iacr.org/2021/865}
}
# Math Help - Square outside circle outside quadrilateral

1. ## Square outside circle outside quadrilateral

This was a problem on my brother's maths test, and it is doing my family's collective heads in.

A quadrilateral with three sides measuring 15cm and one side 20cm sits within a circle with each of its four corners touching the circle. Similarly, a square sits around the outside of the circle, touching in the middle of each side. What is the area of the square?

Also, the problem had to be solved without a calculator.

2. ## Re: Square outside circle outside quadrilateral

see attached diagram ...

1. trapezoid diagonal bisects the base angle

2. notice small right triangle formed by trapezoid height ... $\cos(2x) = \frac{2.5}{15} = \frac{1}{6}$

3. inscribed angle is half the central angle

4. using law of cosines ...

$15^2 = r^2 + r^2 - 2r^2\cos(2x) = 2r^2[1 - \cos(2x)]$

$2r^2 = \frac{15^2}{1 - \cos(2x)}$

$2r^2 = \frac{15^2}{1 - \frac{1}{6}}$

$2r^2 = \frac{6 \cdot 15^2}{5}$

5. area of large square (not drawn) is $4r^2$ ...

$4r^2 = \frac{2 \cdot 6 \cdot 15^2}{5} = 540$

3. ## Re: Square outside circle outside quadrilateral

I constructed the regular trapezoid and found the radius of the circumcircle to be 11.5 cm. side of square 23cm. area of square 529 cm^2. No calculator.

4. ## Re: Square outside circle outside quadrilateral

Originally Posted by bjhopper
I constructed the regular trapezoid and found the radius of the circumcircle to be 11.5 cm. side of square 23cm. area of square 529 cm^2. No calculator.

a 2% error ... not bad.

5. ## Re: Square outside circle outside quadrilateral

thanks Skeeter, test was without a calculator, did you manage that?

6. ## Re: Square outside circle outside quadrilateral

thanks BJ, we did that too on the computer in publisher (? i know) and got a diameter of 23.306, area of square 543.16963. Still, test was without rulers or calculator. Test was actually an Australia wide maths comp for year 10 students.

7. ## Re: Square outside circle outside quadrilateral

Originally Posted by Linus
thanks Skeeter, test was without a calculator, did you manage that?

did you read my solution? no calculator required.
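If you do allow yourself a computer afterwards, the exact answer of 540 cm^2 is easy to confirm numerically. The short script below is my own check, not part of the thread: it places the base of 20 and the top of 15 as parallel chords of a common circle and bisects on the radius that makes the legs equal to 15.

from math import sqrt

# Cyclic isosceles trapezoid with parallel sides 20 and 15 and legs 15.
# Find the circumradius r; the circumscribed square then has area (2r)^2 = 4 r^2.
def leg_length(r):
    # Base endpoints (+/-10, -sqrt(r^2 - 100)), top endpoints (+/-7.5, +sqrt(r^2 - 56.25)).
    dy = sqrt(r * r - 100.0) + sqrt(r * r - 56.25)
    return sqrt(2.5 ** 2 + dy ** 2)

lo, hi = 10.0, 20.0                 # leg_length increases with r on this interval
for _ in range(100):                # bisect until the legs come out to 15
    mid = 0.5 * (lo + hi)
    if leg_length(mid) > 15.0:
        hi = mid
    else:
        lo = mid

r = 0.5 * (lo + hi)
print(r, 4 * r * r)                 # approx. 11.6190 and 540.0, matching skeeter's exact value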
Opened 2 years ago

# Users should be able to see their own sensitive tickets

Reported by: ceremcem
Owned by: dkg
Priority: normal
Component: SensitiveTicketsPlugin
Severity: normal
Trac Release: 1.0

### Description

It doesn't make sense forbidding a user to see a sensitive ticket which is created by that user in the first place. This should be the default behaviour.

### comment:1 Changed 2 years ago by ceremcem

• Type changed from enhancement to defect

If a user wants to see his own sensitive ticket, this error is shown:

TICKET_VIEW privileges are required to perform this operation on Ticket #21. You don't have the required permissions.

Giving the TICKET_VIEW privilege does not solve the problem either. It seems that there is no way to make a user able to see his own sensitive ticket.

Last edited 2 years ago by ceremcem

### comment:2 Changed 2 years ago by dkg

Is the user authenticated? If not, they will not be able to see the sensitive ticket, i see no way around this.

If the user is authenticated, and you have allow_owner set to true (which should be the default) then they should be able to see their ticket.

can you give more details, including your entire configuration for this plugin? what version of the plugin are you running?
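For anyone hitting the same issue: the owner-visibility behaviour dkg mentions is controlled by the allow_owner option (that option name comes from the comment above). A minimal trac.ini sketch would be the two lines below; the [sensitivetickets] section name is my assumption here, so check the plugin's documentation for the exact section to use.

[sensitivetickets]
allow_owner = true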
# Homework Help: Determine the surface of a cardioid Tags: 1. Jan 15, 2017 ### Math_QED 1. The problem statement, all variables and given/known data Consider the cardioid given by the equations: $x = a(2\cos{t} - \cos{2t})$ $y = a(2\sin{t} - \sin{2t})$ I have to find the surface that the cardioid circumscribes, however, I don't know what limits for $t$ I have to take to integrate over. How can I know that, as I don't know how this shape looks like (or more precisely where it is located)? 2. Relevant equations Integration formulas 3. The attempt at a solution I know how I have to solve the problem once I have the integral bounds, but I don't know how I have to determine these. In similar problems, we could always eliminate cost and sint by using the identity $cos^2 x + sin^2 x = 1$ but neither this nor another way to eliminate the cos, sin seems to work. This makes me think, would it be sufficient if I find the maximum and minimum x-coordinate in function of t using derivatives? Then I would have bounds to integrate over, but this seems like a lot of work. 2. Jan 15, 2017 ### BvU That's easy ! calculate a few points and connect the dots ! And when things start to repeat, you know you've passed the bound for $t$ 3. Jan 15, 2017 ### Math_QED I have to know what the bound exactly is, so I can use it as my integration bounds. 4. Jan 15, 2017 ### BvU O ? But you already have an exact expression for the bound 5. Jan 15, 2017 ### Math_QED I don't think I understand what you mean. Can you elaborate? 6. Jan 15, 2017 ### BvU You wrote down the equations in post #1. They exactly define the bound. A bit corny of me, sorry. Did you make the sketch? Find the domain of $t$ ? What would your integral look like ? The section in your textbook/curriculum where this exercise appears: what's it about ? 7. Jan 15, 2017 ### Math_QED It's about integration: definite and indefinite integrals. The next chapter deals with arc length. 8. Jan 15, 2017 ### BvU That answers the last of my three questions. What about the other two ? let me add another one: do you know two ways to integrate to get the area of a circle ? I am a bit evasive because I am convinced you will say 'of course' once you have found your way out of this on your own steam.... 9. Jan 15, 2017 ### Math_QED Yes I can find the integral of $\sqrt{r^2 - x^2}$ so that wouldn't be the problem. It's late now but I will try the problem tomorrow again and answer your questions then. 10. Jan 15, 2017 ### BvU That's one way. There's a quicker way too. Same way works for the cardioid (but it's admittedly less simple than for a circle -- still an easy integral). First three (not two) questions in post # 6 are still open ... And it's ruddy late here too.. 11. Jan 15, 2017 ### Math_QED Yes I believe you can do x = cos t, y = sin t and then integrate over the correct bounds. I will answer the other questions tomorrow. 12. Jan 17, 2017 ### BvU get a good sleep ? Isn't good for an area (only one integrand), but it's in the right direction. You need a factor r in both. $\sqrt{r^2 - x^2}\$ are vertical strips, and I'm trying to lure you towards investigating pie pieces (sectors, so to say). As you picked up correctly. What would the integral for the cardioid area look like with this approach ? 13. Jan 17, 2017 ### Math_QED I figured out how to solve the problem using your hints. Thanks a lot. 14. Jan 17, 2017 ### BvU You're welcome.
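For later readers, here is the closed form the hints lead to (my own summary, not part of the original exchange). The curve is traced exactly once as $t$ runs over $[0, 2\pi]$, and Green's theorem gives the enclosed area directly:

$$A = \frac{1}{2}\int_0^{2\pi}\left(x\,\frac{dy}{dt} - y\,\frac{dx}{dt}\right)dt = \frac{a^2}{2}\int_0^{2\pi}\left(6 - 6\cos t\right)dt = 6\pi a^2 .$$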
Pygments is a syntax highlighting package written in Python.

## Project description

Pygments
~~~~~~~~

Pygments is a syntax highlighting package written in Python.

It is a generic syntax highlighter suitable for use in code hosting, forums, wikis or other applications that need to prettify source code. Highlights are:

* a wide range of over 300 languages and other text formats is supported
* special attention is paid to details, increasing quality by a fair amount
* support for new languages and formats is added easily
* a number of output formats, presently HTML, LaTeX, RTF, SVG, all image formats that PIL supports and ANSI sequences
* it is usable as a command-line tool and as a library

:copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
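As a quick taste of the library usage mentioned in the last bullet, a minimal highlighting call looks roughly like this (a sketch; the choice of lexer, formatter and styling options is up to you):

from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

# Highlight a small Python snippet and render it as HTML.
code = 'print("Hello, world!")'
html = highlight(code, PythonLexer(), HtmlFormatter())
print(html)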
307   Wed Aug 29 11:06:30 2018 KojiGeneralGeneralRF AM RIN and dBc conversion 0. If you have an RF signal whose waveform is $1 \times \sin(2 \pi f t)$, the amplitude is constant and 1. 1. If the waveform $[1+0.1 \sin(2 \pi f_{\rm m} t)] \sin(2 \pi f t)$, the amplitude has the DC value of 1 and AM with the amplitude of 0.1 (i.e. swing is from 0.9 to 1.1). Therefore the RMS RIN of this signal is 0.1/1/Sqrt(2). 2. The above waveform can be expanded by the exponentials. $\left[-\frac{1}{2} i e^{i\,2\,\pi f t} + 0.025 e^{i\,2\,\pi (f-f_{\rm m}) t}- 0.025 e^{i\,2\,\pi (f+f_{\rm m}) t} \right] - {\rm C.C.}$ Therefore the sideband carrier ratio R is 0.025/0.5 = 0.05. This corresponds to 20 log10(0.05) = -26dBc In total, we get the relationship of dBc and RIN as ${\rm dBc} = 20 \log_{10}(\rm{RIN}/\sqrt{2})$, or R = RIN/sqrt(2) 308   Sun Sep 23 19:42:21 2018 KojiOpticsGeneralMontecarlo simulation of the phase difference between P and S pols for a modeled HR mirror [Koji Gautam] With Gautam's help, I ran a coating design code for an HR mirror with the standard quarter-wave design. The design used here has 17 pairs of lambda/4 layers of SiO2 and Ta2O5 (=34 layers) with the fused silica as the substrate to realize the transmission of tens of ppm. At the AOI (angle of incidence) of 4 deg (=nominal angle for the aLIGO OMC), there is no significant change in the reflectivity (transmissivity). With 95% of the case, the phase difference at the AOI of 4 deg is smaller than 0.02 deg for given 1% fluctuation (normal distribution) of the layer design and the refractive indeces of the materials. Considering the number of the OMC mirrors (i.e. 4), the total phase shift between P and S pols is less than 0.08 deg. This makes P and S resonances matched well within 1/10 of the cavity resonant width (360/F=0.9deg, F: Finesse=400). Of course, we don't know how much layer-thickness fluctuation we actually have. Therefore, we should check the actual cavity resonance center of the OMC cavity for the polarizations. Attachment 1 shows the complex reflectivity of the mirror for P and S pols between AOIs of 0 deg and 45 deg. Below 30 deg there is no significant difference. (We need to look at the transmission and the phase difference) Attachment 2 shows the power transmissivity of the mirror for P and S pols between AOIs of 0 deg and 45 deg. For the purpose to check the robustness of the reflectivity, random fluctuations (normal distribution, sigma = 1%) were applied to the thicknesses of each layer, and the refractive indices of Silica and Tantala. The blue and red bands show the regions that the 90% of the samples fell in for P and S pols, respectively. There are median curves on the plot, but they are not well visible as they match with the ideal case. This figure indicates that the model coating well represents the mirror with the transmissivity better than 70ppm. Attachment 3 shows the phase difference of the mirror complex reflectivity for P and S pols between AOIs of 0deg and 45deg. In the ideal case, the phase difference at the AOI of 4deg is 1x10-5 deg. The Monte-Carlo test shows that the range of the phase for 90% of the case fell into the range between 5x10-4 deg and 0.02 deg. The median was turned to be 5x10-3 deg. Attachment 4 shows the histogram of the phase difference at the AOI of 4deg.
The phase difference tends to concentrate at the side of the smaller angle. 309   Thu Sep 27 20:19:15 2018 AaronOpticsGeneralMontecarlo simulation of the phase difference between P and S pols for a modeled HR mirror I started some analytic calculations of how OMC mirror motion would add to the noise in the BHD. I want to make some prettier plots, and am adding the interferometer so I can also compute the noise due to backscatter into the IFO. However, since I've pushed the notebook I wanted to post an update. Here's the location in the repo. I used Koji's soft limit of 0.02 degrees additional phase accumulation per reflection for p polarization. 310   Thu Nov 1 19:57:32 2018 AaronOpticsGeneralMontecarlo simulation of the phase difference between P and S pols for a modeled HR mirror I'm still not satisfied/done with the solution to this, but this has gone too long without an update and anyway probably someone else will have a direction to take it that prevents me spinning my wheels on solved or basic questions. The story will have to wait to be on the elog, but I've put it in the jupyter notebook. Basically: • I considered the polarization-separated OMC in several configurations. I have plots of DARM referred noise (measured free-running and controlled noise for the current OMC, thermal theoretical noise curve, scattered light) for the case of such an OMC with one lambda/2 waveplate oriented at 45 degrees. This is the base case. • I also considered such an OMC with a lambda/2 both before and after the OMC, where their respective polarization axes can be arbitrary (I look at parameter space near the previous case's values). • I optimize the BHD angle to balance the homodyne (minimize the E_LO^2 term in the homodyne readout). • I then optimize the rotations of the lambda/2 polarization axes to minimize the noise • For the optimum that is closest to the base case, I also plotted DARM referred length noise. It's clear to me that there is a way to optimize the OMC, but the normalization of my DARM referred noise is clearly wrong, because I'm finding that the input-referred noise is at least 4e-11 m/rt(Hz). This seems too large to believe. Indeed, I was finding the noise in the wrong way, in a pretty basic mistake. I’m glad I found it I guess. I’ll post some plots and update the git tomorrow. 314   Fri Feb 1 12:52:12 2019 KojiMechanicsGeneralPZT deformation simulation A simple COMSOL simulation was run to see how the PZT deforms as the voltage applied. Use the geometry of the ring PZT which is used in the OMCs -  NAC2124 (OD 15mm, ID 9mm, H 2mm) The material is PZT-5H (https://bostonpiezooptics.com/ceramic-materials-pzt) which is predefined in COMSOL and somewhat similar to the one used in NAC2124 (NCE51F - http://www.noliac.com/products/materials/nce51f/) The bottom surface of the ring was electrically grounded (0V), and mechanically fixed. Applied 100V between the top and bottom. 
321   Thu Apr 4 20:07:39 2019 KojiSupplyGeneralPurchase == Office Depot == Really Useful Box 9L x 6 (delivered) Really Useful Box 17L x 5 (ordered 4/4) P-TOUCH tape (6mm, 9mm, 12mmx2, 18mm) (ordered 4/4) == Digikey == 9V AC Adapter (- inside, 1.3A) for P-TOUCH (ordered 4/4) 12V AC Adapter (+ inside, 1A) for Cameras (ordered 4/4) == VWR == Mask KIMBERLY CLARK "KIMTECH Pure M3" ISO CLASS 3 (ordered 4/4) 325   Fri Apr 5 23:30:20 2019 KojiGeneralGeneralOMC (002) repair completed OMC(002) repair completed When the cable harness of OMC(004) is going to be assembled, the cable harness of OMC(002) will be replaced with the PEEK one. Otherwise, the work has been done. Note that there are no DCPDs installed to the unit. (Each site has two in the OMC and two more as the spares) More photos: https://photos.app.goo.gl/XdU1NPcmaXhATMXw6 326   Wed Apr 10 19:22:24 2019 KojiGeneralGeneralOMC(004): preparation for the PZT subassembly bonding Preparation for the PZT subassembly bonding (Section 6.2 and 7.3 of T1500060 (aLIGO OMC optical testing procedure) - Gluing fixture (Qty 4) - Silica sphere powder - Electric scale - Toaster oven for epoxy mixture qualification - M prisms - C prisms - Noliac PZTs - Cleaning tools (forceps, tweezers) - Bonding kits (copper wires, steering sticks) - Thorlabs BA-2 bases Qty2 327   Thu Apr 11 10:54:38 2019 StephenGeneralGeneralOMC(004): preparation for the PZT subassembly bonding Quote: Preparation for the PZT subassembly bonding (Section 6.2 and 7.3 of T1500060 (aLIGO OMC optical testing procedure) - Gluing FIxture (Qty4) - Silica Sphere Powder - Electric scale - Toaster Oven for epoxy mixture qualification - M prisms - C prisms - Noliac PZTs - Cleaning tools (forceps, tweezers) - Bonding kits (copper wires, steering sticks) - Thorlabs BA-2 bases Qty2 - Razor Blades Also brought to the 40m on 10 April, in preparation for PZT subassembly bonding: - new EP30-2 epoxy (purchased Jan 2019, expiring Jul 2019 - as documented on documents attached to glue, also documented at C1900052. - EP30-2 tool kit (maintained by Calum, consisting of mixing nozzles, various spatulas, etc) Already at the 40m for use within PZT subassembly bonding: - "dirty" ABO A with temperature controller (for controlled ramping of curing bake) - clean work areas on laminar flow benches - Class B tools, packaging supplies, IPA "red wipes", etc. Upon reviewing EP30-2 procedure T1300322 (current revision v6) and OMC assembly procedure E1300201 (current revision v1) it appears that we have gathered everything required. 329   Thu Apr 11 21:22:26 2019 KojiMechanicsGeneralOMC(004): PZT sub-assembly gluing [Koji Stephen] The four PZT sub-assemblies were glued in the gluing fixtures. There were two original gluing fixtures and two additional modified fixtures for the in-situ bonding at the repair of OMC(002). - Firstly, we checked the fitting and arrangements of the components without glue. The component combinations are described in ELOG 329. - Turned on the oven toaster for the cure test (200F). - Then prepared EP30-2 mixture (7g EP30-2 + 0.35g glass sphere). - The test specimen of EP30-2 was baked in the toaster oven. (The result shows perfect curing (no stickyness, no finger print, crisp fracture when bent) - Applied the bond to the subassemblies. - FInally the fixtures were put in airbake Oven A. We needed to raise one of the tray with four HSTS balance weights (Attachment 2). 
330   Thu Apr 11 21:22:58 2019 KojiGeneralGeneralOMC(004): PZT sub-assembly air baking [Stephen Koji] The baking of the PZT subassemblies was more complicated than we initially thought. The four PZT subassemblies were placed in the air bake oven A. We meant to bake the assemblies with the ramp time of 2.5h, a plateau of 2h at 94degC, and slow ramp down. The oven controller was started and the temperature has been monitored. The ramping up was ~20% faster than expected (0.57degC/min instead of 0.47degC/min), but at least it was linear and steady. Once the temperature reached the set temperature (around t=120min), the temperature started oscillating between 74 and 94degC. Stephen's interpretation was that the PID loop of the controller was not on and the controller falled into the dead-bang mode (=sort of bang-bang control). As the assembly was already exposed to T>70F for more than 2.5hours, it was expected the epoxy cure was done. Our concern was mainly the fast temperature change and associated stress due to thermal expansion, which may cause delamination of the joint. To increase the heat capacity of the load, we decided to introduce more components (suspension balance weights). We also decided to cover the oven with an insulator so that the conductive heat loss was reduced. However, the controller thought it was already the end of the baking process and turned to stand-by mode (i.e. turned off everything). This started to cause rapid temp drop. So I (Koji) decided to give a manual heat control for mind cooling. When the controller is turned off and on, it gives some heat for ramping up. So the number of heat pulses and the intervals were manually controlled to give the temp drop of ~0.5degC/min. Around t=325, the temperature decay was already slower than 0.5degC/min without heat pulse, so I decided to leave the lab. We will check the condition of the sub-assemblies tomorrow (Fri) afternoon. 331   Sun Apr 14 23:58:49 2019 KojiGeneralGeneralOMC(004): PZT sub-assembly post air-bake inspection [Koji Stephen] (Friday afternoon) We retrieved the PZT sub-assemblies to the clean room. We started removing the ASSYs from the fixtures. We noticed that some part of the glass and PZT are ripped off from the ASSY and stuck with the fixture. For three ASSYs (except for #9), the effect is minimal. However, ASSY #9 has two large removals on the front surface, and one of the bottom corners got chipped. This #9 is still usable, I believe, but let's avoid to use this unit for the OMC. Individual inspection of the ASSYs is posted in the following entries. This kind of fracture events was not visible for the past 6 PZT sub-ASSYs. This may indicate a few possibilities: - More rigorous quality control of EP30-2 was carried out for the PZT ASSY bonding. (The procedure was defined after the past OMC production.) The procedure leads to the strength of the epoxy enhanced. - During the strong and fast thermal cycling, the glass was exposed to stress, and this might make the glass more prone to fracture. For the production of the A+ units, we think we can avoid the issues by modifying the fixtures. Also, reliable temperature control/monitor technology should be employed. These improvements should be confirmed with the bonding of spare PZTs and blank 1/2" mirrors before gluing any precious components. 332   Mon Apr 15 00:08:32 2019 KojiGeneralGeneralOMC(004): PZT sub-assembly post air-bake inspection (Sub-assy #7) Sub-ASSY #7 Probably the best glued unit among the four. 
Attachment #1: Mounting Block SN001 Attachment #2: PZT-Mounting Block bonding looks completely wet. Excellent. Attachment #3: The other side of the PZT-Mounting Block bonding. Also looks excellent. Attachment #4: Overall look. Attachment #5: The mirror-PZT bonding also look excellent. The mounting block surface has many EP30-2 residue. But they were shaved off later. The center area of the aperture is clear. Attachment #6: A small fracture of the mirror barrel is visible (at 7 o'clock). 333   Mon Apr 15 00:39:04 2019 KojiGeneralGeneralOMC(004): PZT sub-assembly post air-bake inspection (Sub-assy #8) Sub-ASSY #8 Probably the best glued unit among the four. Attachment #1: Mounting Block SN007 Attachment #2: Overall look. Attachment #3: Some fracture on the barrel visible. Attachment #4: It is visible that a part of the PZT removed. Otherwise, PZT-Mounting Block bonding looks pretty good. Attachment #5: The other side of the PZT bonding. Looks fine. Attachment #6: Fractured PZT visible on the fixture parts. Attachment #7: Fractured glass parts also visible on the fixture parts. Attachment #8: MIrror bonding looks fine except for the glass chip. 334   Mon Apr 15 01:07:30 2019 KojiGeneralGeneralOMC(004): PZT sub-assembly post air-bake inspection (Sub-assy #9) Sub-ASSY #9 The most fractured unit among four. Attachment #1: Mounting Block SN017 Attachment #2: Two large removals well visbile. The bottom right corener was chipped. Attachment #3: Another view of the chipping. Attachment #4: PZT-mounting block bonding look very good. Attachment #5: Another view of the PZT-mounting block bonding. Looks very good too. Attachment #6: Fractures bonded on the fixture. Attachment #7: Front view. The mirror-PZT bonding look just fine. 335   Mon Apr 15 01:23:45 2019 KojiGeneralGeneralOMC(004): PZT sub-assembly post air-bake inspection (Sub-assy #10) Sub-ASSY #10 Attachment #1: Mounting Block SN021 Attachment #2: PZT-Mounting Block bonding looks just excellent. Attachment #3: The other side of the PZT-Mounting Block bonding is also excellent. Attachment #4: The mirror-PZT bonding also look excellent. Some barrel fracture is visible at the lower left of the mirror. 343   Tue Apr 16 23:11:43 2019 KojiGeneralGeneralBorrowed items from the other labs Apr 16, 2019 Borrowed two laser goggles from the 40m. (Returned Apr 29, 2019) Borrowed small isopropanol glass bottole from CTN. Apr 19, 2019 Borrowed from the 40m: - Universal camera mount - 50mm CCD lens - zoom CCD lens (Returned Apr 29, 2019) - Olympus SP-570UZ (Returned Apr 29, 2019) - Special Olympus USB Cable (Returned Apr 29, 2019) 344   Wed Apr 17 09:08:47 2019 StephenGeneralGeneralOMC(004): Unwrapping and preparing breadboard [Stephen, Philip, Koji, Joe] Breadboard D1200105 SN06 was selected as described in eLOG 338. This log describes unwrapping and preparation of the breadboard. Relevant procedure section: E1300201 section 6.1.5 Breadboard was unwrapped. No issues observed during unwrapping. • Attachment 1: packaging of SN06. Visual inspection showed no issues observed in breadboard - no large scratches, no cracks, no chipping, polished area (1 cm margin) looks good. • Attachment 2: engraving of SN06. Initially the breadboard has a large amount of dust and fiber from the paper wrapping. Images were gathered using a green flashlight at grazing incidence (technique typical of optic inspection). PROCEDURE IMPROVEMENT: Flashlight inspection and Top Gun use should be described (materials, steps) in E1300201. 
• Attachment 3: particulate before Top Gun, large face. • Attachment 4: particulate before Top Gun, small face. Top gun was used (with medium flow rate) to remove large particulate. Breadboard was placed on Ameristat sheet during this operation. • Attachment 5: particulate after Top Gun Next, a clean surface within the cleanroom was protected with Vectra Alpha 10 wipes. The breadboard, with reduced particulate after Top Gun, was then placed inside the cleanroom on top of these wipes. Wiping with IPA Pre-wetted Vectra Alpha 10 wipes proceeded until the particulate levels were acceptable. Joe and Koji then proceeded with placing the breadboard into the transport fixture. 345   Wed Apr 17 10:30:37 2019 PhilipOpticsGeneralOMC optical set-up day 1 [Joe, Koji, Liyuan, Philip, Stephen] Work done on 16.04.2019 Finishing assembly of transport box Assembly worked fine except for the clamping structure to clamp the lid of the transport box to the bottom part. It seemed that some of the plastic of these clamps became brittle during the baking. The plastic was removed and the clamps where wiped clean. It appears that the clamps can't be locked as they should. Still the transport box should be fine as the long screws will mainly clamp the two parts together. Preparing the transport box to mount the breadboard The lid of the the transport box was placed upside down and clamped to the table. All peak clamping structures where pulled back as far as possible. Preparation and cleaning of the breadboard We unpacked the breadboard and found lots of dust particles on it (most likely from the soft paper cover which was used). We used the ionized nitrogen gun at 25 psi to get rid of the majority of particles and cross-checked with a bright green flash light before and after blowing. The second stage of cleaning was done below the clean room tent and included the wiping of all surfaces. The breadboard was then placed into the prepared lid of the transport box and clamped with peak screws. Unpacking of the template The previously cleaned template was unpacked while the last layer of coverage was removed below the cleanroom tent. All peak screws of the clamping structure of the template where removed. The template was placed onto the breadboard only seperated by peak spacers. All peak screws have been inserted for horizontal clapming. A calipper was used to measure the distance of each edge of the template to the edge of the breadboard. For documentation the labeled side of the bradboard (facing away from the persons on the pictures) of the upside down breadboard is defined to be the south side, continuing clockwise with west, north and east. First rough alignment was done by shifting the template on the breadboard and then the peak screws where used for fine tuning. The caliper values measured where: North   C 8.32mm     E 8.52 mm     W 8.41 mm East     C 8.08 mm South   C 8.32 mm West    C 8.02 mm (E indicating east side position, W indicating west side position and C indicating center position) 358   Thu May 9 16:07:18 2019 StephenMechanicsGeneralImprovements to OMC Bonding Fixture [Stephen, Koji] As mentioned in eLOG 331, either increased thermal cycling or apparent improvements in cured EP30-2 strength led to fracture of curved mirrors at unintended locations of bonding to the PEEK fixture parts. The issue and intended resolution is summarized in the attached images (2 different visualizations of the same item). Redline has been posted to D1600336-v3. 
Drawing update will be processed shortly, and parts will be modified to D1600336-v4. 359   Thu May 9 17:35:07 2019 KojiOpticsGeneralAlignment strategy Notes on the OMC cavity alignment strategy - x3=1.17 γ + 1.40 δ, x4=1.40 γ + 1.17 δ - This means that the effect of the two curved mirrors (i.e. gouy phases) are very similar. To move x3 and x4 in common is easy, but to do differentially is not simple. - 1div of a micrometer is 10um. This corresponds to the angular motion of 0.5mrad (10e-6/20e-3 = 5e-4). ~0.5mm spot motion. - ~10um displacement of the mirror longitudinal position has infinitesimal effect on the FSR. Just use either micrometer (-x side). - 1div of micrometer motion is just barely small enough to keep the cavity flashing. => Easier alignment recovery. Larger step causes longer time for the alignment recovery due to the loss of the flashes. - After micrometer action, the first move should be done by the bottom mirror of the periscope. And this is the correct direction for beam walking. - If x3 should be moved more than x4, use CM2, and vise versa. - If you want to move x3 to +x and keep x4 at a certain place, 1) Move CM2 in (+). This moves x3 and x4 but x3>x4. 2) Compensate x4 by turning CM1 in (-). This returnes x4 to the original position (approximately), but leave x3 still moved. Remember the increment is <1div of a micrometer and everytime the cavity alignment is lost, recover it before loosing the flashes. 361   Wed May 15 19:07:53 2019 KojiCleanGeneralWhat is this??? Suddenly something dirty emerged in the lab. What is this? It looks like an insulation foam or similar, but is quite degraded and emits a lot of particulates. This does not belong to the lab. I don't see piping above this area which shows broken insulation or anything. All the pipes in the room are painted white. The only possibility is that it comes from the hole between the next lab (CRIME Lab). I found that the A.C. today is much stronger and colder than last week. And there is a positive pressure from CRIME Lab. Maybe the foam was pushed out from the hole due to the differential pressure (or any RF cable action). 362   Thu May 16 12:41:28 2019 ChubGeneralGeneralfire pillow found on optics table That is an expanding fire pillow, also known as firebrick.  It is used to create a fire block where holes in fire-rated walls are made and prevents lab fires from spreading rapidly to adjacent labs.  I had to pull cable from B254 to our labs on either side during a rather narrow window of time.  Some of the cable holes are partially blocked, making it difficult to reach the cable to them. The cable is then just guided to the hole from a distance.  With no help, it's not possible to see this material getting shoved out of the hole.  I can assure you that I took great pains not to allow the CYMAC coax to fall into any equipment, or drag against any other cables. 367   Tue May 28 12:14:20 2019 StephenOpticsGeneralCM PZT Assembly Debonding of EP30-2 in Acetone [LiyuanZ, StephenA] Downs B119 Summary: Beginning on 20 May 2019, two CM PZT assemblies were soaked in Acetone in an effort to debond the EP30-2 bonds between tombstone-PZT and between PZT-optic. Debonding was straightforward after 8 days of soaking. 24 hours of additional acetone soaking will now be conducted in an attempt to remove remnant EP30-2 from bonding surfaces. Procedure: The assemblies were allowed to soak in acetone for 8 days, with acetone level below the HR surface of the optic. 
No agitation of the solution, mechanical abrasion of the bond, or other disturbance was needed for the bond to soften. GariLynn contributed the glassware and fume hood, and advised on the process (similar to debonding of CM and PZT from OMC SN002 after damaging event). The equipment list was (WIP, more detail / part numbers will be gathered today and tomorrow): • crystallizing dish (no spout, like a deep petri dish) • curved lid • wax sheet (to seal) • acetone • fume hood Results: Today, 28 May 2019, I went to the lab to check on the optics after 8 days of soaking. Liyuan had monitored the acetone level during the first 4 days, topping up once on 24 May. All bonds were fully submerged for 8 days. There were 2 assemblies soaked in one crystallizing dish. Debonded assemblies - ref OMC eLOG 328 for specified orientations and components: PZT Assy #9 - ref. OMC eLOG 334 - M17+PZT#12+C10 PZT Assy #7 - ref. OMC eLOG 332 - M1+PZT#13+C13 PZT Assy #7 was investigated first. • C13 was removed with no force required. • PZT#13 was removed with no force required. • EP30-2 remained at the bond surfaces and tracing the diameters of each bond on each of the 3 bonding surfaces of the PZT and tombstone - these components were returned to the dish to soak. • No EP30-2 remained on the surface of the curved mirror - C13 was removed and stored. A video of removal of C10 and PZT#12 from PZT Assy #9 was collected (See Attachment 8), showing the ease with which the debonded components could be separated. • C10 was removed with no force required. • A slight force - applied by gripping the barrel of the PZT and pushing with the index finger on the surface of the tombstone - was required to separate PZT#12 from M17, • likely due to excess glue at the barrel of the PZT • EP30-2 remained at the bond surfaces and tracing the diameters of each bond on each of the 3 bonding surfaces of the PZT and tombstone - these components were returned to the dish to soak. • No EP30-2 remained on the surface of the curved mirror - C13 was removed and stored. Photos and video have been be added to supplement this report (edit 2019/07/08). 368   Mon Jun 24 12:54:58 2019 KojiCleanGeneralHEPA BOOTH https://www.airscience.com/purair-flow-laminar-flow-cabinets 375   Wed Sep 18 22:30:11 2019 StephenSupplyGeneralEP30-2 Location and Status Here is a summary of the events of the last week, as they relate to EP30-2. 1) I lost the EP30-2 syringes that had been ordered for the OMC, along with the rest of the kit. • Corrective action: Found in the 40m Bake Lab garbing area. • Preventative action: log material moves and locations in the OMC elog • Preventative action: log EP30-2 moves and locations in PCS via location update [LINK] • Preventative action: keep EP30-2 kit on home shelf in Modal Lab unless kit is in use 2) The EP30-2 syringes ordered for the OMC Unit 4 build from January had already expired, without me noticing. • Corrective action: Requested LHO ship recently-purchased EP30-2 overnight • Preventative action: log expiration dates in OMC elog • Preventative action: begin purchasing program supported by logistics, where 1 syringe is maintained on hand and replaced as it expires 3) LHO shipped expired epoxy on Thursday. Package not opened until Monday. • Corrective action: Requested LHO ship current EP30-2 overnight, this time with much greater scrutiny (including confirming label indicates not expired) • Preventative action: Packages should be opened, inspected, and received in ICS or Techmart on day of receipt whenever possible. 
4) Current, unopened syringe of EP30-2 has been received from LHO. Expiration date is 22 Jan 2020. Syringe storage has been improved. Kit has been docked at its home in Downs 303 (Modal Lab) (see attached photo, taken before receipt of new epoxy). Current Status: Epoxy is ready for PZT + CM subassembly bonding on Monday afternoon 23 September. 376   Wed Sep 18 23:16:06 2019 StephenSupplyGeneralItems staged at 40m Bake Lab for PZT Subassembly Bonding The following items are presently staged at the 40m Bake Lab (see photo indicating current location) (noting items broght by Koji as well): 1. Bonding fixtures, now modified with larger washers to constrain springs, and with modification from OMC elog 358. 2. Curved Mirrors and Tombstones as selected by Shruti in OMC elog 374. 3. PZTs as debonded from first iteration subassemblies (SN 12 and SN 13) 4. Epoxy-cure-testing toaster oven 5. Other items I can't think of but will populate later  =D The following item is in its home in Downs 303 (Modal Lab) 1. EP30-2 epoxy (expiration 2020 Jan 22) with full kit (tracked in PCS via location update [LINK]) 377   Wed Sep 18 23:38:52 2019 StephenGeneralGeneralDirty ABO ready for PZT Subassembly Bonding The 40m Bake Lab's Dirty ABO's OMEGA PID controller was borrowed for another oven in the Bake Lab, so I have had to play with the tuning and parameters to recover a suitable bake profile. This bake is pictured below (please excuse the default excel formatting). I have increased the ramp time, temperature offset, and thermal mass within the oven; after retuning and applying the parameters indicated, the rate of heating/cooling never exceeds .5°C/min. Expected parameters: Ramp 2.5 hours Setpoint 1 (soak temperature) 94 °C no additional thermal mass Current parameters: Ramp 4 hours Setpoint 1 (soak temperature) 84 °C Thermal mass added in the form of SSTL spacers (see photo) The ABO is controlled by a different temperature readout from the data logger used to collect data; the ABO readout is a small bead in contact with the shelf, while the data logger is a lug sandwiched between two stainless steel masses upon the shelf. I take the data logger profile to be more physically similar to the heating experienced by an optic in a gluing fixture, so I feel happy about the results of the above bake. I plan to add the data source file to this post at my earliest convenience. 378   Mon Sep 23 21:29:51 2019 KojiOpticsGeneralOMC(004): PZT sub-assembly gluing (#9/#10) [Stephen, Shruti, Koji] We worked on the gluing of the PZT sub-assy (#9 and #10) along with the designed arrangement by Shruti (OMC ELOG 374). The detailed procedures are described in E1300201 Section 6.2 PZT subassembly and Section 7.3 EP30-2 gluing. We found that the PZTs, which were debonded from the previous PZT sub assy with acetone, has some copper wires oxidized. However, we confirmed that this does not affect the conductivity of the wires, as expected. The glue test piece cooked in the toaster oven showed excellent curing. GO SIGNAL Stephen painted the PZT as shown in Attachment 1. The fixtures were closed with the retaining plate and confirmed that the optics are not moving in the fixtures. At this point, we checked the situation of the air-bake oven. And we realized that the oven controller was moved to another vacuum oven and in use with a different setting. Stephen is going to retrieve the controller to the air bake oven and test the temp profile overnight. Once we confirm the setting is correct, the PZT sub assys will be heat cured in the oven.  
Hopefully, this will happen tomorrow. Until then, the sub-assys are resting on the south flow bench in the cleanroom.

379   Tue Sep 24 12:19:20 2019   Stephen | General | General | Dirty ABO test run prior to PZT Subassembly Bonding

The 40m Bake Lab's Dirty ABO's OMEGA PID controller was borrowed for another oven in the Bake Lab (sound familiar? OMC elog 377), so I have had to play with the tuning and parameters to recover a suitable profile. This bake seemed to inadequately match the intended temperature profile for some reason (the intended profile is shown by plotting the prior qualifying bake for comparison). The parameters utilized here exactly match the prior qualifying bake, except that the autotuning may have settled on different parameters. Options to proceed, as I see them, are as follows:
1. Reposition the oven's driving thermocouple closer to the load and attempt to qualify the oven again overnight.
2. Retune the controller and attempt to qualify the oven again overnight.
3. Proceed with the current bake profile, except monitor the soak temperature via the data logger thermocouple and intervene if the temperature is too high by manually changing the setpoint.

380   Thu Sep 26 17:33:52 2019   Stephen | General | General | Dirty ABO test run prior to PZT Subassembly Bonding - ABO is Ready!

Follow-up on OMC elog 379: I was able to obtain the following (dark blue) bake profile, which I believe is adequate for our needs. The primary change was a remounting of the thermocouple to sandwich it between two stainless steel masses. The thermocouple bead previously was 1) in air and 2) close to the oven skin.

381   Mon Sep 30 23:16:53 2019   Koji | Optics | General | OMC(004): PZT sub-assembly gluing (#9/#10)

Friday: [Stephen, Koji] As the oven setting had been qualified, we put the PZT assys into the air bake oven.
Monday: [Stephen, Shruti, Koji] We brought the PZT assys to the clean room. There was no bonding between the flexture and the PZT subassy (Good!). Also, the bonding at each side looks completely wetted and looks good. The package was brought to the OMC lab to be tested in the optical setup.

382   Tue Oct 22 10:25:01 2019   Stephen | General | General | OMC PZT Assy #9 and #10 Production Cure Bake

The OMC PZT Assy Production Cure Bake (ref. OMC elog 381) for PZT Assy #9 and #10 started 27 September 2019 and completed 28 September 2019. It is captured in the below figure (purple trace). Raw data has been posted as an attachment as well. We monitored the temperature in two ways: 1) datalogger thermocouple data (purple trace); 2) checking in on the temperature of the datalogger thermocouple (lavender circles) and drive thermocouple (lavender diamonds), only during the initial ramp up.
• No changes were made to the tuning or instrumentation of the oven between the successful qualifying bake obtained on 26 September (ref. OMC elog 380) and this production bake. However, the profile seems to have been more similar to the prior qualifying bake attempts that were less successful (ref. OMC elog 379), particularly as the oven seems to have ramped to an overtemperature state. I am a bit mystified, and I would like to see the oven tuning characterized to a greater extent than I have had time and bandwidth to complete within this effort.
• The maximum datalogger temperature was 104 °C, and the duration of the soak (94 °C or higher) was 68 minutes. This was in contrast to a programmed soak of 2.5 hours and a programmed setpoint of 84 °C.
• The drive thermocouple did appear to be under-reporting temperature relative to the datalogger thermocouple, but this was not confirmed during the soak period.
Neither thermocouple was calibrated as part of this effort.

383   Tue Oct 22 11:52:53 2019   Stephen | General | General | Epoxy Curing Timeline of OMC PZT Assy #9 and #10

This post captures the curing timeline followed by OMC PZT Assys #9 and #10. The source file is posted in case any updates or edits need to be made.

384   Tue Oct 22 11:56:09 2019   Stephen | Supply | General | Epoxy Status update as of 22 October 2019

The following is the current status of the epoxies used in assembly of the OMC (excerpt from C1900052). Re-purchasing efforts are underway and/or complete.

386   Fri Dec 6 00:55:25 2019   Koji | Optics | General | Beamdump gluing

[Stephen, Koji] 20 glass beamdumps were bonded at the 40m cleanroom.
Attachment 1: We had 20 fused silica disks with a V-groove and 40 black glass pieces.
Attachment 2: The black glass pieces had the (usual) foggy features. This fog is well known to be very stubborn. We had to use IPA/acetone and wiping with pressure. Most of the features were removed, but we could still see some. We decided to use the better side for the inner V surfaces.
Attachment 3: EP30-2 expiration date was 1/22/2020 👍. 7.66 g of EP30-2 was poured and 0.38 g of glass spheres was added. Total glue weight was 8.04 g.
Attachment 4: The glue test piece was baked at 200 F in a toaster oven for ~12 min. It had no stickiness. It was totally crisp. 👍👍👍
Attachment 5: Painted glue on the V-groove and put the glass pieces in. Then gave a dab of glue at the top and bottom of the V from the outside. In the end, the glue mostly went through the V part due to capillary action.
Attachment 6: The 20 BDs were stored in stainless vats. We looked at them for a while to confirm there is no drift or opening of the V part. Because the air bake oven was not available at the time, we decided to leave the assys there for room temp curing, and then bake them later to complete the curing.

387   Fri Dec 13 14:59:18 2019   Stephen | General | General | OMC Beam Dump Production Cure Bake

[Koji, Jordan, Stephen] The beam dumps, bonded on Fri 06 Dec 2019, were placed in the newly tuned and configured small dirty ABO at the Bake Lab on Fri 13 Dec 2019. Images are shared and references are linked below.
Bonding log entry - https://nodus.ligo.caltech.edu:8081/OMC_Lab/386
OMC Beam Dump - https://dcc.ligo.org/LIGO-D1201285

388   Wed Dec 18 21:54:53 2019   Koji | General | General | OMC Beam Dump Production Cure Bake

The beamdumps were taken out of the oven and packed in bags. The bottom of the V is completely "wet" for 17 of the 20 BDs (Attachment 1/2). 3 BDs showed insufficient glue or delamination, although there is no sign of a lack of rigidity. They were separated from the others in the pack.

389   Thu Feb 27 14:31:13 2020   Koji | General | General | Item lending

Item lending as per Ian's request: Particle Counter from OMC Lab to QIL.

390   Mon Aug 10 15:29:54 2020   Koji | General | General | Item lending

The particle counter came back to the OMC lab on Aug 10, 2020.

392   Mon Aug 10 15:53:46 2020   Koji | General | General | Lab status check

Checked in to the OMC lab to see the status. Nothing seemed changed. No bugs. The HEPA is running normally. The particle level was 0. Went into the HEPA enclosure and put a cover on the OMC. Because of the gluing template, the lid could not be closed completely (that's expected and fine). The IPA vector cloth bag was not dry yet but seemed expired (some smell). There is no stock left -> 5 bags to be ordered.

393   Mon Sep 28 16:03:13 2020   rana | General | General | OMC Beam Dump Production Cure Bake

Are there any measurements of the BRDF of these things?
I'm curious how much light is backscattered into the incoming beam and how much goes out into the world. Maybe we can take some camera images of the cleaned ones or send 1-2 samples to Josh. No urgency, just curiosity. I saw that ANU and also some labs in India use this kind of blue/green glass for beam dumps. I don't know much about it, but I am curious about its micro-roughness and how it compares to our usual black glass. For the BRDF, I think the roughness matters more for the blackness than the absorption.

394   Mon Sep 28 16:13:08 2020   Koji | General | General | OMC Beam Dump Production Cure Bake

According to the past backscatter test of the OMC (and the black glass beamdump: not the V type but the triangular type on a hexagonal mount), the upper limit of the back reflection was 0.13 ppm. https://nodus.ligo.caltech.edu:8081/OMC_Lab/209 I don't have a BRDF measurement. We can send a few black glass pieces to Josh.

396   Fri Oct 9 01:01:01 2020   Koji | General | General | TFT Monitor mounting

To spare some room on the optical table, I wanted to mount the two TFT monitor units on the HEPA enclosure frame. I found some Bosch Rexroth parts (# 3842539840) in the lab, so the bracket was taken for the mount. This swivel head works very well. It's rigid, and still the angle is adjustable. https://www.boschrexroth.com/ics/cat/?cat=Assembly-Technology-Catalog&p=p834858
BTW, this TFT display (Triplett HDCM2) is also very nice. It has HDMI/VGA/Video/BNC inputs (wow, perfect) and the LCD is a Full-HD LED TFT.
https://www.triplett.com/products/cctv-security-camera-test-monitor-hd-1080p-led-display-hdcm2
https://www.newegg.com/p/0AF-0035-00016
https://www.bhphotovideo.com/c/product/1350407-REG/triplett_hdcm2_ultra_compact_7_hd_monitor.html
The only issue is that one unit (I have two) shows the image horizontally flipped. I believe that I used the unit without this problem before, so I'm asking the company how to fix this.

397   Fri Oct 16 00:53:29 2020   Koji | General | General | TFT Monitor mounting

The image flipping of the display unit was fixed. The vendor told me how to fix it:
- Open the chassis by removing the four screws at the side.
- Look at the pass-through PCB board between the mother and display boards.
- Disconnect the flat flex cables from the pass-through PCB (both sides) and reconnect them (i.e. reseat the cables).
That's it, and it actually fixed the image flipping issue.

398   Fri Oct 23 19:09:54 2020   Koji | General | General | Particle counter transferred to Radhika

See this entry: https://nodus.ligo.caltech.edu:8081/40m/15642

399   Fri Nov 6 18:38:00 2020   Koji | General | General | Powermeter lent from OMC Lab to 2um ECDL

Thorlabs' powermeter controller + S401C head was lent from the OMC Lab. https://nodus.ligo.caltech.edu:8081/SUS_Lab/1856

400   Mon Nov 9 22:06:18 2020   Koji | Mechanics | General | 5th OMC Transport Fixture

I helped to complete the 5th OMC Transport Fixture. It was built at the 40m clean room and brought to the OMC lab. The fixture hardware (~screws) was also brought there.

401   Fri Nov 20 18:51:23 2020   Koji | General | General | Instrument loan

FEMTO DLPCA200 low noise preamp (brand new)
Keithley Source Meter 2450 (brand new) => Returned 11/23/2020
were brought to the OMC lab for temporary use. https://nodus.ligo.caltech.edu:8081/QIL/2522

407   Fri Feb 5 07:40:37 2021   Stephen | Supply | General | OMC Unit 4 Build Machined Parts

OMC Unit 4 Build Machined Parts are currently located in Stephen's office. See image of large blue box from office, below. Loaned item D1100855-V1-00-OMC08Q004 to Don Griffith for work in semi-clean HDS assy.
This includes mass mounting brackets, cable brackets, balance masses, etc. For full inventory, refer to ICS load Bake-9527 (mixed polymers) and Bake-9495 (mixed metals). Inventory includes all items except cables. Plasma sprayed components with slight chipping were deemed acceptable for Unit 4 use. Cable components (including flex circuit) are ready to advance to fabrication, with a bit more planning and ID of appropriate wiring.
# Continuous and bounded imply uniform continuity?

I have been thinking about this for a couple of hours. Is a continuous and bounded function $f:\mathbb R\to\mathbb R$ uniformly continuous too? I haven't found a counterexample, so I tried to prove it as follows: let $\epsilon>0$ and $a\in\mathbb R$. By continuity, there is a $\delta_a>0$ such that $$d(f(x),f(y))<d(f(x),f(a))+d(f(y),f(a))<\epsilon$$ if $d(x,y)\leq d(x,a)+d(y,a)<\delta_a.$ Then, if $\inf_a\delta_a>0$ I set $\delta=\inf_a\delta_a$ and we are done, since $d(f(x),f(y))<\epsilon$ if $d(x,y)<\delta$, but I did not succeed in proving that $\delta\neq 0$.

Consider $\sin(x^2)$. It is continuous and bounded but not uniformly continuous. This is because we can find arbitrarily small $\gamma$ so that $f(a)=-1$ and $f(a+\gamma)=1$. In other words, if $\epsilon=1$ we cannot find a suitable $\delta$. This is because the function oscillates faster and faster as $|x|$ grows. On the other hand, what is true is that a continuous function $f:[a,b]\rightarrow \mathbb R$ is uniformly continuous.
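To make the "arbitrarily small $\gamma$" step explicit, here is one standard way of writing out the counterexample (a filled-in version of the hint above, with the particular sequences chosen here rather than taken from the original answer). For $n\in\mathbb N$ put
$$x_n=\sqrt{n\pi},\qquad y_n=\sqrt{n\pi+\tfrac{\pi}{2}}.$$
Then
$$y_n-x_n=\frac{\pi/2}{\sqrt{n\pi+\pi/2}+\sqrt{n\pi}}\longrightarrow 0,$$
while
$$\left|\sin\left(y_n^2\right)-\sin\left(x_n^2\right)\right|=\left|\pm 1-0\right|=1.$$
So for $\epsilon=1$ no $\delta>0$ can work: however small $\delta$ is chosen, there is an $n$ with $|y_n-x_n|<\delta$ yet $|\sin(y_n^2)-\sin(x_n^2)|=1\geq\epsilon$.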
# You can't even use a calculator

If the number of digits in $$9876543210!$$ is $$n$$, find the number of digits of $$n!$$.

Notation: $$!$$ denotes the factorial. For example, $$8! = 1\times2\times3\times\cdots\times8$$.
# Day 5

## Daytime usage

I haven't really addressed the daytime power usage yet. To be consistent with the EnergizeApps parser, I'll define 'day' as 5AM - 8PM (inclusive, exclusive). Likewise, going forward I should define 'night' as 11PM - 4AM. I will not only want to look at the overall daytime usage, but also isolate the weekday and weekend usage, since I'd expect the building to be unoccupied on weekends. Once again I get to leverage the handy DatetimeIndex structure, which holds a weekday component. This component is an integer which reads as follows:

| Mon | Tue | Wed | Thu | Fri | Sat | Sun |
|-----|-----|-----|-----|-----|-----|-----|
| 0   | 1   | 2   | 3   | 4   | 5   | 6   |

So the weekdays correspond to days 0-4 while weekends are days 5-6.

    day_bools = (df_energy.index.hour >= 5) & (df_energy.index.hour < 20)
    df_day = df_energy[day_bools]
    df_weekday = df_day[df_day.index.weekday <= 4]
    df_weekend = df_day[df_day.index.weekday > 4]

You cannot simply join the conditionals with the and keyword that Python would normally understand. If you did, it would try to evaluate the entire array to either just a true or false value. Instead you use the & operator to perform an item-wise comparison of all individual element pairs within the boolean arrays.

EDIT: In hindsight, it would be simpler to use the pandas.DataFrame.between_time function here.

Now to look at the plots. For the overall daytime power distribution we get:

Another bimodal distribution. At first I assumed this was just because of the distinction between weekday and weekend usage. But the weekend is only 2 days, so you would expect the low-power peak to be much smaller than the high-power peak. When we look at just the weekend usage we get:

It's a weird shape, but at least it's unimodal. Looking at the weekday usage, things get weirder. The shape is still bimodal. That means there's some factor here that's still causing a split in the data. Then I remembered: this includes the whole year's worth of data. That includes the 180 days of school time, but also holidays, breaks, and summer vacation. So the heap of low power usage must be primarily from vacation. To test this, I can look at two charts: one with a month from the summer, and one with a month during the school year. July is a good summer midpoint, and after looking at the Andover academic calendar it looks like the month of March doesn't have many days off from school. I can extract these with the pandas.DatetimeIndex.month property. In this case the index starts at 1 for Jan and extends to 12 for Dec.

The weekday-daytime power distribution for July:

And for March:

At first I was baffled, but then once again I realized an oversight. I am defining day as the hours from 5AM-8PM. This isn't a good index to use for a school because school doesn't run the entire day. To put this issue to rest, I'm going to isolate March's data between 7:40 and 2:20. I found a pandas.Series.between_time() method in the documentation; I'll use that here and probably change my use of the .hour attribute from earlier. If this doesn't give me a unimodal shape I don't know what will.

I'm satisfied with this result. In the console I printed the dataset that was registering under 250 kW, which consisted of 4 days: 2 of them had no school in 2016, and the other 2 I'm guessing didn't have school in 2015. The rest of the distribution looks nice and symmetric.

## Filters

All of this work has just made it very clear how necessary it is to implement a robust filtering system to let administrators choose which days to include and exclude to isolate days that follow similar power usage models.
The program won't know what kind of data is being fed to it, and without that info it can't do any useful analysis. It's up to the person running the program to be able to separate groups of similar data to individually perform analysis upon. To make this easier, I may implement some feature that detects multiple peaks and looks for patterns within them to suggest how to make further isolations.

Up until this point, most of the programming work has been temporary scripting, but I need to start putting code into usable functions. Filters would be very helpful, so that will be my next task.

## Templating the code

In my semester at Northeastern I learned about the "design recipe" which takes a very systematic approach to programming. It's a good habit to have, and I'll loosely follow it here.

    """
    time_filter: filters data by properties like date and time

    ARGS:
    data        : DataFrame or Series with DateTimeIndex
    *timerange  : Tuple with start and end time strings as HH:MM,
                  or list of such tuples
    *daterange  : Tuple with start and end dates as YYYY-MM-DD,
                  or list of such tuples. Blank Tuple element will
                  default to MIN or MAX
    *exclusions : List of dates to be excluded as YYYY-MM-DD
    *inclusions : List of dates to be explicitly included as YYYY-MM-DD.
                  This will override the daterange property
    *weekdays   : List of integers for days to be included,
                  0 = Mon, 6 = Sun

    *starred parameters are optional
    """

So I don't forget it later, I recently found out that I can easily implement the daterange and exclusions properties with the .loc function. E.g. I can simply type data['2016-03-01':'2016-03-20'] to filter between those dates. This method signature seems to cover all the filtering features that I've needed so far. Tomorrow I'll start making the code for it.
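Just to sketch where that might land, here's a rough, untested draft of the signature above. It leans on the .loc and between_time tricks already mentioned; the daterange handling below assumes a single tuple rather than a list of them, and the overall structure is only one way this could be written.

```python
import numpy as np
import pandas as pd

def time_filter(data, timerange=None, daterange=None, exclusions=None,
                inclusions=None, weekdays=None):
    """Rough draft of the filter described above. `data` must have a DatetimeIndex."""
    out = data

    # clock-time window(s), e.g. ('07:40', '14:20') or a list of such tuples
    if timerange is not None:
        ranges = [timerange] if isinstance(timerange, tuple) else timerange
        out = pd.concat([out.between_time(start, end) for start, end in ranges]).sort_index()

    # explicit inclusions override the date range
    if inclusions is not None:
        out = out[out.index.normalize().isin(pd.to_datetime(inclusions))]
    elif daterange is not None:
        start, end = daterange
        out = out.loc[(start or None):(end or None)]

    if exclusions is not None:
        out = out[~out.index.normalize().isin(pd.to_datetime(exclusions))]

    # weekday filter: 0 = Mon ... 6 = Sun
    if weekdays is not None:
        out = out[np.isin(out.index.weekday, weekdays)]

    return out

# e.g. March school hours, weekdays only:
# df_march = time_filter(df_energy, timerange=('07:40', '14:20'),
#                        daterange=('2016-03-01', '2016-03-31'),
#                        weekdays=[0, 1, 2, 3, 4])
```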
Industrial Utility Efficiency # Determining the Economic Value of Compressed Air Measurement Systems A common adage that has been quoted many times in this journal is: “If you don’t measure it, you can’t manage it.” This is partly true. It assumes that managers are willing and able to manage the costs and reliability of their compressed air system. Without data, however, they can’t do an effective job. But because managers are at times already overwhelmed with data, more data doesn’t automatically make them a better manager. A better way of saying it is: “Appropriate measurement can make you a better manager.” For this article, I will focus on the energy cost of a compressed air system, so the measurement system will need to justify itself by helping to manage cost. But what do we mean by “managing cost?” Usually costs are managed at the project level. In other words, they are capital costs. Project engineers are held to very tight budgeting constraints for capital costs. They have a multi-stage approval process, and are very methodical. If that project is a project justified by energy savings, it has one other important number — energy savings. Who is watching over those costs? Is there anyone in the entire organization charged with making sure project energy cost savings are actually sustained over time? In some organizations, this is beginning to show up in actual job descriptions. For instance, the job title of “energy manager.” In the past, energy managers were more concerned with energy purchasing (getting the best deal for each energy source) and capitalizing on incentives. Now, more and more energy managers are equally concerned with maintaining energy savings at the process level. ### Supporting Your House of Cards Without process-level energy management, the entire house of cards falls. If all of your projects are unmanaged once they are started up, then you can’t manage the aggregate energy used at the meter. The energy savings numbers become merely an abstraction, a way to get what you really want — an incentive check or a project approval. Once you get that, there might be little incentive to determine if you’re actually saving the money you thought you were. This article will be very helpful for those who want or need to know. Properly implemented, a compressed air measurement system is partly an insurance policy. It is a way to ensure that you are still getting what you initially thought you were going to get. In other words, it is a way to prevent your project benefits from evaporating into thin air. Beyond being an insurance policy, it is a management tool for maintaining and increasing system reliability, which can pay for the project quicker than sustained energy savings. This article will show one method to place a real dollar value on a compressed air measurement system. Using an actual example, it will show you how to define an appropriate budget for a measurement system. Subsequent articles will explain how to select components, design systems, and use the data from them. ### Definitions Before I launch into the article, let me define several key terms: Energy Management Information System (EMIS): “Energy Management Information Systems (EMIS) are software tools that store, analyze, and display energy consumption data.”1 This is in contrast to Energy Management Control Systems (EMCS) that include control. Supervisory Control and Data Acquisition System (SCADA): An industrial computer system that monitors and controls a process. 
Net Present Value: The current worth of a future sum of money or stream of cash flows given a specified rate of return, less the initial investment. Future cash flows are discounted at the discount rate, and the higher the discount rate, the lower the present value of the future cash flows.

Key Performance Indicator (KPI): This is a calculated value, based on real-time monitoring, that indicates the energy (or other) performance of a system.

Commissioning: "Commissioning ensures that the new building [in this case, process] operates initially as the owner intended and that building staff are prepared to operate and maintain its systems and equipment."2 It can also mean to verify the energy savings of a project, but technically that is measurement and verification.

Measurement and Verification (M&V): The process for quantifying savings delivered by an energy conservation measure.

### Recommended Methodology for Valuing a Compressed Air EMIS

In a nutshell, I recommend an economic analysis of the declining savings that will probably occur without an effective EMIS. The recommended steps are as follows:

1. Acknowledge that compressed air projects decline in savings over time. Design your project to minimize slippage.
2. Properly commission your compressed air improvement project.
3. Establish KPIs and benchmarks from commissioning data.
4. Calculate the net present value of the project over the life of the project.
5. Calculate the net present value of the project with the declining value of energy savings, if the project is unmanaged.
6. Value the measurement and management system budget as a percentage of the difference between the two.

### Compressed Air Project Savings Can "Slip Away"

I know we all love to work in that imaginary world where our spreadsheets are "reality," and we don't want to be confused with the facts. Then we step into that other world where chaos reigns. Most of the time, it is just too much work to connect the dots between the two. Let me help you a bit by describing one project and why its energy savings declined.

### Project Example: Paper Mill Integration

This project integrated three 700-hp centrifugal compressors, consolidated dryers, and added a screw compressor (See Figure 1).

Figure 1: Large System Schematic.

| Projected Savings vs. Realized Savings After 5 Years | |
|---|---|
| Projected Energy Savings | 5,500,000 kWh/yr ($220k/yr) |
| Actual Energy Savings (2009) | 4,200,000 kWh/yr ($168k/yr) |
| Reduced Savings (2014) | 2,400,000 kWh/yr ($96k/yr) |
| Project Cost | $1,000,000 |
| Slippage Over 5 Years | $72k/yr |
| Time Until Savings Disappear: | |
| Additional Slippage per Year | $14k |
| Years Until Savings are Gone | 7 more |

A million-dollar project's savings can evaporate in 12 years or less. Fortunately, there are other project benefits. However, the one that was used to finance the project will be gone. Currently, we are tuning up the system, and hope to not only recover the lost savings, but improve from the commissioned state by 1,200,000 kWh/yr ($48k/yr), or over 10 percent.

### What Caused the Savings to Evaporate?

In this case, the following reasons became evident during the tune-up assessment:

• Staff Turnover: The programmer who set up the PLC controls left the company, and the new staff members were not aware of the correct algorithm.
• Incomplete Implementation: Several items relating to comprehensive control were not implemented in the initial project, which led to its being "fragile" to small changes. It was not a "robust" design.
• Lack of Monitoring System: They had current and flow meters, but no total system performance metric. They had no idea that all four compressors did not need to run.
• Demand-Side Measures Were Undone: The blowers used for air-bar replacement were not reliable. Inlet filters clogged.

My point is that compressed air systems are not static. They will slip in savings — period. It is simply a matter of time. If it isn't for the above reasons, some other factors will raise their ugly heads. The world will never be perfect, even if the project design seems to be perfect. You need a measurement and management system in place to correct it when it happens.

### Steps to Properly Commission Your Compressed Air Project

1. Comprehensively Measure the System

This doesn't mean to overdo it. It just means that every compressor needs to be monitored for energy input, and there needs to be some reasonable metric that total flow can be derived from. Although it is ideal to have monitoring embedded into the capital project, it is hard to get accomplished with all the other aspects of the project. The minimum measurement is as follows:

• Compressor current (power is better but not always practical).
• Indication of compressor flow percent: This depends on the compressor. An actual flow meter is preferred, but not always possible during commissioning.
• Pressure at the compressor control point(s): Pressure in a different location will not tell you how the compressor controls are actually performing.

2. Calculate System Performance

The following can be derived from the data:

• System total flow and power (See Figure 2).
• System efficiency curve — power versus flow: This shows the "turn-down" of the system. Does it shave power as flow reduces? (See Figure 3).
• Control plots: These show if the compressor load controls are operating correctly and efficiently (See Figure 4).

Figure 2: Overall System Performance Data
Figure 3: System Efficiency Curve
Figure 4: Compressor Controls Plot

### Establish Key Performance Indicators (KPIs) and Benchmarks From Commissioning Data

1. The following KPIs were developed from the commissioning data:

| KPI | Value |
|---|---|
| Total Flow | 5767 scfm |
| Total Power | 1225 kW (total, incl. dryers) |
| System Efficiency | 4.71 scfm/kW |
| System Efficiency Slope | 15.5 kW/100 scfm |

The dead load was unknown; there was no downtime in the dataset.

2. The following benchmarks are available for this type of system and load:

| Benchmark | Value |
|---|---|
| System Efficiency | 5.5 scfm/kW for an optimal system |
| System Efficiency Slope | 18.2 kW/100 scfm for an optimal system |

### Calculate Net Present Value of Project with Initial Energy Savings

Energy savings are basically a cash-flow stream. You want to keep it flowing full-bore, providing benefit to your bottom line. See Table 1 for the project economics. I want to highlight one number, the net present value (NPV) of the project, which is $674k.

Table 1: Original Project Economics

This project's stream is drying up, and by year 12, it will be a dry ditch. See Table 2 for the project economics. I want to highlight the NPV of the project, which is $300k. That is $374k less than the initial projections. If savings are increased by just 3 percent/year, that goes up to $508k.
Table 2: Declining Savings Project Economics

### Determine Value for Measurement and Management System Budget

So how much would you pay to greatly increase the probability of not losing $400k, or increasing the chance of gaining another $100k? In my view, it is worth from 10 to 20 percent of that, or $50 to $100k. For a system this size, that is a more than adequate budget for a robust compressed air SCADA system that includes an EMIS. Training needs to be included in that budget so it becomes part of your management system. The cost of integrating it into your existing IT system also needs to be included. It can be spent through direct hardware, software and staffing within your own facility's department. It can also be done completely separately, even as a performance contract, with leased equipment in conjunction with a service contract. At the end of the day, it's your money, and you make the call on how to spend it. If it were mine, I would want to do it right by installing a robust SCADA with an EMIS overlay. I would use it as a model for other processes in the plant.

References

1. Page i, Inventory of Industrial Energy Management Information Systems (EMIS) for M&V Applications. Prepared by PECI for the Northwest Energy Efficiency Alliance. June 2014.
2. Building Commissioning. Building Technology and Urban Systems Division, Energy Technologies Area, Berkeley Lab, Department of Energy. http://cx.lbl.gov/cx.html

For more information, contact Tim Dugan, P.E., President, Compression Engineering Corporation, tel: (503) 520-0700 or visit www.comp-eng.com.
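To illustrate the kind of comparison behind Tables 1 and 2, here is a rough sketch of the two NPV calculations. The discount rate, project life, and slippage schedule below are assumptions chosen only for the illustration; they are not the article's actual inputs, which sit in the tables themselves.

```python
# Rough sketch of the sustained-savings vs. unmanaged-savings NPV comparison.
# Discount rate, project life and the slippage schedule are illustrative
# assumptions, not the article's actual inputs.

def npv(cashflows, rate):
    """Net present value of yearly cash flows received at the end of years 1..N."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows, start=1))

project_cost = 1_000_000
rate = 0.08          # assumed discount rate
years = 15           # assumed project life

sustained = [168_000] * years                                      # savings held at the commissioned level
unmanaged = [max(168_000 - 14_400 * y, 0) for y in range(years)]   # ~$14k/yr slippage, floored at zero

print("NPV with sustained savings :", round(npv(sustained, rate) - project_cost))
print("NPV with unmanaged slippage:", round(npv(unmanaged, rate) - project_cost))
```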
# The balance sheets for a company, along with additional information, are provided below

The balance sheets for a company, along with additional information, are provided below.

Balance Sheets
December 31, Year 2 and Year 1

| | Year 2 | Year 1 |
|---|---|---|
| **Assets** | | |
| Current assets: | | |
| Cash | 156,250 | 171,500 |
| Accounts receivable | 74,000 | 87,000 |
| Inventory | 85,000 | 71,000 |
| Prepaid rent | 2,000 | 1,000 |
| Long-term assets: | | |
| Land | 430,000 | 430,000 |
| Equipment | 720,000 | 620,000 |
| Accumulated depreciation | (400,000) | (248,000) |
| Total assets | $1,067,250 | $1,132,500 |
| **Liabilities and Stockholders' Equity** | | |
| Current liabilities: | | |
| Accounts payable | 89,000 | 76,000 |
| Interest payable | 6,750 | 13,500 |
| Income tax payable | 6,000 | 4,000 |
| Long-term liabilities: | | |
| Notes payable | 112,500 | 225,000 |
| Stockholders' equity: | | |
| Common stock | 650,000 | 650,000 |
| Retained earnings | 203,000 | 164,000 |
| Total liabilities and stockholders' equity | $1,067,250 | $1,132,500 |

Additional Information for Year 2:
1. Net income is $59,000.
2. The company purchases $100,000 in equipment.
3. Depreciation expense is $152,000.
4. The company repays $112,500 in notes payable.
5. The company declares and pays a cash dividend of $20,000.

Required: Prepare the statement of cash flows using the indirect method. (List cash outflows and any decrease in cash as negative amounts.)

Statement of Cash Flows
For the Year Ended December 31, Year 2
Cash Flows from Operating Activities
Adjustments to reconcile net income to net cash flows from operating activities
Net cash flows from operating activities
Cash Flows from Investing Activities
Net cash flows from investing activities
Cash Flows from Financing Activities
Net cash flows from financing activities
Cash at the beginning of the period
Cash at the end of the period

## Answers
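One way to complete the required statement from the figures above is sketched below. This is a worked illustration rather than the originally posted answer; each adjustment follows from the year-over-year change in the corresponding balance sheet line.

Statement of Cash Flows
For the Year Ended December 31, Year 2

Cash Flows from Operating Activities
Net income: $59,000
Adjustments to reconcile net income to net cash flows from operating activities:
  Depreciation expense: 152,000
  Decrease in accounts receivable: 13,000
  Increase in inventory: (14,000)
  Increase in prepaid rent: (1,000)
  Increase in accounts payable: 13,000
  Decrease in interest payable: (6,750)
  Increase in income tax payable: 2,000
Net cash flows from operating activities: $217,250

Cash Flows from Investing Activities
  Purchase of equipment: (100,000)
Net cash flows from investing activities: $(100,000)

Cash Flows from Financing Activities
  Repayment of notes payable: (112,500)
  Payment of cash dividends: (20,000)
Net cash flows from financing activities: $(132,500)

Net decrease in cash: $(15,250)
Cash at the beginning of the period: 171,500
Cash at the end of the period: $156,250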
In the previous blog post we discussed some theory of how to select optimal and possibly optimal interventions in a causal framework. For those interested in the decision science, this blog post may be more inspiring. This next task involves applying counterfactual quantities to boost learning performance. This is clearly very important for an RL agent whose entire learning mechanism is based on interventions in a system. What if intervention isn't possible? Let's begin!

## This Series

1. Causal Reinforcement Learning
2. Preliminaries for CRL
3. CRL Task 1: Generalised Policy Learning
4. CRL Task 2: Interventions – When and Where?
5. CRL Task 3: Counterfactual Decision Making
6. CRL Task 4: Generalisability and Robustness
7. Task 5: Learning Causal Models
8. (Coming soon) Task 6: Causal Imitation Learning
9. (Coming soon) Wrapping Up: Where To From Here?

## Counterfactual Decision Making

A key feature of causal inference is its ability to deal with counterfactual queries. Reinforcement learning, by its nature, deals with interventional quantities in a trial-and-error style of learning. Perhaps the most obvious question is: how can we implement RL in such a way as to deal with counterfactual and other causal factors in uncertain environments?

In the preliminaries section we discussed the notion of a multi-armed bandit and its associated policy regret. This is a natural starting point for merging reinforcement learning and causal inference theory to solve counterfactual decision problems. We now discuss some interesting work done in this area, especially in the context of unobserved confounders in decision frameworks.

Obviously, the goal of optimising a policy is to minimise the regret an agent experiences. In addition, it should achieve this minimum regret as soon as possible in the learning process. Auer et al. [25] show that policies can achieve logarithmic regret uniformly over time. Applying a policy they call UCB2 with input parameter $0 < \alpha < 1$, the authors show that the expected regret after any number $n \geq \max_{i:\mu_i < \mu^\ast} \frac{1}{2 \Delta_i^2}$ of plays is bounded by a sum, over the suboptimal arms $i$ with $\mu_i < \mu^\ast$, of terms that grow only logarithmically in $n$.

In the MABUC setting of [21], an unobserved confounder drives both the arm the agent naturally wants to pull (its intent) and the arms' payouts. There, the agent should deviate from its intended arm (say, intent $X = 1$) exactly when the counterfactual payout of the other arm, given that intent, beats the payout it observes when acting on its intent:

$\mathbb{E}(Y_{X=0} = 1 \mid X = 1) > \mathbb{E}(Y_{X=1} = 1 \mid X = 1) \Leftrightarrow \mathbb{E}(Y_{X=0} = 1 \mid X = 1) > P(Y = 1 \mid X = 1),$

where the equivalence holds because, by consistency, $\mathbb{E}(Y_{X=1} = 1 \mid X = 1) = P(Y = 1 \mid X = 1)$.

All this says is that we should consider the intent of the agent and consider how the causal system is trying to exploit this intent. The agent should acknowledge its own intent and modify its policy accordingly. The authors propose incorporating this into Thompson Sampling [27] to form Causal Thompson Sampling ($TS^C$), shown in the algorithm below. This algorithm leverages observational data to seed the algorithm and adds a rule to improve the arm exploration in the MABUC case. The algorithm is shown to dramatically improve convergence and reduce regret relative to traditional methods under certain scenarios. Comments have been added to make the algorithm self-contained. The reader is encouraged to work through it.

$TS^C$ is empirically shown to outperform conventional methods by converging upon an optimal policy faster and with less regret than the non-causally empowered approach.

Ultimately we are interested in sequential decision making in the context of planning – the cases reinforcement learning addresses. [28] generalises the previous work on MABs to take a causal approach to Markov decision processes (MDPs) with unobserved confounders. In a similar fashion to [21], the authors construct a motivating example and show that MDP algorithms perform sub-optimally when applying conventional (non-causal) methods.
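Before moving on to the sequential setting, the intent-specific quantities above are easy to get a feel for in simulation. The toy below is not the greedy-casino example from [21] or the $TS^C$ algorithm itself; the two-arm setup, the payout numbers, and the randomised-action estimation loop are all invented for illustration. What it shows is why conditioning on intent exposes information about the unobserved confounder that purely observational play never sees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-armed bandit with an unobserved confounder u that drives both the
# agent's natural arm choice (its intent) and the payout rates. All numbers
# here are invented for illustration.
def payout(x, u):
    return rng.random() < (0.8 if x != u else 0.2)   # deviating from u pays better

counts = np.zeros((2, 2))        # indexed by [intent, final action]
wins = np.zeros((2, 2))
for _ in range(100_000):
    u = rng.integers(0, 2)       # unobserved confounder
    i = u                        # the confounded "natural" choice = intent
    x = rng.integers(0, 2)       # final action randomised while the intent is recorded
    counts[i, x] += 1
    wins[i, x] += payout(x, u)

print(wins / counts)             # estimates of E[Y_{X=x} = 1 | intent = i]
# Off-diagonal entries (acting against the intent) come out near 0.8, while the
# diagonal entries sit near 0.2 -- yet an agent that always follows its intent
# only ever observes the diagonal, i.e. P(Y = 1 | X = i).
```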
First, we note why confounding in MDPs is different to confounding in MABs. Confounding in MDPs affects state and outcome variables in addition to action and outcome, and thus requires special treatment. Unlike in MABs, the MDP setting requires maximisation of reward over a long-term horizon (planning). Maximising with respect to the immediate future (greedy behaviour) thus fails to account for potentially superior long-term trajectories in state-action space (long-term strategies). The authors proceed by showing conventional MDP algorithms are not guaranteed to learn an optimal policy in the presence of unobserved confounders. Reformulating MDPs in terms of causal inference, we can show that counterfactual-aware policies outperform purely experimental algorithms.

MDP with Unobserved Confounders (MDPUC) [28]: A Markov Decision Process with Unobserved Confounders is an SCM $M$ with actions $X$, states $S$, and a binary reward $Y$:

1. $\gamma \in [0,1)$ is the discount factor.
2. $U^{(t)} \in U$ is the unobserved confounder at time-step $t$.
3. $V^{(t)} = X^{(t)} \cup Y^{(t)} \cup S^{(t)}$ is the set of observed variables at time-step $t$, where $X^{(t)} \in X$, $Y^{(t)} \in Y$, and $S^{(t)} \in S$.
4. $F = \{ f_x, f_y, f_s \}$ is the set of structural equations relative to $V$ such that $X^{(t)} \leftarrow f_x(s^{(t)}, u^{(t)})$, $Y^{(t)} \leftarrow f_y(x^{(t)}, s^{(t)}, u^{(t)})$, and $S^{(t)} \leftarrow f_s(x^{(t-1)}, s^{(t-1)}, u^{(t-1)})$. In other words, they determine the action, the associated reward, and the next state.
5. $P(u)$ encodes the probability distribution over the unobserved (exogenous) variables $U$.

A key difference to the MABUC case is that different sets of variables can be confounded over the time horizon. The following figure shows the four different ways in which MDPUC variables can be confounded.

The figure above shows (a) an MDPUC with the action (decided by the policy) to reward path, $x^{(t)} \rightarrow y^{(t)}$, confounded; (b) an MDP without confounders, i.e. the traditional RL instance; (c) an MDPUC with the action to reward path, $x^{(t)} \rightarrow y^{(t)}$, and the action to state path, $x^{(t)} \rightarrow s^{(t+1)}$, confounded; (d) an MDPUC with only the action to state path, $x^{(t)} \rightarrow s^{(t+1)}$, confounded. Extracted from [28].

Most literature discussing reinforcement learning will develop the theory in terms of value functions. We develop these notions for the MDPUC case here. This will allow us (in theory) to exploit existing RL literature with relative ease.

Value Functions: Given an MDPUC model $M\langle \gamma, U, X, Y, S, F, P(u) \rangle$ and an arbitrary deterministic policy $\pi$, we can define the value function, starting from state $s^{(t)}$ and following policy $\pi$ thereafter, as:

$V^{\pi}(s^{(t)}) = \mathbb{E}\left[ \sum_{k=0}^{\infty} \gamma^k Y_{x^{([t, t+k])} = \pi}^{(t+k)} \mid s^{(t)} \right].$

The state-action value function, starting from state $s^{(t)}$, taking action $x^{(t)}$, and thereafter following policy $\pi$, is similarly defined:

$Q^{\pi}(s^{(t)}, x^{(t)}) = \mathbb{E}\left[ \sum_{k=0}^{\infty} \gamma^k Y_{x^{(t)}, x^{([t+1, t+k])} = \pi}^{(t+k)} \mid s^{(t)}, x^{(t)} \right].$

The interpretation of these functions is the same as in the usual RL literature. They simply associate a state or state-action pair with an expected reward over all future time steps. Using these definitions, we can derive expressions for the well known Bellman equation and recursive definitions for the state and state-action value functions [12] for the situation where unobserved confounders are present.
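As a sketch of the shape those recursions take (written out here from the definitions above rather than quoted from [28]), the one-step decomposition is the familiar one, with potential-outcome subscripts carrying the interventions:

$Q^{\pi}(s^{(t)}, x^{(t)}) = \mathbb{E}\left[ Y^{(t)}_{x^{(t)}} \mid s^{(t)} \right] + \gamma\, \mathbb{E}\left[ V^{\pi}\big( S^{(t+1)}_{x^{(t)}} \big) \mid s^{(t)} \right], \qquad V^{\pi}(s^{(t)}) = Q^{\pi}\big(s^{(t)}, \pi(s^{(t)})\big).$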
The authors rest their analysis of these results on the axioms of counterfactuals and the Markov property presented in the following theorem. Proofs for the following theorems are available in the source paper [28] and are not included here for brevity.

Markovian Property in MDPUCs [28]: For an MDPUC model $M \langle \gamma, U, X, Y, S, F, P(u) \rangle$, a policy $\pi \in F_{exp} = \{ \pi \mid \pi: S \rightarrow X \}$ (a state-to-action map), and a starting state $s^{(t)}$, if the agent performs action $do(X^{(t)} = x^{(t)})$ at round $t$ and $do(X^{([t+1,t+k])} = \pi)$ afterwards $(k \in \mathbb{Z}^+)$, then the following statement holds:

$P\left(Y_{x^{(t)}, x^{([t+1, t+k])}=\pi}^{t+k}=y^{(t+k)} \mid s_{x^{(t)}}^{(t+1)}, s^{(t)}\right)=P\left(Y_{x^{([t+1, t+k])}=\pi}^{t+k}=y^{(t+k)} \mid s^{(t+1)}\right)$

We can further extend MDPUCs to counterfactual policies by considering $F_{ctf} = \{ \pi \mid \pi: S \times X \rightarrow X \}$, the set of functions between the current state $s^{(t)}$, the intuition of the agent $x^{\prime(t)}$, and the action $x^{(t)}$. Then $V^{(t)} = V^{(t)}(s^{(t)},x^{\prime(t)})$ and $Q^{(t)} = Q^{(t)}(s^{(t)}, x^{\prime(t)}, x^{(t)})$. With this we can derive a remarkable result, encoded as the theorem below.

Theorem: Given an MDPUC instance $M\langle \gamma, U, X, Y, S, F, P(u) \rangle$, let $\pi_{exp}^\ast = \textbf{argmax}_{\pi \in F_{exp}} V^{\pi}(s^{(t)})$ and $\pi_{ctf}^\ast = \textbf{argmax}_{\pi \in F_{ctf}} V^{\pi}(s^{(t)}, x^{\prime(t)})$. For any state $s^{(t)}$, the following statement holds:

$V^{\pi_{exp}^\ast}(s^{(t)}) \leq V^{\pi_{ctf}^\ast}(s^{(t)}).$

In other words, we can never do worse by considering counterfactual quantities (intent). Zhang and Bareinboim [28] continue to implement a counterfactual-aware MORMAX MDP algorithm and empirically show it to be superior to the conventional MORMAX [29] approach in intent-sensitive scenarios. Forney and Bareinboim [11] extend this counterfactual awareness to the design of experiments.

Forney, Pearl and Bareinboim [30] expand upon the ideas presented above by showing that counterfactual-based decision-making circumvents problems of naive randomisation when UCs are present. The formalism presented coherently fuses observational and experimental data to make well-informed decisions by estimating counterfactual quantities empirically. More concretely, they study conditions under which data collected from distinct sources and conditions can be combined to improve the learning performance of an RL agent. The key insight here is that seeing does not equate to doing in the world of data. Applying a developed heuristic, a variant of Thompson Sampling [27] is introduced and empirically shown to outperform previous state-of-the-art agents. The motivating example of (potentially) drunk gamblers from [21] is extended to consider an extra dependence on whether or not all the machines have blinking lights. This yields four combinations of the states of sobriety and blinking machines.

We start by noticing that the interventional quantity $\mathbb{E}[Y \mid do(X=x)]$ can be written in terms of counterfactuals as $\mathbb{E}[Y_{X=x}] = \mathbb{E}[Y_x]$ – that is, the expected value of $Y$ had $X$ been $x$. Applying the law of total probability, we arrive at the useful representation

$\mathbb{E}[Y_x] = \mathbb{E}[Y_x \mid x_1]P(x_1) + \dots + \mathbb{E}[Y_x \mid x_K]P(x_K).$

$\mathbb{E}[Y_x]$ is interventional by definition.
$\mathbb{E}[Y_x \mid x^\prime]$ is either observational or counterfactual depending on whether $x=x^\prime$ or $x \neq x^\prime$ respectively. That is, whether or not $x^\prime$ has occurred. As we noted earlier, counterfactual quantities are, by their nature, not empirically available in general. Interestingly, however, the intents of the agent contain information about the agent's decision process and can reveal encoded information about unobserved confounders, as we noted in the MAB case earlier. Applying randomisation to intent conditions (with RDC) can allow computation of counterfactual quantities. In this way, observational data actually adds information to the seemingly more informative interventional data. This counterfactual quantity – the one that is not naturally realisable – is often referred to as the effect of the treatment on the treated [31], by way of the fact that doctors cannot retroactively observe how changing the treatment would have affected a specific patient – well, they shouldn't!

Estimation of ETT [30]: The counterfactual quantity referred to as the effect of the treatment on the treated (ETT) is empirically estimable for any number of action choices when agents condition on their intent $I=i$ and estimate the response $Y$ to their final action choice $X=a$.

Proof: The ETT counterfactual quantity can be written as $\mathbb{E}[Y_{X=a} \mid X = i]$. Applying the law of total probability and the conditional independence relation $Y_x \perp X \mid I$, we have:

\begin{aligned} \mathbb{E}\left[Y_{X=a} \mid X=i\right] &=\sum_{i^{\prime}} \mathbb{E}\left[Y_{X=a} \mid X=i, I=i^{\prime}\right] P\left(I=i^{\prime} \mid X=i\right) \\ &=\sum_{i^{\prime}} \mathbb{E}\left[Y_{X=a} \mid I=i^{\prime}\right] P\left(I=i^{\prime} \mid X=i\right) \end{aligned}

Now, we notice that $I_x = I$ because in $G_{\overline{X}}$ (the graph with edges into $X$ removed) we have $(I \perp X)_{G_{\overline{X}}}$. Thus,

\begin{aligned} \mathbb{E}\left[Y_{X=a} \mid X=i\right] &=\sum_{i^{\prime}} \mathbb{E}\left[Y_{X=a} \mid I=i^{\prime}\right] P\left(I=i^{\prime} \mid X=i\right) \\ &=\sum_{i^{\prime}} \mathbb{E}\left[Y_{X=a} \mid I_{x=a}=i^{\prime}\right] P\left(I=i^{\prime} \mid X=i\right) \\ &=\sum_{i^{\prime}} \mathbb{E}\left[Y \mid d o(X=a), I=i^{\prime}\right] P\left(I=i^{\prime} \mid X=i\right) \end{aligned}

where the last line follows since all quantities are in relation to the same variable $x$ and so the counterfactual quantity can be written as an interventional quantity. We now notice that, since $P(I = i^\prime \mid X = i)$ is observational, the action taken always matches the intent, so this probability is $1$ when $i^\prime = i$ and $0$ otherwise. We can thus rewrite it as an indicator function. The result follows.

\begin{aligned} \mathbb{E}\left[Y_{X=a} \mid X=i\right] &=\sum_{i^{\prime}} \mathbb{E}\left[Y \mid d o(X=a), I=i^{\prime}\right] P\left(I=i^{\prime} \mid X=i\right) \\ &=\sum_{i^{\prime}} \mathbb{E}\left[Y \mid d o(X=a), I=i^{\prime}\right] \mathbb{1}_{i^{\prime}=i} \\ &=\mathbb{E}[Y \mid d o(X=a), I=i] \end{aligned}

[30] makes use of this result to suggest heuristics for learning counterfactual quantities from (possibly noisy) experimental and observational data.

This figure shows the counterfactual possibilities for different actions and action-intents. The diagonal indicates the counterfactual quantities that have occurred. We can apply knowledge of other counterfactual quantities to learn about other possible counterfactuals. (B) indicates cross-intent learning. (C) indicates cross-arm learning. Extracted from [30].
1. Cross-Intent Learning: Thinking about these equations carefully, we notice that since they hold for every arm, we have a system of equations for outcomes conditioned on different intents. Thus, to find $\mathbb{E}[Y_{x_r} \mid x_w]$ we can learn it from the other intent conditions:

$\mathbb{E}[Y_{x_r} \mid x_w] = \left[\mathbb{E}[Y_{x_r}] - \sum_{i \neq w}^K \mathbb{E}[Y_{x_r} \mid x_i] P(x_i)\right] / P(x_w).$

2. Cross-Arm Learning: Similarly to (1), we can observe that given information about two different arms under the same intent, we can learn about a third arm under the same intent. We have

$P\left(x_w\right)=\frac{E\left[Y_{x_{r}}\right]-\sum_{i \neq w}^{K} E\left[Y_{x_{r}} \mid x_{i}\right] P\left(x_{i}\right)}{E\left[Y_{x_{r}} \mid x_{w}\right]},$

which is the same as writing

$P\left(x_w\right)=\frac{E\left[Y_{x_{s}}\right]-\sum_{i \neq w}^{K} E\left[Y_{x_{s}} \mid x_{i}\right] P\left(x_{i}\right)}{E\left[Y_{x_{s}} \mid x_{w}\right]}.$

Combining these results and solving for $\mathbb{E}[Y_{x_r} \mid x_w]$ we find:

$E\left[Y_{x_{r}} \mid x_{w}\right]=\frac{\left[E\left[Y_{x_{r}}\right]-\sum_{i \neq w}^{K} E\left[Y_{x_{r}} \mid x_{i}\right] P\left(x_{i}\right)\right] E\left[Y_{x_{s}} \mid x_{w}\right]}{E\left[Y_{x_{s}}\right]-\sum_{i \neq w}^{K} E\left[Y_{x_{s}} \mid x_{i}\right] P\left(x_{i}\right)}.$

This estimate is not robust to noise in the samples. We can take a pooling approach and account for the variance of the reward payouts by applying an inverse-variance-weighted average:

$E_{X A r m}\left[Y_{x_{r}} \mid x_{w}\right]=\frac{\sum_{i \neq r}^{K} h_{XArm}\left(x_{r}, x_{w}, x_{i}\right) / \sigma_{x_{i}, x_{w}}^{2}}{\sum_{i \neq r}^{K} 1 / \sigma_{x_{i}, x_{w}}^{2}},$

where $h_{XArm}\left(x_{r}, x_{w}, x_{s}\right)$ evaluates the cross-arm learning equation above, and $\sigma_{x, i}^{2}$ denotes the reward variance for arm $x$ under intent $i$ (so $\sigma_{x_{i}, x_{w}}^{2}$ is the reward variance for arm $x_i$ under intent $x_w$).

3. Combined Approach: Of course, we can combine the previous two approaches by sampling (collecting) estimates during execution and applying the cross-intent and cross-arm learning strategies. A fairly straightforward derivation yields a combined-approach formula:

$E_{\text {combo}}\left[Y_{x_{r}} \mid x_{w}\right]=\frac{\alpha}{\beta}, \quad \text{where}$

$\alpha = E_{\text {samp}}\left[Y_{x_{r}} \mid x_{w}\right] / \sigma_{x_{r}, x_{w}}^{2}+E_{\text {XInt}}\left[Y_{x_{r}} \mid x_{w}\right] / \sigma_{X \text {Int}}^{2} +E_{\text {XArm}}\left[Y_{x_{r}} \mid x_{w}\right] / \sigma_{X A r m}^{2}$

$\beta = 1 / \sigma_{x_{r}, x_{w}}^{2}+1 / \sigma_{X \text {Int}}^{2}+1 / \sigma_{X A r m}^{2}$

This figure shows the flow and process of fusing data using counterfactual reasoning as outlined in this section. The agent employs both the history of interventional and observational data to compute counterfactual quantities. Along with its intended action, the agent makes a counterfactual- and intent-aware decision to account for unobserved confounders and make use of the available information. Figure extracted from [30].

The authors proceed to experimentally verify that this data-fusion approach, applied to Thompson Sampling, results in significantly less regret than competitive MABUC algorithms. The (conventional) gold standard for dealing with unobserved confounders involves randomised controlled trials (RCTs) [32], especially useful in medical drug testing, for example. As we noted in the data-fusion and earlier MABUC processes, randomisation of treatments may yield population-level treatment outcomes but can fail to account for individual-level characteristics.
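The pooling step above is straightforward to operationalise. Here is a minimal sketch of the inverse-variance-weighted fusion; the function name, inputs, and example numbers are mine, not from [30], and the component estimates and variances are assumed to be computed elsewhere.

```python
import numpy as np

def pooled_ett(est_samp, est_xint, est_xarm, var_samp, var_xint, var_xarm):
    """Inverse-variance-weighted fusion of three estimates of E[Y_{x_r} | x_w]:
    the direct sample estimate, the cross-intent estimate and the cross-arm
    estimate. Illustrative sketch only; the component estimates and their
    variances are assumed to be supplied by the caller."""
    estimates = np.array([est_samp, est_xint, est_xarm], dtype=float)
    weights = 1.0 / np.array([var_samp, var_xint, var_xarm], dtype=float)
    return float(np.sum(weights * estimates) / np.sum(weights))

# Three noisy views of the same counterfactual payout, the noisiest weighted least:
print(pooled_ett(0.42, 0.47, 0.45, var_samp=0.010, var_xint=0.020, var_xarm=0.050))
```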
The authors provide a motivating example in the domain of personalised medicine, or the effect of the treatment on the treated, motivated by Stead et al.’s [33] observation that “Despite their attention to evidence, studies repeatedly show marked variability in what healthcare providers actually do in a given situation.” – Stead et al. [33] They proceed to formalise the existence of different treatment policies of actors in confounded decision-making scenarios. This new theory is applied to generalise RCT procedures to allow recovery of individualised treatment effects from existing RCT results. Further, they present an online algorithm which can recommend actions in the face of multiple treatment opinions in the context of unobserved confounders. For the sake of clarity, the motivating example is now briefly presented. The reader is encouraged to refer to the source material [26] for further details. Consider two drugs which appear to be equally effective under an FDA-approved RCT. In practice, however, one physician finds agreement with the RCT results while another does not. Let us consider two potential unobserved confounders – socioeconomic status (SES) and the patient’s treatment request (R). Crucially, we note that juxtaposing observational and experimental data fails to reveal these invisible confounders. Key to this confounded decision making (CDM) scenario is that the deciding agent (physician) does not possess a fully specified causal model (in the form of an SCM). This formalism was used to define a regret decision criterion (RDC) for optimising actions in the face of unobserved confounders. In the physician motivating example we just discussed, the intent-specific recovery rates of the first physician do not appear to differ from the observational or interventional recovery rates. The RDC results for the second physician are only slightly off the data, at 72.5%. What is happening here? The key here is the heterogeneous intents of the agents. We now develop theory to account for and exploit the information implicitly contained in the multiple intents of agents.

Heterogeneous Intents [26]: Let $A_1$ and $A_2$ be two actors within a CDM instance, and let $M^{A_1}$ be the SCM associated with the choice of policies of $A_1$ and likewise $M^{A_2}$ of $A_2$. For any decision variable $X \in \Pi_M$ and its associated intent $I = f_x$, the actors are said to have heterogeneous intent if $f_I^{A_1} \in F_M^{A_1}$ and $f_I^{A_2} \in F_M^{A_2}$ are distinct.

Acknowledging the possibility of heterogeneous intents in deciding actors (such as second opinions), we can extend the notion of SCMs. The figure below shows a model for combining intents of different agents.

HI Structural Causal Model (HI-SCM) [26]: A heterogeneous intent structural causal model $M^{\boldsymbol{A}}$ is an SCM that combines the individual SCMs of actors $\boldsymbol{A} = \{ A_{1}, \dots, A_{a} \}$ such that each decision variable in $M^{\boldsymbol{A}}$ is a function of the actors’ individual intents.

This figure shows an example of an HI-SCM. Here the intents of multiple actors contribute to the decision variable $X_t$. The unobserved confounder $U_t$ affects both the intents and the outcome. The agent history is encoded as $H_t$. Recreated based on figure 1 in [26]. Of course, it would be naive to assume every actor adds valuable information to the total knowledge of the system. With this in mind, we develop the notion of an intent equivalence class.
Intent Equivalence Class (IEC) [26]: In a HI-SCM $M^A$, we say that any two actors $A_i \neq A_j$ belong to separate intent equivalence classes $\boldsymbol{\Phi} = \{ \phi_1, \dots, \phi_m \}$ of intent functions $f_I$ if $f_I^{A_i} \neq f_I^{A_j}$.

With this definition in place, we can cluster equivalent actors – and their associated intents – by their IEC. For example, given $\phi_1 = \{ A_1, A_2 \}$ then $P(Y_x \mid I^{A_1}) = P(Y_x \mid I^{A_2}) = P(Y_x \mid I^{A_1}, I^{A_2}) = P(Y_x \mid I^{\phi_1}).$ It turns out that IEC-specific optimisation always performs at least as well as each actor’s individual optimal action.

IEC-Specific Reward Superiority [26]: Let $X$ be a decision variable in a HI-SCM $M^A$ with measured outcome $Y$, and let $I^{\phi_i}$ and $I^{\phi_j}$ be the heterogeneous intents of two distinct IECs $\phi_i, \phi_j$ in the set of all IECs in the system, $\boldsymbol{\Phi}$. Then $\max_{x \in X} P(Y_x \mid I^{\phi_i}) \leq \max_{x \in X} P(Y_x \mid I^{\phi_i}, I^{\phi_j}) \quad \forall \phi_i, \phi_j \in \boldsymbol{\Phi}.$

Proof: WLOG, consider the case where the decision $X$ and the intents are binary. Let $x^* = \textbf{argmax}_{x \in X} P(Y_x \mid I^{\phi_i} = i^{\phi_i}).$ Then $P\left(Y_{x^{*}} \mid I^{\phi_{i}}=i^{\phi_{i}}\right)>P\left(Y_{x^{\prime}} \mid I^{\phi_{i}}=i^{\phi_{i}}\right)$ $\Longrightarrow \sum_{i^{\phi_{j}}} P\left(Y_{x^{*}} \mid I^{\phi_{i}}=i^{\phi_{i}}, I^{\phi_{j}}=i^{\phi_{j}}\right) P\left(I^{\phi_{j}}=i^{\phi_{j}} \mid I^{\phi_{i}}=i^{\phi_{i}}\right)$ $>\sum_{i^{\phi_{j}}} P\left(Y_{x^{\prime}} \mid I^{\phi_{i}}=i^{\phi_{i}}, I^{\phi_{j}}=i^{\phi_{j}}\right) P\left(I^{\phi_{j}}=i^{\phi_{j}} \mid I^{\phi_{i}}=i^{\phi_{i}}\right)$ Letting $p = P\left(I^{\phi_{j}}=i^{\phi_{j}} \mid I^{\phi_{i}}=i^{\phi_{i}}\right)$, and writing $a, b$ (respectively $c, d$) for the two $I^{\phi_j}$-specific terms of the sum for $x^*$ (respectively $x^\prime$), we can rewrite the above inequality as $ap + b(1-p) > cp + d(1-p)$ (corrected mistake from the proof in the appendix of the source). We can write it this way since we are considering the binary case: if one intent value occurs, the other necessarily does not. We can then exhaust the cases: 1. $p = 0 \implies b > d$. 2. $p = 1 \implies a > c$. 3. $p \in (0,1) \implies \max(a,b) \geq ap + b(1-p)$. Thus, in each case the HI-specific rewards are greater than or equal to the homogeneous-intent-specific rewards.

With this important result in place, we can develop new criteria for decision making in a CDM with heterogeneous intents.

HI Regret Decision Criteria (HI-RDC) [26]: In a CDM scenario modelled by a HI-SCM $M^A$ with treatment $X$, outcome $Y$, actor intended treatments $I^{A_i}$, and a set of actor IECs $\boldsymbol{\Phi} = \{ \phi_1, \dots, \phi_m \}$, the optimal treatment $x^\ast \in X$ maximises the IEC-specific treatment outcome. Formally: $x^\ast = \textbf{argmax}_{x \in X} P(Y_x \mid I^{\phi_1}, \dots, I^{\phi_m}).$

The authors point out that HI-RDC requires knowledge of IECs, which are not always obvious. This motivates the need for empirical means of clustering the actors into equivalence classes. Since these intents are observational (they are indicated by what naturally occurs), sampling and grouping by the following criteria suffices.

Empirical IEC Clustering Criteria [26]: Let $A_i$, $A_j$ be two agents modelled by a HI-SCM, and let their associated intents be $I^{A_i}$, $I^{A_j}$ for some decision. We cluster agents into the same IEC, $\{ A_i, A_j \} \in \phi_r$, whenever their intended actions over the same units correlate; their intent-specific treatment outcomes will then agree.
Correlation is indicated by $\rho$, as is common in the statistics literature. Formally: \begin{aligned} \rho\left(I^{A_{i}}, I^{A_{j}}\right)=1 & \Longrightarrow\left\{A_{i}, A_{j}\right\} \in \phi_{r} \in \Phi \\ & \Longrightarrow P\left(Y_{x} \mid I^{A_{i}}\right)=P\left(Y_{x} \mid I^{A_{i}}, I^{A_{j}}\right) \end{aligned} The authors argue that this condition can be too strict in practice and should be softened to allow for some actor-specific noise.

Applying HI-RDC and empirical IEC clustering directly to the online recommendation system presented in [21] is possible but not necessarily always practical or recommended because (1) the ethics of exploring different treatment options is not always clear and (2) if UCs are present in the system and the treatment has passed experimental testing, this implies that the UCs have gone unnoticed and we – the data analysts – wouldn’t necessarily know to look for them there. This motivates the need for an extension to RCTs that involves heterogeneous intents.

HI Randomised Control Trial (HI-RCT) [26]: Let $X$ be the treatment of the RCT in which all participants of the trial are randomly assigned to some experimental condition. In other words, they have been intervened on via $do(X=x)$ with some measured outcome $Y$. Let $\boldsymbol{\Phi}$ be the set of all IECs for agents in the HI-SCM $M^A$ to which the RCT is meant to apply. A Heterogeneous Intent RCT (HI-RCT) is an RCT wherein treatments are randomly assigned to each participant but, in addition, the intended treatments of the sampled agents are collected for each participant.

This figure shows the HI-RCT procedure. The traditional RCT procedure is found by following the orange path. HI-RCT adds an additional layer of actor-intent collection over and above the traditional RCT procedure. Figure extracted from [26].

This procedure yields actor IECs as well as experimental, observational, counterfactual and HI-specific treatment effects. Finally, we can implement a criterion which reveals confounding beyond what a simple mix of observational and interventional data can expose.

HI-RDC Confounding Criteria [26]: Consider a CDM scenario modelled by a HI-SCM $M^A$ with treatment $X$, outcome $Y$, actor intended treatments $I^{A_i}$, and a set of actor IECs $\Phi = \{ \phi_1, \dots, \phi_m \}$. Whenever there exists some $x \in X, i^{\Phi} \in I^{\Phi} : P(Y_x) \neq P(Y_x \mid i^{\Phi})$ in $M^A$, then there exists some unobserved $U$ such that $X \leftarrow U \rightarrow Y$.

In addition to the above theory, [26] provides a procedure for an agent to attempt to repair harmful influences of unobserved confounders in an online setting. Once again we come across the Multi-Armed Bandit with Unobserved Confounders (MABUC) in the attempt to maximise the reward (recovery in the physician example) – or minimise the regret – of the decision making process. One problem that arises in the online case is that the IECs the agent learns are not necessarily exhaustive. Let us consider whether we can find a mapping $\Phi_{on} \rightarrow \Phi_{off}$, representing a map from the online to the offline IEC sets respectively. If we can, the HI-RDC reveals the optimal treatment. If not, HI-RCT data can be used to accelerate learning by gathering a calibration unit set – a small sample questionnaire in which actors are asked to provide intended treatments. In the case where HI-RCT IECs correspond to the sampled offline IECs, a mapping can be made.
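As a concrete illustration of the empirical clustering criterion (using the softened, noise-tolerant variant the authors mention), here is a small sketch of my own; the correlation threshold, the data layout and the greedy grouping rule are assumptions made for the example.

```python
# Sketch of empirical IEC clustering (illustrative, not code from [26]).
# Rows of `intents` are actors, columns are units; entries are intended treatments.
# Actors whose intent sequences correlate above a threshold (a softened version of
# rho = 1) are placed in the same intent equivalence class.
import numpy as np

def cluster_iecs(intents: np.ndarray, threshold: float = 0.95) -> list:
    n_actors = intents.shape[0]
    corr = np.corrcoef(intents)
    classes = []
    for a in range(n_actors):
        for cls in classes:
            # join an existing class only if correlated with all of its members
            if all(corr[a, b] >= threshold for b in cls):
                cls.add(a)
                break
        else:
            classes.append({a})
    return classes

# Toy data: actors 0 and 1 intend identically; actor 2 follows a different policy.
intents = np.array([
    [0, 1, 1, 0, 1, 0, 1, 1],
    [0, 1, 1, 0, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1, 0, 0],
])
print(cluster_iecs(intents))   # [{0, 1}, {2}]
```

With the classes in hand, HI-RDC then conditions on one intent per class rather than one per actor, which is exactly the dimensionality reduction the equivalence-class construction is meant to buy.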
This calibration-unit set acts as a sort of ‘bootstrap’ to the HI-RCT procedure by providing initial conditions for the learning process.

Actor Calibration-Set Heuristic [26]: A collection of some $n > 0$ calibration units from an offline HI-RCT dataset $\mathcal{D}$ can be used to learn the IECs of agents in an online domain. Three heuristic scores $h(t) = h_c(t) + h_d(t) + h_o(t)$ can be used to guide the selection procedure:

1. Consistency: how consistent agents in the same IEC $\phi_r = \{ A_1, \dots, A_i \}$ are with their intended treatment, $h_c(t) = \left(\text{number of } I_t^A \in \phi_r \text{ agreeing with the majority}\right) / \lvert\phi_{r}\rvert$.
2. Diversity: how often a configuration of $I^\Phi$ has been chosen, favouring a diverse set of IEC intent combinations, $h_d(t) = 1/\left(\text{number of times } I_t^\Phi \text{ appears in the calibration set}\right)$. This acts by favouring ‘exploration’ of the IEC space.
3. Optimism: a bias towards choosing units in which the randomly assigned treatment $x_t$ was optimal and succeeded, or suboptimal and failed, $h_o(t) = \mathbb{1}\left(P\left(Y_{x_{t}} \mid I_{t}^{\Phi}\right)>P\left(Y_{x^{\prime}} \mid I_{t}^{\Phi}\right)\ \forall x^{\prime} \in X \backslash x_{t}\right) \mathbb{1}\left(Y_{t}=1\right) + \mathbb{1}\left(P\left(Y_{x_{t}} \mid I_{t}^{\Phi}\right)<P\left(Y_{x^{\prime}} \mid I_{t}^{\Phi}\right) \text{ for some } x^{\prime} \in X \backslash x_{t}\right) \mathbb{1}\left(Y_{t}=0\right)$.

The calibration set is given by $h(\mathcal{D}, n) = \{ t \in \mathcal{D} \mid n \text{ highest } h(t) \}$. The authors experimentally show that agents maximising HI-specific rewards, clustering by IEC, and using calibration sets both without and with the Actor Calibration-Set Heuristic each outperform the previous version. In other words, each step improves the regret the agent experiences. When compared against an oracle – one that treats the UCs as observed (which is unrealisable in practice) – the agent performs well.

[34] asks some interesting questions and presents intriguing results. The authors tackle the problem of an agent and humans having different sensory capabilities, and thereby “disagreeing” on their observations. They find that leaving human intuition out of the loop – even when the agent’s sensory abilities are superior – results in worse performance. The theory presented in this section is sufficient to understand the presentation of the results in [34], and the reader is encouraged to engage with the source. Now that we have considered counterfactual decision making within a causal system, we should consider how we can transfer and generalise causal results across domains. This is what the next section tackles.

### References

• [Header Image](https://www.businesssystemsuk.co.uk/i/uploads/gallery/what%20if%20scenarios%20wfm.jpg)
• [11] Elias Bareinboim and J. Pearl. Causal inference and the data-fusion problem. Proceedings of the National Academy of Sciences, 113:7345–7352, 2016.
• [12] Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, second edition, 2018.
• [21] Elias Bareinboim, Andrew Forney, and J. Pearl. Bandits with unobserved confounders: A causal approach. In NIPS, 2015.
• [25] P. Auer, Nicolò Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2004.
• [26] Andrew Forney and Elias Bareinboim. Counterfactual randomization: Rescuing experimental studies from obscured confounding. In AAAI, 2019.
• [27] W. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25:285–294, 1933.
• [28] J. Zhang and Elias Bareinboim.
Markov decision processes with unobserved confounders: A causal approach. 2016.
• [29] I. Szita and Csaba Szepesvari. Model-based reinforcement learning with nearly tight exploration complexity bounds. In ICML, 2010.
• [30] Andrew Forney, J. Pearl, and Elias Bareinboim. Counterfactual data-fusion for online reinforcement learners. In ICML, 2017.
• [31] J. Heckman. Randomization and social policy evaluation. 1991.
• [32] R.A. Fisher. The Design of Experiments. Oliver and Boyd, 1935.
• [33] W. W. Stead, J. M. Starmer, and M. McClellan. Beyond expert based practice. Evidence-Based Medicine and the Changing Nature of Healthcare: 2007 IOM Annual Meeting Summary, 2008.
• [34] J. Zhang and Elias Bareinboim. Can humans be out of the loop? 2020.
# Should Poverty Be Comfortable? WhoWee The poverty level in the US is defined by income level. http://aspe.hhs.gov/poverty/index.shtml http://aspe.hhs.gov/poverty/11poverty.shtml "The 2011 HHS Poverty Guidelines The following figures are the 2011 HHS poverty guidelines that are scheduled to be published in the Federal Register on January 20, 2011. (Additional information will be posted after the guidelines are published.) 2011 HHS Poverty Guidelines

| Persons in Family | 48 Contiguous States and D.C. | Alaska | Hawaii |
| --- | --- | --- | --- |
| 1 | $10,890 | $13,600 | $12,540 |
| 2 | $14,710 | $18,380 | $16,930 |
| 3 | $18,530 | $23,160 | $21,320 |
| 4 | $22,350 | $27,940 | $25,710 |
| 5 | $26,170 | $32,720 | $30,100 |
| 6 | $29,990 | $37,500 | $34,490 |
| 7 | $33,810 | $42,280 | $38,880 |
| 8 | $37,630 | $47,060 | $43,270 |
| For each additional person, add | $3,820 | $4,780 | $4,390 |

" The chart indicates the poverty level (in the 48 contiguous states) for a family of 4 is now $22,350 in the US - that's $429.81 per week / 40-hour week = $10.75 per hour. By world standards ($1.25 per day = $8.75 per week = $455 per year) the US standard is quite high. http://uk.oneworld.net/guides/poverty "Extreme poverty strikes when household resources prove insufficient to secure the essentials of dignified living. The absence of social safety nets in under-developed economies shuts off potential escape routes. The consequences of persistent poverty include insufficient food, children out of school, diminution of household back-up resources and exclusion from valuable social networks. Global Poverty Trends Based on World Bank figures which are used for official global poverty statistics, the number of people living below the international poverty line of $1.25 per day fell from 1.8 billion to 1.4 billion between 1990 and 2005." To compensate for poverty in the US, the per capita welfare spending is estimated at $2,358 per person out of $20,967 of total Government spending per person. http://www.usgovernmentspending.com/per_capita Has poverty become too comfortable in the US? Is there adequate incentive for individuals to escape the gravity of benefits? BilPrestonEsq No one wants to be poor. There is and always will be a curve. There will always be people that strive to accomplish great things and well.... those that don't. I really don't think it is comfortable to be poor for anyone though. I also don't think welfare should just be a handout to anyone. There needs to be government social workers assigned to a number of welfare recipients and they need to prove that they really need the money. I have definitely seen people at the grocery store pay with food stamps or the new cards they have, walk out with 2 carriages full of food with gold chains hanging off of them, brand new clothes and shoes, and then climb into a Lexus. Ahh.. yeah, I don't think so, that absolutely can't happen, ever. But I do believe that a government should provide for people that can't provide for themselves, without a doubt. It is only right. "Based on World Bank figures which are used for official global poverty statistics, the number of people living below the international poverty line of $1.25 per day fell from 1.8 billion to 1.4 billion between 1990 and 2005." Yeah, now they make $1.75! I'm sure the rise in inflation eventually gave way to a well deserved raise. The World Bank does nothing good for developing countries just as the Fed has done nothing good for the U.S.. Sounds like propaganda to me. Mentor No one wants to be poor. There is and always will be a curve. There will always be people that strive to accomplish great things and well.... those that don't.
I really don't think it is comfortable to be poor for anyone though. I also don't think welfare should just be a handout to anyone. There needs to be government social workers assigned to a number of welfare recipients and they need to prove that they really need the money. I have definetely seen people at the grocery store pay with foodstamps or the new cards they have, walk out with 2 carraiges full of food with gold chains hanging off of them, brand new clothes and shoes and then climb into lexus. Ahh.. yea I don't think so, that absolutely can't happen, ever. But I do believe that a government should provide for people that can't provide for themselves without a doubt. It is only right. Yea now they make $1.75! I'm sure the rise in inflation eventually gave way to a well deserved raise. The World Bank does nothing good for developing countries just as the Fed has done nothing good for the U.S.. Sounds like propaganda to me. So what is your answer? What will fix this? brainstorm Imo, the big problem is that it seems very difficult to be relatively poor but stable compared to the impression I have of how it once was. People without money and/or income should be able to get access to very basic shelter, healthy food, adequate clothes, etc. They should not have to go into debt or work bad jobs with bad schedules just to barely keep up with the bills. They should also have the choice to live away from crime, substance abuse, domestic violence, etc. It seems like the only way to do this nowadays is to be middle class or at least find a very good working class area to live in. I think lower mortgage rates have helped make that possible for many people, but lower mortgage rates have the double-edged effect of promoting higher sales prices (since the same monthly payment can afford more) and this makes banks more weary to take risks and certainly makes it harder for people to save up and buy houses outright. The problem is that over a half-century of gradual real-estate inflation has led to an economic dead-end where many people get either stuck in debt or cannot own property at all - and the people who do own property outright and could reduce the price to a level that a poor person could afford are under pressure to make their investment pay to keep up with corporate and public salary levels, insurance costs, etc. WhoWee Imo, the big problem is that it seems very difficult to be relatively poor but stable compared to the impression I have of how it once was. People without money and/or income should be able to get access to very basic shelter, healthy food, adequate clothes, etc. They should not have to go into debt or work bad jobs with bad schedules just to barely keep up with the bills. They should also have the choice to live away from crime, substance abuse, domestic violence, etc. It seems like the only way to do this nowadays is to be middle class or at least find a very good working class area to live in. I think lower mortgage rates have helped make that possible for many people, but lower mortgage rates have the double-edged effect of promoting higher sales prices (since the same monthly payment can afford more) and this makes banks more weary to take risks and certainly makes it harder for people to save up and buy houses outright. 
The problem is that over a half-century of gradual real-estate inflation has led to an economic dead-end where many people get either stuck in debt or cannot own property at all - and the people who do own property outright and could reduce the price to a level that a poor person could afford are under pressure to make their investment pay to keep up with corporate and public salary levels, insurance costs, etc. Is it possible the barrier to home ownership isn't inflation, but trying to live beyond one's means? The term "McMansion" is relatively new - and denotes the desire to live in larger homes than necessary. Is it possible this behavior occurs at every income level? brainstorm Is it possible the barrier to home ownership isn't inflation, but trying to live beyond one's means? The term "McMansion" is relatively new - and denotes the desire to live in larger homes than necessary. Is it possible this behavior occurs at every income level? People trying to live beyond their means STIMULATES inflation. McMansions add a class-tier to the upper echelons of real-estate and give investors that much more motivation to milk more money out of low-end properties to pay for higher-end ones. The size of a house doesn't really matter as much as the cost of producing it. If you wanted to build a 5000sf enclosure with the cheapest materials using your own labor, how would that compare with Mansion-building in terms of cost and resource-waste? The barrier to home-ownership, imo, is a gap between wage-labor income and the labor that goes into building a house. If people could build their own houses using their own labor and get very cheap but effective materials, everyone could have a house regardless of employment. As it is, people can do this with tents but they're usually camping on land that's not their own and they can barely afford to develop their tent into a shanty house. WhoWee People trying to live beyond their means STIMULATES inflation. McMansions add a class-tier to the upper echelons of real-estate and give investors that much more motivation to milk more money out of low-end properties to pay for higher-end ones. The size of a house doesn't really matter as much as the cost of producing it. If you wanted to build a 5000sf enclosure with the cheapest materials using your own labor, how would that compare with Mansion-building in terms of cost and resource-waste? The barrier to home-ownership, imo, is a gap between wage-labor income and the labor that goes into building a house. If people could build their own houses using their own labor and get very cheap but effective materials, everyone could have a house regardless of employment. As it is, people can do this with tents but they're usually camping on land that's not their own and they can barely afford to develop their tent into a shanty house. On the other hand, when the Government officials spoke of the dream of homeownership - do you think they thought people would try to purchase homes they couldn't afford? If a 4 bedroom home can be found for$40,000 - that they can afford - why would they instead purchase a $100,000 home (they can't afford) - just because the funds were available from a lender? brainstorm On the other hand, when the Government officials spoke of the dream of homeownership - do you think they thought people would try to purchase homes they couldn't afford? 
If a 4 bedroom home can be found for$40,000 - that they can afford - why would they instead purchase a $100,000 home (they can't afford) - just because the funds were available from a lender? No, because they see property as an investment instead of just as a place to live. I have been playing the board game, Life, lately and the more expensive "starter houses" make more profit when you resell them than the mobile home or less expensive houses. So there is a general cultural assumption that one property is a better investment than another, so people try to get the best investment they can. This mentality is what drives speculation-driven investment and it is a cause of inflation. The funds-availability is just part of the problem, which is generally that everyone wants to get in on a bubble when it's growing. WhoWee No, because they see property as an investment instead of just as a place to live. I have been playing the board game, Life, lately and the more expensive "starter houses" make more profit when you resell them than the mobile home or less expensive houses. So there is a general cultural assumption that one property is a better investment than another, so people try to get the best investment they can. This mentality is what drives speculation-driven investment and it is a cause of inflation. The funds-availability is just part of the problem, which is generally that everyone wants to get in on a bubble when it's growing. We're not analyzing speculative investors in this example. The politicians said affordable home ownership was a right. There were plenty of nothing down and no doc loans available + quite a few agencies to provide assistance. http://www.hud.gov/offices/cpd/affordablehousing/programs/shop/ WhoWee We are moving off-topic. The OP is focused on the level of poverty in the US and asks the question - "Has poverty become too comfortable in the US. Is there adequate incentive for individuals to escape the gravity of benefits?" Do we, as Americans, expect too much? brainstorm We're not analyzing speculative investors in this example. The politicians said affordable home ownership was a right. There were plenty of nothing down and no doc loans available + quite a few agencies to provide assistance. http://www.hud.gov/offices/cpd/affordablehousing/programs/shop/ Home-ownership is a right in the sense that people have the right not to be excluded from owning some form of property that they have the means to develop into a livable domicile. That isn't the same thing as lenders using people as a means to invest in potential foreclosures when the belief was that foreclosed property would be worth more than the mortgage when/if they would default. Nobody seems to remember how lucrative property-investment appeared @2006. WhoWee Home-ownership is a right in the sense that people have the right not to be excluded from owning some form of property that they have the means to develop into a livable domicile. That isn't the same thing as lenders using people as a means to invest in potential foreclosures when the belief was that foreclosed property would be worth more than the mortgage when/if they would default. Nobody seems to remember how lucrative property-investment appeared @2006. Again, a person trying to buy an affordable house is not a speculator. If they can find a foreclosed property that is affordable to live in - that's great! brainstorm Again, a person trying to buy an affordable house is not a speculator. 
If they can find a foreclosed property that is affordable to live in - that's great! It's hard to discuss with you. You state assumptions without stating grounds or reasons. In what sense is a person trying to buy an affordable house not speculating? People get excited about buying a domicile because they see it as a nest egg. But that wasn't my point with the last post. It was that lenders were using home-buyers as a means of making profit when/if they defaulted on property that they expected to appreciate beyond its loan value. In 2006, people thought it was impossible for property to depreciate. WhoWee To go back to the OP - "...the poverty level (in the 48 contiguous states) for a family of 4 is now $22,350 in the US - that's $429.81 per week / 40-hour week = $10.75 per hour." If the acceptable percentage of income for a mortgage payment is 25% - the maximum payment a person earning $22,350 per year should pay is $465 per month. This will amortize a loan of approximately $72,500 over 30 years @ 5%. http://www.mortgagecalculator.org/ This is with nothing down and doesn't include taxes, insurance, maintenance, furnishings or improvements. Given this information, how much should they pay for a house? Would it be wise to buy a house for $72,500 because the money is available or perhaps $50,000 and have enough credit left over to be comfortable and secure? Remember - this purchaser is at the US poverty level. Mentor Has poverty become too comfortable in the US? Is there adequate incentive for individuals to escape the gravity of benefits? My answer to this is simply my usual refrain about definitions: Definitions are only useful when they are clear. For most words, that means one accepted definition, but if a word has more than one, that's ok as long as those definitions are clear and faithfully adhered to. It's fine to define "poverty" in any way that is useful for the purpose the word is used for, as long as people clearly understand and accept that there are multiple definitions in order to hold a proper conversation on the subject. The counterexample is the word "terrorist", where some people intentionally manipulate the definition for political purposes and never accept any consistency, making discussion impossible. Yes, the same thing can also happen with "poverty". Too often I see conversations where people lament about the high poverty rate in the US, while not recognizing that our poverty rate is not comparable to poverty rates in a lot of other countries. Most of the poor in many countries (we're talking a large fraction of the global population - 10%ish) have living standards on par with medieval peasants, while most of the poor in the US and many European countries have standards of living that would make medieval kings envious enough to start wars. What "poverty" in the US means is very simple: the poverty line is the line above which the standard of living is deemed acceptable by American standards. Anyone below the line has a standard of living below the minimum of what is considered acceptable. Now regarding the specific question, there are many different types of "comfort". You can drive by a trailer park and see a ridiculous fraction of trailers with satellite TV dishes. That's a luxury item that the poor choose to buy which puts their basic needs in jeopardy. They must have a certain level of "comfort", otherwise they wouldn't do that. If you really aren't sure where your next meal is coming from, you'll let go of your satellite TV to make sure you get it.
But the downside of having your car reposessed isn't severe enough to give up certain luxuries: hence, people with a lot of debt and satellite TV. IMO, that's because we provide the poor with financial crutches that enable them to continue making poor decisions without consequences, thus perpetuating their poverty and dependence on government aid. That's leaing us in a direction I'm not sure the OP intended, though... Gold Member Imo, the big problem is that it seems very difficult to be relatively poor but stable compared to the impression I have of how it once was. People without money and/or income should be able to get access to very basic shelter, healthy food, adequate clothes, etc. They should not have to go into debt or work bad jobs with bad schedules just to barely keep up with the bills. They should also have the choice to live away from crime, substance abuse, domestic violence, etc. It seems like the only way to do this nowadays is to be middle class or at least find a very good working class area to live in. How? Aside from the fact that domestic violence and substance abuse is not confined to the poor in the first place, I don't see how that's possible. There WILL be bad jobs out there, bad schedules, low wages, and poor benefits. There will always be people who have piss poor financial control. I know MULTIPLE people who have spent their early 20s and late teens getting jobs, spending their paycheck on beer (LITERALLY!), and getting fired the next week. Crime and everything considered "bad" about living in poverty will always follow the lowest segments of society. There will always be a segment of the population whom, if westernized societies hadn't effectively fought off the forces of natural selection, would never have made it to today. WhoWee How? Aside from the fact that domestic violence and substance abuse is not confined to the poor in the first place, I don't see how that's possible. There WILL be bad jobs out there, bad schedules, low wages, and poor benefits. There will always be people who have piss poor financial control. I know MULTIPLE people who have spent their early 20s and late teens getting jobs, spending their paycheck on beer (LITERALLY!), and getting fired the next week. Crime and everything considered "bad" about living in poverty will always follow the lowest segments of society. There will always be a segment of the population whom, if westernized societies hadn't effectively fought off the forces of natural selection, would never have made it to today. This description of "bad jobs", tied to Russ's observation of world wide poverty levels, leads to another point - there are jobs Americans (living in poverty) are not willing to accept - largely because the pay rate for the jobs would not exceed their Government benefits. Domestically, this creates an opportunity for migrant workers from Mexico and Central/S America. In China (and elsewhere) low paying manufacturing jobs abound. These jobs raise their local standard of living - but are below the US comfort standard and Government benefits - correct? The question of do Americans expect too much is valid - in as much as how long should we wait (decades?) for the standard of living in the world to catch up - before our "poor" re-engage? Or, should we re-evaluate our own poverty rates and strive for full employment and maximization of production capacities? brainstorm This is with nothing down and doesn't include taxes, insurance, maintenance, furnishings or improvements. 
Given this information, how much should they pay for a house? Would it be wise to buy a house for$72,500 because the money is available or perhaps $50,000 and have enough credit left over to be comfortable and secure? What about buying a land-parcel for$5000 and getting a used mobile home or getting a kit-house for another $10,000? That would be better for the poor person but it would have a negative effect on an economy fiscally stimulated by long-term mortgage funding of$50,000+ properties. The ethical issue is whether the working poor should be held hostage in bad service jobs for 30 years to fund the economy that exploits their labor 24/7 for every possible type of service. WhoWee What about buying a land-parcel for $5000 and getting a used mobile home or getting a kit-house for another$10,000? That is a great point. There are plenty of "Green" SIP kits available now. Some designs are very affordable and quite energy efficient. As for building sites - look to reclaimed inner city locations - where the utilities are already available. I don't have specific figures for this post, but if you consider the cost of HUD and HEAP/PIPP subsidies over a 20 or 30 year period compared to a specific cost for a new energy efficient house on re-claimed land - there has to be savings - plus the reward of home ownership. The only problem is the houses would need to be smaller and more affordable. A 600 to 800 square foot design - similar to an apartment - not a $100,000+ and 1,500 sq ft + luxury home. brainstorm That is a great point. There are plenty of "Green" SIP kits available now. Some designs are very affordable and quite energy efficient. As for building sites - look to reclaimed inner city locations - where the utilities are already available. I don't have specific figures for this post, but if you consider the cost of HUD and HEAP/PIPP subsidies over a 20 or 30 year period compared to a specific cost for a new energy efficient house on re-claimed land - there has to be savings - plus the reward of home ownership. The only problem is the houses would need to be smaller and more affordable. A 600 to 800 square foot design - similar to an apartment - not a$100,000+ and 1,500 sq ft + luxury home. Right, but one of the things I was trying to point out with this poverty thread is that the reason a 1500sf house costs @$100,000+ is because there is an economy built up to receive the money from the bank. In other words, the middle class would lose income if the poor would build their own houses for$15,000. A middle-class income relies on an economy where poor people borrow $50,000 and spend the next 30 years working in crappy service jobs to pay off the mortgage. I'm for liberating people from a life of crappy dead-end service jobs, but I think you have to be clear that this is not separate from the middle-class culture of investment in$100,000+ suburban property (and other price property not located in suburbs.) Middle-class income and GDP will continue to decrease as the poor become less restricted to paying either long-term mortgage payment or rent for their domicile. Maybe a more gradual compromise would be for crappy service-jobs to be made less crappy by shortening the opening times of businesses and reducing full-time work from 40 to 30 or less, while expanding the pool of people available to work in these kinds of jobs. Yes, it is many middle-class people's nightmare or biggest sense of failure to have to get a job in food service, cleaning, etc. 
But as long as the economy continues to patronize businesses that require such labor, there have to be people to perform it. So it makes sense to me that more of the people who consume such services spend at least part of their careers working to provide them. Last edited: WhoWee Right, but one of the things I was trying to point out with this poverty thread is that the reason a 1500sf house costs @$100,000+ is because there is an economy built up to receive the money from the bank. In other words, the middle class would lose income if the poor would build their own houses for$15,000. A middle-class income relies on an economy where poor people borrow $50,000 and spend the next 30 years working in crappy service jobs to pay off the mortgage. I'm for liberating people from a life of crappy dead-end service jobs, but I think you have to be clear that this is not separate from the middle-class culture of investment in$100,000+ suburban property (and other price property not located in suburbs.) A $25,000 (total investment) in a new energy efficient home on a reclaimed city lot financed over 30 years at 5 percent (with$500 down payment) would have an estimated monthly payment of $157.04. That is affordable and reasonable. It would enable poor people to byild equity in a quality asset and revitalize the inner city neighborhoods. It's a win - win - win. WhoWee With a budget of$17.836 Billion for rent subsidies - HUD could finance the full cost ($157.04 per month) of 9,464,680 of the affordable homes in our discussion. This is not what I'm proposing because the "owners" would only have$500 invested in an asset worth $25,000 upon delivery. However, if the Government guaranteed the loans and kept them occupied (in the event of foreclosure) the savings for taxpayers (over 30 years =$535,080,000,000) could be substantial. http://www.hud.gov/budgetsummary2010/fy10budget.pdf "Reaffirming Support for Vouchers: The first element of the new partnership on affordable rental housing involves strong and persistent support for vouchers. HUD requests $17.836 billion for vouchers, an increase of approximately$1.77 billion over the levels provided in the fiscal year 2009 Omnibus Appropriations Act. Initiated in the mid-1970s, rental housing vouchers have since emerged as the nation’s largest low-income housing assistance program. They now serve over 2 million households with extremely low incomes (about 40 percent of families who receive vouchers now have incomes below half of the poverty line), paying the difference between 30 percent of a household’s income and the rent of a qualifying, moderately priced house or apartment." brainstorm A $25,000 (total investment) in a new energy efficient home on a reclaimed city lot financed over 30 years at 5 percent (with$500 down payment) would have an estimated monthly payment of $157.04. That is affordable and reasonable. It would enable poor people to byild equity in a quality asset and revitalize the inner city neighborhoods. It's a win - win - win. It doesn't sound like a bad deal to me either, superficially. But what happens if you lose your income and can't make your$157.04 payment anymore? To me it would be better for people to own property without debt at all, but that would require an economy where people have the means to save up to buy it. Then, of course, where do you live while you are saving up to buy? Ideally, people's parents should provide them with a starter-home when they leave the house; which is what some people used to do. 
But what do you do for people whose parents don't provide them with anything when they are old enough to go out on their own? You can say it's unfair for them to have to work for others to afford a place to live, but if the government provided people with a starter home, what incentive would parents have to save and invest in their children's future? What's more social-economic cultural differences have evolved such that some people expect things like jobs and income at levels that exceed basic necessity. So if you were working to build a starter-house for your kid and someone else was just working a job and paying rent and then expected you to pay taxes to fund their income, e.g. so that they could buy a house you built, you might wonder why you should work to build/fix their house for them and their kids instead of them doing it themselves as you do. WhoWee BTW - this would expand the number of households served from 2 million to over 9 million. .... ...., but if the government provided people with a starter home, what incentive would parents have to save and invest in their children's future? ..... The government has no such incentive qualms when it comes to giving foreign aid to nations that should be doing their own things. In fact, it encourages dependence on foregn aid in order to maintain regional political influence. I don't see Americans taking umbrage with their tax dollars being splurged that way. They only seem to take unmbrage whenever the tax dollars are imagined as going to fellow Americans in need. WhoWee The government has no such incentive qualms when it comes to giving foreign aid to nations that should be doing their own things. In fact, it encourages dependence on foregn aid in order to maintain regional political influence. I don't see Americans taking umbrage with their tax dollars being splurged that way. They only seem to take unmbrage whenever the tax dollars are imagined as going to fellow Americans in need. This thread is focused on the comfort level provided by benefits. Do you have any thoughts as to the quality of benefits - too much - not enough? thephysicsman No matter how poor you are, you don't have the right to other people's money. It's time we replace social security with private charity. WhoWee No matter how poor you are, you don't have the right to other people's money. It's time we replace social security with private charity. Social security is funded through payroll deductions and matching tax. brainstorm The government has no such incentive qualms when it comes to giving foreign aid to nations that should be doing their own things. In fact, it encourages dependence on foregn aid in order to maintain regional political influence. I don't see Americans taking umbrage with their tax dollars being splurged that way. They only seem to take unmbrage whenever the tax dollars are imagined as going to fellow Americans in need. You're comparing radically different things. Be careful before you start threatening "foreigners" by suggesting that they are an impediment to the "national welfare of the American People." That is too close to a national-socialist type approach, imo, where scapegoats are sought to exclude in order to increase "the nation" as a closed social group. The political reality is that the US political-economy extends beyond people with US citizenship and the government has responsibilities to protect all those people's freedoms, not just promote collective dominion for citizens globally over anyone and everyone without citizenship. 
If you want to mess with foreign aid, it would be better to take an approach that is constructive and respectful of the rights and freedoms of the people those policies are aimed to serve. Look at their relationship(s) with US businesses and industries, etc. Don't just look at them as sandbags to be jettisoned in times when US citizens/businesses can't get their acts together to overcome financial hurdles to universal opportunity for at least basic economic sustainment. Mentor This description of "bad jobs", tied to Russ's observation of world wide poverty levels, leads to another point - there are jobs Americans (living in poverty) are not willing to accept - largely because the pay rate for the jobs would not exceed their Government benefits. Domestically, this creates an opportunity for migrant workers from Mexico and Central/S America. In China (and elsewhere) low paying manufacturing jobs abound. These jobs raise their local standard of living - but are below the US comfort standard and Government benefits - correct? The question of do Americans expect too much is valid - in as much as how long should we wait (decades?) for the standard of living in the world to catch up - before our "poor" re-engage? Or, should we re-evaluate our own poverty rates and strive for full employment and maximization of production capacities? I realize I said I was answering the question directly but I really didn't. My answer is no, poverty should not be comfortable because if it is comfortable, many people won't make an effort to get out of it. WhoWee I realize I said I was answering the question directly but I really didn't. My answer is no, poverty should not be comfortable because if it is comfortable, many people won't make an effort to get out of it. In his response to the State of the Union Address tonight, Congressman Ryan said (something to the effect of) "let's not turn the safety net into a hammock" - classic! brainstorm I realize I said I was answering the question directly but I really didn't. My answer is no, poverty should not be comfortable because if it is comfortable, many people won't make an effort to get out of it. This is the kind of economic thinking that poisons any hope of ever having a truly free market, imo. If poverty is seen as having a social-motivational purpose for "getting out of it" in a meritocratic system, the economy becomes nothing more than a systemic reward/punishment for submission and obedience to authority figures in control of those economic rewards. There needs to be some form of economic opportunity that allows people to directly work for what they get and be able to make due with little or no money if they so choose. They should be able to get access to building materials and tool (use) to build their own home/shelter to stay out of the weather OR have a small apartment in a public building if those are more readily available than land in an urban area. They should also have access to some kind of community agriculture where they can work to produce their own food. There is also no reason that basic forms of piece-work shouldn't be available for people to work for pay when it is convenient to their schedules. In other words, people shouldn't have to submit to an employer's schedule to be able to contribute their labor to an enterprise in exchange for some pay. Having this kind of work available, however, would require labor-intensive local factories or farms that had a steady supply of tasks to be done and a large number of people ready to work. 
Maybe there could be something like a gas-station sign to advertise the going rate for labor at various moments, so that the price could go up at moments more labor was needed and it could go down when less was needed. You're comparing radically different things. Be careful before you start threatening "foreigners" by suggesting that they are an impediment to the "national welfare of the American People." That is too close to a national-socialist type approach, imo, where scapegoats are sought to exclude in order to increase "the nation" as a closed social group. The political reality is that the US political-economy extends beyond people with US citizenship and the government has responsibilities to protect all those people's freedoms, not just promote collective dominion for citizens globally over anyone and everyone without citizenship. If you want to mess with foreign aid, it would be better to take an approach that is constructive and respectful of the rights and freedoms of the people those policies are aimed to serve. Look at their relationship(s) with US businesses and industries, etc. Don't just look at them as sandbags to be jettisoned in times when US citizens/businesses can't get their acts together to overcome financial hurdles to universal opportunity for at least basic economic sustainment. Your response is full of strawman arguments based on your misinterpretation of what I said. It assumes intentions and ideas totally alien to me. When not sure it's better keep the imagination in check and if in doubt to respectfully ask for clarification instead of pretending to be omniscient. IMHO brainstorm Your response is full of strawman arguments based on your misinterpretation of what I said. It assumes intentions and ideas totally alien to me. When not sure it's better keep the imagination in check and if in doubt to respectfully ask for clarification instead of pretending to be omniscient. IMHO I don't believe I'm omniscient. I just analyzed the what you wrote. Here is what you said: The government has no such incentive qualms when it comes to giving foreign aid to nations that should be doing their own things. First, what does this sentence imply? Why should "nations be doing their own things?" That implies that there's automatically something wrong with people in separated national regions with different national citizenships working together. Is this what you're assuming. That every form of global interaction that doesn't restrict itself to homonational relations is a problem? In fact, it encourages dependence on foregn aid in order to maintain regional political influence. I see how it could, but that's the same as saying that providing federal funding for interstate highways could encourage dependence on federal aid. You're just singling out "foreign aid" because it's "foreign," no? I don't see Americans taking umbrage with their tax dollars being splurged that way. They only seem to take unmbrage whenever the tax dollars are imagined as going to fellow Americans in need. This sounds like some kind of sarcastic double-talk to implicitly complain that "Americans" should support tax money going to "fellow Americans" and be against money going to "foreigners" for any reason. In other words, you want the US government to underwrite a system of economic privilege for citizens based on birth-right. You assume that the money spent on "foreign aid" is nothing more than a welfare check for non-citizens. 
You don't even mention specifics about what the money is being spent for and who the people are that are involved. You just prejudicially assume that because (some) of them don't have US citizenship that they are not part of a global US community. What right do people have to deny responsibility for US presence globally and redirect government spending on no other basis than the recipients being citizens and/or living within the regional boundaries of the official states? BilPrestonEsq No matter how poor you are, you don't have the right to other people's money. It's time we replace social security with private charity. That's like saying that we should rely on a person's self discipline and morality instead of enforcing laws(on criminals, theives and the like). There will always be people who can't help themselves and yes natural selection would 'take care of them' if you know what I mean. But we are humans and we have compassion. Problem is you can't rely on everyone to have compassion. So you have to take their money by force through taxes or else the burden would fall on the shoulders of only the compassionate, giving people without compassion a financial advantage. The problem is getting the money to the right people and leaving able minds and bodies to fend for themselves like everyone else. It needs to be case by case rather than just giving money to anyone below a certain income. Hiring social workers to weed out the people that don't really need the money would certainly be more efficient than just handing out money to anyone that says they need it. A lot of this has a lot to do with the state of our economy anyways, you cannot leave out the fundamental flaws of our monetary policy in any of these arguments. But if we are talking about a monetary policy that actually works and is sustainable and fair, unlike the one we have now, then that is my answer to the welfare problem.
# aliquote ## < a quantity that can be divided into another a whole number of times /> Here are the current Neovim mappings I came to feel comfortable with over time. This is by no means a reference card, and yes I do use the arrow keys for navigating into my buffers. It depends on motion range, though. Another key idea is that I use Tmux every day and I like to have a unified set of mappings, besides the leader/prefix key. See my blog posts for how I started to use Vim and Neovim. Note that I only have 10 plugins in my start and opt directories, in addition to part of mini.nvim that I adapted to suit my needs better: (This may change in the future but usually I tend to remove plugins rather than add new ones.1 ) • [opt] Comment.nvim, nvim-lspconfig, null-ls.nvim, nvim-parinfer, vim-test, vimtex; • [start] nvim-treesitter, packer.nvim, plenary.nvim, telescope.nvim I prefer a minimalist setup these days, and I tend to rely on hands-on solutions for tasks I carry out over and over. Many of those custom settings come from brilliant Vimmers who are acknowledged in my config files. Note that some of those mappings may override existing ones, whether because they are somewhat redundant (e.g. s and c) or because I don’t use them at all (e.g., some of the Ctrl, g or z combinations). My leader is “,”2 and I do not define specific mappings for the localleader in my main config files. It looks like I always misunderstood the role of localleader, and I decided to book it for my after/ftplugin custom settings. Mapping Mode Command Role c : Go to start of line c : Go to end of line - n :Ex Show explorer ,l n Alternate buffer ,! n :10sp +te Popup terminal ,. n :lcd %:p:h Set local current directory ,x n :bp bd! # Kill current buffer ,X n Only() Keep only current buffer (custom) ,E n :tabe =expand("%:p:h") . "/" Open file in new tab from current directory ,e n :e =expand("%:p:h") . "/" Open file from current directory n :tabprev Previous tab n :tabnext Next tab n :tabnext Next tab , n :$tabnew New tab , n C-wT Move split to new tab J n mzJz Join lines (cursor stationary) < v v >gv Fix block indentation (forward) n quickfix_toggle() Close quickfix window (custom) [Q n :cfirst Go to first quickfix item ]Q n :clast Go to last quickfix item n :vertical resize -4 Resize split (left) n :resize -4 Resize split (down) n :resize +4 Resize split (up) n :vertical resize +4 Resize split (right) W!! c :w !sudo tee % >/dev/null:e! Save as root i u[s1z=]au Fix last spelling error (repeat.) ,S n vip:sort iu Sort in reverse lexicographic order ,S v :sort u Sort in reverse lexicographic order ,s n vip:sort u Sort in lexicographic order ,s v :sort u Sort in lexicographic order N n Nzz Keep cursor centered on screen when looking behind n n nzz Keep cursor centered on screen when looking ahead U n Redo zs n 1z= Fix spelling using first suggestion ^ n :set hls!set hls? Toggle on/off search highlights n :nohl Clear search highlights ,df n :setlocal spell!setlocal spell?:setlocal spelllang=fr Toggle spelling on/off (French) ,de n :setlocal spell!setlocal spell?:setlocal spelllang=en Toggle spelling on/off (English) n :let @s='\<'.expand('').'\>':%s/s// Substitute current word x sy:%s/s// Substitute current word ,p n :set invpaste:set paste?
Toggle on/off paste mode YY n,v "+y Yank to clipboard n :Grep Grep (custom) n :Telescope find_files Fuzzy finder for files ,/ n :Telescope current_buffer_fuzzy_find Fuzzy search in buffer ,: n :Telescope command_history Fuzzy finder for command history ,, n :Telescope buffers Fuzzy finder for buffers ,f n :Telescope live_grep Live grep in working directory ,* n :lua require('telescope.builtin').grep_string({search = vim.fn.expand("")}) Fuzzy grep current word in working directory ,gg n :Telescope git_status Git status ,gC n :Telescope git_commits Git commit log ,gc n :Telescope git_bcommits Git buffer commit log ,gb n :Telescope git_branches Git branches ,gs n :Telescope git_stash Git stash ,q n :Telescope quickfix Telescope quickfix ,r n :Telescope oldfiles Telescope old files ,ww n :Telescope diagnostics LSP diagnostics ,T n :TestFile Test file (vimtest) ,t n :TestNearest Test nearest (vimtest) s v,x ywpa Send selection to nearest terminal ss n Vywpa Send line to nearest terminal gK n :Dasht Query documentation from Dash ,wq n vim.diagnostic.setqflist Show diagnostics in quickfix (LSP) [w n vim.diagnostic.goto_prev Go to previous diagnostic (LSP) ]w n vim.diagnostic.goto_next Go to next diagnostic (LSP) ,wd n vim.diagnostic.open_float Show diagnostic in a floating window (LSP) g= x vim.lsp.buf.range_formatting Format selection (LSP) g= n vim.lsp.buf.formatting Format selection (LSP) z= n,x vim.lsp.buf.code_action Code action (LSP) g0 n require("telescope.builtin").lsp_workspace_symbols Show workspace symbols (LSP) K n vim.lsp.buf.hover Show help for current symbol (LSP) i vim.lsp.buf.signature_help Show signature help (LSP) gd n require("telescope.builtin").lsp_definitions Show definition (LSP) gr n require("telescope.builtin").lsp_references Show references (LSP) zr n vim.lsp.buf.rename Rename symbol (LSP) gi n vim.lsp.buf.implementation Show implementation (LSP) I also have kind of an universal mapping, gs, which depending on filetype may compile a$\LaTeX\$ document or a C file and show its output, run a Python/Haskell/Lisp/Scheme script and show its output, etc. It is defined in relevant after/ftplugin/*.vim files. Finally, since I never found a good or reliable plugin to send command to a terminal, and I’ve tried many of the exiting ones for Vim or Neovim, I wrote my own commands: (It assumes that you have a terminal opened next to your buffer in a split, which is how I work in any case.) " poor man send-to-repl features (we need to fire a REPL in a split first) noremap ss Vy<C-w>wpa<CR><CR><Esc> noremap s y<C-w>wpa<CR><CR><Esc> xnoremap s y<C-w>wpa<CR><CR><Esc> 1. When I first drafted this cheatsheet, I was still using 15 plugins or so. Some packages broke at some point and I was too lazy to investigate why, or some were of too little use to justify keeping them. I don’t really miss anything with my current config, though. ↩︎ 2. I recently switched to this leader key after having spent three years using the Space key as my leader, as a leftover of my Doom Emacs period. ↩︎
## Partial Derivatives

Partial derivatives follow directly from the derivatives you have seen in single variable calculus. Calculation is pretty straightforward but, as is common in multi-variable calculus, you need to watch your notation carefully. To calculate partial derivatives, you are given a function, usually of more than one variable, and you are asked to take the derivative with respect to one of the variables. To do so, you treat the other variable as a constant. Before we go any further, let's discuss notation. Recall from single variable calculus that if you are given a function, say $$f(x)$$, the derivative is taken with respect to $$x$$, since $$x$$ is the only variable. For partial derivatives, you have more than one variable, say $$g(x,y)$$. Here is a comparison of how you write the derivative of $$g$$ with respect to $$x$$ in both cases: the $$d$$ that indicates the derivative of a single variable function is replaced by $$\partial$$ for a partial derivative. This can also be written with a subscript to indicate the variable that we are taking the derivative with respect to. Notice that with partial derivatives there is no 'prime' notation, since there would be no way to determine the derivative variable. We need to show the variable somewhere to make any sense out of a partial derivative.

Example. Okay, we are ready for an example. Let's find both partial derivatives of $$g(x,y) = x^2y$$, meaning $$\partial g/\partial x$$ and $$\partial g/\partial y$$.

Partial derivative of $$g$$ with respect to $$x$$ (treat $$y$$ as a constant):
$$\displaystyle{ \frac{\partial g}{\partial x} = \frac{\partial}{\partial x}[x^2y] = y \frac{\partial}{\partial x}[x^2] = y(2x) = 2xy }$$

Partial derivative of $$g$$ with respect to $$y$$ (treat $$x$$ as a constant):
$$\displaystyle{ \frac{\partial g}{\partial y} = \frac{\partial}{\partial y}[x^2y] = x^2 \frac{\partial}{\partial y}[y] = x^2 (1) = x^2 }$$

So $$\displaystyle{\frac{\partial}{\partial x}[x^2y] = 2xy}$$ and $$\displaystyle{ \frac{\partial}{\partial y}[x^2y] = x^2 }$$. Notice that in each case we could pull out the other variable, since it is considered a constant, and then take the derivative just as we would in a single variable equation. For partial derivatives, there are similar rules for products and quotients of functions. Here is a quick video showing those equations.

Chain Rule for Partial Derivatives

These next two videos are very good since they explain the chain rule graphically using some examples. This means you will not have to memorize formulas.
(a) $$z=f(x,y), x=g(t), y=h(t)$$; find $$dz/dt$$
(b) $$z=f(x,y), x=u-v, y=v-u$$; find $$z_u$$ and $$z_v$$
$$w=f(x,y), x=g(r,s), y=h(r,s)$$; find $$\partial w/ \partial r$$ and $$\partial w/ \partial s$$

Here is a very short video clip discussing a more general version of the chain rule for functions of more than two variables.

General Chain Rule

This next video contains a proof of a chain rule for partial derivatives. We recommend that you watch it to get a deeper understanding of the mathematics, but it is not required in order to use the chain rule.
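If you want to check these computations symbolically, here is a short optional example using Python's SymPy library (this code is not part of the original page, and the concrete functions in the chain rule check are made up purely for illustration):

    import sympy as sp

    x, y, t = sp.symbols('x y t')

    # Partial derivatives of g(x, y) = x**2 * y
    g = x**2 * y
    print(sp.diff(g, x))   # 2*x*y  ->  dg/dx
    print(sp.diff(g, y))   # x**2   ->  dg/dy

    # Chain rule, case (a): z = f(x, y) with x = g(t), y = h(t).
    # Concrete (made-up) instance: z = x*y**2 with x = cos(t), y = sin(t).
    z = x * y**2
    x_t, y_t = sp.cos(t), sp.sin(t)

    # dz/dt = (dz/dx)(dx/dt) + (dz/dy)(dy/dt)
    chain = sp.diff(z, x) * sp.diff(x_t, t) + sp.diff(z, y) * sp.diff(y_t, t)
    direct = sp.diff(z.subs({x: x_t, y: y_t}), t)
    print(sp.simplify(chain.subs({x: x_t, y: y_t}) - direct))  # 0, so both routes agree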
Proof of a chain rule for partial derivatives

One of the main uses of the first derivative is in the First Derivative Test, similar to what you learned in single variable calculus. Multi-variable functions can be tested with the partial derivative version of the First Derivative Test. This next panel shows a proof of how it works. Although this is a proof, the video is well worth watching since it gives you a feel for how partial derivatives work and what they look like, so don't let the word 'proof' deter you. This is one of the best instructors we've ever seen, and he explains it in a way that is very understandable.

Proof: First derivative test

Second Order Partial Derivatives

As you learned in single variable calculus, you can take higher order derivatives of functions. This is also true for multi-variable functions. However, for a function of two variables there are four second order partial derivatives, compared to just one second derivative for a single variable function. Using subscript notation, these are $$f_{xx}$$, $$f_{xy}$$, $$f_{yx}$$ and $$f_{yy}$$. An important, but not obvious, result is that the two mixed partials $$f_{xy}$$ and $$f_{yx}$$ are equal whenever they are continuous (Clairaut's theorem), which holds for essentially every function you will meet here, i.e. $$f_{xy} = f_{yx}$$. Here is a great video clip explaining these second order equations in more detail using other, very common, notation. Okay, now let's take the second derivative a step further and use the chain rule. How do we do that? Well, this video shows how, and he has a neat way of drawing a diagram to help visualize the chain rule.

Application: Analyzing Critical Points

Similar to single-variable calculus, you can use the partial derivatives to determine the critical points, inflection points and the behavior of a function. Here are some videos discussing this. Okay, time for some practice problems. Once you are done with those, an important application of partial derivatives can be found on the next page, where we discuss gradients and directional derivatives.
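As an optional aside (not part of the original page), you can confirm the equality of the mixed partials for a smooth example with SymPy:

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x**3 * sp.sin(y) + sp.exp(x * y)   # any smooth example works here

    f_xy = sp.diff(f, x, y)   # differentiate with respect to x, then y
    f_yx = sp.diff(f, y, x)   # differentiate with respect to y, then x
    print(sp.simplify(f_xy - f_yx))  # 0, so f_xy = f_yx for this function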
# Status: Draft ## Differentiable Programming Deep Learning (DL) has recently gained a lot of interest, as nowadays, many practical applications rely on it. Typically, these applications are implemented with the help of special deep learning libraries, which inner implementations are hard to understand. We developed such a library in a lightweight way with a focus on teaching. Our library DP (differentiable programming) has the following properties which fit particular requirements for education: small code base, simple concepts, and stable Application Programming Interface (API). Its core use case is to teach how deep learning libraries work in principle. The library is divided into two layers. The low-level part allows to build programmatically a computational graph based on elementary operations. Built-in reverse mode automatic differentiation on the computational graph allows the training of machine learning models implemented as computational graphs. The higher-level part of the library eases the implementation of neural networks by providing larger building blocks, such as neuron layers and helper functions, e.g., implementation of optimizers for training neural networks. An additional benefit of the library is that exercises and programming assignments based on it do not need to be permanently refactored because of its stable API. ## Introduction Modern deep learning libraries ease the implementation of neural networks for applications and research. In the last few years, different types of such libraries were developed by academic groups and commercial companies. Examples are Theano[1], TensorFlow[2] or PyTorch[3]. Recently, the term ”differentiable programming” emerged (see e.g., [5]) which expresses that e.g. (Deep) Neural Networks can be implemented by such libraries by composing building blocks provided by the library. The term differentiable programming also reflects the fact that a much wider spectrum of models is possible by using additional (differentiable) structures (e.g. memory, stacks, queues) [12, 13] as building blocks and control flow statements. With the DP library, we provide a minimalistic version of such a library for teaching purposes. The library is designed light-weighted, focusing on the principles of differentiable programming: How to build a computational graph and how automatic differentiation can be implemented. We also developed a high-level neural network API which allows for more convenient implementation of neural network models by providing predefined functional blocks, typically used in neural networks. The library is accompanied by many Jupyter[26] notebooks, a de facto standard in data science research and education [27], to demonstrate and teach the underlying principles of a deep learning library. We also provide many exercises that allow students to deepen their understanding. The exercises also include concepts of modern neural networks, e.g., activation functions, layer initialization, versions of stochastic gradient descent, dropout, and batch normalization (see e.g. [5]). ## Prerequisites Read through this course and then at the end you can find a list of exercises. ## Types of deep learning libraries Different deep learning libraries follow different concepts, and they distinguish further from each other in various aspects. In some libraries, the neural networks must be defined by configuration (e.g. Caffe[6]). Other libraries provide APIs for programming languages, e.g. for Python or R. 
Some of the APIs resemble languages that are embedded in a host language. Typically, with these domain-specific languages, the computational graphs are defined symbolically. In the next step, the computational graphs (and the corresponding graphs for the derivatives) are translated into code for another programming language, typically C++ or CUDA[7]. Subsequently, the program is compiled and can be executed. Sometimes the term static computation graph is used here which reflects the fact that the graph is defined once declaratively and cannot be changed dynamically. Contrary to this symbolic approach is the imperative approach. Here, the computation graph is built up implicitly by executing the program line by line. The forward computation is done directly, and the computation of the derivatives can be done at the end, e.g., by recursion. With each execution of the program, control structures in the program can change the structure of the computation graph. In this case the term dynamic computation graph is used. Another aspect is the granularity of the computational operations in a deep learning library. With some libraries, the computational graph can be constructed with elementary tensor operations, e.g. matrix multiplication. In other libraries, the operations may correspond to whole layers of a neural network. Our library DP is a finely granular, imperative deep learning library for Python, based on NumPy[16]. The focus of the library lies in teaching the principles of a deep learning library and the implementation of neural network models and algorithms. Therefore, we designed the library as simple as possible, and we restrict the tensor order to two, i.e. matrices. So, the code base of DP is significantly smaller and easier to understand as of libraries with much more functionality like autograd[9]. Another problem is that most common deep learning libraries are still subject to frequent changes in their API, which is a big drawback when used for exercises. We are developing exercises for advanced deep learning, e.g., Bayesian neural networks [9] or variational autoencoders [10]. For educational reasons (didactic reduction), we provide all boilerplate code so that the students can focus on the learning objective. The boilerplate code includes implementation against a deep learning library. If then a new version of the used library is released and its usage changes, exercises have to be adjusted accordingly to work correctly. Typically, universities do not have the personal resources to keep the teaching materials and exercises permanently up-to-date. The minimalistic approach of our library and the strict focus on teaching allows us to keep its API stable and therefore eliminates the need for permanent maintenance of the exercises. ## Overview on the principles In deep learning libraries, a machine learning model is built up as a computational graph. A computational graph is a directed graph. The structure of the graph encodes the order of the computation steps. At each inner node, an elementary computation is executed. The inner nodes of the graph are elementary mathematical operations (including elementary functions). Examples of elementary operators are +, - or dot-product and elementary functions are e.g., $exp$, $tanh$ or $ReLU$. A computational graph corresponds to a mathematical expression. The input nodes are the parameters of the model or data values. In machine learning, the output nodes of the graph usually correspond to the prediction values or cost values. 
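To make this concrete before the formal treatment below, here is a minimal, library-independent sketch (deliberately not the DP API, whose Node class appears in the implementation section) of a computational graph stored as topologically ordered nodes and evaluated front to back. The toy expression is the same one used later in figure 1, $e = \sum \exp(A \odot B)$:

    import numpy as np

    # Each node: (operation, list of input node indices); leaves have no inputs.
    # The graph encodes e = sum(exp(a * b)) for two leaf inputs a and b.
    nodes = [
        ("leaf", []),      # node 0: a
        ("leaf", []),      # node 1: b
        ("mul",  [0, 1]),  # node 2: c = a * b  (element-wise)
        ("exp",  [2]),     # node 3: d = exp(c)
        ("sum",  [3]),     # node 4: e = sum(d), the scalar output (e.g. a cost value)
    ]

    def forward(leaf_values):
        """Evaluate all node variables in topological order."""
        values, leaves = {}, iter(leaf_values)
        for i, (op, inputs) in enumerate(nodes):
            if op == "leaf":
                values[i] = next(leaves)
            elif op == "mul":
                values[i] = values[inputs[0]] * values[inputs[1]]
            elif op == "exp":
                values[i] = np.exp(values[inputs[0]])
            elif op == "sum":
                values[i] = np.sum(values[inputs[0]])
        return values

    a = np.array([[1., 1., 1.], [2., 2., 2.]])
    b = np.array([[1., 2., 3.]])
    print(forward([a, b])[4])  # scalar output of the graph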
Typically, the computational graph is built up in a computer program, which allows different programming techniques such as looping, branching, and recursion. Computational graphs enable automatic differentiation. For each computational node the derivative of the operation must be known. Local derivative computations are combined by the chain rule of calculus to get a numerical value for the derivatives of the whole computational graph for given input values. In deep learning libraries this is typically implemented as reverse-mode automatic differentiation [5]. With reverse-mode automatic differentiation, all partial derivatives of the output w.r.t. all inputs can be calculated efficiently. This feature is very important for machine learning. In the training process of a machine learning model, all partial derivatives of the cost function w.r.t. all parameters of the model must be computed. In neural networks, these parameters are the neuron weights. The computational graph for the training of a model corresponds to the cost function which should be minimized in the training procedure [8]. The cost $loss(\theta)$ is a function of the parameters $\theta$ of the model. During the optimization, the parameters are adapted to minimize the cost value. This optimization is typically realized by variants of stochastic gradient descent (SGD) [11]. In each step of SGD all partial derivatives of the cost w.r.t. the parameters must be computed. Before the appearance of deep learning libraries, the symbolic expressions for the partial derivatives of a new model had to be derived by the researcher with pen and paper; for an example see e.g. [14]. This manual procedure is error-prone, time consuming and nearly impossible for large complex models. By building up the model in a deep learning library, the built-in reverse-mode automatic differentiation relieves the researcher or developer of this work.

## Theoretical background of automatic differentiation

In the following we describe the theoretical background of reverse mode automatic differentiation in a semi-formal way. For a more rigorous formal explanation, see e.g. [15].

## Notation

In the theoretical description, we use the following mathematical notation. Lower-case Latin letters, e.g. $a$, denote scalars or vectors. Upper-case Latin letters, e.g. $A$, denote matrices or more structured objects like graphs. Python variables corresponding to a mathematical object are denoted as lower-case letters in a sans-serif font, e.g. $a$, independent of the type. From the context, it should be clear which objects are referenced by the corresponding letters.

## Definition of a computational graph

A computational graph $G$ is a directed acyclic graph, i.e. a set of nodes $V$ (with nodes $n^{(i)} \in V$) and a set of edges $E$, i.e. pairs of nodes $(n^{(i)}, n^{(j)}) \in E$, where $i$ and $j$ are node indices. Further we assume that the computational graph $G$ is topologically ordered, i.e. for each edge $(n^{(i)}, n^{(j)}) \in E$ we have $i < j$. We define the leaves of the graph as the nodes with no incoming edges. Each node $n^{(i)}$ has a corresponding variable $v^{(i)}$. The dimensionality of variable $v^{(i)}$ is $d^{(i)}$. Leaf nodes correspond directly to inputs for the computation, and the value of the variable $v^{(i)}$ is directly the input value. Non-leaf nodes $n^{(j)}$ have a corresponding operator $o^{(j)}$.
The operator $o^{(j)}$ takes as input the variables $v^{(i)}$ of all nodes with an outgoing edge to the node $n^{(j)}$. For the concatenation of all variables $v^{(i)}$ with an edge to $n^{(j)}$ we write $w^{(j)}$. The concatenation is done in topological order. For a consistent definition we can define the operator for leaf nodes as the identity, which takes as input the (external) input to the (leaf) node. In summary, a computational graph is a directed acyclic graph where each node has an internal structure: a node $n^{(i)}$ consists of a variable $v^{(i)}$ and an operator $o^{(i)}$. The input to the operator is determined by the edge structure of the graph.

## Forward propagation algorithm

The forward propagation algorithm computes the values of all non-leaf nodes. The values of the leaf nodes are the input to the algorithm. In topological order, each non-leaf node $n^{(j)}$ is computed by applying the corresponding operator $o^{(j)}$ to the variables of the nodes $n^{(i)}$ which have an edge to the node $n^{(j)}$. Note that the variable values of all such $n^{(i)}$ are already known, either because they are leaf nodes or because they have a lower order index and have already been computed by the algorithm.

## Reverse mode automatic differentiation

Reverse-mode automatic differentiation is a two-step procedure. In the first step, the variable values of each inner node of the computational graph are computed by the forward algorithm. The computed values of all variables are stored in an appropriate data structure. The second step is based on the chain rule of calculus. Here we assume that we have only one node with no outgoing edges. This node has the highest order index m. We call the node the output node. In machine learning, the value of the node is typically the cost value and the computational graph computes the cost function. The cost value is a scalar, i.e. the dimensionality of the output variable $v^{(m)}$ is $d^{(m)}=1$. In general, the node variables in the computational graph can be tensors of any order. However, for compact indexing we assume that they are flattened to vectors for this theoretical analysis. So, there is only one index for each variable and the variables of the nodes are $d^{(i)}$ dimensional vectors. We are interested in partial derivatives of the output node variable $v^{(m)}$ with respect to the leaf node variables $v^{(i)}$, i.e. $\frac{\partial v_k^{(m)}}{\partial v_l^{(i)}}$. By the chain rule,

$$\left[ \frac{\partial v^{(m)}}{\partial v^{(i)}} \right] = \sum_j{ \left[ \frac{\partial v^{(m)}}{\partial v^{(j)}} \right] \cdot \left[ \frac{\partial v^{(j)}}{\partial v^{(i)}} \right] }$$

On the right side of the equation each summand is a dot product of Jacobians. j is the index of all nodes which have an edge to node $n^{(m)}$, i.e. $(n^{(j)}, n^{(m)}) \in E$. The Jacobian which corresponds to an edge in the computational graph (here from $n^{(j)}$ to $n^{(m)}$) is called a local Jacobian (matrix). For each variable with index j the chain rule can be applied again: $$\left[ \frac{\partial v^{(j)}}{\partial v^{(i)}} \right] = \sum_k{ \left[ \frac{\partial v^{(j)}}{\partial v^{(k)}} \right] \cdot \left[ \frac{\partial v^{(k)}}{\partial v^{(i)}} \right] }$$ k is the index of all nodes which have an edge to node $n^{(j)}$, i.e. $(n^{(k)}, n^{(j)}) \in E$. Note that for different nodes j the sum is over different nodes with indices k, depending on the graph structure. Repeated application of the chain rule while respecting the graph structure shows that we can compose a global Jacobian from local Jacobians. It can be shown [15] that the dot products of all local Jacobians on all paths from the leaf node $n^{(i)}$ to the output node $n^{(m)}$ must be summed up to get the global Jacobian.
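The following small NumPy sketch (not the DP implementation) illustrates this bookkeeping for a graph that is a single path, $v^{(1)} \rightarrow v^{(2)}=\exp(v^{(1)}) \rightarrow v^{(3)}=\sum_k v_k^{(2)}$, so the sum over paths has exactly one term; the composed local Jacobians are checked against finite differences:

    import numpy as np

    # Single-path graph: v1 (leaf, length-3 vector) -> v2 = exp(v1) -> v3 = sum(v2) (scalar output).
    v1 = np.array([0.5, -1.0, 2.0])
    v2 = np.exp(v1)
    v3 = np.sum(v2)

    # Local Jacobians (shape: output dimension x input dimension).
    J_32 = np.ones((1, 3))        # d v3 / d v2: summation is linear, all-ones Jacobian
    J_21 = np.diag(np.exp(v1))    # d v2 / d v1: element-wise exp, diagonal Jacobian

    # Reverse accumulation: start with the 1x1 identity at the output node and
    # multiply the local Jacobians along the path back to the leaf.
    backward_signal = np.eye(1) @ J_32          # d v3 / d v2
    global_jacobian = backward_signal @ J_21    # d v3 / d v1, shape (1, 3)

    # Check against central finite differences.
    eps = 1e-6
    numeric = np.array([(np.sum(np.exp(v1 + eps * e)) - np.sum(np.exp(v1 - eps * e))) / (2 * eps)
                        for e in np.eye(3)])
    print(global_jacobian.ravel())  # [exp(0.5), exp(-1.0), exp(2.0)]
    print(numeric)                  # numerically the same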
As already stated, we want to compute (nearly) all global Jacobians, i.e. all global Jacobians w.r.t. (nearly) all leaf variables $v^{(i)}$. The principle idea for an efficient computation is to reuse the partial results $\left[ \frac{\partial v^{(m)}}{\partial v^{(p)}} \right]$ for all non-leaf variables $v^{(p)}$. Note that $\left[ \frac{\partial v^{(m)}}{\partial v^{(p)}} \right]$ is again the sum of the dot products of all local Jacobians on all paths from the node $n^{(p)}$ to the node $n^{(m)}$. So, regrouping of the nested sums is equivalent to send backward signals. A backward signal at a current node is the sum of dot products of the local Jacobians of all paths from the current node to the output node. To compute the backward signal of a new node $v^{(q)}$ it is sufficient to sum up all dot products of the backward signals $\left[ \frac{\partial v^{(m)}}{\partial v^{(p)}} \right]$ of all nearby upstream nodes $v^{(p)}$ with the local Jacobians $\left[ \frac{\partial v^{(p)}}{\partial v^{(q)}} \right]$: $$\left[ \frac{\partial v^{(m)}}{\partial v^{(q)}} \right] = \sum_p{ \left[ \frac{\partial v^{(m)}}{\partial v^{(p)}} \right] \cdot \left[ \frac{\partial v^{(p)}}{\partial v^{(q)}} \right] }$$ p is the index of all nodes with an edge from node $n^{(q)}$ to $n^{(p)}$, i.e. $(n^{(q)},n^{(p)})$. The algorithm starts at the output node $n^{(m)}$. The initial backward signal is $\left[ \frac{\partial v^{(m)}}{\partial v^{(m)}} \right] = I$, i.e. an identity matrix with dimension $d^{(m)} x d^{(m)}$ Then, the backward signals at the nodes which have an edge to are computed as described above. This procedure is repeated until all wanted global Jacobians are computed. In the context of neural networks, reverse mode automatic differentiation is also called backpropagation. ## Implementation For the implementation in a computer program we chose as programming language Python, because (scientific) Python is the most common programming language for machine learning. Our library is based mainly on the tensor library NumPy. ## Basic (low-level) part With the basic low-level part of the library the user can build the computational graph (implicitly) imperatively. On such a computational graph the global Jacobians of the output node can be computed efficiently by reverse mode automatic differentiation with the help of the library. The low-level part consists mainly of the Node class. Each instantiation of the Node class corresponds to the creation a node for the computational graph. To keep the implementation small and clear, the node variables are restricted to tensors of order 2 and the output node variable $v^{(m)}$ must be a scalar, i.e. $d^{(m)}=1$ In machine learning, the value of the output node is typically the cost value. So, that is not a severe restriction. Figure 1. Example of a computational graph. The leaf nodes are A and B. The output node is the rightmost node (sum over all elements). We denote in topological order, the non-leaf variables C (element-wise product), D (exponentiation) and E (sum of all elements). In the following we show how the computational graph of figure 1 can be build up in the DP-library. Leaf nodes can be instantiated directly by calling the constructor of the Node class, e.g. by a = Node(np.array([[1,1,1], [2,2,2]]), "A") b = Node(np.array([[1,2,3]]), "B") Here two leaf nodes a (with name A) and b (with name B) are generated. Both nodes have got an explicit name given by the optional second argument of the constructor. 
For all nodes with names, the Jacobians (also called gradients) are computed by reverse mode automatic differentiation, see below. The first node is a, i.e. $v^{(1)} = A$, and the second is b, i.e. $v^{(2)}=B$. The node variable $A$ is a 2x3 matrix. However, note that the node variables described in the theoretical part are formulated as vectors and that the Jacobian indices refer to such vector indices. As an example of the correspondence to the matrix $A$, note that the element $A_{21}$ is equivalent to $v_4^{(1)}$, and the total number of elements of the variable $v^{(1)}$ is $d^{(1)}=6$. For the flattened / vector version of $A$ we write $a$.

Non-leaf nodes are generated by methods (or overloaded Python operators) of the Node class. The methods correspond to the mathematical operators, e.g. the element-wise multiplication in figure 1 can be done with the API by c = a * b. Here, a Node instance of a non-leaf node is generated by the binary operator "element-wise multiplication" and the instance is assigned to the Python variable c (mathematical notation: $C$). Note that the shapes of $A$ (2x3 matrix) and $B$ (1x3 matrix), respectively $b$ (vector of dimension 3), are different. The DP-library supports broadcasting [20] for such element-wise operations. As a result of the broadcasted element-wise multiplication, c has the same shape as a. The completion of the computational graph of figure 1 is done by the following code:

    d = c.exp()
    e = d.sum() # output e is a scalar

For the variable d each element of c is exponentiated. For the variable e all elements of the variable d are summed up to a scalar. e is the output variable of the computational graph. By reverse mode automatic differentiation, the Jacobians of the node e w.r.t. the nodes a and b can be computed. This is done by calling the method grad(.) with argument 1 on the output node, grads = e.grad(1). The return value is a Python dictionary with an entry for each leaf variable with a name, here {"A": array([[2.7, 14.78, 60.26], [7.39, 109.20, 1210.29]]), "B": array([[17.50, 116.59, 826.94]])}.

As an example, we describe the implementation of the element-wise multiplication operation. The internal implementation is essentially the following (the wiring of the inner grad closure to the returned node is reconstructed here from the description that follows):

    def __mul__(self, other):
        if isinstance(other, numbers.Number) or isinstance(other, np.ndarray):
            other = Node(other)
        ret = Node(self.value * other.value)

        def grad(g):
            g_total_self = g * other.value
            g_total_other = g * self.value
            x = Node._set_grad(self, g_total_self, other, g_total_other)
            return x

        ret.grad = grad  # attach the backward closure to the new node (reconstructed)
        return ret

The method generates and returns a new node ret for the element-wise multiplication operator. The node instance ret has no name. The inner function definition grad implements how the backpropagated signal g is combined with the local Jacobians for both operands, i.e. in our computational graph a and b. How this implementation is related to the theory (see above) is not obvious. In the following, this relation is explained for the variable a. We assume in the analysis that the variable b was internally broadcast, so that a and b, resp. $v^{(1)}$ and $v^{(2)}$, have the same dimension $d^{(1)} = d^{(2)} = 6$. Here, the output node is e, i.e. $v^{(m)} = e$, and the backpropagated signal at the node c is $\left[ \frac{\partial v^{(m)}}{\partial v^{(p)}} \right] = \left[ \frac{\partial e}{\partial c} \right]$ (given to the inner function grad as argument g). To get the global Jacobian w.r.t.
the node a the dot product with the local gradient $\left[ \frac{\partial c}{\partial a} \right]$ must be calculated and combined with the backpropagated signal: $$\left[ \frac{\partial e}{\partial a} \right] = \left[ \frac{\partial e}{\partial c} \right] \cdot \left[ \frac{\partial c}{\partial a} \right]$$ or explicitly (with Jacobian) indices: $$\left[ \frac{\partial e}{\partial a} \right]_{1j} = \sum{ \left[ \frac{\partial e}{\partial c} \right]_{1k} \cdot \left[ \frac{\partial c}{\partial a} \right]_{kj} }$$ Note, that the first index of $\left[ \frac{\partial e}{\partial a} \right]_{1j}$ resp. $\left[ \frac{\partial e}{\partial c} \right]_{1k}$ is always a $1$ because of the scalar output of the computational graph. The local Jacobian for the element-wise multiplication is $$\left[ \frac{\partial c}{\partial a} \right]_{kj} = \delta_{kj} b_j$$ $\delta_{kj}$ is the Kronecker-Delta, i.e. $\delta_{kj}=0$ for $k \neq j$ and $\delta_{kj}=1$ for $k=j$. So, we have $$\left[ \frac{\partial e}{\partial a} \right]_{1j} = \sum{ \left[ \frac{\partial e}{\partial c} \right]_{1k} \delta_{kj} b_j } = \left[ \frac{\partial e}{\partial c} \right]_{1j} b_j$$ Therefore, the combination of the Jacobians by the dot-product is here equivalent to an element-wise multiplication of the Jacobians. The dimension of the Jacobians (indexed by ) need not to be considered in the shape of the Jacobian variables in the implementation. ## Neural network library (high-level) part Additionally, to the low-level part, the library includes different building blocks and helper functions which ease the implementation of neural networks. For teaching purposes, we restrict the provided building blocks to simple fully connected layers (see figure 2). With these layers fully connected feed-forward networks can be implemented. A hidden or output layer consists of an affine transformation given by a weight matrix $W^{(l)}$ and a bias vector $b^{(l)}$ and a (non-linear) activation function $act(.)$. Typical activation functions for hidden layers are, e.g. element-wise $ReLU$ or $tanh$. For classification tasks, the activation function of the output (last) layer is typically the logistic (two classes only) or the softmax function. A layer can be described mathematically by $$h^{(l+1)} = act(W^{(l)} \cdot h^{(l)} + b^{(l)})$$ Here, the superscript is the layer index. The input to the network is therefore $h^{(1)} = x$. For training of a neural network, a set of training examples must be provided, $$D_{Train} = \{ (x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \ldots, (x^{(n)}, y^{(n)}) \}$$ Each pair $(x^{(i)}, y^{(i)})$ is a training example with an input $x^{(i)}$ and a label (target value) $y^{(i)}$. The superscript is the index of the example. $n$ is the total number of training examples. On the training data set, the learning corresponds to minimizing a cost function. Here, we neglect for simplification generalization [8] which is very important in practice. The cost (and the prediction) is computed typically on (mini) batches. The inputs of many examples are concatenated in a design matrix $X$, i.e. each row of the matrix corresponds to an input vector $x^{(i)}$. Each layer of the neural network outputs a matrix $H$ with a hidden representation $h$ for each example as row vectors of the matrix. $$H^{(l+1)} = act(W^{(l)} \cdot H^{(l)} + b^{(l)})$$ The neural network layer building blocks are internally composed from Node class objects. In figure 2 such a building-block, internally structured by Node objects, is shown. Figure 2. 
One neural network layer represented as computational graph with activation function, here $ReLU$. Note, that such a layer is only a part of the full computational graph. Figure 3. A complete neural network composed of multiple layers. Each layer is internally composed of Nodes objects as shown in figure 2. A complete feed forward network is composed of stacked layers, see figure 3. For training, the computational graph of the neural network is augmented with a cost function and an additional node for the provided labels of the mini batch. An example of a building block for the cross-entropy cost is show in figure 4. In the next few sections we show how each layer is implemented with our library. The input layer consists only of input data, also called features, and is represented as a leaf node in the computational graph. In Python, the input data are typically given as NumPy arrays, so we just need to convert this input array into a node object to enable backpropagation. With the DP-Library the conversation is done via input = Node(X) # X is a NumPy 2d-array Note, that the optional name argument is omitted as the Jacobian w.r.t. X is not needed for the optimization. After converting the data into a Node object, we can use all operators and functions implemented in the Node class, including automatic differentiation. For the hidden layers, our library contains a class called NeuralNode, which initializes a weight matrix $W^{(l)}$ and a bias vector $b^{(l)}$. Both are leaf-nodes (see figure 2) with unique names given to the Node constructor. Since the most common used activation function is $ReLU$ we implemented also a $ReLU$ layer besides a pure linear layer. The pure linear layer can be used together with any activation functions specified by the user with the Node class, e.g. $leakyReLU$, $tanh$, $logistic$, etc. Stacking many of these layers results in a fully connected neural network, see figure 3. We call the output of the last layer $O$. $O$ is automatically produced by sending the input $X$ forward through the network (forward propagation). The training of the neural network is done by minimization of the cost. The cost is a function of the parameters $\theta$ of the neural network. The parameters $\theta$ are the weight matrices and bias vectors: $$\theta = \{ W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)}, \ldots, W^{(m)}, b^{(m)} \}$$ $m$ is the number of layers in the network. The cost function is implemented as part of the computational graph. Therefore, it consists of structured Node objects, see figure 4. Fig. 4. Calculation of the loss value $l$ using a cost function, here cross entropy, represented as computational graph. The labels $Y$ must be provided in one-hot encoding. $O$ is the output of the neural network (last Node object of the last layer). The final output from the cost (sub-)graph will be a scalar $l$. So, the gradient of the cost (loss) with respect to all model parameters $\theta$ can be calculated by the DP-library. This gradient is then used to train the network via an update rule, to tune the network parameters to lower the loss $l$. The full calculation pipeline of $l$ is shown in figure 5. Fig. 5. Neural network with corresponding cost function. The X input is mapped to the output via the neural network (see figure 3). The output of the neural network and the labels Y are mapped to the cost value via cost block. To ease the implementation of a neural network, we provide a Model class. The user has to derive from the Model class a concrete model. 
The layers must be defined as instance variables. Additionally, the user has to define a loss method and a forward-pass method. The following code shows an example of a neural network for MNIST classification:

    class Network(Model):
        def __init__(self):
            super(Network, self).__init__()
            self.h1 = self.ReLu_Layer(784, 500, "h1")
            self.h2 = self.ReLu_Layer(500, 200, "h2")
            self.h3 = self.Linear_Layer(200, 10, "h3")

        def loss(self, x, y):
            if not type(y) == Node:
                y = Node(y)
            out = self.forward(x)
            loss = -1 * (y * out.log())
            return loss.sum()

        def forward(self, x):
            if not type(x) == Node:
                x = Node(x)
            out = self.h3(self.h2(self.h1(x))).softmax()
            return out

In the constructor, two $ReLU$ layers and a linear layer are defined as instance variables. The linear layer is later complemented with a softmax activation function, since this network deals with multiclass classification (10 disjoint classes). The constructor signature of a layer instantiation is:

    def ReLu_Layer(number_of_inputs, number_of_outputs, "name_of_layer")

The forward pass that generates the output $O$ is defined in def forward(self, x) simply by stacking all defined layers plus an additional softmax() as explained above. The loss function which outputs $l$ is defined in def loss(self, x, y), where self.forward(x) is used to calculate the network output $O$. $Y$ represents our target values, here fixed class labels (one-hot encoded) for classification. Notice that each time we start a calculation it is checked whether the input is a Node object or not, and if not, the data is converted into one. After that, the user-defined network can be instantiated by calling the constructor:

    net = Network()

For training, we also provide different optimizers which inherit from the basic (abstract) Optimizer class. An optimizer updates the model parameters according to a specific update rule. The optimizers we provide are SGD, SGD with momentum, RMSProp and Adam [23]. An instance of an optimizer can be initialized, e.g., by

    optimizer = SGD(net, x_train, y_train)

The first parameter, net, is the network (see above). x_train and y_train are the training data, equivalent to $X$ and $Y$. Training can be started with

    loss = optimizer.train(steps=1000, print_each=100)

steps is the total number of training loops used to adjust the model parameters. print_each is the number of steps after which we want to receive feedback about the current training error, basically the loss value, which should decrease if training succeeds. By default, the train function returns the final loss value, which we saved into loss in the example above. For a more detailed analysis of the training it is also possible to call

    loss, loss_hist, para_hist = optimizer.train(steps=1000, print_each=100, err_hist=True)

With the parameter err_hist=True, a complete history of the loss value and the model parameters will be returned. These can be used for further analytics, e.g. to visualize the training process. After the network is trained, it is quite common to test how well the network learned its task by testing its predictions on a set x_test. Using the network prediction from the forward pass,

    y_pred = net.forward(x_test)

the test accuracy of the network can be calculated. For classification, for example, this means how many labels the network predicted correctly. For a deeper understanding of neural networks and optimizers, or for special purposes, it is possible to implement the training process from scratch. The Model class provides the functions get_grad(), get_param() and set_param().
These are also used internally by the Optimizer class. A manually implemented training loop using basic gradient descent could look like the following:

    net = Network()
    for epoch in range(100):
        # compute the loss and gradients on the training batch
        # (the exact get_grad() signature is assumed here; it is taken to return
        #  a dictionary of gradients keyed by parameter name)
        grad = net.get_grad(x_train, y_train)
        # get the current parameters
        param_current = net.get_param()
        # calc new parameters, actual learning
        param_new = {name: param_current[name] - 0.001 * grad[name]
                     for name in param_current.keys()}
        # set new parameters
        net.set_param(param_new)

## Accompanying exercises

To make the entry into the topic of differentiable programming as easy as possible, the DP library is part of a differentiable programming course and can be found, together with accompanying exercises, on the deep-teaching website [18] or directly in the GitLab repository [19]. The exercises are divided into three groups.

The first group of exercises teaches the principles of reverse mode automatic differentiation. It is explained how the DP library itself is implemented, i.e. how to implement the operator methods for the instantiation of a computational graph consisting of scalars, matrices, elementary operators (+, -, dot-product) and functions ($tanh$, $exp$, etc.), and how to implement automatic differentiation. Finally, everything is combined in an object-oriented architecture forming the DP library, thereby enabling easy use of the low-level and high-level functionalities mentioned.

The second group of exercises is about using the DP library to build neural networks, train them, and use them for inference. At the same time, each of these exercises covers best practices and findings of neural network research of the last couple of years, including batch normalization [21], dropout [22], optimizers (improvements of SGD, e.g. Adam [23]), weight-initialization methods (e.g. Xavier [24]) and activation functions. The last-mentioned exercise, which we look at here for illustration purposes, teaches about different activation functions and the so-called vanishing gradient problem [25]: We consider a simple deep neural network, i.e. one that consists of many layers, e.g. 10 linear layers. The output of the first linear layer is computed with $H^{(2)}=act^{(1)}( W^{(1)}H^{(1)} +b^{(1)} )$, with $H^{(1)}=X$ the input, $W^{(1)}$ the first weight matrix, $b^{(1)}$ the corresponding bias vector and $act^{(1)}$ the activation function. The output of the second linear layer is then computed with $H^{(3)}=act^{(2)}( W^{(2)}H^{(2)} +b^{(2)} )$ and so on, until the last layer $O=act^{(10)}( W^{(10)}H^{(10)} +b^{(10)} )$. Training the network, we first calculate the loss $l$, i.e. the difference between the output of our last layer $O$ (our predictions) and the true labels $Y$. This is a binary classification task, i.e. there are two possible labels ($0$ and $1$). The output $O$ for an example input $x$ is the predicted probability for the positive class, i.e. $p_{\theta}(y=1|x)$.
For such problems, the binary cross-entropy is typically used as cost function:

$$loss(\theta) = -(Y \log (O) + (1-Y) \log (1-O))$$

Second, we adjust the weight matrices of all layers $l$ by the update rule of gradient descent:

$$W^{{(l)}^{NEW}} \leftarrow W^{{(l)}^{OLD}} - \alpha \frac{\partial loss(\theta)}{\partial W^{{(l)}^{OLD}}}$$

Using the chain rule to calculate $\frac{\partial loss(\theta)}{\partial W^{(1)}}$, for example, we get:

$$\frac{\partial loss(\theta)}{\partial W^{(1)}} = \frac{\partial loss(\theta)}{\partial O} \cdot \frac{\partial O}{\partial H^{(10)}} \cdot \frac{\partial H^{(10)}}{\partial H^{(9)}} \cdot \ldots \cdot \frac{\partial H^{(2)}}{\partial W^{(1)}}$$

For binary classification, the typical activation function of the output layer is the logistic function $\sigma(z)=\frac{1}{1 + e^{-z}}$, which has the range $]0,1[$. However, a problem arises if the logistic function is also used as the activation function $act^{(1)}$ to $act^{(9)}$ of the intermediate layers, because the absolute value of its derivative is at most $\frac{1}{4}$, which in turn leads to the partial derivative $\frac{\partial loss(\theta)}{\partial W^{(1)}}$ becoming smaller and smaller the more layers the network has in between, as $\lim_{l \to \infty} (\frac{1}{4})^l = 0$. The derivative of the $tanh$ or the $ReLU$ function, on the other hand, has the range $]0,1]$, resp. $\{0,1\}$. The task of this sample exercise consists of (a) building the neural network model for the computational graph using the DP library, (b) training and validating the network with different activation functions, and (c) visualizing the vanishing gradient problem by plotting the sum of the absolute values of the partial derivatives for all weights of each layer.

The third group of exercises is about using more common, but also more complex, deep learning libraries like PyTorch and TensorFlow. These exercises are not directly related to our DP library, but should still be mentioned here because they are the last step of our educational path on differentiable programming, that is: (1) learn the principles of differentiable programming and how to build a framework for it using the example of our lightweight DP library, (2) learn how to use this library to build models, train them, validate them and use them for inference, and (3) make the transition to well-known but more complex frameworks. After that, the students should have a good starting point for understanding the inner implementation and software architecture of libraries like PyTorch and TensorFlow.

## Conclusion

The use of machine learning, especially of artificial neural networks, in practical applications has increased tremendously over the last years and will most likely keep increasing in the near and far future. Yet already today research and industry suffer from a lack of specialists in this field. Unfortunately, becoming an AI specialist involves a long learning curve and requires knowledge in the fields of mathematics, computer science and statistics, and ideally in the domain which you want to provide with AI-driven applications. With our library for educational purposes, teaching the fundamentals of differentiable programming can be improved significantly by opening the black box of deep learning libraries. With fewer than 1,000 lines of code, including about 400 lines of comments, in contrast to 3.5 million lines for TensorFlow [28], the goal of a lightweight, clear and easily understandable library was achieved.
Following the concept of didactic reduction [29], its use and architecture have a lot in common with TensorFlow and PyTorch, but with a focus on the core principles of differentiable programming. Lastly the stable API does not force teachers to re-adjust their exercises and educational material over and over again to keep them up-to-date. ## List of Exercises • Part 1: How to implement such a framework yourself • part 2: Best practices in deep learning • Part 3: Working with existing frameworks (Pytorch / Tensorflow) ### Part 1: How to Implement a Deep Learning Framework If you ever wondered how the deep learning frameworks realize automatic differentiation, this chapter is for you. ### Part 3a: PyTorch The following exercises also exist in the course Introduction to Machine Learning. They teach the core concepts of neural networks, including linear regression, logistic regression, cost functions and gradient descent. While teaching these core concepts, they are also suited to get familiar with PyTorch as deep learning framework for automatic differentiation. The following PyTorch exercises are suitable for beginners with machine learnign as well as beginners with PyTorch, though if you are not familiar with the concepts mentioned, we suggest to first complete the course Introduction to Machine Learning, where you will find the same exercise and their pendants using numpy only. The exercise below is almost the same as the PyTorch exercise above but includes the implementation of activation and cost functions in numpy aswell as their derivatives and writing code to build the computational graph. This exercise is a first approach onto automatic differentation, but yet still tailored for the use case of fully connected feed forward networks only. ### dp.py - Documentation dp.py library documentation ## References 1. Bergstra, James, et al. ”Theano: a CPU and GPU math expression compiler.” Proceedings of the Python for scientific computing conference (SciPy). Vol. 4. No. 3. 2010. 2. Martin Abadi et al. (2015). ”TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems”, Software available from tensorflow.org. 3. Adam Paszke et al. (2017). ”Automatic differentiation in PyTorch”, Software available from pytorch.org. 4. Dougal Maclaurin, David Duvenaud and Ryan P Adams (2015). ”Autograd: Effortless Gradients in Numpy”. In ICML 2015 AutoML Workshop. 5. Baydin, Atilim Gunes; Pearlmutter, Barak; Radul, Alexey Andreyevich; Siskind, Jeffrey (2018). ”Automatic differentiation in machine learning: a survey”. Journal of Machine Learning Research. 18: 1–43. 6. Anonymous authors, ”Official Caffe website”, online available (downloaded on 30. August, https://caffe.berkeleyvision.org/ 7. John Nickolls, Ian Buck, Michael Garland, Kevin Skadron (2008), Scalable Parallel Programming with CUDA, ACM Queue, vol. 6 no. 2, March/April 2008, pp. 40-53 8. Ian Goodfellow and Yoshua Bengio and Aaron Courville (2016). ”Deep Learning”, MIT Press 9. Blundell, Charles, et al. (2015). ”Weight uncertainty in neural networks”. Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, 1613-1622 10. Kingma, Diederik P., and Max Welling (2014). ”Auto-encoding variational Bayes”. Proceedings of the 2nd International Conference on Learning Representations (ICLR) 11. Leon Bottou, Frank E. Curtis, and Jorge Nocedal (2016). Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838. 12. 
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016. 13. Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1828–1836, 2015. 14. F. Gers (2001), Chapter 3.2.2. of ”Long Short-Term Memory in Recurrent Neural Networks”, PhD-Thesis, Lausanne, EPF 15. M. Collins (2018), ”Computational Graphs, and Backpropagation”, Lecture Notes, Columbia University 16. Travis E, Oliphant (2006). A guide to NumPy, USA: Trelgol Publishing 17. Thomas Kluyver et al. (2016), ”Jupyter Notebooks – a publishing format for reproducible computational workflows”, in Positioning and Power in Academic Publishing: Players, Agents and Agendas 18. Herta Christian et al. ”deep-teaching.org – Webiste for educational material on machine learning” online available (downloaded on 28. August 2019), https://www.deep-teaching.org/courses/differential-programming 19. Herta Christian et al. ”Repository of deep-teaching.org” online available (downloaded on 28. August 2019), https://gitlab.com/deep.TEACHING/educational-materials/blob/master/notebooks/differentiable-programming/dp.py 20. Anonymous authors, ”Array Broadcasting in Numpy”, online available (downloaded on 27. August 2019), https://www.numpy.org/devdocs/user/theory.broadcasting.html 21. Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. 22. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1), 1929-1958. 23. Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 24. Glorot, X., & Bengio, Y. (2010, March). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics (pp. 249-256). 25. Hochreiter, S. (1991). Untersuchungen zu dynamischen neuronalen Netzen. Diploma, Technische Universität München, 91(1). 26. Anonymous authors, Jupyter homepage, online available (downloaded on 27. August 2019), https://jupyter.org/ 27. Jeffrey M. Perkel. Why Jupyter is data scientists’ computational notebook of choice. Nature 563.7729: 145, 2018 28. Anonymous authors, OpenHub – Projects - TensorFlow, online available (downloaded on 27. August 2019), https://www.openhub.net/p/tensorflow/analyses/latest/languages_summary 29. Herta, C., Voigt, B., Baumann, P., Strohmenger, K., Jansen, C., Fischer, O., … & Hufnagel, P. (2019, July). Deep Teaching: Materials for Teaching Machine and Deep Learning. In HEAD’19. 5th International Conference on Higher Education Advances (pp. 1153-1131). Editorial Universitat Politècnica de València.
# Which one of the following is used as an antihistamine?

$(a)\;Omeprazole\qquad(b)\;Chloramphenicol\qquad(c)\;Diphenhydramine\qquad(d)\;Norethindrone$
Typegroups classifier for OCR ocrd_typegroups_classifier Typegroups classifier for OCR Installation From PyPI pip3 install ocrd_typegroup_classifier From source If needed, create a virtual environment for Python 3 (it was tested successfully with Python 3.7), activate it, and install ocrd. virtualenv -p python3 ocrd-venv3 source ocrd-venv3/bin/activate pip3 install ocrd Enter in the folder containing the tool: cd ocrd_typegroups_classifier/ Install the module and its dependencies make install Finally, run the test: sh test/test.sh Models The model densenet121.tgc is based on a DenseNet with 121 layers, and is trained on the following 12 classes: • Antiqua • Bastarda • Fraktur • Gotico-Antiqua • Greek • Hebrew • Italic • Rotunda • Schwabacher • Textura • other_font • not_a_font The confusion matrix obtained with a DenseNet-121 on the pages with a single font from the dataset (see "Training a classifier" below) is: Antiqua Bastarda Fraktur Got.-Ant. Greek Hebrew Italic Rotunda Schwabacher Textura Other font Not a font Recall Antiqua 1531 10 5 2 5 98.6% Bastarda 286 6 10 1 94.4 Fraktur 1933 1 5 1 2 99.5% Gotico-Antiqua 269 1 99.6 Greek 58 1 1 96.7% Hebrew 1 326 99.7% Italic 1 187 99.5% Rotunda 9 1495 5 11 1 98.3% Schwabacher 16 4 2 452 95.4% Textura 2 371 1 99.2% Other font 288 15 94.1% Not a font 4 2 2 1 5 1 7 4 2331 98.9% Precision 99.7% 94.7% 99.1% 95.4% 96.7% 99.4% 94.9% 99.1% 94.2% 96.4% 98.3% 99.0% Updating PyTorch If you update PyTorch, it is possible that the model cannot be loaded anymore. To solve this issue, proceed as follows. 1. Downgrade to a version of PyTorch which can load the model, 2. Run the following code: import torch from ocrd_typegroups_classifier.typegroups_classifier import TypegroupsClassifier torch.save(tgc.model.state_dict(), 'model.pt') 1. Upgrade to the desired version of PyTorch 2. Run the following code: import torch from ocrd_typegroups_classifier.network.densenet import densenet121 from ocrd_typegroups_classifier.typegroups_classifier import TypegroupsClassifier print('Creating the network') net = densenet121(num_classes=12) print('Creating the classifier') tgc = TypegroupsClassifier( { 'antiqua':0, 'bastarda':1, 'fraktur':2, 'gotico_antiqua':3, 'greek':4, 'hebrew':5, 'italic':6, 'rotunda':7, 'schwabacher':8, 'textura':9, 'other_font':10, 'not_a_font':11 }, net ) tgc.save('ocrd_typegroups_classifier/models/densenet121.tgc') 1. delete model.mdl If PyTorch cannot load model.mdl, then you will have to train a new model from scratch. Training a classifier The data used for training the classifier provided in this repository is freely available at the following address: https://doi.org/10.1145/3352631.3352640 The script in tool/create_training_patches.py can be used to extract a suitable amount of crops to train the network, with data balancing. The script in tools/train_densenet121.py continues the training of any existing densenet121.tgc in the models/ folder. If there is none present, then a new one is created and trained from scratch. Note that you might have to adapt the paths in these scripts so that they correspond to where data is in your system. Generating activation heatmaps For investigation purpose, it is possible to produce heatmaps showing where and how much the network gets activated for specific classes. 
First install an additional dependency which is not required by the OCR-D tool:

pip install tqdm

Then you can run heatmap.py:

python3 heatmap.py --layer 9 --image_path sample2.jpg

You can specify which layer of the network you are interested in, between 0 and 11. Best results are to be expected with larger values. If no layer is specified, the 11th is used by default.
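As a side note (not part of the tool itself), the recall and precision rows of the confusion matrix shown in the Models section are easy to recompute from any confusion matrix. Here is a small NumPy sketch with a made-up 3-class matrix, where cm[i, j] counts pages of true class i predicted as class j:

    import numpy as np

    # Hypothetical 3-class confusion matrix, for illustration only:
    # rows = true class, columns = predicted class.
    cm = np.array([[1531,   10,    5],
                   [   6,  286,   10],
                   [   1,    5, 1933]])

    correct = np.diag(cm).astype(float)
    recall = correct / cm.sum(axis=1)     # per true class: correct / all pages of that class
    precision = correct / cm.sum(axis=0)  # per predicted class: correct / all pages assigned to it

    for i, (r, p) in enumerate(zip(recall, precision)):
        print(f"class {i}: recall {r:.1%}, precision {p:.1%}")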
# Phased VCF

The principal output of the longranger run pipeline includes aligned reads with barcode and phasing information in BAM format, phased SNPs and indels in VCF format, and SV calls and candidates in BEDPE format. These are all standard file formats designed to interoperate with existing tools, and the additional information produced by the Chromium Platform is included as standards-compliant fields when appropriate. The following information assumes you are already familiar with the VCF format; a full description of the VCF standard is available online.

## Phased VCF Files

Long Ranger encodes SNP and indel phasing information in the GT (genotype) and PS (phase set) fields specified by the VCF standard. It also stores the barcode information supporting the phasing of each variant in the BX field, as a comma-delimited string with one entry per allele:

```
BC_STRINGref,BC_STRINGalt1,BC_STRINGalt2,...
```

Each BC_STRING is a semicolon-delimited list of underscore-delimited entries:

```
BC1_QUAL1-1_QUAL1-2_...;BC2_QUAL2-1_QUAL2-2...
```

where BC1 is the first barcode, and QUAL1-1 and QUAL1-2 are the observed Phred qualities of attaching that barcode to the given allele. For example, a BX field that contains AAAA_40_38;CCCC_40,GGGG_39 encodes two BC_STRINGs: one for the reference allele (AAAA_40_38;CCCC_40) and one for the alternate allele (GGGG_39).
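As a worked illustration of the format described above, here is a minimal sketch of a BX-field parser. It is not part of Long Ranger; the function name and return structure are purely illustrative, and it only handles strings shaped like the example above.

```python
def parse_bx(bx_value):
    """Split a BX field into one {barcode: [phred_quals]} dict per allele."""
    alleles = []
    for bc_string in bx_value.split(','):       # one BC_STRING per allele (reference first)
        barcodes = {}
        if bc_string:
            for entry in bc_string.split(';'):  # one entry per barcode
                barcode, *quals = entry.split('_')
                barcodes[barcode] = [int(q) for q in quals]
        alleles.append(barcodes)
    return alleles

print(parse_bx("AAAA_40_38;CCCC_40,GGGG_39"))
# [{'AAAA': [40, 38], 'CCCC': [40]}, {'GGGG': [39]}]
```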
# How Engage Affects Retention

This analysis is a work in progress. We'll attempt to analyze the effect that using Engage has on retention for Business customers. The dataset we'll use contains around 11K Business customers that started subscriptions on or after January 2019. We'll separate the customers into groups defined by whether or not they replied to a comment in Engage.

## Preliminary Findings

The data suggests that Business customers that replied to a comment in Engage churn at significantly lower rates than those that did not use Engage. This outcome is likely affected by sampling bias – those customers that used Engage likely have Instagram accounts with more followers and engagement, which would have granted them earlier access to the product. Customers that churned were less likely to have been invited to use Engage.

When all Business customers have access to Engage and the sampling is less biased we'll be able to get a better understanding of Engage's effect on retention and churn.

## Data Exploration

Let's count the number of customers that used Engage.

```r
# count engage users
users %>%
  count(used_engage) %>%
  mutate(prop = percent(n / sum(n)))
```

```
## # A tibble: 2 x 3
##   used_engage     n prop
##   <lgl>       <int> <chr>
## 1 FALSE       11575 98%
## 2 TRUE          263 2%
```

Only 263 (2%) of these customers replied to a comment in Engage. Next we will use a technique called survival analysis to compare churn and retention rates of these customers.

## Survival Analysis

Survival analysis is a common branch of statistics used for analyzing the expected duration of time before a certain event occurs. It is commonly used in the medical field to analyze mortality rates, hence the name "survival". It's especially useful when the data is right-censored, meaning there are cases in which the event hasn't happened yet, but will likely happen at some time in the future. In the output below, note the numbers with a "+": these refer to customers that haven't churned yet – their time to churn is X days "+".

```r
# build survival object
km <- Surv(users$time_to_cancel, users$canceled)

# preview data
head(km, 20)
```

```
##  [1] 162+   0    1+   8+ 408+ 566+  10+  10+ 496+ 443+ 622+ 455+ 503+ 670   52
## [16] 111+ 185   78   60  243
```

To begin our analysis, we use the formula Surv(time, canceled) ~ 1 and the survfit() function to produce the Kaplan-Meier estimates of the probability of "survival" over time. The times parameter of the summary() function gives some control over which times to print. Here, it is set to print the estimates for days 1, 7, 14, 30 and 60, and then every 90 days thereafter.

```r
# get survival probabilities
km_fit <- survfit(Surv(time_to_cancel, canceled) ~ 1, data = users)

# summarise
summary(km_fit, times = c(1, 7, 14, 30, 60, 90 * (1:10)))
```

```
## Call: survfit(formula = Surv(time_to_cancel, canceled) ~ 1, data = users)
##
##  time n.risk n.event survival std.err lower 95% CI upper 95% CI
##     1  11729     181    0.985 0.00113        0.982        0.987
##     7  11433     150    0.972 0.00152        0.969        0.975
##    14  11258      66    0.966 0.00166        0.963        0.970
##    30  10750     400    0.931 0.00235        0.927        0.936
##    60   9309     873    0.853 0.00333        0.847        0.860
##    90   8165     702    0.787 0.00389        0.780        0.795
##   180   5859    1121    0.672 0.00461        0.663        0.681
##   270   4192     666    0.589 0.00504        0.580        0.599
##   360   2933     374    0.531 0.00538        0.520        0.541
##   450   1567     609    0.409 0.00603        0.397        0.421
##   540    890     123    0.370 0.00641        0.358        0.383
##   630    327      44    0.343 0.00721        0.329        0.358
```

The survival column shows the estimated percentage of Business customers still active a certain number of days after subscribing. For example, around 93% of Business subscriptions are still active by day 30 and around 85% are still active by day 60. This curve can be plotted; you can see dips in the curve every 30 days, which makes sense given the monthly billing period of most subscriptions.

Now let's segment the customers by whether or not they used Engage and fit survival curves for each segment.

```r
# get survival probabilities
km_fit <- survfit(Surv(time_to_cancel, canceled) ~ used_engage, data = users)

# summarise
summary(km_fit, times = c(1, 7, 14, 30, 60, 90, 180))
```

```
## Call: survfit(formula = Surv(time_to_cancel, canceled) ~ used_engage,
##     data = users)
##
##                 used_engage=FALSE
##  time n.risk n.event survival std.err lower 95% CI upper 95% CI
##     1  11466     181    0.984 0.00115        0.982        0.987
##     7  11172     149    0.971 0.00155        0.968        0.974
##    14  10997      65    0.966 0.00170        0.962        0.969
##    30  10495     397    0.930 0.00239        0.925        0.935
##    60   9061     870    0.850 0.00339        0.844        0.857
##    90   7928     697    0.783 0.00396        0.776        0.791
##   180   5657    1110    0.666 0.00468        0.657        0.675
##
##                 used_engage=TRUE
##  time n.risk n.event survival std.err lower 95% CI upper 95% CI
##     1    263       0    1.000 0.00000        1.000        1.000
##     7    261       1    0.996 0.00380        0.989        1.000
##    14    261       1    0.992 0.00537        0.982        1.000
##    30    255       3    0.981 0.00847        0.964        0.998
##    60    248       3    0.969 0.01072        0.948        0.990
##    90    237       5    0.949 0.01372        0.923        0.977
##   180    202      11    0.902 0.01914        0.865        0.940
```

We can see that, although the sample is small, customers that used Engage churned at significantly lower rates than those that did not. For example, around 85% of Business customers that did not use Engage are still active by day 60, compared to around 97% of Business customers that did use Engage. Plotting these survival curves shows a seemingly large difference in churn rates.

The data suggests that customers that replied to a comment in Engage churn at significantly lower rates. However, this is likely affected by sampling bias. Those customers that used Engage likely have Instagram accounts with more followers and engagement, which would have granted them earlier access to the product. Customers that churned were less likely to have been invited to use Engage.
### Video Transcript

In this question there are three forces acting on the player: the weight force, the normal force exerted by the ground, and the frictional force, also exerted by the ground. In the first item we have to calculate the magnitude of the frictional force. Note that the player is already moving (he is sliding on the ground), so we can calculate the frictional force with the kinetic-friction equation: the frictional force equals the kinetic friction coefficient times the normal force. In this situation the normal force exactly counterbalances the weight force, so the frictional force equals the kinetic friction coefficient times the weight force, and the weight force is the mass of the player times the acceleration of gravity. Remember that the acceleration of gravity is approximately 9.8 meters per second squared near the surface of the Earth. Therefore the frictional force is 0.49 times 81 times 9.8, which gives approximately 390 newtons. This is the answer to the first item.

In the second item we have the following: the player starts his slide and, after a time of 1.6 seconds, he comes to rest, so his final velocity is equal to zero. What was his initial velocity? To answer that question we can use the equation for velocity as a function of time: the velocity equals the initial velocity plus the acceleration times the interval of time. The final velocity is zero, so zero equals the initial velocity plus the acceleration times 1.6, the time it took for the velocity to become zero. Then the initial velocity equals minus the acceleration times 1.6. How can we calculate the acceleration? We can use Newton's second law, which tells us that the net force equals the mass times the acceleration. Applying Newton's second law along the horizontal axis, which I choose to point to the right, the net force on that axis is the frictional force, which points to the left. So minus the frictional force equals the mass of the player times his acceleration: minus 390 equals 81 times the acceleration, so the acceleration is minus 390 divided by 81. The initial velocity is then 390 times 1.6 divided by 81. What happened to the minus sign? There is a minus sign on the acceleration and a minus sign in the equation, and a minus sign times a minus sign is a plus sign, which is why no minus sign appears here. The initial velocity is therefore approximately 7.7 meters per second, and this is the answer to the second item.
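As a quick numeric check of the two results in the transcript (a sketch only; the friction coefficient, the 81 kg mass and the 1.6 s stopping time are the values stated above):

```python
# Numeric check of the worked example above (values taken from the transcript).
MU_K = 0.49       # kinetic friction coefficient
MASS = 81.0       # player mass, kg
G = 9.8           # gravitational acceleration, m/s^2
STOP_TIME = 1.6   # time to come to rest, s

friction = MU_K * MASS * G     # kinetic friction force, N
accel = -friction / MASS       # deceleration along the slide, m/s^2
v0 = -accel * STOP_TIME        # from v(t) = v0 + a*t = 0 at t = STOP_TIME

print(f"friction force ~ {friction:.0f} N")   # ~389 N (about 390 N)
print(f"initial speed  ~ {v0:.1f} m/s")       # ~7.7 m/s
```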
### 4496 Blind Sparsity Based Motion Estimation and Correction Model for Arbitrary MRI Sampling Trajectories

Anita Möller1, Marco Maass1, Tim Jeldrik Parbs1, and Alfred Mertins1

1Institute for Signal Processing, Universität zu Lübeck, Lübeck, Germany

### Synopsis

A blind retrospective MRI motion estimation and compensation algorithm is designed for arbitrary sampling trajectories. Using the idea that natural images are sparsely representable, the algorithm is based on motion estimation between a motion-corrupted image and its sparse representative. To this end, rigid motion models are designed and used in gradient descent methods for image quality optimization. As the motion estimation and compensation work on arbitrary real-valued sampling coordinates, the algorithm is applicable to all trajectories. Image reconstruction is performed by computationally efficient gridding. Exact motion estimation results are shown for PROPELLER and radial trajectory simulations.

### Introduction

Several motion compensation methods exist, each for a specific sampling trajectory, measurement modality, and imaging object. Often they rely on extra navigator measurements, reduced patient comfort, or extended measurement time, as in gating. Moreover, free non-periodic patient motion such as swallowing still cannot be compensated. There is a lack of a general blind motion estimation algorithm that retrospectively compensates free patient motion and is applicable to arbitrary sampling trajectories. A model for such an algorithm is designed here based on the sparsity of MR images, gradient descent, and fast gridding. Simulations were run on PROPELLER1 and radial trajectories.

### Methods

One MRI k-space measurement is composed of several partial measurements $y_n$ acquired along specified trajectories, one readout per time interval $n$. Patient motion within one readout is assumed to be constant because the intervals are short. A suitable image reconstruction problem is given by $$\hat{\boldsymbol{x}}=\underset{\boldsymbol{x}}{\arg\min}\sum_{n=1}^N\left\|\mathcal{S}_n\mathcal{F}\mathcal{D}_n\boldsymbol{x}-\boldsymbol{y}_n\right\|_2^2+\frac{1}{\sigma^2}\Phi(\boldsymbol{x})$$ with the sampling operator $\mathcal{S}_n$ representing the trajectory at time $n$ and $\mathcal{F}$ denoting the Fourier transform of the image $\boldsymbol{x}$, which is motion-corrupted by the operator $\mathcal{D}_n$. Sampling is described by $\mathcal{S}\mathcal{F}\boldsymbol{x}$ on arbitrary k-space trajectories and requires computation at real-valued nonequidistant frequency coordinates; it is calculated by the nonequidistant discrete Fourier transform (NDFT)2. Two motion models are combined in $\mathcal{D}_n$, for translation and rotation separately. Full- and sub-pixel translation shifts are mathematically modelled by convolution matrices in image space3. Rotation is described by applying a rotation matrix to the sampling trajectory coordinates, followed by barycentric interpolation4 combined with a Delaunay triangulation5 of the coordinates. This delivers a differentiable motion model on arbitrary nonequidistant frequency coordinates. A meaningful regularization for motion-corrupted MR images is to enforce sparsity in the wavelet domain by $\Phi=\|W\boldsymbol{x}\|_1$, which favours clean natural structures. For motion estimation, a three-step iteration algorithm is performed. A sparsifying ADMM6 is evaluated in the first step.
It solves the minimization for fixed motion parameters by splitting it into a measurement-fitting and a sparsifying problem, which are iteratively solved and updated by $$\hat{\boldsymbol{x}}_{d+1}= \underset{\boldsymbol{x}}{\arg\min}\sum_{n=1}^N\|\mathcal{S}_n\mathcal{F}\mathcal{T}_n\boldsymbol{x}-\boldsymbol{y}_n\|_2^2+\frac{1}{\lambda^2}\left\|\boldsymbol{x}-\bar{\boldsymbol{x}}_{d+1}\right\|_2^2, \quad\bar{\boldsymbol{x}}_{d+1}=\hat{\boldsymbol{v}}_d-\boldsymbol{u}_d,$$ $$\hat{\boldsymbol{v}}_{d+1}= \underset{\boldsymbol{v}}{\arg\min}\left\|W\boldsymbol{v}\right\|_1+\frac{\sigma^2}{\lambda^2}\left\|\boldsymbol{v}-\bar{\boldsymbol{v}}_{d+1}\right\|^2_2,\quad \bar{\boldsymbol{v}}_{d+1}=\hat{\boldsymbol{x}}_{d+1}+\boldsymbol{u}_d,$$ $$\boldsymbol{u}_{d+1}=\boldsymbol{u}_d+\hat{\boldsymbol{x}}_{d+1}-\hat{\boldsymbol{v}}_{d+1}.$$ The measurement-fitting problem is solved by conjugate gradients7 using the inverse NDFT2. The sparsity of the solution of the second problem is enforced by soft thresholding in the wavelet domain. In the second step, motion estimation is performed by using the sparsified reconstruction of the ADMM in the derivatives of the motion models with Newton's gradient method7; thereby, the parametrization of the translation is improved with a Gaussian model. The last step updates the globally estimated motion on the k-space frequency coordinates and coefficients. Finally, image reconstruction is performed by gridding8 to overcome the blurring of the inverse NDFT: the frequency coefficients are resampled onto a Cartesian grid. As the arbitrary trajectory coordinates are not equally dense over the whole k-space, the coefficients are convolved with an area density compensation function. This weighting is calculated in a computationally efficient way by partitioning the k-space into rectangles around the sampling points and storing the structure in k-d trees9.
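The $\hat{\boldsymbol{v}}$-update above is the proximal step of the $\ell_1$ term, i.e. soft thresholding of wavelet coefficients. A minimal NumPy sketch of that shrinkage rule is given below; it is only an illustration, not the authors' implementation, and the threshold value is arbitrary.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Proximal operator of t*||.||_1: shrink coefficients toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# Illustration on a small coefficient vector; in the algorithm above the
# threshold would depend on sigma^2/lambda^2 and act on the wavelet coefficients W*v.
w = np.array([-1.3, -0.2, 0.05, 0.8, 2.1])
print(soft_threshold(w, 0.5))   # [-0.8 -0.   0.   0.3  1.6]
```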
### Results

Representatively, PROPELLER and radial trajectories were simulated. The algorithm was tested on BrainWeb10 and Shepp-Logan phantom data in an FOV of 455 and 160 pixels, respectively. Smooth patient motion was modeled as an autoregressive moving-average process with maximum translation amplitudes of up to 30 pixels and maximum rotations of up to 45°. Figure 1 shows motion-corrupted images and the motion-compensated results of the algorithm calculated with PROPELLER trajectories. In the corrupted images, no structure or even no image is identifiable. The reconstructed images do not show motion artefacts; clear contours and details are visible, and only a few gridding artefacts appear due to the low resolution. In Figure 2, the courses of the originally applied motion and of the motion estimated by the algorithm are shown; they follow each other very closely. The table in Figure 3 shows the mean percentage improvement of the PSNR and mutual information (MI) between motion-corrupted images and motion-compensated reconstructions for different maximum translation and rotation amplitudes.

### Discussion

The results prove the ability of the algorithm to estimate rigid motion very exactly. Even from images corrupted by motion shifts of about one fifth of the FOV, very detailed images are reconstructed. Small differences between the original and estimated motion merely shift the position of the image centroid and do not affect the image quality. The large improvements of the image quality measures underline the impression given by the reconstructed images. The method is easily adaptable to other motion models and thereby expandable to physical descriptions of elastic motion. As sampling is mathematically described for real-valued nonequispaced frequency coordinates, the algorithm quality is largely independent of the design of the MRI trajectories.

### Acknowledgements

This work has been supported by the German Research Foundation under Grant No. ME 1170/11-1.

### References

1. J.G. Pipe. Motion Correction With PROPELLER MRI: Application to Head Motion and Free-Breathing Cardiac Imaging. Magn Reson Med, 42(5):963-969, 1999.
2. J. Keiner, S. Kunis, D. Potts. Using NFFT 3 - A Software Library for Various Nonequispaced Fast Fourier Transforms. ACM Trans Math Softw, 36(4):19:1-19:30, 2009.
3. A. Möller, M. Maass, A. Mertins. Blind Sparse Motion MRI with Linear Subpixel Interpolation. Proc BVM, 510-515, 2015.
4. K. Hormann. Barycentric Interpolation. In: G.E. Fasshauer, editor. Approximation Theory XIV: San Antonio 2013. Springer, Cham, 197-218, 2014.
5. F. Aurenhammer, R. Klein, D.T. Lee. Voronoi Diagrams and Delaunay Triangulations. World Scientific, 2013.
6. N. Parikh, S. Boyd. Proximal Algorithms. Found Trends Optim, 1(3):127-239, 2014.
7. J. Nocedal, S.J. Wright. Numerical Optimization, 2nd ed. Springer, New York, 2006.
8. O.K. Johnson, J.G. Pipe. Convolution Kernel Design and Efficient Algorithm for Sampling Density Correction. Magn Reson Med, 61(2):439-447, 2009.
9. J.L. Bentley. Multidimensional Binary Search Trees Used for Associative Searching. Commun ACM, 18(9):509-517, 1975.
10. C.A. Cocosco, V. Kollokian, R.K.S. Kwan, E.C. Evans. BrainWeb: Online Interface to a 3D MRI Simulated Brain Database. Neuroimage, 5(4):425, 1997.

### Figures

Figure 1: Top row: motion-corrupted images of the BrainWeb and Shepp-Logan phantoms. Bottom row: corresponding motion-compensated reconstructions. The original motion, with a maximum shift of 30 pixels per dimension and a maximum rotation of 45°, is shown together with the estimated motion used for compensation in Figure 2.

Figure 2: Original and estimated absolute motion for the corresponding images in Figure 1, per measurement time $n$.

Figure 3: Mean percentage improvement of the image quality measures MI and PSNR for different images and maximum translation and rotation amplitudes, over five runs each.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019) 4496
# Showing that $\lim_{n\rightarrow \infty} \int_0^1 f_n(x)g(x)\,dx =0$

Let $\{f_n\}$ be a sequence of functions in $L^2[0,1]$ such that $f_n(x)\rightarrow 0$ almost everywhere on $[0,1]$, and suppose $|f_n(x)|\le 1$ for all $n$ and almost all $x\in [0,1]$. I would like some direction in answering the following questions:

(a) Show that $$\lim_{n\rightarrow \infty} \int_0^1 f_n(x)g(x)\,dx =0$$ for all $g\in L^2[0,1]$.

(b) Will the conclusion still hold if the condition $|f_n(x)|\le 1$ is replaced by the condition that the norms $\|f_n\|_{L^2}$ are uniformly bounded?

For (a), given any $g\in L^2[0,1]$, since $|f_n(x)|\le 1$ a.e. on $[0,1]$ for all $n$, we have $$|f_n(x)g(x)|\le |g(x)| \mbox{ a.e. on } [0,1] \mbox{ for all } n.$$ Since $g\in L^2[0,1]\subset L^1[0,1]$ (the measure space is finite), the dominated convergence theorem (DCT) gives $$\lim_{n\rightarrow \infty} \int_0^1 f_n(x)g(x)\,dx =\int_0^1 \lim_{n\rightarrow \infty}\big(f_n(x)g(x)\big)\,dx=0,$$ where the last equality follows from the assumption that $f_n(x)\rightarrow 0$ a.e. on $[0,1]$ as $n\rightarrow\infty$.

For (b), we can see that the conclusion does not hold if we just assume that $\|f_n\|_{L^2}$ is uniformly bounded, by the following example: take $f_n$ to be $$f_n(x)=\left\{ \begin{array}{ll} n, & \hbox{if } x\in[0,\frac{1}{n}]; \\ 0, & \hbox{otherwise.} \end{array} \right.$$ It's clear that $f_n(x)\rightarrow 0$ a.e. on $[0,1]$ as $n\rightarrow\infty$, and $\|f_n\|_{L^2}=1$ for all $n$. Take $g(x)=x^{-\frac{1}{3}}$. Then $g\in L^2[0,1]$ because $\int_0^1g(x)^2\,dx=\int_0^1x^{-\frac{2}{3}}\,dx=3x^{\frac{1}{3}}\Big|^1_0=3$. However, $$\int_0^1f_n(x)g(x)\,dx=n\int_0^{\frac{1}{n}}x^{-\frac{1}{3}}\,dx=\frac{3}{2}nx^{\frac{2}{3}}\Big|^{\frac{1}{n}}_0=\frac{3}{2}n^{\frac{1}{3}}\rightarrow\infty$$ as $n\rightarrow\infty$.

> How did you get $\|f_n\|_{L^2}=1$? I'm getting $\sqrt{n}$. I'm using this definition: $$\|f_n\|_{L^p}=\left(\int |f_n|^p\right)^{1/p}.$$ Am I doing something wrong? – Kuku Mar 23 '12 at 1:35
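For a numerical sanity check of the example in (b), the sketch below evaluates both $\int_0^1 f_n g$ and $\|f_n\|_{L^2}$ for a few values of $n$ (both integrands are supported on $[0,1/n]$ only). It also makes the point raised in the comment visible: the norm grows like $\sqrt{n}$ rather than staying equal to 1.

```python
from math import sqrt
from scipy.integrate import quad

# f_n = n on [0, 1/n] and 0 elsewhere; g(x) = x**(-1/3)
for n in (10, 100, 1000):
    fn_g, _ = quad(lambda x: n * x ** (-1.0 / 3.0), 0.0, 1.0 / n)  # integral of f_n*g, exact value 1.5*n**(1/3)
    fn_sq, _ = quad(lambda x: float(n * n), 0.0, 1.0 / n)          # integral of f_n**2, exact value n
    print(f"n={n:<5d}  int f_n*g = {fn_g:8.3f}   ||f_n||_2 = {sqrt(fn_sq):8.3f}")
```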
# Gridshell joints

In this tutorial, we will model a joint like those of a gridshell. Considerations of proper engineering are left out; the focus is solely on creating solids that are generated on aligned construction planes. Operations on solids, like trimming or merging, are computationally intensive and will lock Grasshopper when parameters are changed upstream, so the number of objects operated on should be limited. In this tutorial, this is done by creating a box that can be moved around; only the nodes inside the box are calculated.

## Grasshopper

The methods described in this tutorial expect that a surface for the gridshell is present and that there are segmented surfaces for each grid cell. To keep it simple, we will use a torus hereafter.

#### 0

##### Generate a gridshell surface and subdivide it

To generate a torus as base geometry, we create a Circle with a radius of 30 and then use End Points to get a point on the circle. Next, we draw another circle on the XZ Plane with a smaller radius of 15 and use it as the section curve for Sweep1, with the first circle as rail curve. To subdivide the surface of the torus, we first have to calculate the size of the grid cells: we get the approximate surface dimensions with Dimensions and use Division to divide them by 5 to get corresponding values. The component Divide Domain² lets us divide a two-dimensional domain (we can directly plug in the torus) into segments according to our calculated values, and Isotrim subdivides the torus into segmented surfaces.

#### 1

##### Create a box to capture nodes with

To place a box on the surface of the torus, we first have to find the coordinates for the box's base. This is done with Evaluate Surface, on which input S is reparameterized. We can now use an MD Slider to generate the coordinates and find a point on the surface. The box is created with Center Box, and three Number Sliders change its size. Using all four sliders and the Rhino viewport, we can capture the desired number of nodes within this box. This selection will be used for further modeling of the solids.

#### 2

##### Isolate the captured nodes

To continue, we further dismantle the grid cell surfaces with Deconstruct Brep and isolate the nodes that are within the box. For the latter, we use Point In Brep together with the vertices V. This gives us a list of Boolean values that can be connected to a Dispatch component. We now have all the nodes, but surfaces that are next to each other share the same vertices: by dismantling the surfaces, we got vertices that have the same coordinates as vertices from adjacent surfaces. Thus, we use Cull Duplicates and flatten its input P to end up with a list of unique nodes that will serve for modeling the joints.
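Conceptually, steps 1 and 2 just filter the grid vertices down to the unique points that fall inside the capture box. The short sketch below illustrates that idea in plain Python with NumPy, using an axis-aligned box instead of a Brep and made-up coordinates; in the Grasshopper definition the same filtering is done by Point In Brep, Dispatch and Cull Duplicates.

```python
import numpy as np

# Hypothetical vertex list; duplicates occur because adjacent cells share corners
vertices = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.2], [1.0, 0.0, 0.2], [2.0, 1.0, 0.5]])

# Axis-aligned capture box, analogous to the Center Box driven by the sliders
box_min, box_max = np.array([0.5, -0.5, 0.0]), np.array([1.5, 0.5, 1.0])

inside = np.all((vertices >= box_min) & (vertices <= box_max), axis=1)  # Point In Brep / Dispatch
captured = np.unique(vertices[inside], axis=0)                          # Cull Duplicates
print(captured)   # one unique node at [1.0, 0.0, 0.2]
```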
#### 3

##### Select the surrounding surfaces

Next, we select the surfaces that connect to the identified nodes. The vertices of each grid cell are separated in lists, and this will help us to get the right surfaces. We connect a Sort List component to the Point In Brep detection from before and append a List Item component on which we reverse input L. Now, if one of the vertices is within the box, the value 1 for True is forwarded. Using Dispatch with both inputs flattened will sort through the surfaces and select those which are connected to our nodes.

#### 4

##### Select the connected curves

To find the curves which are connected to the nodes, we grab the edges E from the dismantled grid cells and attach an End Points component on which we graft both outputs. We now use Point In Brep to check whether one of the end points is within our box. By using Sort List and List Item (with input L reversed) we again forward 1 for True to a Dispatch component with both inputs flattened. We now have all the edges that are completely or partially in the box, but because adjacent cells have equal edges, there are some duplicate ones. We can remove duplicate curves by getting their midpoint with Point On Curve and then using Cull Duplicates to get the indices of the duplicates. It's important that we right-click Cull Duplicates and select Leave One. A List Item component will now do the sorting, and we get the curves that will become the gridshell members.

#### 5

##### Find proper vectors for construction planes

All cells of the gridshell have a different orientation in space, and the construction planes of the joints should deviate as little as possible from the adjacent surfaces. For this, we calculate the mean vector by adding the normal vectors of the corresponding cells. Before we can add the vectors, we have to find the cells that are associated with a node. For this, we use Area to get the centers of the cells and then use Closest Points to find the closest center points for each node. The number of closest points N depends on the type of grid (triangular, hexagonal etc.) and on the position of the cell within the grid. We get this number from output V of the Cull Duplicates component from step 2, because the number of neighboring cells equals the number of duplicate vertices. We now have a separate list of associated surface indices for each node, and a List Item component will get us the actual surfaces. Then, we connect an Evaluate Surface and reparameterize input S. A Panel set to .5,.5 will evaluate the surface at its center. At output N we get the normal vectors, and we add them with Mass Addition. Also, we connect another Mass Addition to output U to get the mean orientation of each node.
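As a plain-Python illustration of what the two Mass Addition components compute for a single node (a sketch with made-up normals and U directions; in the definition these come from Evaluate Surface on the node's neighboring cells):

```python
import numpy as np

# Hypothetical per-cell vectors around one node: one row per neighboring cell
normals = np.array([[0.0, 0.1, 1.0], [0.1, 0.0, 0.9], [-0.1, 0.0, 1.1]])
u_dirs  = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [1.0, -0.1, 0.0]])

mean_normal = normals.sum(axis=0)  # Mass Addition of the normals -> Z axis of the joint plane
mean_u      = u_dirs.sum(axis=0)   # Mass Addition of the U directions -> guide for Align Plane

print(mean_normal, mean_u)
```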
#### 6

##### Create solids for joints

The next task is to create the construction planes with the just created vectors. First, we create planes perpendicular to the mean normal vectors with Plane Normal: at input O we connect the nodes from step 2 as origins, and we flatten input Z, to which we connect the normal vectors. Then, we append an Align Plane component and connect the mean orientation vectors to input D, flattened. Now that we have our construction planes, we create a ring that the grid members can later attach to. In a Panel with Multiline Data turned off, we write the inner and outer radius of our ring (here 0.10 and 0.12) and draw matching circles with Circle (graft input R). Explode Tree and Ruled Surface create a flat ring that will serve as base for the solids. The axes of the grid members will meet the ring at half height, so we have to extrude it upwards and downwards. The direction of the extrusion vectors is given by the mean normal vectors, and we have to adjust their length: a panel with 0.05 gives us a value for half of the height, and Negative creates the counterpart. Both values are connected to an Amplitude component on which we graft input A and flatten input V. Now each vector is converted into two short vectors, but we need to use Flip Matrix to adjust the data sorting for the rings. To put it all together, we use an Extrude component, which generates the solids from the rings; we have to graft and simplify input B and simplify input D to get matching data streams. Because we have two extrusions for each node, one upward and one downward, we can combine them into a single Brep with Solid Union.

#### 7

##### Shorten length of grid members

We continue with the curves from step 4. The goal is to shorten them so that they end on the center lines of the rings. In this example, we will cut off 0.11 from each end. After retrieving their initial Length, we use Subtraction to calculate the end points on one side. On the other side, we need to start at 0.11, but we need this value for each line, so we use Longest List to duplicate it as many times as our list of initial curves is long. Both values (start and end) are combined with Merge, on which we graft the inputs. We now have lists that contain two values each. To shatter the curves at these lengths, we have to calculate their curve parameters. For this, we use Evaluate Length and graft input C. We set input N to False, which states that we are working with actual and not normalized lengths. Next, we connect output t to a Shatter component and also graft the curves that we connect to input C. The curves are now cut into three pieces. Because we are only interested in the middle part, we use List Item and set the index to 1. We find the shortened curves at its output.

#### 8

##### Calculate vectors for construction planes

To create a section profile for each curve, we have to generate a construction plane. It should deviate as little as possible from the neighboring cells, so that the grid member has the same tilt to both adjacent surfaces (similar to step 5). With Point On Curve we get the midpoint of each curve and then find the neighboring cells with Closest Points. The number of points N to look for is given by output V of Cull Duplicates from step 4. In the same way as in step 5, we use Evaluate Surface, reparameterize its input S, set .5,.5 as uv-coordinate and calculate the mean vector with Mass Addition attached to output N.

#### 9

##### Create solids for grid members

Next, we give body to the grid members. By using Perp Frame and reparameterizing its input C, we can create planes perpendicular to the curves. The location on the curves (input t) is set to their starting point with 0. These planes are not yet aligned with our surfaces, so we use Align Plane together with our vectors from step 8; input P has to be grafted so that the data structures match. The actual section profile is drawn as a Rectangle with X and Y dimensions from -0.045 to 0.045; this way, our curves remain the axes of our members. We then use Sweep1 to create solids along the full length of our curves. Again, we have to do some grafting to match the data structures.

#### 10

##### Solve intersection for joints and grid members

In the steps before, we have created joints and grid members as solids, but they still interpenetrate. We can now either trim the joints with the grid members or the grid members with the joints by using the Solid Difference component; this depends on the engineering of the joints. Likewise, more construction parts could be added in a similar fashion.

## Get the results

### Version Info

- Rhino 6.31
- Grasshopper 1.0.0007

## But I only have a mesh…

Let's have a look at what has to be changed if we initially start with a mesh instead of a surface. To quickly create a mesh, we use Mesh Surface together with the grid cell dimensions from earlier. Face Boundaries converts the mesh faces to polylines.

To rework step 1 (creating the capture box), we have to do a little more work to find a replacement for Evaluate Surface: we use Construct Point together with sliders for the coordinates, which gives us a point independent of our geometry. We bring both together with Mesh Closest Point and get the index of the mesh face closest to our point at output I. List Item then finds the matching curves from Face Boundaries. To create a construction plane for our box, we use Is Planar, which converts the polyline to a surface and evaluates its plane. We now have a base plane for our box from earlier.

What remains is to find a substitution for Deconstruct Brep. This time, we work with the polylines instead of segmented surfaces. We can directly forward them to step 3, where they will be converted to surfaces with Surface. Also, we break the Face Boundaries into lines with Explode and forward them (output S) to step 4. The Explode component will also give us the vertices of each face at output V. The start and end point of each face boundary are listed as two separate vertices, but they are in fact the same point, so we use Cull Index to remove the unnecessary point. The rest of step 2 remains unchanged.

Now every change is made to use the algorithm with meshes. It could be tweaked even more to use the mesh faces for the calculation of the construction plane vectors. This optimization would be even more useful if additional Grasshopper plugins were used; for now, it's fine for vanilla Grasshopper.

## Get the results

### Version Info

- Rhino 6.31
- Grasshopper 1.0.0007
# In a Compton-effect experiment in which the incident x-rays have a wavelength of 10.0 pm, the scattered x-rays

###### Question:

In a Compton-effect experiment in which the incident x-rays have a wavelength of $10.0\ \mathrm{pm}$, the scattered x-rays at a certain angle have a wavelength of $10.5\ \mathrm{pm}$. Find the momentum (magnitude and direction) of the corresponding recoil electrons.
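A short numerical sketch of one way to work this out, using the Compton formula to get the scattering angle and momentum conservation to get the electron recoil; the constants are the usual textbook values and the variable names are my own.

```python
from math import sqrt, cos, sin, acos, atan2, degrees

h = 6.626e-34                  # Planck constant, J*s
m_e, c = 9.109e-31, 2.998e8    # electron mass (kg), speed of light (m/s)
lam_c = h / (m_e * c)          # Compton wavelength, ~2.43 pm

lam_in, lam_out = 10.0e-12, 10.5e-12
theta = acos(1.0 - (lam_out - lam_in) / lam_c)   # photon scattering angle from d_lambda = lam_c*(1 - cos theta)

p_in, p_out = h / lam_in, h / lam_out            # photon momenta before and after scattering
# Momentum conservation: electron momentum = incident photon momentum - scattered photon momentum
p_x = p_in - p_out * cos(theta)                  # x axis along the incident photon
p_y = -p_out * sin(theta)                        # opposite side of the scattered photon
p_e = sqrt(p_x**2 + p_y**2)
phi = degrees(atan2(abs(p_y), p_x))              # recoil angle measured from the incident direction

print(f"recoil momentum ~ {p_e:.2e} kg*m/s, about {phi:.0f} degrees from the incident x-ray direction,")
print("on the opposite side of the scattered photon")
```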
# Part 2.4 Basic output methods

There are several very simple ways of outputting information to the user or to the debugger. Some forms are obtrusive outputs and some are unobtrusive. An obtrusive output means that the program stops running the main sequence whilst the output is displayed, whereas an unobtrusive output does not stop the main execution.

## Console outputs

The console is a form of collecting multiple outputs for reference. Output here is unobtrusive.

### Console.Write

The Console.Write method is used to output information of any of 17 types (for instance, boolean, integer, single) on to a line without taking a new line. Below is a sample write command:

VB.NET
Console.Write("Hello world")
Console.Write("This is a test")

### Console.WriteLine

The Console.WriteLine method is used to output some text and take a line break after it. This way multiple outputs do not get mixed up as easily as shown above.

VB.NET
Console.WriteLine("Hello world")
Console.WriteLine("This is a test")

## Dialog based output

Dialog based output such as message boxes tends to be obtrusive output - it breaks execution of the program.

### MsgBox

One of the simple forms of output is to use MsgBox, which takes up to three arguments as follows:

MsgBox(prompt As String, [Buttons As MsgBoxStyle], [title As String])

The prompt parameter is used to contain a message as a string, and title is an optional parameter for the window caption. Below is a sample:

VB.NET
MsgBox("Hello world")

The other option is the Buttons parameter, which is used to specify how the message box looks. The list below shows all the available MsgBoxStyle values (the original page also showed a sample screenshot of each style, taken on Windows 7):

- MsgBoxStyle.AbortRetryIgnore
- MsgBoxStyle.ApplicationModal
- MsgBoxStyle.Critical
- MsgBoxStyle.DefaultButton1
- MsgBoxStyle.DefaultButton2
- MsgBoxStyle.DefaultButton3
- MsgBoxStyle.Exclamation
- MsgBoxStyle.Information
- MsgBoxStyle.MsgBoxHelp
- MsgBoxStyle.MsgBoxRight
- MsgBoxStyle.MsgBoxRtlReading
- MsgBoxStyle.MsgBoxSetForeground
- MsgBoxStyle.OkCancel
- MsgBoxStyle.OkOnly
- MsgBoxStyle.Question
- MsgBoxStyle.RetryCancel
- MsgBoxStyle.SystemModal
- MsgBoxStyle.YesNo
- MsgBoxStyle.YesNoCancel

### MessageBox

The MsgBox method is quite limited, whereas the MessageBox class is more feature rich and customisable. One of the lacking features of the MsgBox method is that it does not support both an icon and special buttons at the same time. As MessageBox is a more powerful class rather than a simple predefined shorthand function, it does not work the same way and provides many more features. The following sample illustrates this.

VB.NET
MessageBox.Show("Hello world!", "Sample program", MessageBoxButtons.YesNoCancel, MessageBoxIcon.Warning, MessageBoxDefaultButton.Button1)

Both of these dialogs return a result (a MsgBoxResult for MsgBox and a DialogResult for MessageBox.Show), which means that, using an If statement, the result can be used to determine what happens when each button is pressed (see the sketch at the end of this part).

## Other output styles

There are other ways of outputting information, such as directly putting the information on to the user interface through the use of a control such as a Label or ListBox.

- Create a program that displays the same sentence on both the console and on a dialog.
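To show how the dialog result mentioned above can drive program flow, here is a minimal sketch (the message text and caption are made up for this example, and it assumes a Windows Forms project where System.Windows.Forms is available):

VB.NET
' Show an obtrusive dialog and capture which button was clicked.
Dim result As DialogResult = MessageBox.Show("Do you want to save your work?", "Sample program", MessageBoxButtons.YesNoCancel, MessageBoxIcon.Question)
' Branch on the user's choice; the console output here is unobtrusive.
If result = DialogResult.Yes Then
    Console.WriteLine("User chose Yes")
ElseIf result = DialogResult.No Then
    Console.WriteLine("User chose No")
Else
    Console.WriteLine("User cancelled")
End If

The same pattern works with MsgBox, except that its return value is compared against the MsgBoxResult enumeration instead.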
# To find trace and determinant of matrix [duplicate]

Let $A$ be a $4\times 4$ matrix with real entries such that $-1,1,2,-2$ are its eigenvalues. If $B=A^{4}-5A^{2}+5I$, where $I$ denotes the $4\times 4$ identity matrix, then which of the following statements are correct?

1. $\det (A+B)=0$
2. $\det B=1$
3. $\text{trace}(A-B)=0$.
4. $\text{trace}(A+B)=4$.

NOTE: There may be one or more correct options.

I know that the trace of a matrix is the sum of its eigenvalues and its determinant is the product of its eigenvalues, but I don't know how to apply these facts to this question.

-

Have you seen that the eigenvalues of $p(A)$ are $p(\text{the eigenvalues of }A)$ when $p$ is a polynomial? If not, you could prove this, and use it. Note that $B$, $A+B$, and $A-B$ are polynomials in $A$. –  Jonas Meyer May 22 '12 at 5:45

This is essentially this question! –  Arturo Magidin May 22 '12 at 5:45

## marked as duplicate by Arturo Magidin, copper.hat, Chris Eagle, Alex Becker♦, t.b. May 22 '12 at 14:16

The characteristic polynomial of $A$ is
$$(t+1)(t-1)(t+2)(t-2) = (t^2-1)(t^2-4) = t^4 - 5t^2 + 4.$$
Therefore, by the Cayley-Hamilton Theorem,
$$A^4 - 5A^2 + 4I = 0.$$
In particular, $B= A^4 - 5A^2 + 5I = (A^4-5A^2+4I)+I = I$. So $B=I$, $A+B=A+I$, and $A-B=A-I$.

The eigenvalues of $A+\mu I$ are of the form $\lambda+\mu$, where $\lambda$ is an eigenvalue of $A$. So:

Since $-1$ is an eigenvalue of $A$, then $0=-1+1$ is an eigenvalue of $A+I=A+B$, so $\det(A+B)=0$.

Since $B=I$, $\det(B)=1$.

The trace of $A-B$ is the sum of the eigenvalues of $A-B$; the sum of the eigenvalues of $A-B$ is $(-1-1) + (1-1) + (2-1) + (-2-1) = -2+0+1-3 = -4$. Alternatively, it is the sum of the trace of $A$ (which is $0$, since its eigenvalues add up to $0$) and the trace of $-B$, which is $-I$, hence the trace is $-4$.

And the trace of $A+B$ is the sum of the eigenvalues of $A+B$, which is $(-1+1) + (1+1) + (2+1) + (-2+1) = 4$. Or it is $\mathrm{trace}(A)+\mathrm{trace}(B) = 0 + \mathrm{trace}(I_4) = 4$.

-
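As the first comment suggests, the same conclusions also follow without Cayley-Hamilton: the eigenvalues of $B = p(A)$ with $p(t)=t^4-5t^2+5$ are $p(\lambda)$ for the eigenvalues $\lambda$ of $A$, and

$$p(\pm 1) = 1 - 5 + 5 = 1, \qquad p(\pm 2) = 16 - 20 + 5 = 1,$$

so every eigenvalue of $B$ equals $1$, which is consistent with $B = I$ as found above.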
# Analysis Overview

To analyze the effect of the course redesign, we consider data from Fall 2014 to Fall 2016, for all sections of Chemistry 111. Students withdrawing or receiving an incomplete (W/I) are excluded from this analysis. A repeatable grade is defined as a student receiving a D, D+, or F grade, or an unexcused withdrawal (WU).

1. Examine bivariate associations between outcome measures, covariates and the CRT intervention condition.
    1. Student and class level characteristics are used as covariates.
    2. Each predictor is compared to the intervention condition (CRT vs Traditional), and to the outcomes (GPA and receipt of a repeatable grade).
    3. Analysis is done on two sets of data samples:
        • All terms (Full sample)
        • Just the F14 and F16 terms
2. Model building - Starting with variables that were shown to be bivariately associated with either intervention or outcome, build a multivariable statistical model to assess the effect of the CRT on student performance.
3. Use a supervised machine learning technique to identify other key characteristics highly predictive of a student receiving a repeatable grade.

### Outcome measurements

1. GPA

Since GPA is being treated as a continuous variable, distributions of GPA are displayed using boxplots with overlaid violin plots. The diamonds represent the average or mean value for that group. Two-sample t-tests and ANOVA tests are used to compare the average GPA across levels of the covariate of interest.

2. Receipt of a repeatable grade

We also dichotomize the grade a student received and create an indicator for whether or not the student received a repeatable grade. This is also called the failure rate in some places in this report. Where the sample size is large enough, $$\chi^{2}$$ tests for equal proportions across two or more groups are conducted; otherwise Fisher's Exact Tests are used. For any comparison where a zero cell occurs (a combination of factors that does not exist in the data), no statistical test is conducted.

# Performance Outcomes

We examine four semesters of data from a single instructor: one semester of the traditional course offering (F14), two semesters of redesigned courses (F15, S16), and one semester of the redesign with an SI enhancement. These are colored in red, blue and green respectively in most plots.

### GPA

The overall distribution of GPA for the CRT courses has shifted in the positive direction compared to the traditional courses. The means for F15 and F16 are higher than F14, with S16 closer to F14 than the other redesigned terms. The variance in GPA from the last semester is much higher than the rest, potentially indicating that the effect of SI did not have a uniform impact on increasing the GPA for all students.

### DFW rate

The percent of students receiving a repeatable grade has declined over the past four semesters.

### Statistical Comparisons

Class level performance characteristics

| Term | Type | Ave. GPA | # DFW | Enrollment | DFW Rate |
|------|-------------|------|----|-----|------|
| F14 | Traditional | 1.53 | 68 | 154 | 0.44 |
| F15 | CRT | 1.78 | 51 | 150 | 0.34 |
| S16 | CRT | 1.65 | 58 | 157 | 0.37 |
| F16 | CRT+SI | 1.89 | 49 | 149 | 0.33 |

• There is a significant improvement in GPA averaged across all redesigned sections compared to traditional (p=0.011).
• There is also a significant improvement in average GPA in F16 compared to F14 (p=0.003).
• The failure rate in F16 is lower than the failure rate in F14, but only marginally so (p=0.093) • The proportion of repeatable grades does not differ significantly across semesters (p=0.058) # Student Characteristics ## Gender ### Course Type Since Fall 14, the gender distribution has become closer to equal, with slightly more females than males since Spring 16. This difference is moderately statistically significant ($$\chi^2_{3}$$=6.5, p=.09). Distribution of gender across terms gender F14 F15 S16 F16 F 66 (42.9%) 62 (41.3%) 83 (52.9%) 77 (51.7%) M 88 (57.1%) 88 (58.7%) 74 (47.1%) 72 (48.3%) ### Performance - GPA There is no statistical difference in the average GPA between males and females, regardless of cohort. Comparison of GPA across genders Full Sample F14 and F16 only Gender Mean SD p-value Mean SD p-value F 1.70 0.96 0.88 1.73 1.02 0.75 M 1.71 1.10 1.69 1.08 A multiple linear regression model found that the redesign did not have a statistically different impact on males and females. ### Performance - DFW rate There is no difference in proportion of males and females getting a repeatable grade on the entire sample, or when using data on the F14 and F16 cohorts only. N(%) of students passing and failing Chem 111 by course type and gender Females Males Course Type Pass DFW Pass DFW Traditional 41 (62.1%) 25 (37.9%) 45 (51.1%) 43 (48.9%) CRT 94 (64.8%) 51 (35.2%) 104 (64.2%) 58 (35.8%) CRT+SI 51 (66.2%) 26 (33.8%) 49 (68.1%) 23 (31.9%) A logistic regression model on the full data indicates that students in the redesigned section (CRT and CRT + SI) have significantly lower odds of failing Chem 111 compared to students in the traditional courses after adjusting for gender. Logistic Regression of course type on the receipt of repeatable grade OR CI p-value Gender 1.119 (0.8,1.56) 0.505 CRT 0.699 (0.47,1.04) 0.076 CRT + SI 0.626 (0.39,1) 0.049 Summary: The redesign had a positive effect on GPA and DWF rate after controlling for gender, and while it appears as if males improved more than females between Traditional and CRT + SI, the difference is not statistically significant. ## Age ### Course Type Students in F14 were on average 0.68 (0.01, 1.34) years older than those in the two CRT semesters (p=.04), and 0.87 (0.1, 1.65) years older than students in Fall 16 (p=.02). Mean and SD for ages within class types age 20.72 (3.37) 20.04 (2.73) 19.85 (2.55) Tukeys Honest Significant Difference for differences in average age between all pairs of class types diff lwr upr p adj CRT-Traditional -0.68 -1.34 -0.01 0.04 CRT+SI-Traditional -0.87 -1.65 -0.10 0.02 CRT+SI-CRT -0.19 -0.87 0.48 0.78 ### Performance - GPA GPA and age appear to have a non-linear relationship. Zooming in on students under 25, the relationship between age and GPA is decreasing until about 20, then increases afterward. A linear spline model was fit on the full sample using a knot at 20 to allow for a change in slope while controlling for the effect of the course type. Estimate CI p-value Age -0.090 (-0.23,0.05) 0.1951 Over 20 0.192 (-0.01,0.4) 0.0644 CRT 0.162 (-0.05,0.37) 0.1252 CRT + SI 0.342 (0.1,0.58) 0.0052 Under 20, there is a very slight ($$\beta=-0.09$$) and non-significantly different from zero downward trend in GPA as age increases. For each year older than 20 a student is, the average GPA increases by 0.19 (p=.06). Students in the CRT + SI course have on average 0.34 higher GPA compared to students in the traditional class (p=.005). 
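The report does not print the fitted equation; assuming the usual hinge (linear-spline) coding with a knot at age 20, the model summarized in the table above has roughly the form

$$\widehat{\text{GPA}} = \beta_0 - 0.09\,\text{Age} + 0.19\,(\text{Age}-20)_{+} + 0.16\,\text{CRT} + 0.34\,(\text{CRT+SI}),$$

where $(\text{Age}-20)_{+}=\max(\text{Age}-20,0)$. Under this coding the $0.19$ is the change in slope at the knot (so the age slope is about $-0.09$ below 20 and about $0.10$ above it), and the course-type dummies shift the intercept.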
An interaction term between age and course type was tested but found to be non-significant. This indicates that the intervention has a similar positive effect on a student’s GPA regardless of their age. ### Performance - DFW rate There is no difference in the average age for students receiving a repeatable grade or not, with our without the older students included in the analysis. Mean and SD of student ages grouped by whether or not they passed the class Full Sample F14 and F16 only Repeat Grade Mean SD p-value Mean SD p-value All Ages No 20.23 3.13 0.44 19.73 1.55 0.59 Yes 20.06 2.39 19.66 1.39 Under 25 No 20.33 3.30 0.79 19.72 1.61 0.82 Yes 20.24 2.54 19.68 1.46 Logistic Regression of course type on the receipt of repeatable grade OR CI p-value Age 0.970 (0.91,1.03) 0.335 CRT 0.682 (0.46,1.02) 0.059 CRT + SI 0.603 (0.38,0.96) 0.035 A logistic regression model on the full data across all ages indicates that students in the redesigned section (CRT and CRT + SI) have significantly lower odds of failing Chem 111 compared to students in the traditional courses after adjusting for age. Summary: The redesign had a positive effect on GPA and DWF rate after controlling for age. ## Underrepresented Minority (URM) The definition used in this report for an underrepresented minority student is as follows: • Non-URM: Asian, NHOPI, White • URM: Black, HL • Unknown: Mult, Unk ### Course Type The proportion of students in each URM category listed above has changed significantly since F14 ($$\chi^2_{6}$$=18.0, p=.006). Most notably is that the proportion of URM students has increased greatly from F14 to S16, becoming about the same proportion (40%) as the non-URM group. Proportion of students in each URM classification across terms urm_status F14 F15 S16 F16 Non-URM 81 (52.6%) 70 (46.7%) 65 (41.4%) 65 (43.6%) URM 44 (28.6%) 63 (42%) 78 (49.7%) 60 (40.3%) Unknown 29 (18.8%) 17 (11.3%) 14 (8.9%) 24 (16.1%) ### Performance - GPA The distribution of GPA across URM status is similar for the entire sample and when only considering F14 and F16. When comparing the average GPA across URM status for all four semesters combined, Non URM students have on average 0.39 (95% CI 0.18-0.60, p<.001) higher GPA compared to URM students. Mean (SD) of GPA within URM classification urm_status mean sd Non-URM 1.896085 1.0901600 URM 1.508571 0.9174358 Unknown 1.666667 1.0534320 Tukeys Honest Significant Difference for pairwise differences in average GPA between URM classifications diff lwr upr p adj URM-Non-URM -0.39 -0.60 -0.18 0.00 Unknown-Non-URM -0.23 -0.53 0.07 0.17 Unknown-URM 0.16 -0.14 0.46 0.44 Next we examine how the average GPA for students in each URM classification changes with the different redesign models. The plot below shows the mean with 95% CI for each combination of URM status and course type. The confidence intervals are added to this plot to more easily identify which pairs of URM status within course type are different. If the CI bars do not overlap (e.g. Non-URM vs URM in the CRT) this is a significant difference. However, if the bars overlap that does not mean there is no difference. You cannot make a statistical conclusion if the bars overlap. A multiple linear model of GPA using course type and URM status and the interaction between the two as predictors was run to test for a significant decrease in the gap between URM and non-URM students from each of the two redesigned course types compared to the traditional class. This is called a difference of differences analysis. 
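The report does not show the formula for this model; a plausible form of the interaction (difference-of-differences) model, whose estimated gaps appear in the table that follows, is (ignoring the Unknown URM category for brevity)

$$\text{GPA} = \beta_0 + \beta_1\,\text{URM} + \beta_2\,\text{CRT} + \beta_3\,(\text{CRT+SI}) + \beta_4\,(\text{URM}\times\text{CRT}) + \beta_5\,(\text{URM}\times(\text{CRT+SI})) + \varepsilon,$$

so the URM vs non-URM gap is $\beta_1$ in the traditional course, $\beta_1+\beta_4$ under CRT, and $\beta_1+\beta_5$ under CRT+SI; the difference-of-differences tests are the tests of $\beta_4$ and $\beta_5$.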
Difference in GPA as URM - NonURM for each course type Course Type GPA gap CI p-value Traditional -0.38 (-0.76, -0.01) 0.0431 CRT -0.53 (-0.77, -0.29) <.0001 CRT+SI -0.20 (-0.56 , 0.16) 0.271 The table to the right shows that under the traditional course model the gap in GPA between URM and non URM was 0.38 (with URM being significantly lower), the gap widened to 0.53, with URM still being statistically significantly lower, but under the CRT + SI model of redesign, the gap decreased such that URM students do not have a significantly lower GPA compared to NonURM students. The decrease from 0.32 to 0.28 however is not statistically significant (p=0.48). ### Performance - DFW rate The failure rate for URM status has decreased as the level of intervention increased from Traditional to CRT to CRT + SI. N(%) of students passing and failing Chem 111 by course type and URM status Non-URM URM Unknown Course Type Pass DFW Pass DFW Pass DFW Traditional 51 (63%) 30 (37%) 18 (40.9%) 26 (59.1%) 17 (58.6%) 12 (41.4%) CRT 100 (74.1%) 35 (25.9%) 75 (53.2%) 66 (46.8%) 23 (74.2%) 8 (25.8%) CRT+SI 46 (70.8%) 19 (29.2%) 40 (66.7%) 20 (33.3%) 14 (58.3%) 10 (41.7%) Summary: The CRT+SI redesign is associated with an improvement in GPA and reduction in DFW rates in URM students. ## First Generation ### Course Type The proportion of students that are first generation has remained statistically constant for the past four semesters. Proportion of first generation students across terms firstgeneration F14 F15 S16 F16 N 83 (53.9%) 85 (56.7%) 79 (50.3%) 81 (54.4%) Y 71 (46.1%) 65 (43.3%) 78 (49.7%) 68 (45.6%) ### Performance - GPA When considering the entire four semesters, first generation students have on average 0.29 gpa points lower than non-first generation students. The difference is non-significant when only considering the F14 and F16 cohorts, indicating that these samples are not the same. Comparison of GPA across first generation status Full Sample F14 and F16 only First Generation Mean SD p-value Mean SD p-value N 1.84 1.03 0.00047 1.74 1.08 0.49 Y 1.55 1.01 1.66 1.02 Next we examine the average GPA for first generation and non-first generation students under the different redesign models. The mean profile plot with 95% CI below indicate that just flipping the classroom improves the overall GPA of the class, but primarily for non-first generation students only. However when adding SI to the course model the GPA for the first generation students rises to meet that of the non-first gen. This explains why the difference in average GPA is significantly different when considering the full sample, but not when considering the Traditional and CRT + SI cohorts only. ### Performance - DFW rate The failure rate for each group decreases in a similar pattern to the improvement in GPA. N(%) of students passing and failing Chem 111 by course type and first generation status Non-First Generation First Generation Course Type Pass DFW Pass DFW Traditional 47 (56.6%) 36 (43.4%) 39 (54.9%) 32 (45.1%) CRT 120 (73.2%) 44 (26.8%) 78 (54.5%) 65 (45.5%) CRT+SI 54 (66.7%) 27 (33.3%) 46 (67.6%) 22 (32.4%) For both groups the DFW rate went from 43-45% under the traditional course type, to 32-33% under redesign with SI. Summary: Both the course type and first generation status are associated with a difference in GPA and DFW rate, but the redesign type has different effects for first generation students, specifically the redesign was associated with an improvement in performance for non-first generation students only. 
Adding on SI to the redesigned class bridged that gap for the first generation students. ## Admissions Index (HS Eligibility) ### Course Type There is no significant difference in average admissions index across the three course types. Mean and SD of HSEI within class types admission_index 3.74 (0.37) 3.73 (0.33) 3.79 (0.35) Tukeys Honest Significant Difference for differences in average HSEI between all pairs of class types diff lwr upr p adj CRT-Traditional -0.01 -0.10 0.08 0.94 CRT+SI-Traditional 0.05 -0.05 0.16 0.43 CRT+SI-CRT 0.07 -0.02 0.15 0.15 ### Performance - GPA Admissions index does not have a strictly linear relationship with GPA, and that the relationship differs across the course types. For the Traditional and CRT + SI , there appears to be a flat relationship between HSEI and GPA until about HSEI=3.5, then a positive relationship for HSEI above 3.5. • The slopes below HSEI < 3.5 do not significantly differ based on course type - this is primarily due to the large standard error of the slope rather than the magnitude of the slope itself. • The GPA for students in the CRT + SI is moderately (p<.10) larger than the average GPA for students in the traditional classes (0.38, 95% CI -0.07, 0.83) A linear spline model was fit on the full sample using a knot at 3.5 to allow for a change in slope. No interaction terms were included since there was no significant difference in slope across CRT group. Estimate CI p-value Age 0.147 (-0.67,0.96) 0.7227 HSEI over 3.5 1.323 (0.33,2.31) 0.0089 CRT 0.146 (-0.06,0.35) 0.1589 CRT + SI 0.249 (0.02,0.48) 0.0369 • There no effect of HSEI on GPA when HSEI is below 3.5 (p=.7) • There is a strong effect of HSEI on GPA when HSEI is above 3.5 (p=.0089) • After controlling for HSEI there is still a significant improvement in GPA for students in the CRT + SI redesign compared to those in the traditional course. ### Course Type - DFW Students who pass CHEM 111 have on average a 0.2 GPA higher admissions index compared to students who receive a repeatable grade (3.62 vs 3.82, p<.0001). Mean and SD of admissions index grouped by whether or not they passed the class Mean SD p-value Pass 3.82 0.34 < 0.001 Fail 3.62 0.31 Logistic Regression of course type on the receipt of repeatable grade OR CI p-value HSEI 0.151 (0.08,0.27) <0.001 CRT 0.731 (0.46,1.17) 0.19 CRT + SI 0.736 (0.43,1.27) 0.27 Summary: • After controlling for admissions index, there is no direct effect of the course redesign on the likelihood a student will pass the course. • There is a positive relationship between admissions index and GPA, but only for a high admissions score. • The type of course redesign does not significantly change the relationship between admissions index and GPA. # Bivariate Summary Demographic factors that were associated with GPA or passing the class: • Age • Under-represented minority classification • First Generation status • Admissions index - but not linearly Formal student groups (EOP/Reach/TRIO/DSS) were unable to be analyzed due to either inconsistent data or small sample sizes. # Multivariable modeling ### Data preparation #### Collapsing and grouping varibles. • The number of college prep math, English, and lab science classes were grouped into more reasonable ranges such as under 6 units, 6 to 8, and over 10. Reference groups are always no units of college prep classes. 
• The college in which the student is currently a major of was collapsed into • Other: BSS, BUS, CME, HFA, UGE • ECC • NS: NS and AGR #### Restrict on age Since 95% of the students are below 25 years of age, the multivariable model will be fit on only students less than 25 years of age. #### Imputing missing values About 15% of the records in the data were missing admissions index, so for the purpose of machine learning modeling missing values are imputed using predictive mean matching via the MICE (multiple imputation using chained equations) package in R. Only one computed data set is used (instead of multiple) for exploratory purposes only. ## Variable Selection A Random Forests model is fit to the full data, with all available student and class level information available to the model. This allows the model to consider the effects of secondary, non-demographic variables such as whether or not the student has passed a GE Math class by the time they take Chem 111, the number of units they are currently taking (including Chemistry), and if they participated in high impact practices such as Summer Bridge. The plot below graphs the % increase in unexplained variance (mean squared error MSE) if that variable were excluded from the model. • A good model has a low amount of unexplained variance, so a high % increase means that variable plays an important role in predicting GPA. • For display purposes only those showing an increase of greater than .1% are displayed. Variables with high importance were then used as a starting point to build a multiple linear regression model to assess the effect of the course redesign after controlling for these other features. ## Linear Regression on GPA Prior to modeling, a single classification tree was built to determine the optimal location of knots for the known non-linear relationships between GPA, age and admissions index. Spline knots were created at HSEI = 3.8, and Age = 22 and included in the starting linear model. Further variable selection was conducted via backwards selection, removing the variables that were found to be not significant with GPA with a pvalue > 0.5. Certain demographics of great interest (i.e. First Generation status) were kept in the model regardless of their statistical significance level. Estimate CI p-value Term GPA 0.877 (0.81,0.94) <0.001 HSEI -0.005 (-0.28,0.27) 0.9712 HSEI > 3.8 0.723 (0.2,1.24) 0.0066 Level: Sophmore 0.007 (-0.11,0.13) 0.9133 Level: Junior 0.168 (-0.01,0.35) 0.0646 Level: Senior 0.119 (-0.13,0.37) 0.3552 CP Math units: [6-8) 0.704 (-0.52,1.92) 0.2576 CP Math units: [8-10) 0.988 (-0.22,2.2) 0.1101 CP Math units: 10+ 1.008 (-0.21,2.22) 0.1033 College: ECC 0.122 (-0.02,0.26) 0.0813 College: Other -0.228 (-0.37,-0.09) 0.0011 Male 0.295 (0.18,0.41) <0.001 GE Math Complete 0.280 (-0.01,0.57) 0.0583 Redesign: CRT 0.059 (-0.07,0.19) 0.3656 Redesign: CRT + SI 0.191 (0.04,0.34) 0.0112 GE English Complete -0.300 (-0.59,-0.02) 0.0391 URM 0.004 (-0.12,0.12) 0.9490 URM-Unknown 0.079 (-0.09,0.25) 0.3496 First Generation -0.052 (-0.16,0.06) 0.3592 Any College prep English -1.089 (-2.35,0.17) 0.0893 Summary After controlling for other significant and demographic characteristics: • Gender • URM status • First generation status • College prep English and math classes • Completion of GE English and Math classes • Term GPA Students in the CRT + SI class had significantly higher GPA compared to students in the traditional course (0.18, 95% CI (0.03-0.33), p=.0155). 
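For reference, and again assuming hinge coding for the spline terms (the report does not show the equation), the admissions-index part of the linear predictor in the table above reads roughly

$$\cdots - 0.005\,\text{HSEI} + 0.72\,(\text{HSEI}-3.8)_{+} + \cdots,$$

i.e. essentially no admissions-index effect below 3.8 and a positive slope of roughly $0.7$ above it, on top of the other covariates listed.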
## Logistic Regression on a repeat grade Improving GPA by 0.18 points is great, but this doesn’t tell us which students are improving. Last we’ll analyze the likelihood of a student passing the class. Again our goal is to see if the redesign alone, or in conjunction with SI moves the needle significantly on the lower performing students after controlling for other factors. Here we see a similar plot as before, that shows the variable importance. The difference again is that we are now predicting a student passing the class, not simply their GPA. Each variable can inform the model differently. The measure being plotted here is the decrease in accuracy (correctly predicting the outcome), if that variable is not included in the model. Again for demonstration purposes only variables that would decrease the accuracy by over 0.1% are displayed. • Unsurprisingly, how well a student does across all classes that semester is a strong predictor for whether or not the student passes Chemistry. This serves as an excellent reminder that classes don’t live in bubbles. If a student is doing poorly in one class, it is likely that they are struggling in their other classes as well. Using these variables as a starting point and subsequently doing backwards variable selection and testing some interaction terms we get to a final model to predict passing Chemistry 111 that includes: • term level GPA • URM status • Gender • First generation • College • Course Redesign type • GE course status (English/Math/Critical Thinking) • Entry level English/Math proficiency • Interaction between admissions index and Course redesign type The inclusion of this interaction term indicates that the type of course redesign changes the relationship between admissions index and the probability of a student passing the course. The typical method of presenting results from a Logistic regression is to present the Odds Ratio, comparing the odds of passing the class for “Group A” compared to “Group B”. Most people don’t think in terms of Odds. So what we present here is the distribution of the model predicted probability of passing the class, after controlling for all these other variables, and display this by course type. Mean and Median predicted probability of passing the class by class type crt mean median CRT 64.1% 75.8% CRT+SI 66.9% 79.3% • The likelihood of passing Chem 111 significantly increases with the level of redesign (direct numbers not shown here). • Under the Traditional course offering, on average a student would have a 56.4% chance of passing the course (according to this model) • With CRT, on average a student would have 64.1% chance of passing the course • With CRT + SI, half the students were predicted to have up to 66.9% chance of passing the course. This may seem like a moderate gain, only a 10.5% percentage point increase in predicted probability of passing the course from Traditional to CRT + SI, but remember this is after controlling for all other factors, such as admissions index. • This modeled probability reflects the unadjusted, raw, course (and coarse) level passing rates for the class. • This strengthens the justification that these improvements are likely due to the redesign itself, not something external. This is a good place to further demonstrate how the redesign affects students with different ranges of admissions index, differently. 
The ranges shown here reflect the distribution of admissions index for Chemistry students: the bottom 25% of students had below a 3.5 admissions index, the median is 3.7, the third quartile is 4.0, with a max of 4.6. • For the bottom half of the class, the course redesign improved the predicted probability of passing the course. • For the upper half, there was either no, or a negative effect of the redesign. • These comparisons were not tested directly for statistical significance, but the redesign did significantly change the relationship between HSEI and the likelihood of passing the course. This plot simply demonstrates in what direction and magnitude that change occurred. Summary • Course redesign significantly increased the probability a student will pass Chemistry 111. #### Notes and Comments • These models are an additive, each covariate is modeled as if it modifies GPA or the probability of passing the class in a linear manner. • This model is simple in that unless interactions between variables are explicitly specified, they are not considered. • This differs from the supervised learning model such as a Random Forest model, that allows for all variables to interact with each other. • The purpose of these models is classification, not assessing the impact of a covariate on an outcome. • The effect for all categorical variables represent a change from baseline or reference group, to the group listed. • This analysis only examined courses taught by the same instructor across four semsters to assess the effectiveness of a single individual course redesign. # Next steps / Needed followup • Enhancements to the current model • Control for whether or not the student is on the second time taking the class. • Create an SI specific model • using additional data from Spring 16, and other Chemistry classes other than Chem 111 and by other instructors. • Key informant interviews with top performing students to identify what they don’t like. This would allow us to examine the following questions: • Can we show that SI is related to the reduction of the GAP between first gen and/or URM students? • How does the DFW rate change as a function of SI attendance? • Do people self-select into SI based on a measurable characteristic?
## Multichannel Queueing Systems with Regenerative Input Flow

Theory of Probability and Its Applications. 2014. Vol. 58. No. 2. P. 174-192.

Afanasyeva L., Tkachenko A. Translator: A. Sergeev.

We consider the multichannel queueing system with nonidentical servers and regenerative input flow. The necessary and sufficient condition for ergodicity is established, and functional limit theorems for high and ultra-high load are proved. As a corollary, the ergodicity condition for queues with unreliable servers is obtained. Suggested approaches are used to prove the ergodic theorem for systems with limitations. We also consider the hierarchical networks of queueing systems
# Sequence of consecutive integers containing a relatively prime number. Is there a published result that says the following? Let $p$ be an prime number. Then for all integer $n>p$ there is at least one number in the consecutive sequence $2n,2n+1,2n+2,\ldots,2n+2p-1$ that is relatively prime to all prime numbers less than or equal to $p$. For example, if $p=5$ and $n=17$, the sequence $34,35,36,37,38,39,40,41,42,43$ has several numbers not divisible by $2,3,5$. Now this just needs to be true for any $n>p$ and any prime $p$. I have already asked my colleague who received his Ph.D. in Number Theory. He suggested I pose it to a wider audience, hence why I'm here. Any suggestions on results would be greatly appreciated. • I'd say we know something a bit stronger which is, the sequence of prime numbers contains arithmetic subsequences of arbitrary length which is due to Green and Tao (2004). Nov 12, 2017 at 23:26 • @Raito you are misunderstanding the question due to the ambiguous use of the word "any": sometimes it means some ("does this equation have any solution?") and sometimes it means all (for any $\varepsilon > 0\ldots$). When writing "for any integer $n > p$" I believe the intention is "for all integers $n > p$," in which case the Green--Tao theorem doesn't help. – KCd Nov 13, 2017 at 2:20 • @KCd is right. I apologize for ambiguous use of "any". I have edited it to make it more clear. Nov 13, 2017 at 12:18 The conjecture is false. Let $p_1, p_2, p_3, \dots$ be the primes. Define the primorial Jacobsthal function $h(k)$ to be the smallest length $\ell$ such that every consecutive sequence of integers of length $\ell$ contains a number relatively prime to $p_1, \dots p_k$. Your conjecture is equivalent to the assertion that $h(k) \le 2 p_k$. (The condition is invariant upon adding or subtracting multiples of $P = \prod_{i=1}^k p_i$, so the condition that $n > p$ is irrelevant, and for fixed $p$ the conjecture can be verified by a finite calculation; it suffices to examine consecutive sequences of integers between $0$ and $P - 1$ inclusive.) According to the table of values of $h(k)$ in the linked paper we have $$p_{14} = 43, h(14) = 90 > 86$$ so there is a consecutive sequence of $89$ integers none of which is relatively prime to the primes up to $43$. This is the smallest counterexample. You can show that $h(k) \ge 2p_{k-1}$ by constructing a consecutive sequence of integers of length $2p_{k-1} - 1$ none of which is relatively prime to $p_1, \dots p_k$. This can be done by considering the integers $n - a, n - a + 1, \dots n + a - 1, n + a$ where $n$ is a solution to the system of congruences $$n \equiv 0 \bmod p_1 \dots p_{k-2}$$ $$n-1 \equiv 0 \bmod p_{k-1}$$ $$n+1 \equiv 0 \bmod p_k$$ and $a = p_{k-1} - 1$. This construction is actually best possible for $p_k \le 19$; for example when $p_k = 19$ it produces the sequence $60044, \dots 60076$. It first stops being best possible when $p_k = 23$, where this construction gives a sequence of length $37$ but the longest sequence has length $39$ (namely $20332472, \dots 20332510$). For completeness' sake here is the (very sloppy, I'm sure) Python script I used to test the conjecture; it allowed me to compute enough values of $h(k)$ that I could look it up on the OEIS, and looking at the longest sequences led me to the construction above. Note that it computes $h(k) - 1$, the length of the longest sequence of integers not relatively prime to $p_1, \dots p_k$. 
from sympy import sieve
from math import gcd
from operator import mul
import functools as ft

def longest(p):
    # Primes up to p and their product P = p_1 * ... * p_k.
    primes = list(sieve.primerange(1, p+1))
    product = ft.reduce(mul, primes, 1)
    # It suffices to examine the residues 0 .. P-1.
    biglist = list(range(0, product))

    def isrelprime(n):
        # True when n is relatively prime to every prime up to p.
        for q in primes:
            if gcd(n, q) != 1:
                return False
        return True

    best = 0
    bestlen = 0
    curlen = 0
    for i in biglist:
        if not isrelprime(i):
            curlen += 1
        else:
            curlen = 0
        if curlen > bestlen:
            best = i
            bestlen = curlen

    print("\nPrime : " + str(p))
    print("Best : " + str(best))
    print("Length : " + str(bestlen))
    for i in range(0, bestlen):
        print(str(best - i) + " has gcd " + str(gcd(best - i, product)))

primes = list(sieve.primerange(1, 20))
for p in primes:
    longest(p)

And here's a little bit of extra code for computing the gcd of a particular sequence of consecutive integers with the product of the primes up to some prime.

def check(n, p, length):
    primes = list(sieve.primerange(1, p+1))
    product = ft.reduce(mul, primes, 1)
    for i in range(0, length):
        print(str(n - i) + " has gcd " + str(gcd(n - i, product)))

check(60076, 19, 33)
check(20332510, 23, 39)
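A tiny instance of the lower-bound construction above (this example is mine, not part of the answer): take $p_k = 7$, so $p_{k-1} = 5$. The congruences

$$n \equiv 0 \pmod{2\cdot 3},\qquad n - 1 \equiv 0 \pmod{5},\qquad n + 1 \equiv 0 \pmod{7}$$

are satisfied by $n = 6$, and with $a = p_{k-1}-1 = 4$ the run $2, 3, \dots, 10$ gives $2\cdot 5 - 1 = 9$ consecutive integers, each sharing a factor with $2\cdot3\cdot5\cdot7$.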
# Time derivative of flux We have a time and even "position" invariant vector field and a surface. If the surface is moving with constant velocity, is the flux through the surface should constant in time? Also, is there an easy to follow proof for the formula $$\frac{d}{dt} \iint_{S(t)}\overline{V}(\overline{r},t) \cdot d\overline{A}$$ • What do you mean a "position" invariant vector field and which vector field? The vector field you want to compute the flux or the velocity field? – achille hui Jun 15 '14 at 18:01 • The vector whose flux has to be computed is constant in time and is the same everywhere. – user42768 Jun 15 '14 at 18:29 • If $\vec{V}$ is constant in time and space, then $\int_{S(t)} \vec{V} \cdot d\vec{A}$ is identically zero by divergence theorem. – achille hui Jun 15 '14 at 18:32 • I am terribly sorry, I don't know why I said closed surface. The surface is not closed. I'm again very sorry. – user42768 Jun 15 '14 at 18:48 • Your formula would seem to be incomplete. Or, do you mean to ask, "is there some way to simplify this derivative..." Fwiw, I do think the answer is affirmative, the flux is constant if a rigid surface moves at constant velocity through a constant vector field (constant in both space and time here). – James S. Cook Jun 15 '14 at 19:16 Here is a proof, though I'm not sure it is easy to follow. In below derivation, we will use the adjective "nice" for anything sufficiently regular to make the argument works. If I'm not mistaken, continuous differentiable up to second order is sufficient. Let • $\vec{v}(\vec{x},t)$ be a "nice" velocity field. • $\vec{B}(\vec{x},t)$ be a "nice" vector field which we want to compute the flux. • $S$ be a compact "nice" surface with piecewise "nice" boundary in $\mathbb{R}^3$. • $S(t)$ be a family of surfaces generated by $S$ using the velocity field $\vec{v}$. More precisely, for each $\vec{x} \in S$, consider the initial value problem: $$\frac{d}{dt} \gamma_{\vec{x}}(t) = \vec{v}(t,\gamma_{\vec{x}}(t)),\quad t \in (-\epsilon, \epsilon ) \quad\text{ with initial condition }\quad \gamma_{\vec{x}}(0) = \vec{x}.$$ Define a function by $$\Gamma : S \times (-\epsilon, \epsilon ) \quad\mapsto\quad \Gamma(\vec{x},t) = \gamma_{\vec{x}}(t)\in\mathbb{R}^3$$ Standard theory of ODE tell us $\Gamma$ is a "nice" function (as least for small enough $\epsilon$). $S(t)$ is simply the image $\Gamma(S \times \{t\})$. As a geometric object, I'm not 100% sure $S(t)$ is "nice". For the sake to get a result, let's assume they are. The number we want to compute is following time derivative $$\mathscr{F}_{t} = \frac{d}{dt}\int_{S(t)} \vec{B}\cdot d\vec{A}$$ where $d\vec{A}$ is the area element associated with $S(t)$. To carry out the computation, let us assume $S$ is small enough to fit into a single coordinate chart $$\varphi : \mathbb{R}^2 \supset \mathcal{O} \ni (r,s) \quad\mapsto\quad \varphi(r,s) \in S \subset \mathbb{R^3}$$ We will further assume the domain $\mathcal{O}$ has piecewise "nice" boundary${}^{\color{blue}{[1]}}$. 
Using $\Gamma$, we can extend it to a function $$\vec{X} : \mathcal{O} \times (-\epsilon, \epsilon ) \ni (r, s, t) \quad\mapsto\quad \vec{X}(r,s,t) = \Gamma(\varphi(r,s),t)$$ For any function (or expression) $\psi$ that depends on $(r,s,t)$, we will adopt the shorthand $$\psi_r = \frac{\partial\psi}{\partial r},\quad \psi_s = \frac{\partial\psi}{\partial s},\quad\text{ and }\quad \psi_t = \frac{\partial\psi}{\partial t}$$ In terms of $\vec{X}$, the area element $d\vec{A}$ is simply $\vec{X}_r \times \vec{X}_s dr ds$ and \begin{align} \mathcal{F}_t &= \frac{d}{dt} \int_{\mathcal{O}} \vec{B}\cdot (\vec{X}_r \times \vec{X}_s) dr ds\\ &= \int_{\mathcal{O}}\left[ \frac{d\vec{B}}{dt} \cdot (\vec{X}_r \times \vec{X}_s) + \vec{B} \cdot \left( (\vec{X}_r)_t \times \vec{X}_s + \vec{X}_r \times (\vec{X}_s)_t \right) \right] dr ds\\ &= \int_{\mathcal{O}}\left[ \left(\vec{B}_t + \color{firebrick}{(\vec{v}\cdot\vec{\nabla})\vec{B}}\right) \cdot \color{firebrick}{(\vec{X}_r \times \vec{X}_s)} + \vec{B} \cdot\left( \vec{v}_r \times \vec{X}_s + \vec{X}_r \times \vec{v}_s\right) \right] dr ds \end{align} Notice the second term in the integrand $$\vec{B} \cdot\left( \vec{v}_r \times \vec{X}_s + \vec{X}_r \times \vec{v}_s\right) = \vec{B} \cdot \left( ( \vec{v} \times \vec{X}_s)_r - ( \vec{v} \times \vec{X}_r)_s \right)$$ can be rewritten as $$( \vec{B}\cdot( \vec{v} \times \vec{X}_s) )_r - ( \vec{B}\cdot( \vec{v} \times \vec{X}_r) )_s + \color{red}{((\vec{X}_r\cdot\vec{\nabla})B) \cdot (\vec{X}_s \times \vec{v})} + \color{red}{((\vec{X}_s\cdot\vec{\nabla})B) \cdot (\vec{v}\times\vec{X}_r)}$$ Now for any five vectors $\vec{a},\vec{b},\vec{c}, \vec{p}, \vec{q} \in \mathbb{R}^3$, we have an identity${}^{\color{blue}{[2]}}$: $$(\vec{p}\cdot\vec{q})(\vec{a}\cdot(\vec{b}\times\vec{c})) = (\vec{p}\cdot\vec{a})(\vec{q}\cdot(\vec{b}\times\vec{c})) + (\vec{p}\cdot\vec{b})(\vec{q}\cdot(\vec{c}\times\vec{a})) + (\vec{p}\cdot\vec{c})(\vec{q}\cdot(\vec{a}\times\vec{b}))$$ $\vec{\nabla} \otimes \vec{B}$ is a rank 2 tensor, we can decompose it as a sum of outer product of vectors $$\vec{\nabla} \otimes \vec{B} = \vec{p}_1 \otimes \vec{q}_1 + \vec{p}_2 \otimes \vec{q}_2 + \cdots$$ If we substitute $\vec{a},\vec{b},\vec{c}$ in above identity by $\vec{v}$, $\vec{X}_r$ and $\vec{X}_s$ respectively, it is not hard to deduce: $$\color{firebrick}{((\vec{v}\cdot\vec{\nabla})\vec{B}) \cdot (\vec{X}_r \times \vec{X}_s)} + \color{red}{((\vec{X}_r\cdot\vec{\nabla})B) \cdot (\vec{X}_s \times \vec{v}) + ((\vec{X}_s\cdot\vec{\nabla})B) \cdot (\vec{v}\times\vec{X}_r)} = \color{green}{( \vec{\nabla}\cdot \vec{B}) (\vec{v}\cdot(\vec{X}_r \times \vec{X}_s)}.$$ Substitute this back into the integrand for $\mathcal{F}_t$, we get $$\mathcal{F}_t = \int_{\mathcal{O}} \left[ \left((\vec{B}_t + \color{green}{(\vec{\nabla}\cdot \vec{B}) \vec{v}} )\cdot \color{green}{( \vec{X}_r \times \vec{X}_s )}\right)_{\color{blue}{\verb/I/}} + \left( ( \vec{B}\cdot( \vec{v} \times \vec{X}_s) )_r - ( \vec{B}\cdot( \vec{v} \times \vec{X}_r) )_s \right)_{\color{blue}{\verb/II/}} \right] dr ds\\$$ The integrand split into two pieces. It is obvious what Piece $\color{blue}{\verb/I/}$ is. For piece $\color{blue}{\verb/II/}$, we can transform it first to a line integral along $\partial\mathcal{O}$ using the classical Green's theorem in $\mathbb{R}^2$. We then re-express it as a line integral along $\partial S(t)$ in $\mathbb{R}^3$. 
Finally, we convert it back to a surface integral over $S(t)$ using Kevin-Stokes theorem: \begin{align} \text{Piece }\color{blue}{\verb/I/} &= \int_{S(t)}\left( \vec{B}_t + (\vec{\nabla}\cdot\vec{B})\vec{v}\right)\cdot d\vec{A}\\ \\ \text{Piece }\color{blue}{\verb/II/} &= \int_{\partial S(t)} ( \vec{B}\times\vec{v} ) \cdot d\vec{X} =\int_{S(t)} \vec{\nabla} \times (\vec{B} \times \vec{v}) \cdot d\vec{A} \end{align} Combine this, we get $$\mathcal{F}_t = \int_{S(t)} \left( \vec{B}_t + (\vec{\nabla}\cdot\vec{B})\vec{v} - \vec{\nabla} \times (\vec{v} \times \vec{B})\right)\cdot d\vec{A}\tag{*1}$$ When $S$ is too large to fit in a single coordinate chart, we can split $S$ into finite many pieces that fit. We then apply $(*1)$ to the individual piece and sum their contributions. At the end, $(*1)$ continue to work... Notes • $\color{blue}{[1]}$ $\partial\mathcal{O}$ need to be nice enough to apply classical Green's theorem. • $\color{blue}{[2]}$ When $\vec{a},\vec{b},\vec{c}$ are linear independent, they form a basis of $\mathbb{R}^3$. The correspond dual basis consists of $\frac{\vec{b} \times \vec{c}}{\Delta}$, $\frac{\vec{c} \times \vec{a}}{\Delta}$, and $\frac{\vec{a} \times \vec{b}}{\Delta}$ where $\Delta = \vec{a}\cdot(\vec{b}\times\vec{c})$. The identity in main text is nothing special but a formula of the dot product for this particular pair of basis/dual basis. • Let me see if I understood correctly. First, we find a parametric representation of the initial surface S, $\mathcal{O}$. And we denote the body that is formed by the surface after it has moved for "t time" by S(t). Why is the derivative of the flux through S(t) equal to the difference between the flux through the surface "at moment t" and the surface "at moment 0" over t? Maybe I misunderstood something. Thank you! – user42768 Jun 16 '14 at 6:26 • No, the $t = 0$ is nothing special. It is only used to pick an initial surface to generate parametrization for $S(t)$ at each $t \in (-\epsilon,\epsilon)$. There are no "differences" of any sort in the derivation. We don't compare flux at time $t$ and $0$ at all. Everything are derivatives and all expression I write down happens at a time $t$ not necessary equal to $0$. – achille hui Jun 16 '14 at 6:55 • But this is how I understand this derivative: First we have a surface S and we now the flux through it. After a very short time, let's say dt, S has moved according to v. We now compute the flux through it. As a final step, we compute the flux difference over dt. In your proof, S(t) represents the volume "described by the moving surface", or the surface at moment t? Again, thank you for your time. – user42768 Jun 16 '14 at 7:08 • $S(t_1)$ is the surface $\Gamma( \mathcal{O} \times \{t_1\} )$, not a body $V(t_1,t_2) = \Gamma( \mathcal{O} \times [t_1,t_2] )$. The way you described is the physicist's way of deriving the formula. In that case, the boundary of $V(t,t+dt)$ consists of 3 surfaces, $S(t)$, $S(t+dt)$ and $\Gamma(\partial\mathcal{O} \times [t,t+dt] )$. It is the last piece that will give you the curl piece $\vec{\nabla} \times ( \vec{v} \times \vec{B} )$. The physicists' way will be more easier to understand but has more logical loop holes. 
– achille hui Jun 16 '14 at 7:22

• So, looking from a physicist's point of view, the first term on the right-hand side is caused by the change in the field, the second term is the divergence of the field "inside" that volume, and the last one accounts for the flux through the surface that "links" the boundaries of the surfaces at those two different moments in time? I am immensely appreciative of your understanding. – user42768 Jun 16 '14 at 7:28
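Coming back to the first part of the question (this remark is mine, not part of the answer above): if $\vec{B}$ is constant in both space and time and the surface is rigidly translated with a constant velocity $\vec{v}$, then

$$\vec{B}_t = 0,\qquad \vec{\nabla}\cdot\vec{B} = 0,\qquad \vec{\nabla}\times(\vec{v}\times\vec{B}) = 0,$$

the last because $\vec{v}\times\vec{B}$ is itself a constant vector field. Formula $(*1)$ then gives $\mathcal{F}_t = 0$, so the flux through the moving surface is indeed constant in time.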
Publication Title Orientational diffusion of methyl groups in crystalline $CH_{3}F:$ an infrared study Author Abstract The mid-infrared (4000-400 cm(-1)) spectra of polycrystalline methyl fluoride have been investigated at temperatures between 8 and 85 K. Least-squares band fitting was performed for all fundamental bands belonging to symmetric and asymmetric vibrations of the methyl group, and the contributions of inhomogeneous broadening and vibrational and orientational relaxation to the bandwidths have been evaluated. From the temperature dependence of the bandwidths, using the modified Rakov approach, the activation enthalpy, Delta H*, and activation entropy, Delta S*, for orientational diffusion of the methyl groups have been determined to be Delta H* = 0.85(7) kJ mol(-1) and Delta S* = -7(1) J mol(-1) K-1. Language English Source (journal) The journal of physical chemistry : A : molecules, spectroscopy, kinetics, environment and general theory. - Washington, D.C., 1997, currens The journal of physical chemistry : B : materials, surfaces, interfaces and biophysical. - Washington, D.C., 1997, currens Publication Washington, D.C. : 1998 ISSN 1089-5639 [print] 1520-5215 [online] Volume/pages 102(1998), p. 6493-6498 ISI 000075786400009 Full text (Publisher's DOI) UAntwerpen Faculty/Department Research group Publication type Subject Affiliation Publications with a UAntwerp address
## Glee: Special Education
Part 2 of 2: Rhythm & Boos Inevitably, when I get up on my high horse, I end up being knocked down. Such is the [...]
## Glee: Grilled Cheesus
Part 1: Camembert/ComeonBurt So, here's what you missed on Glee: Kurt's dad has a heart attack, which is the cue for the New Directions to [...]
## Glee: Sectionals
Part 1 of 3: New Directions for Dummies So, here's what you missed on Glee: Emma puts her wedding on hold so the glee club [...]
## Glee: Sweet Dreams
Really and truly, we could have skipped this episode and reviewed the highlights (Rachel auditions for Funny Girl, Finn is in fake college, Marley is [...]
## Glee: Hairography
Part 1 of 3: Papa(s) Don't Preach So, here's what you missed on Glee: Sue's in cahoots with another show choir, using hairography to succeed [...]
## Glee: Hairography
Part 2 of 3: Hairstory Sometimes, the New Directions can twirl and dip like One Direction. Other times, they need a little help from Mr [...]
## Glee: Ballad
Part 3 of 3: My Choral Romance Artie smoulders this episode, and that's all he does. What's your sweater vest made from? "Your mom's chest [...]
## Glee: Wheels
Part 1 of 3: Wicked So, here's what you missed on Glee: the glee club has a bake sale to raise money for a handicap [...]
## Glee: Wheels
Part 2 of 3: Les Miserables If your mother ever told you no nice boy likes a bitch, she'd be right. All the hot boys [...]
# an example of regular ring with nilpotent elements

A regular local ring is a domain. But in general, a regular ring is not necessarily a domain, so you can find regular rings with nilpotent elements. I am unable to construct an example of a pair $(A, I)$ where $A$ is a regular ring, $I$ is a nilpotent ideal, and $A/I$ is regular. Any help would be appreciated.

## 1 Answer

There is none. A regular ring is reduced:

• a ring is reduced if and only if all its localizations at prime ideals are reduced;
• the localization at a prime ideal of a regular ring is a regular local ring (by definition), hence reduced.

However, a regular ring can contain zero-divisors. For example, the ring $k[x]\times k[y]$, representing the disjoint union of two affine lines, is not a domain but is regular. And by the way, if $I$ is a nilpotent ideal, $A/I$ need not contain nilpotent elements, and in fact it would have fewer nilpotent elements than $A$.

- "the localization at a prime ideal of a local ring is regular (definition) hence reduced." there's a typo here: you mean that the localization of a regular ring is still regular. –  S123 Feb 28 '12 at 14:07
Corrected, thank you! –  Lierre Feb 28 '12 at 15:16
# How much down do I need to go camping in outer space?

I just recently finished making my own sleeping bag. So, let's say you are like me and you want one as well. One of the first things you will want to do is gather your materials. Although it's a lot more interesting picking out the fabric to use for the shell, the major material in terms of amount and cost is the insulation. Since you are building your own sleeping bag, I am just going to assume that you will be using down feathers as your choice of insulation. I cannot actually imagine why anyone would go through the trouble of making their own sleeping bag if they plan on using polyester filling as insulation. If you are that type of person, I suggest you just go purchase one of the ubiquitous synthetic-material bags on the market. If you still want to do this, you might as well read this post as well, since it will at least give you some idea of what you need to think about for how much insulation.

The function of the sleeping bag is to provide a long path of low-thermal-conductivity material between you, the thing being insulated, and the outside, the thing that acts as a heat sink. The way it does this is by trapping a large amount of air and changing the mode of heat transfer from convection to conduction. Air is a good thermal conductor when it is free to transfer heat via convection and a very good insulator when it is limited to conduction only. This is why sleeping bags can be so light; you are utilizing the air surrounding you as a thermal insulator by limiting its mobility. Down, it turns out, is very good at limiting convection in air without adding much additional weight. It also has the very desirable quality of being a resilient material that resists permanent deformation. These two qualities together make it ideal for outdoor activities. And, of importance to the McChoppin' mindset, it's an old-timey material. You know, something that Shackleton himself could have used. 1

In the sleeping bag business world, the lowest temperature that can comfortably be tolerated in the sleeping bag is normally equated to "loft." Loft is the path distance between you and the outside, or, in other words, the thickness of the sleeping bag from the inside to the outside. The total thickness of the bag should be twice its loft. However, in determining things for making your sleeping bag, in particular how much down to purchase, we care more about equating the temperature to the weight of the down that we will need.

It turns out not all down is equal. It's actually all equal from a thermal-conductivity standpoint; down as a fill insulation is mostly air and therefore essentially has the thermal insulation properties of air. The major difference in down from batch to batch is the density. The inverse of down's density is expressed by down feather merchants as "fill power." Because it is the inverse, a higher fill power is a lower density. Numerically, fill power is equal to the number of cubic inches 1 ounce of down occupies. I imagine they chose inverse density as the metric because, for the purpose of purchasing things, a larger number means better to most people, and therefore they can sell 900 fill power down for more than 700 fill power, because most people would rather have less dense down. The fill power rating for most down sleeping bags seems to range from about 650 to 900 fill power.2 As I said, the only major difference is density. 
And since I'm not particularly concerned about the weight of my bag, I'm not terribly concerned about the fill power, other than cost. However, there does not seem to be much of a market out there for down feathers that is readily accessible directly to the individual sleeping bag maker. During my search I found myself pretty much limited to the 900 fill power end. And keep in mind that you pay for down by the ounce, but how much you need depends on the type of down you buy. Therefore the ratio of the fill powers directly relates to the ratio of the effective prices of the down. In other words, let's say that you find 900 fill power down that sells for 30 dukes for 3 ounces. You also find 700 fill power down that sells for 25 dukes. If you don't care about the weight, which down should you purchase to reduce your cost? It turns out that you should go with the more expensive 900 fill power down.3 The 700 fill power down would have to cost less than $23.33 if it is going to be more economical, assuming everything else is equal.

Alright, I'm going to assume that your pricing of the market is going to lead you to the same conclusion that it led me to: 900 fill power down.4 So, how much do we need? Well, I've gone through the little bit of trouble to get this equation that relates expected outside temperature to the amount of down needed. It is $W = -\frac{1}{3} \cdot T +19$, where T is the temperature in degrees Fahrenheit you plan on taking your bag to in the outdoors and W is the weight of 900 fill down, in ounces, that you will need to achieve this. A few things to recognize immediately: at zero degrees you need 19 ounces of down. For every degree above that you need one third of an ounce less. For every degree below that you need one third of an ounce more. This equation also says that you need no down in your sleeping bag if you plan on going no colder than about 60 degrees Fahrenheit. That seems reasonable, as at 60 degrees I can sleep clothed with little more than a light blanket.

So, that is pretty much all you need to know. Pick a temperature, plug it in, and that is how much 900 fill power down you need to buy to start making a sleeping bag. Go nuts. But there are a few more things you might like to know about the equation and what went into it.

First, if you don't get 900 fill power down you can still use the equation. Just calculate the amount of down you would need if you were getting 900 fill down and multiply it by the ratio of the fill powers. So, if you needed 22 ounces of 900 fill down you will need 28 ounces of 700 fill power down. Second, I made this calculation based upon my size: six feet four inches. If you are a lot different in size, then you will need more or less down. By "a lot" I mean that, for this rough calculation, anyone 4 or more inches shorter than me in height will need at least 5% less down. A similar thing goes for your girth: I'm fairly skinny. Also, keep in mind this is a rough calculation. The exact amount of down you will need is related to how much space you need to fill in your sleeping bag. Due to differences in construction and actual down quality you may end up needing a little more or a little less.

All right, all those caveats aside, you're probably wondering where I got that equation from: using freely available data off the intertubes I did a linear fit to all the data points and got the equation. How good is the linear fit? What was the data? The data I gathered from Peter Hutchinson Designs, Feather Friends, Western Mountaineering, and Thru-Hiker. 
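For what it's worth, the estimate above is easy to wrap in a throwaway function (a rough sketch only; the function name is made up, and it simply encodes the linear fit for 900 fill power down plus the fill-power rescaling described above):

```python
def down_needed(temp_f, fill_power=900):
    """Rough ounces of down for a target temperature (deg F).

    Encodes the linear fit W = -T/3 + 19 for 900 fill power down,
    rescaled by 900/fill_power for denser down. Rough estimate only.
    """
    w900 = max(-temp_f / 3.0 + 19.0, 0.0)   # no down needed above ~57-60 F
    return w900 * 900.0 / fill_power

print(down_needed(0))         # 19.0 oz of 900 fill at 0 F
print(down_needed(-9))        # 22.0 oz of 900 fill at -9 F
print(down_needed(-9, 700))   # ~28.3 oz of 700 fill, matching the example above
print(down_needed(-460))      # ~172 oz -- the "outer space" figure below
```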
The PHD results are easy: they already give 900 fill power down in terms of the weight needed for a specific temperature. For TH, they give you a loft for a desired temperature, so I just calculated the total volume given the loft and my dimensions and divided by the fill power. Because WM and FF don't give much information about the down that they use, I had to assume a fill power and correct for it to get everything into 900 fill power. Despite the fact that both WM and FF seem to hint at using a higher fill power, an assumed fill power of 750 seems to get the data in line with the PHD and TH results. FF and WM still seem to be pretty conservative with the temperature. But they all agree that no matter which down you choose, you don't need any at about 60 degrees Fahrenheit.

Personally, I chose to weight the PHD and TH results more heavily for a couple of reasons. First, they were the first two data sets I looked at: blatant human bias. Second, they both went for a more conservative approach with the amount of down needed: I wanted to buy less down. And finally, I suspect I will need less down than the market that FF and WM are going after: total hubris. So, keep in mind that my equation is likely to be on the absolute low end of the amount of down that you'll need. I don't recommend getting less than it tells you. If you think you might want a little extra thermal protection, get more down.

All right, so I was going to talk about how well this simple model for projecting the amount of down you need compares to an actual (but still relatively simple) thermal model of you, the sleeping bag, and the outdoors, but this post is too long already. For example, my equation predicts a number for sleeping in outer space. Which, for the sake of this discussion, is 0 Kelvin,5 or -460 degrees Fahrenheit. It tells you you need some 170 ounces of 900 fill power down. At 30 dollars for 3 ounces, that equates to a cost of 1645 dukes. Down is by far the highest cost involved in a home-made sleeping bag, so we can use that as an estimate for the total cost of the bag. Western Mountaineering's best bag only goes to -40 degrees F, but costs half as much! By the way, it turns out that that much down (170 ounces) would offer 3 feet of loft. I'd be in a sleeping bag that had a diameter as great as I am tall. But would that really work? I suspect not. But we will check it out in a future post.

1 Shackleton, if my memory serves me correctly, actually used sleeping bags stuffed with reindeer fur. An intriguing idea, one that I'd be tempted to try, if it weren't for the fact that during their 800 mile open-ocean voyage to South Georgia the bags got wet and started to rot. Shackleton and his men later complained that everything was covered in hair and that it often found its way into their mouths. It does not sound like a pleasant experience.

2 I will ignore the "+" part of the description of down because I feel that it is even more of a marketing ploy; if 650+ down were really significantly different from 650 fill power, why not just label it 700?

3 Unless you want a sleeping bag that needs less than 3 ounces of down insulation at 900 fill power. But I'm imagining that you aren't planning on making sleeping bags for your pet canary; in the quantities that you will likely need, we can ignore the fact that the economics depend slightly on how much down you are getting.

4 Unfortunately there just don't seem to be a whole lot of other reputable options. If you can find any I would be interested to know of them. 
5 Also, never mind that the major component of heat loss in outer space is radiative heat loss and not convection and conduction, which is what your sleeping bag can protect you against. If you want, you can imagine that you're in an air-filled bubble at standard pressure floating around in outer space and you need a sleeping bag to keep warm. And please don't bring up any other objections; after all, I am a physicist.

1. Dan The Man Very creative and informative. I like where your head is at. Keep up the good work.
2. wow, great, I was wondering
# POJ - 2989 All Friends (maximal cliques)

Time Limit: 1000MS Memory Limit: 65536KB 64bit IO Format: %lld & %llu

Description

Sociologists are interested in the phenomenon of "friendship". To study this property, they analyze various groups of people. For each two persons in such a group they determine whether they are friends (it is assumed that this relation is symmetric). The sociologists are mostly interested in the sets of friends. The set S of people is a set of friends if every two persons in S are friends. However, studying the sets of friends turns out to be quite complicated, since there are too many such sets. Therefore, they concentrate just on the maximal sets of friends. A set of friends S is maximal if every person that does not belong to S is not a friend with someone in S. Your task is to determine the number of maximal sets of friends in each group. In case this number exceeds 1 000, you just need to report this -- such a group is too complicated to study.

Input

The input consists of several instances, separated by single empty lines. The first line of each instance consists of two integers 1 ≤ n ≤ 128 and m -- the number of persons in the group and the number of friendship relations. Each of the m following lines consists of two integers ai and bi (1 ≤ ai, bi ≤ n). This means that persons ai and bi (ai ≠ bi) are friends. Each such relationship is described at most once.

Output

The output for each instance consists of a single line containing the number of maximal sets of friends in the described group, or the string "Too many maximal sets of friends." in case this number is greater than 1 000.

Sample Input
5 4
1 2
3 4
2 3
4 5

Sample Output
4

BronKerbosch(All, Some, None):
    if Some and None are both empty:
        report All as a maximal clique   // nothing left to add and nothing excluded remains, so count this answer
    for each vertex v in Some:           // enumerate each vertex in Some
        BronKerbosch(All ⋃ {v}, Some ⋂ N(v), None ⋂ N(v))   // add v to All; only friends of v stay candidates, and only friends of v in None still matter
        Some := Some - {v}               // v has been explored: remove it from Some and move it to None
        None := None ⋃ {v}

void dfs(int d, int an, int sn, int nn){
    if(!sn && !nn) S++;          // nothing left to add or to exclude: the vertices in "all" form a maximal clique
    int u = some[d][1];
    For(i, 1, sn){
        int v = some[d][i];
        if(G[u][v]) continue;    // pivot optimization: only branch on vertices not adjacent to the pivot u; cliques through u's neighbours are found in other branches
        int tsn = 0, tnn = 0;
        For(j, 1, an) all[d+1][j] = all[d][j];
        all[d+1][an+1] = v;      // put v into "all"
        For(j, 1, sn) if(G[v][some[d][j]]) some[d+1][++tsn] = some[d][j];
        For(j, 1, nn) if(G[v][none[d][j]]) none[d+1][++tnn] = none[d][j];
        dfs(d+1, an+1, tsn, tnn);
        some[d][i] = 0; none[d][++nn] = v;   // remove v from "some" and put it into "none" so the same clique is not counted twice
    }
}

#include<cstdio>
#include<cstdlib>
#include<cstring>
#include<cmath>
#include<iostream>
#include<algorithm>
#include<queue>
#include<map>
using namespace std;
#define MAXN (150+5)
#define INF 0x3f3f3f3f
#define Set(a, v) memset(a, v, sizeof(a))
#define For(i, a, b) for(int i = (a); i <= (int)(b); i++)
bool G[MAXN][MAXN];
int n, m, S, all[MAXN][MAXN], some[MAXN][MAXN], none[MAXN][MAXN];
void init(){
    Set(G, 0);
}
void dfs(int d, int an, int sn, int nn){
    if(!sn && !nn) S++;
    if(S > 1000) return;
    int u = some[d][1];
    For(i, 1, sn){
        int v = some[d][i];
        if(G[u][v]) continue;
        int tsn = 0, tnn = 0;
        For(j, 1, an) all[d+1][j] = all[d][j];
        all[d+1][an+1] = v;
        For(j, 1, sn) if(G[v][some[d][j]]) some[d+1][++tsn] = some[d][j];
        For(j, 1, nn) if(G[v][none[d][j]]) none[d+1][++tnn] = none[d][j];
        dfs(d+1, an+1, tsn, tnn);
        some[d][i] = 0; none[d][++nn] = v;
    }
}
int main(){
    freopen("test.in", "r", stdin);
    freopen("test.out", "w", stdout);
    while(scanf("%d%d", &n, &m) != EOF){
        init();
        For(i, 1, m){
            int u, v;
            scanf("%d%d", &u, &v);
            G[u][v] = G[v][u] = true; 
        }
        For(i, 1, n) some[0][i] = i;
        S = 0;
        dfs(0, 0, n, 0);
        if(S <= 1000) printf("%d\n", S);
        else printf("Too many maximal sets of friends.\n");
    }
    return 0;
}
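As a quick cross-check of the sample case, the same count comes out of networkx's built-in maximal-clique enumerator (assuming networkx is available; it is not used in the submitted solution):

```python
import networkx as nx

# Sample input: 5 people, friendships (1,2), (3,4), (2,3), (4,5)
G = nx.Graph([(1, 2), (3, 4), (2, 3), (4, 5)])

# find_cliques enumerates maximal cliques (Bron-Kerbosch with pivoting)
cliques = list(nx.find_cliques(G))
print(len(cliques))  # 4, matching the sample output
```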
# Official web site of Takeuchi Lab, Univ. Tokyo ## KPZ universality in growing interfaces of liquid-crystal turbulence ### Growing interfaces of liquid-crystal turbulence and KPZ Physics of critical phenomena at equilibrium has been established on the basis of various experiments, exact solutions and other approaches, which led to the invention of fundamental frameworks such as the renormalization group theory and the conformal field theory. One may ask then: does a similarly profound physics exist for systems driven out of equilibrium? No one knows the answer yet, but recent theoretical developments on the Kardar-Parisi-Zhang (KPZ) universality class, the fundamental class describing random growth processes [A], have established an example in which various statistical properties can be derived exactly, despite being far from equilibrium [1,B]. We are studying this problem experimentally, focusing on growing interfaces of turbulent domains in electrically driven liquid crystal convection. This turbulent state called DSM2 consists of densely entangled topological defects, and can be generated by shooting ultraviolet laser pulses under a sufficiently high voltage applied to the system. By changing the beam shape of the laser, we can generate both circular and flat interfaces (Movies 1 and 2). ### Universal interface fluctuations indicate a subclass structure of KPZ Measuring statistical properties of interface fluctuations, we obtain the same set of the KPZ scaling exponents for both cases, but nevertheless, fluctuations of the circular and flat interfaces are found to exhibit distinct distribution and correlation functions. The obtained functions agree with the results previously derived for solvable models of the KPZ class [1,B]; in particular, the distribution function for the circular and flat interfaces is shown to be the largest-eigenvalue distribution for GUE and GOE random matrices, respectively, called the Tracy-Widom distribution in random matrix theory (see Fig. 1). This result implies that universality arises even in detailed statistical properties such as the distribution and correlation functions. Moreover, the KPZ class actually splits into a few universality "subclasses" according to the global geometry of interfaces (or the initial condition), characterized by different distribution and correlation functions: here we identified the circular (or curved) and flat subclasses [2,3]. Analytical developments on the KPZ class are remarkable, yet there do exist important statistical properties that remain unsolved, such as the time correlation. We are determining such unsolved properties experimentally, by means of careful analysis of the growing liquid-crystal turbulence. In particular, on the time correlation, we found even qualitative differences between the circular and flat interfaces, regarding symmetry between positive and negative fluctuations and persistence of correlation [3]. For the circular interfaces, we showed that correlation persists forever in time. This anomalous result was subsequently confirmed by theory [4], and eventually a mathematical theorem was established [C]. ### Designing the initial condition arbitrarily by laser holographic technique The KPZ universal fluctuations are classified into universality subclasses, which are determined mainly by the initial condition. This led us to extend our experimental system so that the initial condition can be designed arbitrarily, which we realized by adapting a laser holographic technique (Movie 3) [5]. 
This new system allows us to study KPZ universal fluctuations for various geometries, which were hitherto inaccessible. For example, from a ring of a finite radius, we have a pair of interfaces growing outward and inward from the initial ring. We found, both experimentally and numerically, that the outgrowing interfaces show crossover from the flat subclass to the circular subclass, and that the ingrowing interfaces behave like flat interfaces, until they eventually deviate from the KPZ class and collapse near the center of the ring [5]. Theoretically, a variational formula was proposed as a useful tool to deal with KPZ fluctuations for general initial conditions [D], which has indeed been used for many analytical studies. We proposed a method to evaluate the variational formula numerically, and showed that it can be used to reproduce our experimental and numerical results quantitatively [5]. This underlines the practical importance of the variational formula, which can predict and account for fluctuation properties of interfaces for general initial conditions. ### Towards the detection of the stationary KPZ subclass It is theoretically known that the circular and flat subclasses are not the only ones that exist: indeed, the stationary subclass, which describes the stationary state of the KPZ-class interfaces, is also well established [1,B]. This subclass is however practically inaccessible in experiments, because it takes infinitely long time to reach it. To circumvent this difficulty, we studied, instead of the height $h(x,t)$, the height difference between two times, $\Delta h(x,\Delta t, t_0) = h(x, t_0+\Delta t) - h(x, t_0)$. Thereby we found crossover between the flat and stationary subclasses, first numerically at high precision (Fig. 2), then we identified indications of the same crossover in the experiment [6]. ### KPZ half-space problem If an interface ends at a boundary, the boundary condition is expected to affect the fluctuation properties. This is well studied by the name of the KPZ half-space problem, which deals with interfaces growing in a semi-infinite region $x \geq 0$ with a given boundary condition at $x=0$, and several universality subclasses have been established. In particular, when a circular interface grows with its slope at $x=0$ fixed at a large positive value, the GSE Tracy-Widom distribution from random matrix theory is predicted. However, experimentally, it is in general difficult to impose a well-defined boundary condition for growing interfaces. Experiments on the KPZ half-space problem seem to be therefore a mission impossible. To overcome this difficulty, we propose the "biregional" interface growth problem, in which the interface grows at different speeds in the left and right regions [7] (Movie 4). Although experimental tests of the GSE Tracy-Widom distribution are on the way, we showed numerically that our biregional setting indeed results in the universal distributions predicted for the KPZ half-space problem [7]. The biregional interface growth is therefore a useful approach to study the KPZ half-space problem experimentally and numerically. ### References (Takeuchi Lab) [1] For an introductory review on modern aspects of KPZ: K. A. Takeuchi, Physica A 504, 77 (2018) [web, preprint]. [2] K. A. Takeuchi and M. Sano, Phys. Rev. Lett. 104, 230601 (2010) [pdf, web]; K. A. Takeuchi, M. Sano, T. Sasamoto, and H. Spohn, Sci. Rep. 1, 34 (2011) [pdf, web]. [3] K. A. Takeuchi and M. Sano, J. Stat. Phys. 147, 853 (2012) [web, preprint]. [4] J. De Nardis, P. 
Le Doussal, and K. A. Takeuchi, Phys. Rev. Lett. 118, 125701 (2017) [pdf, web]. [5] Y. T. Fukai and K. A. Takeuchi, Phys. Rev. Lett. 119, 030602 (2017) [pdf, web]; Phys. Rev. Lett. 124, 060601 (2020) [pdf, web]. [6] K. A. Takeuchi, Phys. Rev. Lett. 110, 210604 (2013) [pdf, web]. [7] Y. Ito and K. A. Takeuchi, Phys. Rev. E 97, 040103(R) (2018) [pdf, web]. (other groups) [A] A.-L. Barabási and H. E. Stanley, Fractal Concepts in Surface Growth (Cambridge Univ. Press, Cambridge, 1995). [B] Other reviews on exact studies on KPZ: T. Kriecherbauer and J. Krug, J. Phys. A 43, 403001 (2010) [web]; I. Corwin, Random Matrices Theory Appl. 1, 1130001 (2012) [web]; H. Spohn, Lecture Notes of the Les Houches Summer School, 104, 177-227 (2017) [web, preprint]. [C] K. Johansson, Probab. Theory Relat. Fields 175, 849 (2019) [web]. [D] J. Quastel and D. Remenik, Springer Proceedings in Mathematics & Statistics, 69, 121 (2014) [web, preprint].
### Main contributors
K. A. Takeuchi, Y. T. Fukai, Y. Ito, T. Iwatsuka
# Homework Help: Small Poisson Question

1. Dec 5, 2006

### Hartogsohn26

Hello. I just found this forum — it was recommended to me by a person I know — and I have a small probability question regarding the Poisson distribution.

Let $(T,U)$ be a two-dimensional discrete stochastic vector with the probability function $p_{T,U}$ given by
$$P(T=t, U = u) = \begin{cases} \frac{1}{3} \cdot e^{-\lambda} \dfrac{\lambda^u}{u!} & t \in \{-1,0,1\} \ \text{and} \ u \in \{0, 1, \ldots\} \\ 0 & \text{elsewhere,} \end{cases}$$
where $\lambda > 0$.

1) Describe the support $\mathrm{supp}(p_{T,U})$.

Solution (1): the support $\mathrm{supp}(p_{T,U})$ is the set of values for which the real-valued probability function $P$ produces non-negative values. Therefore $\mathrm{supp}(p_{T,U}) = \{0,1, \ldots \}$.

2) Show that the probability functions $p_T$ and $p_U$ for $T$ and $U$ are
$$P(T = t) = \begin{cases} \frac{1}{3} & t \in \{-1,0,1\} \\ 0 & \text{elsewhere} \end{cases}$$
and
$$P(U = u) = \begin{cases} e^{-\lambda} \dfrac{\lambda^{u}}{u!} & u \in \{0,1,\ldots\} \\ 0 & \text{elsewhere,} \end{cases}$$
which in turn means $U \sim \mathrm{po}(\lambda)$.

Solution (2): How do I show this, if not as above? Or do I show that they have the same variance?

(3) Assume that $\lambda = 1$; then $P(T=U) = \frac{2}{3} e^{-\lambda}$.

Solution: is $P(T = U) = P(t \cup u)$???

Best Regards
Hartogsohn
# Anomaly detection thresholds issue

I'm working on an anomaly detection project in Python. More specifically, I need to analyse time series in order to check whether anomalies are present. An anomalous value is typically a peak, i.e. a value very high or very low compared to the other values. The main idea is to predict the time series values and, using thresholds, detect anomalies. The thresholds are calculated from the error, that is, the real values minus the predicted ones. Then the mean and standard deviation of the error are computed. The upper threshold equals mean + (5 * standard deviation). The lower threshold equals mean - (5 * standard deviation). If the error exceeds the thresholds, the value is marked as anomalous.

What doesn't work with this approach is that if I have more than one anomalous value in a day, they are not detected. This is because the error's mean and standard deviation are too strongly influenced by the anomalous values themselves. How can I fix this problem? Is there another method that I can use to identify thresholds without this issue? Thank you

• Does your data vary significantly from day to day? Oct 29, 2019 at 22:24
• no, data are quite similar Oct 30, 2019 at 8:00
• In the equation "upper threshold = mean + (5 * standard deviation)", are the mean and standard deviation computed from the one-day sample or from a much larger period? Oct 30, 2019 at 19:40

You will probably have to change your critical value to something other than 5 to get the same kind of coverage. According to Wikipedia, you'll want the new critical value to be $$5\sqrt{\frac{\pi}{2}}$$ if your data are iid Gaussian.
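For concreteness, here is a minimal sketch of the thresholding described in the question, with an optional median/MAD variant (a common robust alternative, not something proposed in the answer above) that keeps several same-day spikes from inflating the spread estimate; the names are made up:

```python
import numpy as np

def thresholds(error, k=5.0, robust=False):
    """Lower/upper anomaly thresholds from a 1-D array of prediction errors.

    robust=False reproduces the scheme described above: mean +/- k*std.
    robust=True uses median +/- k*MAD (median absolute deviation), which is
    far less influenced by the anomalous values themselves.
    """
    error = np.asarray(error, dtype=float)
    if robust:
        center = np.median(error)
        spread = np.median(np.abs(error - center))   # MAD
    else:
        center = error.mean()
        spread = error.std()
    return center - k * spread, center + k * spread

# toy example: two large errors in the same "day"
err = np.array([0.1, -0.2, 0.0, 0.15, 8.0, -0.1, 7.5, 0.05])
print(thresholds(err))                # wide band -- the spikes inflate the std
print(thresholds(err, robust=True))   # tight band -- both spikes exceed it
```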
# Definition:Open Set/Pseudometric Space Let $P = \left({A, d}\right)$ be a pseudometric space. An open set in $P$ is defined in exactly the same way as for a metric space: $U$ is an open set in $P$ if and only if: $\forall y \in U: \exists \epsilon \left({y}\right) > 0: B_\epsilon \left({y}\right) \subseteq U$ where $B_\epsilon \left({y}\right)$ is the open $\epsilon$-ball of $y$.
# Simple plotting from the command line The problem with making graphs in Matplotlib, Gnuplot, and even Numbers, is that it takes too long to get something that looks halfway decent1 when I’m not in “graph-making mode.” I’ll be working on a problem and realize that a quick visualization of a function or some data would help my understanding. I don’t want my train of thought derailed by the context shift that comes with starting up a program and fiddling with it—I just want a reasonable-looking graph now. So I wrote a couple of little scripts to make plots from the command line. The scripts are in this GitHub repository, and I’ll quote its README to explain how they’re used. Two simple scripts for creating plots from the command line: one for plotting functions and the other for plotting points. The scripts require the NumPy and matplotlib modules. The graphs produced aren’t intended to be of publication quality. The goal is to make readable graphs very quickly. ## Functions For these, everything is given on the command line. plot 'function(s)' minx maxx [miny maxy] Three arguments are required: the function itself, which would normally be enclosed in quotes to avoid problems with shell interpretation; the minimum x value; and the maximum x value. The last two arguments, the minimum and maximum y values are optional—if they aren’t given, the script will figure them out. A common reason to specify the y limits is if the function “blows up” between the x limits and the extreme y values make the rest of the graph look flat. The function can include any Python expression or function, including those in the NumPy library. There’s no need to use a numpy prefix. More than one function can be specified; separate them with a semicolon. Thus, something like plot 'tan(x); x' pi 3*pi/2-.01 0 5 can be used to find the intersection of the two given functions. This example also shows that you can use expressions in the limits. The result of plot is an 800×600 PNG image that’s sent to standard output. Normally, this would be redirected to a file, plot 'tan(x); x' pi 3*pi/2-.01 0 5 > plot.png and viewed in whatever graphics program the user prefers. No controls for tick marks or grid spacing are provided—this is quick and dirty plotting. ## Points The script that plots lists of points, pplot, gets its data from standard input. On the Mac, it would be common to pass data in from the clipboard this way: pbpaste | pplot As with plot, pplot produces an 800×600 PNG image that’s sent to standard output, so one would normally redirect this to a file: pbpaste | pplot > plot.png Each line of the input data represents one (x, y) point. The x and y values can be separated by tabs, spaces, or commas. Normally, pplot plots the points only. If you want to connect the points with a line, use the -l option: pbpaste | pplot -l > plot.png Here’s plot: python: 1: #!/usr/bin/python 2: 3: from __future__ import division 4: from numpy import * 5: import matplotlib.pyplot as plt 6: import sys 7: 8: fcns = sys.argv[1].split(';') 9: minx = eval(sys.argv[2]) 10: maxx = eval(sys.argv[3]) 11: plt.xlim(minx, maxx) 12: try: 13: miny = float(sys.argv[4]) 14: maxy = float(sys.argv[5]) 15: plt.ylim(miny, maxy) 16: except IndexError: 17: pass 18: 19: x = linspace(minx, maxx, 100) 20: for i, f in enumerate(fcns): 21: y = eval(f) 22: plt.plot(x, y, "-", linewidth=2) 23: plt.minorticks_on() 24: plt.grid(which='both', color='#aaaaaa') 25: plt.savefig(sys.stdout, format='png') A few things about plot that I’m not proud of: • No error handling. 
The try/except clause isn’t really error handling, it’s just a simple way to handle optional arguments. I’m assuming that if I screw up the input, whatever error message Python spits back will help me fix it. • The evals. I know it’s considered dangerous, but I’m not going to write my own parser. Plus, I think the danger is overblown. I know perfectly well how the program works; why would I pass in a function that wipes my hard drive? • Bringing in every NumPy command with the import statement. I’d rather not have the namespace polluted, but I’ll be damned if I’m going to type things like np.sin(x) every time I need a trig function. You may be wondering why I used savefig in Line 25 instead of show. I just don’t like the visualizer that show launches—its window is too small, it blocks further input from the command line, and it requires a mouse click to dismiss it. I’d rather just open a PNG file in Preview. If you prefer show, be my guest; just change Line 25 to plt.show(). Here’s pplot: python: 1: #!/usr/bin/python 2: 3: import matplotlib.pyplot as plt 4: import sys 5: import re 6: import getopt 7: 8: marker = '.' 9: opts, args = getopt.getopt(sys.argv[1:], 'l') 10: for o, a in opts: 11: if o == '-l': 12: marker = '.-' 13: 14: x = [] 15: y = [] 16: for line in sys.stdin: 17: line = line.strip() 18: vals = re.split(r'\s+|\s*,\s*', line) 19: x.append(vals[0]) 20: y.append(vals[1]) 21: plt.plot(x, y, marker) 22: plt.minorticks_on() 23: plt.grid(which='both', color='#aaaaaa') 24: plt.savefig(sys.stdout, format='png') As you can see in Line 18, the x and y values can be separated by any amount of whitespace or by a comma and whitespace. This makes pplot more flexible in the input it can accept. Like plot, this is a pretty bare-bones script, with no error handling. An obvious deficiency in pplot is its inability to graph more than one dataset. I thought about extending it, but there are too many ways to make a table with multiple datasets. Once you get to that level of complexity, you’re better off using pandas, which has ways of slicing and handling missing data that a simple program like pplot couldn’t hope to include. 1. Actually, “halfway decent” is beyond Numbers. Its graphs are acceptable, I suppose, for business graphics, where pie charts are considered hot shit, but totally wrong for science and engineering.
# Math Help - Setting Up Integral

1. ## Setting Up Integral

y = 1/(1+x^2), y = 0, x = 0, x = 2, revolved about x = 2. Set up the integral for the volume. So, I know it's a version of 2pi integral of the height and c, but I don't know what those are.

2. $V = 2\pi \int_0^2 (2-x) \cdot \frac{1}{1+x^2} \, dx$
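As a sanity check (not part of the original thread), the shell integral above can be evaluated symbolically, e.g. with SymPy; the closed form works out to $2\pi\left(2\arctan 2 - \tfrac{1}{2}\ln 5\right) \approx 8.86$:

```python
import sympy as sp

x = sp.symbols('x')
V = 2 * sp.pi * sp.integrate((2 - x) / (1 + x**2), (x, 0, 2))
print(sp.simplify(V))   # equivalent to 2*pi*(2*atan(2) - log(5)/2)
print(float(V))         # about 8.86
```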
Browse Questions # For the reaction ,$\;2N_{2}O_{5} \to 4 NO_{2}+O_{2}\;$ , the rate equation can be expressed in two ways $\;- \large\frac{d[N_{2}O_{5}]}{dt} = k \normalsize N_{2}O_{5}\;$ and $\;+ \large\frac{d[NO_{2}]}{dt} = k^{'} \normalsize N_{2}O_{5}\;$ $\;k\;$ and $\;k^{'}\;$ are related as : $(a)\;k=k^{'} \qquad(b)\;2k=k^{'}\qquad(c)\;k=2k^{'}\qquad(d)\;k=4k^{'}$
## April 13, 2011

### The Three-Fold Way (Part 6)

#### Posted by John Baez

I started this tale by pointing out two ‘problems’ with real and quaternionic quantum theory:

• Apparently you can’t get observables from symmetries. More precisely: 1-parameter unitary groups on real or quaternionic Hilbert spaces don’t have self-adjoint generators, as they do in complex quantum theory.
• Apparently you can’t tensor two quaternionic Hilbert spaces. More precisely, you can’t tensor two quaternionic Hilbert spaces (which are one-sided $\mathbb{H}$-modules) and get another one.

Then I spent some time talking about how real, complex and quaternionic quantum mechanics fit together in a unified structure: the three-fold way. Today, I’d like to conclude the tale by saying how the three-fold way solves — or dissolves — these problems.

We’ve seen how on a real, complex or quaternionic Hilbert space, any continuous one-parameter unitary group has a skew-adjoint generator $S$. In the complex case we can write $S$ as $i$ times a self-adjoint operator $A$, which in physics describes a real-valued observable. Alas, in the real or quaternionic cases we cannot do this. However, as we saw last time, real and quaternionic Hilbert spaces can be seen as a complex Hilbert space with extra structure. This solves this problem. Indeed, we’ve seen faithful embeddings $\alpha_* : Hilb_\mathbb{R} \to Hilb_\mathbb{C}$ and $\beta^* : Hilb_\mathbb{H} \to Hilb_\mathbb{C} .$ And these are ‘dagger-functors’, so they send skew-adjoint operators to skew-adjoint operators. So, given a skew-adjoint operator $S$ on a real Hilbert space $H$, we can apply the functor $\alpha_*$ to reinterpret it as a skew-adjoint operator on the complexification of $H$, namely $H \otimes {{}_\mathbb{R}\mathbb{C}_{\mathbb{C}}}$. By abuse of notation let us still call the resulting operator $S$. Now we can write $S = i A$. But the resulting self-adjoint operator $A$ has an interesting feature: its spectrum is symmetric about $0$!

In the finite-dimensional case, all this means is that for any eigenvector of $A$ with eigenvalue $c \in \mathbb{R}$, there is a corresponding eigenvector with eigenvalue $-c$. This is easy to see. Suppose that $A v = c v$. By a theorem from last time, the complexification $H \otimes {{}_\mathbb{R}\mathbb{C}_{\mathbb{C}}}$ comes equipped with an antiunitary operator $J$ with $J^2 = 1$, and we have $S J = J S$. It follows that $J v$ is an eigenvector of $A$ with eigenvalue $-c$: $A J v = - i S J v = - i J S v = J i S v = - J A v = - c J v .$ (In this calculation I’ve reverted to the standard practice of treating a complex Hilbert space as a left $\mathbb{C}$-module.) In the infinite-dimensional case, we can make an analogous but more subtle statement about the continuous spectrum.

Similarly, given a skew-adjoint operator $S$ on a quaternionic Hilbert space $H$, we can apply the functor $\beta^*$ to reinterpret it as a skew-adjoint operator on the underlying complex Hilbert space. Let us again call the resulting operator $S$. We again can write $S = i A$. And again, the spectrum of $A$ is symmetric about $0$. The proof is the same as in the real case: now, by another theorem from last time, the underlying complex Hilbert space $H$ is equipped with an antiunitary operator $J$ with $J^2 = -1$, but we again have $S J = J S$, so the same calculation goes through.

So, that takes care of one problem. The second ‘problem’ is that we cannot take the tensor product of two quaternionic Hilbert spaces and get another quaternionic Hilbert space. 
But we’ve seen that there’s a strong analogy between bosons and real Hilbert spaces, and between fermions and quaternionic Hilbert spaces. This makes it seem rather odd to want to tensor two quaternionic Hilbert spaces and get another one. Just as two fermions make a boson, the tensor product of two quaternionic Hilbert spaces is naturally a real Hilbert space. After all, we saw last time that a quaternionic Hilbert space can be identified with a complex Hilbert space with an antiunitary $J$ such that $J^2 = -1$. If we tensor two such spaces, we get a complex Hilbert space equipped with an antiunitary $J$ such that $J^2 = 1$. And we’ve seen that this can be identified with a real Hilbert space.

Further arguments of this sort give four tensor product functors: \begin{aligned} \otimes : Hilb_\mathbb{R} \times Hilb_\mathbb{R} & \to & Hilb_\mathbb{R} \\ \otimes : Hilb_\mathbb{R} \times Hilb_\mathbb{H} & \to & Hilb_\mathbb{H} \\ \otimes : Hilb_\mathbb{H} \times Hilb_\mathbb{R} & \to & Hilb_\mathbb{H} \\ \otimes : Hilb_\mathbb{H} \times Hilb_\mathbb{H} & \to & Hilb_\mathbb{R} . \end{aligned} We can also tensor a real or quaternionic Hilbert space with a complex one and get a complex one: \begin{aligned} \otimes : Hilb_\mathbb{C} \times Hilb_\mathbb{R} & \to & Hilb_\mathbb{C} \\ \otimes : Hilb_\mathbb{R} \times Hilb_\mathbb{C} & \to & Hilb_\mathbb{C} \\ \otimes : Hilb_\mathbb{C} \times Hilb_\mathbb{H} & \to & Hilb_\mathbb{C} \\ \otimes : Hilb_\mathbb{H} \times Hilb_\mathbb{C} & \to & Hilb_\mathbb{C} . \end{aligned} Finally, of course we have: \begin{aligned} \otimes : Hilb_\mathbb{C} \times Hilb_\mathbb{C} & \to & Hilb_\mathbb{C} . \end{aligned}

In short, the ‘multiplication’ table of real, complex and quaternionic Hilbert spaces matches the usual multiplication table for the numbers $+1, 0, -1$! This is in fact nothing new: it goes back to a very old idea in group theory: the Frobenius–Schur indicator, which was developed in a joint paper by these two authors in 1906. (If you were at Oxford when I gave my lecture on this stuff, you’ll probably remember that after my talk, or maybe during it, someone we all know and love raised his hand and said “Isn’t this just the Frobenius-Schur indicator?” And I said something like “yes”.)

What’s the Frobenius–Schur indicator? It’s a way of computing a number from an irreducible unitary representation of a compact group $G$ on a complex Hilbert space $H$, say $\rho : G \to U(H)$ where $U(H)$ is the group of unitary operators on $H$. Any such representation is finite-dimensional, so we can take the trace of the operator $\rho(g^2)$ for any group element $g \in G$, and then perform the integral $\int_G \mathrm{tr}(\rho(g^2)) \, d g$ where $d g$ is the normalized Haar measure on $G$. This integral is the Frobenius–Schur indicator. It always equals $1$, $0$ or $-1$, and these three cases correspond to whether the representation $\rho$ is ‘real’, ‘complex’ or ‘quaternionic’ in the sense we’ve explained before.

The moral, then, is not to fight the patterns of mathematics, but to notice them and follow them.

Puzzle. - Is the category of real (resp. complex, resp. quaternionic) Hilbert spaces equivalent to the category of complex Hilbert spaces $H$ equipped with an operator $J: H \to H$ having some nice property and obeying the equation $J^2 = +1$ (resp. $J^2 = 0$, resp. $J^2 = -1$)?

Posted at April 13, 2011 7:33 AM UTC

### Re: The Three-Fold Way (Part 6)

Neat! 
It took me a little while to figure out what your puzzle is asking, though. Are you asking whether there is some property X of endomorphisms of complex Hilbert spaces, such that for any $\mathbb{K} \in \{ \mathbb{R}, \mathbb{C}, \mathbb{H}\}$, the category of $\mathbb{K}$-Hilbert spaces is equivalent to the category of complex Hilbert spaces equipped with an endomorphism $J$ satisfying property X and such that $J^2$ is equal to the Frobenius–Schur indicator of complex representations that come from $\mathbb{K}$-representations? In any case, the burning question in my mind after reading this is why? Why do the same three numbers $+1$, $0$, and $-1$ serve as markers for the three types of Hilbert space in the multiplication table, in the Frobenius–Schur indicator, and in the squares of structure operators?

Posted by: Mike Shulman on April 13, 2011 7:58 PM

### Re: The Three-Fold Way (Part 6)

Mike wrote: “It took me a little while to figure out what your puzzle is asking, though.”

The egregious typo making the first sentence of my puzzle nonsensical probably didn’t help! I’ll fix that now.

“Are you asking whether there is some property…”

Yes. Of course for each case separately we know there’s a property X that does the job. So the fun part is finding a single nice property that handles all three cases.

“In any case, the burning question in my mind after reading this is why?”

I don’t know! That’s another good puzzle, that must be related to mine.

Posted by: John Baez on April 14, 2011 4:03 AM

### Re: The Three-Fold Way (Part 6)

This may be naive, but regarding the first point (symmetries from observables), what if you simply redefine what an observable is? In traditional QM, an observable corresponds to a self-adjoint operator. But this is due to the $i$ appearing in the equations. If you absorb the $i$ into the operator, then observables correspond to skew-adjoint operators and the problem disappears immediately.

Posted by: Eric Forgy on April 14, 2011 12:47 AM

### Re: The Three-Fold Way (Part 6)

In the case of complex quantum theory, skew-adjoint or self-adjoint makes little difference: multiplying by $i$ turns one into the other, it doesn’t change the eigenvectors, and it multiplies the eigenvalues by $i$. So, say I have a skew-adjoint operator $S$ on a real Hilbert space, e.g. $\mathbb{R}^n$ with its usual dot product. If we ‘measure’ it, what values should we get out? Such an operator may have no eigenvectors:
$$S = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$
If you solve this puzzle, there’s a good chance your solution will be essentially the same as the one I presented here.

Posted by: John Baez on April 14, 2011 3:56 AM
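To make the Frobenius–Schur indicator discussed above concrete: for a finite group the Haar integral is just an average over the group, and the quaternionic case can be checked numerically on the 2-dimensional representation of the quaternion group $Q_8$. A small sketch with NumPy, using the standard matrix generators (not taken from the post itself):

```python
import numpy as np

# The 2-dimensional complex representation of the quaternion group Q8,
# generated by i and j acting as 2x2 matrices.
I2 = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = qi @ qj

Q8 = [s * g for s in (1, -1) for g in (I2, qi, qj, qk)]   # all 8 group elements

# Frobenius-Schur indicator: average of tr(rho(g^2)) over the group
indicator = sum(np.trace(g @ g) for g in Q8).real / len(Q8)
print(indicator)   # -1.0, the quaternionic case
```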
# How to express Allan variance without neglecting clock drift Allan variance, $\sigma^2[ \tau ]$, or its square root (Allan deviation, $\sigma[ \tau ]$) is a quantity (as function of parameter $\tau$) which is said to be a measure of (or related to) "stability of clocks". For a recent example cmp. "First accuracy evaluation of NIST-F2" (T. P. Heavner et al.), especially "Figure 1. The Total deviation (TOTDEV) of NIST-F2". Clearly, this quantity is referring to one clock itself; e.g. "the NIST-F2" in the article. "All measurements of Allan variance will in effect be the comparison of two different clocks [...] a reference clock and a device under test (DUT)." Likewise, the article on the NIST-F2 states that "The measurement was made using a commercial hydrogen maser as a reference." With two clocks being involved, there's necessarily concern about clock drift; indeed: "[...] drift will contribute to [the raw] output result. When measuring a real system, the linear drift or other drift mechanism may need to be estimated and removed from the time-series prior to calculating the Allan variance." My question: Can you please give an expression (as explicitly as reasonably achievable here) of this mentioned "(raw) output result before post-processing", in terms of • the total phase $\Phi_a[ t ]$ of the "device under test", • the total phase $\Phi_b[ t ]$ of the "reference", • and other parameters as needed, such as perhaps the nominal angular frequencies, $\omega_a$ and $\omega_b$, of the device under test and of the reference, respectively ?
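Not an answer to the question as posed, but for reference, here is a sketch of the standard (non-overlapping) Allan deviation computed from a time-error series — e.g. the measured phase difference between the device under test and the reference, converted to time units — with no drift removal applied. The function and variable names are made up:

```python
import numpy as np

def allan_deviation(x, tau0, m=1):
    """Standard (non-overlapping) Allan deviation from time-error samples.

    x    : time-error samples x_k in seconds (e.g. (Phi_a - Phi_b)/omega_nominal)
    tau0 : sampling interval of x, in seconds
    m    : averaging factor, so tau = m * tau0

    Uses sigma_y^2(tau) = < (x_{k+2m} - 2 x_{k+m} + x_k)^2 > / (2 tau^2) on the
    series decimated to spacing tau. No drift is removed, so a linear frequency
    drift will show up in the result.
    """
    x = np.asarray(x, dtype=float)[::m]      # decimate to spacing tau
    tau = m * tau0
    d2 = x[2:] - 2 * x[1:-1] + x[:-2]        # second differences
    return np.sqrt(np.mean(d2**2) / (2 * tau**2))

# toy usage: white frequency noise gives sigma_y(tau) ~ tau^(-1/2)
rng = np.random.default_rng(0)
y = 1e-12 * rng.standard_normal(100000)      # fractional frequency samples
x = np.cumsum(y) * 1.0                       # time error for tau0 = 1 s
print([allan_deviation(x, 1.0, m) for m in (1, 10, 100)])
```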
Chapter 33 • Because of the increased risk of perioperative morbidity and mortality, patients with acute hepatitis should have any elective surgery postponed until the acute hepatitis has resolved, as indicated by the normalization of liver tests. • Isoflurane and sevoflurane are the volatile agents of choice because they preserve hepatic blood flow and oxygen delivery. Factors known to reduce hepatic blood flow, such as hypotension, excessive sympathetic activation, and high mean airway pressures during controlled ventilation, should be avoided. • In evaluating patients for chronic hepatitis, laboratory test results may show only a mild elevation in serum aminotransferase activity and often correlate poorly with disease severity. • Approximately 10% of patients with cirrhosis also develop at least one episode of spontaneous bacterial peritonitis, and some patients may eventually develop hepatocellular carcinoma. • Massive bleeding from gastroesophageal varices is a major cause of morbidity and mortality, and, in addition to the cardiovascular effects of acute blood loss, the absorbed nitrogen load from the breakdown of blood in the intestinal tract can precipitate hepatic encephalopathy. • The cardiovascular changes observed in the patient with hepatic cirrhosis are usually that of a hyperdynamic circulation, although clinically significant cirrhotic cardiomyopathy is often present and not recognized. • The effects of hepatic cirrhosis on pulmonary vascular resistance vessels may result in chronic hypoxemia. • Hepatorenal syndrome is a functional renal defect in patients with cirrhosis that usually follows gastrointestinal bleeding, aggressive diuresis, sepsis, or major surgery. It is characterized by progressive oliguria with avid sodium retention, azotemia, intractable ascites, and a very high mortality rate. • Factors known to precipitate hepatic encephalopathy in patients with cirrhosis include gastrointestinal bleeding, increased dietary protein intake, hypokalemic alkalosis from vomiting or diuresis, infections, and worsening liver function. • Following the removal of large amounts of ascitic fluid, aggressive intravenous fluid replacement is often necessary to prevent profound hypotension and kidney failure. The prevalence of liver disease is increasing in the United States. Cirrhosis, the terminal pathology in the majority of liver diseases, has a general population incidence as high as 5% in some autopsy series. It is a major cause of death in men in their fourth and fifth decades of life, and mortality rates are increasing. Ten percent of the patients with liver disease undergo operative procedures during the final 2 years of their lives. The liver has remarkable functional reserve, and thus overt clinical manifestations of hepatic disease are often absent until extensive damage has occurred. When patients with little hepatic reserve come to the operating room, effects from anesthesia and the surgical procedure can precipitate further hepatic decompensation, leading to frank hepatic failure. ### Coagulation in Liver Disease In stable chronic liver disease, the causes of excessive bleeding primarily involve severe thrombocytopenia, endothelial dysfunction, portal hypertension, renal failure, and sepsis (see Chapters 32 and 51). However, the hemostatic changes that occur with liver disease may cause hypercoagulation and thrombosis, as well as an increased risk of bleeding. Clot breakdown may be enhanced by an ... 
# Wishful Coding

Didn't you ever wish your computer understood you?

## Chord Progression Suggester

I did this project together with Sarah; the goal is to suggest good chord progressions to musicians. We both know some music theory, so we started out by reading about what makes a good chord progression. The longer you look at it, the more complicated it gets. I quickly concluded that music is art, not science, so you can do whatever you want. Instead, we wrote a scraper for guitar tabs. Just download a ton of them and look for anything that looks roughly like

([A-Ga-g][b#]?)(m)?((?:maj)?[0-9])?(sus[0-9]|add[0-9])?(/[A-Ga-g][b#]?)

We then try to figure out which key the original song was in, and convert the chords to intervals, called universal notation. Now that we have these sequences of legit chord progressions, we need to persist them in a smart way. We went through Markov chains, tries and some other craziness, but finally settled on just a flat list. It turns out just scanning the whole list for the given pattern takes an acceptable amount of time. The final step was to write a GUI that could suggest chords for you. It's written in Tkinter and uses Mingus for some playing and parsing.

Get my new pop hit

Get the source
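For illustration, the scraping pattern above can be exercised directly with Python's re module (the chord strings below are made up). Note that, as written, the final slash-bass group is not optional, so a plain chord like Am7 only matches if that last group is made optional:

```python
import re

CHORD = re.compile(r"([A-Ga-g][b#]?)(m)?((?:maj)?[0-9])?(sus[0-9]|add[0-9])?(/[A-Ga-g][b#]?)")

for token in ["C#m7/G#", "G/B", "Dsus4/A", "Am7"]:
    m = CHORD.fullmatch(token)
    print(token, m.groups() if m else "no match")
```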
# zbMATH — the first resource for mathematics Construction of a certain class of harmonic close-to-convex functions associated with the Alexander integral transform. (English) Zbl 1038.30010 In this paper the authors make use of the Alexander integral transforms of some analytic functions on the unit disc, which are starlike or convex of order $$\alpha\in [0,1)$$, to study the construction of sense-preserving, univalent, and close-to-convex harmonic functions. ##### MSC: 30C45 Special classes of univalent and multivalent functions of one complex variable (starlike, convex, bounded rotation, etc.) 30C20 Conformal mappings of special domains 44A15 Special integral transforms (Legendre, Hilbert, etc.) Full Text: ##### References: [1] DOI: 10.2307/2007212 · JFM 45.0672.02 · doi:10.2307/2007212 [2] Clunie J., Ann. Acad. Sci Fenn. Ser. AI Math. 9 pp 3– (1984) [3] Duren P. L., Univalent Functions, Grundlehren der Mathematishcen Wissenschaften. Bd. 259 (1983) [4] DOI: 10.1155/S1048953302000023 · Zbl 0999.30010 · doi:10.1155/S1048953302000023 [5] DOI: 10.1090/S0002-9904-1936-06397-4 · Zbl 0015.15903 · doi:10.1090/S0002-9904-1936-06397-4 [6] Srivastava H. M., Current Topics in Analytic Function Theory (1992) · Zbl 0976.00007 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
# An integro-differential equation

by Sepideh Bakhoda   Last Updated September 23, 2018 12:20 PM

I want to solve this integro-differential equation $$(f(x,y,z)-g(x,y,z)\partial_z^{-1}\partial_x)u(x,y,z)=h(x,y,z)$$ where $$f, g, h$$ are known and $$u$$ is unknown. In addition $$\partial_z^{-1}(\vec{p_1},\vec{p_2})=\theta(z_1-z_2)\delta(x_1-x_2)\delta(y_1-y_2)$$ where $$\theta$$ is the Heaviside step function, $$\delta$$ is Dirac's delta function and $$\vec{p_i}=(x_i,y_i,z_i)$$. Can someone point me in the right direction? How should I start to attack this problem? Is there a general way to solve this kind of equation? I would also be very grateful if you could introduce me to some references on solving these equations.
# Show that $\int^1_0 \frac{\tanh^{-1} x}{x} \ln [(1 + x)^3 (1 - x)] \, dx = 0$ I am looking for a nice slick way to show $$\int^1_0 \frac{\tanh^{-1} x}{x} \ln [(1 + x)^3 (1 - x)] \, dx = 0.$$ So far I can only show the result using brute force as follows. Let $$I = \int^1_0 \frac{\tanh^{-1} x}{x} \ln [(1 + x)^3 (1 - x)] \, dx.$$ Since $$\tanh^{-1} x = \frac{1}{2} \ln \left (\frac{1 + x}{1 - x} \right ),$$ the above integral, after rearranging, can be rewritten as $$I = \frac{3}{2} \int^1_0 \frac{\ln^2 (1 + x)}{x} \, dx - \int^1_0 \frac{\ln (1 - x) \ln (1 + x)}{x} \, dx - \frac{1}{2} \int^1_0 \frac{\ln^2 (1 - x)}{x} \, dx.\tag1$$ Each of the above three integrals can be found. The results are: $$\int^1_0 \frac{\ln^2 (1 + x)}{x} \, dx = \frac{1}{4} \zeta (3).$$ For a proof, see here or here. $$\int^1_0 \frac{\ln (1 - x) \ln (1 + x)}{x} \, dx = -\frac{5}{8} \zeta (3).$$ For a proof, see here. And $$\int^1_0 \frac{\ln^2 (1 - x)}{x} \, dx = 2 \zeta (3).$$ For a proof of this last one, see here. Thus (1) becomes $$\int^1_0 \frac{\tanh^{-1} x}{x} \ln [(1 + x)^3 (1 - x)] \, dx = \frac{3}{8 } \zeta (3) + \frac{5}{8} \zeta (3) - \zeta (3) = 0,$$ as expected. • what about the change of variable $y=\dfrac{1-x}{1+x}$? – FDP Jun 12 '19 at 8:26 • Your integral is \begin{align}\int^1_0\frac{\ln\left(\frac{16x}{(1+x)^4}\right)\ln x}{1-x^2}\, dx\end{align}. \begin{align}\int^1_0\frac{\ln(1+x)\ln x}{1-x^2}\, dx\end{align} is not so easy to compute. – FDP Jun 12 '19 at 11:11 • The integrand here in fact has an antiderivative that can be expressed in terms of trilogs, dilogs and elementary functions using the integration formula found in this question. This expression isn't quite what most people would describe as "nice", but it certainly could be much worse. The alternative ways of solving the definite integral used in the answers provided below are admittedly slicker. OTOH, having the antiderivative opens the door to computing more general integrals. – David H Jun 22 '19 at 9:25 • What's the point for the bounty? – FDP Jun 25 '19 at 10:35 • I tested something actually and in the same time to bring attention, as it was mentioned. – Zacky Jun 28 '19 at 23:03 ## 2 Answers $$\sf I = \frac12 \int^1_0 \frac{\ln\left(\frac{1+x}{1-x}\right)}{x} \ln ((1 + x)^3 (1 - x)) dx=\frac12 \int_0^1 \frac{(a-b)(3a+b)}{x}dx$$ Where we denoted $$\sf a=\ln(1+x)$$ and $$\sf b=\ln(1-x)$$. Now we're going to use the following algebraic expression: $$\sf (a-b)(3a+b)=(a+b)^2+2(a-b)^2 -4b^2$$ Which is obtainable easily by combining the following expressions: $$\sf a^2=\frac12(a+b)^2+\frac12(a-b)^2 -b^2,\quad ab=\frac14(a+b)^2-\frac14(a-b)^2\tag 1$$ $$\Rightarrow \sf 2I=\int_0^1 \frac{\ln^2\left(1-x^2\right)}{x}dx +2\int_0^1 \frac{\ln^2\left(\frac{1+x}{1-x}\right)}{x}dx-4\int_0^1 \frac{\ln^2\left(1-x\right)}{x}dx$$ Now we put $$\sf x^2=t$$ in the first part and $$\sf \frac{1-x}{1+x}=t$$ for the second one to get: $$\sf 2I=\frac12 \int_0^1 \frac{\ln^2(1-t)}{t}dt+4\int_0^1 \frac{\ln^2 t }{1-t^2}dt-4\int_0^1 \frac{\ln^2(1-t)}{t}dt$$ $$\sf =-\frac72\int_0^1 \frac{\ln^2 t}{1-t}dt+4\int_0^1 \frac{\ln^2 t }{1-t^2}dt=-\frac72\int_0^1 \frac{\ln^2 t}{1-t}dt+\frac72\int_0^1 \frac{\ln^2 t}{1-t}dt=0$$ Above we used that: $$\boxed{\sf \int_0^1 \frac{\ln^2 x}{1-x^2}dx=\frac78 \int_0^1 \frac{\ln^2 x}{1-x}dx}$$ And we can show this in two steps. 
First: $$\sf {\int_0^1 \frac{\ln^2 x}{1-x}dx}\overset{x\to x^2}=8\int_0^1 \frac{x\ln^2 x}{1-x^2}dx=4{\int_0^1 \frac{\ln^2 x}{1-x}dx}-4\int_0^1 \frac{\ln^2 x}{1+x}dx$$ $$\sf \Rightarrow (1-4){\int_0^1 \frac{\ln^2 x}{1-x}dx}=-4\int_0^1 \frac{\ln^2 x}{1+x}dx\Rightarrow \boxed{\int_0^1 \frac{\ln^2 x}{1+x}dx=\frac34 \int_0^1 \frac{\ln^2 x}{1-x}dx}$$ But we also have that: $$\sf \int_0^1 \frac{\ln^2 x}{1-x^2}dx=\frac12 \int_0^1 \frac{\ln^2 x}{1-x}dx+\frac12 \int_0^1 \frac{\ln^2 x}{1+x} dx$$ $$\sf =\frac12\int_0^1 \frac{\ln^2 x}{1-x}dx+ \frac38\int_0^1 \frac{\ln^2 x}{1-x}dx=\frac78 \int_0^1 \frac{\ln^2 x}{1-x}dx$$ Generalization. In a similar fashion we can deal with the following integral: $$\sf I(m,n,q,p)=\int_0^1 \frac{[m\ln(1+x)+n\ln(1-x)][q\ln(1+x)+p\ln(1-x)]}{x}dx$$ Like from above we will keep $$\sf a=\ln(1+x)$$ and $$\sf b=\ln(1-x)$$. Thus we can write the numarator as: $$\sf f=(ma+nb)(qa+pb)=mqa^2+(mp+nq)ab+npb^2$$ Using $$(1)$$ again we obtain: $$\sf f=\left(\frac{mq}{2}+\frac{mp+nq}{4}\right)(a+b)^2+\left(\frac{mq}{2}-\frac{mp+nq}{4}\right)(a-b)^2+(np-mq)b^2$$ Furthermore we can write: $$\sf \int_0^1 \frac{(a+b)^2}{x}dx=\int_0^1 \frac{\ln^2(1-x^2)}{x}dx=\frac12 \int_0^1 \frac{\ln^2 x}{1-x}dx$$ $$\sf \int_0^1 \frac{(a-b)^2}{x}dx=\int_0^1 \frac{\ln^2\left(\frac{1+x}{1-x}\right)}{x}dx=2\int_0^1 \frac{\ln^2 x}{1-x^2}dx=\frac74\int_0^1 \frac{\ln^2 x}{1-x}dx$$ $$\sf \Rightarrow I(m,n,q,p)=\left(\frac{mq}{8}-\frac{5}{16}(mp+nq)+np\right)\int_0^1 \frac{\ln^2 x}{1-x}dx$$ $$=\boxed{\sf \left(\frac{mq}{4}-\frac{5}{8}(mp+nq)+2np\right)\zeta(3)}$$ • The generalization is really interesting. – Claude Leibovici Jun 25 '19 at 5:24 \begin{align}J=\int^1_0 \frac{\tanh^{-1} x}{x} \ln [(1 + x)^3 (1 - x)] \, dx\end{align} Perform the change of variable $$y=\dfrac{1-x}{1+x}$$, \begin{align}J&=\int^1_0\frac{\ln\left(\frac{16x}{(1+x)^4}\right)\ln x}{1-x^2}\, dx\\ &=4\ln 2\int_0^1\frac{\ln x}{1-x^2}\,dx+\int_0^1\frac{\ln^2 x}{1-x^2}\,dx-4\int_0^1\frac{\ln x\ln(1+x)}{1-x^2}\,dx\\ \end{align} Define on $$[0;1]$$ the function $$R$$ by, \begin{align}R(x)&=\int_0^x \frac{\ln t}{1-t^2}\,dt\\ &=\int_0^1 \frac{x\ln(tx)}{1-t^2x^2}\,dt \end{align} Therefore, \begin{align}K&=\int_0^1\frac{\ln x\ln(1+x)}{1-x^2}\,dx\\ &=\Big[R(x)\ln(1+x)\Big]_0^1-\int_0^1\int_0^1 \frac{x\ln(tx)}{(1-t^2x^2)(1+x)}\,dt\,dx\\ &=\int_0^1 \frac{\ln 2\ln t}{1-t^2}\,dt-\int_0^1\left(\int_0^1 \frac{x\ln t}{(1-t^2x^2)(1+x)}\,dx\right)\,dt-\int_0^1\left(\int_0^1 \frac{x\ln x}{(1-t^2x^2)(1+x)}\,dt\right)\,dx\\ &=\ln 2\int_0^1 \frac{\ln t}{1-t^2}\,dt-\\ &\frac{1}{2}\left(\int_0^1 \frac{\ln t\ln(1+t)}{1-t}\,dt-\int_0^1 \frac{\ln t\ln\left(\frac{1-t}{1+t}\right)}{t}\,dt-\int_0^1 \frac{2\ln 2\ln t}{1-t^2}\,dt+\int_0^1 \frac{\ln(1-t)\ln t}{1+t}\,dt\right)-\\ &\frac{1}{2}\left(\int_0^1 \frac{\ln x\ln(1+x)}{1+x}\,dx-\int_0^1 \frac{\ln x\ln(1-x)}{1+x}\,dx\right)\\ &=2\ln 2\int_0^1 \frac{\ln t}{1-t^2}\,dt-K+\frac{1}{2}\int_0^1 \frac{\ln t\ln\left(\frac{1-t}{1+t}\right)}{t}\,dt \end{align} Therefore, \begin{align}K&=\ln 2\int_0^1 \frac{\ln t}{1-t^2}\,dt+\frac{1}{4}\int_0^1 \frac{\ln t\ln\left(\frac{1-t}{1+t}\right)}{t}\,dt\\ &=\ln 2\int_0^1 \frac{\ln t}{1-t^2}\,dt+\frac{1}{8}\left[\ln^2 t\ln\left(\frac{1-t}{1+t}\right)\right]_0^1+\frac{1}{4}\int_0^1 \frac{\ln^2 t}{1-t^2}\,dt\\ &=\ln 2\int_0^1 \frac{\ln t}{1-t^2}\,dt+\frac{1}{4}\int_0^1 \frac{\ln^2 t}{1-t^2}\,dt\\ \end{align} Therefore, \begin{align}\boxed{J=0}\end{align} NB: It's easy to deduce that, \begin{align}\int_0^1\frac{\ln x\ln(1+x)}{1-x^2}\,dx=\frac{7}{16}\zeta(3)-\frac{1}{8}\pi^2\ln 2\end{align}
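A quick numerical cross-check of the boxed closed form for $$\sf I(m,n,q,p)$$ above (this check is not part of the original answers; SciPy is assumed to be available, and the integrand's endpoint singularities are only logarithmic, so adaptive quadrature handles them):

```python
# Sanity check of I(m,n,q,p) = (mq/4 - 5(mp+nq)/8 + 2np) * zeta(3).
# The posted integral corresponds to J = I(1,-1,3,1)/2, which should vanish.
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

def I_num(m, n, q, p):
    f = lambda x: (m*np.log1p(x) + n*np.log1p(-x)) \
                * (q*np.log1p(x) + p*np.log1p(-x)) / x
    val, _ = quad(f, 0.0, 1.0, limit=200)
    return val

def I_closed(m, n, q, p):
    return (m*q/4 - 5*(m*p + n*q)/8 + 2*n*p) * zeta(3)

print(I_num(1, -1, 3, 1), I_closed(1, -1, 3, 1))  # both ~ 0 (the integral in question)
print(I_num(2, 1, 1, 3),  I_closed(2, 1, 1, 3))   # both ~ 2.554
```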
# Each exterior angle of a certain regular polygon measures 30. How many sides does the polygon have? Number of sides of the polygon is $12$. Sum of exterior angles of every polygon irrespective of number of its sides is ${360}^{o}$. As it is a regular polygon, all exterior angles are equal and are ${30}^{o}$. Hence number of sides of the polygon is ${360}^{o} / {30}^{o} = 12$.
Show that the Modulus Function Question: Show that the Modulus Function $f: \mathbf{R} \rightarrow \mathbf{R}$ given by $f(x)=|x|$, is neither one-one nor onto, where $|x|$ is $x$, if $x$ is positive or 0 and $|x|$ is $-x$, if $x$ is negative. Solution: $f: \mathbf{R} \rightarrow \mathbf{R}$ is given by, $f(x)=|x|=\left\{\begin{array}{l}x, \text { if } x \geq 0 \\ -x, \text { if } x<0\end{array}\right.$ It is seen that $f(-1)=|-1|=1, f(1)=|1|=1$. $\therefore f(-1)=f(1)$, but $-1 \neq 1$. $\therefore f$ is not one-one. Now, consider $-1 \in \mathbf{R}$. It is known that $f(x)=|x|$ is always non-negative. Thus, there does not exist any element $x$ in domain $\mathbf{R}$ such that $f(x)=|x|=-1$. ∴ f is not onto. Hence, the modulus function is neither one-one nor onto.
# Grade 6 MTAP Reviewer Set 7 This is the seventh set of the MTAP Reviewer for Grade 6. 1.) Write <, =, > in the blank below. 0.78  ____ 0.780 2.) Which of the following numbers has the smallest value? $0.6, \displaystyle \frac{3}{4}, \frac{1}{2}$ 3.) One number is twice another number. Their sum is 48. What are the numbers? 4.) Write the expression $4 \times 4 \times 4$ in exponent form. 5.) What is the largest 2-digit prime number? 6.)  What is the distance between two points (6,4) and (12,4)? 7.) Sam scored 87, 92, and 83 in three tests. What is his mean score? 8.) The diagonal of a square is 6cm. Find the area of the square that is twice its size. 9.) One pair of opposite sides of a quadrilateral are parallel. The other pair is not parallel. What is the name of the quadrilateral? 10.) What whole number is the best estimate for the square root of 90? 11.) Simplify: $3(2x + 1)$ 12.) Evaluate: $\displaystyle \frac{7}{10} - (\frac{1}{2} - \frac{2}{5})$. 1.) = 2.) 1/2 3.) 16 and 32 4.) $4^3$ 5.) 97 6.) 6 units 7.) 87.3 8.) 36 sq. cm. 9.) trapezoid/trapezium 10.) 9 11.) 6x + 3 12.) 3/5
# zbMATH — the first resource for mathematics Report 30/2005: Real Analysis, Harmonic Analysis and Applications to PDE (July 3rd – July 9th, 2005 (0527)). (English) Zbl 1106.42300 Summary: There have been important developments in the last few years in the point-of-view and methods of harmonic analysis, and at the same time significant concurrent progress in the application of these to partial differential equations and related subjects. The conference brought together experts and young scientists working in these two directions, with the objective of furthering these important interactions. Introduction: Major areas and results represented at the workshop are: I. Methods in harmonic analysis (a) Multilinear analysis: this is an outgrowth of the method of ‘tile decomposition’ which has been so successful in solving the problems of the bilinear Hilbert transform. Recent progress involves the control of maximal trilinear operators and an extension of the Carleson-Hunt theorem. (b) Geometry of sets in $$\mathbb{R}^d$$: This includes recent progress on the interaction of Fourier analysis and geometric combinatorics related to the Falconer distance problem. (c) Singular integrals: A break trough has been obtained on singular integrals on solvable Iwasawa AN- groups, which require a new type of Calderón-Zygmund decomposition, since the underlying spaces have exponential volume growth. Further significant progress involves the theory of operator-valued Calderón-Zygmund operators and their connection to maximal regularity of evolution equations. (d) Oscillatory integrals, Fourier integral operators and Maximal operators: This includes estimates of maximal operators related to polynomial polyhedra and their relations with higher dimensional complex analysis, sharp estimates for maximal operators associated to hypersurfaces in $$\mathbb{R}^3$$, estimates for degenerate Radon transforms and linear and bilinear estimates for oscillatory integral operators, as well as optimal Sobolev regularity for Fourier integral operators. II. Applications to P.D.E. (a) Dispersive linear and nonlinear equations: Far reaching new approaches to dispersive estimates for Schrödinger equations via coherent state decompositions and a related new phase space transform adapted to the wave operator were introduced. Further significant progress includes $$L^p$$- estimates respectively blow up rates for eigenfunctions and quasimodes of elliptic operators on compact manifolds with and without boundary, and related problems for globally elliptic pseudodifferential operators, and well-posedness of the periodic KP-I equations. All these results are based in part on important ideas in harmonic analysis (such as I(d) above). (b) Schrödinger operators with rough potentials: This includes quantitative unique continuation theorems and their relations with spectral properties and the study of embedded eigenvalues of Schrödinger operators. Contributions: Carlos E. Kenig, Quantitative unique continuation theorems (p. 1683) Waldemar Hebisch, Singular integral of Iwasawa AN groups (p. 1683) Malabika Pramanik (joint with Andreas Seeger), Optimal Sobolev regularity for Fourier integral operators on Rd (p. 1684) Isroil A. Ikromov, Boundedness problem for maximal operators associated to non-convex hypersurfaces (p. 1686) Alexandru Ionescu (joint with C. E. Kenig), Well-posedness theorems for the KP-I initial value problem on $$\mathbb T \times \mathbb T$$ and $$\mathbb R \times \mathbb T$$ (p. 1689) Michael Cowling (joint with M. 
Sundari), Hardy’s Uncertainty Principle (p. 1691) Daniel Tataru (joint with Dan Geba), A phase space transform adapted to the wave equation (p. 1693) Hart F. Smith (joint with Chris D. Sogge), Wave packets and boundary value problems (p. 1695) Svetlana Roudenko (joint with M. Frazier, F. Nazarov), Littlewood-Paley theory for matrix weights (p. 1697) Alexander Nagel (joint with Malabika Pramanik), Two problems related to polynomial polyhedra (p. 1700) D. H. Phong (joint with Jacob Sturm), Energy functionals and flows in Kähler geometry (p. 1703) Jong-Guk Bak, Endpoint estimates for some degenerate Radon transforms in the plane (p. 1706) Christoph Thiele (joint with Ciprian Demeter, Terence Tao), A maximal trilinear operator (p. 1708) Camil Muscalu (joint with Xiaochun Li), A generalization of the Carleson-Hunt theorem (p. 1711) Mihail N. Kolountzakis (joint with Evangelos Markakis, Aranyak Mehta), Fourier zeros of Boolean functions (p. 1713) Lars Diening (joint with Peter Hästö), Traces of Sobolev Spaces with Variable Exponents (p. 1714) Ralf Meyer, $$L^p$$-estimates for the wave equation associated to the Grusin operator (p. 1716) Stefanie Petermichl (joint with Sergei Treil, Brett Wick), Analytic Embedding in the unit ball of $$\mathbb {C}^n$$ (p. 1719) Peer Christian Kunstmann, Generalized Gaussian Estimates and Applications to Calderon-Zygmund Theory (p. 1721) Christopher Sogge (joint with John Toth and Steve Zelditch), Blowup rates of eigenfunctions and quasimodes (p. 1724) H. Koch (joint with D. Tataru), Dispersive estimates and absence of positive eigenvalues (p. 1727) Loredana Lanzani (joint with R. Brown, L. Capogna), On the Mixed Boundary Value Problem for Laplace’s Equation in Planar Lipschitz Domains (p. 1729) Sanghyuk Lee, Linear and bilinear estimates for oscillatory integral operators related to restriction to hypersurfaces (p. 1731) Lutz Weis, Calderón-Zygmund operators with operator-valued kernels and evolution equations (p. 1734). ##### MSC: 42-06 Proceedings, conferences, collections, etc. pertaining to harmonic analysis on Euclidean spaces 43-06 Proceedings, conferences, collections, etc. pertaining to abstract harmonic analysis 22-06 Proceedings, conferences, collections, etc. pertaining to topological groups 35-06 Proceedings, conferences, collections, etc. pertaining to partial differential equations 00B05 Collections of abstracts of lectures Full Text:
# Weak Acid Base Equilibrium Equilibrium of Weak Bases Weak bases only partially ionize in aqueous solutions, so they form a system of equilibrium with this general formula right here. Now when we apply the equilibrium expression to this reaction, we're going to get this expression right here. Notice here that I took the right side of this reaction and put it in the numerator of this fraction, and I took the left side of this reaction and put it in the denominator of this fraction. Now you may be thinking we left water out, but we didn't because that's just excess water, so we don't need to take it into account in this reaction. Now we have this equation right here and this equation is to solve for the base dissociation constant, which looks like this with the capital $$K$$ and then a lower-case subscript $$b$$, so the value of $$K_b$$ indicates the strength of the base. High $$K_b$$ values indicate strong bases, which dissociate readily in water, and low $$K_b$$ values indicate weak bases, which dissociate only a little in water. By looking just at the $$K_b$$ values, we can determine which base is stronger. Right here we're taking a look at ammonia and pyridine, so the $$K_b$$ value of ammonia is 1 point 8 times 10 to the negative 5th. Then the $$K_b$$ value of pyridine is 1 point 7 times 10 to the negative 9. Obviously here, ammonia has a higher $$K_b$$ value, so we know that ammonia is a stronger base than pyridine, which means that ammonia is going to dissociate more readily in water. by Mometrix Test Preparation | This Page Last Updated: July 18, 2022
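To make the comparison concrete, here is a small back-of-the-envelope calculation (not from the transcript; the 0.10 M concentration is an assumption chosen for illustration) using the usual weak-base shortcut $$[\mathrm{OH}^-]\approx\sqrt{K_b C}$$, valid here because the ionization is small:

```python
# Estimate [OH-] and pH of a 0.10 M solution of each base (illustrative only;
# the concentration is assumed, and x << C justifies the square-root shortcut).
import math

C = 0.10  # mol/L, assumed
for name, Kb in [("ammonia", 1.8e-5), ("pyridine", 1.7e-9)]:
    x = math.sqrt(Kb * C)     # [OH-] from Kb = x^2/(C - x) ≈ x^2/C
    pOH = -math.log10(x)
    print(f"{name}: [OH-] ≈ {x:.2e} M, pH ≈ {14 - pOH:.2f}")
# ammonia comes out near pH 11.1 and pyridine near pH 9.1: the larger Kb
# gives the more basic solution, as the transcript says.
```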
x02 Chapter Contents x02 Chapter Introduction NAG C Library Manual NAG Library Function Documentnag_real_safe_small_number (X02AMC) 1  Purpose nag_real_safe_small_number (X02AMC) returns the safe range of floating point arithmetic. 2  Specification #include #include double nag_real_safe_small_number 3  Description nag_real_safe_small_number (X02AMC) is a constant defined in the C Header file. nag_real_safe_small_number (X02AMC) is defined to be the smallest positive model number $z$ such that for any $x$ in the range [$z,1/z$] the following can be computed without undue loss of accuracy, overflow, underflow or other error: • $-x$ • $1/x$ • $-1/x$ • $\sqrt{x}$ • $\mathrm{log}\left(x\right)$ • $\mathrm{exp}\left(\mathrm{log}\left(x\right)\right)$ • ${y}^{\left(\mathrm{log}\left(x\right)/\mathrm{log}\left(y\right)\right)}$ for any $y$ None. None. None. None.
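For intuition only, here is a rough Python analogue of the property being described. This is not the NAG Library, and the chosen $z$ is simply the smallest normalised IEEE double, used as an illustrative stand-in for the value X02AMC would return:

```python
# Check that for x at either end of [z, 1/z] the listed operations stay finite
# in IEEE double precision.  z is an illustrative stand-in, not the NAG constant.
import sys, math

z = sys.float_info.min  # ~2.2e-308
for x in (z, 1.0, 1.0 / z):
    vals = (-x, 1.0 / x, -1.0 / x, math.sqrt(x), math.log(x), math.exp(math.log(x)))
    assert all(math.isfinite(v) for v in vals)
print("all listed operations stay finite for x in {z, 1, 1/z}")
```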
# C++ vector, very hard to find bug ## Recommended Posts I was at a loss as to what to call this thread, but I hope the title works. Ok, I've got a bug that I cannot reproduce 100% of the time, but through several hours of tests I have managed to tie it very probably to a set of things. Here is all of the information that I think is relevant: I'm working on a game. There is a vector of NPCs, and a vector of shots (fired by npcs, the player, etc.) Every frame, the game loops through all of the shots and moves them, then moves the NPCs, etc. When a shot detects a collision with an NPC, that NPC's death code is run, and then the shot enters an "animate and then vanish" state. Today I added a boss. His special death code includes one extra thing: an explosion is created upon his death. This explosion is just another shot that cannot move or hurt anything, and just animates and then vanishes. I believe this is where the problem lies. I noticed that sometimes, when I kill the boss with one of my shots, the boss will die and leave an explosion (as expected), but the shot that killed him will keep moving as if nothing happened. This only happens sometimes, though. I finally found that it happens much more consistently if I restart the whole program and immediately go to the boss's level and kill him. If I die, leave the level and come back, etc., it almost never happens again. But as soon as I restart the whole program it will usually (still not always, ugh) happen again the first time I kill that boss. I believe I have fixed the problem, as I have tested it extensively (for a couple hours, just killing over and over and restarting) with and without my "fix," and this has never once happened with my fix applied. Without the fix, it just does a push_back() on the shots vector inside of the npc death code. To fix it, I instead had it add the shot to a temporary vector, and the shot is only added to the real shots vector at the end of the logic code, after looping through everything. I was reading over the explanation of vector::push_back on cplusplus.com, and it says that if the vector's size equals its capacity before the push_back, all of the vector's internal allocated storage is reallocated. Is it possible that what was happening was 1) Begin looping through shots vector 2) Hey! This shot has hit a dude, call his death function. 3) He dies, and an explosion shot is added to the shots vector. Uh oh! Not enough capacity in the shots vector, reallocate! 4) His death function returns, and we go back to... the wrong shot, because of the reallocation. ##### Share on other sites Yeah, I believe that you shouldn't remove or add elements from a vector while you are iterating over it. ##### Share on other sites Thanks for the reply! I've done some further testing. I assumed that my above theory is correct: The problem is due to adding a shot to the shots vector inside of the "iterate through the shots" loop, which causes a reallocation of memory during the iteration through the vector. I came up with a theory explaining the semi-randomness of the bug: I think it only occurs if the creation of the "explosion" shot causes a reallocation of the vector. This would only happen if the vector was just at capacity beforehand. There are other shots flying about (from other NPCs, etc.), which might cause the reallocation upon their (safe, outside of the iteration) creation. 
This would also explain why restarting the whole program usually allows the bug to occur again. I clear the shots vector whenever the player dies or changes levels, but I'm betting that only changes the vector's size, not its capacity. So after quite a few shots flying about, and the vector being cleared upon level change, it would then take a long time before the vector hit capacity again. To test my theory, I disabled all AI, so no shots were created except for my own, and the "explosion" shot when the boss dies (I actually made the explosion occur for all NPCs' deaths for the test). After quite a few tests, I can reliably state that every single time, the first NPC, and ONLY the first NPC, exhibited the bug. This fits my theory, as the vector is reallocated for the creation of the second shot (the explosion shot) and the bug is caused. To further test the theory, I added vector_shots.reserve(100); to the beginning of the logic loop, so the shots vector would always be big enough to hold 100 shots. This worked as expected, and the bug did not show itself. Sorry if I'm being really wordy, I'm just trying to understand this issue. And if someone in the future has a similar issue/question, hopefully this will help. Can anyone verify that this is all true in C++? Unlike most of the bugs I create, I've had to get all scientific and statistical about testing this one. ##### Share on other sites Yeah, I believe that you shouldn't remove or add elements from a vector while you are iterating over it. You can remove elements while iterating over the vector if you remember to switch to the iterator returned by the erase method (as old iterators will be invalid). (This iterator will point to the element directly following the last erased element) You can add elements if you are looping over a vector without using iterators (accessing it as if it was an array) as reallocations won't matter then. For a list you can safely add elements while iterating (Removing requires you to switch to the iterator returned by erase still) ##### Share on other sites Regarding containers and iterator invalidation -- sometimes a std::deque offers a good choice in-between std::vector and std::list (depending on the insertion/deletion-location profile in your application): http://www.outofcore.com/2011/04/c-container-iterator-invalidation/ ##### Share on other sites Hmm, never really used iterators for deletion of elements in a vector, I just access it with the [] operator. Don't know why I'd use the iterator if I can just use it like an array, honestly. For deletion of elements, as long as the order of the elements in the vector doesn't matter, you could use std::swap to swap the element you want to delete with the last element in the vector, and then just call pop_back on the vector. Because if you delete an element from the middle of the vector, it needs to move all elements behind the one you deleted one slot forward. 
You would use an iterator to delete as this allows you to swap your container without having to rewrite your deletion code. Note that you should typedef the iterator definition for that container. ##### Share on other sites Hidden You can add elements if you are looping over a vector without using iterators (accessing it as if it was an array) as reallocations won't matter then. Actually, I think this is the exact opposite of what happens. Iterators to vectors are essentially pointers, and a reallocation will probably mess them up. The documentation for push_back says: "Reallocations invalidate all previously obtained iterators, references and pointers." However, if you access elements by index, reallocations won't break anything.
# Homework Help: Inclined pully table 1. Nov 5, 2012 ### lames 1. The problem statement, all variables and given/known data block1 2kg block2 2.5kg coefficient of static friction =.5 coefficient of kinetic friction= .3 A)how can you find the acceleration and tension? which coefficient do you use? #### Attached Files: • ###### Untitled.png File size: 1.6 KB Views: 112 Last edited: Nov 5, 2012 2. Nov 5, 2012 ### danielu13 For the Tension, you calculate the amount of tension in each section of string and add them together. The tension in T1 is calculated by the force of friction with is $F_{friction} = F_N\mu_k$ For this, you use the coefficient of kinetic friction, as I am assuming that the force is being calculated when the blocks are already moving. So, $T = T_1 + T_2$ $T_1 = F_N\mu_k$ $T_1 = 5.886 N$ $T_2 = mg$ $T_2 = 24.525 N$ $T = 30.411 N$ For the acceleration you use Newton's Second: $\Sigma\vec{F} = m\vec{a}$ The sum of the forces in the system that we are looking at is contained within the tension of the wires, so $30.411 N = (4.5 kg)a$ $a = 6.758 \frac{m}{s^2}$ 3. Nov 5, 2012 ### lames are you sure those are the right answers? did you look at the picture? 4. Nov 6, 2012 ### haruspex I disagree entirely with danielu3's analysis and answers. Since the (presumed) pulley is massless, the tension, T, is the same throughout the string. The first thing, strictly speaking, is to check that the masses will move. Will the tension overcome the static friction? Assuming so, the two masses each undergo the same acceleration, a. Write down the forces on each mass. What net force is required in each case to produce the acceleration a? What two equations does that give you? 5. Nov 6, 2012 ### danielu13 Yes, the tension in the string is constant. However, there are two components to the tension in the string; one is caused by the force of gravity acting on mass 2 and the other is the force of friction acting on mass 1. I may have issues where I used a positive instead of negative. Tension is one of the things that has always caused me issues though. 6. Nov 6, 2012 ### haruspex You can't add tensions like that. Consider two equal masses, m, hung over a light pulley by a string. The tension in the string is mg, not 2mg. Now make the masses unequal. What is the tension now? 7. Nov 6, 2012 ### danielu13 These pulley problems have always confused me for some reason. If there were two unequal masses on a pulley would the tension be $T = g\frac{m_1+m_2}{2}$ ? 8. Nov 6, 2012 ### haruspex Don't just guess, try to reason it out. Think about the acceleration. 9. Nov 6, 2012 ### danielu13 Well, based on what you said, I was thinking that the tension is based on the even distribution of the masses, which is why I'm thinking the average of the masses. But I have some feeling that it should be g(m2-m1) since the pulley makes the gravitational forces counteract each other. Am I at least thinking in the right direction right now? 10. Nov 6, 2012 ### haruspex Let the acceleration be a (down for the heavier, up for the lighter) and the tension be T. Draw a free body diagram for each mass and write out the F=ma equation for each.
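To make haruspex's last step concrete, here is a sketch of the two equations under one reading of the setup: block 1 ($$m_1 = 2$$ kg) sliding on a horizontal surface and block 2 ($$m_2 = 2.5$$ kg) hanging from the string. This geometry is inferred from the numbers used earlier in the thread, since the attached figure is not reproduced here. $$T-\mu_k m_1 g = m_1 a,\qquad m_2 g - T = m_2 a.$$ Adding the two eliminates $$T$$: $$a=\frac{(m_2-\mu_k m_1)\,g}{m_1+m_2}\approx\frac{(2.5-0.3\cdot 2)(9.8)}{4.5}\approx 4.1\ \mathrm{m/s^2},\qquad T=m_1(a+\mu_k g)\approx 14\ \mathrm{N}.$$ The kinetic coefficient is the one that appears once the blocks move; the static coefficient only enters when checking that motion starts at all, i.e. that $$m_2 g$$ exceeds $$\mu_s m_1 g$$ (here $$24.5\ \mathrm{N} > 9.8\ \mathrm{N}$$, so it does).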
# Duplicate Finding I’ve been getting pretty good signal from a CS-fundamentals-type interview question lately. It’s got quite a lot of solutions, and it’s pretty hard, while still admitting decent naive solutions, so it scales well with a variety of candidates. I’ll leave the merits of brain-teaser–though I think that’s a mis-characterization–interview questions to HN commenters or another post. ## The Question You’re given a mutable array of $$n$$ integers. Each one is valued in the range $$[0,n-2]$$, so there must be at least one duplicate. Return a duplicate. ## The Basics What’s nice here is we can build up a memory/runtime frontier even with some crude asymptotic analysis. Let’s work on an idealized machine where we can process pointers in constant time, but bits still aren’t free (i.e., we can’t treat the integers as being arbitrarily wide). One immediate solution is “bubble”. Let the input array be xs for x in xs: for y in (xs skipping x): if x == y: return y Nice, $$O(n^2)$$ runtime $$O(1)$$ extra space overhead down. Let’s get less naive. We might notice bubble linearly compares equality with every element; but a set can compare equality with multiple elements in constant time. vals = set() for x in xs: if x in vals: return x Great, for a hashset we now have $$O(n)$$ extra space overhead for a linear-time algorithm. One might ask is this “linear time” worst-case or average? A hash set with a doubling strategy that picks prime sizes and an identity hash function would actually avoid the worst-case separate chaining performance bad hash functions induce. But if we’re being pedantic, and we are, you’d need to build-in an algorithm to get arbitrarily large primes during resizes. Luckily we can avoid all this nonsense by switching the set to a bitset above for worst-case linear performance. One might be tempted to encode the bitset in the first bit (the sign bit) of the xs array, but this would still incur linear overhead according to our assumptions. Indeed, these assumptions are somewhat realistic (say n contains values at least up to $$2^{31}$$ and we use 4-byte integers). Now we might ask if there’s a nicer trade-off giving up runtime to save on memory use than going from the hash to the bubble approach. Indeed, we know another data structure that enables equality checks on multiple numbers in sublinear time: the sorted array! xs.sort() for x, y in zip(xs, xs[1:]): if x == y: return x Now, what sort would we use (from an interview perspective, it’s less valuable to see how many sorts do you know, but do you know a sort really well, and what is it doing)? What does your chosen language do? If you’re using python, then you get points for knowing Timsort is a combination of out-of-place mergesort and quicksort; inheriting worst-case $$O(n)$$ space and $$O(n\log n)$$ time from the former. Did you use Java? Props to you if you know JDK8 would be applying dual-pivot quick sort on the primitive type array (bonus bonus: timsort on non-primitives). If you choose quicksort, then you should be aware of its worst-case performance and how to mitigate it. One mitigation would be as in C++, to use a heapsort cutoff (“introsort”). Of course, this is all very extra, but someone who’s aware of what’s going on at all layers of the stack is very valuable. Now here’s where I tell you there are at least three different ways to achieve $$O(n)$$ run time and $$O(1)$$ extra memory overhead for this problem. They are all meaningfully different, and each one has a different runtime profile. 
Two of the solutions use $$\sim 2n$$ array accesses or sets in the worst case, and one of those has the bonus of being cache-friendly. For starters, we could just apply radix sort to the sorting solution above. This is kind-of cheating since radix sort really needs $$O(\log n)$$ passes over the data, but we assumed pointers and therefore indices into the array are fixed-width, so we should count this solution as allowed. After a few iterations, I’m convinced the most elegant solution to this problem is as follows: def unwind(xs): while xs[-1] != xs[xs[-1]]: xs[xs[-1]], xs[-1] = xs[-1], xs[xs[-1]] return xs[-1] The proof that the above takes linear time is the same as its correctness proof: every iteration, the amount of numbers “not in their slot” goes down by one. This value is bounded above by $$n$$. So the algorithm must terminate, and when it does, the loop condition is broken, which is proof a duplicate was found. I discovered a note by Yuval Filmus which performs a similar computation, but without modifying the array (at the cost of running a cycle algorithm, which would require a few more array accesses). While these solutions are quite satisfying in an aesthetic sense, they might not be the most performant in practice. For what it’s worth, I have not found the cache-friendly version online yet :)
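For completeness, here is a sketch of the read-only variant hinted at above (my paraphrase of the cycle-algorithm idea, not Filmus's exact construction). View i -> xs[i] as a functional graph on indices. Index n-1 has no incoming edge, because no value equals n-1, so the walk starting there is rho-shaped; the entrance of its cycle has two distinct preimages and is therefore a duplicated value. Floyd's tortoise-and-hare finds it in linear time and constant space without modifying the array:

```python
def find_duplicate_readonly(xs):
    # Phase 1: detect the cycle of the walk i -> xs[i] started at len(xs) - 1,
    # the only index that cannot be anyone's target.
    start = len(xs) - 1
    tortoise, hare = xs[start], xs[xs[start]]
    while tortoise != hare:
        tortoise = xs[tortoise]
        hare = xs[xs[hare]]
    # Phase 2: the cycle entrance is equidistant from `start` and the meeting
    # point, and it is a duplicated value.
    tortoise = start
    while tortoise != hare:
        tortoise = xs[tortoise]
        hare = xs[hare]
    return tortoise

# e.g. find_duplicate_readonly([1, 2, 3, 1, 0]) returns 1
```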
# use matrix method to solve the equation. 2x-y+3z=9, x+y+z=6, x-y+z=2
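A sketch of the standard matrix-method setup for this system, writing it as $$AX=B$$ and inverting $$A$$ (the arithmetic below is mine, not part of the original page): $$\begin{pmatrix} 2 & -1 & 3 \\ 1 & 1 & 1 \\ 1 & -1 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix}=\begin{pmatrix} 9 \\ 6 \\ 2 \end{pmatrix},\qquad \det A = -2 \neq 0,$$ so $$A^{-1}$$ exists and $$X=A^{-1}B$$ gives $$x=1$$, $$y=2$$, $$z=3$$. Substituting back: $$2(1)-2+3(3)=9$$, $$1+2+3=6$$ and $$1-2+3=2$$, as required.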
# Find $\lim_{n \rightarrow \infty} \int_{[0,2]} t^{1/n} \, d\mu(t)$ with Dirac measure I want to find the integral of: $$\lim_{n \rightarrow \infty} \int_{[0,2]} t^{1/n} \, d\mu(t)$$ where $$\mu=\delta_0 + \delta_1+ \delta_2$$ (Dirac measures) My problem is that I can't determine if the integral is 2 or 3. Because for $$t=0$$ I have $$0^{1/n} \rightarrow 0$$ so $$t=0$$ does not "contribute" to the integral. However it is part of the measure as $$\mu([0,2])=1+1+1=3$$. Maybe I can say that it is 3 almost everywhere but not sure Any help/hint would be appreciated Directly from definitions- $$\int_{[0,2]} t^{1/n} \, d\mu(t) = 0^{1/n} + 1^{1/n} + 2^{1/n} = 1 + 2^{1/n} \xrightarrow[n\to\infty]{} 2$$ • How are you using the measure? To me it looks more like the integral of a indicator function like $\int_{[0,2]} t^{1/n} \cdot 1_{\{0\},\{1\},\{2\}} d\mu(t)$. Furthermore $\mu([0,2])=3$ and I don't see where you are using this – Daniel Nov 3 '19 at 7:07 • @Daniel 1. Every function is equal $\mu$-almost everywhere to a simple function of three terms $a1_{\{0\}}+ b 1_{\{1\}} + c1_{\{2\}}$ (possibly taking negative or complex values). You can write it as three integrals against 3 different Dirac measures if that helps. 2. where do you use that the Lebesgue measure of $[0,1]$ is 1 when computing usual Lebesgue measure integrals over $[0,1]$? The answer is that you don't – Calvin Khor Nov 3 '19 at 8:42
TCAP Success Grade 4 MATH Chapter 5 ### TCAP Success Grade 4 MATH Chapter 5 Sample 1 pt 1. What is the least common multiple of 4 and 7? 1 pt 3. Which of the statements below is wrong? 1 pt 5. What fraction of the figure below is shaded? 1pt 7. Select all the fraction that are less than $\frac{1}{2}$.
# Finding Propagation of Uncertainty? You measure the mass of the cylinder to be m = 584.9 +- 0.5 grams, and you measure the length of the cylinder to be L = 18.195 +- 0.003 cm. Just like in the lab you performed, you now measure the diameter in eight different places and obtain the following results. Diameter (cm) 2.125 2.090 2.065 2.240 2.110 2.100 2.080 2.240 This gives an average of 2.131 +- 0.0695 This makes the density = 9.01 g/cm^3 +- propagation of uncertainty Trying to calculate this, I have: sqrt( ((1*0.5) / 584.9)^2 + ((-2 * 0,0695) / 2.131)^2 + ((-1 * 0.003) / 18.195)^2 ) = 0.0652 However the online grading system say's that I'm wrong. So where have I gone wrong with the uncertainty of the density. All of the other values have been graded and marked correct, so where did I mess up with the uncertainty. I'm not really clear where the "1", "-2", or "-1" came from in the formula above, I was basing it on my notes from class. gneill Mentor You measure the mass of the cylinder to be m = 584.9 +- 0.5 grams, and you measure the length of the cylinder to be L = 18.195 +- 0.003 cm. Just like in the lab you performed, you now measure the diameter in eight different places and obtain the following results. Diameter (cm) 2.125 2.090 2.065 2.240 2.110 2.100 2.080 2.240 This gives an average of 2.131 +- 0.0695 This makes the density = 9.01 g/cm^3 +- propagation of uncertainty Trying to calculate this, I have: sqrt( ((1*0.5) / 584.9)^2 + ((-2 * 0,0695) / 2.131)^2 + ((-1 * 0.003) / 18.195)^2 ) = 0.0652 However the online grading system say's that I'm wrong. So where have I gone wrong with the uncertainty of the density. All of the other values have been graded and marked correct, so where did I mess up with the uncertainty. It looks like you forgot to multiply the "sqrt" by the calculated value of the function being evaluated (the density value). I'm not really clear where the "1", "-2", or "-1" came from in the formula above, I was basing it on my notes from class. They are values that depend upon the exponent of the variable in the function. Write out the function being evaluated in one line (promote the variables in the denominator to the numerator and adjust exponents accordingly): $$f(m,d,L) = \frac{4}{\pi}M^1 d^{-2} L^{-1}$$ You can pick out the values as the exponents of the variables. Thus, for example, the value "-2" is associated with the diameter variable d. mfb Mentor You calculated the relative uncertainty, and both the formula and the result look good. Similar to gneill, I think the online grading system wants the absolute uncertainty. I've been meaning to get back here to say Thank You! Multiplying by the density was exactly what I needed to do for WebAssign (the online grading system) to accept the answer.
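In symbols, gneill's point is that the square root computed above is only the relative uncertainty, so it still has to be scaled by the density itself: $$\frac{\sigma_\rho}{\rho}=\sqrt{\left(\frac{\sigma_m}{m}\right)^2+\left(\frac{2\,\sigma_d}{d}\right)^2+\left(\frac{\sigma_L}{L}\right)^2}\approx 0.0652,\qquad \sigma_\rho\approx 9.01\ \mathrm{g/cm^3}\times 0.0652\approx 0.59\ \mathrm{g/cm^3}.$$ The factors 1, -2 and -1 are the exponents of $$m$$, $$d$$ and $$L$$ in $$\rho=\frac{4}{\pi}\,m\,d^{-2}L^{-1}$$, which is the point of gneill's remark about exponents.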
Search Question: arrayQualityMetrics error with MAList 0 6.1 years ago by Daniel Aaen Hansen90 wrote: Dear Wolfgang, I have now been testing the new options for increasing the number of arrays plotted, and I think it is a nice and very useful solution. The only option I lack, is the option to decide how the arrays are sorted. Eg for the spatial plot, arrays could be sorted by name, input order or by the F statistic. And for the MA plot, arrays could be sorted by name, input order or by the D statistic. I think, sorting by the statistic (D and F) is probably most usefull. I have a little problem when I want to create spatial plots for Agilent arrays. The "genes" table (RG$genes) does not contain X and Y columns but it contains Col and Row columns with the coordinates. Hence it is very easy to add X and Y which will result in the spatial plots being created: RG$genes$X = RG$genes$Col RG$genes$Y = RG$genes$Row It works for the spatial plots - they are in the report - but it fails when calculating outliers in the spatial plots. I get this error message: Error in plot.window(...) : need finite 'xlim' values In addition: Warning messages: 1: closing unused connection 3 (QC/index.html) 2: In min(values, na.rm = TRUE) : no non-missing arguments to min; returning Inf 3: In max(values, th, na.rm = TRUE) : no non-missing arguments to max; returning -Inf Agilent arrays contain empty spots (not present in the data files), which seems to be handled well when drawing the spatial plots. There are some white spots on the spatial plots which I assume are the empty spots. My best guess is that these empty/missing spots are creating problems during outlier detection. Can you see what is going on and how to solve it? Best Daniel On Apr 7, 2012, at 1:07 PM, Wolfgang Huber wrote: > Dear Daniel > > thank you for the suggestion. This is now possible with arrayQualityMetrics version >= 3.13.2, by setting the parameter 'maxNumArrays' to '+Inf'. See the manual page of 'aqm.maplot' and 'aqm.spatial for details. > > In addition, also all the other parameters of the report module function are now exposed and can be modified in a call to arrayQualityMetrics. > > As usual, it will take a few days for this version to percolate to the devel branch in the repository (but it is already in svn). > > Let me know your experience with it or if you have further suggestions. > > Best wishes > Wolfgang > > Mar/28/12 9:57 PM, Daniel Aaen Hansen scripsit:: >> Dear Wolfgang, >> >> The plots are definitely more useful now. Thanks. >> >> Regarding implementing an option for plotting all MA-plots and spatial >> plots, I would suggest a user option 'plot.all' which are false by >> default to mimic the current behavior. Then in the functions >> 'aqm.maplot' and 'spatialplot', I suggest adding: >> >> if (plot.all) { >> whj = order(stat, decreasing = TRUE) >> lay = c(4, ceiling(x$numArrays/4)) >> legOrder = "" >> } >> else if (x$numArrays <= 8) { >> ... (default behavior) >> >> Thus, if plot.all is set to true, all arrays are plottet and if not, it >> will behave as it does now. I still think it's useful to order plots by >> the statistic and I also changed the layout to always be 4 columns. This >> is only relevant to the MA-plots and spatial plots since for the other >> plots, all arrays are already included in the plots. >> >> Best, >> Daniel >> >> >> >> On Mar 15, 2012, at 11:17 PM, Wolfgang Huber wrote: >> >>> Dear Daniel >>> >>> thank you for pointing out this problem. 
In version 3.11.4, the values >>> for the red and green channels were computed by exponentiating (base >>> 2) the values R = A+M/2 and G = A-M/2. From version, 3.11.5 (in >>> subversion, should be on the web in a few days), no exponentiating is >>> done. I hope you will find that behaviour more pleasing. >>> >>> If questions persist, please send me your example dataset and script. >>> More below. >>> >>> Mar/9/12 12:33 PM, Daniel Aaen Hansen scripsit:: >>>> Dear Wolfgang, >>>> >>>> I just tested arrayQualityMetrics 3.11.4 on 'MAList' objects and it >>>> is working for me (it generates a report). However, I am not sure I >>>> understand what is going on. From the box plot and density plot it >>>> looks like it is using data that are not log transformed as some >>>> expression values are very large. Surprisingly, it makes no >>>> difference whether I set 'do.logtransform=FALSE' or >>>> 'do.logtransform=TRUE'. The plots are identical and looks as if they >>>> are made on non-log-transformed data. >>>> >>>> I also tried to convert the data manually, as you suggested: eset<- >>>> as(MA, 'ExpressionSet'). However, this yielded expression values from >>>> -9 to 13 and did not work: >>> >>> Yes, this creates an ExpressionSet from the M values only. The value >>> range you quote seems consistent with that. However, doing that >>> conversion is much inferior to the current functionality for MAList in >>> arrayQualityMetrics, so please ignore that previous suggestion. >>> >>>>> min(exprs(eset)) >>>> [1] -9.230704 >>>>> max(exprs(eset)) >>>> [1] 13.12851 >>>> >>>> Another question: is there a way to make arrayQualityMetrics create >>>> MA-plots and spatial plots for all arrays and not just a subset? I >>>> think that would be a nice feature, but I could not find it. For >>>> example, it would make it easier to compare MA-plots before and after >>>> normalization. As it is now, it may plot MA-plots for different >>>> arrays before and after normalization. >>> >>> Currently, it is not that easy, but of course, this being R and open >>> source, it is possible. You need to create your own version of the >>> function 'aqm.maplot', and e.g. replace the line >>> if (x$numArrays <= 8) { >>> by >>> if (TRUE) { >>> >>> Then you also need to make sure that the function >>> 'arrayQualityMetrics' finds your version of 'aqm.maplot'. This is done >>> either by putting the latter back into the namespace of the package >>> 'arrayQualityMetrics', or by also creating your own version of the >>> function 'arrayQualityMetrics' (which is quite short and simple, it >>> just successively calls the functions creating the various report parts.) >>> >>> See also the vignette "Advanced topics: Customizing >>> arrayQualityMetrics reports and programmatic processing of the output". >>> >>> If someone comes up with a user-interface that allows exposing this >>> (and similar) options to the user at the level of the >>> 'arrayQualityMetrics' function without creating a plethora of >>> parameter options, I will implement it in the package. >>> >>> (The 'philosophy' of the package is to produce a reasonable report for >>> almost everyone with default settings; whereas customisations can be >>> obtained through using the flexibility of the R language, and a >>> modular design of the package functionality - rather than banging in >>> lots and lots of parameters and options into a monolithic function.) 
>>> >>> Best wishes >>> Wolfgang >>> >>> Wolfgang Huber >>> EMBL >>> http://www.embl.de/research/units/genome_biology/huber >>> >>> >> > > > -- > Best wishes > Wolfgang > > Wolfgang Huber > EMBL > http://www.embl.de/research/units/genome_biology/huber > > [[alternative HTML version deleted]]
# Log-Likelihood Ratio (LLR) Demodulation This example shows the BER performance improvement for QPSK modulation when using log-likelihood ratio (LLR) instead of hard-decision demodulation in a convolutionally coded communication link. With LLR demodulation, one can use the Viterbi decoder either in the unquantized decoding mode or the soft-decision decoding mode. Unquantized decoding, where the decoder inputs are real values, though better in terms of BER, is not practically viable. In the more practical soft-decision decoding, the demodulator output is quantized before being fed to the decoder. It is generally observed that this does not incur a significant cost in BER while significantly reducing the decoder complexity. We validate this experimentally through this example. For a Simulink™ version of this example, see LLR vs. Hard Decision Demodulation in Simulink. ### Initialization Initialize simulation parameters. M = 4; % Modulation order k = log2(M); % Bits per symbol bitsPerIter = 1.2e4; % Number of bits to simulate EbNo = 3; % Information bit Eb/No in dB Initialize coding properties for a rate 1/2, constraint length 7 code. codeRate = 1/2; % Code rate of convolutional encoder constLen = 7; % Constraint length of encoder codeGenPoly = [171 133]; % Code generator polynomial of encoder tblen = 32; % Traceback depth of Viterbi decoder trellis = poly2trellis(constLen,codeGenPoly); Create a comm.ConvolutionalEncoder System object™ by using trellis as an input. enc = comm.ConvolutionalEncoder(trellis); Modulator and Channel Create a comm.QPSKModulator and two comm.QPSKDemodulator System objects. Configure the first demodulator to output hard-decision bits. Configure the second to output LLR values. qpskMod = comm.QPSKModulator('BitInput',true); demodHard = comm.QPSKDemodulator('BitOutput',true,... 'DecisionMethod','Hard decision'); demodLLR = comm.QPSKDemodulator('BitOutput',true,... 'DecisionMethod','Log-likelihood ratio'); The signal going into the AWGN channel is the modulated encoded signal. To achieve the required noise level, adjust the Eb/No for coded bits and multi-bit symbols. Calculate the $SNR$ value based on the ${E}_{b}/{N}_{o}$ value you want to simulate. SNR = convertSNR(EbNo,"ebno","BitsPerSymbol",log2(M),"CodingRate",codeRate); Viterbi Decoding Create comm.ViterbiDecoder objects to act as the hard-decision, unquantized, and soft-decision decoders. For all three decoders, set the traceback depth to tblen. decHard = comm.ViterbiDecoder(trellis,'InputFormat','Hard', ... 'TracebackDepth',tblen); decUnquant = comm.ViterbiDecoder(trellis,'InputFormat','Unquantized', ... 'TracebackDepth',tblen); decSoft = comm.ViterbiDecoder(trellis,'InputFormat','Soft', ... 'SoftInputWordLength',3,'TracebackDepth',tblen); Calculating the Error Rate Create comm.ErrorRate objects to compare the decoded bits to the original transmitted bits. The Viterbi decoder creates a delay in the decoded bit stream output equal to the traceback length. To account for this delay, set the ReceiveDelay property of the comm.ErrorRate objects to tblen. ### System Simulation Generate bitsPerIter message bits. Then convolutionally encode and modulate the data. txData = randi([0 1],bitsPerIter,1); encData = enc(txData); modData = qpskMod(encData); Pass the modulated signal through an AWGN channel. [rxSig,noiseVariance] = awgn(modData,SNR); Before using a comm.ViterbiDecoder object in the soft-decision mode, the output of the demodulator needs to be quantized. 
This example uses a comm.ViterbiDecoder object with a SoftInputWordLength of 3. This value is a good compromise between short word lengths and a small BER penalty. Define partition points for 3-bit quantization. demodLLR.Variance = noiseVariance; partitionPoints = (-1.5:0.5:1.5)/noiseVariance; Demodulate the received signal and output hard-decision bits. hardData = demodHard(rxSig); Demodulate the received signal and output LLR values. LLRData = demodLLR(rxSig); Hard-decision decoding Pass the demodulated data through the Viterbi decoder. Compute the error statistics. rxDataHard = decHard(hardData); berHard = errHard(txData,rxDataHard); Unquantized decoding Pass the demodulated data through the Viterbi decoder. Compute the error statistics. rxDataUnquant = decUnquant(LLRData); berUnquant = errUnquant(txData,rxDataUnquant); Soft-decision decoding Pass the demodulated data to the quantiz function. This data must be multiplied by -1 before being passed to the quantizer, because, in soft-decision mode, the Viterbi decoder assumes that positive numbers correspond to 1s and negative numbers to 0s. Pass the quantizer output to the Viterbi decoder. Compute the error statistics. quantizedValue = quantiz(-LLRData,partitionPoints); rxDataSoft = decSoft(double(quantizedValue)); berSoft = errSoft(txData,rxDataSoft); ### Running Simulation Example Simulate the previously described communications system over a range of Eb/No values by executing the simulation file simLLRvsHD. It plots BER results as they are generated. BER results for hard-decision demodulation and LLR demodulation with unquantized and soft-decision decoding are plotted in red, blue, and black, respectively. A comparison of simulation results with theoretical results is also shown. Observe that the BER is only slightly degraded by using soft-decision decoding instead of unquantized decoding. The gap between the BER curves for soft-decision decoding and the theoretical bound can be narrowed by increasing the number of quantizer levels. This example may take some time to compute BER results. If you have the Parallel Computing Toolbox™ (PCT) installed, you can set usePCT to true to run the simulation in parallel. In this case, the file LLRvsHDwithPCT is run. To obtain results over a larger range of Eb/No values, modify the appropriate supporting files. Note that you can obtain more statistically reliable results by collecting more errors. usePCT = false;
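As background for the sign conventions above (this summary is mine, not text from the shipped example): an exact-LLR demodulator computes, for each bit $$b$$ of a received sample $$r$$ with noise variance $$\sigma^2$$, $$L(b)=\ln\frac{\Pr(b=0\mid r)}{\Pr(b=1\mid r)}=\ln\frac{\sum_{s\in S_0}\exp\!\left(-|r-s|^2/\sigma^2\right)}{\sum_{s\in S_1}\exp\!\left(-|r-s|^2/\sigma^2\right)},$$ where $$S_0$$ and $$S_1$$ are the constellation points whose labels carry a 0 or a 1 in that bit position. Positive LLR values therefore favour a 0 bit, which is why the LLR data is negated before quantization, since the soft-decision Viterbi decoder expects positive inputs to mean 1.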
Electrical Understanding What is a compensator, and how does it operate? • A lead–lag compensator is a component in a control system that improves an undesirable frequency response in a feedback and control system. It is a fundamental building block in classical control theory. Applications: Lead–lag compensators influence disciplines as varied as robotics, satellite control, automobile diagnostics, laser frequency stabilization, and many more. They are an important building block in analog control systems, and can also be used in digital control. Given a control plant, desired specifications can be achieved using compensators. I, D, PI, PD, and PID are optimizing controllers used to improve system parameters (such as reducing steady-state error, reducing resonant peak, and improving system response by reducing rise time). All these operations can be done by compensators as well. Theory: Both lead compensators and lag compensators introduce a pole–zero pair into the open-loop transfer function. The transfer function can be written in the Laplace domain as $$\frac{Y(s)}{X(s)}=\frac{s-z}{s-p},$$ where $$X$$ is the input to the compensator, $$Y$$ is the output, $$s$$ is the complex Laplace transform variable, $$z$$ is the zero frequency and $$p$$ is the pole frequency. The pole and zero are both typically negative. In a lead compensator, the pole is left of the zero in the complex plane, $$|z| < |p|$$, while in a lag compensator $$|z| > |p|$$. A lead–lag compensator consists of a lead compensator cascaded with a lag compensator. The overall transfer function can be written as $$\frac{Y(s)}{X(s)}=\frac{(s-z_1)(s-z_2)}{(s-p_1)(s-p_2)}.$$ Typically $$|p_1| > |z_1| > |z_2| > |p_2|$$, where $$z_1$$ and $$p_1$$ are the zero and pole of the lead compensator and $$z_2$$ and $$p_2$$ are the zero and pole of the lag compensator. The lead compensator provides phase lead at high frequencies. This shifts the poles to the left, which enhances the responsiveness and stability of the system. The lag compensator provides phase lag at low frequencies, which reduces the steady-state error. The precise locations of the poles and zeros depend on both the desired characteristics of the closed-loop response and the characteristics of the system being controlled. However, the pole and zero of the lag compensator should be close together so as not to cause the poles to shift right, which could cause instability or slow convergence. Since their purpose is to affect the low-frequency behaviour, they should be near the origin.
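To see the phase-lead behaviour numerically, here is a small sketch. The pole and zero values are arbitrary illustrative choices (not taken from the text), and SciPy is assumed to be available:

```python
# Phase response of a single lead section (s - z)/(s - p) with hypothetical
# z = -1 rad/s and p = -10 rad/s (so |z| < |p|, both negative).
import numpy as np
from scipy import signal

z, p = -1.0, -10.0
lead = signal.TransferFunction([1, -z], [1, -p])      # (s + 1)/(s + 10)
w, mag, phase = signal.bode(lead, w=np.logspace(-2, 3, 400))
print(f"max phase lead ≈ {phase.max():.1f} deg at w ≈ {w[np.argmax(phase)]:.2f} rad/s")
# prints roughly 55 deg near w = sqrt(|z|*|p|) ≈ 3.2 rad/s, i.e. phase lead
# concentrated at mid-to-high frequencies, as described above.
```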
# American Institute of Mathematical Sciences February 2004, 10(1&2): 1-19. doi: 10.3934/dcds.2004.10.1 ## Preservation of spatial patterns by a hyperbolic equation 1 Department of Mathematics, University of California, Irvine, Irvine, CA 92697-3875, United States Received December 2001 Revised November 2002 Published October 2003 Oscillations in a nonlinear, strongly spatially inhomogeneous medium are described by scalar semilinear hyperbolic equations with coefficients strongly depending on spatial variables. The spatial patterns of solutions may be very complex; we describe them in terms of binary lattice functions and show that the patterns are preserved by the dynamics. We prove that even when solutions of the equations are not unique, the large-scale spatial patterns of solutions are preserved. We consider arbitrarily large spatial domains and show that the number of distinct invariant domains in the function space depends exponentially on the volume of the domain. Citation: A. V. Babin. Preservation of spatial patterns by a hyperbolic equation. Discrete and Continuous Dynamical Systems, 2004, 10 (1&2) : 1-19. doi: 10.3934/dcds.2004.10.1
```--- "Garg, Sanjeev" <[log in to unmask]> wrote: > Hi everybody. > > This might seem to be a very trivial problem for most of you. But its > not working out for me. > I have my test.xml file on Solaris, which adheres to DTD rules from > test.dtd. > > The way, I have included DTD in the XML is: > <!DOCTYPE TestXML SYSTEM "test.dtd"> > > But somehow, when I run my application and it tries to parse the XML > document, I get the following SAXParserException: > Relative URI "test.dtd"; can not be resolved without a document URI. > > Both, the xml file and dtd file are sitting in the same directory. > Not to take chance, I have tried giving the absolute path of the dtd. > > I was getting the same kind of error in Windows as well. It was because > of the way I referred to my DTD. > e.g > <!DOCTYPE TestXML SYSTEM "c:\\test.dtd"> was generating the same error. > But when I referred it as: > <!DOCTYPE TestXML SYSTEM "file:///c:/test.dtd"> ; it started working. > > Now how to do the same thing on a UNIX system???
# Help... I'm overwhelmed..

### I can run an average of 6 miles per hour. Is this continuous or discrete?

### How has the meaning of Thanksgiving changed for Americans over the years? For example, many Americans spend Thanksgiving sleeping outside of stores to get in early for Black Friday. Do Americans still care about the core meaning of Thanksgiving?

### This question tests your knowledge of what activities are included in calculating GDP. Decide for each item whether it is part of GDP and, if so, to which expenditure category it belongs. 4.1. The cost of launching a space shuttle by NASA is: A. Not counted in GDP B. Counted as part of consumption C. Counted as part of investment D. Counted as part of government E. Counted as part of net exports

### Calculate the pH of a solution that contains 2.7 M HF and 2.7 M HOC6H5. Also, calculate the concentration of OC6H5- in this solution at equilibrium. Ka(HF) = 7.2×10-4; Ka(HOC6H5) = 1.6×10-10. pH = ? [OC6H5-] = ? M

### What is 1.8×0.5×3.4+2.6=

### WILL MARK BRAINLIEST!!! 100 POINTS!! please and thank you 6. What is a cell? 7. What is a battery? 8. How do open and closed circuits differ? 9. What is the purpose of a switch? 10. Which type of current involves electrons moving in only one direction? 11. Which type of current involves electrons moving back and forth? 12. Which type of current is transferred in power lines?

### How would steamboats help the economy of the new America? •MADE IT EXPENSIVE •CAUSED TRANSPORTATION PROBLEMS •DIVIDED THE NATION •OPENED UP WORLD TRADE

### Find the 87th term of the arithmetic sequence –19, -13, -7,... (a worked sketch for this one follows after this list of questions)

### With what racial issue, promoted by Booker T. Washington, did W.E.B. Du Bois DISAGREE? A. The idea that education was important. B. The idea that economic independence and a technical education alone would lead to social change. C. The idea that all black Americans should move to the North for more opportunities. D. The idea that a "Talented Tenth" of black Americans would lead social change.
### Contemporary research suggests that the storm-and-stress notion of adolescence is

### What are three ways that pedigrees are beneficial?

### How does illustration relate to water?

### More easy math for points

### When non-nerve cells become involved in response to signals, which type of receptor goes into action? (A) intracellular receptor (B) endomembrane receptor (C) ion channel-linked receptor (D) enzyme-linked receptor

### What does haute couture mean?

### Write an algebraic expression to represent the phrase "the difference between two times b and eleven".

### Look at this painting by the Mughal painter Bichitr. This work shows the influence of A. European culture B. Japanese culture C. Chinese culture D. Islamic culture
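A minimal worked sketch for the arithmetic-sequence question above, using the standard formula $a_n = a_1 + (n-1)d$ (the only assumption is that the pattern continues with common difference $d = -13-(-19) = 6$):

$$a_{87} = -19 + (87-1)\cdot 6 = -19 + 516 = 497.$$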
Explain why this is consistent with $\ce{XeF2}$ having a linear structure. The vibrational modes of $\ce{XeF2}$ are at $555$, $515$ and $213~\mathrm{cm^{-1}}$, but only two are IR active. A triatomic linear molecule has $3n-5 = 4$ vibrational modes; the bend of $\ce{XeF2}$ is doubly degenerate, so only three distinct frequencies are observed, and in a centrosymmetric F-Xe-F arrangement the symmetric stretch produces no change in dipole moment and is therefore IR inactive. Seeing only two IR-active bands is exactly what a linear structure predicts.

$\ce{XeF2}$ has no dipole moment. The molecule is linear, F-Xe-F, and both ends have the same electron density around them. The two fluorine atoms sit symmetrically on either side of the central xenon atom, so the two polarized Xe-F bond dipoles are equal in magnitude, opposite in direction, and cancel; there is no net dipole and xenon difluoride is nonpolar. Five electron pairs surround the Xe atom (two bonding pairs and three lone pairs); the three lone pairs lie in the same equatorial plane and the molecular geometry is linear, which can be rationalized by $sp$ or $sp^3d$ hybridization of the central atom. In chemistry, polarity is a separation of electric charge leading to a molecule or its chemical groups having an electric dipole or multipole moment.

Xenon difluoride is a dense, white crystalline solid with a nauseating odour and low vapor pressure. It is a powerful fluorinating agent and one of the most stable xenon compounds; like most covalent inorganic fluorides it is moisture-sensitive, and it decomposes on contact with light or water vapor but is otherwise stable in storage.

The size of a dipole is measured by its dipole moment ($\mu$). When two electrical charges of opposite sign and equal magnitude are separated by a distance, an electric dipole is established. Dipole moments are measured in debye units; one debye equals $3.34\times10^{-30}~\mathrm{C\,m}$ (the charge multiplied by the separation). Polar molecules must contain polar bonds, arising from a difference in electronegativity between the bonded atoms, but as $\ce{XeF2}$ shows, polar bonds alone do not produce a molecular dipole when the geometry is symmetric.

Some related examples: xenon tetrafluoride ($\ce{XeF4}$) is square planar and likewise has no net dipole. Chlorine trifluoride ($\ce{ClF3}$) has three polarized bonds that combine to produce a small molecular dipole along the Cl-F bond. Bromotrichloromethane ($\ce{CCl3Br}$) is tetrahedral; the less polar C-Br bond does not quite balance the three more polar C-Cl bonds, so there is a small molecular dipole. The hybridization of the central atom in $\ce{O3}$ is $sp^2$, and the molecular structure of $\ce{SOCl2}$ is trigonal pyramidal. For $\ce{SF4}$, with $6 + 4\times7 = 34$ valence electrons, the sulfur atom retains one nonbonding pair and the geometry is a seesaw. Five linear species ($\ce{I3-}$, $\ce{CO2}$, $\ce{HgCl2}$, $\ce{ICl2-}$, $\ce{XeF2}$) have zero dipole moment.

ChemTube3D by Nick Greeves is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 2.0 UK: England & Wales License.
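A minimal sketch of the cancellation argument, writing $\mu_{\mathrm{XeF}}$ for the magnitude of a single Xe-F bond dipole (a symbol introduced here only for the illustration): taking the molecular axis as $\hat z$, the two bond dipoles of the linear, symmetric molecule add as

$$\vec{\mu}_{\mathrm{net}} = \mu_{\mathrm{XeF}}\,\hat z + \mu_{\mathrm{XeF}}\,(-\hat z) = \vec 0,$$

whereas in a bent geometry with bond angle $2\theta < 180^\circ$ the resultant along the bisector would be $2\mu_{\mathrm{XeF}}\cos\theta \neq 0$. The absence of a measurable dipole moment is therefore further evidence for the linear arrangement.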
# Calculating the theoretical percent purity of a recrystallization

The sample contains some percent A and some percent B, with A being the majority. The solubility of both is given at two different temperatures: one where both are very soluble and one where both are much less soluble. From these five values I'm supposed to determine the theoretical purity of the recrystallized compound.

I'm surprised that this is possible and I have no idea how to even begin approaching it. I always thought that crystallization was structure-dependent as well. Shouldn't the lattice favor the dominant compound? What if the impurity is less soluble at the low temperature than the majority compound? How can we mathematically represent such a (seemingly) complex phenomenon with so little information?

Note: I don't mean percent yield; the question specifically said purity.

I can give you a hint: imagine you have a solution containing a single constituent $A$ in a solvent. Writing the mass balances (total mass, and mass of $A$) over the crystal and liquid phases, you have:

$$\begin{cases} m^\mathrm{t}=m^\mathrm{c}+m^\mathrm{liq} \\ m_A^\mathrm{i}=x_A^\mathrm{c}\cdot m^\mathrm{c}+x_A^\mathrm{liq} \cdot m^\mathrm{liq} \end{cases}$$

where $m^\mathrm{t}$, $m^\mathrm{c}$ and $m^\mathrm{liq}$ denote respectively the total mass of the system, the mass of the crystal phase and the mass of the liquid phase, $m_A^\mathrm{i}$ is the initial mass of $A$, and the $x$ are the corresponding mass fractions of $A$. Eliminating $m^\mathrm{liq}$, you obtain

$$m^\mathrm{c}=m^\mathrm{t}\left(\frac{x_A^\mathrm{liq}-\frac{m_A^\mathrm{i}}{m^\mathrm{t}}}{x_A^\mathrm{liq}-x_A^\mathrm{c}}\right)$$

Notice that $\frac{m_A^\mathrm{i}}{m^\mathrm{t}}=x_{A,\mathrm{i}}^\mathrm{liq}$ is the mass fraction of $A$ in the liquid at the beginning, while $x_A^\mathrm{liq}$ is the mass fraction in the liquid at the end (set by the solubility at the final temperature). If you know what you put in at the beginning, you can determine the yield. If you want the purity, do the same for both components to see how much of each has crystallized at the end, and then compute the purity of the majority compound. I hope it can help, have a good day! :) Be careful when you write the balances if there is also a gas phase, which can happen sometimes.

For such a seemingly complex problem, you are usually expected to use an ideal case as an approximation. Here, I suppose you should use "ideal crystallization", so that A and B form pure crystals. The contamination of A by the impurity B will then only depend on their relative solubilities in the medium. HTH
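A minimal numerical sketch of the ideal-crystallization approximation (the masses and solubilities here are made-up values for illustration only, not data from the question): suppose the sample is 10 g of A plus 1 g of B, dissolved in 100 mL of hot solvent, and that at the cold temperature the solubilities are 2 g per 100 mL for A and 0.8 g per 100 mL for B. If each substance crystallizes as a pure solid and neither affects the other's solubility, then 2 g of A and 0.8 g of B stay dissolved, the crystals contain $10-2=8$ g of A and $1-0.8=0.2$ g of B, and the theoretical purity of the recovered solid is

$$\frac{8}{8+0.2}\approx 0.976\approx 98\%.$$

If instead the cold-temperature solubility of B exceeded the 1 g present (say 1.5 g per 100 mL), all of B would remain in solution and the crystals would be nominally pure A, which is exactly the situation the ideal-case approximation is meant to capture.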
HOW THE PARAMETER ε INFLUENCE THE GROWTH RATES OF THE PARTIAL QUOTIENTS IN GCFε EXPANSIONS

Authors: Zhong, Ting; Shen, Luming

Abstract: For generalized continued fraction (GCF) with parameter $\epsilon(k)$, we consider the size of the set whose partial quotients increase rapidly, namely the set $E_{\epsilon}(\alpha):=\{x\in(0,1]: k_{n+1}(x)\geq k_n(x)^{\alpha}\ \text{for all}\ n\geq 1\}$, where $\alpha>1$. We in [6] have obtained the Hausdorff dimension of $E_{\epsilon}(\alpha)$ when $\epsilon(k)$ is constant or $\epsilon(k)\sim k^{\beta}$ for any $\beta\geq 1$. As its supplement, now we show that:

$$\dim_H E_{\epsilon}(\alpha)=\begin{cases} \dfrac{1}{\alpha}, & \text{when } -k^{\delta}\leq\epsilon(k)\leq k \text{ with } 0\leq\delta<1;\\ \dfrac{1}{\alpha+1}, & \text{when } -k-\rho<\epsilon(k)\leq -k \text{ with } 0<\rho<1;\\ \dfrac{1}{\alpha+2}, & \text{when } \epsilon(k)=-k-1+\dfrac{1}{k}. \end{cases}$$

So the bigger the parameter function $\epsilon(k_n)$ is, the larger the size of $E_{\epsilon}(\alpha)$ becomes.

Keywords: $GCF_{\epsilon}$ expansion; Engel series expansion; parameter function; growth rates; Hausdorff dimension

Language: English

Cited by
1. How the dimension of some GCFϵ sets change with proper choice of the parameter function ϵ(k), Journal of Number Theory, 2017, 174, 1
2. Some dimension relations of the Hirst sets in regular and generalized continued fractions, Journal of Number Theory, 2016, 167, 128

References
1. K. J. Falconer, Fractal Geometry, Mathematical Foundations and Application, Wiley, 1990.
2. F. Schweiger, Continued fraction with increasing digits, Oster Akad. Wiss. Math.-Natur. Kl. Sitzungsber. II 212 (2003), 69-77.
3. L. M. Shen and Y. Zhou, Some metric properties on the GCF fraction expansion, J. Number Theory 130 (2010), no. 1, 1-9.
4. J. Wu, How many points have the same Engel and Sylvester expansions, J. Number Theory 103 (2003), no. 1, 16-26.
5. T. Zhong, Metrical properties for a class of continued fractions with increasing digits, J. Number Theory 128 (2008), no. 6, 1506-1515.
6. T. Zhong and L. Tang, The growth rate of the partial quotients in a class of continued fractions with parameters, J. Number Theory 145 (2014), 388-401.
## Non-Hermitian Perturbations to the Fritzsch Textures of Lepton and Quark Mass Matrices

Harald Fritzsch, Zhi-zhong Xing, Ye-Ling Zhou

We show that non-Hermitian and nearest-neighbor-interacting perturbations to the Fritzsch textures of lepton and quark mass matrices can make both of them fit current experimental data very well. In particular, we obtain \theta_{23} \simeq 45^\circ for the atmospheric neutrino mixing angle and predict \theta_{13} \simeq 3^\circ to 6^\circ for the smallest neutrino mixing angle when the perturbations in the lepton sector are at the 20% level. The same level of perturbations is required in the quark sector, where the Jarlskog invariant of CP violation is about 3.7 \times 10^{-5}. In comparison, the strength of leptonic CP violation is possible to reach about 1.5 \times 10^{-2} in neutrino oscillations.

http://arxiv.org/abs/1101.4272
High Energy Physics – Phenomenology (hep-ph)

## Implementation and testing of Lanczos-based algorithms for Random-Phase Approximation eigenproblems

Myrta Grüning, Andrea Marini, Xavier Gonze

The treatment of the Random-Phase Approximation Hamiltonians, encountered in different frameworks, like Time-Dependent Density Functional Theory or Bethe-Salpeter equation, is complicated by their non-Hermicity. Compared to their Hermitian Hamiltonian counterparts, computational methods for the treatment of non-Hermitian Hamiltonians are often less efficient and less stable, sometimes leading to the breakdown of the method. Recently [Gruning et al. Nano Lett. 8, 2820 (2009)], we have identified that such Hamiltonians are usually pseudo-Hermitian. Exploiting this property, we have implemented an algorithm of the Lanczos type for random-Phase Approximation Hamiltonians that benefits from the same stability and computational load as its Hermitian counterpart, and applied it to the study of the optical response of carbon nanotubes. We present here the related theoretical grounds and technical details, and study the performance of the algorithm for the calculation of the optical absorption of a molecule within the Bethe-Salpeter equation framework.

http://arxiv.org/abs/1102.3909
Materials Science (cond-mat.mtrl-sci); Mathematical Physics (math-ph)

## The effective Hamiltonian for thin layers with non-Hermitian Robin-type boundary conditions

Denis Borisov, David Krejcirik

The Laplacian in an unbounded tubular neighbourhood of a hyperplane with non-Hermitian complex-symmetric Robin-type boundary conditions is investigated in the limit when the width of the neighbourhood diminishes. We show that the Laplacian converges in a norm resolvent sense to a self-adjoint Schroedinger operator in the hyperplane whose potential is expressed solely in terms of the boundary coupling function. As a consequence, we are able to explain some peculiar spectral properties of the non-Hermitian Laplacian by known results for Schroedinger operators.

http://arxiv.org/abs/1102.5051
Spectral Theory (math.SP); Mathematical Physics (math-ph)

## Optical Spectral Singularities as Threshold Resonances

Spectral singularities are among generic mathematical features of complex scattering potentials. Physically they correspond to scattering states that behave like zero-width resonances.
For a simple optical system, we show that a spectral singularity appears whenever the gain coefficient coincides with its threshold value and other parameters of the system are selected properly. We explore a concrete realization of spectral singularities for a typical semiconductor gain medium and propose a method of constructing a tunable laser that operates at threshold gain. http://arxiv.org/abs/1102.4695 Optics (physics.optics); High Energy Physics – Theory (hep-th); Mathematical Physics (math-ph); Quantum Physics (quant-ph) ## Periodic orbits for classical particles having complex energy Alexander G. Anderson, Carl M. Bender, Uriel I. Morone This paper revisits earlier work on complex classical mechanics in which it was argued that when the energy of a classical particle in an analytic potential is real, the particle trajectories are closed and periodic, but that when the energy is complex, the classical trajectories are open. Here it is shown that there is a discrete set of eigencurves in the complex-energy plane for which the particle trajectories are closed and periodic. http://arxiv.org/abs/1102.4822 Mathematical Physics (math-ph); High Energy Physics – Theory (hep-th) ## Making the Case for Conformal Gravity Philip D. Mannheim We review some recent developments in the conformal gravity theory that has been advanced as a candidate alternative to standard Einstein gravity. As a quantum theory the conformal theory is both renormalizable and unitary, with unitarity being obtained because the theory is a $PT$ symmetric rather than a Hermitian theory. We show that in the theory there can be no a priori classical curvature, with all curvature having to result from quantization. In the conformal theory gravity requires no independent quantization of its own, with it being quantized solely by virtue of its being coupled to a quantized matter source. Moreover, because it is this very coupling that fixes the strength of the gravitational field commutators, the gravity sector zero-point energy density and pressure fluctuations are then able to identically cancel the zero-point fluctuations associated with the matter sector. In addition, we show that when the conformal symmetry is spontaneously broken, the zero-point structure automatically readjusts so as to identically cancel the cosmological constant term that dynamical mass generation induces. We show that the macroscopic classical theory that results from the quantum conformal theory incorporates global physics effects that provide for a detailed accounting of a comprehensive set of 110 galactic rotation curves with no adjustable parameters other than the galactic mass to light ratios, and with the need for no dark matter whatsoever. With these global effects eliminating the need for dark matter, we see that invoking dark matter in galaxies could potentially be nothing more than an attempt to describe global physics effects in purely local galactic terms. Finally, we review some recent work by ‘t Hooft in which a connection between conformal gravity and Einstein gravity has been found. 
http://arxiv.org/abs/1101.2186 High Energy Physics – Theory (hep-th); Cosmology and Extragalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc) ## Armchair graphene nanoribbons: PT-symmetry breaking and exceptional points without dissipation Maurizio Fagotti, Claudio Bonati, Demetrio Logoteta, Paolo Marconcini, Massimo Macucci We consider a single layer graphene nanoribbon with armchair edges in a longitudinally constant external potential and point out that its transport properties can be described by means of an effective non-Hermitian Hamiltonian. We show that this system has some features typical of dissipative systems, namely the presence of exceptional points and of PT-symmetry breaking, although it is not dissipative. http://arxiv.org/abs/1102.2129v1 Mesoscale and Nanoscale Physics (cond-mat.mes-hall); Quantum Physics (quant-ph) ## PT-Symmetric Oligomers: Analytical Solutions, Linear Stability and Nonlinear Dynamics K. Li, P.G. Kevrekidis In the present work we focus on the case of (few-site) configurations respecting the PT-symmetry. We examine the case of such “oligomers” with not only 2-sites, as in earlier works, but also the cases of 3- and 4-sites. While in the former case of recent experimental interest, the picture of existing stationary solutions and their stability is fairly straightforward, the latter cases reveal a considerable additional complexity of solutions, including ones that exist past the linear PT-breaking point in the case of the trimer, and more complex, even asymmetric solutions in the case of the quadrimer with nontrivial spectral and dynamical properties. Both the linear stability and the nonlinear dynamical properties of the obtained solutions are discussed. http://arxiv.org/abs/1102.0809 Pattern Formation and Solitons (nlin.PS); Quantum Gases (cond-mat.quant-gas) ## PT-symmetry breaking and laser-absorber modes in optical scattering systems Y. D. Chong, Li Ge, A. Douglas Stone Using a scattering matrix formalism, we derive the general scattering properties of optical structures that are symmetric under a combination of parity and time-reversal (PT). We demonstrate the existence of a transition beween PT-symmetric scattering eigenstates, which are norm-preserving, and symmetry-broken pairs of eigenstates exhibiting net amplification and loss. The system proposed by Longhi, which can act simultaneously as a laser and coherent perfect absorber, occurs at discrete points in the broken symmetry phase, when a pole and zero of the S-matrix coincide. http://arxiv.org/abs/1008.5156 Optics (physics.optics); Quantum Physics (quant-ph) ## Non-Hermitian Euclidean random matrix theory A. Goetschy, S.E. Skipetrov We develop a theory for the eigenvalue density of arbitrary non-Hermitian Euclidean matrices. Closed equations for the resolvent and the eigenvector correlator are derived. The theory is applied to the random Green’s matrix relevant to wave propagation in an ensemble of point-like scattering centers. This opens a new perspective in the study of wave diffusion, Anderson localization, and random lasing. http://arxiv.org/abs/1102.1850 Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech)
# I've just been asked to write a minimal example, what is that? I was posting some question about this strange thing LaTeX is doing when I try to compile my thesis. Someone asked me to provide a minimal example that reproduces the problem. My thesis is now a few hundred pages long and spans along ten different source files! How am I supposed to know what or where is the cause of the problem? Do you have any hints or ideas on how should I go about producing such a small example? - ## migrated from tex.stackexchange.comJul 31 '10 at 18:58 This question came from our site for users of TeX, LaTeX, ConTeXt, and related typesetting systems. When posting a minimal working example, also consider posting what you have tried; community members might find this helping in trying to solve your problem. –  Werner Jan 30 '13 at 1:51 Why this question and a synthesis of the answer are not present in the help center? –  Clément Jun 26 at 17:33 Most questions about code are made more answerable by the addition of a minimal working example (MWE) or short, self-contained, correct example (SSCCE). Your question may have an obvious answer, or it may not. You the questioner probably do not know the difference between the two! So think about it from the point of view of the answerer. To ensure that the answer she provides is sound, she will probably want to create a sample document, make the changes she recommends, and check that it works. If she thinks she knows the answer but doesn't have the time to create a sample document, she may in the interest of quality control refrain from answering at this time. So make it easy for potential answerers! Give them a block of code that can be copied, pasted directly into a file, compiled, tweaked, corrected, and pasted back. Here minimal means that the problem is isolated and there is nothing in the sample that distracts from the error you have or the effect you desire. Working doesn't mean the problem is solved :-), it means the code sample can be copied-and-pasted and directly compiled. The site sscce.org is dedicated to the topic. There are also several articles on the topic as it applies specifically to LaTeX: ## Errors I didn't see your question, but I'm guessing that you got some error you haven't seen before, and said something to the effect of I tried to compile, but I got the error Undefined control sequence.' How can I fix it? This is next to impossible to answer, unless you expect only answers from psychics. Instead, look at the line numbers, and post that paragraph or sentence, depending on the scope of the error. You'll have to add any referenced packages or other components which are critical to the compilation of the section, but make it as focused as possible on the part which is the source of the error. If there are no helpful line numbers, (make a backup and then) begin removing things you added after the last successful compile. If you remove everything you just added, perform a binary search by removing halves of the document until you can narrow it down. Then, post the section which caused the problem. A better approach to the "I tried to compile..." example above would be something like: I am trying to typeset the following equation: \documentclass{article} \begin{document} Alpha particles (named after and denoted by the first letter in the Greek alphabet,$\alpha$) consist of two protons and two neutrons bound together. This means that an $\alfa$ particle is a helium nucleus. 
\end{document} But when I compile it with LaTeX I get this error message: [22] Undefined control sequence: \alfa An $\alfa$ particle is a helium nucleus. What am I doing wrong? ## Enhancements Other questions are not related to errors, but to a desired effect that the user wants to achieve, of the form: I want to put (thing) at (place). How do I do that? The MWE in this case should: • be a full document including \documentclass, a preamble, \begin{document}, and \end{document}. • include enough dummy text to make the document look like the one you're writing. The lipsum, kantlipsum and blindtext packages are helpful here in that they provide macros to make lots of text without actually having to type lots of text. • have your best-so-far implementation of the desired effect. Try to keep your vocabulary accessible—do not assume that because many experts in your field use TeX that all TeX users are familiar with your field. A question about typesetting affine Dynkin diagrams will not get answered until you find someone who knows TeX and math. If you have enough reputation to include images, please do. Images of both the desired effect (mocked up or copied from another document), and the result of your best-so-far implementation. If you do not have enough reputation to include images, upload to http://imgur.com/ anyway and post the links (an editor will include the images for you). If you do not have enough reputation to post links, enter the bare URLs. - I've never seen th abbreviation 'SSCCE': I think 'MWE' is much more common. –  Joseph Wright Jul 29 '10 at 20:41 SSCCE, I take it, is more often used on programming forums for Java and C. MWE is much more common in the TeX community after some recent research. Edited. –  Kevin Vermeer Jul 29 '10 at 21:01 Good answer, but this isn’t really an MWE, right? I thought that the point of MWEs is that they are complete working (La)TeX documents. –  Konrad Rudolph Jul 30 '10 at 14:15 @ Konrad - Good point. Edited! However, next time, feel free to edit it yourself. –  Kevin Vermeer Jul 31 '10 at 15:34 @Kurt I don't think that adding \listfiles is good: it can convey the idea that \listfiles is a required command in all documents. –  egreg Feb 26 '13 at 11:13 @egreg; No problem. There are a lot of questions on TeX.Se were \listfiles was helpful. So my idea was to add it here to show it people asked to prepare a MWE. Perhaps a second MWE with listfile and an explanation why would be better? Here or in the next aanswer? –  Kurt Feb 26 '13 at 13:49 @Caramdir -- you seem to be the source of the suggestion "(make a backup)". i'd like to suggest an alternative (i hesitate to edit the text directly, as it's quite venerable by now): "make a copy (and edit that)". even if the source is under version control, it's usually not a good idea to make changes that might not get reverted when one is rushed into production. (to be continued with a different suggestion.) –  barbara beeton Sep 22 at 15:48 another bit that might be added (although the text is already very long) is how to make a "minimal" image to insert. recently i've noticed a number of images that consisted of an entire, nearly empty, page, with text too small to be read. a link to a suitable explanation should suffice. i'm not sure there's an existing great answer, but that can be remedied; i'll try to research the available postings. –  barbara beeton Sep 22 at 15:50 This answer focuses more on minimalizing the code, rather than finding the source of the problem, as the top-voted answer does. 
It is intended to be concise and hands-on, but digestible rather than exhaustive. Suggestions for improvement welcome! Here are some strategies for reducing your code, which will help you get better and faster answers, since it will be clearer what your problem is and the other users will see that you put some effort into producing a concise Minimal Working Example. Thanks for that! Most likely, not all of these things will apply to your question, so just pick what does apply. # Document Class \documentclass{MyUniversitysThesisClass} ### + Good: \documentclass{article} Using a non-standard document class? Does your problem still show up with article? Then use article. # Document Class Options \documentclass[12pt, a5paper, final, oneside, onecolumn]{article} ### + Good: \documentclass{article} Using any options for your document class? Does your problem still show up without them? Then get rid of them. \usepackage{booktabs} % hübschere Tabllen, besseres Spacing \usepackage{colortbl} % farbige Tabellenzellen \usepackage{multirow} % mehrzeilige Zellen in Tabellen \usepackage{subfloat} % Sub-Gleitumgebungen ### + Good: \usepackage{booktabs} \usepackage{colortbl} \usepackage{multirow} \usepackage{subfloat} You put comments in your code to remember what packages are there for? Great habit, but usually not necessary in a MWE – get rid of them. \usepackage{a4wide} \usepackage{amsmath, amsthm, amssymb} \usepackage{url} \usepackage[algoruled,vlined]{algorithm2e} \usepackage{graphicx} \usepackage[ngerman, american]{babel} \usepackage{booktabs} \usepackage{units} \usepackage{makeidx} \makeindex \usepackage[usenames,dvipsnames]{color} \usepackage{colortbl} \usepackage{epstopdf} \usepackage{rotating} ### + Good: % Assuming your problem is related e.g. to the rotation of a figure, you might need: \usepackage{graphicx} \usepackage{rotating} You’ve developed an awesome template with lots of helpful packages? Does your problem still show up if you remove some or even most of them? Then get rid of those that aren’t necessary for reproducing the problem. (If you should later find out that another package is complicating the situation, you can always ask another question or edit the existing question.) In most cases, even packages like inputenc or fontenc are not necessary in MWEs, even though they are essential for many non-English documents in “real” documents. # Images \includegraphics{graphs/dataset17b.pdf} ### + Good: \usepackage[demo]{graphicx} .... \includegraphics{graphs/dataset17b.pdf} ### + Good: \usepackage{mwe} % just for dummy images .... \includegraphics{example-image} Your problem includes an image? Does your problem show up with any image? Then use the option demo for the package graphicx – this way, other users who don’t have your image file won’t get an error message because of that. If you prefer an actual image that you can rotate, stretch, etc., use the mwe package, which provides a number of dummy images, named e.g. example-image. # Text In \cite{cite:0}, it is shown that $\Delta \subset {U_{\mathcal{{D}}}}$. Hence Y. Q. Qian's characterization of conditionally uncountable elements was a milestone in constructive algebra. Now it has long been known that there exists an almost everywhere Clifford right-canonically pseudo-integrable, Clairaut subset \cite{cite:0}. The groundbreaking work of J. Davis on isomorphisms was a major advance. In future work, we plan to address questions of uniqueness as well as degeneracy. 
Thus in \cite{cite:0}, the main result was the classification of meromorphic, completely left-invariant systems. ### + Good: \usepackage{lipsum} % just for dummy text ... \lipsum[1-3] ### + Good: Foo bar baz. Need a few paragraphs of text to demonstrate your problem? Use a package that produces dummy text. Popular choices are lipsum (plain paragraphs) and blindtext (can produce entire documents with section titles, lists, and formulae). Need just a tiny amount of text? Then keep it maximally simple; avoid formulae, italics, tables – anything that’s not essential to the problem. Popular choices for dummy words are foo, bar, and baz. # Bibliography Files ### + Good: \usepackage{filecontents} \begin{filecontents*}{\jobname.bib} @book{Knu86, author = {Knuth, Donald E.}, year = {1986}, title = {The \TeX book}, } \end{filecontents*} \bibliography{\jobname} % if you’re using BibTeX \addbibresource{\jobname.bib} % if you’re using biblatex Need a .bib file to reproduce your problem? Use a maximally simple entry embedded in a filecontents environment in the preamble. During the compilation, this will create a .bib file in the same directory as the .tex file, so users compiling your code only need to save one file by themselves. Another option for biblatex would be to use the file biblatex-examples.bib, which should be installed with biblatex by default. You can find it in bibtex/bib/biblatex/. # Data Never include data as an image. Number of points Values 10 100 20 400 30 1200 40 2345 etc... ### + Good: \usepackage{filecontents} \begin{filecontents*}{data.txt} Number of points, Values 10, 100 20, 400 30, 1200 40, 2345 \end{filecontents*} Including the data as part of the MWE makes the example portable as well. Of course, the input may differ depending on what package you use to manage the data (some require CSV, some don't). Formatting of code is done using Markdown. See the relevant FAQ How do I mark code blocks?. There also exists some syntax-highlighting, a discussion of which can be following at What is syntax highlighting and how does it work?. With the above in mind, don't post your code in comments, since comments only support a limited amount of Markdown. # Posting a Picture of Your Output It’s often helpful to see what your current, faulty output looks like. If you’re not sure how to do that, have a look at How does one add a LaTeX output to a question/answer? and how can i upload an image to be included in a question or answer?. Selection of packages inspired by Inconsistent rotations with \sidewaysfigure. Math ramble generated by Mathgen. Bibliography sample from lockstep’s question biblatex: Putting thin spaces between initials. - This is excellent. We could use this as a basis for a more useful mwe tag wiki. –  Charles Stewart Jan 31 '13 at 20:11 Indeed, this is really great work! Thanks! –  Jake Jan 31 '13 at 20:16 @CharlesStewart Thanks! If others find this useful, we could either use it as a tag wiki, or link here from the tag wiki. I think I’d lean towards the latter option, just because collectively edited answers tend to see more action and “peer review” than tag wikis: Post edits displayed at the top of the main (meta) page, whereas a tag wiki edit is only (?) visible in the suggested edits history (if review is required), which I don’t think many people look at on meta. –  doncherry Feb 2 '13 at 16:22 I have to agree with Charles Stewart and Jake. 
This is long-needed advice on how to create an MWE from already existing documents rather than to actually produce an example at all (i.e. Do-It-For-Me questions or “There is an error. Help!”). Re Comments: Not all comments in MWEs are bad. (Though I agree that those package-describing comments are mostly unnecessary, and, interestingly, always in German.) Re mwe: It is actually not necessary to load mwe (only to install it); the images are included without it. Though loading mwe has its perks, as chapter 2 of the manual elaborates. –  Qrrbrbirlbel Feb 3 '13 at 1:43

@Qrrbrbirlbel Re Comments: True, but I don’t expect users to follow these guidelines stubbornly. If they think they really need a comment, they’re gonna have it. Note how I even inserted comments myself for mwe and lipsum. Generally, I aimed at brevity and 90% accuracy, rather than 100% accuracy. Re mwe: The same idea applies here. I didn’t want to go through the pain of explaining how you can use a package’s files even if it’s not loaded. Loading it just makes everything easier – and I might even consider it good practice, just to indicate where the files come from. –  doncherry Feb 3 '13 at 15:12

@doncherry I would suggest adding \listfiles to build the table of used packages in the log file, for posting it in the question. –  Kurt Feb 25 '13 at 22:30

@Kurt As egreg remarked on the other answer: No need to add this to every MWE – I’d be afraid it’d cause more confusion to new users than it would actually help. If we suspect an installation isn’t up-to-date, it’s easy enough to request \listfiles “manually”. –  doncherry Mar 16 '13 at 18:24

@doncherry No problem. Perhaps a new question like "How to check if my system is up to date?" –  Kurt Mar 16 '13 at 21:29

@Speravir I purposely omitted kantlipsum because it doesn’t seem to add any functionality over lipsum, and it was more important to me to keep things simple rather than giving a complete overview of the options available. –  doncherry Jan 26 at 11:13

There's a sample file called xampl.bib (installed with all TeX distributions) that can be used for dummy citations. (The glossaries package also comes with a load of dummy entries that can be used in MWEs.) –  Nicola Talbot Aug 15 at 10:48

Hopefully if your thesis is a few hundred pages long, you haven't been experiencing the problem you're asking about since you began it! If you were having the problem the whole time, then you could probably construct a minimal example basically by pretending to start a new thesis in the same way you started your existing one.

In the more likely scenario that you've just encountered a new problem, the best approach I've found is to start commenting out the newest pieces you've added, one at a time, and seeing when the problem goes away. At that point, you can try to make a new minimal document containing only the part you had to remove to make the problem go away – if all goes well, you'll be able to make the new (much smaller) document reproduce the same problem. It's actually not uncommon to end up finding the solution yourself when you start trying to construct a minimal example, but if that's not the case, then once you have it you can add it to your original question to find help here. Good luck!

-

Binary search is usually the fastest solution. Remove roughly 1/2 of your thesis. If the problem persists, repeat. If not, undo and remove the other 1/2 of your thesis.
Of course it isn't always quite that straightforward, but even if you have hundreds of pages of text, you can fairly quickly reduce the problem to a short fragment of code. –  Jukka Suomela Jul 29 '10 at 20:34

+1 for observing that this is a good way to debug the problem oneself –  Norman Gray Jul 29 '10 at 21:23

+1 For a newbie this is the most useful strategy. I am writing my thesis and I do this often. I also use binary search. –  denilw Feb 8 '11 at 21:18

For those like me who do not have a text editor that allows commenting out part of your .tex, for the binary search you can use the verbatim package, which allows you to use \begin{comment} ... \end{comment} commands. –  Gopi Jul 11 '11 at 13:55

Please note that the "minimal" in minimal working example means that the document should not contain any (yes, any!) code which isn't related to the error. If a code line can be removed without changing the error/issue, it doesn't belong in the example.

If a non-standard class is used1 for the original document and the error/issue still happens with a standard class, then a standard class should be substituted. This normally means article or, if \chapter is required, report or book.

It is not required to use the minimal class for a minimal working example! In fact, it should be avoided, to prevent issues with missing definitions which are present in any real class; cf. also "Why should the minimal class be avoided?".

1 Standard classes are: article, book, report and letter.

-

There’s a new package on CTAN called mwe, providing some features for creating MWEs. This is the announcement text:

mwe provides several files useful for creating minimal working examples (MWEs). A mwe package is provided which loads a small set of often-used packages for MWEs. In addition, several different images are provided which will be installed in the TEXMF tree, so that they can be used in any (La)TeX document. This allows different users to easily share MWEs which include image commands without requiring shared image files or replacement code.

See Martin’s announcement here on TeX.SX

-

Thanks for posting it here. I was planning to do so, but I didn't find the time yet. –  Martin Scharrer May 10 '12 at 8:16

-

–  Kevin Vermeer Jul 29 '10 at 20:58

@KevinVermeer It's been moved to dickimaw-books.com/latex/minexample but thanks for mentioning it :-) –  Nicola Talbot Feb 4 at 18:53

What is a minimal working example? is a work of some regulars of de.comp.text.tex. Here they describe in detail how to produce a minimal working example.

-
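Putting several of the strategies above together, here is a sketch of what a complete, self-contained MWE might look like for a hypothetical figure-plus-citation problem. The class, packages, and bib entry are only placeholders chosen for illustration, not a template that fits every question:

\documentclass{article}

\usepackage{graphicx}     % only because the (hypothetical) problem involves a figure
\usepackage{lipsum}       % just for dummy text
\usepackage{filecontents}

\begin{filecontents*}{\jobname.bib}
@book{Knu86,
  author = {Knuth, Donald E.},
  year   = {1986},
  title  = {The \TeX book},
}
\end{filecontents*}

\begin{document}

\lipsum[1]

\begin{figure}
  \centering
  \includegraphics[width=0.5\textwidth]{example-image}% dummy image installed by the mwe package
  \caption{Foo bar baz \cite{Knu86}.}
\end{figure}

\lipsum[2]

\bibliographystyle{plain}
\bibliography{\jobname}

\end{document}

Everything needed to compile this (after a BibTeX run) is contained in the single .tex file, which is exactly the point of an MWE.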
# DC Power Jack - minus to chassis?

Is it OK to connect the minus from a DC power supply, 15 V 4 A (switched), to a metal chassis as ground? No electronics will be connected to the chassis. The jacks that I have found seem to be made of metal, which to me looks like the “minus” will end up on the chassis.

1. Is that OK?
2. Does the minus act as ground through the power supply?

• What do you mean with "power jack"? Input or output? If it's an AC-to-DC power supply then it's quite common to connect the input ground (earth) directly to the chassis. – Rohat Kılıç Jul 24 '18 at 19:42
• Input, female. The jack in the radio that accepts the AC-to-DC power adapter plug. – Robert Pettersson Jul 24 '18 at 19:46
• Technically it's impossible to answer this from the information provided. You're hinting at a grounded-chassis "negative ground" setup, which is indeed very common. But other situations exist - ones where the supply is isolated from the chassis, and also ones (certain older automobiles, ongoing telecom installations) where ground is positive. – Chris Stratton Jul 24 '18 at 19:52
• OK, sorry for that. I have this old radio from the 60s. It is a wooden box with a metal back plate. I will not use any of the electronics inside, just keep it for the weight, and put a Raspberry Pi together with a JustBoom amp HAT and connect it to the speakers. These are all isolated from the metal back plate and the rest of the old components. To power the Pi, I will use an AC-to-DC power supply, 15 V 4 A, and I will drill a hole in the back and attach a female DC jack, 5.5/2.1 mm. – Robert Pettersson Jul 24 '18 at 20:02
• I will not use any of the electronics inside .... why are you asking about powering a radio then? .... I think that you need to rewrite your question and ask about powering an RPi ..... actually, your question makes no sense at all – jsotola Jul 25 '18 at 0:52
# What is fma() in C?

The fma() (“fused multiply-add”) function computes the expression (a * b + c) as a single ternary operation: the intermediate product is evaluated to infinite precision, and the result is rounded only once, to the precision that fits in the output data type.

## Library

To use the fma() function, include the following library:

#include <math.h>

## Syntax

The declaration of the fma() function is shown below:

double fma (double a, double b, double c);

The fma() function computes the expression (a * b + c) on its input parameters and returns the result.

## Code

Consider the code snippet below, which uses fma() to compute (a * b + c):

#include <stdio.h>
#include <math.h>

int main() {
  double a = 1.556532;
  double b = 2.389987;
  double c = 0.147897;

  double x = fma(a, b, c);

  printf("fma ( %f, %f, %f ) = ( %lf ) \n", a, b, c, x);

  return 0;
}

## Explanation

The call to fma() computes the expression (a * b + c) on the variables a, b, and c declared at the top of main() and prints the result.
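To see the effect of the single rounding, one can compare fma(a, b, c) against the separately rounded a * b + c. The sketch below is only illustrative: the variable names and constants are arbitrary, and whether the two printed results actually differ depends on the platform's floating-point arithmetic and on the compiler not contracting a * b + c into an fma on its own (e.g. -ffp-contract on GCC/Clang controls this).

#include <stdio.h>
#include <math.h>

int main() {
  /* eps = 2^-30 is exactly representable, so a and b are exact doubles,
     but their exact product 1 - 2^-60 is not representable as a double. */
  double eps = ldexp(1.0, -30);
  double a = 1.0 + eps;
  double b = 1.0 - eps;
  double c = -1.0;

  double rounded_twice = a * b + c;    /* product rounded to a double, then the sum rounded */
  double rounded_once  = fma(a, b, c); /* whole expression rounded only once */

  printf("a * b + c    = %.20g\n", rounded_twice);
  printf("fma(a, b, c) = %.20g\n", rounded_once);

  return 0;
}

On a typical IEEE 754 system the first line prints 0 while the second prints a tiny negative number (-2^-60), because fma() keeps the low-order bits of the product that the plain expression discards before the addition.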
• CommentRowNumber1. • CommentAuthorUrs • CommentTimeNov 11th 2009

Fixed the comments in the reference list at model structure on dg-algebras: Gelfand-Manin just discuss the commutative case. The noncommutative case seems to be due to the Jardine reference. Or does anyone know an earlier one?

• CommentRowNumber2. • CommentAuthorUrs • CommentTimeNov 11th 2009

I have a question at model structure on dg-algebras on the invariant characterization of commutative dg-algebras within all dg-algebras.

• CommentRowNumber3. • CommentAuthorUrs • CommentTimeNov 11th 2009

I added an "Idea" section to model structure on dg-algebras

• CommentRowNumber4. • CommentAuthorUrs • CommentTimeNov 11th 2009 • (edited Nov 11th 2009)

I added to model structure on dg-algebras a theorem from the book by Igor Kriz and Peter May about how commutative dg-algebras already exhaust, up to weak equivalence, all homotopy-commutative dg-algebras. (this might better fit into some other entry eventually, though)

• CommentRowNumber5. • CommentAuthorUrs • CommentTimeNov 11th 2009

Added the statement that the forgetful functor from commutative dg-algebras to all dg-algebras is the right adjoint part of a Quillen adjunction with respect to the given model structures.

• CommentRowNumber6. • CommentAuthorzskoda • CommentTimeNov 11th 2009

I would look into Hinich's 1995 or so paper, for a possible earlier reference.

• CommentRowNumber7. • CommentAuthorUrs • CommentTimeNov 12th 2009 • (edited Nov 12th 2009)

created a section on cofibrations in CdgAlg and hence on Sullivan algebras. Created entry for Dennis Sullivan in that context. Always nice to create an entry and see that the "timeline"-entry was already requesting it.

I suppose I'll have the following sorted out in a minute, but maybe somebody is quicker with helping me: in the definition of a relative Sullivan algebra $(A,d) \hookrightarrow (A \otimes \wedge V, d')$ I suppose we do require that $d'$ restricted to $A$ acts like $d$ plus a term that contains elements in $V$?

• CommentRowNumber8. • CommentAuthorUrs • CommentTimeNov 29th 2010

I started at model structure on dg-algebras a new subsection on Unbounded dg-algebras, stating a basic existence result.

• CommentRowNumber9. • CommentAuthorUrs • CommentTimeDec 9th 2010

added a section Simplicial hom-objects to model structure on dg-algebras on the derived hom-spaces for unbounded commutative dg-algebras (over a field of char 0).

• CommentRowNumber10. • CommentAuthorUrs • CommentTimeDec 9th 2010 • (edited Dec 9th 2010)

okay, I think I finally have assembled the full proof that the derived copowering of unbounded commutative dg-algebras (over a field of char 0) over degreewise finite simplicial sets is given by the polynomial-differential-forms-on-simplices functors. I have written this out now at Derived powering over sSet.

This provides the missing detail for the discussion that $\mathcal{O}(S^1) \simeq k \oplus k[-1]$ over at Hochschild cohomology.

This is supposed to be all very obvious, but it took me a bit to assemble all the details properly, anyway. I now moved other discussion of the model structure on commutative unbounded dg-algebras from the Hochschild entry over to model structure on dg-algebras. This mainly constitutes the other new section, Derived co-powering over sSet.
• CommentRowNumber11. • CommentAuthorUrs • CommentTimeDec 10th 2010

I added statement and proof that $([n] \mapsto Hom(A, B \otimes \Omega^\bullet_{poly}(\Delta[n])))$ is a Kan complex when $A$ is cofibrant. Also added an earlier reference by Ginot et al. where it is discussed that their “derived copowering” of cdgAlg over sSet indeed lands in commutative dg-algebras. I’ll try to add more technical details on how this works later on.

• CommentRowNumber12. • CommentAuthorUrs • CommentTimeJul 16th 2014 • (edited Jul 16th 2014)

There was an old question of how (to which extent) the homotopy theory of commutative dg-algebras is homotopy-faithful inside that of all dg-algebras. Recently there appeared some discussion of this issue; I have added pointers to this here.

• CommentRowNumber13. • CommentAuthorUrs • CommentTimeFeb 21st 2017 • (edited Feb 21st 2017)

I have added to the section on the Bousfield-Gugenheim model structure a subsection on its simplicial hom-complexes, which almost, but not quite, make for a simplicial model category structure.

• CommentRowNumber14. • CommentAuthorUrs • CommentTimeAug 19th 2020

trying to bring some order into the list of references, adding some subsections…

• CommentRowNumber15. • CommentAuthorUrs • CommentTimeAug 19th 2020

have equipped more of the Definitions/Propositions with pointers to page-and-verse in Bousfield-Gugenheim and in Gelfand-Manin.

• CommentRowNumber16. • CommentAuthorUrs • CommentTimeAug 21st 2020 • (edited Aug 21st 2020)

Added full publication data to:

Around Example 3.7, these authors make (somewhat implicitly) the observation that the Bousfield-Gugenheim model structure on connective rational dgc-algebras (which B&G and later Gelfand & Manin establish by laborious checks) is simply the one right-transferred from the projective model structure on chain complexes – which makes the proof that relative Sullivan models are cofibrations a triviality. So this is all very nice, and highlighted as such in Hess’s review. But neither of these authors states this as a theorem that could be properly cited as such; instead they leave it at side remarks. Is there any author who has published this in more citeable form?

• CommentRowNumber17. • CommentAuthorUrs • CommentTimeAug 23rd 2020

• CommentRowNumber18. • CommentAuthorUrs • CommentTimeSep 1st 2020

added statement (here) that quasi-isos are preserved by pushout along relative Sullivan algebras

• CommentRowNumber19. • CommentAuthorUrs • CommentTimeJul 7th 2021

I have added (here) statement and proof of the change-of-scalars Quillen adjunction

$\big( dgcAlg^{\geq 0}_k \big)_{proj} \underoverset {\underset{res_{\mathbb{Q}}}{\longrightarrow}} {\overset{ (-) \otimes_{\mathbb{Q}} \mathbb{R} }{\longleftarrow}} {\bot_{\mathrlap{Qu}}} \big( dgcAlg^{\geq 0}_{\mathbb{Q}} \big)_{proj}$

• CommentRowNumber20. • CommentAuthorTodd_Trimble • CommentTimeJul 11th 2021

Changed $\mathbb{R}$ to $k$.

• CommentRowNumber21. • CommentAuthorUrs • CommentTimeJul 11th 2021

Ah, right. Thanks.