# Thread: remainders over divisions

1. ## remainders over divisions

Hey guys, if I had a question like this:

QUESTION: find the remainders when the following numbers are divided by 89. i) $809$ ii) $800^8$

is there an easy way to work this out? Thanks!

2. First note that $-a\equiv b-a\pmod{b}$. It is easy to see that $10 \times 89 = 809 + 81$. Move 81 to the left side and consider both sides modulo 89. If you represent 800 as $a \times 89 + b$, then $800^8=(a \times 89 + b)^8$. By the Binomial theorem (or just by expanding) one sees that every term has the factor 89 except for the last one, namely $b^8$.
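A quick numerical check of both answers (added for illustration, not part of the original thread): Python's `%` operator and three-argument `pow` confirm the remainders and mirror the binomial-theorem argument, since $800 \equiv -1 \pmod{89}$.

```python
# Check the remainders modulo 89; pow(base, exp, mod) reduces at every
# multiplication, which mirrors the binomial argument: only b**8 survives,
# with b = 800 % 89.
print(809 % 89)         # 8
print(800 % 89)         # 88, i.e. -1 modulo 89
print(pow(800, 8, 89))  # 1, since (-1)**8 = 1
```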
# Quantum Field Theory - variational principle

## Main Question or Discussion Point

Quantum Field Theory -- variational principle

In non-relativistic quantum mechanics, the ground state energy (and wavefunction) can be found via the variational principle, where you take a function of the n particle positions and try to minimize the expectation value of that function with the Hamiltonian. In relativistic quantum mechanics (but fixed particle number, like the Dirac equation), the same can be said.

Is there something equivalent in quantum field theory? What exactly would I be varying, as there doesn't really seem to be a wavefunction anymore? Could one write the state as a sum of a wavefunction in front of each possible number of particles? Something like:
$$\Psi = \left( \phi(\mathbf{r}_1)a^{\dagger}(\mathbf{r}_1) + \phi(\mathbf{r}_1,\mathbf{r}_2)a^{\dagger}(\mathbf{r}_1)a^{\dagger}(\mathbf{r}_2) + \phi(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3)a^{\dagger}(\mathbf{r}_1)a^{\dagger}(\mathbf{r}_2)a^{\dagger}(\mathbf{r}_3) + ... \right) |0\rangle$$

Is there an exact solution to the hydrogen atom in quantum field theory? Or do they use the relativistic quantum mechanics solutions as a starting point and do perturbations around that?

Fredrik
Staff Emeritus
Gold Member
I don't know the answer to your question, so I'll just mention that I don't consider relativistic wave equations like Klein-Gordon and Dirac to be "relativistic quantum mechanics". I think of "QM" as all that stuff about Hilbert spaces, not including the Schrödinger equation, and the way to define a theory of a single non-interacting particle in that framework is to impose the requirement that the Hilbert space is the vector space associated with an irreducible representation of the symmetry group of spacetime. If you take that group to be the Galilei group, you have non-relativistic QM. If you use the Poincaré group, you have special relativistic QM. (Both include a Schrödinger equation, because the time translation subgroup is represented by operators that satisfy U(t)U(t')=U(t+t'), which implies U(t)=exp(-iHt) and therefore i dU/dt=HU.) I think of the field equations and the procedure of "canonical quantization" as a way to explicitly construct those representations.

Physics Monkey
Homework Helper
There is definitely something like it in quantum field theory. One of the most famous variational states of all time is the BCS state for superconductivity. It has the form (conventionally written in momentum space)
$$| \text{BCS} \rangle \sim \prod_k (u_k + v_k \, c^+_{k \,\text{up}} c^+_{-k \,\text{down}} ) | \text{vac} \rangle$$
where c^+ is an electron creation operator. If you expand out the product you will see that this state is exactly a sum of terms of the form you wrote. This is a variational quantum state of the quantum field theory describing the non-relativistic electrons in the superconductor.

Could one write the state as a sum of a wavefunction in front of each possible number of particles? Something like:
$$\Psi = \left( \phi(\mathbf{r}_1)a^{\dagger}(\mathbf{r}_1) + \phi(\mathbf{r}_1,\mathbf{r}_2)a^{\dagger}(\mathbf{r}_1)a^{\dagger}(\mathbf{r}_2) + \phi(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3)a^{\dagger}(\mathbf{r}_1)a^{\dagger}(\mathbf{r}_2)a^{\dagger}(\mathbf{r}_3) + ... \right) |0\rangle$$
I would say that this is a pretty good representation of the most general state in QFT.

Is there an exact solution to the hydrogen atom in quantum field theory?
In a good approximation the state vector of the hydrogen atom belongs to the 2-particle sector "1 proton + 1 electron". The full Hamiltonian of QFT does not conserve the number of particles, so a more exact state vector has admixtures from higher sectors "1 proton + 1 electron + N photons + M electron-positron pairs". However, in order to see hydrogen as an eigenstate of the QED Hamiltonian H you'll need to use the "dressed particle" version of H. The presence of the bound state electron+proton is not obvious when looking at the "traditional" QED Hamiltonian.

Eugene.

Is there something equivalent in quantum field theory? What exactly would I be varying as there doesn't really seem to be a wavefunction anymore? Could one write the state as a sum of a wavefunction in front of each possible number of particles?

There is most definitely a wavefunction in relativistic quantum field theory. It's just that everyone has been hiding it from you. See Samalkhaiat's lengthy post at https://www.physicsforums.com/showthread.php?t=388556&page=2 for details. There was a program in the '80s in which people tried to apply variational principles to quantum field theory; in fact, if I remember correctly, Richard Feynman was quite involved. One can write an ansatz for the wavefunction of the ground state (vacuum state) of an interacting quantum field theory. In fact, this is what the Gaussian effective potential is based on.

One can write an ansatz for the wavefunction of the ground state (vacuum state) of an interacting quantum field theory.

What the xxxx is "the wavefunction of the vacuum state"? Vacuum, by definition, is a state with no particles. Any experiment performed on vacuum yields a null result. So, the vacuum wavefunction must be just plain zero.

Eugene.

Haelfix
People have in mind QCD there, where the vacuum is most assuredly not empty.

People have in mind QCD there, where the vacuum is most assuredly not empty.

What experiment can be performed in the "QCD vacuum" to demonstrate that it is "not empty"?

Eugene.

Haelfix
You know that what you ask is impossible of course. However I can tell you what would instantly falsify the proposition: the discovery of an isolated quark in nature. Anyway, without delving into semantics at this point, there is no such thing as a Fock vacuum for physical QCD (it is unstable); whatever 'it' is, is something else and essentially the definition of color confinement.

You know that what you ask is impossible of course.

I just wanted to draw your attention to this curious situation: In QCD we first assume the existence of particles (quarks, gluons) which have not been observed. Then, in order to justify their non-observability, we introduce some strange vacuum, whose properties cannot be observed either. Then we postulate the existence of some "confinement mechanism", which, as far as I know, is still hypothetical. This piling up of unprovable assumptions looks rather suspicious to me. I can believe that QCD makes accurate predictions for certain observations. But still ...

Eugene.

I just wanted to draw your attention to this curious situation: In QCD we first assume the existence of particles (quarks, gluons) which have not been observed. Then, in order to justify their non-observability, we introduce some strange vacuum, whose properties cannot be observed either. Then we postulate the existence of some "confinement mechanism", which, as far as I know, is still hypothetical. This piling up of unprovable assumptions looks rather suspicious to me.
I can believe that QCD makes accurate predictions for certain observations. But still ...

Eugene.

There is a widespread misconception that for every quantum field in the theory there must correspond a particle. I believe this misconception has been the source of many conceptual problems for students of field theory. One should view the quarks and gluons not as particles, but only as fields. And the quantum dynamics are so severe that the states hardly resemble the elementary excitations of these fields. Because phi-4 theory and QED exhibit a milder form of this, quantum excitations seem to resemble the field content of the theory.

Physics Monkey
Homework Helper

There is a widespread misconception that for every quantum field in the theory there must correspond a particle. I believe this misconception has been the source of many conceptual problems for students of field theory. One should view the quarks and gluons not as particles, but only as fields. And the quantum dynamics are so severe that the states hardly resemble the elementary excitations of these fields. Because phi-4 theory and QED exhibit a milder form of this, quantum excitations seem to resemble the field content of the theory.

Indeed, I like this sentiment very much. In condensed matter physics we suffered from a similar kind of confusion for many years. The high energy (a few eV) description of basically any terrestrial condensed matter system is in terms of electrons and the Coulomb interaction (plus nuclei). In my opinion, the field seemed to think for a long time that this somehow meant that the low energy physics always contained objects (quasiparticles) that basically resemble electrons. It was a great moment when we understood clearly that the low energy physics is potentially much richer, including things like "deconfinement" where the low energy degrees of freedom resemble fractions or pieces of the electrons. QCD is quite similar. The high energy description is in terms of nearly free quarks and gluons, but the low energy description of dilute QCD matter is in terms of totally different objects.

samalkhaiat

What the xxxx is "the wavefunction of the vacuum state"?

It is the lowest-energy (ground state) solution of the Schrödinger equation, often written as
$$\Psi_{0}(x) = \langle x | 0 \rangle \equiv \langle x |\Psi_{0} \rangle$$
For the harmonic oscillator, it is given by the following everywhere-positive function:
$$\langle x|0 \rangle = \left( \frac{m\omega}{\pi \hbar}\right)^{1/4} \exp (-m\omega x^{2}/2 \hbar)$$

Vacuum, by definition, is a state with no particles.

Indeed, it is a state with no real quanta. However, this does not mean no energy. Again, the vacuum oscillator energy is $\hbar\omega/2$.

Any experiment performed on vacuum yields a null result.

This is plain wrong! Even in the vacuum, a cupful of Helium (which is a quantum object) does not solidify (at sufficiently low temperatures at atmospheric pressure). Sir, it is already an industry.

So, the vacuum wavefunction must be just plain zero.

I find your "logic" rather strange! I have just given you an example where it is NON-ZERO EVERYWHERE.

regards

sam

Indeed, it is a state with no real quanta. However, this does not mean no energy. Again, the vacuum oscillator energy is $\hbar\omega/2$.

I don't think that there is a good analogy between vacuum and harmonic oscillator. (Yes, I've read QFT textbooks where such an analogy is proposed, but I am not convinced.)
So, if the ground state of the oscillator has a non-zero energy, this does not mean that the vacuum energy must be non-zero (actually, infinite!) too. IMHO, (measurable) energy can be associated only with particles. Vacuum is a state without particles. So, the vacuum energy is exactly zero.

Even in the vacuum, a cupful of Helium (which is a quantum object) does not solidify (at sufficiently low temperatures at atmospheric pressure). Sir, it is already an industry.

Sorry, I missed your point. By "vacuum" I mean empty space. If there is a "cupful of Helium" then it is not vacuum. There are no physical objects in the real vacuum. So, any measurement performed there should yield a null result. That's my point.

Eugene.
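As an editorial aside (not part of the thread): the variational principle the opening post starts from can be illustrated numerically in ordinary single-particle quantum mechanics. The sketch below, in Python/NumPy, minimizes the energy of a Gaussian trial state for the 1D harmonic oscillator (with $\hbar=m=\omega=1$); the grid, the trial family, and the finite-difference kinetic term are choices made purely for illustration, and the minimum lands at the exact ground-state energy 1/2.

```python
import numpy as np

# Trial family psi_a(x) ~ exp(-a x^2 / 2); the exact ground state has a = 1.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def energy(a):
    psi = np.exp(-a * x**2 / 2.0)
    psi /= np.sqrt(np.sum(psi**2) * dx)            # normalize on the grid
    d2psi = np.gradient(np.gradient(psi, dx), dx)  # crude second derivative
    kinetic = -0.5 * np.sum(psi * d2psi) * dx      # <psi| -1/2 d^2/dx^2 |psi>
    potential = 0.5 * np.sum(x**2 * psi**2) * dx   # <psi| x^2/2 |psi>
    return kinetic + potential

a_grid = np.linspace(0.2, 3.0, 57)
energies = [energy(a) for a in a_grid]
i = int(np.argmin(energies))
print(f"best a = {a_grid[i]:.2f}, E(a) = {energies[i]:.4f}")  # ~ a = 1.00, E = 0.5
```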
# MLIR Multi-Level IR Compiler Framework # MLIR Python Bindings Current status: Under development and not enabled by default ## Building ¶ ### Pre-requisites ¶ • A relatively recent Python3 installation • Installation of python dependencies as specified in mlir/python/requirements.txt ### CMake variables ¶ • MLIR_ENABLE_BINDINGS_PYTHON:BOOL Enables building the Python bindings. Defaults to OFF. • Python3_EXECUTABLE:STRING Specifies the python executable used for the LLVM build, including for determining header/link flags for the Python bindings. On systems with multiple Python implementations, setting this explicitly to the preferred python3 executable is strongly recommended. • MLIR_BINDINGS_PYTHON_LOCK_VERSION:BOOL Links the native extension against the Python runtime library, which is optional on some platforms. While setting this to OFF can yield some greater deployment flexibility, linking in this way allows the linker to report compile time errors for unresolved symbols on all platforms, which makes for a smoother development workflow. Defaults to ON. It is recommended to use a python virtual environment. Many ways exist for this, but the following is the simplest: # Make sure your 'python' is what you expect. Note that on multi-python # systems, this may have a version suffix, and on many Linuxes and MacOS where # python2 and python3 co-exist, you may also want to use python3. which python python -m venv ~/.venv/mlirdev source ~/.venv/mlirdev/bin/activate # Note that many LTS distros will bundle a version of pip itself that is too # old to download all of the latest binaries for certain platforms. # The pip version can be obtained with python -m pip --version, and for # Linux specifically, this should be cross checked with minimum versions # here: https://github.com/pypa/manylinux # It is recommended to upgrade pip: python -m pip install --upgrade pip # Now the python command will resolve to your virtual environment and # packages will be installed there. python -m pip install -r mlir/python/requirements.txt # Now run cmake, ninja, et al. For interactive use, it is sufficient to add the tools/mlir/python_packages/mlir_core/ directory in your build/ directory to the PYTHONPATH. Typically: export PYTHONPATH=\$(cd build && pwd)/tools/mlir/python_packages/mlir_core Note that if you have installed (i.e. via ninja install, et al), then python packages for all enabled projects will be in your install tree under python_packages/ (i.e. python_packages/mlir_core). Official distributions are built with a more specialized setup. ## Design ¶ ### Use cases ¶ There are likely two primary use cases for the MLIR python bindings: 1. Support users who expect that an installed version of LLVM/MLIR will yield the ability to import mlir and use the API in a pure way out of the box. 2. Downstream integrations will likely want to include parts of the API in their private namespace or specially built libraries, probably mixing it with other python native bits. ### Composable modules ¶ In order to support use case #2, the Python bindings are organized into composable modules that downstream integrators can include and re-export into their own namespace if desired. This forces several design points: • Separate the construction/populating of a py::module from PYBIND11_MODULE global constructor. • Introduce headers for C++-only wrapper classes as other related C++ modules will need to interop with it. 
• Separate any initialization routines that depend on optional components into its own module/dependency (currently, things like registerAllDialects fall into this category). There are a lot of co-related issues of shared library linkage, distribution concerns, etc that affect such things. Organizing the code into composable modules (versus a monolithic cpp file) allows the flexibility to address many of these as needed over time. Also, compilation time for all of the template meta-programming in pybind scales with the number of things you define in a translation unit. Breaking into multiple translation units can significantly aid compile times for APIs with a large surface area. ### Submodules ¶ Generally, the C++ codebase namespaces most things into the mlir namespace. However, in order to modularize and make the Python bindings easier to understand, sub-packages are defined that map roughly to the directory structure of functional units in MLIR. Examples: • mlir.ir • mlir.passes (pass is a reserved word :( ) • mlir.dialect • mlir.execution_engine (aside from namespacing, it is important that “bulky”/optional parts like this are isolated) In addition, initialization functions that imply optional dependencies should be in underscored (notionally private) modules such as _init and linked separately. This allows downstream integrators to completely customize what is included “in the box” and covers things like dialect registration, pass registration, etc. LLVM/MLIR is a non-trivial python-native project that is likely to co-exist with other non-trivial native extensions. As such, the native extension (i.e. the .so/.pyd/.dylib) is exported as a notionally private top-level symbol (_mlir), while a small set of Python code is provided in mlir/_cext_loader.py and siblings which loads and re-exports it. This split provides a place to stage code that needs to prepare the environment before the shared library is loaded into the Python runtime, and also provides a place that one-time initialization code can be invoked apart from module constructors. It is recommended to avoid using __init__.py files to the extent possible, until reaching a leaf package that represents a discrete component. The rule to keep in mind is that the presence of an __init__.py file prevents the ability to split anything at that level or below in the namespace into different directories, deployment packages, wheels, etc. ### Use the C-API ¶ The Python APIs should seek to layer on top of the C-API to the degree possible. Especially for the core, dialect-independent parts, such a binding enables packaging decisions that would be difficult or impossible if spanning a C++ ABI boundary. In addition, factoring in this way side-steps some very difficult issues that arise when combining RTTI-based modules (which pybind derived things are) with non-RTTI polymorphic C++ code (the default compilation mode of LLVM). ### Ownership in the Core IR ¶ There are several top-level types in the core IR that are strongly owned by their python-side reference: • PyContext (mlir.ir.Context) • PyModule (mlir.ir.Module) • PyOperation (mlir.ir.Operation) - but with caveats All other objects are dependent. All objects maintain a back-reference (keep-alive) to their closest containing top-level object. Further, dependent objects fall into two categories: a) uniqued (which live for the life-time of the context) and b) mutable. 
Mutable objects need additional machinery for keeping track of when the C++ instance that backs their Python object is no longer valid (typically due to some specific mutation of the IR, deletion, or bulk operation). ### Optionality and argument ordering in the Core IR ¶ The following types support being bound to the current thread as a context manager: • PyLocation (loc: mlir.ir.Location = None) • PyInsertionPoint (ip: mlir.ir.InsertionPoint = None) • PyMlirContext (context: mlir.ir.Context = None) In order to support composability of function arguments, when these types appear as arguments, they should always be the last and appear in the above order and with the given names (which is generally the order in which they are expected to need to be expressed explicitly in special cases) as necessary. Each should carry a default value of py::none() and use either a manual or automatic conversion for resolving either with the explicit value or a value from the thread context manager (i.e. DefaultingPyMlirContext or DefaultingPyLocation). The rationale for this is that in Python, trailing keyword arguments to the right are the most composable, enabling a variety of strategies such as kwarg passthrough, default values, etc. Keeping function signatures composable increases the chances that interesting DSLs and higher level APIs can be constructed without a lot of exotic boilerplate. Used consistently, this enables a style of IR construction that rarely needs to use explicit contexts, locations, or insertion points but is free to do so when extra control is needed. #### Operation hierarchy ¶ As mentioned above, PyOperation is special because it can exist in either a top-level or dependent state. The life-cycle is unidirectional: operations can be created detached (top-level) and once added to another operation, they are then dependent for the remainder of their lifetime. The situation is more complicated when considering construction scenarios where an operation is added to a transitive parent that is still detached, necessitating further accounting at such transition points (i.e. all such added children are initially added to the IR with a parent of their outer-most detached operation, but then once it is added to an attached operation, they need to be re-parented to the containing module). Due to the validity and parenting accounting needs, PyOperation is the owner for regions and blocks and needs to be a top-level type that we can count on not aliasing. This let’s us do things like selectively invalidating instances when mutations occur without worrying that there is some alias to the same operation in the hierarchy. Operations are also the only entity that are allowed to be in a detached state, and they are interned at the context level so that there is never more than one Python mlir.ir.Operation object for a unique MlirOperation, regardless of how it is obtained. The C/C++ API allows for Region/Block to also be detached, but it simplifies the ownership model a lot to eliminate that possibility in this API, allowing the Region/Block to be completely dependent on its owning operation for accounting. The aliasing of Python Region/Block instances to underlying MlirRegion/MlirBlock is considered benign and these objects are not interned in the context (unlike operations). If we ever want to re-introduce detached regions/blocks, we could do so with new “DetachedRegion” class or similar and also avoid the complexity of accounting. 
With the way it is now, we can avoid having a global live list for regions and blocks. We may end up needing an op-local one at some point TBD, depending on how hard it is to guarantee how mutations interact with their Python peer objects. We can cross that bridge easily when we get there. Module, when used purely from the Python API, can’t alias anyway, so we can use it as a top-level ref type without a live-list for interning. If the API ever changes such that this cannot be guaranteed (i.e. by letting you marshal a native-defined Module in), then there would need to be a live table for it too. ## Style ¶ In general, for the core parts of MLIR, the Python bindings should be largely isomorphic with the underlying C++ structures. However, concessions are made either for practicality or to give the resulting library an appropriately “Pythonic” flavor. ### Properties vs get*() methods ¶ Generally favor converting trivial methods like getContext(), getName(), isEntryBlock(), etc to read-only Python properties (i.e. context). It is primarily a matter of calling def_property_readonly vs def in binding code, and makes things feel much nicer to the Python side. For example, prefer: m.def_property_readonly("context", ...) Over: m.def("getContext", ...) ### repr methods ¶ Things that have nice printed representations are really great :) If there is a reasonable printed form, it can be a significant productivity boost to wire that to the __repr__ method (and verify it with a doctest ). ### CamelCase vs snake_case ¶ Name functions/methods/properties in snake_case and classes in CamelCase. As a mechanical concession to Python style, this can go a long way to making the API feel like it fits in with its peers in the Python landscape. If in doubt, choose names that will flow properly with other PEP 8 style names . ### Prefer pseudo-containers ¶ Many core IR constructs provide methods directly on the instance to query count and begin/end iterators. Prefer hoisting these to dedicated pseudo containers. For example, a direct mapping of blocks within regions could be done this way: region = ... for block in region: pass However, this way is preferred: region = ... for block in region.blocks: pass print(len(region.blocks)) print(region.blocks[0]) print(region.blocks[-1]) Instead of leaking STL-derived identifiers (front, back, etc), translate them to appropriate __dunder__ methods and iterator wrappers in the bindings. Note that this can be taken too far, so use good judgment. For example, block arguments may appear container-like but have defined methods for lookup and mutation that would be hard to model properly without making semantics complicated. If running into these, just mirror the C/C++ API. ### Provide one stop helpers for common things ¶ One stop helpers that aggregate over multiple low level entities can be incredibly helpful and are encouraged within reason. For example, making Context have a parse_asm or equivalent that avoids needing to explicitly construct a SourceMgr can be quite nice. One stop helpers do not have to be mutually exclusive with a more complete mapping of the backing constructs. ## Testing ¶ Tests should be added in the test/Bindings/Python directory and should typically be .py files that have a lit run line. We use lit and FileCheck based tests: • For generative tests (those that produce IR), define a Python module that constructs/prints the IR and pipe it through FileCheck. 
• Parsing should be kept self-contained within the module under test by use of raw constants and an appropriate parse_asm call.
• Any file I/O code should be staged through a tempfile vs relying on file artifacts/paths outside of the test module.
• For convenience, we also test non-generative API interactions with the same mechanisms, printing and CHECKing as needed.

### Sample FileCheck test ¶

# RUN: %PYTHON %s | mlir-opt -split-input-file | FileCheck

# TODO: Move to a test utility class once any of this actually exists.
def print_module(f):
  m = f()
  print("// -----")
  print("// TEST_FUNCTION:", f.__name__)
  print(m.to_asm())
  return f

# CHECK-LABEL: TEST_FUNCTION: create_my_op
@print_module
def create_my_op():
  m = mlir.ir.Module()
  builder = m.new_op_builder()
  # CHECK: mydialect.my_operation ...
  builder.my_op()
  return m

## Integration with ODS ¶

The MLIR Python bindings integrate with the tablegen-based ODS system for providing user-friendly wrappers around MLIR dialects and operations. There are multiple parts to this integration, outlined below. Most details have been elided: refer to the build rules and python sources under mlir.dialects for the canonical way to use this facility.

Users are responsible for providing a {DIALECT_NAMESPACE}.py (or an equivalent directory with __init__.py file) as the entrypoint.

### Generating _{DIALECT_NAMESPACE}_ops_gen.py wrapper modules ¶

Each dialect with a mapping to python requires that an appropriate _{DIALECT_NAMESPACE}_ops_gen.py wrapper module is created. This is done by invoking mlir-tblgen on a python-bindings specific tablegen wrapper that includes the boilerplate and the actual dialect specific td file. An example, for StandardOps (which is assigned the namespace std as a special case):

#ifndef PYTHON_BINDINGS_STANDARD_OPS
#define PYTHON_BINDINGS_STANDARD_OPS

include "mlir/Bindings/Python/Attributes.td"
include "mlir/Dialect/StandardOps/IR/Ops.td"

#endif

In the main repository, building the wrapper is done via the CMake function add_mlir_dialect_python_bindings, which invokes:

mlir-tblgen -gen-python-op-bindings -bind-dialect={DIALECT_NAMESPACE} \
    {PYTHON_BINDING_TD_FILE}

The generated op classes must be included in the {DIALECT_NAMESPACE}.py file in a similar way that generated headers are included for C++ generated code:

from ._my_dialect_ops_gen import *

### Extending the search path for wrapper modules ¶

When the python bindings need to locate a wrapper module, they consult the dialect_search_path and use it to find an appropriately named module. For the main repository, this search path is hard-coded to include the mlir.dialects module, which is where wrappers are emitted by the above build rule.

Out of tree dialects can add their modules to the search path by calling:

mlir._cext.append_dialect_search_prefix("myproject.mlir.dialects")

### Wrapper module code organization ¶

The wrapper module tablegen emitter outputs:

• A _Dialect class (extending mlir.ir.Dialect) with a DIALECT_NAMESPACE attribute.
• An {OpName} class for each operation (extending mlir.ir.OpView).
• Decorators for each of the above to register with the system.

Note: In order to avoid naming conflicts, all internal names used by the wrapper module are prefixed by _ods_.

Each concrete OpView subclass further defines several public-intended attributes:

• OPERATION_NAME attribute with the str fully qualified operation name (i.e. std.absf).
• An __init__ method for the default builder if one is defined or inferred for the operation.
• @property getter for each operand or result (using an auto-generated name for unnamed ones).
• @property getter, setter and deleter for each declared attribute.

It further emits additional private-intended attributes meant for subclassing and customization (default cases omit these attributes in favor of the defaults on OpView):

• _ODS_REGIONS: A specification on the number and types of regions. Currently a tuple of (min_region_count, has_no_variadic_regions). Note that the API does some light validation on this but the primary purpose is to capture sufficient information to perform other default building and region accessor generation.
• _ODS_OPERAND_SEGMENTS and _ODS_RESULT_SEGMENTS: Black-box value which indicates the structure of either the operands or results with respect to variadics. Used by OpView._ods_build_default to decode operand and result lists that contain lists.

#### Default Builder ¶

Presently, only a single, default builder is mapped to the __init__ method. The intent is that this __init__ method represents the most specific of the builders typically generated for C++; however, currently it is just the generic form below.

• One argument for each declared result:
  • For single-valued results: Each will accept an mlir.ir.Type.
  • For variadic results: Each will accept a List[mlir.ir.Type].
• One argument for each declared operand or attribute:
  • For single-valued operands: Each will accept an mlir.ir.Value.
  • For variadic operands: Each will accept a List[mlir.ir.Value].
  • For attributes, it will accept an mlir.ir.Attribute.
• Trailing usage-specific, optional keyword arguments:
  • loc: An explicit mlir.ir.Location to use. Defaults to the location bound to the thread (i.e. with Location.unknown():) or an error if none is bound nor specified.
  • ip: An explicit mlir.ir.InsertionPoint to use. Defaults to the insertion point bound to the thread (i.e. with InsertionPoint(...):).

In addition, each OpView inherits a build_generic method which allows construction via a (nested in the case of variadic) sequence of results and operands. This can be used to get some default construction semantics for operations that are otherwise unsupported in Python, at the expense of having a very generic signature.

#### Extending Generated Op Classes ¶

Note that this is a rather complex mechanism and this section errs on the side of explicitness. Users are encouraged to find an example and duplicate it if they don't feel the need to understand the subtlety. The builtin dialect provides some relatively simple examples.

As mentioned above, the build system generates Python sources like _{DIALECT_NAMESPACE}_ops_gen.py for each dialect with Python bindings. It is often desirable to use these generated classes as a starting point for further customization, so an extension mechanism is provided to make this easy (you are always free to do ad-hoc patching in your {DIALECT_NAMESPACE}.py file but we prefer a more standard mechanism that is applied uniformly). To provide extensions, add a _{DIALECT_NAMESPACE}_ops_ext.py file to the dialects module (i.e. adjacent to your {DIALECT_NAMESPACE}.py top-level and the *_ops_gen.py file). Using the builtin dialect and FuncOp as an example, the generated code will include an import like this: try: from .
import _builtin_ops_ext as _ods_ext_module except ImportError: _ods_ext_module = None Then for each generated concrete OpView subclass, it will apply a decorator like: @_ods_cext.register_operation(_Dialect) @_ods_extend_opview_class(_ods_ext_module) class FuncOp(_ods_ir.OpView): See the _ods_common.py extend_opview_class function for details of the mechanism. At a high level: • If the extension module exists, locate an extension class for the op (in this example, FuncOp): • First by looking for an attribute with the exact name in the extension module. • Falling back to calling a select_opview_mixin(parent_opview_cls) function defined in the extension module. • If a mixin class is found, a new subclass is dynamically created that multiply inherits from ({_builtin_ops_ext.FuncOp}, _builtin_ops_gen.FuncOp). The mixin class should not inherit from anything (i.e. directly extends object only). The facility is typically used to define custom __init__ methods, properties, instance methods and static methods. Due to the inheritance ordering, the mixin class can act as though it extends the generated OpView subclass in most contexts (i.e. issubclass(_builtin_ops_ext.FuncOp, OpView) will return False but usage generally allows you treat it as duck typed as an OpView). There are a couple of recommendations, given how the class hierarchy is defined: • For static methods that need to instantiate the actual “leaf” op (which is dynamically generated and would result in circular dependencies to try to reference by name), prefer to use @classmethod and the concrete subclass will be provided as your first cls argument. See _builtin_ops_ext.FuncOp.from_py_func as an example. • If seeking to replace the generated __init__ method entirely, you may actually want to invoke the super-super-class mlir.ir.OpView constructor directly, as it takes an mlir.ir.Operation, which is likely what you are constructing (i.e. the generated __init__ method likely adds more API constraints than you want to expose in a custom builder). A pattern that comes up frequently is wanting to provide a sugared __init__ method which has optional or type-polymorphism/implicit conversions but to otherwise want to invoke the default op building logic. For such cases, it is recommended to use an idiom such as: def __init__(self, sugar, spice, *, loc=None, ip=None): ... massage into result_type, operands, attributes ... OpView.__init__(self, self.build_generic( results=[result_type], operands=operands, attributes=attributes, loc=loc, ip=ip)) Refer to the documentation for build_generic for more information.
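To make the extension mechanism concrete, here is a hypothetical `_mydialect_ops_ext.py` sketch; the dialect name (`mydialect`), the op class (`ConstantOp`), and its single `value` attribute are invented for illustration and are not part of the upstream bindings. The structure simply follows the sugared `__init__` idiom shown above.

```python
# _mydialect_ops_ext.py -- hypothetical mixin module for an imaginary
# "mydialect"; only the OpView/build_generic idiom mirrors the docs above.
from mlir.ir import OpView  # assumes the out-of-tree style import path


class ConstantOp:
    """Mixin for mydialect.constant; deliberately inherits only from object."""

    def __init__(self, result_type, value, *, loc=None, ip=None):
        # `value` is expected to be an mlir.ir.Attribute; massage the sugared
        # arguments into the generic form and defer to build_generic.
        OpView.__init__(
            self,
            self.build_generic(
                results=[result_type],
                operands=[],
                attributes={"value": value},
                loc=loc,
                ip=ip,
            ),
        )
```

At import time, a generated `_mydialect_ops_gen.py` would (if such a dialect existed) pick this class up via the extension decorator and multiply inherit it ahead of the generated `ConstantOp`, as described above.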
# stat340s13

## Introduction, Class 1 - Tuesday, May 7

### Course Instructor: Ali Ghodsi

Lecture:
001: TTh 8:30-9:50 MC1085
002: TTh 1:00-2:20 DC1351
Tutorial: 2:30-3:20 Mon M3 1006

### Midterm

Monday June 17 2013 from 2:30-3:30

### TA(s):

| TA | Day | Time | Location |
|----|-----|------|----------|
| Lu Cheng | Monday | 3:30-5:30 pm | M3 3108, space 2 |
| Han ShengSun | Tuesday | 4:00-6:00 pm | M3 3108, space 2 |
| Yizhou Fang | Wednesday | 1:00-3:00 pm | M3 3108, space 1 |
| Huan Cheng | Thursday | 3:00-5:00 pm | M3 3111, space 1 |
| Wu Lin | Friday | 11:00-1:00 pm | M3 3108, space 1 |

### Four Fundamental Problems

1. Classification: Given an input object X, we have a function that takes X and identifies which class Y it belongs to (the discrete case); i.e. from a value of x we predict y. (For example, an image of a fruit can be classified, through some algorithm, as a picture of either an apple or an orange.)
2. Regression: Same as classification, but in the continuous case, i.e. y is continuous.
3. Clustering: Use common features of objects to group them into clusters (in this case, x is given, y is unknown).
4. Dimensionality Reduction (aka Feature extraction, Manifold learning)

### Applications

Most useful when the structure of the task is not well understood but can be characterized by a dataset with strong statistical regularity. Examples:

• Computer Vision, Computer Graphics, Finance (fraud detection), Machine Learning
• Search and recommendation (e.g. Google, Amazon)
• Automatic speech recognition, speaker verification
• Text parsing
• Face identification
• Tracking objects in video
• Financial prediction (e.g. credit cards)
• Fraud detection
• Medical diagnosis

### Course Information

Prerequisite: (One of CS 116, 126/124, 134, 136, 138, 145, SYDE 221/322) and (STAT 230 with a grade of at least 60% or STAT 240) and (STAT 231 or 241)

Antirequisite: CM 361/STAT 341, CS 437, 457

General Information

• No required textbook
• Recommended: "Simulation" by Sheldon M. Ross
• Computing parts of the course will be done in Matlab, but prior knowledge of Matlab is not essential (there will be a tutorial on it)
• The first midterm will be held on Monday, June 17 from 2:30 to 3:30
• Announcements and assignments will be posted on Learn.
• Other course material on: http://wikicoursenote.com/wiki/
• Log on to both Learn and wikicoursenote frequently.
• Email all questions and concerns to [email protected]. Do not use your personal email address!

Wikicourse note (10% of final mark): When applying for an account on the wikicourse note, please use your quest account as the login name and your uwaterloo email as the registered email. This is important, as the quest id will be used to identify the students who make contributions. Example: User: questid, Email: [email protected]. After the student has made the account request, wait several hours before logging into the account using the password stated in the email. During the first login, students will be asked to create a new password for their account.
As a technical/editorial contributor: Make contributions within 1 week and do not copy the notes from the blackboard verbatim. Both requirements must be met. All contributions are now considered general contributions; you must contribute to 50% of lectures for full marks.

• A general contribution can be correctional (fixing mistakes) or technical (expanding content, adding examples, etc.), but at least half of your contributions should be technical for full marks.

Do not submit copyrighted work without permission; cite original sources. Each time you make a contribution, check mark the table. Marks are calculated on the honour system, although there will be random verifications. If you are caught claiming to contribute but didn't, you will lose marks.

Wikicoursenote contribution form: [1]
- you can submit your contributions multiple times.
- you will be able to edit the response right after submitting.
- send an email to make changes to an old response: [email protected]

### Tentative Topics

- Random variable and stochastic process generation
- Discrete-Event Systems
- Variance reduction
- Markov Chain Monte Carlo

### Tentative Marking Scheme

| Item | Value |
|------|-------|
| Assignments (~6) | 30% |
| WikiCourseNote | 10% |
| Midterm | 20% |
| Final | 40% |

The final exam is closed book and only non-programmable calculators are allowed. A passing mark must be achieved in the final to pass the course.

## Sampling (Generating random numbers), Class 2 - Thursday, May 9

### Introduction

Some people believe that sampling activities such as rolling a die and flipping a coin are not truly random but deterministic, since the result can in principle be calculated using physics and math. In general, a deterministic model produces specific results given certain inputs by the model user, contrasting with a stochastic model which encapsulates randomness and probabilistic events.

A computer cannot generate truly random numbers because computers can only run algorithms, which are deterministic in nature. They can, however, generate pseudo random numbers: numbers that seem random but are actually deterministic. Although pseudo random numbers are deterministic, they form a sequence of values that has the appearance of independent uniform random variables. Being deterministic, pseudo random numbers are also valuable and convenient because they are easy to generate and manipulate.

### Mod

Let $n \in \N$ and $m \in \N^+$. By the division algorithm, $\exists q, r \in \N$ with $0\leq r < m$ such that $n = mq+r$, where $q$ is called the quotient and $r$ the remainder. Hence we can define a binary function $\mod : \N \times \N^+ \rightarrow \N$ given by $r := n \mod m$, which means taking the remainder after division by $m$.

Note: $\mod$ here is different from the modulo congruence relation in $\Z_m$, which is an equivalence relation instead of a function.

### Multiplicative Congruential Algorithm

This is a simple algorithm used to generate uniform pseudo random numbers. It is also referred to as the Linear Congruential Method or Mixed Congruential Method. We define the Linear Congruential Method to be $x_{k+1}=(ax_k + b) \mod m$, where $x_k, a, b, m \in \N$ with $a, m > 0$. ($\mod m$ means taking the remainder after division by $m$.) Given a seed, i.e. an initial value $x_0 \in \N$, we can obtain values for $x_1, x_2, \cdots, x_n$ inductively. The Multiplicative Congruential Method may also refer to the special case where $b=0$.
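The definitions above can be checked directly with Python's built-in integer arithmetic; this small sketch (added for illustration) verifies the division algorithm $n = mq + r$ and shows the remainder operator used by the congruential recurrence.

```python
# Division algorithm: n = m*q + r with 0 <= r < m.
n, m = 10, 3
q, r = divmod(n, m)      # q = 3, r = 1
print(q, r)              # 3 1
print(n == m * q + r)    # True
print(n % m)             # 1 -- the "mod" used in x_{k+1} = (a*x_k + b) mod m
```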
An interesting fact about the Linear Congruential Method is that it is one of the oldest and best-known pseudorandom number generator algorithms. It is very fast and requires minimal memory to retain state. However, it should not be used for applications where high-quality randomness is required; in particular, it should not be used for cryptographic applications, and it is a poor choice for serious Monte Carlo simulation.

First consider the following algorithm: $x_{k+1}=x_{k} \mod m$

Example. Let $x_{0}=10,\, m=3$:
\begin{align} x_{1} &= 10 \mod 3 = 1 \\ x_{2} &= 1 \mod 3 = 1 \\ x_{3} &= 1 \mod 3 = 1 \end{align}
$\ldots$

Excluding $x_0$, this example generates a series of ones. In general, excluding $x_0$, the algorithm above will always generate a series of the same number less than $m$. We modify this algorithm to form the Multiplicative Congruential Algorithm.

Multiplicative Congruential Algorithm: $x_{k+1}=(a \cdot x_{k} + b) \mod m$

Example. Let $a=2,\, b=1,\, m=3,\, x_{0} = 10$:
\begin{align} x_{1} &= (2\cdot 10 + 1) \mod 3 = 0 \\ x_{2} &= (2\cdot 0 + 1) \mod 3 = 1 \\ x_{3} &= (2\cdot 1 + 1) \mod 3 = 0 \end{align}
$\ldots$

This example generates a sequence with a repeating cycle of two integers. If we choose the numbers properly, we can get a sequence of "random"-looking numbers. However, how do we find the values of $a, b,$ and $m$? At the very least, $m$ should be a very large, preferably prime number; the larger $m$ is, the better the chance of a long, random-looking sequence. This is easy to explore in Matlab, where the command rand() generates random numbers uniformly distributed in the interval (0,1). Matlab uses $a=7^5, b=0, m=2^{31}-1$, as recommended in a 1988 paper, "Random Number Generators: Good Ones Are Hard To Find" by Stephen K. Park and Keith W. Miller (the important part is that $m$ should be large).

MatLab instructions for the Multiplicative Congruential Algorithm. Before you start, clear all existing variables and figures:

>>clear all
>>close all
>>a=17
>>b=3
>>m=31
>>x=5
>>mod(a*x+b,m)
ans=26
>>x=mod(a*x+b,m)

(Note: 1. Keep repeating the last command over and over and you will seem to get random numbers; this is essentially how the command rand works in a computer. 2. There is a built-in function rand to generate a number between 0 and 1. 3. If we would like to generate 1000 or more numbers, we can use a for loop.)

(Note on MATLAB commands: 1. clear all: clears all variables. 2. close all: closes all figures. 3. who: displays all defined variables. 4. clc: clears the screen.)

>>a=13
>>b=0
>>m=31
>>x(1)=1
>>for ii=2:1000
    x(ii)=mod(a*x(ii-1)+b,m);
  end
>>size(x)
ans=1 1000
>>hist(x)

(Note: The semicolon after x(ii)=mod(a*x(ii-1)+b,m) ensures that Matlab will not print the entire vector x; it will instead compute it internally so you can work with it. Adding the semicolon to the end of this line also reduces the run time significantly.)

This algorithm involves three integer parameters $a, b,$ and $m$ and an initial value $x_0$ called the seed. A sequence of numbers is defined by $x_{k+1} = (ax_k+ b) \mod m$, where $\mod m$ means taking the remainder after division by $m$.

Note: For some bad choices of $a$ and $b$, the histogram may not look uniformly distributed.

Note: hist(x) will plot the sample distribution. Use it after running the code to check the empirical distribution.
Example: $a=13, b=0, m=31$. The first 30 numbers in the sequence are a permutation of the integers from 1 to 30, and then the sequence repeats itself, so it is important to choose $m$ large so that the sequence does not start repeating too early. Values are between $0$ and $m-1$. If the values are normalized by dividing by $m-1$, then the results are approximately uniformly distributed on the interval [0,1]. There is only a finite number of values (30 possible values in this case). In MATLAB, you can use the function hist(x) to see whether the output looks uniformly distributed.

If $x_0=1$, then $x_{k+1} = 13x_{k} \mod 31$. So,
\begin{align} x_{0} &= 1 \\ x_{1} &= (13 \times 1 + 0) \mod 31 = 13 \\ x_{2} &= (13 \times 13 + 0) \mod 31 = 14 \\ x_{3} &= (13 \times 14 + 0) \mod 31 = 27 \end{align}
etc.

For example, with $a = 3, b = 2, m = 4, x_0 = 1$, we have $x_{k+1} = (3x_{k} + 2) \mod 4$. So,
\begin{align} x_{0} &= 1 \\ x_{1} &= (3 \times 1 + 2) \mod 4 = 1 \\ x_{2} &= (3 \times 1 + 2) \mod 4 = 1 \end{align}
etc.

FAQ:

1. Why, in the example above, is the range 1 to 30 and not 0 to 30? Because $b = 0$, in order to get remainder 0 at $x_k$, the value $x_{k-1}$ must be 0 (since $a=13$ is not a multiple of 31). However, our seed is 1, so we will never observe 0 in the sequence.

2. Will the number 31 ever appear? Is there a probability that a number never appears? The number 31 will never appear: when you perform the operation $\mod m$, the highest possible result is $m-1$. Whether a particular number in the range from 0 to $m-1$ ever appears depends on the values chosen for $a$, $b$ and $m$.

Example [from the textbook]: If $x_0=3$ and $x_n=(5x_{n-1}+7) \mod 200$, find $x_1,\dots,x_{10}$.

Solution (each line applies the recursion to the previous unreduced value; reducing mod 200 at every step gives the same remainders):
x1 = (15+7) mod 200 = 22
x2 = 117 mod 200 = 117
x3 = 592 mod 200 = 192
x4 = 2967 mod 200 = 167
x5 = 14842 mod 200 = 42
x6 = 74217 mod 200 = 17
x7 = 371092 mod 200 = 92
x8 = 1855467 mod 200 = 67
x9 = 9277342 mod 200 = 142
x10 = 46386717 mod 200 = 117

Typically, it is good to choose $m$ large and prime. Careful selection of the parameters $a$ and $b$ also helps generate relatively "random" output values in which it is harder to identify patterns. For example, when we used a non-prime number such as 40 for $m$, the output did not resemble a uniform distribution nearly as well.

The computed values are between 0 and $m-1$. If the values are normalized by dividing by $m-1$, the result is numbers approximately uniformly distributed on the interval [0,1] (similar to sampling from a uniform distribution).

From the example shown above, we can see that to create a good random number generator we need to select a large $m$. Since $x_n$ depends only on the previous value, at most $m$ distinct values can occur before some value repeats, and once this happens the whole sequence begins to repeat. Thus, if we want to generate a large collection of random numbers, it is better to have a large $m$ so that values do not start repeating after only a few recursions.

Example: For $x_n = (2x_{n-1}+1) \mod 3$ with $x_0=2$, we get $x_1 = 5 \mod 3 = 2$, so the sequence is constant. Notice that with a small $m$ the generated numbers repeat themselves much sooner than with a sufficiently large $m$.

There has been research on how to choose good parameters for such generators. Many programs let you choose the seed; sometimes the first number is chosen by the computer. Moreover, not all random variables of interest are uniform: some have normal, exponential, or binomial distributions.
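A compact version of the congruential generator, written in Python rather than MATLAB for illustration, reproduces the hand-computed examples above (the textbook example with $a=5, b=7, m=200, x_0=3$ and the $a=13, b=0, m=31$ sequence).

```python
def lcg(a, b, m, seed, n):
    """First n values x_1..x_n of the recurrence x_{k+1} = (a*x_k + b) mod m."""
    xs, x = [], seed
    for _ in range(n):
        x = (a * x + b) % m
        xs.append(x)
    return xs

print(lcg(5, 7, 200, 3, 10))  # [22, 117, 192, 167, 42, 17, 92, 67, 142, 117]
print(lcg(13, 0, 31, 1, 5))   # [13, 14, 27, 10, 6]
# Normalizing by m-1 gives values in [0, 1], as described above:
print([x / 30 for x in lcg(13, 0, 31, 1, 5)])
```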
### Inverse Transform Method

This method is useful for generating distributions other than the uniform, such as the exponential and normal distributions. However, to use this method the CDF $F$ must be one whose inverse $F^{-1}$ we can actually compute (for example, $F$ continuous and strictly increasing). The exponential distribution has the property that generated numbers are frequently close to 0; the normal distribution has the property that generated numbers are frequently close to its mean.

Theorem: If $U \sim U[0,1]$, then the random variable $x=F^{-1}(U)$ has distribution function $F(\cdot)$, where $F(x) = P(X \leq x)$ is the CDF and $F^{-1}(U)$ denotes the inverse function of $F(\cdot)$. That is, $F(x)=U \Rightarrow x=F^{-1}(U)$.

Proof of the theorem: For all $u \in [0,1]$ and all $x$ in the range of $F^{-1}$, the generalized inverse satisfies $F(F^{-1}(u)) \geq u$ and $F^{-1}(F(x)) \leq x$. Therefore,
$F(x) = P(X \leq x) = P(F^{-1}(U) \leq x) = P(F(F^{-1}(U)) \leq F(x))$ (since $F$ is a non-decreasing function, we can apply $F$ to both sides)
$= P(U \leq F(x)) = F(x)$ (because $Pr(U \leq y)=y$, since $U$ is uniform on the unit interval).

Therefore, in order to generate a random variable $X\sim F$, we can generate $U$ according to $U(0,1)$ and then make the transformation $x=F^{-1}(U)$.

Example: $f(x) = \lambda e^{-\lambda x}$, $x \geq 0$.
$F(x)= \int_0^x \lambda e^{-\lambda t}\, dt = \left[-e^{-\lambda t}\right]_0^x = -e^{-\lambda x} + e^{0} = 1 - e^{-\lambda x}$
Set $y=1-e^{-\lambda x}$; then $1-y=e^{-\lambda x}$ and $x=-\ln(1-y)/\lambda$, so $F^{-1}(u)=-\ln(1-u)/\lambda$.
Step 1: Draw $U \sim U[0,1]$;
Step 2: $x=\frac{-\ln(1-U)}{\lambda}$.

Example: If $U$ is uniform on $[0,1]$, then $X = a + (b-a)U$ is uniform on $[a, b]$. Also, $x=\frac{-\ln(U)}{\lambda}$ is exponential with parameter $\lambda$, since $1-U$ is also uniform on $[0,1]$.

Example 2: Given a CDF of X: $F(x) = x^5$ on $[0,1]$, transform $U\sim U[0,1]$.
Solution: Let $y=x^5$ and solve for $x$: $x=y^{1/5}$. Therefore $F^{-1}(u) = u^{1/5}$. Hence, to obtain a value of $x$ from $F$, we first draw $u$ from a uniform distribution and set $x= u^{1/5}$.

Example 3: Given $u\sim U[0,1]$, generate $x$ from BETA(1,β).
Solution: $F(x)= 1-(1-x)^\beta$, so set $u= 1-(1-x)^\beta$ and solve for $x$: $(1-x)^\beta = 1-u$, $1-x = (1-u)^{1/\beta}$, $x = 1-(1-u)^{1/\beta}$.

Example 4 (estimating pi): Let's use rand() and the Monte Carlo method to estimate $\pi$. Let N be the total number of points and Nc the number of points inside the circle. Then Prob[(x,y) lies in the circle] = (area of circle)/(area of square). If we take a square of side 2, the inscribed circle has area $\pi$, so $\pi \approx 4 \cdot (Nc/N)$.

Matlab code:

>>N=10000; Nc=0; a=0; b=2;
>>for t=1:N
    x=a+(b-a)*rand();
    y=a+(b-a)*rand();
    if (x-1).^2+(y-1).^2<=1
        Nc=Nc+1;
    end
  end
>>4*Nc/N   % estimate of pi

In Matlab, you can use the functions: "who" to see what variables you have defined, "clear all" to clear all variables you have defined, and "close all" to close all figures.

MatLab for the Inverse Transform Method:

>>u=rand(1,1000);
>>hist(u)              % will generate a fairly uniform histogram
% let lambda=2 in this example; you can use another value for lambda
>>x=(-log(1-u))/2;
>>size(x)              % 1000 in size
>>figure
>>hist(x)              % exponential

Limitations:
1. This method requires $F$ to be invertible; not all CDFs are invertible or strictly monotonic in closed form.
2. It may be impractical since some CDFs and/or integrals are not easy to compute, such as for the Gaussian distribution.

### Probability Distribution Function Tool in MATLAB

>>disttool   % shows different distributions

This command allows users to explore the effect of changing parameters on the plot of either a CDF or PDF.
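The two-step recipe for the exponential example (draw $U$, then set $x=-\ln(1-U)/\lambda$) can also be written as a short Python/NumPy sketch, added here as an illustration alongside the MATLAB session above.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                         # same lambda as the MATLAB example
u = rng.uniform(size=100_000)     # Step 1: U ~ U(0,1)
x = -np.log(1.0 - u) / lam        # Step 2: X = F^{-1}(U) ~ Exp(lam)

print(x.mean())                   # close to 1/lam = 0.5
print(np.quantile(x, 0.5))        # close to ln(2)/lam ~ 0.347
```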
## (Generating random numbers continued) Class 3 - Tuesday, May 14

### Recall the Inverse Transform Method

1. Draw $U\sim U(0,1)$
2. $X = F^{-1}(U)$

Proof. First note that $P(U\leq a)=a$ for all $a\in[0,1]$. Then
$P(X\leq x) = P(F^{-1}(U)\leq x) = P(F(F^{-1}(U))\leq F(x))$ (since $F(\cdot)$ is monotonically increasing) $= P(U\leq F(x)) = F(x)$.
This is the c.d.f. of X.

Note that the CDF of a U(a,b) random variable is:
$F(x)= \begin{cases} 0 & \text{for }x < a \\ \frac{x-a}{b-a} & \text{for }a \le x < b \\ 1 & \text{for }x \ge b \end{cases}$
Thus, for $U\sim U(0,1)$, we have $P(U\leq 1) = 1$ and $P(U\leq 1/2) = 1/2$. More generally, $P(U\leq a) = a$ for $a\in[0,1]$. For this reason we had $P(U\leq F(x)) = F(x)$.

Reminder: this holds only for the uniform distribution $U \sim \text{Unif}[0,1]$: $P(U\le 1)=1$, $P(U\le 0.5)=0.5$, $P(U\le a)=a$.

### Discrete Case

We want to generate a discrete random variable X with a given probability mass function. In general, in the discrete case we have values $x_0, \dots , x_n$ where
$P(X = x_i) = p_i$, $x_0 \leq x_1 \leq \dots \leq x_n$, $\sum_i p_i = 1$.

Algorithm for applying the Inverse Transform Method in the discrete case:
1. Draw $U \sim \text{Unif}[0,1]$
2. Set $X=x_i$ if $F(x_{i-1}) < U\leq F(x_i)$

Example in class (coin flipping): We want to simulate a coin flip. We have $U\sim U(0,1)$ and X = 0 or X = 1. Define the mapping so that if $U \le 0.5$ then X = 0, and if $0.5 < U \le 1$ then X = 1. This makes the probability of heads 0.5 and is a good generator of a random coin flip.
$U \sim \text{Unif}[0,1]$,
\begin{align} P(X = 0) &= 0.5\\ P(X = 1) &= 0.5 \end{align}

• Notice: For the bounds on U it does not matter where you place the "=" sign; since U is a continuous random variable, the probability that U equals any particular point is zero.

• Code

>>for ii=1:1000
    u=rand;
    if u<0.5
        x(ii)=0;
    else
        x(ii)=1;
    end
  end
>>hist(x)

Note on the role of the semicolon in Matlab: Matlab will not print out the result if the line ends in a semicolon, and vice versa.

Example in class: Suppose we have the following discrete distribution:
\begin{align} P(X = 0) &= 0.3 \\ P(X = 1) &= 0.2 \\ P(X = 2) &= 0.5 \end{align}
The cumulative distribution function (cdf) for this distribution is then:
$F(x) = \begin{cases} 0, & \text{if } x < 0 \\ 0.3, & \text{if } 0 \le x < 1 \\ 0.5, & \text{if } 1 \le x < 2 \\ 1, & \text{if } x \ge 2 \end{cases}$
Then we can generate numbers from this distribution like this, given $U \sim \text{Unif}[0, 1]$:
$x = \begin{cases} 0, & \text{if } U\leq 0.3 \\ 1, & \text{if } 0.3 < U \leq 0.5 \\ 2, & \text{if } 0.5 < U\leq 1 \end{cases}$

• Code (Matlab code from class; use the Editor window to edit the code)

>>close all
>>clear all
>>for ii=1:1000
    u=rand;
    if u<0.3
        x(ii)=0;
    elseif u<0.5
        x(ii)=1;
    else
        x(ii)=2;
    end
  end
>>size(x)
>>hist(x)

Example: Generating a Bernoulli random variable, $P(X = 1) = p$, $P(X = 0) = 1 - p$:
$F(x) = \begin{cases} 0, & \text{if } x < 0 \\ 1-p, & \text{if } 0 \le x < 1 \\ 1, & \text{if } x \ge 1 \end{cases}$
1. Draw $U \sim \text{Unif}[0,1]$
2. $X = \begin{cases} 1, & \text{if } U\leq p \\ 0, & \text{if } U > p \end{cases}$

Example: Generating a Geometric distribution. Consider Geo(p) where p is the probability of success, and define the random variable X as the number of the trial on which the first success occurs, $x=1,2,3,\dots$. We have the pmf $P(X=x) = (1-p)^{x-1} p$ and the CDF $F(x) = P(X\le x) = 1-P(X>x) = 1 - (1-p)^x$, where $P(X>x)$ means the first $x$ trials are all failures.
Now consider the inverse transform:
$x = \begin{cases} 1, & \text{if } U\leq p \\ 2, & \text{if } p < U \leq 1-(1-p)^2 \\ 3, & \text{if } 1-(1-p)^2 < U\leq 1-(1-p)^3 \\ \vdots \\ k, & \text{if } 1-(1-p)^{k-1} < U\leq 1-(1-p)^k \\ \vdots \end{cases}$

Note: Unlike the continuous case, the discrete inverse-transform method can always be used for any discrete distribution (but it may not be the most efficient approach).

General procedure:
1. Draw $U \sim U(0,1)$.
2. If $U \leq P_{0}$, deliver $x = x_{0}$.
3. Else if $U \leq P_{0} + P_{1}$, deliver $x = x_{1}$.
4. Else if $U \leq P_{0} + P_{1} + P_{2}$, deliver $x = x_{2}$.
...
Else if $U \leq P_{0} + \dots + P_{k}$, deliver $x = x_{k}$.

The problem with the inverse transform method is that we have to find $F^{-1}$, and not all CDFs have a closed-form inverse. Unfortunately, for many distributions, such as the Gaussian (normal) distribution, it is too difficult to find the inverse of $F(x)$. In this case, an alternative is the acceptance-rejection method that follows.

### Acceptance-Rejection Method

Although the inverse transform method does allow us to transform our uniform draws, it has two limitations:
1. Not all CDFs have an inverse in closed form.
2. For some distributions, such as the Gaussian, it is too difficult to find the inverse.

To generate random samples in these cases, we use different methods, such as the Acceptance-Rejection Method. Suppose we want to draw a random sample from a target density function $f(x)$, $x\in S_x$, where $S_x$ is the support of $f(x)$. If we can find some constant $c\ (\geq 1)$ and a density function $g(x)$ with the same support $S_x$ such that $f(x)\leq c g(x)$ for all $x\in S_x$, then we can apply the acceptance-rejection procedure. Typically we choose for $g(x)$ a density that we already know how to sample from.

Note (referring to the figure drawn in lecture): if the bounding curve were only $g(x)$ as opposed to $c g(x)$ (i.e. $c=1$), then $g(x)\geq f(x)$ for all values of $x$ would force $g$ and $f$ to be the same function, because both $g(x)$ and $f(x)$ are pdfs and the area under each is equal to 1. To elaborate, even if $g(x)$ is larger than $f(x)$ over the entire visible domain, observing the tails over a large enough domain shows that at some point $f$ must become larger than $g$, again because both are pdfs. Also remember that $c g(x)$ always generates more proposals than we need, so we need a way of recovering the proper probabilities; that is what the acceptance step does.

$c$ must be chosen so that $f(x)\leq c g(x)$ for all values of $x$; $c$ can only equal 1 when $f$ and $g$ are the same distribution. Otherwise, either use a software package to test whether $f(x)\leq c g(x)$ for an arbitrarily chosen $c > 0$, or:
1. Find the first and second derivatives of $f(x)$ and $g(x)$.
2. Identify and classify all local and absolute maximums and minimums, using the first and second derivative tests, as well as all inflection points.
Verify that $f(x)\eqslantless c*g(x)$ at all the local maximums as well as the absolute maximums. 4. Verify that $f(x)\eqslantless c*g(x)$ at the tail ends by calculating $\lim_{x \to +\infty} \frac{f(x)}{c*g(x)}$ and $\lim_{x \to -\infty} \frac{f(x)}{c*g(x)}$ and seeing that they are both < 1. Use of L'Hopital's Rule should make this easy, since both f and g are p.d.f's, resulting in both of them approaching 0. c should be close to 1, otherwise there is a high chance we will end up rejecting our sample. value around x1 will be sampled more often under cg(x) than under f(x).<=More samples than we actually need, if $\frac{f(y)}{c*g(y)}$ is small, the acceptance-rejection technique will need to be done to these points to get the accurate amount.In the region above x1, we should accept less and reject more. around x2:number of sample that are drawn and the number we need are much closer. So in the region above x2, we accept more. As a result, g(x) and f(x) are comparable. Procedure 1. Draw Y~g(.) 2. Draw U~u(0,1) (Note: U and Y are independent) 3. If $u\leq \frac{f(y)}{cg(y)}$ (which is $P(accepted|y)$) then x=y, else return to Step 1 Note: Recall $P(U\leq a)=a$. Thus by comparing u and $\frac{f(y)}{c*g(y)}$, we can get a probability of accepting y at these points. For instance, at some points that cg(x) is much larger than f(x), the probability of accepting x=y is quite small. ie. At X1, low probability to accept the point since f(x) much smaller than cg(x). At X2, high probability to accept the point. $P(U\leq a)=a$ in Uniform Distribution. Note: Since U is the variable for uniform distribution between 0 and 1. It equals to 1 for all. The condition depends on the constant c. so the condition changes to $c\leq \frac{f(y)}{g(y)}$ ### Proof We want to show that P(x)(which is original distribution) can be obtained/sampled using a known distribution g(y). Therefore, mathematically we want to show that: $P(x) = P(y|accepted) = f(y)$ $P(y|accepted)=f(y)$ $P(y|accepted)=\frac{P(accepted|y)P(y)}{P(accepted)}$ Recall the conditional probability formulas: $P(a|b)=\frac{P(a,b)}{P(b)}$, or $P(a|b)=\frac{P(b|a)P(a)}{P(b)}$ based on the concept from procedure-step1: $P(y)=g(y)$ $P(accepted|y)=\frac{f(y)}{cg(y)}$ (the higher the value is the larger the chance it will be selected) \begin{align} P(accepted)&=\int_y\ P(accepted|y)P(y)\\ &=\int_y\ \frac{f(s)}{cg(s)}g(s)ds\\ &=\frac{1}{c} \int_y\ f(s) ds\\ &=\frac{1}{c} \end{align} Therefore: \begin{align} P(x)&=P(y|accepted)\\ &=\frac{\frac{f(y)}{cg(y)}g(y)}{1/c}\\ &=\frac{\frac{f(y)}{c}}{1/c}\\ &=f(y)\end{align} Here is an alternative introduction of Acceptance-Rejection Method -Acceptance-Rejection Method is not good for all cases. One obvious cons is that it could be very hard to pick the g(y) and the constant c in some cases. And usually, c should be a small number otherwise the amount of work when applying the method could be HUGE. -Note: When f(y) is very different than g(y), it is less likely that the point will be accepted as the ratio above would be very small and it will be difficult for u to be less than this small value. Acceptance-Rejection Method Example 1 (discrete case) We wish to generate X~Bi(2,0.5), assuming that we cannot generate this directly. We use a discrete distribution DU[0,2] to approximate this. f(x)=Pr(X=x)=2Cx*(0.5)^2 x 0 1 2 f(x) 1/4 1/2 1/4 g(x) 1/3 1/3 1/3 c=f(x)/g(x) 3/4 3/2 3/4 f(x)/(c*g(x)) 1/2 1 1/2 Since we need c>=f(x)/g(x) We need c=3/2 Therefore, the algorithm is: 1. Generate u,v~U(0,1) 2. 
Set y=floor(3*u) (this uses the uniform draw to generate Y~DU[0,2])
3. If (y=0) and (v<1/2), output = 0
If (y=2) and (v<1/2), output = 2
Else if y=1, output = 1
Otherwise (the draw was rejected), return to step 1.

An elaboration of "c"

c is the expected number of times the code runs to output 1 random variable. Remember that when u < f(x)/(c*g(x)) is not satisfied, we need to go over the code again.

Proof

Let f(x) be the function we wish to generate from, but we cannot use the inverse transform method to generate directly.
Let g(x) be the helper function, and let cg(x) >= f(x).
Since we need to generate y from g(x), Pr(select y) = g(y).
Pr(output y | selected y) = Pr(u < f(y)/(c*g(y))) = f(y)/(c*g(y)) (since u~Unif(0,1)).
Pr(output y) = Pr(output y1|selected y1)Pr(select y1) + Pr(output y2|selected y2)Pr(select y2) + ... + Pr(output yn|selected yn)Pr(select yn) = 1/c.
Since we are asking for the expected number of trials until the first success, the number of trials follows a geometric distribution with probability of success 1/c. Therefore, E(X) = 1/(1/c) = c.

Acknowledgements: Some materials have been borrowed from notes from Stat340 in Winter 2013.

### Example of Acceptance-Rejection Method

Generate a random variable having p.d.f. $f(x) = 20x(1 - x)^3, 0\lt x \lt 1$.
Since this random variable (which is beta with parameters 2, 4) is concentrated in the interval (0, 1), let us consider the acceptance-rejection method with g(x) = 1, 0 < x < 1.
To determine the constant c such that f(x)/g(x) <= c, we use calculus to determine the maximum value of f(x)/g(x) = 20x(1 - x)^3.
Differentiation of this quantity yields $\frac{d}{dx}\left[\frac{f(x)}{g(x)}\right]=20\left[(1-x)^3-3x(1-x)^2\right]$.
Setting this equal to 0 shows that the maximal value is attained when x = 1/4, and thus
$f(x)/g(x)\leq 20\cdot(1/4)\cdot(3/4)^3=135/64=c$.
Hence $\frac{f(x)}{c\,g(x)}=\frac{256}{27}\,x(1-x)^3$, and thus the simulation procedure is as follows:
1) Generate two random numbers U1 and U2.
2) If U2 < (256/27)·U1·(1-U1)^3, set X = U1 and stop. Otherwise return to step 1).
The average number of times that step 1) will be performed is c = 135/64.
(The above example is from http://www.cs.bgu.ac.il/~mps042/acceptance.htm, example 2.)

### Simple Example of Acceptance-Rejection Method

Consider the random variable X, with distribution $X \sim U[0,0.5]$.
So we let $f(x) = 2$ on $[0, 1/2]$ (and 0 elsewhere).
Let $g(\cdot)$ be $U[0,1]$ distributed, so $g(x) = 1$ on $[0,1]$.
Then take $c = 2$.
So $f(x)/cg(x) = 2/(2\cdot 1) = 1$ on the interval $[0, 1/2]$, and $f(x)/cg(x) = 0/(2\cdot 1) = 0$ on the interval $(1/2, 1]$.
So we reject:
None of the numbers generated in the interval $[0, 1/2]$,
All of the numbers generated in the interval $(1/2, 1]$.
And this results in the distribution $f(\cdot)$, which is $U[0,1/2]$.

### Another Example of Acceptance-Rejection Method

Generate a random variable from $f(x)=3x^2$, 0 < x < 1.
Assume g(x) to be uniform over the interval (0,1).
Therefore: $c = \max\left(\frac{f(x)}{g(x)}\right)= 3$ and $\frac{f(x)}{c\,g(x)}= x^2$.
Acknowledgement: this is example 1 from http://www.cs.bgu.ac.il/~mps042/acceptance.htm

## Class 4 - Thursday, May 16

• When we want to sample from a target distribution, denoted $f(x)$, we need to first find a proposal distribution $g(x)$ which is easy to sample from.
• The relationship between the proposal distribution and target distribution is: $c \cdot g(x) \geq f(x)$.
• The chance of acceptance is lower where the gap between $f(x)$ and $c \cdot g(x)$ is large, and vice-versa; $c$ keeps $\frac {f(x)}{c \cdot g(x)}$ below 1 (so $f(x) \lt c \cdot g(x)$), and we have to choose the constant $c$ to achieve this.
How to find c:

\begin{align} &cg(x) \geq f(x)\\ &c\geq \frac{f(x)}{g(x)} \\ &c= \max \left(\frac{f(x)}{g(x)}\right) \end{align}

Logic behind it: the Acceptance-Rejection method needs a distribution g(x) that we know how to sample from, multiplied by a constant c so that c·g(x) is always greater than or equal to f(x). Mathematically, we want cg(x) >= f(x), which means c has to be greater than or equal to f(x)/g(x). So the smallest possible c that satisfies the condition is the maximum value of f(x)/g(x).

• For this method to be efficient, the constant c must be selected so that the rejection rate is low.
• It is easy to show that the expected number of trials for an acceptance is c. Thus the smaller c is, the lower the rejection rate and the better the algorithm:
Let $X$ be the number of trials for an acceptance, $X \sim~ Geo(\frac{1}{c})$
$\mathbb{E}[X] = \frac{1}{\frac{1}{c}} = c$
• So far, the only distribution we know how to sample from is the UNIFORM distribution.

Procedure:
1. Choose $g(x)$ (a simple density function that we know how to sample from, i.e. Uniform so far).
The easiest case is UNIF(0,1). However, in other cases we need to generate UNIF(a,b); we may need to perform a linear transformation on the UNIF(0,1) variable.
2. Find a constant c such that $c \cdot g(x) \geq f(x)$.

Recall the general procedure of the Acceptance-Rejection Method:
1. Let $Y \sim~ g(y)$
2. Let $U \sim~ Unif [0,1]$
3. If $U \leq \frac{f(y)}{c \cdot g(y)}$ then X=Y; else return to step 1.
(This is not the way to find c; this is the general sampling procedure.)

Example: Generate a random variable from the pdf
$f(x) = \begin{cases} 2x, & \mbox{if }0 \leqslant x \leqslant 1 \\ 0, & \mbox{otherwise} \end{cases}$
We can note that this is a special case of Beta(2,1), where
$beta(a,b)=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}x^{(a-1)}(1-x)^{(b-1)}$
and Γ(n)=(n-1)! if n is a positive integer, so
$beta(2,1)= \frac{\Gamma(3)}{\Gamma(2)\Gamma(1)}x^1 (1-x)^0 = 2x$.
Take $g \sim U(0,1)$ and draw $y \sim g$. We need $f(x)\leq c\,g(x)$, i.e. $c\geq \frac{f(x)}{g(x)}$, so $c = \max \frac{f(x)}{g(x)}$.
Since g is uniform on (0,1), $c = \max (2x/1)$ for $0\leq x\leq 1$. Taking x = 1 gives the highest possible c, which is c = 2.
Note that c is a scalar greater than 1.

Comment: From the picture above, we can observe that the area under f(x)=2x is half of the area under c·g(x)=2 (the scaled proposal). This is why, in order to sample 1000 points of f(x), we need to draw approximately 2000 points from UNIF(0,1). In general, if we want n samples from a distribution with pdf f(x), we need to draw approximately n·c points from the proposal distribution g(x) in total.

Steps:
1. Draw y~u(0,1)
2. Draw u~u(0,1)
3. If u <= (2*y)/(2*1), set x=y; else go to step 1

Matlab Code

close all
clear all
ii=1;
jj=1;
while ii<=1000
    y=rand;
    u=rand;
    jj=jj+1;
    if u<=y
        x(ii)=y;
        ii=ii+1;
    end
end

*Note: The reason that a for loop is not used is that we need to continue looping until we get 1000 successful samples. We reject some samples during the process and therefore do not know in advance how many y we are going to generate.
*Note2: In this example we used c=2, which means on average we accept half of the points we generate. Generally speaking, 1/c is an indicator of the efficiency of your chosen proposal distribution and algorithm.

Use the Inverse Method for this Example

$F(x)=\int_0^x \! 2s\,ds={x^2}-0={x^2}$
$y=x^2$, so $x=\sqrt y$ and $F^{-1}\left (\, x \, \right) =\sqrt x$

• Procedure
1: Draw $U~ \sim~ Unif [0,1]$
2: $x=F^{-1}\left (\, u\, \right) =\sqrt u$

Matlab Code

U=rand(1,1000);
x=U.^0.5;

Matlab Tip: a period "." makes an operator element-wise. In the above example, U.^0.5 takes the square root of every element of U. Without the period, U^0.5 would instead be a matrix power, which is only defined for a square matrix U and means something different: if B = U^0.5, then B*B = U.

##### Example of Acceptance-Rejection Method

$f(x)=3x^2, 0\lt x\lt 1;$ $g(x)=1, 0\lt x\lt 1$
$c = \max \frac{f(x)}{g(x)} = \max \frac{3x^2}{1} = 3$
$\frac{f(x)}{c \cdot g(x)} = x^2$
1. Generate two uniform numbers u1 and u2
2. If $u_2 \leqslant (u_1)^2$, accept u1 as the random variable from f; if not, return to Step 1

We can also use g(x)=2x for a more efficient algorithm:
$c = \max \frac{f(x)}{g(x)} = \max \frac {3x^2}{2x} = \max \frac{3x}{2} = \frac{3}{2}$ (attained at x=1), so $\frac{f(x)}{c\,g(x)} = x$.
Use the inverse method to sample from g(x): $G(x)=x^2$, so generate $u_1$ from U(0,1) and set $y=\sqrt{u_1}$.
1. Generate two uniform numbers u1 and u2, and set $y=\sqrt{u_1}$
2. If $u_2\leq \sqrt{u_1}$, accept y as the random variable from f; if not, return to Step 1

Possible Limitations
This method could be computationally inefficient depending on the rejection rate. We may have to sample many points before we get the 1000 accepted points. For example, in the example we did in class relating to f(x)=2x, we had to sample around 2070 points before we finally accepted 1000 sample points.

### How to transform $U(0,1)$ to $U(a, b)$

1. Draw U from $U(0,1)$
2. Take $Y=(b-a)U+a$
3. Now Y follows $U(a,b)$

Example: Generate a random variable z from the semicircular density $f(x)= \frac{2}{\pi R^2} \sqrt{R^2-x^2}$, $-R\leq x\leq R$.
-> Proposal distribution: UNIF(-R, R)
-> We know how to generate this using U~UNIF(0,1): let $Y= 2RU-R=R(2U-1)$.
Now we need to find c. Since $c=\max\frac{f(x)}{g(x)}$, where $f(x)= \frac{2}{\pi R^2} \sqrt{R^2-x^2}$ and $g(x)=\frac{1}{2R}$ for $-R\leq x\leq R$, we have to maximize $R^2-x^2$, which happens at x=0. Therefore, $c=\frac{4}{\pi}$.
We accept a point with probability $\frac{f(y)}{c\,g(y)}$. Since
$\frac{f(y)}{c\,g(y)}=\frac{\frac{2}{\pi R^2}\sqrt{R^2-y^2}}{\frac{4}{\pi}\cdot\frac{1}{2R}}=\frac{\sqrt{R^2-y^2}}{R}$
and $Y= R(2U-1)$, we get $\frac{f(y)}{c\,g(y)}=\sqrt{1-(2U-1)^2}$.
1. Draw $\ U$ from $\ U(0,1)$ and set $Y=R(2U-1)$
2. Draw $\ U_{1}$ from $\ U(0,1)$
3. If $U_{1} \leq \sqrt{1-(2U-1)^2}$, set $X = Y$; else return to step 1.
The condition is
$U_{1} \leq \sqrt{1-(2U-1)^2}$
$U_{1}^2 \leq 1 - (2U -1)^2$
$(2U - 1)^2 \leq 1 - U_{1}^2$
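To make the semicircular example concrete, here is a minimal Python sketch of the acceptance-rejection sampler just described. This is an illustrative translation of the class procedure, not part of the original notes; the choice R = 1 and the sample size are arbitrary.

```python
import numpy as np

def sample_semicircular(n, R=1.0, rng=None):
    """Acceptance-rejection sampling from f(x) = 2/(pi R^2) sqrt(R^2 - x^2) on [-R, R],
    using the proposal g = Unif(-R, R) and c = 4/pi."""
    rng = np.random.default_rng() if rng is None else rng
    samples = []
    while len(samples) < n:
        u = rng.uniform()                         # builds the proposal Y = R(2U - 1)
        y = R * (2 * u - 1)                       # Y ~ Unif(-R, R)
        u1 = rng.uniform()                        # acceptance test
        if u1 <= np.sqrt(1 - (2 * u - 1) ** 2):   # f(y) / (c g(y))
            samples.append(y)
    return np.array(samples)

x = sample_semicircular(10_000)
print(x.mean(), x.std())   # mean should be near 0 and std near R/2 for the semicircle law
```

On average the loop runs about c = 4/π ≈ 1.27 times per accepted sample, which matches the efficiency discussion above.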
Alternate series question 1. Mar 15, 2005 kdinser I'm reviewing power series for use in differential equations and I'm having some trouble remembering how to deal with alternating series. For instance, if I have: $$\sum(-1)^n(\frac{3}{2})^n$$ if $$a_n=(\frac{3}{2})^n$$ This fails the alternate series test because the limit of $$a_n$$ as n goes to infinity doesn't equal 0. Can I group the (-1)^n into the fraction and call it a geometric series? In that case, it would diverge, |r| would be greater then 1. 2. Mar 15, 2005 learningphysics Yes, both ways are fine. 3. Mar 15, 2005 kdinser thanks for the help. 4. Mar 15, 2005 Data If the summand doesn't go to zero, the series cannot converge, regardless of whether it is alternating or not (using the most common definition of convergence). 5. Mar 15, 2005 learningphysics Yes, you're right. This is the best way to solve the problem. The summand doesn't go to zero so the series diverges.
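As a quick numerical illustration of the point made in the thread (this is an aside, not part of the original posts): the partial sums of the series keep swinging with ever-growing magnitude precisely because the terms (−3/2)^n do not tend to 0.

```python
from itertools import accumulate

terms = [(-1.5) ** n for n in range(1, 21)]
partial_sums = list(accumulate(terms))
print(partial_sums[-4:])  # the partial sums oscillate between large positive and negative values
```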
# Description

You wrote down all integers from 0 to $10^n - 1$, padding them with leading zeroes so their lengths are exactly n. For example, if $n=3$ then you wrote out 000, 001, ..., 998, 999.

A block in an integer x is a consecutive segment of equal digits that cannot be extended to the left or to the right. For example, in the integer $00027734000$ there are three blocks of length $1$, one block of length $2$ and two blocks of length $3$.

For all integers $i$ from $1$ to $n$ count the number of blocks of length $i$ among the written down integers. Since these integers may be too large, print them modulo $998244353$.

Input

The only line contains one integer $n$ ($1 \le n \le 2 \cdot 10^5$).

Output

In the only line print $n$ integers. The $i$-th integer is equal to the number of blocks of length $i$. Since these integers may be too large, print them modulo $998244353$.

Examples

input
1
output
10

input
2
output
180 10

# Problem solving

For a fixed block length $i < n$, count the blocks according to where they sit in the string:

1. The block is at the leftmost or rightmost end. The block of length $i$ can use any of the $10$ digits ($0$-$9$); the digit adjacent to it on its inner side must differ from it (otherwise the block would have length $i+1$), so it has only $9$ choices. The remaining $n-i-1$ positions can be filled with any of the $10$ digits. Hence this case contributes $10 \cdot 2 \cdot 9 \cdot 10^{n-i-1}$ blocks of length $i$; the factor $2$ accounts for the two ends.

2. The block lies strictly in the middle. Again the block itself has $10$ choices, and each of the two digits adjacent to it has only $9$ choices. The remaining $n-i-2$ positions can be filled freely. A string of length $n$ has $n-i+1$ possible positions for a substring of length $i$, but the two end positions were handled in the previous case, so $n-i-1$ positions remain. Hence this case contributes $10 \cdot 9 \cdot 9 \cdot 10^{n-i-2} \cdot (n-i-1)$ blocks of length $i$.

For $i = n$ the whole string must consist of a single repeated digit, so the answer is simply $10$.

# Code

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
#define pb push_back
#define mp make_pair

const int maxn = 2e5 + 10;
const int mod = 998244353;
ll ten[maxn];                       // ten[i] = 10^i mod 998244353

int main() {
    ios::sync_with_stdio(0);
    ten[0] = 1;
    for (int i = 1; i < maxn; i++) ten[i] = ten[i - 1] * 10 % mod;
    ll n;
    cin >> n;
    for (int i = 1; i < n; i++) {
        // blocks touching either end + blocks strictly inside
        ll ans = (2ll * 9ll * ten[n - i] % mod
                  + 9ll * 9ll * ten[n - i - 1] * (n - i - 1) % mod) % mod;
        cout << ans << " ";
    }
    cout << 10 << endl;             // i = n: one block covering the whole string
    return 0;
}
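A brute-force cross-check for small n (not part of the original solution) can help confirm the closed-form formula used in the code above: enumerate all zero-padded strings and count blocks directly.

```python
def brute_force(n):
    cnt = [0] * (n + 1)
    for v in range(10 ** n):
        s = str(v).zfill(n)
        i = 0
        while i < n:
            j = i
            while j < n and s[j] == s[i]:
                j += 1
            cnt[j - i] += 1           # one block of length j - i
            i = j
    return cnt[1:]

def formula(n):
    res = []
    for i in range(1, n):
        res.append(2 * 9 * 10 ** (n - i) + 9 * 9 * 10 ** (n - i - 1) * (n - i - 1))
    res.append(10)
    return res

for n in range(1, 5):
    assert brute_force(n) == formula(n), n
print("formula matches brute force for n = 1..4")
```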
Notebook ## RVOL - 20 day Relative Volume Calculaterer¶ In [14]: """ First, if you don't know what RVOL is: it is used to compare the accumulated volume at a specific point in time on the day of interest to the average acuumulated volume by the same point over the past 20 days. Why not just make an SMA of volume and compare your current volume to that? RVOL indicates changes in participation minute by minute. For example: Comparing 5 minutes into the open when accumulated volume is 100K on an average day, and on the day in question it is 1M (RVOL would be 10). RVOL would give you a better idea of a change in the participation character of the stock in question. If the stock normally does 5M in average volume in a day, the 900K change by that minute might not be picked up as significant with an SMA because it is still below average. By the end of the day, volume might be 15M or 20M. For any given minute in a day, if RVOL is elevated (>1.0 means higher than average, >3.0 means there is an abnormal amount of volume), the stock is probably being traded by more participants, which in turn might cause it to move more than average (up or down), or range aggressively. SMB capital, a propietary trading firm located in uptown Manhattan, who's main speaker is Mike Bellafiore, author of the trading classics "one good trade" and "the playbook", uses the term "in play" to describe this... (if you have ever watched more than one of their videos on youtube, you'll understand that last sentence, it wasn't an advertising plug for them). When combined with a catalyst like news, its a good benchmark to let you know the stock is something you should spend time watching due to the increased volitility that usually comes with the combination. Summary of the code: -Create a relative volume calculation based on a symbol and date of interest as inputs. Relative volume needs to be instantiated as an object first, and then .calculate(date, "symbol") is called on that object (see the bottom 3 lines for an example) -Look back through 20 days of minute data -Make a rolling sum of the volume data minute to minute (accumulated the volume) -Make a dataframe of the accumulated volumes over the 20 days, the last column to be made (the day we are interested in) is removed from the dataframe and appended to the output dataframe as 'vol_today'. -Average accumulated volumes for each row (row = minute() from timestamps), add into a new column using [dataframe].mean(axis = 1). -Remove and appended the column to the output RVOL dataframe as 'avg_vol'. -Once that is done, vol_today is divided by avg_vol to give RVOL by minute -You could change the code to only copy the 'vol_today' if you wanted to continue to roll data into the dataframe over more than 1 day. -I am no expert programmer. There are likely issues with the code, I have run into a few. This is the cleanest most elegant code I could hack together to make this indicator, sorry. You are welcome to do what you want with it and share it. I will change it as I come across issues, but it is up to you to adapt it to your needs. 
***updated to be able to use on local jupyter notebook with something like a .csv file, got rid of NaN values by filling in all values with the rolling sums, added a line to be able to use pre/post market, convert times from UTC to Eastern US """ import numpy as np import pandas as pd from datetime import time, timedelta, date class RelativeVolume: def __init__(self): self.rvol = [] #use this function to create dates from timestamps def dateparse(timestamp): timer = pd.to_datetime(timestamp, unit ='s') - pd.Timedelta(hours = 5) return timer def calculate(self, tdate, symb): #-----intialize variables-------------------------------------------------------# self.separated_dates = [] #timestamp date handler self.separated_times = [] #timestamp time handler #20 trading days is standard for RVOL calculation, which is 4 - 5 day weeks when you remove weekends self.trading_date = tdate #trading date we are interested in, last day in data self.rvol_start = self.trading_date - timedelta(days=20) #rvol starting date self.symb = symb #symbol that is passed in #get minute pricing for the symbol and dates of interest pricing = get_pricing(self.symb, start_date=str(self.rvol_start), end_date=str(self.trading_date), frequency='minute') #converts UTC to US/Eastern pricing.index = pricing.index - pd.Timedelta(hours=5) #-----separate dates and times from timestamps into their own arrays----------# for x in pricing.index: mydates = x.date() if mydates not in self.separated_dates: self.separated_dates.append(mydates) #uncomment the below line if you are using data with pre/post market #time_range = pd.date_range(start = str(self.separated_dates[-1]) + ' ' + str(time(4,0)), end = str(self.separated_dates[-1]) + ' ' + str(time(20,0)), freq = "T") time_range = pd.date_range(start = str(self.separated_dates[-1]) + ' ' + str(time(9,30)), end = str(self.separated_dates[-1]) + ' ' + str(time(16,0)), freq = "T") for y in time_range: mytimes = y.time() if mytimes not in self.separated_times: self.separated_times.append(mytimes) #print(self.separated_dates) #print(self.separated_times) #-----Rolling volume accumulation--------------------------------------------# #initialize dataframe used to store accumulated volume values #if the day used to build this is a holiday, you may have an issue FYI, I didnt test rvol_helper = pd.DataFrame(index = self.separated_times, columns = self.separated_dates) #print(rvol_helper) #make a rolling sum of the volume by minute for each date for x in range(len(self.separated_dates)): currDate = self.separated_dates[x] #date we are working on minutes = [] #helper array rvolx = 0 #used to accumulate volume #print(currDate) #loop through the pricing data, sum the volumes for the specific day #append to the minutes array, which is then used to build the rvol_helper day by day for y in range(len(self.separated_times)): time_now = self.separated_times[y] date_check = pd.Timestamp.combine(currDate, time_now) try: rvolInt = pricing['volume'].loc[date_check] except: rvolInt = rvolx else: rvolx += rvolInt rvol_helper.at[time_now, currDate] = rvolx #------Parse data to dataframes for handling--------------------------------# #create RVOL dataframe self.rvol = pd.DataFrame(index = self.separated_times, columns = ('vol_today', 'avg_vol', 'rvol')) #print(rvol_helper) #debug #remove day of interest volume, add to RVOL dataframe as 'vol_today' self.rvol['vol_today'] = rvol_helper.pop(rvol_helper.columns[-1]) #average the 19 days of volume by minute (across rows) rvol_helper['avg_vol'] = 
rvol_helper.mean(axis=1) #print(rvol_helper) #debug #remove the column of average volumes per minute, append to RVOL as 'avg_vol' self.rvol['avg_vol'] = rvol_helper.pop(rvol_helper.columns[-1]) #calculate RVOL for the day using today volume and average volume for x in self.rvol.index: volTod = self.rvol['vol_today'].loc[x] avgVol = self.rvol['avg_vol'].loc[x] if volTod > 0 and avgVol > 0: self.rvol.at[x, 'rvol'] = volTod/avgVol else: self.rvol.at[x, 'rvol'] = 0 return(self.rvol) #---------variables to call the class/method------------# dajob = RelativeVolume() outputs = dajob.calculate(dadate, 'MBOT') #good symbol and date example to see why massive RVOL matters print(outputs) vol_today avg_vol rvol 09:30:00 0 0.000000 0 09:31:00 343648 4923.285714 69.8005 09:32:00 447008 6405.928571 69.7804 09:33:00 537217 7446.571429 72.1429 09:34:00 646290 11637.000000 55.5375 09:35:00 831077 13340.428571 62.2976 09:36:00 1.17107e+06 15557.428571 75.2739 09:37:00 1.79596e+06 17914.214286 100.253 09:38:00 2.12366e+06 18549.857143 114.484 09:39:00 2.31845e+06 19824.857143 116.947 09:40:00 2.77747e+06 21488.928571 129.251 09:41:00 2.93821e+06 23653.642857 124.218 09:42:00 3.07304e+06 25250.000000 121.705 09:43:00 3.31698e+06 25994.500000 127.603 09:44:00 3.99378e+06 26967.000000 148.099 09:45:00 4.42318e+06 27968.000000 158.151 09:46:00 4.79563e+06 29683.214286 161.56 09:47:00 5.00895e+06 30811.142857 162.569 09:48:00 5.40892e+06 31812.142857 170.027 09:49:00 5.54614e+06 32525.071429 170.519 09:50:00 5.83742e+06 33253.357143 175.544 09:51:00 6.24339e+06 34612.428571 180.38 09:52:00 6.67343e+06 35368.214286 188.684 09:53:00 7.03187e+06 35612.500000 197.455 09:54:00 7.22378e+06 36642.000000 197.145 09:55:00 7.36751e+06 37450.642857 196.726 09:56:00 7.52873e+06 38628.214286 194.902 09:57:00 7.66577e+06 39586.214286 193.647 09:58:00 8.19024e+06 40221.642857 203.628 09:59:00 8.57363e+06 41060.357143 208.806 ... ... ... ... 15:31:00 2.07192e+07 516437.928571 40.1194 15:32:00 2.07192e+07 518198.071429 39.9831 15:33:00 2.07192e+07 518719.500000 39.9429 15:34:00 2.07192e+07 519735.142857 39.8648 15:35:00 2.07192e+07 521389.428571 39.7384 15:36:00 2.07192e+07 522062.285714 39.6871 15:37:00 2.07192e+07 522744.857143 39.6353 15:38:00 2.07192e+07 523424.642857 39.5838 15:39:00 2.07192e+07 523846.000000 39.552 15:40:00 2.07192e+07 524171.000000 39.5275 15:41:00 2.07192e+07 524808.642857 39.4795 15:42:00 2.07192e+07 530611.785714 39.0477 15:43:00 2.07192e+07 538947.928571 38.4437 15:44:00 2.07192e+07 546611.428571 37.9047 15:45:00 2.07192e+07 553815.428571 37.4117 15:46:00 2.07192e+07 556650.285714 37.2211 15:47:00 2.07192e+07 559428.428571 37.0363 15:48:00 2.07192e+07 560865.000000 36.9414 15:49:00 2.07192e+07 562848.428571 36.8113 15:50:00 2.07192e+07 564035.428571 36.7338 15:51:00 2.07192e+07 565365.000000 36.6474 15:52:00 2.07192e+07 567933.142857 36.4817 15:53:00 2.07192e+07 575167.785714 36.0228 15:54:00 2.07192e+07 579138.142857 35.7759 15:55:00 2.07192e+07 580823.428571 35.672 15:56:00 2.07192e+07 584993.000000 35.4178 15:57:00 2.07192e+07 586573.642857 35.3224 15:58:00 2.07192e+07 588663.857143 35.1969 15:59:00 2.07192e+07 593617.642857 34.9032 16:00:00 2.07192e+07 597703.642857 34.6646 [391 rows x 3 columns] In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]: In [ ]:
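As a compact illustration of the core calculation described above (a simplified, self-contained sketch using synthetic data, not the author's full class), RVOL at each minute is just today's cumulative volume divided by the average cumulative volume over the look-back days:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
minutes = pd.date_range("09:30", "16:00", freq="min").time   # 391 one-minute bars

# synthetic per-minute volume: 20 look-back days plus "today" (hypothetical data)
lookback = pd.DataFrame(rng.integers(1_000, 5_000, size=(len(minutes), 20)), index=minutes)
today = pd.Series(rng.integers(1_000, 50_000, size=len(minutes)), index=minutes)

cum_lookback = lookback.cumsum(axis=0)   # accumulated volume per past day
cum_today = today.cumsum()               # accumulated volume today
avg_vol = cum_lookback.mean(axis=1)      # average accumulated volume by minute
rvol = cum_today / avg_vol               # > 1 means busier than usual by this minute

print(rvol.head())
```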
# Limits from equations Contents Now that we have all the conceptual stuff laid down, we can start have some fun with finding limits of various functions. Some of these limits don't want you to find them so fast, but we're sure you'll get them in the end! 10 exercises available ### Limits from equations (direct substitution) Take your first steps in finding limits algebraically. For example, find the limit of x²+5x at x=2, or determine whether the limit of 2x/(x+1) at x=-1 exists. ### Limits from equations (factoring & rationalizing) There are some limits that want us to work a little before we find them. Learn about two main methods of dealing with such limits: factorization and rationalization. For example, find the limit of (x²-1)/(x-1) at x=1. ### Squeeze theorem The Squeeze theorem (or Sandwich theorem) states that for any three functions f, g, and h, if f(x)≤g(x)≤h(x) for all x-values on an interval except for a single value x=a, and the limits of f and h at x=a are equal to L, then the limit of g at x=a must be equal to L as well. This may seem simple but it's pure genius. Learn how it helps us find tricky limits like sin(x)/x at x=0. ### Limits of trigonometric functions Find limits of trigonometric functions by manipulating the functions (using trigonometric identities) into expressions that are nicer to handle. For example, find the limit of sin(x)/sin(2x) at x=0. ### Limits of piecewise functions Remember one-sided limits? Well, these are very useful when dealing with piecewise functions. For example, analyze the limit at x=2 of the function that gives (x-2)² for values lower than 2 and 2-x² for values lager than 2. ### Removable discontinuities Removable discontinuities are points where a function isn't continuous but can become continuous with a small adjustment. Analyze such points and determine what adjustments should be made to "remove" them. ### Review: Limits from equations Review your limit-evaluation skills with some challenge problems.
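Where a symbolic check is handy, the limits mentioned above can be reproduced with SymPy (an illustrative aside, not part of the original exercise list):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit((x**2 - 1) / (x - 1), x, 1))     # 2, after factoring out (x - 1)
print(sp.limit(sp.sin(x) / x, x, 0))            # 1, the squeeze-theorem classic
print(sp.limit(sp.sin(x) / sp.sin(2*x), x, 0))  # 1/2
```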
OpenCV  3.4.12-pre Open Source Computer Vision Optical Flow Prev Tutorial: Meanshift and Camshift ## Goal In this chapter, • We will understand the concepts of optical flow and its estimation using Lucas-Kanade method. • We will use functions like cv.calcOpticalFlowPyrLK() to track feature points in a video. • We will create a dense optical flow field using the cv.calcOpticalFlowFarneback() method. ## Optical Flow Optical flow is the pattern of apparent motion of image objects between two consecutive frames caused by the movement of object or camera. It is 2D vector field where each vector is a displacement vector showing the movement of points from first frame to second. Consider the image below (Image Courtesy: Wikipedia article on Optical Flow). image It shows a ball moving in 5 consecutive frames. The arrow shows its displacement vector. Optical flow has many applications in areas like : • Structure from Motion • Video Compression • Video Stabilization ... Optical flow works on several assumptions: 1. The pixel intensities of an object do not change between consecutive frames. 2. Neighbouring pixels have similar motion. Consider a pixel $$I(x,y,t)$$ in first frame (Check a new dimension, time, is added here. Earlier we were working with images only, so no need of time). It moves by distance $$(dx,dy)$$ in next frame taken after $$dt$$ time. So since those pixels are the same and intensity does not change, we can say, $I(x,y,t) = I(x+dx, y+dy, t+dt)$ Then take taylor series approximation of right-hand side, remove common terms and divide by $$dt$$ to get the following equation: $f_x u + f_y v + f_t = 0 \;$ where: $f_x = \frac{\partial f}{\partial x} \; ; \; f_y = \frac{\partial f}{\partial y}$ $u = \frac{dx}{dt} \; ; \; v = \frac{dy}{dt}$ Above equation is called Optical Flow equation. In it, we can find $$f_x$$ and $$f_y$$, they are image gradients. Similarly $$f_t$$ is the gradient along time. But $$(u,v)$$ is unknown. We cannot solve this one equation with two unknown variables. So several methods are provided to solve this problem and one of them is Lucas-Kanade. We have seen an assumption before, that all the neighbouring pixels will have similar motion. Lucas-Kanade method takes a 3x3 patch around the point. So all the 9 points have the same motion. We can find $$(f_x, f_y, f_t)$$ for these 9 points. So now our problem becomes solving 9 equations with two unknown variables which is over-determined. A better solution is obtained with least square fit method. Below is the final solution which is two equation-two unknown problem and solve to get the solution. $\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} \sum_{i}{f_{x_i}}^2 & \sum_{i}{f_{x_i} f_{y_i} } \\ \sum_{i}{f_{x_i} f_{y_i}} & \sum_{i}{f_{y_i}}^2 \end{bmatrix}^{-1} \begin{bmatrix} - \sum_{i}{f_{x_i} f_{t_i}} \\ - \sum_{i}{f_{y_i} f_{t_i}} \end{bmatrix}$ ( Check similarity of inverse matrix with Harris corner detector. It denotes that corners are better points to be tracked.) So from the user point of view, the idea is simple, we give some points to track, we receive the optical flow vectors of those points. But again there are some problems. Until now, we were dealing with small motions, so it fails when there is a large motion. To deal with this we use pyramids. When we go up in the pyramid, small motions are removed and large motions become small motions. So by applying Lucas-Kanade there, we get optical flow along with the scale. 
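As a small numerical illustration of the least-squares step above (not from the tutorial itself), the 2×2 system for a 3×3 patch can be solved directly with NumPy, given the spatial and temporal gradients of the nine pixels; the gradient values below are placeholders.

```python
import numpy as np

# hypothetical gradients for the 9 pixels of a 3x3 patch
fx = np.random.randn(9)
fy = np.random.randn(9)
ft = np.random.randn(9)

A = np.stack([fx, fy], axis=1)                 # 9x2 system: A [u v]^T = -ft
uv, *_ = np.linalg.lstsq(A, -ft, rcond=None)
print("flow (u, v):", uv)

# equivalent normal-equation form shown in the tutorial
S = np.array([[np.sum(fx * fx), np.sum(fx * fy)],
              [np.sum(fx * fy), np.sum(fy * fy)]])
b = -np.array([np.sum(fx * ft), np.sum(fy * ft)])
print("via normal equations:", np.linalg.solve(S, b))
```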
## Lucas-Kanade Optical Flow in OpenCV OpenCV provides all these in a single function, cv.calcOpticalFlowPyrLK(). Here, we create a simple application which tracks some points in a video. To decide the points, we use cv.goodFeaturesToTrack(). We take the first frame, detect some Shi-Tomasi corner points in it, then we iteratively track those points using Lucas-Kanade optical flow. For the function cv.calcOpticalFlowPyrLK() we pass the previous frame, previous points and next frame. It returns next points along with some status numbers which has a value of 1 if next point is found, else zero. We iteratively pass these next points as previous points in next step. See the code below: (This code doesn't check how correct are the next keypoints. So even if any feature point disappears in image, there is a chance that optical flow finds the next point which may look close to it. So actually for a robust tracking, corner points should be detected in particular intervals. OpenCV samples comes up with such a sample which finds the feature points at every 5 frames. It also run a backward-check of the optical flow points got to select only good ones. Check samples/python/lk_track.py). See the results we got: image ## Dense Optical Flow in OpenCV Lucas-Kanade method computes optical flow for a sparse feature set (in our example, corners detected using Shi-Tomasi algorithm). OpenCV provides another algorithm to find the dense optical flow. It computes the optical flow for all the points in the frame. It is based on Gunner Farneback's algorithm which is explained in "Two-Frame Motion Estimation Based on Polynomial Expansion" by Gunner Farneback in 2003. Below sample shows how to find the dense optical flow using above algorithm. We get a 2-channel array with optical flow vectors, $$(u,v)$$. We find their magnitude and direction. We color code the result for better visualization. Direction corresponds to Hue value of the image. Magnitude corresponds to Value plane. See the code below: See the result below: image
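The code referenced above is not reproduced in this copy of the tutorial; the sketch below is a condensed stand-in modelled on the standard OpenCV sample (the video file name and parameter values are placeholders, and drawing of the tracks is omitted):

```python
import cv2 as cv

cap = cv.VideoCapture('video.mp4')              # placeholder file name
feature_params = dict(maxCorners=100, qualityLevel=0.3, minDistance=7, blockSize=7)
lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 10, 0.03))

ret, old_frame = cap.read()
old_gray = cv.cvtColor(old_frame, cv.COLOR_BGR2GRAY)
p0 = cv.goodFeaturesToTrack(old_gray, mask=None, **feature_params)   # Shi-Tomasi corners

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame_gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    # sparse Lucas-Kanade: track p0 from the previous frame into the current one
    p1, st, err = cv.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None, **lk_params)
    good_new = p1[st == 1]
    old_gray = frame_gray.copy()
    p0 = good_new.reshape(-1, 1, 2)

# dense Farneback flow between two grayscale frames (returns per-pixel (u, v)):
# flow = cv.calcOpticalFlowFarneback(prev_gray, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
```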
## Introduction Indocyanine green (ICG) is a molecule developed in the 1950s at Kodak’s R &D laboratories1, applied in the field of infrared photography. This molecule is the first substance discovered capable of emitting fluorescence in the near infrared (NIR) spectrum, as it becomes fluorescent when illuminated with infrared light. This substance has negligible toxicity and is quickly disposed of by the body without side effects, except for rare allergic reactions easy to prevent2. In 1959, the Food and Drug Administration (FDA) approved its use in clinical settings3, and since then, it has been widely used for diagnostic investigations for pathology affecting heart, eyes, liver, and lungs. This substance is injected into the patient’s vein before surgery or near the tumor mass to be removed the day before surgery. The molecule binds to plasma proteins present in the blood, giving its fluorescent properties to the blood, liver, and biliary circulation4. More recently, ICG has been largely used in the surgical field thanks to the introduction of fluorescence detectors, namely optical systems for excitation and detection of the emitted fluorescence. A relevant topic of research is the adoption of ICG to estimate the perfusion quality in laparoscopic surgery5,6,7,8,9,10. This is essential to assess whether the intestine is adequately perfused; this, in fact, serves as an indication of the outcome of the procedure11,12,13. In fact, a perfusion deficiency at the point where an anastomosis is performed increases the risk of anastomotic dehiscence, which consists in a failure to heal the sutures with the consequent appearance of fistulas and tissue perfusion14. Therefore, assessing the quality of the perfusion by means of ICG allows the surgeon to promptly intervene while the surgical procedure is ongoing. The most used technique to verify the perfusion of an intestinal segment is to inject ICG into the patient body. This element makes the blood fluorescent with a green tinge if lightened with infrared light. The evaluation of the intensity and the uniformity of this fluorescence allows to assert if the parts are adequately perfused. This technique was successfully used by Boni14 to provide information related to perfusion during colorectal surgery, and assist the surgeon in adopting the best strategy in the phases of colorectal anastomosis (stitching of two conical stumps), often necessary in colorectal interventions. Moreover, other applications of ICG relate to the dynamic discrimination of primary colorectal cancer using systemic indocyanine green with NIR endoscopy15, intraoperative ureter identification, and lymph node dissection16. Currently, the fluorescence brightness of ICG is evaluated only qualitatively and subjectively by the surgeon, basing on experience. Indeed, at the state of the art, there are no systems or techniques used to quantify it, and to objectively support the surgeons in their assessment. At the state of the art, several attempts to design systems capable to help the surgeons in the assessment of perfusion quality have been made17,18,19,20. These approaches are mostly based on the diffusion speed of the indocyanine in the tissues. The perfusion of the colorectal segment is estimated by looking at the gradient of the intensity of ICG fluorescence brightness captured by the camera. The output provided by these systems is a heat map highlighting the intestinal portions characterized by a faster increase of ICG fluorescence brightness after the injection. 
Furthermore, they try to correlate the heat map with the post-surgery result. However, the methods described in these works are not able to automatically assess if the perfusion is good or not in the analyzed area, as they only provide a graphical output that has to be interpreted by the surgeons subjectively. To overcome this issue, the branch of technology that is becoming increasingly popular is Artificial Intelligence (AI)21,22, and in particular Machine Learning (ML). The capillary diffusion of powerful calculators, along with the effort of researchers to develop effective algorithms23,24,25 have contributed to the widespread adoption of this technology in a large variety of application contexts, such as healthcare26,27. At the state of art, there are many examples of ML-based approaches used in the medical field, as proof that this technology can help and assist the operators to minimize the risks for the patients and prevent complications. For example, a decision support system based on AI was used by Cahill et al.28 in colorectal cancer intra-operative tissue classification. Instead, Park et al.29 adopted AI to evaluate the feasibility of AI-based real-time analysis of microperfusion to predict the risk of anastomotic complication in the patient with laparoscopic colorectal cancer surgery. According to Igaki et al.30, a first study to use an image-guided navigation system with total mesorectal excision was conducted. Moreover, Sanchez et al.31 provided a systematic literature review regarding the use of AI to find colorectal polyps in colonoscopy. Finally, Kitaguchi et al.32 used AI to identify laparoscopic surgical videos, in order to facilitate the automation of time-consuming manual processes, such as video analysis, indexing, and video-based skill assessment. Nevertheless, there are still no methods based on AI that automatically assess the quality of perfusion in the analyzed area. Starting from these considerations, in this paper, a ML-based system to objectively assess if an intestinal sector is adequately perfused after an injection of ICG is proposed. This system is used to (i) acquire information related to perfusion during laparoscopic colorectal surgery, and (ii) objectively support the surgeon in assessing the outcome of the procedure. In particular, the algorithm provides an output that classifies the perfusion as adequate or inadequate. From an implementation point of view, the system works on a video extracted from a laparoscopic camera and a Region of Interest (ROI). The ROI is selected by a member of the operating room team (e.g., an assistant surgeon) and contains the area to be assessed. Then, it adopts a set of pre-processing steps to build the input of a Feed Forward Neural Network used to evaluate the quality of perfusion. The precise tuning of the neural network hyper-parameters allows the proposed architecture to have a prediction accuracy high enough to anticipate the possible adoption of the system as a standard routine to be applied during surgery. As a proof-of-concept demonstration, the case study, based on perfusion analysis applied to abdominal laparoscopic surgery at University Hospital Federico II in Naples, Italy, is reported. The feasibility of such approach in real time is proven with optimal performance. Thus, the system can represent an effective decision support for both less-experienced surgeons and those at the beginning of the learning curve. 
## Materials and methods The problem addressed in this work can be formally expressed as a two-class classification problem involving frames (from a video streaming) corresponding to an adequately - or inadequately - perfused area. To this purpose, the idea was to develop a system that automatically assesses the quantity of ICG present in the ROI, by computing the histogram of the green band of the acquired frames, then providing an output corresponding to an adequate or to an inadequate perfusion. The study was conducted according to the guidelines of the Declaration of Helsinki. Because the study does not include a pharmacological experimentation, using medical devices or patient data, but only the computer analysis of video material (collected during routine clinical practice), approval by the Ethics Committee is not necessary. Each patient signed an informed consent for the surgical procedure and approved the use of their data by third parties. ### System architecture The overall architecture of the proposed system is shown in Fig. 1. The input of the system are (i) the frames coming from the Video streaming, and (ii) the ROI identified as a rectangular box selected by the user. The ROI, which identifies the portion of the frame to be analyzed, is selected by the OR operator using the mouse or the track-pad on the computer when starting the algorithm. The architecture, from left to right, is composed of the following three functional blocks: • The first block consists of a Fast tracking algorithm, which is used to track the selected ROI during the video execution. In particular, the Minimum Output Sum of Squared Error (MOSSE) tracker33 was exploited, as it uses adaptive correlation to track objects, resulting in a better robustness to variations in lightning, pose, scale and non rigid transformations. The MOSSE implements also an auto pause and resume functionality if the object to track disappear (for example, if the surgeon covers it) and then it reappears again. Moreover, the exploited tracker can work at high frame rates (more than 450 fps). • Once the frames containing the ROI are extracted from the video source, the second block performs a Features extraction. The frames are in the RGB format, namely the colored image is obtained by a combination of three images, one for each color channel: red, green and blue. Each pixel has an 8-bit resolution; this value represents the intensity of the pixel. Afterwards, the ROI of each frame is divided into 20 vertical equal slices and, for each of them, the histogram of the green band and its area is computed according to Eq. 1: \begin{aligned} A_i= \sum _{l=k}^{255} count_i(l)\cdot [b(l+1)-b(l)]\,\,\,\, 1\le i \le 20 \end{aligned} (1) where $$A_i$$ is the generic element of the features vector corresponding to the slice i; b(l) is the bin value at the level l; $$count_i(l)$$ is the number of occurrences of the green intensity at the level l for the slice i; and k is a parameter used to exclude pixels with low values of green. In this work, $$k=25$$ was chosen as it guaranteed the best classification performance. Finally, a vector of 20 elements (features vector) is obtained. This vector becomes the input to the last functional block. • This step of the process binarily Classifies the feature vector and establishes whether it corresponds to an adequate/1 or to a inadequate/0 perfusion of the colorectal portion. This classifier is obtained by a Feed Forward Neural Network. 
The binary cross-entropy was selected as a loss function, and the optimizer Adam34,35 was chosen. ### Model evaluation and selection The following Neural Networks (NN) were evaluated as classifiers: • One-hidden-layer NN: In this case, a classic feed forward neural network (FFNN) with one hidden layer was used. The output layer had a single neuron with a sigmoidal activation function. The hidden layer was preliminarly tested with (i) 20 neurons and a Rectifier Linear Unit (ReLU) activation function, and (ii) 80 neurons and a Tanh activation function. After, this network was further tested with the following activation functions: Tanh, Sigmoid, and Rectifier Linear Unit (ReLU): here, for each activation function the number of neurons changed between 10 and 100, with step 10. • Two-hidden-layer NN: In this case, a FFNN composed by 2 hidden layers was used. Different combinations of ReLU, Sigmoid and Tanh activation function, considering a number of neurons equal to 50, 70, and 90, were tested. SoftMax activation function, and two neurons were used for the output layer. Therefore, the tuned hyper-parameters were (i) activation functions, and (ii) number of neurons for each hidden layer. Moreover, Support Vector Machine (SVM)36 method was used with linear and Gaussian kernel as a baseline classifier. For each hyper-parameter configuration, all the aforementioned ML models were validated on the entire data set using the K-fold Cross Validation (CV)37 with K=10 folds. K-fold CV is a standard approach to assess and select a ML model in a statistically significant manner and without overfitting38. The data set is divided in K folds and the network is trained K times for each combination of hyper-parameters. Each time the network is trained, one of the K folds of the data set is used as test set and all the remaining K-1 as training set. The selection of the best model was conducted according to mean of the obtained accuracies over the K test folds, defined as the percentage of correct classification. After the selection, the chosen model was trained again on the entire data set in order to extract as much information as possible from the data39, to use it in real-time in an actual surgery scenario. The proposed algorithm was developed in Python 2.7 on Windows 10. The open-source framework and libraries used are TensorFlow, Keras, and OpenCV. The training of the proposed NNs was conducted by setting a number of epochs equal to 100 and the batch size equal to 5. ### Ethical approval The study was conducted according to the guidelines of the Declaration of Helsinki Approval of the institutional review committee was not required because the data of the present study were collected during routine clinical practice. Each patient signed an informed consent for the surgical procedure and approved the use of their data by third parties. ## Experimental results and discussion In this section, first, the laboratory experimental validation is described: during this phase, a data set provided by the surgeons was used to train and validate the ML classifiers adopted by the proposed algorithm. Then, an online validation in OR was carried out, by employing the best ML model obtained after the training. ### Laboratory experimental validation 1. 1. Setup A total of 11 videos in .M4V format were provided by the surgeons: the videos were collected and labelled by the medical staff during routine clinical practice. An anonymisation procedure was applied to protect patients privacy. 
In particular, any metadata was removed from the original files. These files contain the video, acquired directly from the endoscope during surgery, related to the portions of the intestine where the anastomosis was being performed. When the ICG was injected, the portion that was well perfused became fluorescent. An example of frames extracted from the dataset is shown in Fig. 2, which shows the intraoperative use of ICG technology. In particular, Fig. 2a refers to the fluorescence angiography which shows the vascular perfusion of the intestinal segment that delimits the section point. Fig. 2b shows the anastomosis being performed with the residual colon. Fluorescence angiography was performed using a laparoscopic system (Olympus OTV-S300, Olympus Europe SE & Co. KG, Hamburg, Germany) with a light source (Olympus CLV-S200 IR) which allowed the use of both visible and near-infrared light. 2. 2. Results The performance of the developed NNs was validated by processing the 11 videos available from the dataset. From each video of the data set, different frames were extracted. To properly train the model in assessing the quality of perfusion, frames containing ROIs with clear evidence of ICG were selected as well as more tricky ones. From each ROI, 20 features vectors were obtained. The total size of the dataset is constituted by 470 frames. Figure 3 illustrates the overall process of features extraction. The considered frame is shown in Step 1. As aforementioned, the ROI selected by the user is splitted into 20 slices (Step 2); therefore, for each obtained slice (Step 3), the histogram of green, showing the occurrence of each level of green, is computed (Step 4). The area of the histogram constitutes an element of the feature vector. The extracted features are given as input to the aforementioned Classifiers: in Table 1 the chosen set of hyper parameters for each of the classifiers with the corresponding obtained accuracy, in terms of means and $$1-\sigma$$ repeatability, is summarized. The obtained experimental results show that the one-hidden layer (L1) NN with 20 neurons and ReLU as activation function is the one that achieves the best performance, with an average accuracy of $$99.9\%$$ and a $$1-\sigma$$ repeatability of $$1.9\%$$. It outperforms the results obtained with the use of Sigmoidal and Tanh activation functions even with more neurons in the hidden layer. It was also found that both SVM and two-hidden layer NN exhibit worse results than the one-hidden layer networks. In fact, with SVM the best achieved accuracy is $$54.5\%$$, while with the two-hidden layer NN the best accuracy reached $$85.2\%$$. Since the NN with one hidden layer achieved the best results, further tests were dedicated to fine tune the number of neurons. Table 2 and Fig. 4 summarize the detail of the performance for the one-hidden layer networks as both the number of neurons and the activation function are varied. It can be observed that the best results are always obtained using ReLU as activation function. The number of neurons which achieved the greatest accuracy was confirmed to be 20. The results reported in Table 2 were statistically validated by means of One-Way ANOVA and Fischer test, by verifying the statistical significance of the differences between the mean accuracies obtained by the three different activation functions used (Tanh, Sigmoid, and ReLU). The chosen null hypothesis $$H_0$$ was that the groups belonged to the same population with a significance level $$\alpha$$ = 1.0%. 
The test rejected the null hypothesis with a P-value = $$0.0\%$$. Therefore, the Paired t-test was carried out to understand which of the three groups is different from the others. The significance level $$\alpha$$ was again set equal to 1.0%. For all three tests the hypothesis $$H_0$$, that assumed the groups were identical, was rejected. The analysis was conducted by means of the online tool Statistic Kindgom40. Further details are reported in Table 3. The tests confirmed that the results obtained with ReLU activation function and 20 neurons are statistically relevant. In fact, there is a significant difference between this model and the others with different activation functions. Hence, this model was chosen for the classification stage of the proposed system and used in the prototype version. For the sake of example, Fig. 5 shows the results obtained by applying the proposed algorithm to frames of the data set with good and bad perfusion. For each considered frame, the corresponding ROI is indicated. In particular, Fig. 5a and d have ROI classified with good perfusion (good amount of green) and prediction 1. On the other hand, Fig. 5b and c has prediction output equal to 0 because the ROI are considered by the algorithm as bad perfused due to a low grade of green or due to a not uniform presence of green in the selected ROI. In each of the considered cases, the correctness of the classification was confirmed by the surgeons. ### Operating room experimental validation 1. 1. Setup After the offline validation, the algorithm was further validated using the equipment available at the University Hospital Federico II in Naples, Italy. The aim was to ensure the possibility of interfacing the proposed system with the medical equipment. An additional aspect to consider is, in fact, the real-time interfacing with the endoscope. The endoscope used was the Olympus Visera Elite II. It is an imaging platform for general surgery, urology, gynecology, and more, which links the OR to other devices and facilities around the hospital. An S-video to USB adapter was used to connect the endoscope to a PC equipped with Windows 10 and Python 2.7. The video captured from the endoscope was transmitted in real time to an elaboration unit kept outside the OR. Therefore, surgeons who did not take part in the operation were asked to select the ROIs. However, this workflow can be also conducted by the main surgical team inside the OR. 2. 2. Results The algorithm was able to receive and process at least 30 frame per seconds (fps) from the video source: this frame rate was considered acceptable for the surgeons to select the ROI and use the system. Moreover, the output provided by the algorithm might effectively help surgeons to take the decision even in unclear situations (i.e., low brightness). These additional trials demonstrated the feasibility of the practical implementation of the proposed ML-based algorithm Fig. 6, the output of the system working under three different levels of green brightness is shown. This also demonstrates that the proposed system is able to correctly classify the frame regardless of the level of green brightness. ## Conclusions A system based on ML classifiers is proposed to assist the surgeons during laparoscopic colorectal surgery. It is a decision-support system able to automatically asses if the quality of the perfusion is adequate or inadequate after an injection of indiocyainine green dye. 
Different models of classifiers were tested on a dataset of videos of several anastomoses carried out at the Federico II Hospital. Overall, the one-hidden-layer NN with 20 neurons and ReLU activation function achieved the best performance. In fact, the obtained results showed a prediction accuracy of $$99.9\%$$ with a $$1-\sigma$$ repeatability of $$1.9\%$$. These results were statistically validated by means of (i) ANOVA, and (ii) Fischer and Paired-t tests. Therefore, this model was selected for the system implementation. The proposed system was successfully validated also in relation to the interfacing with actual equipment used at the University Hospital Federico II in Naples, Italy. It can represent an important decision support to surgeons during the operation, especially in condition of uncertainty - where it is not clear whether the blood perfusion is adequate or not - due to an unclear presence of ICG. Future work will be addressed to overcome the current research weakness, by (i) introducing more levels between adequate and inadequate perfusion, in order to increase the resolution of the assessment and further enhance accuracy of prediction, (ii) identifying a method to automatically select the ROIs, and (iii) enrich the dataset by facing circumstances when the blood perfusion is impaired by underlying pathologies (e.g., atherosclerosis). In this case, in fact, both the classifier and the surgeon are not trained to correctly assess whether perfusion is adequate or not.
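To make the pipeline described in "Materials and methods" easier to follow, here is a minimal sketch of the per-slice green-histogram feature extraction and the selected one-hidden-layer network. This is an illustrative reconstruction under stated assumptions (20 vertical slices, threshold k = 25, 20 ReLU neurons, sigmoid output, binary cross-entropy with Adam), not the authors' released code; the function and variable names are hypothetical.

```python
import numpy as np
from tensorflow import keras

def green_features(roi_bgr, n_slices=20, k=25):
    """Area of the green-channel histogram (levels >= k) for each vertical slice of the ROI."""
    green = roi_bgr[:, :, 1]                          # assumes OpenCV-style BGR frames
    slices = np.array_split(green, n_slices, axis=1)  # 20 vertical slices
    feats = []
    for s in slices:
        hist, _ = np.histogram(s, bins=256, range=(0, 256))  # unit-width bins
        feats.append(hist[k:].sum())                  # A_i from Eq. (1) with b(l+1)-b(l) = 1
    return np.asarray(feats, dtype=float)

# one-hidden-layer FFNN: 20 inputs -> 20 ReLU units -> 1 sigmoid output
model = keras.Sequential([
    keras.layers.Dense(20, activation='relu', input_shape=(20,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(X_train, y_train, epochs=100, batch_size=5)   # training settings reported in the paper
```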
<meta http-equiv="refresh" content="1; url=/nojavascript/"> You are viewing an older version of this Concept. Go to the latest version. # Powers and Roots of Complex Numbers ## De Moivre's Theorem using polar form. 0% Progress Practice Powers and Roots of Complex Numbers Progress 0% Powers and Roots of Complex Numbers Manually calculating (simplifying) a statement such as: or in present (rectangular) form would be a very intensive process at best. Fortunately you will learn in this lesson that there is an alternative: De Moivre's Theorem. De Moivre's Theorem is really the only practical method for finding the powers or roots of a complex number, but there is a catch... What must be done to a complex number before De Moivre's Theorem can be utilized? ### Watch This This lesson reviews the two opposite operations involving De Moivre's Theorem: finding powers, and finding roots. The video lessons review each operation separately. Embedded Video: a. Raising Complex Numbers to a Power: b. Finding Roots of Complex Numbers: ### Guidance Powers of Complex Numbers How do we raise a complex number to a power? Let’s start with an example (-4 - 4i)3 = (-4 - 4i) x (-4 - 4i) x (-4 - 4i) = In rectangular form, this can get very complex. What about in r cis θ form? So the problem becomes and using our multiplication rule from the previous section, Notice, (a + bi)3 = r3 cis 3 θ In words: Raise the r-value to the same degree as the complex number is raised and then multiply that by cis of the angle multiplied by the number of the degree. Reflecting on the example above, we can identify De Moivre's Theorem: Let z = r(cos θ + i sin θ) be a complex number in rcisθ form. If n is a positive integer, zn is zn = rn (cos() + i sin()) It should be clear that the polar form provides a much faster result for raising a complex number to a power than doing the problem in rectangular form. Roots of Complex Numbers I imagine you noticed long ago that when an new operation is presented in mathematics, the inverse operation often follows. That is generally because the inverse operation is often procedurally similar, and it makes good sense to learn both at the same time. This is no exception: The inverse operation of finding a power for a number is to find a root of the same number. a) Recall from Algebra that any root can be written as x1/n b) Given that the formula for De Moivre’s theorem also works for fractional powers, the same formula can be used for finding roots: #### Example A Find the value of and θ is in the 1st quadrant, so Using our equation from above: Expanding cis form: Finally we have z4 = -8 - 13.856i #### Example B Find Solution First, rewriting in exponential form: (1 + i)½ And now in polar form: Expanding cis form, Using the formula: In decimal form, we get =1.189( 0.924 + 0.383i) =1.099 + 0.455i To check, we will multiply the result by itself in rectangular form: #### Example C Find the value of Solution First we put in polar form. Use to obtain let in rectangular form in polar form Use De Moivre’s Equation to find the first solution: or Leave answer in cis form to find the remaining solutions: n = 3 which means that the 3 solutions are radians apart or and Note: It is not necessary to add again. Adding three times equals 2π. That would result in rotating around a full circle and to start where it all began- that is the first solution. The three solutions are: Each of these solutions, when graphed will be apart. Check any one of these solutions to see if the results are confirmed. 
Checking the second solution:

Does $(-0.965 - 0.810i)^3 = 1 - \sqrt{3}i$?

$(-0.965 - 0.810i)(-0.965 - 0.810i)(-0.965 - 0.810i) \approx 1.00 - 1.73i \approx 1 - \sqrt{3}i$, which confirms the solution.

Concept question follow-up

A complex number operation written in rectangular form, such as the ones above, must be converted to polar form to utilize De Moivre's Theorem.

### Vocabulary

De Moivre's Theorem is the only practical manual method of identifying the powers or roots of complex numbers.

### Guided Practice

1) What are the two square roots of $i$?

2) What are the four fourth roots of 1?

3) Calculate:

Solutions

1) Let $z = (0 + i)^{1/2}$. Since $i = 1 \operatorname{cis} 90^\circ$, utilizing De Moivre's Theorem:

$z_1 = 1^{1/2} \operatorname{cis} 45^\circ = 0.707 + 0.707i$ or $z_2 = 1^{1/2} \operatorname{cis} 225^\circ = -0.707 - 0.707i$

Check for the $z_1$ solution: $(0.707 + 0.707i)^2 = i$?

$0.500 + 0.500i + 0.500i + 0.500i^2 = 0.500 + i + 0.500(-1)$, or $i$

2) Let $z = 1$, or $z = 1 + 0i$. Then the problem becomes: find $z^{1/4} = (1 + 0i)^{1/4}$.

Since $1 + 0i = 1 \operatorname{cis} 0^\circ$, with $r = 1$ and $\theta = 0^\circ$, we get $z_1 = 1^{1/4} \operatorname{cis} 0^\circ = 1$. That root is not a surprise. Now use De Moivre's to find the other roots: since there are 4 roots, dividing $2\pi$ by 4 yields $0.5\pi$, so $z_2 = 1 \operatorname{cis} \frac{\pi}{2} = 0 + i$, or just $i$, which yields $z_3 = 1 \operatorname{cis} \pi = -1$. Finally, $z_4 = 1 \operatorname{cis} \frac{3\pi}{2} = -i$.

The four fourth roots of 1 are $1, i, -1$ and $-i$.

3) To calculate the power, start by converting the number to $r \operatorname{cis} \theta$ form. First find $r$ and $\theta$: recall $r = \sqrt{x^2 + y^2}$ and $\tan \theta = \frac{y}{x}$; here $\theta$ is in quadrant I. Now that we have trigonometric form, the rest is easy: write the original problem in $r \operatorname{cis} \theta$ form, apply De Moivre's Theorem, simplify, and simplify again.

### Practice

Perform the indicated operation on these complex numbers:

1. Divide:
2. Multiply:
3. Multiply:
4. Find the product using polar form:
5. Multiply:
6. Multiply:
7. Divide:
8. Divide:

Use De Moivre's Theorem:

1. Identify the 3 complex cube roots of
2. Identify the 4 complex fourth roots of
3. Identify the five complex fifth roots of

### Vocabulary

complex number: A complex number is the sum of a real number and an imaginary number, written in the form $a + bi$.

De Moivre's Theorem: De Moivre's Theorem is the only practical manual method for identifying the powers or roots of complex numbers. The theorem states that if $z= r(\cos \theta + i \sin \theta)$ is a complex number in $r \operatorname{cis} \theta$ form and $n$ is a positive integer, then $z^n=r^n (\cos (n\theta ) + i\sin (n\theta ))$.
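The lesson above can also be checked numerically. The following short Python sketch (not part of the original lesson) uses the standard cmath module to compare De Moivre's power formula against direct multiplication and to recover the three cube roots from Example C.

```python
# A quick numerical check of De Moivre's Theorem (not part of the original
# lesson): raise a complex number to a power via polar form and recover the
# n-th roots, then compare with direct arithmetic. Uses only Python's cmath.
import cmath

z = -4 - 4j
n = 3

# Power via polar form: z^n = r^n * cis(n*theta)
r, theta = cmath.polar(z)
power_polar = (r ** n) * cmath.exp(1j * n * theta)
print(power_polar, z ** n)          # both ~ (128 - 128j)

# n-th roots of w: r^(1/n) * cis((theta + 2*pi*k)/n), k = 0 .. n-1
w = 1 - cmath.sqrt(3) * 1j
rw, thw = cmath.polar(w)
roots = [(rw ** (1 / n)) * cmath.exp(1j * (thw + 2 * cmath.pi * k) / n)
         for k in range(n)]
for root in roots:
    print(root, root ** n)          # each root cubed gives back ~ (1 - 1.732j)
```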
# Principal Normal Section

On the surface $$x=u^2+v^2, \ \ \ y=u^2-v^2,\ \ \ z=uv$$ we take the point $P(u=1,v=1)$.

$1)$ Compute the principal curvatures of the surface at point $P.$

$2)$ Find the equations of the tangents $PT_1, \ PT_2$ to the principal normal sections at the indicated point.

$3)$ Find the curvature of the normal section passing through the tangent to the curve $v=u^2$

I have solved the first point and the principal curvatures are: $$\kappa_1=0, \ \kappa_2=\frac{160}{80^{\frac{3}{2}}}$$ but I do not know what should I do in the next two points. So, can someone please help me?

• Can anyone help me please? – Hitman May 15 '17 at 9:42
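Not part of the original post, but one way to double-check part $1)$ is to compute the fundamental forms symbolically. The sketch below assumes SymPy and reads the principal curvatures off the shape operator at $u=v=1$; it should reproduce $\kappa_1=0$ and $\kappa_2=\frac{160}{80^{3/2}}=\frac{1}{2\sqrt{5}}$.

```python
# One way to check part 1) symbolically (not from the original post), assuming
# SymPy: build the first and second fundamental forms of the parametrized
# surface and read the principal curvatures off the shape operator at u=v=1.
import sympy as sp

u, v = sp.symbols('u v', real=True)
r = sp.Matrix([u**2 + v**2, u**2 - v**2, u*v])

ru, rv = r.diff(u), r.diff(v)
ruu, ruv, rvv = ru.diff(u), ru.diff(v), rv.diff(v)

n = ru.cross(rv)
n = n / n.norm()                      # unit normal

E, F, G = ru.dot(ru), ru.dot(rv), rv.dot(rv)
L, M, N = ruu.dot(n), ruv.dot(n), rvv.dot(n)

I  = sp.Matrix([[E, F], [F, G]])      # first fundamental form
II = sp.Matrix([[L, M], [M, N]])      # second fundamental form

shape = I.inv() * II                  # shape operator (Weingarten map)
evs = list(shape.subs({u: 1, v: 1}).eigenvals())

# Expect 0 and 160/80**(3/2) = 1/(2*sqrt(5))
print([sp.simplify(e) for e in evs])
```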
## 94.21 Comparison

In this section we collect some results on comparing cohomology defined using stacks and using algebraic spaces.

Lemma 94.21.1. Let $S$ be a scheme. Let $\mathcal{X}$ be an algebraic stack over $S$ representable by the algebraic space $F$.

1. If $\mathcal{I}$ is injective in $\textit{Ab}(\mathcal{X}_{\acute{e}tale})$, then $\mathcal{I}|_{F_{\acute{e}tale}}$ is injective in $\textit{Ab}(F_{\acute{e}tale})$,

2. If $\mathcal{I}^\bullet$ is a K-injective complex in $\textit{Ab}(\mathcal{X}_{\acute{e}tale})$, then $\mathcal{I}^\bullet |_{F_{\acute{e}tale}}$ is a K-injective complex in $\textit{Ab}(F_{\acute{e}tale})$.

The same does not hold for modules.

Proof. This follows formally from the fact that the restriction functor $\pi _{F, *} = i_ F^{-1}$ (see Lemma 94.10.1) is right adjoint to the exact functor $\pi _ F^{-1}$, see Homology, Lemma 12.29.1 and Derived Categories, Lemma 13.31.9. To see that the lemma does not hold for modules, we refer the reader to Étale Cohomology, Lemma 58.93.1. $\square$

Lemma 94.21.2. Let $S$ be a scheme. Let $f : \mathcal{X} \to \mathcal{Y}$ be a morphism of algebraic stacks over $S$. Assume $\mathcal{X}$, $\mathcal{Y}$ are representable by algebraic spaces $F$, $G$. Denote $f : F \to G$ the induced morphism of algebraic spaces.

1. For any $\mathcal{F} \in \textit{Ab}(\mathcal{X}_{\acute{e}tale})$ we have $(Rf_*\mathcal{F})|_{G_{\acute{e}tale}} = Rf_{small, *}(\mathcal{F}|_{F_{\acute{e}tale}})$ in $D(G_{\acute{e}tale})$.

2. For any object $\mathcal{F}$ of $\textit{Mod}(\mathcal{X}_{\acute{e}tale}, \mathcal{O}_\mathcal {X})$ we have $(Rf_*\mathcal{F})|_{G_{\acute{e}tale}} = Rf_{small, *}(\mathcal{F}|_{F_{\acute{e}tale}})$ in $D(\mathcal{O}_ G)$.

Proof. Part (1) follows immediately from Lemma 94.21.1 and (94.10.2.1) on choosing an injective resolution of $\mathcal{F}$. Part (2) can be proved as follows. In Lemma 94.10.2 we have seen that $\pi _ G \circ f = f_{small} \circ \pi _ F$ as morphisms of ringed sites. Hence we obtain $R\pi _{G, *} \circ Rf_* = Rf_{small, *} \circ R\pi _{F, *}$ by Cohomology on Sites, Lemma 21.19.2. Since the restriction functors $\pi _{F, *}$ and $\pi _{G, *}$ are exact, we conclude. $\square$

Lemma 94.21.3. Let $S$ be a scheme. Consider a $2$-fibre product square

$\xymatrix{ \mathcal{X}' \ar[r]_{g'} \ar[d]_{f'} & \mathcal{X} \ar[d]^ f \\ \mathcal{Y}' \ar[r]^ g & \mathcal{Y} }$

of algebraic stacks over $S$. Assume that $f$ is representable by algebraic spaces and that $\mathcal{Y}'$ is representable by an algebraic space $G'$. Then $\mathcal{X}'$ is representable by an algebraic space $F'$ and denoting $f' : F' \to G'$ the induced morphism of algebraic spaces we have

$g^{-1}(Rf_*\mathcal{F})|_{G'_{\acute{e}tale}} = Rf'_{small, *}((g')^{-1}\mathcal{F}|_{F'_{\acute{e}tale}})$

for any $\mathcal{F}$ in $\textit{Ab}(\mathcal{X}_{\acute{e}tale})$ or in $\textit{Mod}(\mathcal{X}_{\acute{e}tale}, \mathcal{O}_\mathcal {X})$.

Proof. Follows formally on combining Lemmas 94.20.3 and 94.21.2. $\square$
# Proof in the model category of chain complexes

In the model category of chain complexes one defines a cofibration to be a chain map $M\to N$ such that for each $k>0$ the map $f_k:M_k\to N_k$ is a monomorphism with a projective $R$-module as its cokernel, and a fibration if for each $k > 1$ the map $f_k : M_k \to N_k$ is an epimorphism. Weak equivalences are quasi-isomorphisms.

Now I want to show that cofibrations have the LLP with respect to trivial fibrations. So look at a diagram

$\require{AMScd} \begin{CD} A @>{g}>> X;\\ @V{i}VV @VV{p}V \\ B @>{h}>> Y; \end{CD}$

where $i$ is a cofibration and $p$ is a trivial fibration. In particular $p$ is surjective in all degrees. Dwyer and Spalinski define the lift degree by degree. In degree zero: by the assumption on $i$, $B_0$ splits as $A_0\oplus P_0$, where $P_0$ is projective. Now define the lifting $f$ on $P_0$ to be any lifting of the map $p$, and define the lifting to be $g_0$ on $A_0$.

My question: Why does the relation $g_0=f_0\circ i_0$ hold? This makes sense if the composition of $i_0$ with the isomorphism $B_0\cong A_0\oplus P_0$ is the inclusion, but why does this hold?

That's just what you mean by the image splitting. $i_0$ is a split monomorphism, so there's a biproduct structure on $B_0$, canonical up to a choice of splitting, whose inclusion at $A_0$ is $i_0$ and whose projection to $P_0$ is a chosen cokernel map. You should prove this if it doesn't strike you as clear.

• Thanks for your answer. So this seems to be a purely category-theoretical question. Can you be more explicit about why $i_0$ is the inclusion of $A_0$ into $B_0$? – userZ617 Aug 8 '16 at 22:57

• I tried to show that for any map $t:A_0\to C_0$ there is a unique map $\psi:B_0\to A_0\oplus P_0\to C_0$ such that $t=\psi\circ i$. Is this useful and how do I choose the map? – userZ617 Aug 8 '16 at 23:37
# Softmax Loss vs Binary Loss for classification?

I was trying to understand the final section of the paper "Revisiting Baselines for Visual Question Answering". The authors state that their model performs better with a binary loss than with a softmax loss. I wonder what a binary loss actually means in this case. I thought the softmax loss was just another term for binary cross-entropy. Could someone explain what exactly a binary loss is? Thanks.

• Welcome to the site. This is two questions, one of which (about tensorflow) is off topic. If you remove that part, it will be a fine question. – Peter Flom Jan 16 at 10:55

• @PeterFlom Done! – hexpheus Jan 16 at 18:36

There is a nice explanation here:

Binary Cross-Entropy Loss is also called Sigmoid Cross-Entropy loss. It is a Sigmoid activation plus a Cross-Entropy loss. Unlike Softmax loss it is independent for each vector component (class), meaning that the loss computed for every vector component is not affected by other component values.

The term binary stands for number of classes = 2.

I think that the binary loss is the one based on Shannon entropy $$-\sum_i p_i\ln p_i$$, while the softmax is based on the Boltzmann distribution: $$\frac{e^{z_i}}{\sum_j e^{z_j}}$$ Softmax itself shouldn't be a loss function though. It's the probability-like function that you use to pick the output of the classifier. For instance, you could use it to calculate the probabilities $$\hat p_{ij}$$ of the classifier's outcome for category $$i$$ on sample $$j$$. Then you can use the entropy-based (cross-entropy) loss function to evaluate the fit: $$-\sum_i p_{ij}\ln\hat p_{ij}$$, where $$p_{ij}$$ is the binary indicator of category $$i$$ for sample $$j$$.
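To make the distinction concrete, here is a small NumPy illustration (my own, not from the paper or the answers above): sigmoid/binary cross-entropy treats every class as an independent two-class problem, while softmax cross-entropy couples all logits through a shared normalizer. The logits and targets are made-up numbers.

```python
# Sigmoid/binary cross-entropy vs softmax cross-entropy on one sample,
# using plain NumPy. The numbers are arbitrary, for illustration only.
import numpy as np

logits = np.array([2.0, -1.0, 0.5])      # one sample, 3 candidate answers
targets = np.array([1.0, 0.0, 0.0])      # ground-truth indicator vector

# Binary (sigmoid) cross-entropy: one independent two-class problem per class.
p_sigmoid = 1.0 / (1.0 + np.exp(-logits))
bce_per_class = -(targets * np.log(p_sigmoid) +
                  (1.0 - targets) * np.log(1.0 - p_sigmoid))
print("sigmoid BCE per class:", bce_per_class, "total:", bce_per_class.sum())

# Softmax cross-entropy: a single multi-class problem; every logit influences
# every probability through the shared denominator.
p_softmax = np.exp(logits) / np.exp(logits).sum()
ce = -(targets * np.log(p_softmax)).sum()
print("softmax probabilities:", p_softmax, "cross-entropy:", ce)
```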
11:18 AM "generally be better support for proper rendering across browsers, mobile devices, screen readers and other accessibility tools," <-- this. I find that on mobile Chrome the MathJax doesn't know how to space properly, so the text immediately following it is moved left so that it overlaps partially with the content within $...$. This is a pain, but even worse when an author does it just for text. — David Roberts 13 hours ago @DavidRoberts If you encounter some problem with MathJax (like the one mentioned in your comment), it might be reasonable to post a bug report somewhere. Both Meta MathOverflow and Mathematics Meta might be an option - since they are visited by several people from MathJax team (Davide Cervone, Peter Krautzberger). It seems that they are quite active on Mathematics Meta - maybe a bit less on this meta. (But it might be just an impression caused by smaller sample.) — Martin Sleziak 4 hours ago No, it's a browser problem, as it happens on the nLab and the nForum as well. — David Roberts 50 mins ago @DavidRoberts As this is only tangentially related, I do not want to leave too many comments about this (=reporting MathJax problem) here. If there is more to be said we can continue this discussion in chat. I have also asked a question about this on Mathematics Meta: Where to report MathJax bugs?Martin Sleziak 25 secs ago As an experiment, I have also started MathJax chatroom some time ago. But it never really generated enough interest.
# Distances/shortest paths between all pairs of vertices

This module implements a few functions that deal with the computation of distances or shortest paths between all pairs of vertices.

Efficiency: Because these functions involve listing the (out)-neighborhoods of (di)graphs many times, it is useful in terms of efficiency to build a temporary copy of the graph in a data structure that makes these queries fast. These functions also work on large volumes of data, typically dense matrices of size $$n^2$$, and are expected to return corresponding dictionaries of size $$n^2$$, where the integers corresponding to the vertices have first been converted to the vertices' labels. Sadly, this last translating operation turns out to be the most time-consuming, and for this reason it is also nice to have a Cython module, and versions of these functions that return C arrays, in order to avoid these operations when they are not necessary.

Memory cost: The methods implemented in the current module sometimes need large amounts of memory to return their result. Storing the distances between all pairs of vertices in a graph on $$1500$$ vertices as a dictionary of dictionaries takes around 200MB, while storing the same information as a C array requires 4MB.

## The module's main function

The C function all_pairs_shortest_path_BFS actually does all the computations, and all the others (except for Floyd_Warshall) are just wrapping it. This function begins with copying the graph in a data structure that makes it fast to query the out-neighbors of a vertex, then starts one Breadth First Search per vertex of the (di)graph.

What can this function compute?

• The matrix of predecessors. This matrix $$P$$ has size $$n^2$$, and is such that vertex $$P[u,v]$$ is a predecessor of $$v$$ on a shortest $$uv$$-path. Hence, this matrix efficiently encodes the information of a shortest $$uv$$-path for any $$u,v\in G$$: indeed, to go from $$u$$ to $$v$$ you should first find a shortest $$uP[u,v]$$-path, then jump from $$P[u,v]$$ to $$v$$ as it is one of its outneighbors. Apply this recursively and find out what the whole path is!

• The matrix of distances. This matrix has size $$n^2$$ and associates to any pair $$u,v$$ the distance from $$u$$ to $$v$$.

• The vector of eccentricities. This vector of size $$n$$ encodes for each vertex $$v$$ the distance to the vertex which is furthest from $$v$$ in the graph. In particular, the diameter of the graph is the maximum of these values.

What does it take as input?

• gg – a (Di)Graph.

• unsigned short * predecessors – a pointer toward an array of size $$n^2\cdot\text{sizeof(unsigned short)}$$. Set to NULL if you do not want to compute the predecessors.

• unsigned short * distances – a pointer toward an array of size $$n^2\cdot\text{sizeof(unsigned short)}$$. The computation of the distances is necessary for the algorithm, so this value cannot be set to NULL.

• int * eccentricity – a pointer toward an array of size $$n\cdot\text{sizeof(int)}$$. Set to NULL if you do not want to compute the eccentricity.

Technical details

• The vertices are encoded as $$1, ..., n$$ as they appear in the ordering of G.vertices().

• Because this function works on matrices whose size is quadratic compared to the number of vertices when computing all distances or predecessors, it uses short variables to store the vertices' names instead of long ones, to divide the size in memory by 2.
This means that only the diameter/eccentricities can be computed on a graph of more than 65536 nodes. For information, the current version of the algorithm on a graph with $$65536=2^{16}$$ nodes creates in memory $$2$$ tables of $$2^{32}$$ short elements (2 bytes each), for a total of $$2^{34}$$ bytes or $$16$$ gigabytes. In order to support larger sizes, we would have to replace shorts by 32-bit or 64-bit ints, which would then require respectively 32GB or 64GB.

• In the C version of these functions, infinite distances are represented with <unsigned short> -1 = 65535 for unsigned short variables, and by INT32_MAX otherwise. These cases happen when the input is a disconnected graph, or a non-strongly-connected digraph.

• A memory error is raised when data structure allocation fails. This could happen with large graphs on computers with low memory space.

Warning

The function all_pairs_shortest_path_BFS has no reason to be called by the user, even when writing code in Cython and looking for efficiency. This module contains wrappers for this function that feed it with the right parameters. As the function is inlined, using those wrappers actually saves time, as it avoids testing the parameters again and again in the main function's body.

AUTHOR:

• Nathann Cohen (2011)

REFERENCE:

[KRG96b] (1, 2) S. Klavzar, A. Rajapakse, and I. Gutman. The Szeged and the Wiener index of graphs. Applied Mathematics Letters, 9(5):45–49, 1996.

[GYLL93c] I. Gutman, Y.-N. Yeh, S.-L. Lee, and Y.-L. Luo. Some recent results in the theory of the Wiener number. Indian Journal of Chemistry, 32A:651–661, 1993.

## Functions

sage.graphs.distances_all_pairs.diameter(G)

Returns the diameter of $$G$$.

EXAMPLE:

sage: from sage.graphs.distances_all_pairs import diameter
sage: g = graphs.PetersenGraph()
sage: diameter(g)
2

sage.graphs.distances_all_pairs.distances_all_pairs(G)

Returns the matrix of distances in G.

This function returns a double dictionary D of vertices, in which the distance between vertices u and v is D[u][v].

EXAMPLE:

sage: from sage.graphs.distances_all_pairs import distances_all_pairs
sage: g = graphs.PetersenGraph()
sage: distances_all_pairs(g)
{0: {0: 0, 1: 1, 2: 2, 3: 2, 4: 1, 5: 1, 6: 2, 7: 2, 8: 2, 9: 2},
1: {0: 1, 1: 0, 2: 1, 3: 2, 4: 2, 5: 2, 6: 1, 7: 2, 8: 2, 9: 2},
2: {0: 2, 1: 1, 2: 0, 3: 1, 4: 2, 5: 2, 6: 2, 7: 1, 8: 2, 9: 2},
3: {0: 2, 1: 2, 2: 1, 3: 0, 4: 1, 5: 2, 6: 2, 7: 2, 8: 1, 9: 2},
4: {0: 1, 1: 2, 2: 2, 3: 1, 4: 0, 5: 2, 6: 2, 7: 2, 8: 2, 9: 1},
5: {0: 1, 1: 2, 2: 2, 3: 2, 4: 2, 5: 0, 6: 2, 7: 1, 8: 1, 9: 2},
6: {0: 2, 1: 1, 2: 2, 3: 2, 4: 2, 5: 2, 6: 0, 7: 2, 8: 1, 9: 1},
7: {0: 2, 1: 2, 2: 1, 3: 2, 4: 2, 5: 1, 6: 2, 7: 0, 8: 2, 9: 1},
8: {0: 2, 1: 2, 2: 2, 3: 1, 4: 2, 5: 1, 6: 1, 7: 2, 8: 0, 9: 2},
9: {0: 2, 1: 2, 2: 2, 3: 2, 4: 1, 5: 2, 6: 1, 7: 1, 8: 2, 9: 0}}

sage.graphs.distances_all_pairs.distances_and_predecessors_all_pairs(G)

Returns the matrix of distances in G and the matrix of predecessors.

Distances: the matrix $$M$$ returned is of length $$n^2$$, and the distance between vertices $$u$$ and $$v$$ is $$M[u,v]$$. The integer corresponding to a vertex is its index in the list G.vertices().

Predecessors: the matrix $$P$$ returned has size $$n^2$$, and is such that vertex $$P[u,v]$$ is a predecessor of $$v$$ on a shortest $$uv$$-path.
Hence, this matrix efficiently encodes the information of a shortest $$uv$$-path for any $$u,v\in G$$: indeed, to go from $$u$$ to $$v$$ you should first find a shortest $$uP[u,v]$$-path, then jump from $$P[u,v]$$ to $$v$$ as it is one of its outneighbors. The integer corresponding to a vertex is its index in the list G.vertices().

EXAMPLE:

sage: from sage.graphs.distances_all_pairs import distances_and_predecessors_all_pairs
sage: g = graphs.PetersenGraph()
sage: distances_and_predecessors_all_pairs(g)
({0: {0: 0, 1: 1, 2: 2, 3: 2, 4: 1, 5: 1, 6: 2, 7: 2, 8: 2, 9: 2},
1: {0: 1, 1: 0, 2: 1, 3: 2, 4: 2, 5: 2, 6: 1, 7: 2, 8: 2, 9: 2},
2: {0: 2, 1: 1, 2: 0, 3: 1, 4: 2, 5: 2, 6: 2, 7: 1, 8: 2, 9: 2},
3: {0: 2, 1: 2, 2: 1, 3: 0, 4: 1, 5: 2, 6: 2, 7: 2, 8: 1, 9: 2},
4: {0: 1, 1: 2, 2: 2, 3: 1, 4: 0, 5: 2, 6: 2, 7: 2, 8: 2, 9: 1},
5: {0: 1, 1: 2, 2: 2, 3: 2, 4: 2, 5: 0, 6: 2, 7: 1, 8: 1, 9: 2},
6: {0: 2, 1: 1, 2: 2, 3: 2, 4: 2, 5: 2, 6: 0, 7: 2, 8: 1, 9: 1},
7: {0: 2, 1: 2, 2: 1, 3: 2, 4: 2, 5: 1, 6: 2, 7: 0, 8: 2, 9: 1},
8: {0: 2, 1: 2, 2: 2, 3: 1, 4: 2, 5: 1, 6: 1, 7: 2, 8: 0, 9: 2},
9: {0: 2, 1: 2, 2: 2, 3: 2, 4: 1, 5: 2, 6: 1, 7: 1, 8: 2, 9: 0}},
{0: {0: None, 1: 0, 2: 1, 3: 4, 4: 0, 5: 0, 6: 1, 7: 5, 8: 5, 9: 4},
1: {0: 1, 1: None, 2: 1, 3: 2, 4: 0, 5: 0, 6: 1, 7: 2, 8: 6, 9: 6},
2: {0: 1, 1: 2, 2: None, 3: 2, 4: 3, 5: 7, 6: 1, 7: 2, 8: 3, 9: 7},
3: {0: 4, 1: 2, 2: 3, 3: None, 4: 3, 5: 8, 6: 8, 7: 2, 8: 3, 9: 4},
4: {0: 4, 1: 0, 2: 3, 3: 4, 4: None, 5: 0, 6: 9, 7: 9, 8: 3, 9: 4},
5: {0: 5, 1: 0, 2: 7, 3: 8, 4: 0, 5: None, 6: 8, 7: 5, 8: 5, 9: 7},
6: {0: 1, 1: 6, 2: 1, 3: 8, 4: 9, 5: 8, 6: None, 7: 9, 8: 6, 9: 6},
7: {0: 5, 1: 2, 2: 7, 3: 2, 4: 9, 5: 7, 6: 9, 7: None, 8: 5, 9: 7},
8: {0: 5, 1: 6, 2: 3, 3: 8, 4: 3, 5: 8, 6: 8, 7: 5, 8: None, 9: 6},
9: {0: 4, 1: 6, 2: 7, 3: 4, 4: 9, 5: 7, 6: 9, 7: 9, 8: 6, 9: None}})

sage.graphs.distances_all_pairs.distances_distribution(G)

Returns the distances distribution of the (di)graph in a dictionary.

This method ignores all edge labels, so that the distance considered is the topological distance.

OUTPUT:

A dictionary d such that the number of pairs of vertices at distance k (if any) is equal to $$d[k] \cdot |V(G)| \cdot (|V(G)|-1)$$.

Note

We consider that two vertices that do not belong to the same connected component are at infinite distance, and we do not take the trivial pairs of vertices $$(v, v)$$ at distance $$0$$ into account. Empty (di)graphs and (di)graphs of order 1 have no paths and so we return the empty dictionary {}.

EXAMPLES:

An empty Graph:

sage: g = Graph()
sage: g.distances_distribution()
{}

A Graph of order 1:

sage: g = Graph(1)
sage: g.distances_distribution()
{}

A Graph of order 2 without edges:

sage: g = Graph(2)
sage: g.distances_distribution()
{+Infinity: 1}

The Petersen Graph:

sage: g = graphs.PetersenGraph()
sage: g.distances_distribution()
{1: 1/3, 2: 2/3}

A graph with multiple disconnected components:

sage: g = graphs.PetersenGraph().disjoint_union(graphs.PathGraph(2))
sage: g.distances_distribution()
{1: 8/33, 2: 5/11, +Infinity: 10/33}

The de Bruijn digraph dB(2,3):

sage: D = digraphs.DeBruijn(2,3)
sage: D.distances_distribution()
{1: 1/4, 2: 11/28, 3: 5/14}

sage.graphs.distances_all_pairs.eccentricity(G)

Returns the vector of eccentricities in G.

The array returned is of length n, and its ith component is the eccentricity of the ith vertex in G.vertices().
EXAMPLE:

sage: from sage.graphs.distances_all_pairs import eccentricity
sage: g = graphs.PetersenGraph()
sage: eccentricity(g)
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2]

sage.graphs.distances_all_pairs.floyd_warshall(gg, paths=True, distances=False)

Computes the shortest paths/distances between all pairs of vertices.

For more information on the Floyd-Warshall algorithm, see the Wikipedia article on Floyd-Warshall.

INPUT:

• gg – the graph on which to work.

• paths (boolean) – whether to return the dictionary of shortest paths. Set to True by default.

• distances (boolean) – whether to return the dictionary of distances. Set to False by default.

OUTPUT:

Depending on the input, this function returns the dictionary of paths, the dictionary of distances, or a pair of dictionaries (distances, paths) where distance[u][v] denotes the distance of a shortest path from $$u$$ to $$v$$ and paths[u][v] denotes an inneighbor $$w$$ of $$v$$ such that $$dist(u,v)= 1 + dist(u,w)$$.

Warning

Because this function works on matrices whose size is quadratic compared to the number of vertices, it uses short variables instead of long ones to divide by 2 the size in memory. This means that the current implementation does not run on a graph of more than 65536 nodes (this can be easily changed if necessary, but would require much more memory. It may be worth writing two versions). For information, the current version of the algorithm on a graph with $$65536=2^{16}$$ nodes creates in memory $$2$$ tables of $$2^{32}$$ short elements (2 bytes each), for a total of $$2^{34}$$ bytes or $$16$$ gigabytes. Let us also remember that if the memory size is quadratic, the algorithm runs in cubic time.

Note

When paths = False the algorithm saves roughly half of the memory as it does not have to maintain the matrix of predecessors. However, setting distances=False produces no such effect as the algorithm cannot run without computing them. They will not be returned, but they will be stored while the method is running.

EXAMPLES:

Shortest paths in a small grid

sage: g = graphs.Grid2dGraph(2,2)
sage: from sage.graphs.distances_all_pairs import floyd_warshall
sage: print floyd_warshall(g)
{(0, 1): {(0, 1): None, (1, 0): (0, 0), (0, 0): (0, 1), (1, 1): (0, 1)},
(1, 0): {(0, 1): (0, 0), (1, 0): None, (0, 0): (1, 0), (1, 1): (1, 0)},
(0, 0): {(0, 1): (0, 0), (1, 0): (0, 0), (0, 0): None, (1, 1): (0, 1)},
(1, 1): {(0, 1): (1, 1), (1, 0): (1, 1), (0, 0): (0, 1), (1, 1): None}}

Checking the distances are correct

sage: g = graphs.Grid2dGraph(5,5)
sage: dist,path = floyd_warshall(g, distances = True)
sage: all( dist[u][v] == g.distance(u,v) for u in g for v in g )
True

Checking a random path is valid

sage: u,v = g.random_vertex(), g.random_vertex()
sage: p = [v]
sage: while p[0] is not None:
...       p.insert(0,path[u][p[0]])
sage: len(p) == dist[u][v] + 2
True

Distances for all pairs of vertices in a diamond:

sage: g = graphs.DiamondGraph()
sage: floyd_warshall(g, paths = False, distances = True)
{0: {0: 0, 1: 1, 2: 1, 3: 2},
1: {0: 1, 1: 0, 2: 1, 3: 1},
2: {0: 1, 1: 1, 2: 0, 3: 1},
3: {0: 2, 1: 1, 2: 1, 3: 0}}

TESTS:

Too large graphs:

sage: from sage.graphs.distances_all_pairs import floyd_warshall
sage: floyd_warshall(Graph(65536))
Traceback (most recent call last):
...
ValueError: The graph backend contains more than 65535 nodes

sage.graphs.distances_all_pairs.is_distance_regular(G, parameters=False)

Tests if the graph is distance-regular.

A graph $$G$$ is distance-regular if for any integers $$j,k$$ the value of $$|\{x:d_G(x,u)=j,x\in V(G)\} \cap \{y:d_G(y,v)=k,y\in V(G)\}|$$ is constant for any two vertices $$u,v\in V(G)$$ at distance $$i$$ from each other. In particular $$G$$ is regular, of degree $$b_0$$ (see below), as one can take $$u=v$$.

Equivalently a graph is distance-regular if there exist integers $$b_i,c_i$$ such that for any two vertices $$u,v$$ at distance $$i$$ we have

• $$b_i = |\{x:d_G(x,u)=i+1,x\in V(G)\}\cap N_G(v)|, \ 0\leq i\leq d-1$$

• $$c_i = |\{x:d_G(x,u)=i-1,x\in V(G)\}\cap N_G(v)|, \ 1\leq i\leq d,$$

where $$d$$ is the diameter of the graph.

For more information on distance-regular graphs, see the associated Wikipedia page.

INPUT:

• parameters (boolean) – if set to True, the function returns the pair (b,c) of lists of integers instead of True (see the definition above). Set to False by default.

EXAMPLES:

sage: g = graphs.PetersenGraph()
sage: g.is_distance_regular()
True
sage: g.is_distance_regular(parameters = True)
([3, 2, None], [None, 1, 1])

Cube graphs, which are not strongly regular, are a bit more interesting:

sage: graphs.CubeGraph(4).is_distance_regular()
True
sage: graphs.OddGraph(5).is_distance_regular()
True

Disconnected graph:

sage: (2*graphs.CubeGraph(4)).is_distance_regular()
True

TESTS:

sage: graphs.PathGraph(2).is_distance_regular(parameters = True)
([1, None], [None, 1])
sage: graphs.Tutte12Cage().is_distance_regular(parameters=True)
([3, 2, 2, 2, 2, 2, None], [None, 1, 1, 1, 1, 1, 3])

sage.graphs.distances_all_pairs.shortest_path_all_pairs(G)

Returns the matrix of predecessors in G.

The matrix $$P$$ returned has size $$n^2$$, and is such that vertex $$P[u,v]$$ is a predecessor of $$v$$ on a shortest $$uv$$-path. Hence, this matrix efficiently encodes the information of a shortest $$uv$$-path for any $$u,v\in G$$: indeed, to go from $$u$$ to $$v$$ you should first find a shortest $$uP[u,v]$$-path, then jump from $$P[u,v]$$ to $$v$$ as it is one of its outneighbors. The integer corresponding to a vertex is its index in the list G.vertices().

EXAMPLE:

sage: from sage.graphs.distances_all_pairs import shortest_path_all_pairs
sage: g = graphs.PetersenGraph()
sage: shortest_path_all_pairs(g)
{0: {0: None, 1: 0, 2: 1, 3: 4, 4: 0, 5: 0, 6: 1, 7: 5, 8: 5, 9: 4},
1: {0: 1, 1: None, 2: 1, 3: 2, 4: 0, 5: 0, 6: 1, 7: 2, 8: 6, 9: 6},
2: {0: 1, 1: 2, 2: None, 3: 2, 4: 3, 5: 7, 6: 1, 7: 2, 8: 3, 9: 7},
3: {0: 4, 1: 2, 2: 3, 3: None, 4: 3, 5: 8, 6: 8, 7: 2, 8: 3, 9: 4},
4: {0: 4, 1: 0, 2: 3, 3: 4, 4: None, 5: 0, 6: 9, 7: 9, 8: 3, 9: 4},
5: {0: 5, 1: 0, 2: 7, 3: 8, 4: 0, 5: None, 6: 8, 7: 5, 8: 5, 9: 7},
6: {0: 1, 1: 6, 2: 1, 3: 8, 4: 9, 5: 8, 6: None, 7: 9, 8: 6, 9: 6},
7: {0: 5, 1: 2, 2: 7, 3: 2, 4: 9, 5: 7, 6: 9, 7: None, 8: 5, 9: 7},
8: {0: 5, 1: 6, 2: 3, 3: 8, 4: 3, 5: 8, 6: 8, 7: 5, 8: None, 9: 6},
9: {0: 4, 1: 6, 2: 7, 3: 4, 4: 9, 5: 7, 6: 9, 7: 9, 8: 6, 9: None}}

sage.graphs.distances_all_pairs.wiener_index(G)

Returns the Wiener index of the graph.

The Wiener index of a graph $$G$$ can be defined in two equivalent ways [KRG96b]:

• $$W(G) = \frac 1 2 \sum_{u,v\in G} d(u,v)$$ where $$d(u,v)$$ denotes the distance between vertices $$u$$ and $$v$$.
• Let $$\Omega$$ be a set of $$\frac {n(n-1)} 2$$ paths in $$G$$ such that $$\Omega$$ contains exactly one shortest $$u-v$$ path for each set $$\{u,v\}$$ of vertices in $$G$$. Besides, $$\forall e\in E(G)$$, let $$\Omega(e)$$ denote the paths from $$\Omega$$ containing $$e$$. We then have $$W(G) = \sum_{e\in E(G)}|\Omega(e)|$$.

EXAMPLE:

From [GYLL93c], cited in [KRG96b]:

sage: g=graphs.PathGraph(10)
sage: w=lambda x: (x*(x*x -1)/6)
sage: g.wiener_index()==w(10)
True
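As an added illustration (not part of the Sage documentation above), the sketch below rebuilds an explicit shortest path from the predecessor dictionary returned by shortest_path_all_pairs, following the recipe described in the text, and sanity-checks the first Wiener index formula. It assumes a running Sage session.

```python
# Added illustration (not from the Sage docs): rebuild a shortest u-v path
# from the predecessor dictionary, as described in the text - keep jumping to
# P[u][current] until u itself is reached. Runs inside a Sage session.
from sage.graphs.distances_all_pairs import (
    shortest_path_all_pairs, distances_all_pairs)

g = graphs.PetersenGraph()
P = shortest_path_all_pairs(g)

def rebuild_path(P, u, v):
    """Return the vertices of a shortest u-v path encoded by predecessors P."""
    path = [v]
    while path[0] != u:
        path.insert(0, P[u][path[0]])   # P[u][w] precedes w on a shortest u-w path
    return path

print(rebuild_path(P, 0, 7))            # e.g. [0, 5, 7]

# Sanity check of the first Wiener index formula: W(G) = 1/2 * sum of distances.
D = distances_all_pairs(g)
W = sum(D[u][v] for u in g for v in g) / 2
print(W == g.wiener_index())            # True
```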
# Math Help - Relation

1. ## Relation

Define the relation R on the real numbers $\mathbb{R}$ by xRy if and only if [x] = [y], where [.] is the greatest integer function, that is, [t] is defined to be the greatest integer less than or equal to t. Prove that for every $x \in \mathbb{R}$ there exists a $y \in \mathbb{Z}$ such that $x \in y/R$ (the equivalence class of $y$ under R).

I don't know where to start, plz help, thanks

2. Originally Posted by logglypop

Define the relation R on the real numbers $\mathbb{R}$ by xRy if and only if [x] = [y], where [.] is the greatest integer function, that is, [t] is defined to be the greatest integer less than or equal to t. Prove that for every $x \in \mathbb{R}$ there exists a $y \in \mathbb{Z}$ such that $x \in y/R$.

If $\mathbb{Z}$ is the set of integers then $x\mathcal{R}y$ if and only if $\left( {\exists n \in \mathbb{Z}} \right)\left[ {\left\{ {x,y} \right\} \subset \left[n,n + 1\right)} \right]$
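A quick numerical illustration (added here, not from the original thread): under this relation the equivalence class $y/R$ of an integer $y$ is the half-open interval $[y, y+1)$, so every real $x$ lies in the class of $y=[x]$. A small Python check, using only the standard library:

```python
# Illustration of the floor relation: xRy iff [x] = [y]. The class y/R of an
# integer y is [y, y+1), and every real x belongs to the class of y = [x].
import math

def related(x, y):
    """xRy iff x and y have the same greatest-integer part."""
    return math.floor(x) == math.floor(y)

x = 3.75
y = math.floor(x)                 # the integer y = [x] = 3
print(related(x, y))              # True: x belongs to the class y/R = [3, 4)

samples = [3.0, 3.2, 3.999, 4.0]
print([related(x, s) for s in samples])   # [True, True, True, False]
```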
Article | Open | Published: # Effects of prey density, temperature and predator diversity on nonconsumptive predator-driven mortality in a freshwater food web Scientific Reportsvolume 7, Article number: 18075 (2017) | Download Citation ## Abstract Nonconsumptive predator-driven mortality (NCM), defined as prey mortality due to predation that does not result in prey consumption, is an underestimated component of predator-prey interactions with possible implications for population dynamics and ecosystem functioning. However, the biotic and abiotic factors influencing this mortality component remain largely unexplored, leaving a gap in our understanding of the impacts of environmental change on ecological communities. We investigated the effects of temperature, prey density, and predator diversity and density on NCM in an aquatic food web module composed of dragonfly larvae (Aeshna cyanea) and marbled crayfish (Procambarus fallax f. virginalis) preying on common carp (Cyprinus carpio) fry. We found that NCM increased with prey density and depended on the functional diversity and density of the predator community. Warming significantly reduced NCM only in the dragonfly larvae but the magnitude depended on dragonfly larvae density. Our results indicate that energy transfer across trophic levels is more efficient due to lower NCM in functionally diverse predator communities, at lower resource densities and at higher temperatures. This suggests that environmental changes such as climate warming and reduced resource availability could increase the efficiency of energy transfer in food webs only if functionally diverse predator communities are conserved. ## Introduction Investigating the effects of environmental drivers on food webs is crucial to better understand global change impacts on energy and nutrient fluxes across trophic levels. A growing number of studies have thus investigated the effects of global change drivers such as temperature, enrichment, pollutants, and habitat fragmentation on trophic interactions1,2,3,4. For example, previous studies have shown that predation rate often increases with temperature but decreases with prey density5,6,7,8. Thermal effects on predation rate are mainly driven by the acceleration of physiological processes (metabolism and digestion) leading to higher energetic demands of the predators, and by more frequent predator-prey encounters due to faster movement of predators or prey with warming. The effects of prey density are caused by the non-linearity of the predator feeding rate that increases with prey density and reaches a plateau at high prey densities (i.e., saturating Holling type II or III functional responses). Altogether, these temperature- and density-dependent effects on predation rates can alter population dynamics and species persistence by modifying trophic interaction strengths9. However, prey also face predation-induced types of death other than direct consumption by predators. Predator attacks are not always successful and injured prey sometime escape and die later away from the predator10,11. Predators can also abandon or only partially consume some of the killed prey, a widespread behaviour in many invertebrate and vertebrate predators referred to as surplus killing12,13,14. This feeding behaviour is an important component of consumer-resource interactions that can influence population dynamics and predator-prey co-evolution15,16,17,18. 
Finally, the “ecology of fear” framework posits that the presence of predators can mobilize stress hormone secretion and consequently decrease prey energetic reserves19,20. Persistent stress reaction may thus “scare prey to death” and further increase prey mortality rates21,22,23,24. While surplus killing is well documented and relatively common, cases of prey mortality linked to high stress levels and unsuccessful predator attacks remain largely unexplored. All these phenomena contribute to nonconsumptive predator-driven mortality (hereafter NCM) in food webs. Overall, the proportion of dead prey not eaten by predators can be substantial11,25. These prey individuals do not contribute to the flux of energy and nutrients to higher trophic levels, which can alter ecosystem functioning through lowered trophic transfer efficiency. Altogether, this suggests that NCM is relevant to the understanding and predictions of global change impacts on energy flux and ecosystem functioning. Factors modulating NCM strength are insufficiently understood. Previous studies reported that prey availability strongly influences surplus killing12,26,27,28,29,30,31,32,33, which typically increases with prey density. Nevertheless, the dependence of surplus killing on prey density varies strongly among taxa and can be linear or unimodal34,35,36. Moreover, we are not aware of any study about prey density effects on prey mortality linked to high stress levels caused by predation risk. The role of global change drivers in NCM is essentially unknown. Human activities lead to rapid environmental changes including pollution, habitat alteration, nutrient enrichment, and global warming. Understanding how these drivers impact organisms and their interactions, including surplus killing and other processes affecting energy transfer across trophic levels, is important to better predict global change consequences on Earth’s biota4,37,38,39,40. To our knowledge, the effects of temperature (or any other of the above drivers) on the “prey scared to death” phenomenon and unsuccessful predator attacks remain unexplored. Only one study reported a decrease in surplus killing with warming34, possibly due to higher metabolic demands of predators at warmer temperatures and hence higher ingestion rates required to fulfil these demands23. Most food webs consist of multiple predators that share similar prey14,41 and provide important ecosystem services42,43. There is mounting evidence that the effects of multiple predators on prey populations can rarely be predicted from single-predator effects. Interactions among multiple predators and their prey often result in emergent effects such as predation risk reduction or enhancement6,44,45,46. Does this disconnect between observations based on single and multiple predators also apply to NCM? Benke47 suggested that interactions among conspecific predators increase surplus killing, which could in turn exacerbate the effects of exploitative competition among conspecific predators48. However, no study has compared prey surplus killing in intraspecific and interspecific predator assemblages, which limits our knowledge about the relative importance of the effects of predator density versus diversity on surplus killing. The impact of multiple predators and functionally diverse predator communities on the amount of prey “scared to death” has not been thoroughly explored either. More generally, knowledge of the impact of multiple predators and predator functional diversity on NCM are limited. 
Previous studies reported that predator diet breadth and functional diversity within predator assemblages can strongly affect the relationship between predator diversity and ecosystem functioning (e.g. primary production and prey suppression via trophic cascades)49,50. For instance, Finke and Snyder51 suggested that communities composed of generalist consumers exploit resources better than those including only specialists. Similarly, communities including diverse consumer types such as predators, omnivores and scavengers should exhibit reduced NCM values compared to communities composed only of predators24, e.g., when scavengers and omnivores eat prey killed by other predators. These observations provide qualitative insights but do not sufficiently advance our ability to quantify NCM strengths in predator-rich communities. In this study, we experimentally investigated the effects of temperature, resource availability (i.e., prey density), and predator density and diversity on NCM strengths (i.e., the proportion of dead prey not eaten by predators). Changes in temperature and resource availability are two of the most important global change drivers7. It is thus crucial to investigate their impacts on energy fluxes to better understand the consequences of global change on ecological communities6,52. Our study provides an initial step in the exploration of the effects of abiotic and biotic factors on NCM strengths in food webs. It helps better understand and predict global change consequences on energy fluxes in ecological communities. ## Results While prey mortality was negligible in controls without predators (0–2% of the initial prey died during the experiment, mean ± SD = 0.84 ± 1.06%), the proportion of dead uneaten prey per predator (see Methods for details; hereafter only NCM strength) was significantly positive in treatments with predators. The overall average value of NCM in treatments with predators was 4% with a maximum of 20%. Moreover, we found dead uneaten prey at all prey densities as well as in each predator assemblage. In addition, dead uneaten prey were found in 64% of the replicates with predators. NCM strength varied significantly with temperature, prey density and predator assemblage (Table 1). Furthermore, temperature effect depended on predator assemblage (significant temperature × predator assemblage interaction, Table 1 and Fig. 1). Warming decreased the strength of NCM caused by dragonfly pairs, but had the opposite effect in the single dragonfly treatment and did not affect NCM strength in the other predator treatments (Fig. 1A). NCM strength increased significantly with prey density (Fig. 1B) and this effect was independent of temperature or predator treatments (Table 1). In addition, we found a temperature-dependent effect of predator density on NCM strength that was independent of prey density and predator species and size (Table 2). NCM strength tended to decrease with predator density, but this effect was more pronounced and significant only at 20 °C (Table 2 and Fig. 2). When grouping the treatments by predator functional groups (predators, scavengers and their mix), we found that NCM strength varied significantly among functional groups and with prey density but did not depend on temperature or any statistical interactions of these three variables (Table 3). NCM was lowest in mixed treatments involving one scavenger and one predator and highest in treatments involving only predators (Table 3 and Fig. 3). 
The estimated dependence of NCM strength on prey density was nearly the same as when we grouped the data by predator assemblages (compare Figs 1B and 3B). Predictions from the multiplicative risk model mostly overestimated the observed NCM strengths except for the treatment with two dragonflies at 16 °C, in which the observation exceeded the prediction (Fig. 4). That is, NCM strengths in predator assemblages were almost always weaker than expected from single-predator treatments. Temperature, prey density and predator assemblages and density affected the per capita proportions of dead prey with and without visible attack marks in very similar but not identical ways to how they affected the overall NCM strength. Both per capita proportions were significantly affected by temperature, prey density and predator assemblage, and the statistical interaction between temperature and predator assemblage (Table S1). Warming significantly reduced the per capita proportion of dead prey with visible attack marks in two intraspecific predator assemblages (D_D and LC_LC) but had no effect in the other assemblages (Table S1 and Fig. S1). Warming also reduced the per capita proportion of dead prey without visible attack marks in the D_D assemblage but had the opposite effects in the single dragonfly treatment (Table S2 and Fig. S2). Finally, the per capita proportions of both types of dead prey increased with initial prey density (Figs S1B and S2B). ## Discussion Nonconsumptive predator-driven mortality (NCM) is a common but underestimated component of predator-prey interactions. Previous studies mainly focused on consumptive mortality and often neglected NCM linked to unsuccessful predator attacks, surplus killing or predator-induced high stress levels. These three different sources of mortality are widespread across many invertebrate and vertebrate taxa12,23,24,34,35,53,54 and can influence population dynamics, food web structure, and the co-evolution of predators and prey16,17,34. However, the magnitude and dependence of NCM on external factors remains largely unexplored, which limits our understanding of when and how biotic and abiotic factors influence the strength of consumer-resource interactions and thus the dynamics and structure of ecological communities. Here, we investigated the effects of temperature, prey density, and predator density and functional diversity on NCM strengths in an aquatic food web module. Like all laboratory studies, our experiments have limitations that prevent strong quantitative inference from the results. They were conducted in an artificial environment at a small spatio-temporal scale that cannot be directly extrapolated to long-term community and ecosystem dynamics. The environment and arena size could have influenced prey mortality, but we found it to be negligible in control trials without predators and much lower than the observed magnitude of NCM in predation trials. This suggests that the qualitative patterns found in our experiment are sufficiently robust. Moreover, the habitat volume and duration of our experimental trials fall within the range commonly used in predation experiments with aquatic invertebrate predators. We therefore think that our study helps identify factors influencing NCM strengths and provides an additional step towards a better understanding of the effects of biotic and abiotic factors on predator-prey interaction strengths in aquatic systems. 
### Effect of temperature on NCM We found that NCM strength was not influenced by temperature except in treatments involving only dragonflies. This indicates that the effect of temperature on NCM is species specific and potentially related to consumer functional type (pure predator vs. scavenger). Moreover, the effects of temperature depended on dragonfly density: warming increased NCM in treatments with a single dragonfly whereas it decreased NCM in treatments with two dragonflies. Our more detailed analyses revealed that the effect of temperature in the single dragonfly treatment was caused by a magnified “scared to death” phenomenon rather than a change in surplus killing. Although the mechanisms and physiological processes underlying these effects remain to be investigated in more detail, our results suggest that the additional stressor (i.e., warming) led to increased mortality of fish fry in the presence of a single dragonfly predator. As the per capita prey density was reduced in treatments with two dragonfly predators, it is plausible that they fed on the prey more efficiently to compensate for the joint effect of higher metabolic demands and lower prey availability at the higher temperature. Additional aspects of predator and prey behaviour that would alter NCM strength with temperature may also change with predator density. For instance, predators can become more careful when catching and handling prey and hence feed more efficiently in the presence of other predators14,43,44, and their awareness of other predators may increase with temperature due to more frequent mutual encounters. Overall, our results indicate that warming effects on NCM strengths depend on predator identity and density, which we discuss next in more detail. It is currently difficult to generalize our findings given the paucity of studies on this topic. We thus call for further studies investigating the effects of temperature on NCM strength in food webs. ### Effect of prey density on NCM NCM strength significantly increased with prey density and this effect was independent of temperature and predator identity or density. Prey density effect on overall NCM strength was driven by a combined increase of surplus killing and “scared-to-death” mortality with prey density. Previous studies have also show that surplus killing is more frequent at higher prey densities12,34,35,55, but the shape of this relationship varies among taxa from linear to unimodal. We found a nearly linear relationship between prey density and surplus killing, which corroborates the results of previous studies on predatory aquatic insects including larvae of the damselfly Anomalagrion hastatum 56, aquatic bug Diplonychus rusticus 27 and the backswimmer Notonecta hoffmanni 36. To our knowledge, the effect of prey density on nonconsumptive predator-induced mortality has never been explored. We found that, while prey mortality in the absence of predators was negligible and did not increase with prey density, the proportion of dead prey without visible attack marks increased strongly with prey density, suggesting that prey are more “scared to death” by predators in denser prey populations. Higher stress levels in the prey may result from oxygen depletion or more frequent physical contacts with conspecifics. These stressors alone may be sublethal but can become lethal when magnified by or combined with an additional stressor such as predator presence57,58. 
For instance, predators can increase prey respiration rate (e.g., if predator avoidance requires faster or more frequent swimming), which would accelerate oxygen depletion and increase prey mortality. This effect is likely to be stronger at high prey densities when prey are more likely to deplete oxygen. Although we cannot resolve the mechanism underlying the “scared to death” phenomenon in our experiment, our results indicate that predator presence can modify this type of prey mortality. Interestingly, the effects of prey density on “scared-to-death” mortality and surplus killing were independent of predator species and assemblage, suggesting a general effect of prey density on NCM strengths. We thus predict that declines in trophic transfer efficiency due to NCM will become more pronounced at higher prey densities. This would act as a stabilizing factor in communities with fluctuating predator and prey population densities59. ### Effects of predator density and functional diversity on NCM The observed decline in per capita NCM with predator density, especially at the higher temperature, can be explained by a combination of two behavioural responses: increased individual feeding rates and the ability of predators to recognize conspecifics. The former response would help cover higher metabolic demands of predators at warmer temperatures23, while the latter would enable them to adjust to the perceived scarcity of resources. Further investigations are needed to determine which of these two behavioural responses contributes most to the observed pattern. Interactions among predators and predator functional types can strongly influence consumer-resource interactions in species-rich communities6,24,60,61. We found that NCM strength varied substantially among predator assemblages, being higher in pure predators (i.e., dragonflies) than in scavengers (i.e., crayfish). Interestingly, NCM strength was lowest when a predator and a scavenger were paired together. The underlying mechanisms remain to be investigated in more detail. We assume that scavengers either feed on the dead prey abandoned by dragonfly larvae that cannot locate immobile prey62 or that scavengers and predators modify their behaviour when together. Whatever the exact mechanism, our results suggest that increased predator functional diversity in food webs can lower NCM strengths. Moreover, multi-predator NCM strengths in our experiment could not be predicted from single-predator NCM strengths alone. Both predator density and predator diversity, including the functional differences between pure predators (dragonfly larvae) and scavengers (crayfish), thus affected NCM strength. Overall, our results suggest that trophic transfer efficiency is higher in functionally diverse ecosystems, which may have important implications for population dynamics and community structure. ## Conclusions Nonconsumptive mortality is an important but under-appreciated component of consumer-resource interactions. Here we showed that abiotic and biotic factors such as temperature, prey density, predator functional diversity and density influence NCM strength. The effect of temperature on NCM strength varied among predator assemblages and was often not significant. On the other hand, NCM strength increased with prey density independently of temperature and predator assemblage, suggesting a general effect of prey density on NCM strength. Moreover, NCM strength declined in functionally diverse predator assemblages. 
Our results indicate that energy transfer across trophic levels is more efficient in functionally diverse predator communities, at lower resource densities and at higher temperatures, which has important implications for community dynamics, ecosystem services, and biological conservation. ## Material and Methods Experiments were conducted at the Research Institute of Fish Culture and Hydrobiology in Vodňany (RIFCH), Czech Republic during summer 2015. No specific permissions were required for capturing and manipulating the organisms used in the experiments. The study did not involve endangered or protected species. All experimental manipulations (capture, rearing and measurements) followed principles of animal welfare and their protection against abuse. We used two size classes of marbled crayfish Procambarus fallax f. virginalis (Decapoda; Cambaridae) and one size class of the dragonfly Aeshna cyanea (Odonata; Aeshnidae) as predators preying on common carp Cyprinus carpio (Cypriniformes; Cyprinidae) fry in the protopterygiolarval ontogenetic phase55. Marbled crayfish is an actively searching, benthic omnivore that is currently invading most freshwater ecosystems in Europe63. Larvae of the dragonfly Aesha cyanea are widespread native predators that can alternate between a ‘sit-and-wait’ and active foraging strategy targeting moving prey, and are often top predators in small fishless water bodies62. Dragonfly larvae were collected in small sandpit pools in southern Bohemia and released back to the source locality after the experiments. Fish fry were obtained from a hatchery belonging to RIFCH. Crayfish were obtained from laboratory cultures maintained at RIFCH. Before the experiment, predators and prey were maintained at 16 °C and respectively fed in excess with sludge worm (Tubifex tubifex) and brine shrimp (Artemia salina) nauplii. Dragonfly larvae were maintained individually in 0.5-litre plastic boxes (125 × 45 × 80 mm) with 0.4 litres of aged tap water containing a willow twig as a perching site. Crayfish were kept in groups at low densities (0.8 ind.L−1) in 50-litre aquaria with access to shelters (>1 per animal) to avoid excessive competition and cannibalism. ### Experimental design We standardized prey size (mean total length ± SD: 6.42 ± 0.20 mm) and used F-1 instar dragonfly larvae (further abbreviated as D) (total length: 30.1 ± 2.3 mm, wet weight: 0.53 ± 0.12 g) and two sizes of crayfish: small (abbreviated as SC; mean carapace length: 11.3 ± 0.9 mm, measured from the tip of the rostrum to the posterior edge of cephalothorax; wet weight: 0.45 ± 0.13 g) and large (LC; mean carapace length: 15.5 ± 1.0 mm; wet weight: 1.12 ± 0.18 g). One day before the experiment, predators were placed individually without food in 0.5-litre plastic boxes (125 × 45 × 80 mm) filled with 0.4 litres of aged tap water. Four hours before the experiment, predators were acclimated to the experimental temperature (16 or 20 °C). Similarly, prey were acclimated to the experimental temperature four hours before the experiment and were kept in 20-litre buckets. Experimental arenas consisted of plastic boxes (163 × 118 × 62 mm) filled with 1 litre of aged tap water and lined with a 1 cm layer of fine crystalline sand. We performed a full factorial experiment with two temperature regimes (16 and 20 °C), three prey densities (70, 110, 220 ind. 
L−1, representing low, medium, and high prey densities based on pilot experiments), and nine predator treatments with the three predator types: single predators (3 treatments), pairs with two predators of the same size and species (3 treatments), and pairs with two predators differing in size or species (3 treatments). Each combination of temperature, prey density and predator treatment was replicated seven times. In addition, five controls without predators were deployed to assess background mortality of prey for each combination of temperature and prey density. Prey were introduced into the experimental arenas for acclimation one hour before the start of the experiment. All predators were simultaneously introduced into the experimental arenas at the start of the experiment. After 24 hours, predators were removed and the number of living, killed (with visible attack marks), and dead prey (without visible attack marks) were recorded. During a pilot experiment, we observed that prey killed by the predators used in this study always had visible attack marks and all predator attacks were successful. Moreover, we did not observe partially eaten prey in this experiment. Although we did not directly measure stress levels of the fish fry, we attribute dead prey without visible attack marks to mortality due to high stress levels associated with predator presence. ### Statistical analyses Prey mortality in controls without predators was negligible (range 0–2% of initial prey) and prey mortality in controls without predators did not increase with prey density (GLM, F1,68 = 2.07, p = 0.15). The data were thus not corrected for background mortality. We calculated per capita NCM strength as the ratio of dead uneaten prey density over initial prey density, divided by the number of predators. In addition, we also calculated an alternative measure of per capita NCM strength as the ratio of the density of dead uneaten prey over the density of eaten prey, divided by the number of predators. As the results were qualitatively similar, we do not present results for the latter NCM metric. We also tested the goodness of fit of our models using the Hosmer-Lemeshow test and verified that all models fitted the data well (P > 0.05). All model results are shown as mean ± 95% Wald confidence interval (CI). We tested whether the per capita NCM strength (hereafter only NCM strength) is influenced by temperature, prey density, predator assemblage and their interactions using a GLM with a quasibinomial distribution to account for overdispersion64. The most parsimonious model was determined by sequential deletion of the least significant explanatory parameters or interaction terms from the full model. Parameter significance was evaluated using F-tests from the analysis of deviance. The final model included only parameters with significant p-values, and post-hoc Tukey tests were used to assess significant differences among treatment means. Finally, we grouped predator assemblages by functional groups: predators (i.e., only dragonfly larvae), scavengers (i.e., only crayfish), and mixed treatment (one scavenger and one predator) and analysed the effect of temperature, prey density and functional group on NCM strength as described above. We tested whether NCM strength in multiple-predator assemblages can be predicted using our experimental data from single predator treatments. 
For this purpose, we used the multiplicative risk model that often appears in studies investigating predation rate by multiple predators on a single prey species [65]:

$$NC_{ab}=N_{p}\left(P_{a}+P_{b}-P_{a}P_{b}\right)\qquad(1)$$

where $NC_{ab}$ is the predicted NCM strength measured as the density of dead uneaten prey, $N_{p}$ is the initial prey density, and $P_{a}$ and $P_{b}$ are NCM strengths measured as the respective proportions of dead uneaten prey in single predator $a$ and $b$ treatments. To better understand the mechanisms underlying our results, we further tested the influence of temperature, prey density, predator assemblage and their interactions on the per capita (i.e., per predator) proportion of dead prey with and without visible attack marks using two GLMs (one for each dependent variable) with quasibinomial distribution. The most parsimonious model was determined by sequential deletion of the least significant explanatory parameters or interaction terms from the full model and parameter significance was evaluated using F-tests from the analysis of deviance. Finally, we tested whether per capita NCM strength depended on predator density along with predator identity, temperature, prey density and their interactions. Only single predator treatments and treatments with predator pairs of the same size and species were used in this analysis. We again used GLMs (one for each dependent variable) with quasibinomial distribution and proceeded with model selection and evaluation of parameter significance as above. All analyses were implemented in R version 3.2.5 [66].

### Data availability

Primary data used in this study are available on the Dryad repository server.

## Additional information

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. 1. Brook, B. W., Sodhi, N. S. & Bradshaw, C. J. Synergies among extinction drivers under global change. Trends in Ecology & Evolution 23, 453–460 (2008). 2. 2. Potts, S. G. et al. Global pollinator declines: trends, impacts and drivers. Trends in Ecology & Evolution 25, 345–353 (2010). 3. 3. Hoegh-Guldberg, O. & Bruno, J. F. The impact of climate change on the world’s marine ecosystems. Science 328, 1523–1528 (2010). 4. 4. Porter, E. M. et al. Interactive effects of anthropogenic nitrogen enrichment and climate change on terrestrial and aquatic biodiversity. Biogeochemistry 114, 93–120 (2013). 5. 5. Tylianakis, J. M., Didham, R. K., Bascompte, J. & Wardle, D. A. Global change and species interactions in terrestrial ecosystems. Ecology Letters 11, 1351–1363 (2008). 6. 6. Sentis, A., Gémard, C., Jaugeon, B. & Boukal, D. S. Predator diversity and environmental change modify the strengths of trophic and nontrophic interactions. Global Change Biology 23, 2629–2640 (2017). 7. 7. Corvalan, C., Hales, S. & McMichael, A. Ecosystems and Human Well-Being. Vol. 5 (Island press Washington, DC, 2005). 8. 8. Brown, J. H., Gillooly, J. F., Allen, A. P., Savage, V. M. & West, G. B. Toward a metabolic theory of ecology. Ecology 85, 1771–1789 (2004). 9. 9. Rall, B. C., Vucic-Pestic, O., Ehnes, R. B., Emmerson, M. & Brose, U. Temperature, predator–prey interaction strength and population stability. Global Change Biology 16, 2145–2157 (2010). 10. 10. Jeschke, J. M., Kopp, M. & Tollrian, R. Predator functional responses: discriminating between handling and digesting prey. Ecological Monographs 72, 95–112 (2002). 11. 11. Preisser, E. L. & Bolnick, D. I.
The many faces of fear: comparing the pathways and impacts of nonconsumptive predator effects on prey populations. PloS ONE 3, e2465 (2008). 12. 12. Kruuk, H. Surplus killing by carnivores. Journal of Zoology 166, 233–244 (1972). 13. 13. Oksanen, T., Oksanen, L. & Fretwell, S. D. Surplus killing in the hunting strategy of small predators. The American Naturalist 126, 328–346 (1985). 14. 14. Sih, A., Englund, G. & Wooster, D. Emergent impacts of multiple predators on prey. Trends in Ecology and Evolution 13, 350–355 (1998). 15. 15. Charnov, E. L. Optimal foraging, the marginal value theorem. Theoretical Population Biology 9, 129–136 (1976). 16. 16. Short, J., Kinnear, J. & Robley, A. Surplus killing by introduced predators in Australia—evidence for ineffective anti-predator adaptations in native prey species? Biological Conservation 103, 283–301 (2002). 17. 17. Moore, N., Roy, S. & Helyar, A. Mink (Mustela vison) eradication to protect ground‐nesting birds in the Western Isles, Scotland, United Kingdom. New Zealand Journal of Zoology 30, 443–452 (2003). 18. 18. Peck, D., Faulquier, L., Pinet, P., Jaquemet, S. & Le Corre, M. Feral cat diet and impact on sooty terns at Juan de Nova Island, Mozambique Channel. Animal Conservation 11, 65–74 (2008). 19. 19. Stoks, R., Govaert, L., Pauwels, K., Jansen, B. & De Meester, L. Resurrecting complexity: the interplay of plasticity and rapid evolution in the multiple trait response to strong changes in predation pressure in the water flea Daphnia magna. Ecology Letters 19, 180–190 (2016). 20. 20. Trussell, G. C., Ewanchuk, P. J. & Matassa, C. M. The fear of being eaten reduces energy transfer in a simple food chain. Ecology 87, 2979–2984 (2006). 21. 21. Fraker, M. E. Predation risk assessment by green frog (Rana clamitans) tadpoles through chemical cues produced by multiple prey. Behavioral Ecology and Sociobiology 63, 1397–1402 (2009). 22. 22. Preisser, E. L. The physiology of predator stress in free‐ranging prey. Journal of Animal Ecology 78, 1103–1105 (2009). 23. 23. Siepielski, A. M., Wang, J. & Prince, G. Nonconsumptive predator-driven mortality causes natural selection on prey. Evolution 68, 696–704 (2014). 24. 24. McCauley, S. J., Rowe, L. & Fortin, M.-J. The deadly effects of “nonlethal” predators. Ecology 92, 2043–2048 (2011). 25. 25. Preisser, E., Bolnick, D. & Benard, M. The high cost of fear: behavioral effects dominate predator-prey interactions. Ecology 86, 501–509 (2005). 26. 26. Stephens, D. W. & Krebs, J. R. Foraging Theory. (Princeton University Press, 1986). 27. 27. Sih, A. Optimal foraging: partial consumption of prey. American Naturalist 116, 281–290 (1980). 28. 28. Knarrum, V. et al. Brown bear predation on domestic sheep in central Norway. Ursus 17, 67–74 (2006). 29. 29. Gende, S., Quinn, T. & Willson, M. Consumption choice by bears feeding on salmon. Oecologia 127, 372–382 (2001). 30. 30. Samu, F. & Biro, Z. Functional response, multiple feeding and wasteful killing in a wolf spider (Araneae: Lycosidae). European Journal of Entomology 90, 471–476 (1993). 31. 31. Andersson, M. & Erlinge, S. Influence of predation on rodent populations. Oikos 29, 591–597 (1977). 32. 32. Patterson, B. R. Surplus killing of White-tailed deer, Odocoileus virginianus, by coyotes, Canis lantrans, in Nova-Scotia. Canadian Field-Naturalist 108, 484–487 (1994). 33. 33. Mech, L. D., Smith, D. W., Murphy, K. M. & MacNulty, D. R. Winter severity and wolf predation on a formerly wolf-free elk herd. The Journal of Wildlife Management 65, 998–1003 (2001). 34. 34. 
Fantinou, A., Perdikis, D. C., Maselou, D. & Lambropoulos, P. Prey killing without consumption: Does Macrolophus pygmaeus show adaptive foraging behaviour? Biological Control 47, 187–193 (2008). 35. 35. Maupin, J. L. & Riechert, S. E. Superfluous killing in spiders: a consequence of adaptation to food-limited environments? Behavioral Ecology 12, 569–576 (2001). 36. 36. Dudgeon, D. Feeding by the aquatic heteropteran, Diplonychus rusticum (Belostomatidae): an effect of prey density on meal size. Hydrobiologia 190, 93–96 (1990). 37. 37. Neves, R. & Angermeier, P. Habitat alteration and its effects on native fishes in the upper Tennessee River system, east‐central USA. Journal of Fish Biology 37, 45–52 (1990). 38. 38. Kennish, M. J. Pollution Impacts on Marine Biotic Communities. Vol. 14 (CRC Press, 1997). 39. 39. Poff, N., Brinson, M. M. & Day, J. Aquatic ecosystems and globalclimate change. Pew Center on Global Climate Change, Arlington, VA 44, 1–36 (2002). 40. 40. Harley, C. D. et al. The impacts of climate change in coastal marine systems. Ecology Letters 9, 228–241 (2006). 41. 41. Barrios-O’Neill, D., Dick, J., Emmerson, M., Ricciardi, A. & MacIsaac, H. Predator‐free space, functional responses and biological invasions. Functional Ecology 29, 377–384 (2015). 42. 42. Duffy, J. E. et al. The functional role of biodiversity in ecosystems: incorporating trophic complexity. Ecology Letters 10, 522–538 (2007). 43. 43. Schmitz, O. J. Predator diversity and trophic interactions. Ecology 88, 2415–2426 (2007). 44. 44. Wasserman, R. J. et al. Using functional responses to quantify interaction effects among predators. Functional Ecology 30, 1988–1998 (2016). 45. 45. McCoy, M. W., Stier, A. C. & Osenberg, C. W. Emergent effects of multiple predators on prey survival: the importance of depletion and the functional response. Ecology Letters 15, 1449–1456 (2012). 46. 46. Snyder, W. E., Snyder, G. B., Finke, D. L. & Straub, C. S. Predator biodiversity strengthens herbivore suppression. Ecology Letters 9, 789–796 (2006). 47. 47. Benke, A. C. Interactions among coexisting predators–a field experiment with dragonfly larvae. The Journal of Animal Ecology 47, 335–350 (1978). 48. 48. Russo, R. Comparison of predatory behavior in five species of Toxorhynchites (Diptera: Culicidae). Annals of the Entomological Society of America 79, 715–722 (1986). 49. 49. Finke, D. L. & Denno, R. F. Predator diversity and the functioning of ecosystems: the role of intraguild predation in dampening trophic cascades. Ecology Letters 8, 1299–1306 (2005). 50. 50. Finke, D. L. & Denno, R. F. Predator diversity dampens trophic cascades. Nature 429, 407–410 (2004). 51. 51. Finke, D. L. & Snyder, W. E. Niche partitioning increases resource exploitation by diverse communities. Science 321, 1488–1490 (2008). 52. 52. Rall, B. C. et al. Universal temperature and body-mass scaling of feeding rates. Philosophical Transactions of the Royal Society of London B: Biological Sciences 367, 2923–2934 (2012). 53. 53. Jędrzejewska, B. & Jędrzejewski, W. Seasonal surplus killing as hunting strategy of the weasel Mustela nivalis-test of a hypothesis. Acta Theriologica 34, 347–359 (1989). 54. 54. Montagnes, D. J. & Fenton, A. Prey-abundance affects zooplankton assimilation efficiency and the outcome of biogeochemical models. Ecological Modelling 243, 1–7 (2012). 55. 55. Lang, A. & Gsödl, S. 
“Superfluous killing” of aphids: a potentially beneficial behaviour of the predator Poecilus cupreus (L.)(Coleoptera: Carabidae)?„Töten von Blattläusen im Überfluss “: ein potentiell vorteilhaftes Verhalten des Räubers Poecilus cupreus (L.)(Coleoptera: Carabidae)? Zeitschrift für Pflanzenkrankheiten und Pflanzenschutz/Journal of Plant Diseases and Protection 110, 583–590 (2003). 56. 56. Johnson, D. M., Akre, B. G. & Crowley, P. H. Modeling arthropod predation: wasteful killing by damselfly naiads. Ecology 56, 1081–1093 (1975). 57. 57. Anderson, T. W. Predator responses, prey refuges, and density‐dependent mortality of a marine fish. Ecology 82, 245–257 (2001). 58. 58. Conte, F. Stress and the welfare of cultured fish. Applied Animal Behaviour Science 86, 205–223 (2004). 59. 59. Worm, B. et al. Impacts of biodiversity loss on ocean ecosystem services. Science 314, 787–790 (2006). 60. 60. Griffin, J., Byrnes, J. & Cardinale, B. Effects of predator richness on prey suppression: a meta-analysis. Ecology 94, 2180–2187 (2013). 61. 61. Gilman, S. E., Urban, M. C., Tewksbury, J., Gilchrist, G. W. & Holt, R. D. A framework for community interactions under climate change. Trends in Ecology & Evolution 25, 325–331 (2010). 62. 62. Corbet, P. Dragonflies: Behavior and Ecology of Odonata. (Harley Books, United Kingdom., 1999). 63. 63. Patoka, J. et al. Predictions of marbled crayfish establishment in conurbations fulfilled: evidences from the Czech Republic. Biologia 71, 1380–1385 (2016). 64. 64. Zuur, A., Ieno, E. N., Walker, N., Saveliev, A. A. & Smith, G. M. Mixed Effects Models and Extensions in Ecology with R. (Springer, 2009). 65. 65. Soluk, D. A. Multiple predator effects: predicting combined functional response of stream fish and invertebrate predators. Ecology 74, 219–225 (1993). 66. 66. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/. (2016). Download references ## Acknowledgements This study was supported by the Ministry of Education, Youth, and Sports of the Czech Republic (projects CENAKVA – CZ.1.05/2.1.00/01.0024 and CENAKVA II – LO1205 under the NPU I program) and the Grant Agency of the University of South Bohemia (012/2016/Z). Work of D.S. Boukal and A. Sentis was supported by the Grant Agency of the Czech Republic (14–29857S). A. Sentis was also supported by the “Development of postdoc positions at University of South Bohemia” project no. CZ.1.07/2.3.00/30.0049, co-founded by the European Social Fund and the state budget of the Czech Republic, by the French Laboratory of Excellence project ‘TULIP’ (ANR-10-LABX-41; ANR-11-IDEX-0002–02), and by the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme (FP7/2007-2013) under REA grant agreement n. PCOFUND-GA-2013-609102 through the PRESTIGE program coordinated by Campus France. We thank Irina Kuklina, Martin Fořt, Buket Yazicioglu and Martin Prchal for technical assistance and three anonymous reviewers for helpful comments. ## Author information ### Affiliations 1. #### University of South Bohemia in České Budějovice, Faculty of Fishery and Protection of Waters, South Bohemian Research Centre of Aquaculture and Biodiversity of Hydrocenoses, Zátiší 728/II, 389 25, Vodňany, Czech Republic • Lukáš Veselý • , Miloš Buřič • , Pavel Kozák •  & Antonín Kouba 2. #### University of South Bohemia, Faculty of Science, Department of Ecosystem Biology, Branišovská 1760, 370 05, České Budějovice, Czech Republic • David S. 
Boukal •  & Arnaud Sentis 3. #### Czech Academy of Sciences, Biology Centre, Institute of Entomology, Laboratory of Aquatic Insects and Relict Ecosystems, Branišovská 1160/31, 370 05, České Budějovice, Czech Republic • David S. Boukal •  & Arnaud Sentis 4. #### Unité Mixte de Recherche 5174 ‘Evolution et Diversité Biologique’, Université de Toulouse III - Institut de Recherche pour le Développement-Centre National de la Recherche Scientifique-École Nationale Supérieure de Formation de l’Enseignement Agricole. 118 route de Narbonne, F-31062, Toulouse, France • Arnaud Sentis

### Contributions

L.V., D.S.B. and A.S. conceived the experiment and conducted data analyses. L.V., A.K., M.B., and P.K. conducted the experiment. L.V. wrote the first draft of the manuscript. D.S.B., A.K. and A.S. provided comments and additional revisions of the text.

### Competing Interests

The authors declare that they have no competing interests.

### Corresponding author

Correspondence to Lukáš Veselý.

## About this article

### DOI

https://doi.org/10.1038/s41598-017-17998-4
Dual graded graphs for Kac–Moody algebras

Thomas F. Lam and Mark Shimozono

Vol. 1 (2007), No. 4, 451–488

Abstract

Motivated by affine Schubert calculus, we construct a family of dual graded graphs $\left({\Gamma }_{s},{\Gamma }_{w}\right)$ for an arbitrary Kac–Moody algebra $\mathfrak{g}$. The graded graphs have the Weyl group $W$ of $\mathfrak{g}$ as vertex set and are labeled versions of the strong and weak orders of $W$ respectively. Using a construction of Lusztig for quivers with an admissible automorphism, we define folded insertion for a Kac–Moody algebra and obtain Sagan–Worley shifted insertion from Robinson–Schensted insertion as a special case. Drawing on work of Proctor and Stembridge, we analyze the induced subgraphs of $\left({\Gamma }_{s},{\Gamma }_{w}\right)$ which are distributive posets.

Keywords: dual graded graphs, Schensted insertion, affine insertion

Mathematical Subject Classification 2000: Primary 05E10; Secondary 57T15, 17B67
### Little g lab Report

This report should adhere to a more formal lab report structure. You can see what that entails here.

### Report Question 1

Now we have two ways to calculate the uncertainty of an experimental measurement. The first simply looks at the average of the uncertainties over repeated measurements, as you did in the first lab. The new method uses the standard deviation. How do multiple measurements of $d$ change the uncertainty? Compare the uncertainties in the velocity of the ball using these two different methods.

### Report Question 2

Within the limits of your experimental accuracy, is momentum conserved during the collision?

### More Report Questions

3. Derive equation (1), starting from general physics principles.

4. From your results, compute the fractional loss of kinetic energy of translation during impact. Disregard rotational energy of the sphere.

5. Derive an expression for the fractional loss of kinetic energy of translation in terms only of $m$ and $M$, and compare with the value calculated in the preceding question. Consider the collision as a totally inelastic one.
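For concreteness, one common way to write the two uncertainty estimates being compared in Report Question 1 (a sketch using illustrative symbols, not notation from the lab manual) is, for $N$ repeated measurements $d_1,\dots,d_N$ with individual uncertainties $\delta d_i$:

$$\delta d_{\text{avg}}=\frac{1}{N}\sum_{i=1}^{N}\delta d_i, \qquad \sigma_d=\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(d_i-\bar d\right)^2}, \qquad \delta\bar d=\frac{\sigma_d}{\sqrt{N}}.$$

The first estimate does not shrink as more measurements are taken, whereas the standard error of the mean $\delta\bar d$ decreases roughly as $1/\sqrt{N}$; this difference is what carries through when the uncertainty is propagated into the velocity.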
# Generate necklaces, Lyndon words, and relatives

Generate necklaces, Lyndon words and other objects with equivalence under rotation of length $n$ over the alphabet $\{0,1,\ldots,k-1\}$. The case $k=2$ corresponds to bitstrings.

## Object info

A necklace is an equivalence class of $k$-ary strings under rotation. We take the lexicographically smallest such string as the representative of each equivalence class and use this in the output of the program. A Lyndon word is an aperiodic necklace representative.

Unlabelled necklaces are equivalence classes of necklaces under permutation of alphabet symbols. Note that by permuting the alphabet symbols of a necklace and then rotating the string into its lexicographically smallest position, the result is a necklace representative. Among all permutations of alphabet symbols, the resulting necklace that is lexicographically smallest is used as the representative of an unlabelled necklace. For example, here is an equivalence class of unlabelled ternary necklaces: $\{011222, 022111, 001112, 002221, 000122, 000211\}$. The lexicographically smallest of these is $000122$ and so it is chosen as the representative. For information on generating necklaces and Lyndon words and their unlabeled counterparts in constant amortized time see [CRS+00].

In many applications not all necklaces are required, but only those of fixed density, meaning that the number of non-zero entries is fixed. In the more general case, one may want a list of necklaces with fixed content, where the number of occurrences of every character is fixed. For information on generating necklaces with fixed content and fixed density in constant amortized time see [Saw03] and [RS99].

Necklaces that do not contain a specified sequence as a substring are known as necklaces with forbidden substrings. For example, when considering all necklaces with $n=4$ and $k=2$ with the restriction that there are no $00$ substrings we get the set $\{0101, 0111, 1111\}$. Notice that this is precisely the set of necklaces that start with $0^i$ where $i$ is less than $t$ (here $t=2$, the length of the forbidden run of zeros). For more information on generating necklaces with forbidden substrings see [RS00].

A bracelet is a necklace that can be turned over. For information on generating bracelets in constant amortized time see [Saw01]. For bracelets with fixed content see [KSAH13]. The number of nonisomorphic unit interval graphs is the same as the number of binary bracelets of length $2t-1$ with content $(t,t-1)$.

A charm bracelet is a generalization of a bracelet considering the action of the group of affine transformations $j \mapsto a+dj\pmod n$ on the indices. For more information see [ĐKRS15].

A chord diagram is a set of $2n$ points on an oriented circle (counterclockwise) joined pairwise by $n$ chords. For information on efficiently generating all chord diagrams with $n$ chords see [Saw02].
We encode such a diagram by walking around the circle, and for each point we record the number of forward steps (in counterclockwise direction) to reach its partner along the same chord. For instance, for $n=3$ the diagram with three chords on the convex hull is encoded by 151515. Lyndon brackets correspond to the basis of the $n$th homogeneous component of the free Lie algebra. For information on generating such a basis in $O(n)$ time per element see [SR03]. ## Enumeration (OEIS) Links to the binary objects in the Online Encyclopedia of Integer Sequences: Enumeration formulae for $k$-ary necklaces and $k$-ary Lyndon words: $N_k(n) = \frac{1}{n} \sum_{d\vert n} \varphi(d)k^{n/d}, \qquad L_k(n) = \frac{1}{n} \sum_{d\vert n} \mu(d)k^{n/d}.$ Enumeration formula for $k$-ary bracelets: $B_k(n) = \left\{ \begin{array}{ll} \frac{1}{2} N_k(n) + \frac{1}{4}(k+1)k^{n/2} & \text{ if n is even,} \\ \frac{1}{2} N_k(n) + \frac{1}{2}k^{(n+1)/2} & \text{ if n is odd.} \end{array} \right.$ The number of nonisomorphic unit interval graphs is the same as the number of bracelets with $n$ black and $n-1$ white beads. Enumeration formula for binary unlabeled necklaces: $U_2(n) = N_2(n) - \frac{1}{2n} \sum_{\text{odd } d\vert n} \varphi(d)2^{n/d}.$ Enumeration formula for chord diagrams: $C(n) = \left\{ \begin{array}{ll} \frac{1}{2n} \displaystyle{\sum_{pq=2n} \varphi(p)p^{q/2}(q-1)!!} & \text{ if p is odd,} \\ \frac{1}{2n} \displaystyle{\sum_{pq=2n} \varphi(p)\sum_{j=0}^{\lfloor q/2 \rfloor}{q \choose 2j}(2j-1)!!} & \text{ if p is even.} \end{array} \right.$ Enumeration formula for $k$-ary necklaces with fixed content: $N_k(n_1,n_2,\ldots, n_k) = \frac{1}{n} \sum_{j \vert \gcd(n_1,n_2, \ldots, n_k)} \varphi(j)\frac{(n/j)!}{(n_1/j)!(n_2/j)!\cdots (n_k/j)!}.$ When $k=2$, the above formula can be applied and simplified for the case of binary fixed-density necklaces.
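As a quick illustration of the first pair of counting formulas above, here is a small self-contained Go sketch (the function names `necklaces` and `lyndon` are mine and are not part of the generator described on this page) that evaluates the divisor sums for $N_k(n)$ and $L_k(n)$ directly:

```go
package main

import "fmt"

// phi computes Euler's totient function by trial division.
func phi(d int) int {
	result := d
	for p := 2; p*p <= d; p++ {
		if d%p == 0 {
			for d%p == 0 {
				d /= p
			}
			result -= result / p
		}
	}
	if d > 1 {
		result -= result / d
	}
	return result
}

// mu computes the Möbius function by trial division.
func mu(d int) int {
	result := 1
	for p := 2; p*p <= d; p++ {
		if d%p == 0 {
			d /= p
			if d%p == 0 {
				return 0 // squared prime factor
			}
			result = -result
		}
	}
	if d > 1 {
		result = -result
	}
	return result
}

// ipow computes k^e for small non-negative exponents.
func ipow(k, e int) int {
	r := 1
	for i := 0; i < e; i++ {
		r *= k
	}
	return r
}

// necklaces evaluates N_k(n) = (1/n) * sum_{d|n} phi(d) * k^(n/d).
func necklaces(n, k int) int {
	sum := 0
	for d := 1; d <= n; d++ {
		if n%d == 0 {
			sum += phi(d) * ipow(k, n/d)
		}
	}
	return sum / n
}

// lyndon evaluates L_k(n) = (1/n) * sum_{d|n} mu(d) * k^(n/d).
func lyndon(n, k int) int {
	sum := 0
	for d := 1; d <= n; d++ {
		if n%d == 0 {
			sum += mu(d) * ipow(k, n/d)
		}
	}
	return sum / n
}

func main() {
	fmt.Println(necklaces(6, 2)) // 14 binary necklaces of length 6
	fmt.Println(lyndon(6, 2))    // 9 binary Lyndon words of length 6
}
```

For example, $N_2(6)=\frac{1}{6}\left(\varphi(1)2^6+\varphi(2)2^3+\varphi(3)2^2+\varphi(6)2^1\right)=\frac{84}{6}=14$, matching the program output. This only counts the objects; the constant-amortized-time generation algorithms cited above are a separate matter.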
Go Rand is a feature often used in development, but it has some pitfalls. This article breaks down Go's math/rand and makes it easy for you to use. First of all, a question: do you think rand can panic?

## Source Code Analysis

The math/rand source code is actually quite simple, with just two important functions.

```go
func (rng *rngSource) Seed(seed int64) {
	rng.tap = 0
	rng.feed = rngLen - rngTap
	//...
	x := int32(seed)
	for i := -20; i < rngLen; i++ {
		x = seedrand(x)
		if i >= 0 {
			var u int64
			u = int64(x) << 40
			x = seedrand(x)
			u ^= int64(x) << 20
			x = seedrand(x)
			u ^= int64(x)
			u ^= rngCooked[i]
			rng.vec[i] = u
		}
	}
}
```

This function sets the seed, which actually means setting the value of each position of rng.vec. The size of rng.vec is 607.

```go
func (rng *rngSource) Uint64() uint64 {
	rng.tap--
	if rng.tap < 0 {
		rng.tap += rngLen
	}
	rng.feed--
	if rng.feed < 0 {
		rng.feed += rngLen
	}
	x := rng.vec[rng.feed] + rng.vec[rng.tap]
	rng.vec[rng.feed] = x
	return uint64(x)
}
```

This is the function we end up calling when we use other functions like Intn(), Int31n(), etc. You can see that each call adds two values from rng.vec, indexed by rng.feed and rng.tap, and the result is also written back into rng.vec. Note that rngSource (in rng.go) both reads and writes rng.vec while generating a random number, so there is a data race when multiple goroutines call it at the same time. math/rand solves this by locking a sync.Mutex around calls to rngSource.

```go
func (r *lockedSource) Uint64() (n uint64) {
	r.lk.Lock()
	n = r.src.Uint64()
	r.lk.Unlock()
	return
}
```

We can also use rand.Seed() and rand.Intn(100) directly, because math/rand initializes a globalRand variable.

```go
var globalRand = New(&lockedSource{src: NewSource(1).(*rngSource)})

func Seed(seed int64) { globalRand.Seed(seed) }
func Uint32() uint32  { return globalRand.Uint32() }
```

Note that since the call to rngSource is locked, using rand.Intn() or rand.Int31() directly can result in global lock contention across goroutines. So in high-concurrency scenarios, when your program's performance is stuck here, you may need to consider using New(&lockedSource{src: NewSource(1).(*rngSource)}) to create separate rands for different modules. However, in practice, contention on the globalRand lock is usually not as severe as we might think. There is also a pitfall in using New to create a new rand, which is how the panic in the opening screenshot was produced, as we will see later.

## Seed

What exactly is the role of the seed?

```go
func main() {
	for i := 0; i < 10; i++ {
		fmt.Printf("current:%d\n", time.Now().Unix())
		rand.Seed(time.Now().Unix())
		fmt.Println(rand.Intn(100))
	}
}
```

Results:

```
current:1613814632
65
current:1613814632
65
current:1613814632
65
...
```

This example leads to the conclusion that the same seed gives the same result every time the program is run. Why is that? When you use math/rand, you must set the seed by calling rand.Seed, which actually sets the corresponding values for the 607 slots in rng.vec. Seed calls a seedrand function to calculate the value of each slot.

```go
func seedrand(x int32) int32 {
	const (
		A = 48271
		Q = 44488
		R = 3399
	)
	hi := x / Q
	lo := x % Q
	x = A*lo - R*hi
	if x < 0 {
		x += int32max
	}
	return x
}
```

The results of this function are not random; they are computed deterministically from the seed.
In addition, this function is not arbitrary; it is backed by a mathematical proof. This means that the same seed always produces the same values in rng.vec, and Intn therefore retrieves the same sequence.

## The traps I encountered

### 1. rand panic

The screenshot at the beginning of the article shows a panic that occurred one day while developing against an underlying library wrapped by someone else. The approximate implementation is as follows.

```go
// random.go
var (
	rrRand = rand.New(rand.NewSource(time.Now().Unix()))
)

type Random struct{}

func (r *Random) Balance(sf *service.Service) ([]string, error) {
	// .. obtain a set of ip+port entries via service discovery,
	// then pick some of them at random
	randIndexes := rrRand.Perm(randMax)
	// return those ips and ports
}
```

This Random is called concurrently, and since rrRand is not concurrency-safe, calls to rrRand occasionally panic. When using math/rand, some people initialize a new rand with rand.New instead of calling the package-level rand.Intn(), because they worry about lock contention, but note that a rand created by rand.New is not concurrency-safe. The fix: replace rrRand with globalRand; in our online high-concurrency scenarios the global lock had little measurable effect.

### 2. All traffic lands on the same machine

This one was also an underlying RPC library that uses random traffic distribution. After running online for a while, all traffic was routed to one machine, bringing the service down. The approximate implementation is as follows.

```go
func Call(ctx *gin.Context, method string, service string, data map[string]interface{}) (buf []byte, err error) {
	ins, err := ral.GetInstance(ctx, ral.TYPE_HTTP, service)
	if err != nil {
		// error handling
	}
	defer ins.Release()
	if b, e := ins.Request(ctx, method, data, head); e == nil {
		// error handling
	}
	// other logic, retries, etc.
}

func GetInstance(ctx *gin.Context, modType string, name string) (*Instance, error) {
	// other logic..
	switch res.Strategy {
	case WITH_RANDOM:
		if res.rand == nil {
			res.rand = rand.New(rand.NewSource(time.Now().Unix()))
		}
		which = res.rand.Intn(res.count)
	// other load-balancing strategies omitted
	}
	// return one of the ip and port entries
}
```

The reason for the problem: each request obtains an ip and port through GetInstance, and with the Random load-balancing strategy a new rand is initialized each time. We already know that the same seed produces the same sequence. When a burst of traffic arrives, concurrent requests hit GetInstance within the same second, so time.Now().Unix() returns the same value, every new rand produces the same "random" number, the same ip and port are chosen, and all traffic is routed to that one machine. Fix: change it to globalRand.

## Future expectations for rand

Basically, to avoid contention on the global lock, the first thing that comes to mind when using math/rand is a custom rand, but it is easy to get into trouble that way. Why does math/rand need a lock at all? We all know that math/rand is pseudo-random: once the seed is set, the contents of the rng.vec array are fully determined. Each call to Uint64 both reads and writes rng.vec, so access to it must be protected by a lock. Using the package-level rand.Intn() therefore does carry a global lock-contention cost, and I wonder whether math/rand will be optimized in the future.
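To make the fixes above concrete, here is a minimal sketch of a pattern that avoids both traps (my own illustration, not code from the libraries discussed in this post): seed once at startup with UnixNano rather than per request, and give the shared rand.Rand its own mutex so concurrent callers neither race nor all derive the same sequence from the same second.

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// safeRand wraps a rand.Rand with its own mutex so it can be shared by
// many goroutines without the data race that caused the panic above.
type safeRand struct {
	mu sync.Mutex
	r  *rand.Rand
}

func newSafeRand(seed int64) *safeRand {
	return &safeRand{r: rand.New(rand.NewSource(seed))}
}

func (s *safeRand) Intn(n int) int {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.r.Intn(n)
}

func main() {
	// Seed once at startup; UnixNano makes identical seeds far less
	// likely than the per-request Unix() seeding shown above.
	sr := newSafeRand(time.Now().UnixNano())

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(sr.Intn(100))
		}()
	}
	wg.Wait()
}
```

Whether this beats simply using the locked globalRand depends on how hot the call site is; the point is only that a rand created with rand.New must be protected explicitly if it is shared.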
# Use of partial derivatives as basis vector I am trying to understand use of partial derivatives as basis functions from differential geometry In tangent space $\mathbb{R^n}$ at point $p$, the basis vectors $e_1, e_2,...,e_n$ can be written as $\frac {\partial}{\partial x^1} \bigg|_p,\frac {\partial}{\partial x^2} \bigg|_p,...,\frac {\partial}{\partial x^n} \bigg|_p$ Let's say in 2 dimensional Euclidean space, a function $f : \mathbb {R^2}\rightarrow \mathbb {R^2}$ is $x^2 + y^2=4$ , a circle with radius 2. Tangent at point $p$ (2,0) will be $0e_1 + e_2$. If I say $f =x^2 + y^2-4 =0$, $\frac {\partial f}{\partial x} \bigg|_{p=(2,0)} = 4 \quad$ and $\quad \frac {\partial f}{\partial y} \bigg|_{p=(2,0)} = 4$ This does not make sense of the partial derivatives as basis vectors. Any comments? • Hm, first of all, your $f$ is a funtion $\mathbb{R}^2\rightarrow \mathbb{R}$. Then, the $\frac{\partial}{\partial x^i}$ are said to be tangent vectors, but you are looking at $\frac{\partial f}{\partial x^i}$, which is an entirely different object. – Thomas Dec 25 '15 at 18:44 • Do you have any specific issues / confusions about these tangent space bases in general that you want addressed, or do you just want your example made coherent? – epimorphic Dec 27 '15 at 23:59 • @epimorphic, Wanted to know how, in general, partial derivatives can be used as basis functions to represent a vector. – 343_458 Dec 28 '15 at 23:36 • Sorry for the super late response. You might want to look at some existing answers on this topic first, such as math.stackexchange.com/a/509515. Let me know whether it helped or not. – epimorphic Jan 8 '16 at 3:44 Imagine a ruler. The ruler, when paired with an object, provides its length. The length is different from the ruler, of course. One could say that the ruler evaluates a length on a given object. The ruler, here, is the tangent vector: $\frac{\partial}{\partial x}$. By doing $\frac{\partial f}{\partial x}$ you are evaluating your "ruler" on the object: the function. And that is what a tangent vector is (when interpretated as a derivation): it takes functions to real numbers. But the evaluation and the evaluator are two different objects altogether. Cartesian coordinates constitute the forest that hides the trees, i.e. the intrinsic notion of a tangent vector. In $E={\Bbb R}^n$ or more generally in any normed (and complete) vector space the intuitive notion you have of a tangent vector works fine. It is probably something as follows: Let $c: ]-\epsilon,\epsilon[ \rightarrow E$ be a smooth curve. Then the tangent vector to the curve at $p=c(0)$ is given by: $$v = c'(0)= \lim_{h\rightarrow 0} \frac{1}{h} \left( c(h) - c(0) \right)$$ So in a vector space a tangent vector may simply be viewed as an element of the vector space itself. You may compose with functions, take derivatives etc. as in your example. No need for adapting a different point of view. The statement you are asking about, may, however, be viewed as a preparation towards working with manifolds. Now, if $E$ were a manifold (whatever that is, I omit the details) then the situation is different. Although it is ok to talk of a smooth curve as before, the difference $c(h)-c(0)$ does not make sense in a general manifold, so how would you describe this $v$ that doesn't exist? Well, it does make sense to look at a smooth function of the manifold into a vector space, say $A: E \rightarrow {\Bbb R}$. 
Then the composed function $A\circ c: ]-\epsilon,\epsilon[\rightarrow {\Bbb R}$ is a map between reals and this we know how to differentiate. So the wanted tangent vector "$v=c'(0)$" at $p=c(0)$ may be given an interpretation as a differential operator on $A$ acting as follows: $$L_v A = \frac{d}{dt}_{|t=0} A(c(t)) = (A\circ c)'(0)$$ Now, a coordinate system is a suitable collection of $n$ smooth maps $x_1,...,x_n: E \rightarrow {\Bbb R}$ so, in particular, we may act upon them to define a collection of real numbers: $$v_k = L_v x_k = (x_k \circ c)'(0), \; k=1,...,n$$ Also, when we have a coordinate system, we may express the function $A$ as some smooth function of the coordinates: $A(\xi) = a(x_1(\xi),...,x_n(\xi))$, $\xi\in E$. But then we are able to calculate another expression for the above derivative: $$L_v A = \frac{d}{dt}_{|t=0} a (x_1 \circ c(t)),...,x_n\circ c(t)) = v_1\frac{\partial}{\partial x_1} a (p) + \cdots + v_n \frac{\partial}{\partial x_n} a(p)$$ So in the given coordinate system we may express the above 'action' as: $$L_v = v_1 \left( \frac{\partial}{\partial x_1}\right)_{|p} + \cdots + v_n \left( \frac{\partial}{\partial x_n}\right )_{|p}$$ Our tangent vector has then coordinates $v_1$,...,$v_n$ ($n$ real numbers) in the basis consisting of partial derivatives $\partial_{x_1}$,...,$\partial_{x_n}$ (evaluated at the point $p$). This point of view also allows you to describe how tangent vectors transform under change of coordinates and all this leads to the understanding of manifolds and how to do calculus on a manifold. The space of the OP's circle is not a tangent space, that's why the simple example does not make any sense. One must stay inside the circle, so to speak. So let's start over again and define that circle with radius $2$ as: $$\begin{cases} x = 2\cos(\phi) \\ y = 2\sin(\phi) \end{cases} \quad \Longrightarrow \quad x^2+y^2=4$$ There are no partial derivatives because the manifold is one-dimensional. And there is only one basis (tangent) vector: $$\begin{bmatrix} dx/d\phi \\ dy/d\phi \end{bmatrix} = \begin{bmatrix} -2\sin(\phi) \\ 2\cos(\phi) \end{bmatrix}$$ Normed to a unit vector $\,\vec{t}\,$ if we divide by $2$ : $$\vec{t} = \begin{bmatrix} dx/d\phi \\ dy/d\phi \end{bmatrix} / 2 = \begin{bmatrix} -\sin(\phi) \\ \cos(\phi) \end{bmatrix} = -\sin(\phi)\,\vec{e_1} + \cos(\phi)\,\vec{e_2}$$ Now specify for $\,\phi=0\,$ and we're done, for this example at least. The simplest way to approach this may be to fix $\vec p\in \mathbb R^n$, let $f:\mathbb R^n\to \mathbb R$ and consider (assuming these exist) the directional derivatives $\textbf D_\vec vf(\vec p)=\nabla f(\vec p)\cdot \vec v=\sum_{k=1}^{n}v_k\frac{\partial f}{\partial x_k}(\vec p)$. Now notice that this motivates the following: If we define for $1\leq k\leq n,\ \frac{\partial }{\partial x_k}:\mathcal C(\mathbb R^n,\mathbb R)\to \mathbb R$ in the obvious way by $f\mapsto \frac{\partial f}{\partial x_k}(\vec p)$ then $\left \{ \frac{\partial }{\partial x_k} \right \}_{1\leq k\leq n}$may be regarded as a basis for a vector space which we denote $T_{\vec p}(\mathbb R^n)$. An arbitrary element of $T_{\vec p}(\mathbb R^n)$is then given by a linear combination of the basis elements, that is $\textbf v=\sum_{k=1}^{n}v_k\frac{\partial }{\partial x_k}$, whose effect on functions $f$ at the point $\vec p$ is simply $\textbf v(f)=\textbf D_\vec vf(\vec p)$.
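To connect this back to the circle in the original question, here is a short worked example in the notation of the answers above (the numbers are mine): take the curve $c(t)=(2\cos t,\,2\sin t)$, the point $p=c(0)=(2,0)$, and the function $f(x,y)=x^2+y^2$.

$$v=c'(0)=(0,2)\quad\Longrightarrow\quad \mathbf v=0\,\frac{\partial}{\partial x}\bigg|_p+2\,\frac{\partial}{\partial y}\bigg|_p,$$

$$\mathbf v(f)=0\cdot\frac{\partial f}{\partial x}(p)+2\cdot\frac{\partial f}{\partial y}(p)=0\cdot 4+2\cdot 0=0=\frac{d}{dt}\bigg|_{t=0}f(c(t)).$$

The tangent vector is the differential operator $\mathbf v$, with components $(0,2)$ in the basis $\partial/\partial x,\ \partial/\partial y$; the numbers $\partial f/\partial x(p)=4$ and $\partial f/\partial y(p)=0$ are not basis vectors but the result of feeding the function $f$ to those operators, and $\mathbf v(f)=0$ simply records that $f$ is constant along the circle.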
# Tag Info 1 If two events both have a spacetime interval of zero, can they both be said to be happening “now”? There is an interval associated with any two events but there is not an interval associated with an event. From the Wikipedia article "Spacetime": In spacetime, the separation between two events is measured by the invariant interval between the ... 0 I'm not sure this is a complete answer to your question, but thinking about special relativity that way will get you into trouble. Essentially, that way of interpreting special relativity attributes all of its weirdness to signal delay. Here's how I think you're interpreting the barn door experiment: The ladder is put stationary in the barn and is found to ... 0 OK, just one more try to end this stupid question. There IS a way to formulate physics using light rays as your basis: a double null coordinate system${}^{1}$. If you have a ray moving in the $+x$ direction, define the two coordinates $$2\xi = t + x\;\;\quad\quad\quad2\eta = t - x$$ Then, the metric becomes $$ds^{2} = -4d\xi \,d\eta + dy^{2} + dz^{2}$$ ... 2 According to Einstein's theory of mass-energy equivalence, if the photon is a particle of pure energy, and if $E=mc^2$, then the photon is theoretically traveling at $c^2$; not $c$ You have neglected dimensional analysis: E $\to$ Joule = $\frac{\rm kg\,m^2}{\rm s^2}$ m $\to$ kilogram = $\rm kg$ which means that $c^2$ has units of m$^2$/s$^2$, not ... 8 Actually you're quite correct, though possibly not in the way you expected. Ordinary velocity isn't an invariant because obviously different observers moving at different speeds will measure different velocities. However there is an invariant form of velocity called the four velocity that is an invariant under special relativistic (i.e. Lorentz) ... 0 Instead of using existing spacecraft, let's use a photon rocket powered by the gamma rays emitted by matter anti-matter annihilations in its reacor. Where does the anti-mater come from? We will produce it using solar energy. We'll use giant solar panels that generate a huge voltage in vacuum which leads to Swinger pair production. Moving away from the Earth ... 0 From ScienceMuseum: Apollo 10 holds the record as the fastest manned vehicle, reaching speeds of almost 40,000 km per hour (11.08 km/s or 24,791 mph to be exact) during its return to Earth on 26th May 1969. Using the formula (as above). After traveling for 40 years, you would be a little over 0.86 seconds younger. Added: I did some calculating and ... 2 Almost none. Let's be much more generous than your idea of human-carrying craft. Let's just use the fastest probe. The Helios II craft, after nearing the sun, reached a heliocentric speed somewhere near 70 km/s. Obviously, its speed was more due to the gravitational influence of the sun than its engines. $$t = \frac{t_o}{\sqrt{1 - \frac{v^2}{c^2}}}$$ ... 2 So let's just say that the spacecraft can accelerate until it's moving away from the Earth at the speed of the fastest currently-existing spacecraft First, note that the fastest speed, relative to Earth, that a spacecraft has obtained is an exceedingly small fraction of the $c$ and, thus, one should not expect significant time dilation. For ... 0 Well, obviously it would be the speed of light times two. However, this can be misleading. Equations are all fine and dandy, but if you do not understand them, they are not of complete help. Imagine that you have the following... 1) A 300,000 km long spaceship which is at rest in space. 
2) A clock is located at each opposite end of the spaceship, and these ... 0 Your notion seems to be based on the thinking that light is a bunch of photons, and a photon is some kind of weird particle that travels at the speed of light, like some tiny spaceship. Then you ask, how can this tiny spaceship violate physical laws? What makes it so special? But a photon isn't a particle in any classical sense. It's not like a tiny ... 0 Special relativity states: ... I'll select and discuss the given statements in some particular order (which may be called "in order of simplicity of discussion") ... [...] The observer is (anything [...]) Right. Synonymous to "observer" or "anything", in the context of the theory of relativity, there are also the descriptions "material point" or ... 3 Just to add to John Rennie's answer, the objects where we expect to see the largest frame dragging effects are spinning black holes. There, there is actually a surface called the ergosphere (outside of the event horizon), where it is impossible for observers to stay stationary with respect to observers far from the black hole. In a sense, their reference ... 3 I will expand my comment above into an answer, but I will not comment further on it to avoid the usual very long discussions of your posts. In my opinion, you are trying to argue on a logical level, but it is not clear if you have enough knowledge of logical theories to do so on a mathematical/physical level. Without entering too much into details, a ... 3 The spacetime outside a spinning mass is described by the Kerr metric. To explain how the Kerr metric produces frame dragging is hard, because it's not something for which there's an easy intuitive model. Frame dragging arises because the spacetime geometry links the angle measured around the spinning object to time, and this means the angle changes with ... 1 In special relativity, in the rest-frame of the proton, the moving magnet m appears as a magnet m’ and an electric dipole p’. The electrostatic E field created by the proton makes rotate this electric dipole, actually the magnet. And the E’ created by the electric dipole is the responsible for the force the proton experiences. Remark that you will not found ... 10 To make progress we need to be clear what we mean by the laws of physics and observer. A law of physics is just some set of equations that we use to predict what happens. So if for example we're trying to describe how charges interact with light our set of equations, i.e. our law of physics, would be Maxwell's equations. But to write down Maxwell's ... 1 This is my first problem, as the modulus of a vector shouldn't be negative. First, while there are many useful properties of introductory linear algebra you should keep in mind with GR, thinking in Cartesian terms with positive definite matrices simply has to go. Vectors in relativity can very much have negative norm. Even though it's not often done in ... 5 Let's start at the beginning: The setting for relativity - be it special or general - is that spacetime is a manifold $\mathcal{M}$, i.e. something that is locally homeomorphic to Cartesian space $\mathbb{R}^n$ ($n = 4$ in the case of relativity), but not globally. Such manifolds possess a tangent space $T_p\mathcal{M}$ at every point, which is where the ... 1 You vastly overestimate the meaning of frame. A frame is a (local) choice of coordinates on the spacetime manifold $\mathcal{M}$. 
All physical laws can be directly formulated on the manifold itself, without referring to frames at all. That is at the heart of relativity, and that is what Lorentz invariance means. Let's go through your numbered points one by ... 0 In response to the extended discussion in the comments I wrote a program to clarify things interactively. The program is here: https://www.khanacademy.org/cs/relativistic2/6050744190369792 Frame S: The three perfectly vertical lines are the points A, O, and B being stationary. The other three lines are A', O', and B'. In frame S, the flashes of light at ... 2 First of all, light waves and matter waves may be treated together, using the same maths, because the waves associated with light and the waves associated with matter are fundamentally the same thing. Second, all the waves before they interfere and after they interfere may be written in terms of the probability current $j^\mu (x,y,z,t)$, and its ... 2 No, it doesn't mean that. One must distinguish two things: "laws of physics that apply to an object" and "laws of physics formulated from an object's viewpoint". These are two different things. Laws of physics apply to all objects. And the behavior of the objects may be described relatively to many coordinate systems or "frames of reference". The special ... 6 Their race is ill-defined. You can't declare a winner if you can't agree on the ordering of events. If they failed to pick a reference frame for the race before starting it, then of course an argument over the winner may ensue. No laws of physics have been violated. If you try and extend this to "malfunctioning machines", then yes, a machine that was not ... 3 That way of thinking about this is a nice one. From “The Elegant Universe”? Anyway, mathematically, you can define this four-velocity like the normal velocity, the time derivative of the position. However, in special relativity, the position is space and time $(t, x, y, z)$. And the derivative has to be with respect to the proper time (or curve parameter) ... 0 Your calculations are correct. The light does indeed take longer to reach the other end of the box when they are moving in the same direction, and vice-versa. How could this help, however, to convince the guy in the moving box that he was the one moving, and not me? When he compares the time that the light takes reaching the end of the box, and the time ... 4 If you look at the Earth's geoid, you can see that there isn't a particular "band" of gravitational variation along the equator. So while time would move slower / faster at varying locations around the globe, it is not correlated with the equator. Here is, I believe, the latest model, in 2D: And there is also a very nice 3D animation here. Regarding the ... 3 I've said it before and I will say it again: There are no frames travelling at the speed of light As David Z says in the very link you give, it is meaningless to ask what you would perceive travelling at the speed of light. You cannot. And even though there are particles that can, there are no frames associated with them. Have a look at the Lorentz boost. ... 12 The current answers by Luboš and David do a good job of explaining why it is essential to include general relativity in the picture. In fact, this is even more of an issue because the irregularities in the shape of the Earth do matter. It's fairly easy to understand why this is the case: it's been known since 2010 that atomic clocks are sensitive to height ... 
0 Things are much simpler here if one thinks in terms of spacetime events and their coordinates in the relatively moving reference frames. As best as I can tell, there are three events of interest: Event A: two photons are emitted from station A Event B: one of the photons is received at station B Event C: the other photon is received at station C ... 9 Is there a flaw in my reasoning or have I simply not been reading the right journals? Yes. The flaw is that you are ignoring general relativity. The poles are closer to the center of the Earth and are thus deeper in the Earth's gravity well than is the equator. The combined effects of gravitational and special relativistic time mean that clocks at sea ... 25 The difference would indeed be measurable with state-of-the-art atomic clocks but it's not there: it cancels. The reasons actually boil down to the very first thought experiments that Einstein went through when he realized the importance of the equivalence principle for general relativity – it was in Prague around 1911-1912. See e.g. the end of ... 2 The time dilation due to motion in a circle, relative to an observer at the centre, is just the usual Lorentz time dilation due to the velocity of the motion. If you're interested, in my answer to Is gravitational time dilation fundamentally different than other forms of time dilation? I showed how this is derived from the metric. Anyhow, as you say, the ... -1 Your calculation refers to time intervals between two events inside the ship, as seen from a different reference frame (Earth?), and in Special Relativity this is equivalent to what would be seen from the ship if in the other reference frame the same experiment was being performed. However this is not the correct way to solving the paradox, this paradox ... 1 I do not find easy to understand your calculations, but can give you an explanation which is not based on specific distances. It is easy to see why the observer inside the ship will perceive the events as simultaneous and the one outside the ship will not. First, notice that for every observer the speed of light is the same, c. So the observer on the ships ... 1 If you do not know anything more about $F$, the only way to do this is to simply plug the transform result $\Lambda^\mu_\nu x^\nu$ into $F$ and see what happens. General functions into $\mathbb{R}$ cannot be expected to behave nicely in any way. However, I cannot really fathom a situation where you would have to use a function in $\mathbb{R}$ where you ... 0 So how much has the exposure to light darkened the photographic plate? As much as expected for 301 ns or 33 ns ? For 33 ns. That is the time that the clock inside the ship-muon indicates. We measure 301 ns of exposure, but a much shorter time was experienced by the moving photographic plate. This question is tricky also because the light will be ... 1 Trying to defend the application of SR in my question, I discovered my mistake. From the observation framework of the photographic plate, it's a simple race between a muon speed space ship and light. Not so to the other observer. from the muon speed space ship's observation, the space ship is stationary and points m and e are approaching at 0.994c. When ... 2 In this case which observer will find the events to be simultaneous and why? Briefly, the events are simultaneous in $S$ and the reason is that you've stipulated the events are simultaneous in $S$. In more detail... when two events occur at A and B in S and the corresponding points In S' are A' and B'. 
The wording here is puzzling. Events ... 1 Because of length contraction predicted by special relativity, the distance between points m and e is 1637 m. The light from point m takes 5.49540 μs to reach point e. Point e reaches the ("stationary") space ship after 5.46254 μs. So the photographic plate is exposed to light for 32.9 ns 1637m is after length contraction in the reference frame ... 0 1. would it really be devastating for relativity that we find out in the future that photons have a tiny mass and move slower than ... uhh... the speed of light? As the phrase "... uhh ... " in your question anticipates: there is some devastation lurking in that question; namely a contradiction to the essential understanding of "light" as "any signal ... 0 The Transactional Interpretation of QM suggests that Maxwell's equations work backwards in time, carrying the information of one test back to the point of entanglement, where it can affect the entangled particle. This explanation bypasses any issue of violation of SR. 2 A quick alternate perspective: Not really. Relativity only requires the existence of an invariant speed $c$, it doesn't require that anything actually travels at that speed. So if photons were massive, there would be no problem, although some results in cosmology might have to be modified a bit. Pretty much every time you see it, $c$ means the invariant ... 0 There is what appears to be a very good lecture that deals with many of the issues. It is a Google Techtalk from Ron Garret entitled "The Quantum Conspiracy:What the Popularizers of QM don't want you to know". A rather tongue-in-cheek title that should not mislead you into thinking it is not a serious and worthwhile talk. www.youtube.com/watch?v=HQIJgheuYNU ... 2 it appears that electromagnetism has some of preponderant role in the universe compared to other theories This is not true. It played an important historic role, but is in no way theoretically "unique" because the photon travels at speed c. Indeed, the gluon also travels at speed c. If photons were found to be slightly massive, it would change a lot of ... 1 what are we measuring when we measure light? When considering "(average) speed of light (in vacuum)", or rather primarily: "(average) speed of a signal front" this means the quotient of the distance of a signal source and a receiver between each other (i.e. under the condition that these two participants had been at rest with respect to each other), ... -1 As far as I understand your experiment, A and B, as well as A' and B' should actually be considered sources of light. Now, if A and B are co-moving with O (and therefore, all three are stationary wrt. each other), the rays from A and B will reach the observer O at the same time. However, O and A' are moving toward each other, and therefore the observer O ... 0 In the first figure ... actually: in the bottom parts of both the first and the second figures ... A and B are two equidistant points from the observer O in S. In detail, therefore A, O, and B were and remained (pairwise) at rest to each other, with O having been and remained the "middle between" A and B, for each signal indication stated by ... 0 The ladder won't be trapped in both reference frames. If an observer with the garage closes the doors at the same time, so as to "trap" the ladder inside the garage, the events corresponding to the two doors closing will be simultaneous in the garage frame, but not in the ladder frame. An observer with the ladder will see the front door closing first and the ... 
-1 According to Special Relativity: Given, at time t=0 AO=OB=A'O'=O'B'. To the observer O, since AO=OB, the rays from A and B will reach O at the same time and hence, O will find the events to be simultaneous. Furthermore, since, at t=0 (the instant when the events occurred) A coincides with A' and B with B' and the speed of light is absolute, the ... Top 50 recent answers are included
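As a quick numerical check of the Apollo 10 answer quoted above (the arithmetic here is mine), the speeds involved are so small compared to $c$ that the expansion $\gamma-1\approx v^2/2c^2$ is enough:

$$\frac{v}{c}\approx\frac{1.108\times10^{4}\ \mathrm{m/s}}{3.00\times10^{8}\ \mathrm{m/s}}\approx3.7\times10^{-5},\qquad \gamma-1\approx\tfrac12\left(\frac{v}{c}\right)^2\approx6.8\times10^{-10},$$

$$\Delta t\approx(\gamma-1)\times 40\ \mathrm{yr}\approx6.8\times10^{-10}\times1.26\times10^{9}\ \mathrm{s}\approx0.86\ \mathrm{s},$$

consistent with the figure of being "a little over 0.86 seconds younger" after 40 years at that speed.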
Determining $\lim_{(x, y) \to (2y, y)} \exp(\frac{|x-2y|}{(x-2y)^2})$

Find the limit of $$\exp\left(\frac{|x-2y|}{(x-2y)^2}\right)$$ when $$(x,y) \to (2y,y)$$. I have considered two cases: $$(x-2y)<0$$ and $$(x-2y)>0$$. But in the first case the limit turns out to be $$0$$ and in the second case the limit is undefined. I am not sure if my solution is correct or not.

• hint $(x-2y)^2=|x-2y|^2$ – dmtri Dec 15 '18 at 8:36
• $e^{\frac{1}{0}}=\infty$ if we go from the right of $0$ – dmtri Dec 15 '18 at 8:39

$$\lim_{(x,y) \to (2y,y)} \exp\left({\frac{|x-2y|}{(x-2y)^2}}\right) = \lim_{(x,y) \to (2y,y)} \exp\left({\frac{|x-2y|}{|x-2y|^2}}\right) = \lim_{(x,y) \to (2y,y)} \exp\left({\frac{1}{|x-2y|}}\right) \equiv \lim_{z \to 0} \exp\left(\frac{1}{|z|} \right) = \infty$$

We have that by $t=x-2y \to 0$ we reduce to the simpler $$\large e^{\frac{|x-2y|}{(x-2y)^2}}=e^{{|t|}/{t^2}}=e^{{1}/{|t|}}\to e^\infty=\infty$$
### A man buys a field of agricultural land for Rs. 3,60,000. He sells one-third at a loss of 20% and two-fifths at a gain of 25%. At what price must he sell the remaining field so as to make an overall profit of 10%?

A. Rs. 1,00,000
B. Rs. 1,15,000
C. Rs. 1,20,000
D. Rs. 1,25,000

Answer: Option C

### Solution (By Apex Team)

$\begin{array}{l}\text{CP}=360000\\ \text{To gain 10% on whole land,}\\ \text{SP}=360000+10\%\text{ of }360000\\ \quad=\text{ Rs. }396000\\ \frac{1}{3}\ \text{of the land sold at 20% loss.}\\ \text{SP}\ \text{of }\frac{1}{3}\ \text{land}\\ =\frac{360000}{3}-20\%\ \text{of}\ \frac{360000}{3}\\ =\text{Rs.}\ 96000\\ \text{SP of }\frac{2}{5}\ \text{of the land}\\ =360000\times\frac{2}{5}+25\%\ \text{of}\ 360000\times\frac{2}{5}\\ =\text{Rs.}\ 180000\\ \text{Thus,}\\ \text{SP of the remaining land}\\ =396000-96000-180000\\ =\text{Rs.}\ 120000\end{array}$
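A quick cross-check of the same answer in terms of fractions of the purchase price (my own restatement of the solution above):

$$1-\frac{1}{3}-\frac{2}{5}=\frac{4}{15},\qquad \text{CP of the remaining land}=\frac{4}{15}\times360000=96000,$$

$$\text{required SP}=396000-96000-180000=120000=1.25\times96000,$$

so the remaining piece must be sold at a 25% gain for the overall profit to reach 10%, which again gives option C.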
# RBSE Class 9 Maths Important Questions Chapter 7 Triangles Rajasthan Board RBSE Class 9 Maths Important Questions Chapter 7 Triangles Important Questions and Answers. Rajasthan Board NCERT New Syllabus RBSE Solutions for Class 9 Guide Pdf free download in Hindi Medium and English Medium are part of RBSE Solutions. Here we have given RBSE Class 9th Books Solutions. ## RBSE Class 9 Maths Chapter 7 Important Questions Triangles I. Multiple Choice Questions: Choose the correct answer from the given four options. Question 1. In triangles ABC and DEF, AB = FD and ∠A = ∠D. The two triangle will be congruent by SAS axiom if: (a) BC = EF (b) AC = DE (c) AC = EF (d) BC = DE (b) AC = DE Question 2. Which of the following is not a interior for congruency of triangles? ' (a) SAS (b) ASA (c) SSA (d) SSS (c) SSA Question 3. If it is given that ∆ABC ≅ ∆FDE, AB = 5 cm, ∠B = 40° and ∠A = 80°. Then which of the following is true? (a) DF = 5 cm, ∠F = 60° (b) DF = 5 cm, ∠E = 60° (c) DF = 5 cm, ∠E = 60° (d) DE = 5 cm, ∠D = 40° (b) DF = 5 cm, ∠E = 60° Question 4. In the adjoining figure, ABCD is a quadrilateral in which AD = CB and AB = CD, then ∠ACB is equal to: (a) ∠ACD (b) ∠BAC Question 5. In the adjoining figure, AC = BD. If ∠CAB = ∠DBA, then ∠ACB is equal to: (b) ∠ABC (c) ∠ABD (d) ∠BDA (d) ∠BDA Question 6. In the adjoining figure, AB = AC and BD = CD. The ratio ∠ABD : ∠ACD A is: (a) 1:1 (b) 2 : 1 (c) 1 : 2 (d) 2 : 3 (a) 1:1 Question 7. In AQPR, if ∠R > ∠Q, then : (a) QR > PR (b) PQ > PR (c) PQ < PR (d )PR < PQ Answer: (b) PQ > PR Question 8. Two sides of a triangles are of lengths 5 cm and 1.5 cm. The length of the third side of the triangle cannot be : (a) 3.6 cm (b) 4.1 cm (c) 3.8 cm (d) 3.4 cm (d) 3.4 cm Question 9. In the adjoining figure, PQ > PR. If OQ and OR are bisectors of ∠Q and ∠R respectively, then: (a) OQ = OR (b) OQ < OR (c) OQ ≤ OR (d) OQ > OR (d) OQ > OR Question 10. If triangle ABC is obtuse angled and ∠C is obtuse, then (a) AB > BC (b) AB = BC (c) AB < BC (d) AC > AB (a) AB > BC Question 11. In the adjoining figure, ABCD is a quadrilateral in which BN and DM are drawn perpendiculars to AC such that BN = DM. If OB = 4 cm, then BD is: (a) 6 cm (b) 8 cm (c) 10 cm (d) 12 cm (b) 8 cm Question 12. D is a point on the side BC of AABC such that AD bisects ∠BAC, then : (a) BD = CD (b) BA > BD (c) BD > BA (d) CD > CA (b) BA > BD Question 13. If AB = QR, BC = PR and CA = PQ, then : (a) ∆ABC ≅ ∆PQR (b) ∆CBA ≅ ∆PQR (c) ∆BAC ≅ ∆RPQ (d) ∆PQR ≅ ∆BCA (b) ∆CBA ≅ ∆PQR Question 14. Which is true? (a) A triangle can have two right angles. (b) A triangle can have two obtuse angles. (c) A triangle can have two acute angles. (d) An interior angle of a triangle is less them either of the interior opposite angles. (c) A triangle can have two acute angles. Question 15. In the given figure, BE ⊥ CA and CF ⊥ BA such that BE = CF. Then which of the following is true? (a) ∆ABE ≅ ∆ACF (b) ∆ABE ≅ ∆AFC (c) ∆ABE ≅ ∆CAF (d) ∆ABE ≅ ∆FAC (a) ∆ABE ≅ ∆ACF Question 16. In the given figure, AE = DB, CB = EF and ∠ABC = ∠FED. Then, which of the following is true? (a) ∆ABC ≅ ∆DEF (b) ∆ABC ≅ ∆EFD (c) ∆ABC ≅ ∆FED (d) ∆ABC ≅ ∆DEF (a) ∆ABC ≅ ∆DEF II. Fill in the Blanks: Question 1. In the figure, if AB = AC, then value of x is _________ . 70° Question 2. If AB = QR, BC = PR and CA = PQ, then ∆ABC ≅ __________ . ∆QRP Question 3. In the figure, if OA = OB, OD = OC, then ∆AOD ≅ ∆BOC by congruence rule ___________ . SAS Question 4. In the figure, the ratio ∆ABD : ∆ACD is ___________ . 1 : 1 Question 5. 
In ∆PQR, ∠P = 60° and ∠Q = 50°, then the longest side of the triangle is ___________ . PQ III. True/False: State whether the following statements are True or False. Question 1. Two figures are congruent if they are of the same shape and of the same size. True Question 2. If two triangles ∆ABC and ∆PQR are congruent, then it is expressed as ∆ABC ≅ ∆PQR. True Question 3. If two sides and the included angle of one triangle are equal to the two sides and the included angle of the other triangle, then the two triangles are congruent. True Question 4. If two angles and one side of one triangle are equal to two angles and corresponding side of the other triangle, then the two triangles are congruent. True Question 5. In a triangle sum of two sides is smaller than the third side. False Question 6. In a triangle sum of two angles is greater than the third angle. False Question 7. In a triangle sum of two sides is greater than the third side. True IV. Match the Columns: Match the column I with the column II. Column I Column II (1) In ∆ABC, if AB - AC and ∠A - 70°, then ∠C = ................. (i) less (2) The vertical angle of an isosceles triangle is 120°. Each base angle is …………….. (ii) greater (3) The sum of three medians of a triangle is .................. than the perimeter. (iii) 30° (4) In a triangle, the sum of any two sides is always ……………. than the third side. (iv) 55° Column I Column I (1) In ∆ABC, if AB - AC and ∠A - 70°, then ∠C = ................. (iv) 55° (2) The vertical angle of an isosceles triangle is 120°. Each base angle is …………….. (iii) 30° (3) The sum of three medians of a triangle is .................. than the perimeter. (i) less (4) In a triangle, the sum of any two sides is always ……………. than the third side. (ii) greater V. Very Short Answer Type Questions: Question 1. In a triangle ABC, BC = AB and ∠B = 80°, then find the value of ∠A. ∆ABC is an isoscales triangle (∵ AB = BC) ⇒ ∠A = ∠C In ∆ABC, ∠A + ∠B + ∠C = 180° (Angle sum property of a triangle) ⇒ ∠A + 80° + ∠A = 180° ⇒ 2∠A = 180 - 80° = 100° ⇒ ∠A = $$\frac{100}{2}$$ = 50° ∠A = ∠C = 50° Question 2. In the figure, if AB = AC, then find x. In ∆ABC, ∠ABD + ∠ABC = 180° (Linear pair angles) ⇒ 125° + ∠ABC = 180° ⇒ ∠ABC = 180 - 125 = 55° ⇒ ∠ACB = 55°. (∵ AB = AC given and ∴ ∠ABC = ∠ACB) Now in ∆ABC, ∠A + ∠B + ∠C = 180° ⇒ x + 55° + 55° = 180° ⇒ x + 110 = 180 ⇒ x = 180 - 110 = 70° Question 3. In the given figure, if OA = OB, OD = OC then ∆AOD ≅ ∆BOC by which congruency rule? In ∆AOD and ∆BOC and ∠AOD = ∠BOC (Vertically opposite angles) ⇒ By SAS, ∆AOD ≅ ∆BOC. Question 4. In two triangles ABC and DEF, ∠A = ∠D, ∠B = ∠E and AB = EF, then are the two triangles congruent? If yes, by which congruency rule? Yes, the two triangles are congruent by ASA congruency rule as ∠A = ∠D, ∠B = ∠E and AB = DF. Question 5. From the given figure : ∠B = ∠C = 40° (∵ AB = AC given) Now, ∠A + ∠B + ∠C = 180 ⇒ ∠A = 180 - (40 + 40) = 180 - 80 = 100° Also ΔABD ≅ ΔACD BD = DC ∴ By SSS, ΔABD ≅ ΔACD ⇒ ∠BAD = ∠CAD = $$\frac{\angle \mathrm{BAC}}{2}$$ = \frac{100}{2} = 50° Question 1. In the given figure, ΔABC is an equilateral triangle. If altitude AD is drawn from the vertex A, then prove that AB = AC = 2BD. Given: In ΔABC, AB = AC = BC (Equilateral triangle) and AD ⊥BC (Altitude on base) To prove: AB = AC = 2BD AB = AC (Given) Then, BD = DC ⇒ BC = 2BD ∵AB = AC = BC ∴ AB = AC = 2BD Hence Proved. Question 2. In the given figure, PQ > PR, QS and RS are the bisectors of ∠Q and ∠R respectively, then find the relation between the sides SQ and SR. 
Given: PQ >PR ⇒ ∠PRQ > ∠PQR (Since, angle opposite to longer side is greater.) $$\frac{1}{2}$$∠PRQ > $$\frac{1}{2}$$∠PQR ⇒ ∠SRQ > ∠SQR (Since, QS and RS are the bisectors of ∠Q and ∠R, respectively.) ∴ SQ > SR (Since, side opposite to greater angle of a triangle is longer.) Question 3. In ∆ABC, if ∠A = 40° and ∠B = 60°, then find the longest side of ∆ABC. In ∆ABC, ∠A + ∠B + ∠C = 180° (By angle sum property of a triangle) ⇒ ∠C = 180° - 100° = 80° As, 80° > 60° ⇒ ∠C > ∠B ⇒ AB > AC (∵ Side opposite to greatest angle is longest.) Question 4. Is it possible to construct a triangle with lengths of its sides as 4 cm, 3 cm and 7 cm? Give reason for your answer. No, it is not possible to construct a triangle with lengths of its sides as 4 cm, 3 cm and 7 cm, because here we see that sum of the lengths of two sides is equal to third side, i.e. 4 + 3 = 7. We know that, the sum of any two sides of a triangle is greater than its third side, so, construction of given sides of triangle is not possible. Question 5. Prove that the perimeter of a triangle is greater than the sum of its three medians. Consider ABC is a triangle and AD, BE and CF are the medians of ∆ABC. We know that, the sum of any two sides of a triangle is greater than twice the median drawn to the third side. ∴ AB + AC > 2AD ......... (1) AB + BC > 2BE ......... (2) and BC + AC > 2CF .......... (3) On adding equations (1), (2) and (3), we get: 2(AB + BC + AC) > 2(AD + BE + CF) ∴ AB + BC + AC > AD + BE + CF Hence, the perimeter of a triangle is greater than the sum of its three medians. Hence proved. Question 6. In the given figure, if l || m, ∠ABC = ∠ABD = 40° and ∠BAC = ∠BAD = 90°, then prove that ∆BCD is an isosceles triangle. In ∆BAC, ∠ACB + ∠ABC + ∠BAC = 180° (By angle sum property of a triangle) ∴ ∠ACB + 40° + 90° = 180° (∵ ∠ABC = 40° and ∠BAC = 90°) ⇒ ∠ACB = 180° - (40° + 90°) (By angle sum property of a triangle) ⇒ 90° + ∠ADB + 40° = 180° (∵ ∠BAD = 90° and ∠ABD = 40°) ⇒ ∠ADB = 180° - 130° = 50° In ∆BCD, ∠BCD = ∠CDB = 50° (∵ ∠BCD = ∠ABC = 50° and ∠BDA = ∠ADB = 50°) We know that, if two angles of a triangle are equal, then it is •an isosceles triangle. So, ∆BCD is an isosceles triangle. Hence proved. Question 7. In the given figure, AD bisects ∠A. Then, find the relation between the sides AB, AC and DC. Given, AD is the bisector of ∠BAC. In ∆ABC, sum of three angles of a triangle is 180°. ∴ ∠A = 180° - (∠B + ∠C) = 180° - (70° + 30°) = 80° = $$\frac{80^{\circ}}{2}$$ = 40° (Since, AD is the bisector of ∠A.) (Since, sum of three angles of a triangle is 180°.) = 180° - (40° + 70°) = 70° (Since, sides opposite to equal angles of a triangle are equal.) (Since, sum of three angles of a triangle is 180°.) = 180° - (40° + 30°) = 110° ⇒ AD < DC < AC Therefore, AB < DC < AC Question 8. In ∆ABC, the sides AB, AC are equal and the base BC is produced to any point D. From D, DE is drawn perpendicular to BA produced and DF perpendicular to AC produced. Prove that BD bisects ∠EDF. Given: In ∆ABC, AB = AC and the base BC is produced to D. From D, DE is drawn perpendicular to BA produced to E and DF is perpendicular to AC produced to F. To prove: BD bisects ∠EDF, i.e. ∠3 = ∠4 Proof: In ∆ABC, AC = AB (Given) ⇒ ∠1 = ∠2 ......... (1) (Since, angles opposite to equal sides of a triangle are equal.) In right-angled ∆BED, we have : ∠1 + ∠3 + ∠BED = 180° (By angle sum property of a triangle) ⇒ ∠l + ∠3 + 90° = 180° (∵ DE ⊥ BE ⇒ ∠BED = 90°) ⇒ ∠1 + ∠3 = 90° ⇒ ∠3 = 90° - ∠1 ........ 
(2) In right-angled ∆CFD, we have: ∠4 + ∠5 + ∠CFD = 180° (By angle sum property of a triangle) ⇒ ∠4 + ∠5 + 90° = 180° (∵ DE ⊥ BE ⇒ ∠CFD = 90°) ⇒ ∠4 + ∠5 = 90° ⇒ ∠4 = 90° - ∠5 ⇒ ∠4 = 90° - ∠2 ......... (3) From equations (1), (2) and (3), ∠3 = ∠4 Hence, BD bisects ∠EDF. Question 1. In the given figure, QX and RX are bisectors of ∠PQR and ∠PRQ respectively of ∆PQR. If XS ⊥ QR and XT ⊥ PQ, then prove that ∆XTQ ≅ ∆XSQ and PX bisects ∠P. Given : QX and RX are the bisectors of ∠PQR and ∠PRQ, respectively. Also, XS ⊥ QR and XT ⊥ PQ. To prove: ∆XTQ ≅ ∆XSQ and PX bisects ∠P. Construction : Draw XY ⊥ PR. Proof : In ∆XTQ and ∆XSQ, we have: ∠TQX = ∠SQX (Since, QX is the bisector of ∠PQR.) ∠XTQ = ∠XSQ (Each 90°) and QX = QX (Common side) ∴ ∆XTQ ≅ ∆XSQ (By AAS congruence rule) Then, XT = XS (By CPCT) ........ (1) Similarly, it can be proved that ∆XSR ≅ ∆XYR (By AAS congruence rule) Then, XS = XY (By CPCT) ............. (2) From equations (1) and (2), XT = XY In right-angled ∆XTP and ∆XYP, we have XT = XY ∠XTP = ∠XYP (Proved above) and XP=XP (Each 90°) ∴ ∆XTP ≅ ∆XYP (Common side) Then, ∠XPT = ∠XPY (By RHS congruence rule) Therefore, PX bisects ∠P. (By CPCT) Hence, ∆XTQ = ∆XSQ and PX bisects ∠P. Hence proved. Question 2. ABC is a triangle in which ∠B = 2∠C. D is a point on BC such that AD bisects ∠BAC and AB = CD. Prove that ∠BAC = 72°. In ∆ABC, given ⇒ ∠B = 2∠C ⇒ ∠B = 2y (Put ∠C = y) Since, AD is the bisector of ∠BAC. Let BP be the bisector of ∠ABC. Join PD. In ∆ABC, ∠CBP = ∠BCP = y ⇒ PC = BP (Since, sides opposite to equal angles of a triangle are equal.) In ∆ABP and ∆DCP, ∠ABP = ∠DCP = y (∵∠ABP = ∠CBP = y and ∠DCP = ∠BCP = y) AB = DC (Given) and BP = PC [From equation (1)] ∴ ∆ABP ≅ ∆DCP (By SAS congruence criterion) ⇒ ∠BAP = ∠CDP and AP = DP (By CPCT) ⇒ ∠CDP = 2x (From figure) and ∠ADP = ∠DAP = x (Since, angles opposite to equal sides of a triangle are equal.) In ∆ABD, ∠ADC = ∠ABD + ∠BAD (Since, exterior angle of a triangle is equal to the sum of opposite interior angles.) ⇒ x + 2x = 2y + x ⇒ x = y ....... (2) In ∆ABC, ∠A + ∠B + ∠C = 180° (By angle sum property of a triangle) ⇒ 2x + 2y + y = 180° ⇒ 2x + 2x + x = 180° [From equation (2)] ⇒ 5x = 180° ⇒ x = 36° ∴ ∠BAC = 2x = 2 × 36° = 72° Hence proved. Question 3. Show that the sum Of three altitudes of a triangle is less than the sum of the three sides of the triangle. Consider ABC is a triangle and AL, BM and CN are the altitudes. To prove: AL + BM + CN < AB + BC + AC Proof: We know that, the perpendicular AL drawn from the point A to the line BC is shorter than the line segment AB drawn from point A to the line BC. ∴ AL < AB Similarly, BM < BC and CN < AC On adding equations (1), (2) and (3), we get : AL + BM + CN < AB + BC + AC Question 4. The image of an object placed at a point A before a plane mirror LM is seen at the point B by an observer at D as shown in figure. Prove that the image is as far behind the mirror as the object is in front of the mirror. In ∆BOC and ∆AOC, we have : ∠1 = ∠2 (Each 90°) Also, ∠i = ∠r (∵ Incident angle = Reflected angle) On multiplying both sides by - 1 and then adding 90° both sides, we get: 90° - ∠i = 90° - ∠r ⇒ ∠ACO = ∠BCO (∵ ∠BCO = 180° - ∠OCN - r = 180° - 90° - r = 90° - r) and OC = OC (Common side) ∴ ∆BOC ≅ ∆AOC (By ASA congruence rule) Then, OB = OA (By CPCT) Hence, the image is as far behind the mirror as the object is in front of the mirror. Hence proved. Question 5. If P is any point in the square ABCD and DPQR is another square, then prove that AP = CR. 
Given: P is any point in the square ABCD and DPQR is another square.
To prove: AP = CR
Proof: Let ∠PDC = x°.
Since ABCD is a square, ∠ADC = 90° and ray DP lies between DA and DC, so
∠ADP = ∠ADC - ∠PDC = (90 - x)° .......... (1)
Since DPQR is a square, ∠PDR = 90°, so
∠CDR = ∠PDR - ∠PDC = (90 - x)° .......... (2)
From equations (1) and (2), ∠ADP = ∠CDR.
Now, in ∆ADP and ∆CDR:
AD = CD (Sides of square ABCD)
DP = DR (Sides of square DPQR)
∠ADP = ∠CDR (Proved above)
∴ ∆ADP ≅ ∆CDR (By SAS congruence rule)
Then, AP = CR (By CPCT). Hence proved.
Due: November 17th, 2017 Math 142A Assignment 6 Answers to these problems are now available here. The $\LaTeX$ file used to produce them is here. 1. For each of the following, evaluate the limit indicated or prove that it does not exist. 1. $\displaystyle\lim_{x \to 3} \frac{\sqrt{3x}-3}{x-3}$ 2. $\displaystyle\lim_{x \to \infty} \frac{3x^4+6x^3-12x+3}{(2x+1)(5x^2+4x+16)(x-2)}$ 3. $\displaystyle\lim_{x \to 0} \frac{3x^4 - 2x^3 + x^2 + 10x}{(4x^2-x)(x^5+x^3+x+2)}$ 4. $\displaystyle\lim_{x\to0^+}\displaystyle\lim_{y \to 0} x^y$ 5. $\displaystyle\lim_{y\to0}\displaystyle\lim_{x \to 0^+} x^y$ 2. Suppose $f, g : S \to \mathbb{R}$ and $h : f(S) \to \mathbb{R}$ are monotonic functions. For each of the following functions, either prove it is monotonic or provide an example to show that it is not necessarily monotonic. 1. $f+g$ 2. $fg$ 3. $h\circ f$ 4. $\frac1{f}$, assuming $0 \notin \color{red}{f(S)}$ 1. Suppose that $S \subset \mathbb{R}$ is bounded above, $f : S \to \mathbb{R}$ is monotonic, and $x = \sup S$. Show that the limit from the left of $f$ at $x$ either converges, diverges to $\infty$, or diverges to $-\infty$. 2. As a consequence of the above, show that if $S \subset \mathbb{R}$ is arbitrary (i.e., potentially unbounded) and $x \in \mathbb{R}$ is the limit of some increasing sequence in $S$, then the limit from the left of $f$ at $x$ either converges, diverges to $\infty$, or diverges to $-\infty$. 3. Suppose that $f : \mathbb{R} \to \mathbb{R}$ is continuous. Show that $f$ is monotonic if and only if $f^{-1}(\{y\})$ is an interval (although potentially empty) for every $y \in \mathbb{R}$. 4. Evaluate the following limits: 1. $\displaystyle\lim_{x\to1}\frac{x^3-1}{x-1}$ 2. $\displaystyle\lim_{x\to1}\frac{x^n-1}{x-1}$, for $n \in \mathbb{N}$ 3. $\displaystyle\lim_{x\to1}\frac{x^n-x^m}{x-1}$, for $n, m \in \mathbb{N}$ 5. Bonus: A set $S$ is said to be countable if there is some map $f : S \to \mathbb{N}$ which is one-to-one. In the first assignment, you showed that $\mathbb{Q}_+$ and $\mathbb{N}^2$ were countable. Suppose $f : \mathbb{R} \to \mathbb{R}$ is monotonically increasing. Show that the set of points at which $f$ is discontinuous is countable.
# American Institute of Mathematical Sciences

September 2003, 9(5): 1133-1148. doi: 10.3934/dcds.2003.9.1133

## Homoclinic bifurcations, fat attractors and invariant curves

Departamento de Matemática, Facultad de Ciencias, La Hechicera, Mérida, 5101, Venezuela

Received October 2000, Revised November 2002, Published June 2003

Here we show examples of homoclinic bifurcations which can be perturbed to produce invariant curves and attractors with high Hausdorff dimension.

Citation: Leonardo Mora. Homoclinic bifurcations, fat attractors and invariant curves. Discrete and Continuous Dynamical Systems, 2003, 9 (5) : 1133-1148. doi: 10.3934/dcds.2003.9.1133
16.1: Introduction to Chi-Square

I don't know about you, but I am TIRED.  We've learned SO MUCH. Can you stay with me for one more chapter, though?  See, we've covered the appropriate analyses when we have means for different groups and when we have two different quantitative variables.  We've also briefly covered when we have ranks or medians for different groups, and when we have two binary or ranked variables.  But what we haven't talked about yet is when we only have qualitative variables.  When we have things with names, and all that we can do is count them.  For those types of situations, the Chi-Square ($$\chi^2$$) analysis steps in!  It's pronounced like "kite", not like "Chicago" or "chai tea".

Let's practice a little to remind ourselves about qualitative and quantitative variables; it's been a minute since we first introduced these types of variables (and scales of measurement)!

Exercise $$\PageIndex{1}$$

What type is each of the following?  Qualitative or Quantitative?

1. Hair color
2. Ounces of vodka
3. Type of computer (PC or Mac)
4. MPG (miles per gallon)
5. Type of music

Answer

1. Hair color:  Qualitative (it's a quality, a name, not a number)
2. Ounces of vodka:  Quantitative (it's a number that measures something)
3. Type of computer (PC or Mac):  Qualitative
4. MPG:  Quantitative
5. Type of music:  Qualitative

Exercise $$\PageIndex{2}$$

Do you use means to find the average of qualitative or quantitative variables?

Answer

Quantitative.  Means are mathematical averages, so the variable has to be a number that measures something.

Instead of means, you use counts with qualitative variables.

Frequency counts:  Counts of how many things are in each level of the categories.

Introducing Chi-Square

Our data for the $$\chi^{2}$$ test (the chi is a weird-looking X) are qualitative (also known as nominal) variables. Recall from our discussion of scales of measurement that nominal variables have no specified order (no ranks) and can only be described by their names and the frequencies with which they occur in the dataset. Thus, we can only count how many "things" are in each category.  Unlike the other variables that we have tested, we cannot describe our data for the $$\chi^{2}$$ test using means and standard deviations. Instead, we will use frequency tables.

Table $$\PageIndex{1}$$: Pet Preferences

|          | Cat | Dog | Other | Total |
|----------|-----|-----|-------|-------|
| Observed | 14  | 17  | 5     | 36    |
| Expected | 12  | 12  | 12    | 36    |

Table $$\PageIndex{1}$$ gives an example of a contingency table used for a $$\chi^{2}$$ test. The columns represent the different categories within our single variable, which in this example is pet preference. The $$\chi^{2}$$ test can assess as few as two categories, and there is no technical upper limit on how many categories can be included in our variable, although, as with ANOVA, having too many categories makes interpretation difficult. The final column in the table is the total number of observations, or $$N$$. The $$\chi^{2}$$ test assumes that each observation comes from only one person and that each person will provide only one observation, so our total observations will always equal our sample size. There are two rows in this table. The first row gives the observed frequencies of each category from our dataset; in this example, 14 people reported preferring cats as pets, 17 people reported preferring dogs, and 5 people reported a different animal. This is our actual data.  The second row gives expected values; expected values are what would be found if each category had equal representation.
The calculation for an expected value is: $E=\dfrac{N}{C} \nonumber$ Where $$N$$ is the total number of people in our sample and $$C$$ is the number of categories in our variable (also the number of columns in our table). Thank the Higher Power of Statistics, formulas with symbols that finally mean something!  The expected values correspond to the null hypothesis for $$\chi^{2}$$ tests: equal representation of categories. Our first of two $$\chi^{2}$$ tests, the Goodness-of-Fit test, will assess how well our data lines up with, or deviates from, this assumption.
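As a rough preview (not part of the original text; the Goodness-of-Fit statistic itself is developed in the following sections), here is how the expected counts and the familiar $$\sum (O-E)^2/E$$ statistic could be computed for the pet-preference table above:

```python
# Minimal sketch: expected counts and the goodness-of-fit statistic
# for the pet-preference data above (14 cats, 17 dogs, 5 other).
observed = [14, 17, 5]

N = sum(observed)          # total number of observations (36)
C = len(observed)          # number of categories (3)
expected = [N / C] * C     # E = N / C  ->  [12.0, 12.0, 12.0]

# Chi-square statistic: sum of (Observed - Expected)^2 / Expected
chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(expected)    # [12.0, 12.0, 12.0]
print(chi_square)  # 6.5
```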
Authors: Yang, Junfeng; Zhang, Yin; Yin, Wotao
Title: A Fast TVL1-L2 Minimization Algorithm for Signal Reconstruction from Partial Fourier Data
Type: Technical report, 10 pp., October 2008
Citation: Yang, Junfeng, Zhang, Yin and Yin, Wotao. "A Fast TVL1-L2 Minimization Algorithm for Signal Reconstruction from Partial Fourier Data." (2008) https://hdl.handle.net/1911/102105.

Abstract: Recent compressive sensing results show that it is possible to accurately reconstruct certain compressible signals from relatively few linear measurements via solving nonsmooth convex optimization problems. In this paper, we propose a simple and fast algorithm for signal reconstruction from partial Fourier data. The algorithm minimizes the sum of three terms corresponding to total variation, $\ell_1$-norm regularization and least squares data fitting. It uses an alternating minimization scheme in which the main computation involves shrinkage and fast Fourier transforms (FFTs), or alternatively discrete cosine transforms (DCTs) when available data are in the DCT domain. We analyze the convergence properties of this algorithm, and compare its numerical performance with two recently proposed algorithms. Our numerical simulations on recovering magnetic resonance images (MRI) indicate that the proposed algorithm is highly efficient, stable and robust.
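The report itself is not reproduced here, but as a rough, hedged illustration of the "shrinkage" step mentioned in the abstract (the other main ingredient being FFTs), the classical soft-thresholding operator can be written in a few lines. The function name, the test vector, and the threshold value below are illustrative and not taken from the paper:

```python
import numpy as np

def shrink(x, tau):
    """Soft-thresholding (shrinkage) operator, applied elementwise:
    each entry of x is moved toward zero by tau and clamped at zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# Illustrative use: shrink a small vector with threshold 0.5.
x = np.array([-1.2, -0.3, 0.0, 0.4, 2.0])
print(shrink(x, 0.5))   # [-0.7 -0.   0.   0.   1.5]
```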
# zbMATH — the first resource for mathematics

Positive-real structure and high-gain adaptive stabilization. (English) Zbl 0631.93058

This paper is a mathematical treatise presenting a proof of the stability conditions for a multivariable system with time-varying output feedback and subjected to linear or nonlinear perturbations of the state, output and input. It is assumed that the system is linear, controllable, observable and minimum-phase. The stability of the feedback system is proved with the Lyapunov equation and by relating ‘gain divergence’ ($\lim_{t\to \infty}k(t)=\infty$, where $k(t)$ is a feedback gain) to the system-theoretic criterion for positive-real matrices. No application is included, and this paper is recommended for mathematicians with an interest in gain adaptation control rules.

Reviewer: H. H. van de Ven

##### MSC:
93D15 Stabilization of systems by feedback
93C35 Multivariable systems, multidimensional control systems
34D10 Perturbations of ordinary differential equations
34D20 Stability of solutions to ordinary differential equations
93D05 Lyapunov and other classical stabilities (Lagrange, Poisson, $L^p, l^p$, etc.) in control theory
93C40 Adaptive control/observation systems

Keywords: time-dependent
# Category: Software Development

## Adjust text color to be readable on light and dark backgrounds of user interfaces

Most modern user interfaces support different color schemes for day and night: the so-called light and dark modes. Selecting a text color for each of those modes is not a big deal and it's the way to go when designing the user interface.

In some cases, the text color is driven by the displayed contents. In the example below, the tint color is matched to the color of the drink. The global tint color of this app is totally different, but this color adjustment gives a very nice effect. But as you might already see, there is a small problem when it comes to very light or very dark colors: each color has good readability on either a light or a dark background. Some colors might fit both, but that's not always the case. In the example below, the light yellow is still visible, but when it comes to small icons or small text, the details are lost.

To overcome this issue, a simple solution is to select two colors for each recipe so that each mode has a different one. That's fine, but it might totally change the effect of these colored pages.

## Can we calculate a suitable color?

Some time ago, there was an article about Black or white text on a colour background? In this one, I described different algorithms to calculate the best text color (black or white) for a colored background. But now, we need the opposite: a colored text that has good readability on both white (light) and black (dark) backgrounds.

When we look at the HSL and HSV/HSB color models, we already have a value for ‘lightness’ or ‘brightness’. The idea is to find a color that matches a given hue and saturation and that has a brightness which is readable on light and dark backgrounds. For this, we can use different algorithms. Very good results could be achieved with a ‘Weighted W3C Formula‘. This formula takes into consideration that the human eye perceives some of the primary colors darker than others.

f(r, g, b) = r × 0.299 + g × 0.587 + b × 0.114

Each color that is located at the border between the black and white overlay is suitable for light and dark backgrounds.

Step 1: convert the given color to HSV/HSB
Step 2: keep hue and saturation constant and adjust the brightness (make the color lighter or darker)
Step 3: convert the HSV/HSB value back to the required color format

## Implementation in PHP

A simple calculation for a given RGB color is shown below. The classes used in this snippet are available on GitHub. The code checks the initial brightness of the color and lightens or darkens the value until the ‘border’ calculated by the ‘Weighted W3C Formula’ is reached. This is the case at the value 127; the total range of the brightness is 0 to 255.

$hsv = Convert::rgb2hsv($rgb);
$step = 0.01;
$brightness = Calculate::weightedW3C($rgb);

if ($brightness < 127) {
    // Too dark: raise the HSV brightness channel until the weighted brightness reaches the border.
    while ($brightness < 127 && $hsv[2] >= 0 && $hsv[2] <= 1) {
        $hsv[2] += $step;
        $brightness = Calculate::weightedW3C(Convert::hsv2rgb($hsv));
    }
} else {
    // Too light: lower the HSV brightness channel until the weighted brightness reaches the border.
    while ($brightness > 127 && $hsv[2] >= 0 && $hsv[2] <= 1) {
        $hsv[2] -= $step;
        $brightness = Calculate::weightedW3C(Convert::hsv2rgb($hsv));
    }
}

return Convert::hsv2rgb($hsv);

## Some examples

But how does this result look for different colors? Let's start with some dark colors. Those are fine for a light background, but they become unreadable on a dark one. The top colors show the input color (before) and the color below shows the output of the calculation above (after).
And now let's look at some light colors, which are fine for dark backgrounds but totally unreadable on light backgrounds. The last color is similar to the example at the beginning and, as you can see, the optimized color has a much better readability. This could be achieved for both light and dark colors.

The code example shown above is written in PHP. An adaptation should be easily possible for any other coding or scripting language.

The algorithm mentioned in this post is also available on GitHub: https://github.com/mixable/color-utils. This package is usable with composer:

composer require mixable/color-utils

The optimized color can be calculated with:

use Mixable\Color\Calculate;

// ...
$hex = '#ffcc00';
$optimizedColor = Calculate::readableColorForLightAndDarkBackground($hex);

## Xcode: how to disable an extension during app build

Sometimes the development version of an app includes code, e.g. an extension, that should not be released yet. In this case, it's possible to exclude the extension when building the app. This keeps all your code, but does not include the extension during the build phase.

To achieve this, simply open the Build Phases of your main app and remove the extension(s) from Dependencies and Embed App Extensions. You can add the extension later when required. Below is a screenshot of the setting in Xcode.

Photo by Clément Hélardot on Unsplash

## Quick Look plugins for software development

Quick Look already supports multiple file types. But there is more – especially for software development. Here are some plugins that make Quick Look even better.

Note: some of the plugins might not work instantly after brew install ... when you are on macOS Catalina or later. In this case, it is possible to download the plugin manually and copy the .qlgenerator file to ~/Library/QuickLook. This requires running qlmanage -r (or a system restart) to enable the plugin.

## QLMarkdown

QLMarkdown provides QuickLook support for markdown files (*.md). This plugin renders the markdown content and shows the result. To install QLMarkdown, use:

brew cask install qlmarkdown

For manual installation, the plugin is available at https://github.com/toland/qlmarkdown.

## QLStephen

This Quick Look plugin provides a file preview for files without extension, e.g. README, INSTALL, Capfile, CHANGELOG, etc. It can be installed using Homebrew:

brew cask install qlstephen

For manual installation, the plugin is available at https://github.com/whomwah/qlstephen.

## QLColorCode

This is a Quick Look plug-in that renders source code with syntax highlighting. To install the plugin, use Homebrew:

brew cask install qlcolorcode

For manual installation, the plugin is available at https://github.com/anthonygelibert/QLColorCode. If you want to configure QLColorCode, there are several defaults commands that are described on the download page.

## Quick Look Json

This is a Quick Look plug-in that renders json files. To install the plugin, use Homebrew:

brew cask install quicklook-json

For manual installation, the plugin is available at http://www.sagtau.com/quicklookjson.html.

## WebP QuickLook

This is an open-source QuickLook plugin to generate thumbnails and previews for WebP images. To install the plugin, use Homebrew:

brew install webpquicklook

For manual installation, the plugin is available at https://github.com/dchest/webp-quicklook.

Is something missing? Please let me know in the comments if there are any other plugins that might be helpful for software development.
Photo by Shane Aldendorff on Unsplash ## UX improvements: enterkeyhint to define action label for the keyboard of mobile devices The enterkeyhint is a html attribute described in the HTML standard, which can be used to improve the context of action buttons of keyboards on mobile device. The enterkeyhint content attribute is an enumerated attribute that specifies what action label (or icon) to present for the enter key on virtual keyboards. This allows authors to customize the presentation of the enter key in order to make it more helpful for users. It allows the following fixed values: enter, done, go, next, previous, search and send. Let’s have a look at those values and the resulting keyboard style on iOS: ### <input> The default behavior without any value. ### <input enterkeyhint=”enter”> The user agent should present a cue for the operation ‘enter’, typically inserting a new line. ### <input enterkeyhint=”done”> The user agent should present a cue for the operation ‘done’, typically meaning there is nothing more to input and the input method editor (IME) will be closed. ### <input enterkeyhint=”go”> The user agent should present a cue for the operation ‘go’, typically meaning to take the user to the target of the text they typed. ### <input enterkeyhint=”next”> The user agent should present a cue for the operation ‘next’, typically taking the user to the next field that will accept text. ### <input enterkeyhint=”previous”> The user agent should present a cue for the operation ‘previous’, typically taking the user to the previous field that will accept text. ### <input enterkeyhint=”search”> The user agent should present a cue for the operation ‘search’, typically taking the user to the results of searching for the text they have typed. ### <input enterkeyhint=”send”> The user agent should present a cue for the operation ‘send’, typically delivering the text to its target. Photo by Melisa Hildt on Unsplash ## Synology: How do I update an existing Docker container with a new image? As always: before you do such an update, make sure to create a backup of all your files. If something goes wrong, this may lead to data loss! To update an existing Docker container, the following steps are necessary; 1. Go to Registry and download new image (mostly the “latest” version) 2. Go to Container, select the container you need to update and stop it 3. From Actions menu select Clear 4. Start the container again This will clear the complete container and start with the newly downloaded Docker image. Since the data folders are mounted into the container, this will not erase the apllications data. Configurations are also not affected by this. Photo by sergio souza on Unsplash ## Android Studio: when shortcuts do not work on macOS Compared to other IntelliJ® based software, some shortcuts in Android Studio did not work for me. For example cmd + shift + F, which should open the global search did not work. The reason for this is the keymap setting that was set to IntelliJ IDEA Classic. Setting the keymap to macOS (as shown in the figure below) solved the issue. ## Run GitLab console on Synology NAS As the Synology DSM uses Docker to run GitLab, we can use Docker as well to install GitLab Runner. For this, connect to the Synology using SSH: ssh <admin-user>@<synology> -p <port> To connect to the GitLab container, you can use the following command to open: docker exec -it synology_gitlab /bin/bash You might adjust the name of the GitLab docker depending on your system. 
To open the console, run:

gitlab-rails console

When GitLab is installed using the DSM package manager, just use the following commands:

cd /home/git/gitlab/bin
./rails console production

## Commands for the console

Below are some examples of how to use the GitLab console.

### Check the mail delivery method

ActionMailer::Base.delivery_method

Output might be:

=> :smtp

### Check the smtp settings

ActionMailer::Base.smtp_settings

Output might be:

=> {:address=>"example.com", :port=>25, …

### Testing the SMTP configuration (see documentation)

Notify.test_email('[email protected]', 'Subject', 'Mail Body').deliver_now

## GitLab on Synology: set ‘external_url’

There are two (or even more) solutions to install GitLab on a Synology:

• Using Docker and the container gitlab/gitlab-ce
• Using the DSM package manager

Depending on the type of installation, different settings are required to update the external URL.

## Using Docker container

The external url of GitLab can be defined in /etc/gitlab/gitlab.rb. The parameter takes a URL and can also handle a port:

external_url 'http://example.synology.me:30000/'

Important: when a port is specified in external_url, this will override the http/https port where nginx is listening. To use a different port for nginx, this requires an additional setting:

nginx['listen_port'] = 80

After changing this setting, it's necessary to run:

gitlab-ctl reconfigure

The settings above are necessary if port routing is set like the following:

## Using DSM package manager installation

This installation of GitLab on Synology uses localhost as the default value for the external url. This may lead to some problems when accessing GitLab over another IP or host name. In my case, this led to missing icons and a non-functional WebIDE. An inspection of the HTML page shows that some resources are requested over http://localhost/... which leads to 404 errors for those resources.

Since the GitLab container on Synology is not based on the omnibus package, you cannot use external_url directly in /etc/gitlab/gitlab.rb. If you want to change the url you can do it by changing the docker environment parameter GITLAB_HOST.
## A Near-Optimal Best-of-Both-Worlds Algorithm for Online Learning with Feedback Graphs

### Chloé Rouyer · Dirk van der Hoeven · Nicolò Cesa-Bianchi · Yevgeny Seldin

##### Hall J #839

Keywords: [ feedback graphs ] [ Online Learning ] [ Beyond Bandits ]

Thu 1 Dec 2 p.m. PST — 4 p.m. PST

Abstract: We consider online learning with feedback graphs, a sequential decision-making framework where the learner's feedback is determined by a directed graph over the action set. We present a computationally-efficient algorithm for learning in this framework that simultaneously achieves near-optimal regret bounds in both stochastic and adversarial environments. The bound against oblivious adversaries is $\tilde{O} (\sqrt{\alpha T})$, where $T$ is the time horizon and $\alpha$ is the independence number of the feedback graph. The bound against stochastic environments is $O\big((\ln T)^2 \max_{S\in \mathcal I(G)} \sum_{i \in S} \Delta_i^{-1}\big)$, where $\mathcal I(G)$ is the family of all independent sets in a suitably defined undirected version of the graph and $\Delta_i$ are the suboptimality gaps. The algorithm combines ideas from the EXP3++ algorithm for stochastic and adversarial bandits and the EXP3.G algorithm for feedback graphs with a novel exploration scheme. The scheme, which exploits the structure of the graph to reduce exploration, is key to obtaining best-of-both-worlds guarantees with feedback graphs. We also extend our algorithm and results to a setting where the feedback graphs are allowed to change over time.
{{ message }} # openjdk / panama-foreign Closed wants to merge 5 commits into from Closed wants to merge 5 commits into from ## Conversation ### ChrisHegarty commented May 20, 2020 • edited by openjdk bot Hi, As part of feedback on the Foreign Memory API (when experimenting with its usage internally in the JDK), a small number of potential usability enhancements could be made to the API. This is the fourth such, and last on my current todo list. This change proposes to add a new method: MemorySegment::mismatch The mismatch semantic is very useful for building equality and comparison logic on top of segments. I found that I needed such when modeling and comparing native socket address in the JDK implementation. It is possible to write your own, but requires a non-trivial amount of not-trivial code - to do it right! I also think that we can provide a more efficient implementation building on top of the JDK's internal mismatch support. I still need to do some perf testing and add a micro-benchmake ( Maurizio suggested possibly amending TestBulkOps ). There is also the question about possibly improving the JDK's internal implementation to work on long sizes (which could be done separately). For now, I just want to share the idea, along with the proposed specification and initial implementation. ### Progress • Change must not contain extraneous whitespace • Change must be properly reviewed ### Reviewers $git fetch https://git.openjdk.java.net/panama-foreign pull/180/head:pull/180 $ git checkout pull/180 Initial mismatch implementation ba7c832 ### bridgekeeper bot commented May 20, 2020 👋 Welcome back chegar! A progress list of the required criteria for merging this PR into foreign-memaccess will be added to the body of your pull request. bot added the label May 20, 2020 ### Webrevs reviewed Looks good - I've added some comments. Test is very comprehensive - thanks! Show resolved Hide resolved Show resolved Hide resolved } for (; off < minSize; off++) { if (UNSAFE.getByte(this.base(), this.min() + off) != UNSAFE.getByte(that.base(), that.min() + off)) { #### mcimadamore May 20, 2020 Collaborator This code could be simplified/rewritten to use MemoryAddress' and VH, instead of unsafe access with object/offset addressing. E.g. you could maintain a MemoryAddresslocal variable in the loop instead of theoffsetand keep increasing that address on each iteration ofArraySupport::vectorizedMismatch. Then, when you get out of the loop, the address already points at the base of the region to compare, and a simple for loop with an indexed VH should do the job. 
@Test(dataProvider = "slices") public void testSameValues(MemorySegment ss1, MemorySegment ss2) { out.format("testSameValues s1:%s, s2:%s\n", ss1, ss2); MemorySegment s1 = initializeSegment(ss1); MemorySegment s2 = initializeSegment(ss2); if (s1.byteSize() == s2.byteSize()) { assertEquals(s1.mismatch(s2), -1); // identical assertEquals(s2.mismatch(s1), -1); } else if (s1.byteSize() > s2.byteSize()) { assertEquals(s1.mismatch(s2), s2.byteSize()); // proper prefix assertEquals(s2.mismatch(s1), s2.byteSize()); } else { assert s1.byteSize() < s2.byteSize(); assertEquals(s1.mismatch(s2), s1.byteSize()); // proper prefix assertEquals(s2.mismatch(s1), s1.byteSize()); } } @Test(dataProvider = "slices") public void testDifferentValues(MemorySegment s1, MemorySegment s2) { out.format("testDifferentValues s1:%s, s2:%s\n", s1, s2); s1 = initializeSegment(s1); s2 = initializeSegment(s2); for (long i = s2.byteSize() -1 ; i >= 0; i--) { long expectedMismatchOffset = i; BYTE_HANDLE.set(s2.baseAddress().addOffset(i), (byte) 0xFF); if (s1.byteSize() == s2.byteSize()) { assertEquals(s1.mismatch(s2), expectedMismatchOffset); assertEquals(s2.mismatch(s1), expectedMismatchOffset); } else if (s1.byteSize() > s2.byteSize()) { assertEquals(s1.mismatch(s2), expectedMismatchOffset); assertEquals(s2.mismatch(s1), expectedMismatchOffset); } else { assert s1.byteSize() < s2.byteSize(); var off = Math.min(s1.byteSize(), expectedMismatchOffset); assertEquals(s1.mismatch(s2), off); // proper prefix assertEquals(s2.mismatch(s1), off); } } } Comment on lines +57 to +99 #### mcimadamore May 20, 2020 Collaborator How important is it that these tests operate on slices? Looking at the test code, it could have worked equally well if the input parameters were just two sizes, and then you did an explicit allocation (or maybe also receive a segment factory from the provider, so that you can test different segment kinds). #### ChrisHegarty May 20, 2020 Author Member Originally I had a version of the test that did compare specific segments, but it didn't scale well to different sizes and kinds ( we need to test both above and below the 8 byte threshold ). I removed a number of slice sizes, which greatly reduces the combinations. This may be enough, or I can certainly revisit the test's structure. approved these changes ### PaulSandoz left a comment Very nice. Improving the JDK implementation is good. In fact i think you could so that now. In ArraysSupport, with strip mining: public static int vectorizedMismatchLarge( Object a, long aOffset, Object b, long bOffset, long length, int log2ArrayIndexScale) Then you can specialize mismatch of memory segments for length threshold and type (the threshold only really makes sense for the first check, once you go over Integer.MAX_VALUE the cost of another round with a small length part is really low overall). ### openjdk bot commented May 20, 2020 • edited @ChrisHegarty This change now passes all automated pre-integration checks, type /integrate in a new comment to proceed. After integration, the commit message will be: Add MemorySegment::mismatch Reviewed-by: mcimadamore, psandoz If you would like to add a summary, use the /summary command. To credit additional contributors, use the /contributor command. To add additional solved issues, use the /issue command. 
Since the source branch of this PR was last updated there have been 143 commits pushed to the foreign-memaccess branch: f024ca7: Automatic merge of master into foreign-memaccess 046af8c: Automatic merge of jdk:master into master de37507: 8245619: Remove unused methods in UnixNativeDispatcher 113c48f: 8215401: Add isEmpty default method to CharSequence 7d330d3: 8245335: [TESTBUG] DeterministicDump.java fails with release JVM f72b574: 8245459: Add support for complex filter value var handle adaptation ea38873: 8239480: Support for CLDR version 37 b5b6ae3: 8245241: Incorrect locale provider preference is not logged e3be308: 8245260: Missing file header for test/hotspot/jtreg/containers/docker/TEST.properties 270674c: Merge ... and 133 more: https://git.openjdk.java.net/panama-foreign/compare/0dc2fed332fdfdbecbd4008c650d2f0ba77e1e6a...foreign-memaccess As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid automatic rebasing, please merge foreign-memaccess into your branch, and then specify the current head hash when integrating, like this: /integrate f024ca7de9d28bbaa27e57e8d4f547db5ea06002. As you do not have Committer status in this project, an existing Committer must agree to sponsor your change. Possible candidates are the reviewers of this PR (@mcimadamore, @PaulSandoz) but any other Committer may sponsor as well. ➡️ To flag this PR as ready for integration with the above commit message, type /integrate in a new comment. (Afterwards, your sponsor types /sponsor in a new comment to perform the integration). bot added the label May 20, 2020 ### mlbridge bot commented May 20, 2020 Mailing list message from Maurizio Cimadamore on panama-dev: On 20/05/2020 16:36, Paul Sandoz wrote: On Wed, 20 May 2020 14:06:48 GMT, Chris Hegarty wrote: Hi, As part of feedback on the Foreign Memory API (when experimenting with its usage internally in the JDK), a small number of potential usability enhancements could be made to the API. This is the fourth such, and last on my current todo list. This change proposes to add a new method: MemorySegment::mismatch The mismatch semantic is very useful for building equality and comparison logic on top of segments. I found that I needed such when modeling and comparing native socket address in the JDK implementation. It is possible to write your own, but requires a non-trivial amount of not-trivial code - to do it right! I also think that we can provide a more efficient implementation building on top of the JDK's internal mismatch support. I still need to do some perf testing and add a micro-benchmake ( Maurizio suggested possibly amending TestBulkOps ). There is also the question about possibly improving the JDK's internal implementation to work on long sizes (which could be done separately). For now, I just want to share the idea, along with the proposed specification and initial implementation. Comments welcome. Very nice. Improving the JDK implementation is good. In fact i think you could so that now. In ArraysSupport, with strip mining: public static int vectorizedMismatchLarge$$Object a\, long aOffset\, Object b\, long bOffset\, long length\, int log2ArrayIndexScale$$ Then you can specialize mismatch of memory segments for length threshold and type (the threshold only really makes sense for the first check, once you go over Integer.MAX_VALUE the cost of another round with a small length part is really low overall). 
To be clear - are you suggesting to add vectorizedMismatchLarge inside ArraySupport (but not add intrinsic support for that, for now) - and then to have the memory segment implementation to call either that or the standard version based on whether the segment is small or not? Maurizio ### mlbridge bot commented May 20, 2020 Mailing list message from Paul Sandoz on panama-dev: Yes, add the ?large? strip mining implementation to ArraysSupport. I think it unlikely this method ever needs to be made intrinsic. For large loop bounds it's important that we don?t starve the system (allow for safe points), so it?s easier to do that from Java than in the intrinsic stub or in C2. Then the memory segment code can use the those primitives based on thresholds. I suspect it can get away with just calling the large version, after a check for a small threshold value and a scalar loop. Paul. On May 20, 2020, at 8:53 AM, Maurizio Cimadamore wrote: On 20/05/2020 16:36, Paul Sandoz wrote: On Wed, 20 May 2020 14:06:48 GMT, Chris Hegarty wrote: Hi, As part of feedback on the Foreign Memory API (when experimenting with its usage internally in the JDK), a small number of potential usability enhancements could be made to the API. This is the fourth such, and last on my current todo list. This change proposes to add a new method: MemorySegment::mismatch The mismatch semantic is very useful for building equality and comparison logic on top of segments. I found that I needed such when modeling and comparing native socket address in the JDK implementation. It is possible to write your own, but requires a non-trivial amount of not-trivial code - to do it right! I also think that we can provide a more efficient implementation building on top of the JDK's internal mismatch support. I still need to do some perf testing and add a micro-benchmake ( Maurizio suggested possibly amending TestBulkOps ). There is also the question about possibly improving the JDK's internal implementation to work on long sizes (which could be done separately). For now, I just want to share the idea, along with the proposed specification and initial implementation. Comments welcome. Very nice. Improving the JDK implementation is good. In fact i think you could so that now. In ArraysSupport, with strip mining: public static int vectorizedMismatchLarge$$Object a\, long aOffset\, Object b\, long bOffset\, long length\, int log2ArrayIndexScale$$ Then you can specialize mismatch of memory segments for length threshold and type (the threshold only really makes sense for the first check, once you go over Integer.MAX_VALUE the cost of another round with a small length part is really low overall). To be clear - are you suggesting to add vectorizedMismatchLarge inside ArraySupport (but not add intrinsic support for that, for now) - and then to have the memory segment implementation to call either that or the standard version based on whether the segment is small or not? Maurizio Move implementation into vectorizedMismatchLarge, and address other r… c5414aa …eview comments. reviewed Show resolved Hide resolved Merge remote-tracking branch 'origin/foreign-memaccess' into mismatch add1fe0 Integrate Paul's review comment caf136f approved these changes approved these changes Looks good return i; } } return thisSize != thatSize ? length : -1; #### mcimadamore May 20, 2020 Collaborator I guess I'm a bit confused - shouldn't this method return (as per javadoc), -1 if there's no mismatch? 
In this code path we found no mismatch, and yet, if sizes are different we return length, suggesting there's a mismatch. I now realize that mismatch is taken quite literally - e.g. no mismatch really means the two things are identical in contents and size --- which is, I realize, what Arrays::mismatch also does. IMHO the javadoc of the various mismatch routines could use more clarity around what a mismatch is. But maybe that's something for another day. ### mlbridge bot commented May 20, 2020 Mailing list message from Paul Sandoz on panama-dev: If the two segments are are different lengths and one segment is a proper prefix of the other (all elements are equal) there is a mismatch in the length. It?s described more formally in the JavaDoc of the Arrays.mismatch methods. e.g * If the two arrays share a common prefix then the returned index is the * length of the common prefix and it follows that there is a mismatch * between the two elements at that index within the respective arrays. * If one array is a proper prefix of the other then the returned index is * the length of the smaller array and it follows that the index is only * valid for the larger array. * Otherwise, there is no mismatch. Paul. ### ChrisHegarty commented May 21, 2020 The proposed MemorySegment::mismatch spec wording is aligned with similar in the buffer classes and elsewhere. If it needs clarification then we should probably do so consistently. As @mcimadamore said, "that's something for another day" ;-) ### ChrisHegarty commented May 21, 2020 /integrate bot added the label May 21, 2020 ### openjdk bot commented May 21, 2020 Add null test scenario d525e57 bot removed the label May 21, 2020 approved these changes Looks good! ### openjdk bot commented May 21, 2020 @mcimadamore The PR has been updated since the change author (@ChrisHegarty) issued the integrate` command - the author must perform this command again. ### ChrisHegarty commented May 22, 2020 /integrate ### openjdk bot commented May 22, 2020 bot added the label May 22, 2020 ### mcimadamore commented May 25, 2020 bot closed this May 25, 2020 bot added and removed labels May 25, 2020
# vspace, hspace don't work in textblock

I use \vspace and \hspace in textpos (textblock*), but they don't work. My minimal example:

\documentclass[12pt]{article}
\usepackage[a4paper]{geometry}
\usepackage[poster]{tcolorbox}
\usepackage[absolute]{textpos}
\pagestyle{empty}
\begin{document}
\begin{textblock*}{5cm}(4.0cm,2.0cm)%
\begin{center}
\small X
\vspace*{0.55cm}
\Huge Y
\vspace*{0.25cm}
\hspace*{0.45cm}
\small Z
\end{center}
\end{textblock*}
\end{document}

The problem is that you essentially have one paragraph with all of your spaces in it (which is why you're getting the result that you're seeing). If you include paragraph breaks along with your \vspace commands, that should give you something closer to what you're expecting.
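For example (a sketch only, adapted from the code in the question rather than taken verbatim from the original answer), ending each line with \par so the vertical spaces are issued between paragraphs makes the \vspace* commands take effect:

```latex
\documentclass[12pt]{article}
\usepackage[a4paper]{geometry}
\usepackage[poster]{tcolorbox}
\usepackage[absolute]{textpos}
\pagestyle{empty}
\begin{document}
\begin{textblock*}{5cm}(4.0cm,2.0cm)%
\begin{center}
  {\small X\par}              % \par ends the paragraph, so the following \vspace* applies
  \vspace*{0.55cm}
  {\Huge Y\par}
  \vspace*{0.25cm}
  {\hspace*{0.45cm}\small Z\par}
\end{center}
\end{textblock*}
\end{document}
```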
dim(v) + dim(orthogonal complement of v) = n Video transcript Let's say I've got some subspace of Rn called V. So V is a subspace of Rn. And let's say that I know its basis. Let's say the set. So I have a bunch of-- let me make that bracket a little nicer-- so let's say the set of the vectors v1, v2, all the way to vk, let's say that this is equal to-- this is the basis for V. And just as a reminder, that means that V's vectors both span V and they're linearly independent. You can kind of see there's a minimum set of vectors in Rn that span V. So, if I were to ask you what the dimension of V is, that's just the number of vectors you have in your basis for the subspace. So we have 1, 2 and we count to k vectors. So it is equal to k. Now let's think about, if we can somehow figure out what the dimension of the orthogonal complement of V can be. And to do that, let's construct a matrix. Let's construct a matrix whose column vectors are these basis vectors. So let's construct a matrix A, and let's say it looks like this. First column is v1. This first basis vector right there. v2 is the second one, and then you go all the way to vk. Just to make sure we remember the dimensions, we have k of these vectors, so we're going to have k columns. And then how many rows are we going to have? Well, as a member of Rn, so these are all going to have n entries in each of these vectors, there's going to be an n-- we're going to have n rows and k columns. It's an n by k matrix. Now, what's another way of expressing the subspace V? Well, the basis for V is-- or V is spanned by these basis vectors, which is the columns of these. So if I talk about the span-- so let me write out this-- V is equal to the span of these guys, v1, v2, all the way to vk. And that's just the same thing as the column space of A. Right? These are the column vectors, and the span of them, that's equal to the column space of A. Now, I said a little while ago, we want to somehow relate to the orthogonal complement of V. Well, what's the orthogonal complement of the column space of A? The orthogonal complement of the column space of A, I showed you-- I think it was two or three videos ago-- that the column space of A's orthogonal complement is equal to-- you could either view it as the null space of A transpose, or another way you call it is the left null space of A. This is equivalent to the orthogonal complement of the column space of A, which is also going to be equal to, which is also since this piece right here is the same thing as V, you take it's orthogonal complement, that's the same thing as V's orthogonal complement. So if we want to figure out there orthogonal complement of-- if we want to figure out the dimension-- if we want to figure out the dimensional of the orthogonal complement of V, we just need to figure out the dimension of the left null space of A, or the null space of A's transpose. Let me write that down. So the dimension-- get you tongue-tied sometimes-- the dimension of the orthogonal complement of V is going to be equal do the dimension of A transpose. Or another way to think of it is-- sorry, not just the dimension of A transpose, the dimension of the null space of A transpose. And if you have a good memory, I don't use the word a lot, this thing is the nullity-- this is the nullity of A transpose. The dimension of your null space is nullity, the dimension of your column space is your rank. Now let's see what we can do here. So let's just take A transpose, so you can just imagine A transpose for a second. 
I can just even draw it out. It's going to be a k by n matrix that looks like this. These columns are going to turn into rows. This is going to be v1 transpose, v2 transpose, all the way down to vk transpose vectors. These are all now row vectors. So we know one thing. We know one relationship between the rank and nullity of any matrix. We know that they're equal to the number of columns we have. We know that the rank of A transpose plus the nullity of A transpose is equal to the number of columns of A transpose. We have n columns. Each of these have n entries. It is equal to n. We saw this a while ago. And if you want just a bit of a reminder of where that comes from, when you take a-- if I wrote A transpose as a bunch of column vectors, which I can, or maybe let me take some other vector B, because I want to just remind you where this, why this made sense. If I take some vector B here, and it has got a bunch of column vectors, b1, b2, all the way to bn, and I put it into reduced row echelon form, you're going to have some pivot columns and some non-pivot columns. So let's say this is a pivot column. You know, I got a 1 and a bunch of 0's, let's say that this is one of them, and then let's say I got one other one that's out, and it would be a 0 there, it's a 1 down there, and everything else is a non-pivot column. I showed you in the last video that your basis for your column space is the number of pivot columns you have. So these guys are pivot columns. The corresponding column vectors form a basis for your column space. I showed you that in the last video. And so, if you want to know the dimension of your column space, you just have to count these things. You just count these things. This was equal to the number of, well, for this B's case, the rank of B is just equal to the number of pivot columns I have. Now the nullity is the dimension of your null space. We've done multiple problems where we found the null space of matrices. And every time, the dimension, it's a bit obvious, and I actually showed you this proof, it's related to the number of free columns you have, or non-pivot columns. So, if you have no pivot columns, then you are -- if all of your columns are pivot columns, and none of them have free variables or are associated with free variables, then you're null space is going to be trivial. It's just going to have the 0 vector. But the more free variables you have, the more dimensionality your null space has. So the free columns correspond to the null space, and they form actually a basis for your null space. And because of that, the basis for your null space vectors, plus the basis for your column space, is equal to the total number of columns you have. I showed that to you in the past, but it's always good to remind ourselves where things come from. But this was just a bit of a side. I did this with a separate vector B. Just to remind ourselves where this thing right here came from. Now, in the last video, I showed you that the rank of A transpose is the same thing is the rank of A. This is equal to, this part right here, is the same thing as the rank of A. I showed you that in the last video. When you transpose a matrix, it doesn't change its rank, or it doesn't change the dimension of its column space. So we can rewrite this statement, right here, as the rank of A plus the nullity of A transpose is equal to n, and the rank of A is the same thing as the dimension of the column space of A. 
And then the nullity of A transpose is the same thing as the dimension of the null space of A transpose-- that's just the definition of nullity-- they're going to be equal to n. Now what's the dimension-- what's the column space of A? The column space of A, that's what's spanned by these vectors right here, which were the basis for V. So this is the same thing as the dimension of V. The column space of A is the same thing as the dimension of my subspace V that I started this video with. And what is the null space of A transpose? The null space of A transpose, we saw already, that's the orthogonal complement of V. So I could write this as plus the dimension of the orthogonal complement of V is equal to n. And that's the result we wanted. If V is a subspace of Rn, that n is the same thing as that n, then the dimension of V plus the dimension of the orthogonal complement of V is going to be equal to n.
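Condensing the transcript's argument into symbols (this summary is mine, not part of the original video), the chain of equalities being used is:

```latex
\[
\operatorname{rank}(A^{T}) + \dim N(A^{T}) = n
\qquad \text{(rank--nullity for the } k\times n \text{ matrix } A^{T}\text{)}
\]
\[
\operatorname{rank}(A^{T}) = \operatorname{rank}(A) = \dim C(A) = \dim V,
\qquad
N(A^{T}) = C(A)^{\perp} = V^{\perp}
\]
\[
\therefore\quad \dim V + \dim V^{\perp} = n
\]
```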
# How do you write the first five terms of the sequence a_n=n/(n+2)? Nov 30, 2017 The first 5 terms are: $\left\{\frac{1}{3} , \frac{2}{4} , \frac{3}{5} , \frac{4}{6} , \frac{5}{7}\right\}$. (I didn't reduce so the pattern would be more clear.) #### Explanation: I'm assuming that we start at $n = 1$ for this. Substitute $n = 1$ to get ${a}_{1} = \frac{1}{3}$. Substitute $n = 2$ to get ${a}_{2} = \frac{2}{4} = \frac{1}{2}$. Substitute $n = 3$ to get ${a}_{3} = \frac{3}{5}$. Substitute $n = 4$ to get ${a}_{4} = \frac{4}{6} = \frac{2}{3}$. Substitute $n = 5$ to get ${a}_{5} = \frac{5}{7}$. So the first 5 terms are: $\left\{\frac{1}{3} , \frac{2}{4} , \frac{3}{5} , \frac{4}{6} , \frac{5}{7}\right\}$.
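If you want to check the arithmetic or generate more terms, a short sketch (mine, not part of the original answer) using Python's fractions module prints both the unreduced pattern and the reduced value:

```python
from fractions import Fraction

for n in range(1, 6):
    # show the raw pattern n/(n+2) next to its reduced form
    print(f"a_{n} = {n}/{n + 2} = {Fraction(n, n + 2)}")
```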
# Compact object (mathematics) In mathematics, compact objects, also referred to as finitely presented objects, or objects of finite presentation, are objects in a category satisfying a certain finiteness condition. ## Definition An object X in a category C which admits all filtered colimits (also known as direct limits) is called compact if the functor ${\displaystyle \operatorname {Hom} _{C}(X,\cdot ):C\to \mathrm {Sets} ,Y\mapsto \operatorname {Hom} _{C}(X,Y)}$ commutes with filtered colimits, i.e., if the natural map ${\displaystyle \operatorname {colim} \operatorname {Hom} _{C}(X,Y_{i})\to \operatorname {Hom} _{C}(X,\operatorname {colim} _{i}Y_{i})}$ is a bijection for any filtered system of objects ${\displaystyle Y_{i}}$ in C.[1] Since elements in the filtered colimit at the left are represented by maps ${\displaystyle X\to Y_{i}}$, for some i, the surjectivity of the above map amounts to requiring that a map ${\displaystyle X\to \operatorname {colim} _{i}Y_{i}}$ factors over some ${\displaystyle Y_{i}}$. The terminology is motivated by an example arising from topology mentioned below. Several authors also use a terminology which is more closely related to algebraic categories: Adámek & Rosický (1994) use the terminology finitely presented object instead of compact object. Kashiwara & Schapira (2006) call these the objects of finite presentation. ### Compactness in ∞-categories The same definition also applies if C is an ∞-category, provided that the above set of morphisms gets replaced by the mapping space in C (and the filtered colimits are understood in the ∞-categorical sense, sometimes also referred to as filtered homotopy colimits). ### Compactness in triangulated categories For a triangulated category C which admits all coproducts, Neeman (2001) defines an object to be compact if ${\displaystyle \operatorname {Hom} _{C}(X,\cdot ):C\to \mathrm {Ab} ,Y\mapsto \operatorname {Hom} _{C}(X,Y)}$ commutes with coproducts. The relation of this notion and the above is as follows: suppose C arises as the homotopy category of a stable ∞-category admitting all filtered colimits. (This condition is widely satisfied, but not automatic.) Then an object in C is compact in Neeman's sense if and only if it is compact in the ∞-categorical sense. The reason is that in a stable ∞-category, ${\displaystyle Hom_{C}(X,-)}$ always commutes with finite colimits since these are limits. Then, one uses a presentation of filtered colimits as a coequalizer (which is a finite colimit) of an infinite coproduct. ## Examples The compact objects in the category of sets are precisely the finite sets. For a ring R, the compact objects in the category of R-modules are precisely the finitely presented R-modules. In particular, if R is a field, then compact objects are finite-dimensional vector spaces. Similar results hold for any category of algebraic structures given by operations on a set obeying equational laws. Such categories, called varieties, can be studied systematically using Lawvere theories. For any Lawvere theory T, there is a category Mod(T) of models of T, and the compact objects in Mod(T) are precisely the finitely presented models. For example: suppose T is the theory of groups. Then Mod(T) is the category of groups, and the compact objects in Mod(T) are the finitely presented groups. The compact objects in the derived category R-modules are precisely the perfect complexes. Compact topological spaces are not the compact objects in the category of topological spaces. 
Instead these are precisely the finite sets endowed with the discrete topology.[2] The link between compactness in topology and the above categorical notion of compactness is as follows: for a fixed topological space X, there is the category O(X) whose objects are the open subsets of X (and inclusions as morphisms). Then, X is a compact topological space if and only if X is compact as an object in O(X). If C is any category, the category of presheaves ${\displaystyle P(C)}$ (i.e., the category of functors from ${\displaystyle C^{op}}$ to sets) has all colimits. The original category C is connected to P(C) by the Yoneda embedding ${\displaystyle j:C\to P(C),X\mapsto j(X):=\operatorname {Hom} (-,X)}$. For any object X of C, j(X) is a compact object (of P(C)). In a similar vein, any category C can be regarded as a full subcategory of the category Ind(C) of ind-objects in C. Regarded as an object of this larger category, any object of C is compact. In fact, the compact objects of Ind(C) are precisely the objects of C (or, more precisely, their images in Ind(C)). ## Compactly generated categories In most categories, the condition of being compact is quite strong, so that most objects are not compact. A category C is compactly generated if any object can be expressed as a filtered colimit of compact objects in C. For example, any vector space V is the filtered colimit of its finite-dimensional (i.e., compact) subspaces. Hence the category of vector spaces (over a fixed field) is compactly generated. Categories which are compactly generated and also admit all colimits are called accessible categories. ## Relation to dualizable objects For categories C with a well-behaved tensor product (more formally, C is required to be a monoidal category), there is another condition imposing some kind of finiteness, namely the condition that an object is dualizable. If the monoidal unit in C is compact, then any dualizable object is compact as well. For example, R is compact as an R-module, so this observation can be applied. Indeed, in the category of R-modules the dualizable objects are the finitely presented projective modules, which are in particular compact. In the context of ∞-categories, dualizable and compact objects tend to be more closely linked, for example in the ∞-category of complexes of R-modules, compact and dualizable objects agree. This and more general example where dualizable and compact objects agree are discussed in Ben-Zvi, Francis & Nadler (2010). ## References 1. Lurie (2009, §5.3.4) 2. Adámek & Rosický (1994, Chapter 1.A) • Adámek, Jiří; Rosický, Jiří (1994), Locally presentable and accessible categories, Cambridge University Press, doi:10.1017/CBO9780511600579, ISBN 0-521-42261-2, MR 1294136 • Ben-Zvi, David; Francis, John; Nadler, David (2010), "Integral transforms and Drinfeld centers in derived algebraic geometry", Journal of the American Mathematical Society, 23 (4): 909–966, arXiv:0805.0157, doi:10.1090/S0894-0347-10-00669-7, MR 2669705
# Perpetual Option Paying Chooser Option

A perpetual option solves the ODE $$rSV_S+\frac{1}{2}\sigma^2S^2V_{SS}-rV=0$$ The general solution is $$V(S)=aS+bS^{\gamma}$$ where $$\gamma=-\frac{2r}{\sigma^2}<0$$. For an American put option with payoff $$K-S$$, we find $$a=0$$ because we require $$V(S)\to0$$ as $$S\to\infty$$. We find the free boundary (exercise point, $$S^*$$) and the remaining free parameter ($$b$$) by value-matching and smooth-pasting of $$V(S)$$ with the payoff $$K-S$$ at $$S=S^*$$, that is \begin{align} b(S^*)^{\gamma} &= K-S^* \\ b\gamma(S^*)^{\gamma-1} &= -1 \end{align} The option value is then \begin{align} V(S)=\begin{cases} K-S &\text{if } S\leq S^* \\ bS^{\gamma} &\text{if } S>S^* \end{cases} \end{align}

Question: What happens if the option pays $$\max(K_1-S,K_2-2S)=K_2-2S+\max(S+K_1-K_2,0)$$ instead of $$K-S$$? The payoff now resembles a chooser option (between two puts). We still require $$a=0$$ such that $$V(S)\to0$$ as $$S\to\infty$$. But how to proceed? I don't think it's as simple as finding $$b$$ and $$S^*$$ by solving the value-matching and smooth-pasting conditions and setting \begin{align} V(S)=\begin{cases} \max(K_1-S,K_2-2S) &\text{if } S\leq S^* \\ bS^{\gamma} &\text{if } S>S^* \end{cases} \end{align}
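For the standard perpetual put above, the two conditions have the closed-form solution $$S^*=\gamma K/(\gamma-1)$$, which is easy to verify numerically. The sketch below uses made-up parameter values and only checks the value-matching and smooth-pasting equations for the plain put; it does not answer the chooser-payoff question:

```python
import numpy as np

# Illustrative parameters (not from the question)
r, sigma, K = 0.05, 0.2, 100.0
gamma = -2 * r / sigma**2            # = -2.5

# Closed-form free boundary obtained by dividing the two conditions:
#   b*(S*)^gamma = K - S*    and    b*gamma*(S*)^(gamma-1) = -1
S_star = gamma * K / (gamma - 1)     # ~71.43
b = (K - S_star) / S_star**gamma

print(S_star)
print(np.isclose(b * S_star**gamma, K - S_star))                 # value matching
print(np.isclose(b * gamma * S_star**(gamma - 1), -1.0))         # smooth pasting
```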
# Remove "-\TP@blockbodyoffsetx" from the width of the block-parbox Issue #38 new Torbjørn T. created an issue In the definition of \block, the text is set in a \parbox of width \TP@blockbodywidth-2\TP@blockbodyinnersep-\TP@blockbodyoffsetx What is the last term doing there? It doesn't make sense that an offset for the whole body should reduce the width of the text block. Exaggerated MWE demonstrating that this is weird: \documentclass[a2paper]{tikzposter} \usepackage{lipsum} \begin{document} \maketitle \begin{columns} \column{0.5}
# LaTeX Accents

I always forget how to add accents in LaTeX within the standard text. So I have placed some of them here, as much for myself as anyone else.

| LaTeX command | Sample | Description |
| --- | --- | --- |
| \`{o} | ò | grave accent |
| \'{o} | ó | acute accent |
| \^{o} | ô | circumflex |
| \"{o} | ö | umlaut or diaeresis |
| \H{o} | ő | long Hungarian umlaut |
| \~{o} | õ | tilde |
| \c{c} | ç | cedilla |
| \k{a} | ą | ogonek |
| \l | ł | l with stroke |
| \={o} | ō | macron accent (a bar over the letter) |
| \b{o} | o | bar under the letter |
| \.{o} | ȯ | dot over the letter |
| \d{u} |  | dot under the letter |
| \r{a} | å | ring over the letter (for å there is also the special command \aa) |
| \u{o} | ŏ | breve over the letter |
| \v{s} | š | caron/hacek ("v") over the letter |

Reference: Wikibooks

# Bilston Community College loses international student license

Bilston Community College, a privately-run college in the Black Country, has lost its "Tier 4 Visa" license. The college claims that about 60% of its 200 or so students are international. The rules for this level of visa license are quite strict; for example, a college has to ensure students attend classes regularly and that its teaching is of sufficient quality. There are other rules that must be adhered to.

Where we find evidence that sponsors are not fulfilling their duties we will suspend or remove their license. We can confirm that Bilston Community College had its Tier 4 license revoked on 26 October, with immediate effect.
The UK Border Agency

References: Bilston Community College; BBC News

# A new post-16 mathematics curriculum focused on real problems

Mathematics in Education and Industry (MEI) has been asked by Michael Gove (Secretary of State for Education) to develop a mathematics course aimed at sixth formers that focuses on real-world problems.

(Image by Sweetness46)

As compared to other countries, the UK has relatively low participation in mathematics past 16. The idea is for students who would be unlikely to study A-level mathematics to continue to study mathematics past GCSE alongside other subjects. Professor Timothy Gowers, of Cambridge University, wrote in his blog about teaching mathematics to non-mathematicians with the focus on real problems. Many of these ideas will be incorporated into the MEI syllabus.

(Image: Professor Timothy Gowers)

Professor Tim Gowers's brilliant blog has sparked huge interest in how we could radically improve maths teaching. I am delighted that MEI is trying to develop the Gowers blog into a real course that could help thousands of students understand the power of mathematical reasoning and problem-solving skills.
Michael Gove

A sample problem: A doctor tests a patient for a serious disease that one in ten thousand people have. The test is fairly reliable: if you have the disease, it gives a positive result, whereas if you don't, then it gives a negative result in 99% of cases. So the only problem with it is that it occasionally gives a false positive. The patient tests positive. How worrying is this?

Reference: Expanding post-16 participation in mathematics: Developing a curriculum to promote mathematical problem solving, MEI press release (opens PDF)
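The sample problem above is a classic base-rate question; a quick Bayes'-theorem calculation (my own working, not part of the MEI press release) shows why the positive result is less alarming than it sounds:

```python
# P(disease | positive) via Bayes' theorem for the sample problem
prevalence = 1 / 10_000     # one in ten thousand people have the disease
sensitivity = 1.0           # the test is positive whenever you have the disease
specificity = 0.99          # 99% of healthy people get a (correct) negative result

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"{p_disease_given_positive:.2%}")   # roughly 1% -- most positives are false alarms
```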
# Free Online Test: SIMPLIFICATION Questions and Answers, Quiz 5

Directions (1–5): What will come in place of the question mark (?) in the following questions?

1. $(5 \times 7)\%$ of $(34 \times 55) + 456.60 = 699.1 + {?}$
   Options: 412, 422, 418, 428
2. $14 \times 627 \div \sqrt{1089} = (?)^3 + 141$
   Options: $5\sqrt{5}$, $(125)^3$, 25, 5
3. $2\frac{1.5}{5} + 2\frac{1}{6} - 1\frac{3.5}{15} = \frac{(?)^{1/3}}{4} + 1\frac{7}{30}$
   Options: 2, 8, 512, 324
4. $(80 \times 0.40)^3 \div (40 \times 1.6)^3 \times (128)^3 = (2)^{?+7}$
   Options: 25, 11, 12, 18
5. $(\sqrt{7} + 11)^2 = (?)^{1/3} + 2\sqrt{847} + 122$
   Options: $36 + 44\sqrt{7}$, 6, 216, 36

Directions (6–10): What will come in place of the question mark (?) in the following questions?

6. $\frac{1}{6}$ of $92\%$ of $1\frac{1}{23}$ of $650 = 85 + {?}$
   Options: 18, 21, 19, 28
7. $92 \times 576 \div \sqrt{1296} = (?)^3 + \sqrt{49}$
   Options: 3, $(9)^2$, 9, 27
8. $3\frac{1}{4} + 2\frac{1}{2} - 1\frac{5}{6} = \frac{(?)^2}{10} + 1\frac{5}{12}$
   Options: 25, $\sqrt{5}$, 15, 5
9. $(\sqrt{8}\times\sqrt{8})^{1/2} + (9)^{1/2} = (?)^3 + \sqrt{8} - 340$
10. $(15 \times 0.40)^4 \div (1080 \div 30)^4 \times (27 \div 8)^4 = (3 \times 2)^{?+5}$
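As a sanity check on the first two questions (my own working, not part of the quiz), the arithmetic can be verified directly:

```python
import math

# Q1: (5 x 7)% of (34 x 55) + 456.60 = 699.1 + ?
q1 = 0.35 * (34 * 55) + 456.60 - 699.1
print(round(q1, 2))                    # 412.0  -> option 412

# Q2: 14 x 627 / sqrt(1089) = ?^3 + 141
lhs = 14 * 627 / math.sqrt(1089)
print(round((lhs - 141) ** (1 / 3)))   # 5      -> option 5
```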
# Understanding electrical consumption I have problems understanding the difference between electrical power and consumption. Can you help me understand what electrical energy consumption is? Let's say that I have a bulb, on which is written 100 W. How much does this bulb consume in 1 hour? Can it have a power lower than 100 W? Can it have a power greater than 100 W? I would like to know what is the relation between electrical power and electrical consumption. I'm very sorry for the big disambiguation from my head. But I don’t know who else to ask. I hope that you will not consider my question as stupid, even if is so. • The monitor and laptop questions should be in a separate question. But since that isn't about electronics design, I removed them rather than closing the question. – Brian Carlton Oct 22 '12 at 17:40 • Actually it's too bad the monitor and laptop questions were removed, as devices with switching power supplies illustrate a counter example where unlike with the lightbulb the power consumption probably does not depend very much on the line voltage, but only on the needs of the load. – Chris Stratton Oct 22 '12 at 18:03 • use this.. all ur questions will be solved.. bijlibachao.com/Electricity-Bill/… – user29116 Sep 16 '13 at 13:38 The issue you're confused about seems to be the difference between power and energy. Energy is how much work you can do. Common units are joules or watt-hours. Power is how fast you do work. It's a rate of change. Common units are watts or horsepower. Horsepower is probably an instructive unit to consider. Say you wanted to move a large pile of straw. Whether it's moved by a horse or a housecat doesn't affect the amount of work done. But the horse does it faster, because it's a more powerful animal. For the purposes of discussing grid electricity consumption, watts (W) and kilowatt-hours (kWh) are the most common units used. To know how much energy is consumed, multiply the power by the time. 100 W x 1 hour is 100 watt-hours, or .1 kWh. In short, the relationship between power and consumption is time. A 100W bulb consumes 100W assuming the voltage across it is what's specified on the package, which is usually 120V in my experience. If the voltage at your socket is lower, the bulb will consume less power. It's approximately a fixed resistance, so the power consumed is $$P=\frac{V^2}{R}$$ As an aside, remember energy conservation. If something consumes 100W, that energy is being converted to some other form. Either it gets stored (potential energy), it's used (light, motion, etc.), or it's wasted as heat. For an incandescent bulb, ~90% of the power consumed is converted to heat. So a 100W incandescent bulb consumes 100W, but only outputs 10W of light. It gets hot because the other 90W is being wasted. Which is why CFL's run so much cooler and consume less power for the same light output. The others already told you that energy is power $\times$ time. And that 1 kW during 1 hour is 1 kWh. A year has 8765 hours, then a device which is in standby most of the time will consume 8.76 kWh in a year if it consumes 1 W in standby. The kWh is a practical unit for electrical power, but the SI unit is the joule ("J"), which is much smaller. A joule is the energy consumed by a 1 W device in 1 second. $1 J = 1 W \cdot s$ So a kWh is 1000 W $\times$ 3600 s = 3.6 MJ (megajoules), so that's not so practical, especially when we talk about industrial machines consuming tens of kWh; you would soon be talking about GJs (gigajoules). 
That's why the kWh is accepted as an alternative unit. (An older unit for energy is the calorie, where 1 cal $\approx$ 4.2 J. An odd number, but it's the energy needed to heat up 1 g of water by 1 °C (from 14.5 °C to 15.5 °C). You can do that by adding 4.2 W during 1 second, or 42 W during 100 ms.)

The power rating on the light bulb is the rate at which it consumes energy. How much energy is consumed is a function of how long it is turned on. Energy = Power × time. If it is on for one hour, it will consume 100 watt-hours. This is the unit that the electric company bills.

When first starting to learn physics (or most any topic), here's a mental trick: always cross out the complicated terms, and replace them with their 'unpacked' definitions. Simple, right? This greatly helps in understanding flow rates. Don't use single units; instead, give the quantity-per-second names. For example, the word "Current" means charge-flow. So (at least in your early years) make it your habit to never say current. Always change it to "charge flow." And never say Amperes, because the ampere is really just the charge-flow rate in coulombs-per-second. The answer to your question may become obvious if you stop saying the word "Power." If the word has a fuzzy definition in your mind (or a wrong definition), then your question may answer itself once you habitually insert the correct meaning into all sentences. "Power" means energy-flow. It's the rate of energy transfer, or energy propagation across a distribution network (including electrical, and drive belts, and drive shafts). Don't say Power, say energy-flow. And similarly, stop saying Watts. Watts are really just the rate of energy-flow in joules-per-second. Let's say that I have a bulb, on which is written 100 joules/second. How much does this bulb consume in 1 hour? See what I did there? One hour is 60 min or 3600 seconds. So the bulb consumes 3600 × 100 = 360,000 joules of energy per hour.

• +1 This is a really good answer - I like how you almost converted the question to a tautology :) – Morten Jensen Jul 3 '18 at 15:39

1. 100 Wh (0.1 kWh). Energy is power × time, so W × h = Wh.
2. The 100 W is assuming the usual household voltage. In the US that is a fixed 120 V. This only varies slightly, so the power may be a bit above or below 100 W.
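Tying the numbers together, here is a small sketch of the power-versus-energy calculation for the 100 W bulb (my own example; the electricity price is an assumed figure, not from the thread):

```python
power_w = 100          # rated power of the bulb (joules per second)
hours_on = 1

energy_wh = power_w * hours_on              # 100 Wh
energy_kwh = energy_wh / 1000               # 0.1 kWh -- what the utility bills
energy_joules = power_w * hours_on * 3600   # 360,000 J

price_per_kwh = 0.15   # assumed tariff, in currency units per kWh
print(energy_kwh, energy_joules, round(energy_kwh * price_per_kwh, 3))
# 0.1  360000  0.015
```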
# Transactions

A transaction is a way for an application to group several reads and writes together into a logical unit. Conceptually, all the reads and writes in a transaction are executed as one operation: either the entire transaction succeeds (commit) or it fails (abort, rollback). If it fails, the application can safely retry. With transactions, error handling becomes much simpler for an application, because it doesn't need to worry about partial failure—i.e., the case where some operations succeed and some fail (for whatever reason).

# ACID

A transaction is a single logical unit of work which accesses and possibly modifies the contents of a database. Transactions access data using read and write operations. In computer science, ACID (atomicity, consistency, isolation, durability) is a set of properties of database transactions intended to guarantee data validity despite errors, power failures, and other mishaps. In the context of databases, a sequence of database operations that satisfies the ACID properties (which can be perceived as a single logical operation on the data) is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction.

# Atomicity

Transactions are often composed of multiple statements. Atomicity guarantees that each transaction is treated as a single "unit", which either succeeds completely or fails completely: if any of the statements constituting a transaction fails to complete, the entire transaction fails and the database is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors, and crashes. A guarantee of atomicity prevents updates to the database occurring only partially, which can cause greater problems than rejecting the whole series outright. As a consequence, the transaction cannot be observed to be in progress by another database client. At one moment in time, it has not yet happened, and at the next it has already occurred in whole (or nothing happened if the transaction was cancelled in progress). An example of an atomic transaction is a monetary transfer from bank account A to account B. It consists of two operations, withdrawing the money from account A and saving it to account B. Performing these operations in an atomic transaction ensures that the database remains in a consistent state, that is, money is neither debited nor credited if either of those two operations fails.

## Example

Consider the following transaction T consisting of O1 and O2: transfer of 100 from account X to account Y. If the transaction fails after completion of O1 but before completion of O2 (say, after write(X) but before write(Y)), then the amount has been deducted from X but not added to Y. This results in an inconsistent database state. Therefore, the transaction must be executed in its entirety in order to ensure correctness of the database state.

# Consistency

Consistency ensures that a transaction can only bring the database from one valid state to another, maintaining database invariants: any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof. This prevents database corruption by an illegal transaction, but does not guarantee that a transaction is correct. For example, suppose there is an integrity constraint that requires that the value in A and the value in B must sum to 100.
This integrity constraint should always be valid before and after a transaction is executed. Consistency is checked after each transaction, it is known that A + B = 100 before the transaction begins. If the transaction removes 10 from A successfully, atomicity will be achieved. However, a validation check will show that A + B = 90, which is inconsistent with the rules of the database. The entire transaction must be cancelled and the affected rows rolled back to their pre-transaction state. # Isolation Transactions are often executed concurrently (e.g., reading and writing to multiple tables at the same time). Isolation ensures that concurrent execution of transactions leaves the database in the same state that would have been obtained if the transactions were executed sequentially. In other words, isolation ensures that multiple transactions can occur concurrently without leading to inconsistency of database state. Isolation is the main goal of concurrency control; depending on the method used, the effects of an incomplete transaction might not even be visible to other transactions. Two phase locking is often applied to guarantee full isolation. The classic database textbooks formalize isolation as serializability, which means that each transaction can pretend that it is the only transaction running on the entire database. The database ensures that when the transactions have committed, the result is the same as if they had run serially (one after another), even though in reality they may have run concurrently. However, in practice, serializable isolation is rarely used, because it carries a performance penalty. Some popular databases, such as Oracle 11g, don’t even implement it. In Oracle there is an isolation level called “serializable,” but it actually implements something called snapshot isolation, which is a weaker guarantee than serializability. ## Example of using Isolation User 2 experiences an anomaly: the mailbox listing shows an unread message, but the counter shows zero unread messages because the counter increment has not yet happened. Isolation would have prevented this issue by ensuring that user 2 sees either both the inserted email and the updated counter, or neither, but not an inconsistent halfway point. # Durability Durability guarantees that once a transaction has been committed, it will remain committed even in the case of a hardware fault or the database crashes (e.g., power outage or crash). This usually means that completed transactions (or their effects) are recorded in a persisted transaction log. If our system is suddenly affected by a system crash or a power outage, then all unfinished committed transactions may be replayed. Consider a transaction that transfers 10 from A to B. First it removes 10 from A, then it adds 10 to B. At this point, the user is told the transaction was a success, however the changes are still queued in the disk buffer waiting to be committed to disk. Power fails and the changes are lost. The user assumes (understandably) that the changes persist. # Achieve ACID ## Locking Many databases rely upon locking to provide ACID capabilities. Locking means that the transaction marks the data that it accesses so that the DBMS knows not to allow other transactions to modify it until the first transaction succeeds or fails. The lock must always be acquired before processing data, including data that is read but not modified. 
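To make the atomicity and consistency story concrete, here is a small illustrative sketch using Python's built-in sqlite3 module (the table, account names, and invariant are invented for this example, not taken from the text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 50), ("B", 50)])
conn.commit()

def transfer(amount):
    """Move `amount` from A to B as one transaction: both writes happen, or neither."""
    try:
        with conn:  # the context manager commits on success, rolls back on any exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = 'A'", (amount,))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = 'B'", (amount,))
            # Enforce the invariant A + B = 100 before committing
            (total,) = conn.execute("SELECT SUM(balance) FROM accounts").fetchone()
            if total != 100:
                raise ValueError("invariant violated")
    except Exception:
        pass  # the rollback has already restored the pre-transaction state

transfer(10)
print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
# [('A', 40), ('B', 60)] -- both changes applied together, invariant preserved
```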
Non-trivial transactions typically require a large number of locks, resulting in substantial overhead as well as blocking other transactions. For example, if user A is running a transaction that has to read a row of data that user B wants to modify, user B must wait until user A's transaction completes. Two-phase locking is often applied to guarantee full isolation.

## Multiversion Concurrency Control

An alternative to locking is multiversion concurrency control, in which the database provides each reading transaction the prior, unmodified version of data that is being modified by another active transaction. This allows readers to operate without acquiring locks, i.e., writing transactions do not block reading transactions, and readers do not block writers. Going back to the example, when user A's transaction requests data that user B is modifying, the database provides A with the version of that data that existed when user B started his transaction. User A gets a consistent view of the database even if other users are changing data. One implementation, namely snapshot isolation, relaxes the isolation property.

# Challenges in ACID

ACID is old school. Jim Gray described atomicity, consistency and durability long before I was even born. But that particular paper doesn't mention anything about isolation. This is understandable if we think of the production systems of the late 70's, which according to Jim Gray: "At present, the largest airlines and banks have about 10,000 terminals and about 100 active transactions at any instant". So all efforts were spent on delivering correctness rather than concurrency. Things have changed drastically ever since, and nowadays even modest set-ups are able to run 1000 TPS. From a database perspective, atomicity is a fixed property, but everything else may be traded off for performance/scalability reasons. If the database system is composed of multiple nodes, then distributed-system consistency (the C in the CAP theorem, not the C in ACID) mandates that all changes be propagated to all nodes (multi-master replication). If slave nodes are updated asynchronously then we break the consistency rule, the system becoming "eventually consistent".
Fourier Transform of Rep and Comb Functions

A slecture by ECE student Matt Miller. Partly based on the ECE438 Fall 2014 lecture material of Prof. Mireille Boutin.

In this slecture, we are going to look at the Fourier transforms of the $comb_T()$ and $rep_T()$ functions. First, we will define a function $p_T(t)$, the pulse train function, or series of time-shifted impulses: \begin{align} p_T(t) = \sum_{k=-\infty}^{\infty}\delta(t-kT) \end{align} Now let's take a look at the $comb_T()$ function. By definition: \begin{align}comb_T(x(t)) :&= x(t)p_T(t) \\ &= x(t)\sum_{k=-\infty}^{\infty}\delta(t-kT) \\ &= \sum_{k=-\infty}^{\infty}x(t)\delta(t-kT) \\ &= \sum_{k=-\infty}^{\infty}x(kT)\delta(t-kT) \end{align} The result of this function is a set of time-shifted impulses whose amplitudes match those of the input signal x(t) at each given point. This signal can be referred to as a sampling of x(t) with a sampling rate of 1/T. Before we get started with the Fourier transform of this function, let's take a look at the $rep_T()$ function, which is very similar to the $comb_T()$ function: \begin{align}rep_T(x(t)) :&= x(t)*p_T(t) \\ &= x(t)*\sum_{k=-\infty}^{\infty}\delta(t-kT) \\ &= \sum_{k=-\infty}^{\infty}x(t)*\delta(t-kT) \\ &= \sum_{k=-\infty}^{\infty}x(t-kT)\end{align} As shown above, the $rep_T()$ function differs from the $comb_T()$ function in that it convolves the input signal with the impulse train rather than simply multiplying the two. Now that we've seen the $rep_T()$ function, we can go back to the Fourier transform of the $comb_T()$ function: ${\mathcal F}(comb_T(x(t))) = {\mathcal F}(x(t)p_T(t))$ Using the multiplication property: \begin{align} &= {\mathcal X}(f)*{\mathcal F}(p_T(t)) \\ &= {\mathcal X}(f)*{\mathcal F}(\sum_{n=-\infty}^{\infty}\frac{1}{T}e^{j{\frac{2 \pi}{T}}nt}) \\ &= {\mathcal X}(f)*\sum_{n=-\infty}^{\infty}\frac{1}{T}{\mathcal F}(e^{j{\frac{2 \pi}{T}}nt}) \\ &= {\mathcal X}(f)*\frac{1}{T}\sum_{n=-\infty}^{\infty}\delta(f - \frac{n}{T}) \\ &= \frac{1}{T}{\mathcal X}(f)*{\mathcal P}_{\frac{1}{T}}(f)\end{align} Now, knowing that a signal convolved with a pulse train results in a $rep_T()$ function, we can simplify this as: ${\mathcal F}(comb_T(x(t))) = \frac{1}{T}rep_{\frac{1}{T}}({\mathcal X}(f))$ Simply put, the Fourier transform of a comb function is a rep function. Now let's see if the reverse is true: ${\mathcal F}(rep_T(x(t))) = {\mathcal F}(x(t)*p_T(t))$ Using the convolution property: \begin{align} &= {\mathcal X}(f){\mathcal F}(p_T(t)) \\ &= {\mathcal X}(f){\mathcal F}(\sum_{n=-\infty}^{\infty}\frac{1}{T}e^{j{\frac{2 \pi}{T}}nt}) \\ &= {\mathcal X}(f)\sum_{n=-\infty}^{\infty}\frac{1}{T}{\mathcal F}(e^{j{\frac{2 \pi}{T}}nt}) \\ &= {\mathcal X}(f)\frac{1}{T}\sum_{n=-\infty}^{\infty}\delta(f - \frac{n}{T}) \\ &= {\mathcal X}(f)\frac{1}{T}{\mathcal P}_{\frac{1}{T}}(f) \\ &= \frac{1}{T}comb_{\frac{1}{T}}({\mathcal X}(f))\end{align} As shown above, we now know that the Fourier transform of a rep is a comb. This shows the duality of the two functions between the time and frequency domains; one in the time domain is the other in the frequency domain, or vice versa. One difference that is important to remember, however, is that the resulting Fourier transform's impulse-train period is the inverse of the original in the time domain. The frequency-domain signal is also amplitude-scaled by the inverse of the period T.
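A quick numerical illustration of this time/frequency duality (a discrete sketch of my own, not part of the original slecture): the DFT of a discrete impulse train with period T samples is again an impulse train, with spacing N/T bins and height N/T.

```python
import numpy as np

N = 256          # length of the discrete "time" axis
T = 8            # impulse-train period in samples
p = np.zeros(N)
p[::T] = 1.0     # discrete stand-in for p_T(t)

P = np.fft.fft(p)
peaks = np.nonzero(np.abs(P) > 1e-9)[0]

print(peaks)                  # multiples of N/T = 32: an impulse train in frequency
print(np.abs(P[peaks][:3]))   # each spectral impulse has height N/T = 32
```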
# nLab invariant

## Idea

For $A$ a monoid equipped with an action on an object $V$, an invariant of the action is an element of $V$ which is taken by the action to itself.

## Definitions

### For $\infty$-group actions

For $\mathbf{H}$ an (∞,1)-topos, $G \in Grp(\mathbf{H})$ an ∞-group and $* : \mathbf{B} G \vdash V(*) : Type$ an ∞-action of $G$ on $V \in \mathbf{H}$, the type of invariants is the absolute dependent product $\vdash \prod_{* : \mathbf{B}G} V(*) : Type \,.$ The connected components of this is equivalently the group cohomology of $G$ with coefficients in the infinity-module $V$.

| homotopy type theory | representation theory |
| --- | --- |
| pointed connected context $\mathbf{B}G$ | ∞-group $G$ |
| dependent type | ∞-action/∞-representation |
| dependent sum along $\mathbf{B}G \to \ast$ | coinvariants/homotopy quotient |
| context extension along $\mathbf{B}G \to \ast$ | trivial representation |
| dependent product along $\mathbf{B}G \to \ast$ | homotopy invariants/∞-group cohomology |
| dependent sum along $\mathbf{B}G \to \mathbf{B}H$ | induced representation |
| context extension along $\mathbf{B}G \to \mathbf{B}H$ |  |
| dependent product along $\mathbf{B}G \to \mathbf{B}H$ | coinduced representation |
# Asteroid may strike Mars

1. Dec 23, 2007 ### tony873004
The asteroid 2007 WD5 was reported as having a 1 in 75 chance of hitting Mars on January 30, 2008. This was based on observations through December 21, 2007. New observations are in. Its predicted path is getting closer to Mars. The media won't run another story until NASA makes another press release, but the updated numbers from some additional observations are now available. December 23rd's data shows it is now predicted to pass 17631 km above the Martian surface, more than twice as close as the prediction made with December 21st's data, when the odds of collision were placed at 1 in 75. This doesn't necessarily mean that the odds of 1 in 75 have improved. I don't know what the error bar is on the new data. Perhaps as well as the asteroid's trajectory moving closer to Mars, the error bar has shrunk enough to confidently exclude a Martian collision. Or perhaps not. Here's a screenshot from Gravity Simulator showing the asteroid's trajectory:

2. Dec 24, 2007 ### Gokul43201 Staff Emeritus

3. Dec 24, 2007 ### tony873004
The latest data is not from a press release by NASA. It is from propagating the data obtained from a direct query of JPL/NASA's Horizons Ephemeris Computation Service. Code (Text): Rec #:621446 (+COV) Soln.date: 2007-Dec-23_00:50:04 # obs: 28 (41 days) This is the most current data available, but it will likely change from day to day. In your first link, it's strange how they say "may pass within 30,000 miles of Mars at about 6 a.m. EST (3 a.m. PST) on Jan. 30, 2008". Yet the data available on the 21st had the asteroid passing just under 50,000 km from Mars' surface at 9:11 GMT, which is 1:11 Pacific Time, not 3am PST as the article states.

4. Dec 24, 2007 ### pixel01
What surprised (and scared) me is that the asteroid was not discovered until it approached quite near the Earth, and just recently: Nov. 20!

5. Dec 27, 2007 ### B. Elliott
If it has a 1 in 75 chance of actually impacting Mars, what would the odds be of it at least getting captured by the planet? Has to be better than 1 in 75. Could we watch WD5 become a temporary Mars satellite? Or are the conditions "not so right" for this to happen?

6. Dec 27, 2007 ### D H Staff Emeritus
The odds of being captured are practically the same as the odds of collision. The asteroid is in a hyperbolic orbit. A slew of coincidences need to occur for something to be captured. Think of it this way: Mars is a lot closer to the asteroid belt than is the Earth. As such, it has a lot more "close calls" than does the Earth. The number of asteroid fly-bys of Mars vastly overwhelms the number of captures. Mars has but two satellites, after all.

7. Dec 27, 2007 ### ranger
I'm sort of hoping there will be an impact. At least this will expose some subsurface for remote observation.

8. Dec 27, 2007 ### B. Elliott
Does it primarily have to do with how quickly gravity's influence drops off with distance? I remember hearing somewhere that if you move twice the distance away, the pull drops to 1/4. Are there any graphical charts you know of that show this?

9. Dec 27, 2007 ### D H Staff Emeritus
Newton's law of gravity, $F=Gm_1m_2/r^2$. If it were only the asteroid versus Mars, and Mars had no atmosphere, Mars would never capture the asteroid. It's simple orbital mechanics.
To be captured, the asteroid would have to have an orbit much closer to Mars' orbit than it does (its orbit about the Sun relative to Mars' orbit about the Sun gives it too much energy), it would have to have a closest approach more-or-less above the sunrise line (it will get a huge velocity boost if its closest approach is on the sunset side, kind of like dropping a ping pong ball on top of a superball gives the ping pong ball a HUGE bounce), and it would have to hit Mars' atmosphere hard enough to slow it down considerably but not so hard as to make it burn up.

10. Dec 27, 2007 ### rbj
well, if Mars (and the asteroid) were a point mass (which they ain't). think of the surface of the planet as the extent of the atmosphere. a very dense atmosphere.

11. Dec 28, 2007 ### rbj
and, it might be a useful canary for the politicians to take more seriously the near-Earth bodies that may someday trouble our planet. wouldn't that be an Armageddon if a Yucatán-sized thing visited our planet again?

12. Dec 28, 2007 ### D H Staff Emeritus
Capture, in the context of this discussion, excludes collision with Mars' surface. In other words, Mars gets a new satellite. I suppose that Mars' non-spherical nature could contribute to capturing an asteroid on a hyperbolic trajectory. However, that strikes me as far, far flukier than Mars capturing an asteroid that just misses Mars and performs an aerobraking maneuver instead.

13. Dec 28, 2007 ### tony873004
This asteroid is moving very fast with respect to Mars. There is no chance of capture. Even if it were moving slowly with respect to Mars, the best it could hope for is capture into a temporary orbit. Earth recently had a 2nd moon. It orbited a few times and then escaped.

14. Dec 28, 2007 ### D H Staff Emeritus
I agree. I did not word post #6 very well. I should have said "The odds of being captured or colliding are the same as the odds of collision. (In other words, there is no chance of capture)."

15. Dec 29, 2007 ### neutrino
News Update: Chances are now http://www.skyandtelescope.com/r?19=961&43=107891&44=12905456&32=3186&7=162451&40=http%3A%2F%2Fwww.skyandtelescope.com%2Fnews%2F12905456.html [Broken].
Last edited by a moderator: May 3, 2017

16. Dec 29, 2007 ### hypatia
I wonder if Las Vegas has started betting on it yet?

17. Jan 3, 2008 ### OmCheeto
now 1 in 28 http://neo.jpl.nasa.gov/news/news154.html [Broken] still no news from Las Vegas.
Last edited by a moderator: May 3, 2017

18. Jan 3, 2008 ### neutrino
Drat!

19. Jan 4, 2008 ### pixel01
Well, they cannot see the asteroid for now, so it's not surprising if tomorrow you hear that the odds should be 1/24 or something.

20. Jan 11, 2008 ### Gokul43201 Staff Emeritus
### Problem 5 — Easy Difficulty

5. Given the vector $\overrightarrow{AB}$ as shown, draw a vector
a. equal to $\overrightarrow{AB}$
b. opposite to $\overrightarrow{AB}$
c. whose magnitude equals $|\overrightarrow{AB}|$ but is not equal to $\overrightarrow{AB}$
d. whose magnitude is twice that of $\overrightarrow{AB}$ and in the same direction
e. whose magnitude is half that of $\overrightarrow{AB}$ and in the opposite direction

### Answer

a. $\overrightarrow{AB}=\overrightarrow{CD}$
b. $\overrightarrow{AB}=-\overrightarrow{EF}$
c. $|\overrightarrow{AB}|=|\overrightarrow{EF}|$ but $\overrightarrow{AB} \neq \overrightarrow{EF}$
d. $\overrightarrow{GH}=2 \overrightarrow{AB}$
e. $\overrightarrow{AB}=-2 \overrightarrow{JI}$

### Video Transcript

(The transcript below is from a similar recommended problem, not from Problem 5 itself.)

We were given a vector A that points in the negative y direction, with a magnitude of five units, and a vector B that has twice the magnitude of A and points in the positive x direction. Here I've drawn vectors A and B in red and green, respectively. Since vector B has twice the magnitude of vector A, and vector A is five, then vector B has a magnitude of 10. The problem wants us to find the direction and magnitude of three different combinations of these two vectors. The first one is A plus B: we start with A and tack on B, and this gives us A plus B. We also want to find A minus B, so we start with A and tack on B, but in the opposite direction, and this will be A minus B. And lastly, we want to find B minus A, so we start with B and we subtract A; this vector will be B minus A. Notice how all three of these new vectors have the same magnitude, so we only have to calculate the magnitude once. Let's go ahead and calculate the magnitude for A plus B first. Here we can draw a right triangle such that one of the legs is 10 and the other leg is five. So to find this magnitude H, recall from the Pythagorean theorem we can find H as the square root of 10 squared plus 5 squared, which are the legs, and we get a magnitude of 11.2. This will be the magnitude for all three vectors. Now, to find the angle, we have tan of theta equal to y over x, where the angle theta I have labeled as always starting from the positive x axis. So here we can find theta as the inverse tangent of y over x, where y here is negative five, since it's pointing in the negative y direction, and x is 10. So we get an angle of negative 26.6 degrees. This will be for A plus B. Now, for A minus B, we will have this angle as 26.6 degrees, so that will be 180 degrees plus 26.6, which equals 206.6 degrees. Now for B minus A, we have theta right here, and that is just 26.6 degrees.
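The arithmetic in that transcript is easy to check. A short verification of the magnitudes and angles, using the transcript's vectors (A = 5 units in −y, B = 10 units in +x); this code is my own, not part of the original page:

```python
import numpy as np

A = np.array([0.0, -5.0])   # 5 units in the negative y direction
B = np.array([10.0, 0.0])   # twice the magnitude, along positive x

for name, v in [("A+B", A + B), ("A-B", A - B), ("B-A", B - A)]:
    mag = np.linalg.norm(v)
    ang = np.degrees(np.arctan2(v[1], v[0])) % 360
    print(f"{name}: |v| = {mag:.1f}, angle = {ang:.1f} deg")

# A+B: 11.2 at 333.4 deg (i.e. -26.6), A-B: 11.2 at 206.6 deg, B-A: 11.2 at 26.6 deg
```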
# Algorithm for competing cells of 0s and 1s

I'm working on a practice algorithm problem, stated as follows:

There are eight houses represented as cells. Each day, the houses compete with adjacent ones. 1 represents an "active" house and 0 represents an "inactive" house. If the neighbors on both sides of a given house are either both active or both inactive, then that house becomes inactive on the next day. Otherwise it becomes active. For example, if we had a group of neighbors [0, 1, 0] then the house at [1] would become 0 since both the house to its left and right are both inactive. The cells at both ends only have one adjacent cell so assume that the unoccupied space on the other side is an inactive cell. Even after updating the cell, you have to consider its prior state when updating the others so that the state information of each cell is updated simultaneously. The function takes the array of states and a number of days and should output the state of the houses after the given number of days.

Examples:

• input: states = [1, 0, 0, 0, 0, 1, 0, 0], days = 1; output should be [0, 1, 0, 0, 1, 0, 1, 0]
• input: states = [1, 1, 1, 0, 1, 1, 1, 1], days = 2; output should be [0, 0, 0, 0, 0, 1, 1, 0]

Here's my solution:

    def cell_compete(states, days):
        def new_state(in_states):
            new_state = []
            for i in range(len(in_states)):
                if i == 0:
                    group = [0, in_states[0], in_states[1]]
                elif i == len(in_states) - 1:
                    group = [in_states[i - 1], in_states[i], 0]
                else:
                    group = [in_states[i - 1], in_states[i], in_states[i + 1]]
                new_state.append(0 if group[0] == group[2] else 1)
            return new_state

        state = None
        j = 0
        while j < days:
            if not state:
                state = new_state(states)
            else:
                state = new_state(state)
            j += 1
        return state

I originally thought to take advantage of the fact they are 0s and 1s only and to use bitwise operators, but couldn't quite get that to work. How can I improve the efficiency of this algorithm or the readability of the code itself?

• Welcome to Code Review. I have rolled back your last edit. Please do not update the code in your question to incorporate feedback from answers, doing so goes against the Question + Answer style of Code Review. This is not a forum where you should keep the most updated version in your question. Please see what you may and may not do after receiving answers. – Heslacher Sep 17 '19 at 4:35
In this case, it is possible to use a list comprehension to construct a list for (3) and then add the first and last elements. new_states = [in_states[i-1] == in_states[i+1] for i in range(1, len(in_states) - 1)] new_states.insert(in_states[1], 0) new_states.append(in_states[-2]) However, insertion at the beginning of a list requires to update the entire list. A better way to construct the list is to use extend with a generator expression: new_states = [in_states[1]] new_states.extend(in_states[i-1] == in_states[i+1] for i in range(1, len(in_states) - 1)) new_states.append(in_states[-2]) An even better approach is to use the unpack operator * with a generator expression. This approach is more concise and also has the best performance. # state_gen is a generator expression for computing new_states[1:-1] state_gen = (in_states[i-1] == in_states[i+1] for i in range(1, len(in_states) - 1)) new_states = [in_states[1], *state_gen, in_states[-2]] Note that it is possible to unpack multiple iterators / generator expressions into the same list like this: new_states = [*it1, *it2, *it3] Note that if it1 and it3 are already lists, unpacking will make another copy so it could be less efficient than extending it1 with it2 and it3, if the size of it1 is large. # Algorithmic Improvement Here I show how to improve the algorithm for more general inputs (i.e. a varying number of houses). The naive solution updates the house states for each day. In order to improve it, one needs to find a connection between the input states $$\s_0\$$ and the states $$\s_n\$$ after some days $$\n\$$ for a direct computation. Let $$\s_k[d]\$$ be the state of the house at index $$\d\$$ on day $$\k\$$ and $$\H\$$ be the total number of houses. We first extend the initial state sequence $$\s_0\$$ into an auxiliary sequence $$\s_0'\$$ of length $$\H'=2H+2\$$ based on the following: $$s_0'[d]=\left\{\begin{array}{ll} s_0[d] & d\in[0, H) \\ 0 & d=H, 2H + 1\\ s_0[2H-d] & d\in(H,2H] \\ \end{array}\right.\label{df1}\tag{1}$$ The sequence $$\s_k'\$$ is updated based on the following recurrence, where $$\\oplus\$$ and $$\\%\$$ are the exclusive-or and modulo operations, respectively: $$s_{k+1}'[d] = s_k'[(d-1)\%H']\oplus s_k'[(d+1)\%H']\label{df2}\tag{2}$$ Using two basic properties of $$\\oplus\$$: $$\a\oplus a = 0\$$ and $$\a\oplus 0 = a\$$, the relationship (\ref{df1}) can be proved to hold on any day $$\k\$$ by induction: $$s_{k+1}'[d] = \left\{ \begin{array}{ll} s_k'[1]\oplus s_k'[H'-1] = s_k'[1] = s_k[1] = s_{k+1}[0] & d = 0 \\ s_k'[d-1]\oplus s_k'[d+1] = s_k[d-1]\oplus s_k[d+1]=s_{k+1}[d] & d\in(0,H) \\ s_k'[H-1]\oplus s_k'[H+1] = s_k[H-1]\oplus s_k[H-1] = 0 & d = H \\ s_k'[2H-(d-1)]\oplus s_k'[2H-(d+1)] \\ \quad = s_k[2H-(d-1)]\oplus s_k[2H-(d+1)] = s_{k+1}[2H-d] & d\in(H,2H) \\ s_k'[2H-1]\oplus s_k'[2H+1] = s_k'[2H-1] = s_k[1] = s_{k+1}[0] & d = 2H \\ s_k'[2H]\oplus s_k'[0] = s_k[0]\oplus s_k[0] = 0 & d = 2H+1 \end{array}\right.$$ We can then verify the following property of $$\s_k'\$$ $$\begin{eqnarray} s_{k+1}'[d] & = & s_k'[(d-1)\%H'] \oplus s_k'[(d+1)\%H'] & \\ s_{k+2}'[d] & = & s_{k+1}[(d-1)\%H'] \oplus s_{k+1}[(d+1)\%H'] \\ & = & s_k[(d-2)\%H'] \oplus s_k[d] \oplus s_k[d] \oplus s_k[(d+2)\%H'] \\ & = & s_k[(d-2)\%H'] \oplus s_k[(d+2)\%H'] \\ s_{k+4}'[d] & = & s_{k+2}'[(d-2)\%H'] \oplus s_{k+2}'[(d+2)\%H'] \\ & = & s_k'[(d-4)\%H'] \oplus s_k'[d] \oplus s_k'[d] \oplus s_k'[(d+4)\%H'] \\ & = & s_k'[(d-4)\%H'] \oplus s_k'[(d+4)\%H'] \\ \ldots & \\ s_{k+2^m}'[d] & = & s_k'[(d-2^m)\%H'] \oplus s_k'[(d+2^m)\%H'] \label{f1} 
\tag{3} \end{eqnarray}$$ Based on the recurrence (\ref{f1}), one can directly compute $$\s_{k+2^m}'\$$ from $$\s_k'\$$ and skip all the intermediate computations. We can also substitute $$\s_k'\$$ with $$\s_k\$$ in (\ref{f1}), leading to the following computations: $$\begin{eqnarray} d_1' & = & (d-2^m)\%H' & \qquad d_2' & = & (d+2^m)\%H' \\ d_1 & = & \min(d_1',2H-d_1') & \qquad d_2 & = & \min(d_2', 2H-d_2') \\ a_1 & = & \left\{\begin{array}{ll} s_k[d_1] & d_1 \in [0, L) \\ 0 & \text{Otherwise} \\ \end{array}\right. & \qquad a_2 & = & \left\{\begin{array}{ll} s_k[d_2] & d_2 \in [0, L) \\ 0 & \text{Otherwise} \\ \end{array}\right. \\ & & & s_{k+2^m}[d] & = & a_1 \oplus a_2 \label{f2}\tag{4} \end{eqnarray}$$ Note that since the sequence $$\\{2^i\%H'\}_{i=0}^{+\infty}\$$ has no more than $$\H'\$$ states, it is guaranteed that $$\\{s_{k+2^i}\}_{i=0}^{+\infty}\$$ has a cycle. More formally, there exists some $$\c>0\$$ such that $$\s_{k+2^{a+c}}=s_{k+2^a}\$$ holds for every $$\a\$$ that is greater than certain threshold. Based on (\ref{f1}) and (\ref{f2}), this entails either $$\H'|2^{a+c}-2^a\$$ or $$\H'|2^{a+c}+2^a\$$ holds. If $$\H'\$$ is factorized into $$\2^r\cdot m\$$ where $$\m\$$ is odd, we can see that $$\a\geq r\$$ must hold for either of the divisibilty. That is to say, if we start from day $$\2^r\$$ and find the next $$\t\$$ such that $$\H'|2^t-2^r\$$ or $$\H'|2^t+2^r\$$, then $$\s_{k+2^t}=s_{k+2^r}\$$ holds for every $$\k\$$. This leads to the following algorithm: • Input: $$\H\$$ houses with initial states $$\s_0\$$, number of days $$\n\$$ • Output: House states $$\s_n\$$ after $$\n\$$ days • Step 1: Let $$\H'\leftarrow 2H+2\$$, find the maximal $$\r\$$ such that $$\2^r\mid H'\$$ • Step 2: If $$\n\leq 2^r\$$, go to Step 5. • Step 3: Find the minimal $$\t, t>r\$$ such that either $$\H'|2^t-2^r\$$ or $$\H'|2^t+2^r\$$ holds. • Step 4: $$\n\leftarrow (n-2^r)\%(2^t-2^r)+2^r\$$ • Step 5: Divide $$\n\$$ into a power-2 sum $$\2^{b_0}+2^{b_1}+\ldots+2^{b_u}\$$ and calculate $$\s_n\$$ based on (\ref{f2}) As an example, if there are $$\H=8\$$ houses, $$\H'=18=2^1\cdot 9\$$. So $$\r=1\$$. We can find $$\t=4\$$ is the minimal number such that $$\18\mid 2^4+2=18\$$. Therefore $$\s_{k+2}=s_{k+2^4}\$$ holds for every $$\k\geq 0\$$. So we reduce any $$\n>2\$$ to $$\(n-2)\%14 + 2\$$, and then apply Step 5 of the algorithm to get $$\s_n\$$. Based on the above analysis, every $$\n\$$ can be reduced to a number between $$\[0, 2^t)\$$ and $$\s_n\$$ can be computed within $$\\min(t, \log n)\$$ steps using the recurrence (\ref{f2}). So the ultimate time complexity of the algorithm is $$\\Theta(H'\cdot \min(t, \log n))=\Theta(H\cdot\min(m,\log n))=\Theta(\min(H^2,H\log n))\$$. This is much better than the naive algorithm which has a time complexity of $$\\Theta(H\cdot n)\$$. • Thanks so much for the thorough analysis. What is the runtime of this and how does it compare to my original solution? – LuxuryMode Sep 17 '19 at 3:13 • Your shortcut formula fails on the second example. It appears from the examples that houses outside the 8 are supposed to be treated as 0 at every stage, whereas you are assuming an infinite board with all other houses initially 0. The actual update rule is a reversible permutation of the 256 states, and all cycles have lengths 1, 2, 7, or 14, so you can start with days %= 14 and have an O(1) algorithm (which then automatically supports days < 0 too). – benrg Sep 17 '19 at 21:15 • @benrg Thanks for pointing out the mistake. 
I've revised the algorithm and proved a result about the cycle lengths for an arbitrary number of houses $H$. Note that your $O(1)$ time complexity is based on fixed $H=8$ and therefore cannot be directly generalized to arbitrary $H$. – GZ0 Sep 18 '19 at 7:12
• @LuxuryMode I made a mistake yesterday on the boundaries of the state sequence. I've revised the algorithm entirely and presented a complete algorithm as well as the time complexity analysis. – GZ0 Sep 18 '19 at 7:13
• @benrg The reversibility only holds when $H$ is even. If $H$ is odd, there exist states that cannot be a valid output of any other state (e.g., [1, 0, 0]). Therefore not all the states are in cycles themselves. – GZ0 Sep 18 '19 at 16:54

On the logic, you should notice that the next state of the i'th house becomes state[i - 1] ^ state[i + 1] (with some care to be exercised at the boundaries). Upon closer inspection you may also notice that if you represent the state of the entire block as an integer composed of bits from each house, then state = (state << 1) ^ (state >> 1) is all you need to do. Python would take care of the boundaries (by shifting zeroes into the right places) and update all bits simultaneously.

I don't know the constraints, but I suspect that the number of days could be quite large. Since there are only so many states the block may be in (for 8 houses there are a mere 256 of them), you are going to encounter a loop. An immediate optimization is to identify it and use its length, rather than simulating each day of the entire time period; a sketch of this approach appears after the last answer below.

• While Python will take care of one of the boundaries automatically (since a >> b discards the lowest b bits of a), you do need an explicit bit mask to take care of the other. Something like state = ((state << 1) ^ (state >> 1)) & ((1 << cells) - 1), where cells = 8 is the number of "houses" in the system, should do it. – Ilmari Karonen Sep 17 '19 at 10:41

# Enumerate

Instead of writing range(len()), consider using enumerate. It provides the index and the value associated with that index. It's useful in your case because, instead of having to write in_states[i], you can write value instead. This will save you from having to index the list again with in_states[i].

# Docstrings

You should provide a docstring at the beginning of every module, class, and method you write. This will allow people to see how your code functions and what it's supposed to do. It also helps you remember what types of variables are supposed to be passed into the method. Take this for example.

    def my_method(param_one, param_two):
        ... do code stuff here ...

By reading just the method header, you have no idea what data this method is supposed to accept (hopefully you never have parameter names this ambiguous, but I'm being extreme in this example). Now, take a look at this:

    def my_method(param_one, param_two):
        """
        This method does ... and ...
        ...
        :param param_one: An Integer representing ...
        :param param_two: A String representing ...
        """
        ... do code stuff here ...

Now you know clearly what is supposed to be passed to the method.

# Consistency

I see this in your code:

    new_state.append(0 if group[0] == group[2] else 1)

But then I see this:

    if not state:
        state = new_state(states)
    else:
        state = new_state(state)

You clearly know how to accomplish the former, and since that code looks cleaner, I'd say you stick with it and be consistent:

    state = new_state(states if not state else state)

# Looping

Your looping with the while loop and using j confuses me.
It looks like a glorified for loop, only running days amount of times. So, this:

    state = None
    j = 0
    while j < days:
        if not state:
            state = new_state(states)
        else:
            state = new_state(state)
        j += 1

can be simplified to this:

    state = None
    for _ in range(days):
        state = new_state(states if not state else state)
    return state

Updated Code

    """
    Module Docstring (a description of this program goes here)
    """


    def cell_compete(states, days):
        """
        Method Docstring (a description of this method goes here)
        """
        def new_state(in_states):
            """
            Method Docstring (a description of this method goes here)
            """
            updated = []
            for index, value in enumerate(in_states):
                if index == 0:
                    group = [0, in_states[0], in_states[1]]
                elif index == len(in_states) - 1:
                    group = [in_states[index - 1], value, 0]
                else:
                    group = [in_states[index - 1], value, in_states[index + 1]]
                updated.append(0 if group[0] == group[2] else 1)
            return updated

        state = None
        for _ in range(days):
            state = new_state(states if not state else state)
        return state

• Thanks a lot, great feedback. I appreciate it. Any thoughts on the substance/logic of the algorithm itself? – LuxuryMode Sep 17 '19 at 1:34
• states if not state else state is somewhat awkward - first, it should be inverted as state if state else states. Then, take advantage of or semantics: state or states. – Reinderien Sep 17 '19 at 6:13
• Actually state can be initialized with states to avoid the unnecessary test of if state (moreover, the states variable can be used directly without the need of state). – GZ0 Sep 17 '19 at 14:44
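To make the bit-representation and cycle-length ideas from the earlier answer concrete, here is a minimal sketch (my own illustration, not the poster's code; the function name and helper variables such as cells and seen are assumptions). It packs the houses into one integer, applies the masked shift update, and skips whole cycles once one is detected:

    def cell_compete_bits(states, days):
        # Sketch of the bit-twiddling + cycle-detection approach discussed above.
        cells = len(states)
        mask = (1 << cells) - 1

        # Pack the 0/1 house states into one integer (states[0] = highest bit).
        state = 0
        for s in states:
            state = (state << 1) | s

        seen = {}  # state -> first day it was observed, for cycle detection
        day = 0
        while day < days:
            if state in seen:
                cycle = day - seen[state]
                # Skip over whole cycles; only the remainder needs simulating.
                days = day + (days - day) % cycle
                seen.clear()  # prevent the shortcut from firing again
                continue
            seen[state] = day
            # Every house becomes the XOR of its neighbours; vacant ends act as 0.
            state = ((state << 1) ^ (state >> 1)) & mask
            day += 1

        # Unpack the integer back into a list of 0/1 values.
        return [(state >> (cells - 1 - i)) & 1 for i in range(cells)]

For the 8-house case this is consistent with the comment above that the update is a permutation whose cycle lengths all divide 14, which is why a simple days %= 14 up front also works there.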
# Question #2000008: Testing Statistical Hypothesis

Question: A newspaper in a large Midwestern city reported that the National Association of Realtors said that the mean home price last year was $116,800. The city housing department feels that this figure is too low. They randomly selected 66 home sales and found a mean of $118,900. It is known that σ is equal to $3701. Use a 5% level of significance to test the $116,800 figure.

Solution: The solution consists of 103 words (1 page)

Deliverables: Word Document
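The question describes a one-sample z-test with known population standard deviation. A minimal sketch of the computation (my own illustration, not the purchased solution; the variable names are assumptions), testing H0: μ = 116,800 against the one-sided alternative H1: μ > 116,800 at α = 0.05:

    from math import sqrt, erfc

    # Values taken from the question statement.
    mu_0, x_bar, sigma, n = 116_800, 118_900, 3701, 66

    # z statistic for a one-sample test with known sigma.
    z = (x_bar - mu_0) / (sigma / sqrt(n))

    # Upper-tail p-value: P(Z > z) for a standard normal, via erfc.
    p_value = 0.5 * erfc(z / sqrt(2))

    print(f"z = {z:.2f}, one-sided p = {p_value:.2e}")
    # Reject H0 at the 5% level if p_value < 0.05.

Here z is roughly 4.6, so under this one-sided formulation the sample mean is inconsistent with the $116,800 figure at the 5% level.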
# Volume of a circle

1. Jun 18, 2009

### Ry122

http://users.on.net/~rohanlal/circle2.jpg [Broken]
This is part of the solution to finding the volume of a circle with double integrals. I just want to know where the r from rdrd0 came from and also why the limits on the d0 integral are 2pi and 0.

Last edited by a moderator: May 4, 2017

2. Jun 18, 2009

### Hootenanny

Staff Emeritus
I assume you mean this is part of a question to find the volume of the cylinder created by extruding a circle along the z-axis. To answer your first question, the integral has been transformed from Cartesian to polar coordinates. Rather than specifying the position of a point in terms of its (x,y) coordinates, polar coordinates use (r,Θ), where r is the distance from the origin to the point and Θ is the angle between the radius and the positive x semi-axis. For more information and answers to your subsequent questions see http://mathworld.wolfram.com/PolarCoordinates.html [Broken].

Last edited by a moderator: May 4, 2017

3. Jun 18, 2009

### Ry122

What makes you think it's a cylinder? This is the full solution:
http://users.on.net/~rohanlal/circ3.jpg [Broken]

Last edited by a moderator: May 4, 2017

4. Jun 18, 2009

### HallsofIvy

Staff Emeritus
This is the volume of a sphere, not a circle; circles don't have "volume"! And you should have learned that the "differential of area in polar coordinates" is $r dr d\theta$ when you learned about integrating in polar coordinates. There are a number of different ways of showing that. I recommend you check your calculus book for the one you were expected to learn.

Last edited by a moderator: May 4, 2017

5. Jun 18, 2009

### Ry122

And why is the limit 2pi to 0?

6. Jun 19, 2009

### Hootenanny

Staff Emeritus
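A short sketch of the change of variables being discussed (my own summary, not a post from the thread): the extra factor of r is the Jacobian of the map $x = r\cos\theta$, $y = r\sin\theta$, and the $d\theta$ limits are $0$ to $2\pi$ because $\theta$ must sweep out a full revolution to cover the disk exactly once:

$$dx\,dy = \left|\det\begin{pmatrix}\cos\theta & -r\sin\theta\\ \sin\theta & r\cos\theta\end{pmatrix}\right| dr\,d\theta = r\,dr\,d\theta, \qquad \iint_{x^2+y^2\leq a^2} f\,dx\,dy = \int_0^{2\pi}\!\!\int_0^a f\,r\,dr\,d\theta.$$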
# Prove that every subspace of a topological space with the discrete topology has the discrete topology.

This question actually has two parts:

a. Prove that every subspace of a topological space with the discrete topology has the discrete topology.
b. Prove that every subspace of a topological space with the trivial topology has the trivial topology.

If I can do part a, part b will be easy. What do I need to demonstrate, exactly, to show that a is true?

• What are the open sets of a subspace topology? You have to show that every subset of the subspace is open. – BrianO Dec 7 '15 at 0:12

Suppose we have a topological space $(X, \tau_X)$, where $\tau_X\subseteq 2^X$ is a topology on $X$. Then every subset $Y\subseteq X$ can be considered as a topological subspace of $(X, \tau_X)$ with the induced topology $\tau_Y$. This topology is constructed simply as follows: $$\tau_Y = \{U\cap Y:\ \ U\in\tau_X \}$$ So (a) if you have a topological space with the discrete topology $(X, 2^X)$, then for every $Y\subseteq X$ you'll have the induced topology $\tau_Y = \{U\cap Y:\ \ U\in 2^X \}$, which is $2^Y$, just as was stated. And (b) if you have a topological space with the antidiscrete (trivial) topology $(X, \{\emptyset, X\})$, then the induced topology on $Y$ will look like this: $\tau_Y = \{U\cap Y:\ \ U\in \{\emptyset, X\} \}=\{\emptyset, Y\}$, just as expected.

• Thank you for your answer. I have a question - this is not the first time I've seen this notation $(X,2^X)$ on here, but I actually never saw this in my textbook. Maybe a stupid question, but what does this notation mean? – Indigo Dec 7 '15 at 0:31
• @Indigo, a topological space is a pair of two things: a non-empty set (called the carrier set, or set of points) and a collection of subsets of the carrier set complying with the topology axioms (called the open sets). So a topological space can be written as $(X, \tau)$, where $X$ is the carrier set and $\tau$ is a collection of subsets of the carrier set. $2^X$ denotes the set of all subsets of the set $X$. So, by definition, $\tau \subseteq 2^X$. For the discrete topology, by definition $\tau = 2^X$. – Glinka Dec 7 '15 at 0:47
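To spell out the last step of part (a) as a one-line check (the same argument as above, just written out): for any $A\subseteq Y$,

$$A = A\cap Y \quad\text{with}\quad A\in 2^X \quad\Longrightarrow\quad A\in\tau_Y, \qquad\text{hence}\quad 2^Y\subseteq\tau_Y\subseteq 2^Y,$$

so $\tau_Y = 2^Y$ and the subspace is indeed discrete.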
I’ve just uploaded to the arXiv my paper “Localisation and compactness properties of the Navier-Stokes global regularity problem“, submitted to Analysis and PDE. This paper concerns the global regularity problem for the Navier-Stokes system of equations $\displaystyle \partial_t u + (u \cdot \nabla) u = \Delta u - \nabla p + f \ \ \ \ \ (1)$ $\displaystyle \nabla \cdot u = 0 \ \ \ \ \ (2)$ $\displaystyle u(0,\cdot) = u_0 \ \ \ \ \ (3)$ in three dimensions. Thus, we specify initial data ${(u_0,f,T)}$, where ${0 < T < \infty}$ is a time, ${u_0: {\bf R}^3 \rightarrow {\bf R}^3}$ is the initial velocity field (which, in order to be compatible with (2), (3), is required to be divergence-free), ${f: [0,T] \times {\bf R}^3 \rightarrow {\bf R}^3}$ is the forcing term, and then seek to extend this initial data to a solution ${(u,p,u_0,f,T)}$ with this data, where the velocity field ${u: [0,T] \times {\bf R}^3 \rightarrow {\bf R}^3}$ and pressure term ${p: [0,T] \times {\bf R}^3 \rightarrow {\bf R}}$ are the unknown fields. Roughly speaking, the global regularity problem asserts that for every smooth set of initial data ${(u_0,f,T)}$, there exists a smooth solution ${(u,p,u_0,f,T)}$ to the Navier-Stokes equation with this data. However, this is not a good formulation of the problem because it does not exclude the possibility that one or more of the fields ${u_0, f, u, p}$ grows too fast at spatial infinity. This problem is evident even for the much simpler heat equation $\displaystyle \partial_t u = \Delta u$ $\displaystyle u(0,\cdot) = u_0.$ As long as one has some mild conditions at infinity on the smooth initial data ${u_0: {\bf R}^3 \rightarrow {\bf R}}$ (e.g. polynomial growth at spatial infinity), then one can solve this equation using the fundamental solution of the heat equation: $\displaystyle u(t,x) = \frac{1}{(4\pi t)^{3/2}} \int_{{\bf R}^3} u_0(y) e^{-|x-y|^2/4t}\ dy.$ If furthermore ${u}$ is a tempered distribution, one can use Fourier-analytic methods to show that this is the unique solution to the heat equation with this data. But once one allows sufficiently rapid growth at spatial infinity, existence and uniqueness can break down. Consider for instance the backwards heat kernel $\displaystyle u(t,x) = \frac{1}{(4\pi(T-t))^{3/2}} e^{|x|^2/4(T-t)}$ for some ${T>0}$, which is smooth (albeit exponentially growing) at time zero, and is a smooth solution to the heat equation for ${0 \leq t < T}$, but develops a dramatic singularity at time ${t=T}$. A famous example of Tychonoff from 1935, based on a power series construction, shows that uniqueness for the heat equation can also fail once growth conditions are removed. An explicit example of non-uniqueness for the heat equation is given by the contour integral $\displaystyle u(t,x_1,x_2,x_3) = \int_\gamma \exp(e^{\pi i/4} x_1 z + e^{5\pi i/8} z^{3/2} - itz^2)\ dz$ where ${\gamma}$ is the ${L}$-shaped contour consisting of the positive real axis and the upper imaginary axis, with ${z^{3/2}}$ being interpreted with the standard branch (with cut on the negative axis). One can show by contour integration that this function solves the heat equation and is smooth (but rapidly growing at infinity), and vanishes for ${t<0}$, but is not identically zero for ${t>0}$.
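As a quick check (a verification sketch added here, not part of the original post) that this backwards kernel really solves the heat equation on ${0 \leq t < T}$: differentiating directly,

$\displaystyle \nabla u = \frac{x}{2(T-t)}\, u, \qquad \Delta u = \Big( \frac{|x|^2}{4(T-t)^2} + \frac{3}{2(T-t)} \Big) u = \partial_t u,$

since the time derivative acting on the prefactor ${(4\pi(T-t))^{-3/2}}$ and on the exponent ${|x|^2/4(T-t)}$ produces exactly the same two terms.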
Thus, in order to obtain a meaningful (and physically realistic) problem, one needs to impose some decay (or at least limited growth) hypotheses on the data ${u_0,f}$ and solution ${u,p}$ in addition to smoothness. For the data, one can impose a variety of such hypotheses, including the following: • (Finite energy data) One has ${\|u_0\|_{L^2_x({\bf R}^3)} < \infty}$ and ${\| f \|_{L^\infty_t L^2_x([0,T] \times {\bf R}^3)} < \infty}$. • (${H^1}$ data) One has ${\|u_0\|_{H^1_x({\bf R}^3)} < \infty}$ and ${\| f \|_{L^\infty_t H^1_x([0,T] \times {\bf R}^3)} < \infty}$. • (Schwartz data) One has ${\sup_{x \in {\bf R}^3} ||x|^m \nabla_x^k u_0(x)| < \infty}$ and ${\sup_{(t,x) \in [0,T] \times {\bf R}^3} ||x|^m \nabla_x^k \partial_t^l f(t,x)| < \infty}$ for all ${m,k,l \geq 0}$. • (Periodic data) There is some ${0 < L < \infty}$ such that ${u_0(x+Lk) = u_0(x)}$ and ${f(t,x+Lk) = f(t,x)}$ for all ${(t,x) \in [0,T] \times {\bf R}^3}$ and ${k \in {\bf Z}^3}$. • (Homogeneous data) ${f=0}$. Note that smoothness alone does not necessarily imply finite energy, ${H^1}$, or the Schwartz property. For instance, the (scalar) function ${u(x) = \exp( i |x|^{10} ) (1+|x|)^{-2}}$ is smooth and finite energy, but not in ${H^1}$ or Schwartz. Periodicity is of course incompatible with finite energy, ${H^1}$, or the Schwartz property, except in the trivial case when the data is identically zero. Similarly, one can impose conditions at spatial infinity on the solution, such as the following: • (Finite energy solution) One has ${\| u \|_{L^\infty_t L^2_x([0,T] \times {\bf R}^3)} < \infty}$. • (${H^1}$ solution) One has ${\| u \|_{L^\infty_t H^1_x([0,T] \times {\bf R}^3)} < \infty}$ and ${\| u \|_{L^2_t H^2_x([0,T] \times {\bf R}^3)} < \infty}$. • (Partially periodic solution) There is some ${0 < L < \infty}$ such that ${u(t,x+Lk) = u(t,x)}$ for all ${(t,x) \in [0,T] \times {\bf R}^3}$ and ${k \in {\bf Z}^3}$. • (Fully periodic solution) There is some ${0 < L < \infty}$ such that ${u(t,x+Lk) = u(t,x)}$ and ${p(t,x+Lk) = p(t,x)}$ for all ${(t,x) \in [0,T] \times {\bf R}^3}$ and ${k \in {\bf Z}^3}$. (The ${L^2_t H^2_x}$ component of the ${H^1}$ solution is for technical reasons, and should not be paid too much attention for this discussion.) Note that we do not consider the notion of a Schwartz solution; as we shall see shortly, this is too restrictive a concept of solution to the Navier-Stokes equation. Finally, one can downgrade the regularity of the solution down from smoothness. There are many ways to do so; two such examples include • (${H^1}$ mild solutions) The solution is not smooth, but is ${H^1}$ (in the preceding sense) and solves the equation (1) in the sense that the Duhamel formula $\displaystyle u(t) = e^{t\Delta} u_0 + \int_0^t e^{(t-t')\Delta} (-(u\cdot\nabla) u-\nabla p+f)(t')\ dt'$ holds. • (Leray-Hopf weak solution) The solution ${u}$ is not smooth, but lies in ${L^\infty_t L^2_x \cap L^2_t H^1_x}$, solves (1) in the sense of distributions (after rewriting the system in divergence form), and obeys an energy inequality. Finally, one can ask for two types of global regularity results on the Navier-Stokes problem: a qualitative regularity result, in which one merely provides existence of a smooth solution without any explicit bounds on that solution, and a quantitative regularity result, which provides bounds on the solution in terms of the initial data, e.g. 
a bound of the form $\displaystyle \| u \|_{L^\infty_t H^1_x([0,T] \times {\bf R}^3)} \leq F( \|u_0\|_{H^1_x({\bf R}^3)} + \|f\|_{L^\infty_t H^1_x([0,T] \times {\bf R}^3)}, T )$ for some function ${F: {\bf R}^+ \times {\bf R}^+ \rightarrow {\bf R}^+}$. One can make a further distinction between local quantitative results, in which ${F}$ is allowed to depend on ${T}$, and global quantitative results, in which there is no dependence on ${T}$ (the latter is only reasonable though in the homogeneous case, or if ${f}$ has some decay in time). By combining these various hypotheses and conclusions, we see that one can write down quite a large number of slightly different variants of the global regularity problem. In the official formulation of the regularity problem for the Clay Millennium prize, a positive correct solution to either of the following two problems would be accepted for the prize: • Conjecture 1.4 (Qualitative regularity for homogeneous periodic data) If ${(u_0,0,T)}$ is periodic, smooth, and homogeneous, then there exists a smooth partially periodic solution ${(u,p,u_0,0,T)}$ with this data. • Conjecture 1.3 (Qualitative regularity for homogeneous Schwartz data) If ${(u_0,0,T)}$ is Schwartz and homogeneous, then there exists a smooth finite energy solution ${(u,p,u_0,0,T)}$ with this data. (The numbering here corresponds to the numbering in the paper.) Furthermore, a negative correct solution to either of the following two problems would also be accepted for the prize: • Conjecture 1.6 (Qualitative regularity for periodic data) If ${(u_0,f,T)}$ is periodic and smooth, then there exists a smooth partially periodic solution ${(u,p,u_0,f,T)}$ with this data. • Conjecture 1.5 (Qualitative regularity for Schwartz data) If ${(u_0,f,T)}$ is Schwartz, then there exists a smooth finite energy solution ${(u,p,u_0,f,T)}$ with this data. I am not announcing any major progress on these conjectures here. What my paper does study, though, is the question of whether the answer to these conjectures is somehow sensitive to the choice of formulation. For instance: 1. Note in the periodic formulations of the Clay prize problem that the solution is only required to be partially periodic, rather than fully periodic; thus the pressure has no periodicity hypothesis. One can ask the extent to which the above problems change if one also requires pressure periodicity. 2. In another direction, one can ask the extent to which quantitative formulations of the Navier-Stokes problem are stronger than their qualitative counterparts; in particular, whether it is possible that each choice of initial data in a certain class leads to a smooth solution, but with no uniform bound on that solution in terms of various natural norms of the data. 3. Finally, one can ask the extent to which the conjecture depends on the category of data. For instance, could it be that global regularity is true for smooth periodic data but false for Schwartz data? True for Schwartz data but false for smooth ${H^1}$ data? And so forth. One motivation for the final question (which was posed to me by my colleague, Andrea Bertozzi) is that the Schwartz property on the initial data ${u_0}$ tends to be instantly destroyed by the Navier-Stokes flow. This can be seen by introducing the vorticity ${\omega := \nabla \times u}$. 
If ${u(t)}$ is Schwartz, then from Stokes’ theorem we necessarily have vanishing of certain moments of the vorticity, for instance: $\displaystyle \int_{{\bf R}^3} \omega_1 (x_2^2-x_3^2)\ dx = 0.$ On the other hand, some integration by parts using (1) reveals that such moments are usually not preserved by the flow; for instance, one has the law $\displaystyle \partial_t \int_{{\bf R}^3} \omega_1(t,x) (x_2^2-x_3^2)\ dx = 4\int_{{\bf R}^3} u_2(t,x) u_3(t,x)\ dx,$ and one can easily concoct examples for which the right-hand side is non-zero at time zero. This suggests that the Schwartz class may be unnecessarily restrictive for Conjecture 1.3 or Conjecture 1.5. My paper arose out of an attempt to address these three questions, and ended up obtaining partial results in all three directions. Roughly speaking, the results that address these three questions are as follows: 1. (Homogenisation) If one only assumes partial periodicity instead of full periodicity, then the forcing term ${f}$ becomes irrelevant. In particular, Conjecture 1.4 and Conjecture 1.6 are equivalent. 2. (Concentration compactness) In the ${H^1}$ category (both periodic and nonperiodic, homogeneous or nonhomogeneous), the qualitative and quantitative formulations of the Navier-Stokes global regularity problem are essentially equivalent. 3. (Localisation) The (inhomogeneous) Navier-Stokes problems in the Schwartz, smooth ${H^1}$, and finite energy categories are essentially equivalent to each other, and are also implied by the (fully) periodic version of these problems. The first two of these families of results are relatively routine, drawing on existing methods in the literature; the localisation results though are somewhat more novel, and introduce some new local energy and local enstrophy estimates which may be of independent interest. Broadly speaking, the moral to draw from these results is that the precise formulation of the Navier-Stokes equation global regularity problem is only of secondary importance; modulo a number of caveats and technicalities, the various formulations are close to being equivalent, and a breakthrough on any one of the formulations is likely to lead (either directly or indirectly) to a comparable breakthrough on any of the others. This is only a caricature of the actual implications, though. Below is the diagram from the paper indicating the various formulations of the Navier-Stokes equations, and the known implications between them: The above three streams of results are discussed in more detail below the fold.
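To spell out the vanishing-moment claim above (a short check added here, using only integration by parts and the divergence-free condition for Schwartz ${u}$): since ${\omega_1 = \partial_2 u_3 - \partial_3 u_2}$,

$\displaystyle \int_{{\bf R}^3} \omega_1 (x_2^2-x_3^2)\ dx = -2 \int_{{\bf R}^3} (x_2 u_3 + x_3 u_2)\ dx = 2 \int_{{\bf R}^3} x_2 x_3\, (\nabla \cdot u)\ dx = 0,$

where both equalities come from integrating by parts, the boundary terms vanishing thanks to the Schwartz decay.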
## Higher derivative theories

We tend to not use higher derivative theories. It turns out that there is a very good reason for this, but that reason is rarely discussed in textbooks. We will take, for concreteness, $L\left(q,\dot q, \ddot q\right)$, a Lagrangian which depends on the 2nd derivative in an essential manner. Inessential dependences are terms such as $q\ddot q$ which may be partially integrated to give ${\dot q}^2$. Mathematically, this is expressed through the necessity of being able to invert the expression $$P_2 = \frac{\partial L\left(q,\dot q, \ddot q\right)}{\partial \ddot q},$$ and get a closed form for $\ddot q \left(q, \dot q, P_2 \right)$. Note that usually we also require a similar statement for $\dot q \left(q, p\right)$, and failure in this respect is a sign of having a constrained system, possibly with gauge degrees of freedom.

In any case, the non-degeneracy leads to the Euler-Lagrange equations in the usual manner: $$\frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot q} + \frac{d^2}{dt^2}\frac{\partial L}{\partial \ddot q} = 0.$$ This is then fourth order in $t$, and so requires four initial conditions, such as $q$, $\dot q$, $\ddot q$, $q^{(3)}$. This is twice as many as usual, and so we can get a new pair of conjugate variables when we move into a Hamiltonian formalism. We follow the steps of Ostrogradski, and choose our canonical variables as $Q_1 = q$, $Q_2 = \dot q$, which leads to \begin{align} P_1 &= \frac{\partial L}{\partial \dot q} - \frac{d}{dt}\frac{\partial L}{\partial \ddot q}, \\ P_2 &= \frac{\partial L}{\partial \ddot q}. \end{align} Note that the non-degeneracy allows $\ddot q$ to be expressed in terms of $Q_1$, $Q_2$ and $P_2$ through the second equation, and the first one is only necessary to define $q^{(3)}$.

We can then proceed in the usual fashion, and find the Hamiltonian through a Legendre transform: \begin{align} H &= \sum_i P_i \dot{Q}_i - L \\ &= P_1 Q_2 + P_2 \ddot{q}\left(Q_1, Q_2, P_2\right) - L\left(Q_1, Q_2,\ddot{q}\right). \end{align} Again, as usual, we can take the time derivative of the Hamiltonian to find that it is time independent if the Lagrangian does not depend on time explicitly, and thus it can be identified as the energy of the system.

However, we now have a problem: $H$ has only a linear dependence on $P_1$, and so can be arbitrarily negative. In an interacting system this means that we can excite positive energy modes by transferring energy from the negative energy modes, and in doing so we would increase the entropy; there would simply be more particles, and so a need to put them somewhere. Thus such a system could never reach equilibrium, exploding instantly in an orgy of particle creation. This problem is in fact completely general, and applies to even higher derivatives in a similar fashion.
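As a minimal illustration of why $H$ ends up linear in $P_1$ (a toy example added for concreteness, not taken from the discussion above), take $L = \tfrac{1}{2}\ddot q^2$. Then \begin{align} P_2 &= \ddot q, \qquad P_1 = -q^{(3)}, \\ H &= P_1 Q_2 + P_2 \ddot q - L = P_1 Q_2 + \tfrac{1}{2} P_2^2. \end{align} Hamilton's equations reproduce the Euler-Lagrange equation $q^{(4)} = 0$, and the term $P_1 Q_2$ is unbounded below no matter what the remaining variables do, which is precisely the Ostrogradski instability described above.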
# pub2007.bib @comment{{This file has been generated by bib2bib 1.94}} @comment{{Command line: /usr/bin/bib2bib --quiet -c 'not journal:"Discussions"' -c year=2007 -c $type="ARTICLE" -oc pub2007.txt -ob pub2007.bib lmdplaneto.link.bib}} @article{2007Natur.450..646B, author = {{Bertaux}, J.-L. and {Vandaele}, A.-C. and {Korablev}, O. and {Villard}, E. and {Fedorova}, A. and {Fussen}, D. and {Quémerais}, E. and {Belyaev}, D. and {Mahieux}, A. and {Montmessin}, F. and {Muller}, C. and {Neefs}, E. and {Nevejans}, D. and {Wilquet}, V. and {Dubois}, J.~P. and {Hauchecorne}, A. and {Stepanov}, A. and {Vinogradov}, I. and {Rodin}, A. and {Bertaux}, J.-L. and {Nevejans}, D. and {Korablev}, O. and {Montmessin}, F. and {Vandaele}, A.-C. and {Fedorova}, A. and {Cabane}, M. and {Chassefière}, E. and {Chaufray}, J.~Y. and {Dimarellis}, E. and {Dubois}, J.~P. and {Hauchecorne}, A. and {Leblanc}, F. and {Lefèvre}, F. and {Rannou}, P. and {Quémerais}, E. and {Villard}, E. and {Fussen}, D. and {Muller}, C. and {Neefs}, E. and {van Ransbeeck}, E. and {Wilquet}, V. and {Rodin}, A. and {Stepanov}, A. and {Vinogradov}, I. and {Zasova}, L. and {Forget}, F. and {Lebonnois}, S. and {Titov}, D. and {Rafkin}, S. and {Durry}, G. and {Gérard}, J.~C. and {Sandel}, B.}, title = {{A warm layer in Venus' cryosphere and high-altitude measurements of HF, HCl, H$_{2}$O and HDO}}, journal = {\nat}, year = 2007, volume = 450, pages = {646-649}, abstract = {{Venus has thick clouds of H$_{2}$SO$_{4}$aerosol particles extending from altitudes of 40 to 60km. The 60-100km region (the mesosphere) is a transition region between the 4day retrograde superrotation at the top of the thick clouds and the solar-antisolar circulation in the thermosphere (above 100km), which has upwelling over the subsolar point and transport to the nightside. The mesosphere has a light haze of variable optical thickness, with CO, SO$_{2}$, HCl, HF, H$_{2}$O and HDO as the most important minor gaseous constituents, but the vertical distribution of the haze and molecules is poorly known because previous descent probes began their measurements at or below 60km. Here we report the detection of an extensive layer of warm air at altitudes 90-120km on the night side that we interpret as the result of adiabatic heating during air subsidence. Such a strong temperature inversion was not expected, because the night side of Venus was otherwise so cold that it was named the cryosphere' above 100km. We also measured the mesospheric distributions of HF, HCl, H$_{2}$O and HDO. HCl is less abundant than reported 40years ago. HDO/H$_{2}$O is enhanced by a factor of \~{}2.5 with respect to the lower atmosphere, and there is a general depletion of H$_{2}$O around 80-90km for which we have no explanation. }}, doi = {10.1038/nature05974}, adsurl = {http://adsabs.harvard.edu/abs/2007Natur.450..646B}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2007Natur.450..641D, author = {{Drossart}, P. and {Piccioni}, G. and {Gérard}, J.~C. and {Lopez-Valverde}, M.~A. and {Sanchez-Lavega}, A. and {Zasova}, L. and {Hueso}, R. and {Taylor}, F.~W. and {Bézard}, B. and {Adriani}, A. and {Angrilli}, F. and {Arnold}, G. and {Baines}, K.~H. and {Bellucci}, G. and {Benkhoff}, J. and {Bibring}, J.~P. and {Blanco}, A. and {Blecka}, M.~I. and {Carlson}, R.~W. and {Coradini}, A. and {di Lellis}, A. and {Encrenaz}, T. and {Erard}, S. and {Fonti}, S. and {Formisano}, V. and {Fouchet}, T. and {Garcia}, R. and {Haus}, R. and {Helbert}, J. and {Ignatiev}, N.~I. and {Irwin}, P. 
and {Langevin}, Y. and {Lebonnois}, S. and {Luz}, D. and {Marinangeli}, L. and {Orofino}, V. and {Rodin}, A.~V. and {Roos-Serote}, M.~C. and {Saggin}, B. and {Stam}, D.~M. and {Titov}, D. and {Visconti}, G. and {Zambelli}, M. and {Tsang}, C. and {Ammannito}, E. and {Barbis}, A. and {Berlin}, R. and {Bettanini}, C. and {Boccaccini}, A. and {Bonnello}, G. and {Bouyé}, M. and {Capaccioni}, F. and {Cardesin}, A. and {Carraro}, F. and {Cherubini}, G. and {Cosi}, M. and {Dami}, M. and {de Nino}, M. and {Del Vento}, D. and {di Giampietro}, M. and {Donati}, A. and {Dupuis}, O. and {Espinasse}, S. and {Fabbri}, A. and {Fave}, A. and {Ficai Veltroni}, I. and {Filacchione}, G. and {Garceran}, K. and {Ghomchi}, Y. and {Giustizi}, M. and {Gondet}, B. and {Hello}, Y. and {Henry}, F. and {Hofer}, S. and {Huntzinger}, G. and {Kachlicki}, J. and {Knoll}, R. and {Kouach}, D. and {Mazzoni}, A. and {Melchiorri}, R. and {Mondello}, G. and {Monti}, F. and {Neumann}, C. and {Nuccilli}, F. and {Parisot}, J. and {Pasqui}, C. and {Perferi}, S. and {Peter}, G. and {Piacentino}, A. and {Pompei}, C. and {Réess}, J.-M. and {Rivet}, J.-P. and {Romano}, A. and {Russ}, N. and {Santoni}, M. and {Scarpelli}, A. and {Sémery}, A. and {Soufflot}, A. and {Stefanovitch}, D. and {Suetta}, E. and {Tarchi}, F. and {Tonetti}, N. and {Tosi}, F. and {Ulmer}, B.}, title = {{A dynamic upper atmosphere of Venus as revealed by VIRTIS on Venus Express}}, journal = {\nat}, year = 2007, volume = 450, pages = {641-645}, abstract = {{The upper atmosphere of a planet is a transition region in which energy is transferred between the deeper atmosphere and outer space. Molecular emissions from the upper atmosphere (90-120km altitude) of Venus can be used to investigate the energetics and to trace the circulation of this hitherto little-studied region. Previous spacecraft and ground-based observations of infrared emission from CO$_{2}$, O$_{2}$and NO have established that photochemical and dynamic activity controls the structure of the upper atmosphere of Venus. These data, however, have left unresolved the precise altitude of the emission owing to a lack of data and of an adequate observing geometry. Here we report measurements of day-side CO$_{2}$non-local thermodynamic equilibrium emission at 4.3{\micro}m, extending from 90 to 120km altitude, and of night-side O$_{2}$emission extending from 95 to 100km. The CO$_{2}$emission peak occurs at \~{}115km and varies with solar zenith angle over a range of \~{}10km. This confirms previous modelling, and permits the beginning of a systematic study of the variability of the emission. The O$_{2}$peak emission happens at 96km+/-1km, which is consistent with three-body recombination of oxygen atoms transported from the day side by a global thermospheric sub-solar to anti-solar circulation, as previously predicted. }}, doi = {10.1038/nature06140}, adsurl = {http://adsabs.harvard.edu/abs/2007Natur.450..641D}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2007Natur.450..637P, author = {{Piccioni}, G. and {Drossart}, P. and {Sanchez-Lavega}, A. and {Hueso}, R. and {Taylor}, F.~W. and {Wilson}, C.~F. and {Grassi}, D. and {Zasova}, L. and {Moriconi}, M. and {Adriani}, A. and {Lebonnois}, S. and {Coradini}, A. and {Bézard}, B. and {Angrilli}, F. and {Arnold}, G. and {Baines}, K.~H. and {Bellucci}, G. and {Benkhoff}, J. and {Bibring}, J.~P. and {Blanco}, A. and {Blecka}, M.~I. and {Carlson}, R.~W. and {di Lellis}, A. and {Encrenaz}, T. and {Erard}, S. and {Fonti}, S. and {Formisano}, V. 
and {Fouchet}, T. and {Garcia}, R. and {Haus}, R. and {Helbert}, J. and {Ignatiev}, N.~I. and {Irwin}, P.~G.~J. and {Langevin}, Y. and {Lopez-Valverde}, M.~A. and {Luz}, D. and {Marinangeli}, L. and {Orofino}, V. and {Rodin}, A.~V. and {Roos-Serote}, M.~C. and {Saggin}, B. and {Stam}, D.~M. and {Titov}, D. and {Visconti}, G. and {Zambelli}, M. and {Ammannito}, E. and {Barbis}, A. and {Berlin}, R. and {Bettanini}, C. and {Boccaccini}, A. and {Bonnello}, G. and {Bouye}, M. and {Capaccioni}, F. and {Cardesin Moinelo}, A. and {Carraro}, F. and {Cherubini}, G. and {Cosi}, M. and {Dami}, M. and {de Nino}, M. and {Del Vento}, D. and {di Giampietro}, M. and {Donati}, A. and {Dupuis}, O. and {Espinasse}, S. and {Fabbri}, A. and {Fave}, A. and {Veltroni}, I.~F. and {Filacchione}, G. and {Garceran}, K. and {Ghomchi}, Y. and {Giustini}, M. and {Gondet}, B. and {Hello}, Y. and {Henry}, F. and {Hofer}, S. and {Huntzinger}, G. and {Kachlicki}, J. and {Knoll}, R. and {Driss}, K. and {Mazzoni}, A. and {Melchiorri}, R. and {Mondello}, G. and {Monti}, F. and {Neumann}, C. and {Nuccilli}, F. and {Parisot}, J. and {Pasqui}, C. and {Perferi}, S. and {Peter}, G. and {Piacentino}, A. and {Pompei}, C. and {Reess}, J.-M. and {Rivet}, J.-P. and {Romano}, A. and {Russ}, N. and {Santoni}, M. and {Scarpelli}, A. and {Semery}, A. and {Soufflot}, A. and {Stefanovitch}, D. and {Suetta}, E. and {Tarchi}, F. and {Tonetti}, N. and {Tosi}, F. and {Ulmer}, B. }, title = {{South-polar features on Venus similar to those near the north pole}}, journal = {\nat}, year = 2007, volume = 450, pages = {637-640}, abstract = {{Venus has no seasons, slow rotation and a very massive atmosphere, which is mainly carbon dioxide with clouds primarily of sulphuric acid droplets. Infrared observations by previous missions to Venus revealed a bright dipole' feature surrounded by a cold collar' at its north pole. The polar dipole is a double-eye' feature at the centre of a vast vortex that rotates around the pole, and is possibly associated with rapid downwelling. The polar cold collar is a wide, shallow river of cold air that circulates around the polar vortex. One outstanding question has been whether the global circulation was symmetric, such that a dipole feature existed at the south pole. Here we report observations of Venus' south-polar region, where we have seen clouds with morphology much like those around the north pole, but rotating somewhat faster than the northern dipole. The vortex may extend down to the lower cloud layers that lie at about 50km height and perhaps deeper. The spectroscopic properties of the clouds around the south pole are compatible with a sulphuric acid composition. }}, doi = {10.1038/nature06209}, adsurl = {http://adsabs.harvard.edu/abs/2007Natur.450..637P}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2007JGRE..11211S90M, author = {{Montmessin}, F. and {Gondet}, B. and {Bibring}, J.-P. and {Langevin}, Y. and {Drossart}, P. and {Forget}, F. 
and {Fouchet}, T.}, title = {{Hyperspectral imaging of convective CO$_{2}$ice clouds in the equatorial mesosphere of Mars}}, journal = {Journal of Geophysical Research (Planets)}, keywords = {Atmospheric Processes: Planetary meteorology (5445, 5739), Atmospheric Processes: Clouds and aerosols, Atmospheric Processes: Middle atmosphere dynamics (0341, 0342), Atmospheric Processes: Mesospheric dynamics, Planetary Sciences: Solid Surface Planets: Remote sensing}, year = 2007, volume = 112, number = e11, eid = {E11S90}, pages = {E11S90}, abstract = {{A unique feature of the Martian climate is the possibility for carbon dioxide, the main atmospheric constituent, to condense as ice. CO$_{2}$ice is usually detected as frost but is also known to exist as clouds. This paper presents the first unambiguous observation of CO$_{2}$ice clouds on Mars. These images were obtained by the visible and near-infrared imaging spectrometer OMEGA on board Mars Express. The data set encompasses 19 different occurrences. Compositional identification is based on the detection of a diagnostic spectral feature around 4.26 {$\mu$}m which is produced by resonant scattering of solar photons by mesospheric CO$_{2}$ice particles in a spectral interval otherwise dominated by saturated gaseous absorption. Observed clouds exhibit a strong seasonal and geographic dependence, concentrating in the near-equatorial regions during two periods before and after northern summer solstice (Ls 45{\deg} and 135{\deg}). Radiative transfer modeling indicates that the 4.26 {$\mu$}m feature is very sensitive to cloud altitude, opacity, and particle size, thereby explaining the variety of spectra associated with the cloud images. On two orbits, the simultaneous detection of clouds with their shadow provides straightforward and robust estimates of cloud properties. These images confirm the conclusions established from modeling: clouds are thick, with normal opacities greater than 0.2 in the near infrared, and are lofted in the mesosphere above 80 km. The mean radius of CO$_{2}$ice crystals is found to exceed 1 {$\mu$}m, an unexpected value considering this altitude range. This finding implies the existence of high-altitude atmospheric updrafts which are strong enough to counteract the rapid gravitational fall of particles. This statement is consistent with the cumuliform morphology of the clouds which may be linked to a moist convective origin generated by the latent heat released during CO$_{2}$condensation. }}, doi = {10.1029/2007JE002944}, adsurl = {http://adsabs.harvard.edu/abs/2007JGRE..11211S90M}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2007Icar..191..236D, author = {{De La Haye}, V. and {Waite}, J.~H. and {Cravens}, T.~E. and {Nagy}, A.~F. and {Johnson}, R.~E. and {Lebonnois}, S. and {Robertson}, I.~P. }, title = {{Titan's corona: The contribution of exothermic chemistry}}, journal = {\icarus}, year = 2007, volume = 191, pages = {236-250}, abstract = {{The contribution of exothermic ion and neutral chemistry to Titan's corona is studied. The production rates for fast neutrals N$_{2}$, CH$_{4}$, H, H$_{2}$,$^{3}$CH$_{2}$, CH$_{3}$, C$_{2}$H$_{4}$, C$_{2}$H$_{5}$, C$_{2}$H$_{6}$, N($^{4}$S), NH, and HCN are determined using a coupled ion and neutral model of Titan's upper atmosphere. After production, the formation of the suprathermal particles is modeled using a two-stream simulation, as they travel simultaneously through a thermal mixture of N$_{2}$, CH$_{4}$, and H$_{2}$. 
The resulting suprathermal fluxes, hot density profiles, and energy distributions are compared to the N$_{2}$and CH$_{4}$INMS exospheric data presented in [De La Haye, V., Waite Jr., J.H., Johnson, R.E., Yelle, R.V., Cravens, T.E., Luhmann, J.G., Kasprzak, W.T., Gell, D.A., Magee, B., Leblanc, F., Michael, M., Jurac, S., Robertson, I.P., 2007. J. Geophys. Res., doi:10.1029/2006JA012222, in press], and are found insufficient for producing the suprathermal populations measured. Global losses of nitrogen atoms and carbon atoms in all forms due to exothermic chemistry are estimated to be 8.3{\times}10 Ns and 7.2{\times}10 Cs. }}, doi = {10.1016/j.icarus.2007.04.031}, adsurl = {http://adsabs.harvard.edu/abs/2007Icar..191..236D}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2007P&SS...55.1673B, author = {{Bertaux}, J.-L. and {Nevejans}, D. and {Korablev}, O. and {Villard}, E. and {Quémerais}, E. and {Neefs}, E. and {Montmessin}, F. and {Leblanc}, F. and {Dubois}, J.~P. and {Dimarellis}, E. and {Hauchecorne}, A. and {Lefèvre}, F. and {Rannou}, P. and {Chaufray}, J.~Y. and {Cabane}, M. and {Cernogora}, G. and {Souchon}, G. and {Semelin}, F. and {Reberac}, A. and {Van Ransbeek}, E. and {Berkenbosch}, S. and {Clairquin}, R. and {Muller}, C. and {Forget}, F. and {Hourdin}, F. and {Talagrand}, O. and {Rodin}, A. and {Fedorova}, A. and {Stepanov}, A. and {Vinogradov}, I. and {Kiselev}, A. and {Kalinnikov}, Y. and {Durry}, G. and {Sandel}, B. and {Stern}, A. and {Gérard}, J.~C. }, title = {{SPICAV on Venus Express: Three spectrometers to study the global structure and composition of the Venus atmosphere}}, journal = {\planss}, year = 2007, volume = 55, pages = {1673-1700}, abstract = {{Spectroscopy for the investigation of the characteristics of the atmosphere of Venus (SPICAV) is a suite of three spectrometers in the UV and IR range with a total mass of 13.9 kg flying on the Venus Express (VEX) orbiter, dedicated to the study of the atmosphere of Venus from ground level to the outermost hydrogen corona at more than 40,000 km. It is derived from the SPICAM instrument already flying on board Mars Express (MEX) with great success, with the addition of a new IR high-resolution spectrometer, solar occultation IR (SOIR), working in the solar occultation mode. The instrument consists of three spectrometers and a simple data processing unit providing the interface of these channels with the spacecraft. A UV spectrometer (118-320 nm, resolution 1.5 nm) is identical to the MEX version. It is dedicated to nadir viewing, limb viewing and vertical profiling by stellar and solar occultation. In nadir orientation, SPICAV UV will analyse the albedo spectrum (solar light scattered back from the clouds) to retrieve SO$_{2}$, and the distribution of the UV-blue absorber (of still unknown origin) on the dayside with implications for cloud structure and atmospheric dynamics. On the nightside, {$\gamma$} and {$\delta$} bands of NO will be studied, as well as emissions produced by electron precipitations. In the stellar occultation mode the UV sensor will measure the vertical profiles of CO$_{2}$, temperature, SO$_{2}$, SO, clouds and aerosols. The density/temperature profiles obtained with SPICAV will constrain and aid in the development of dynamical atmospheric models, from cloud top ({\tilde}60 km) to 160 km in the atmosphere. This is essential for future missions that would rely on aerocapture and aerobraking. 
UV observations of the upper atmosphere will allow studies of the ionosphere through the emissions of CO, CO$^{+}$, and CO$_{2}^{+}$, and its direct interaction with the solar wind. It will study the H corona, with its two different scale heights, and it will allow a better understanding of escape mechanisms and estimates of their magnitude, crucial for insight into the long-term evolution of the atmosphere. The SPICAV VIS-IR sensor (0.7-1.7 {$\mu$}m, resolution 0.5-1.2 nm) employs a pioneering technology: an acousto-optical tunable filter (AOTF). On the nightside, it will study the thermal emission peeping through the clouds, complementing the observations of both VIRTIS and Planetary Fourier Spectrometer (PFS) on VEX. In solar occultation mode this channel will study the vertical structure of H$_{2}$O, CO$_{2}$, and aerosols. The SOIR spectrometer is a new solar occultation IR spectrometer in the range {$\lambda$}=2.2-4.3 {$\mu$}m, with a spectral resolution {$\lambda$}/{$\Delta$} {$\lambda$}$\gt$15,000, the highest on board VEX. This new concept includes a combination of an echelle grating and an AOTF crystal to sort out one order at a time. The main objective is to measure HDO and H$_{2}$O in solar occultation, in order to characterize the escape of D atoms from the upper atmosphere and give more insight about the evolution of water on Venus. It will also study isotopes of CO$_{2}$and minor species, and provides a sensitive search for new species in the upper atmosphere of Venus. It will attempt to measure also the nightside emission, which would allow a sensitive measurement of HDO in the lower atmosphere, to be compared to the ratio in the upper atmosphere, and possibly discover new minor atmospheric constituents. }}, doi = {10.1016/j.pss.2007.01.016}, adsurl = {http://adsabs.harvard.edu/abs/2007P%26SS...55.1673B}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2007P&SS...55.1653D, author = {{Drossart}, P. and {Piccioni}, G. and {Adriani}, A. and {Angrilli}, F. and {Arnold}, G. and {Baines}, K.~H. and {Bellucci}, G. and {Benkhoff}, J. and {Bézard}, B. and {Bibring}, J.-P. and {Blanco}, A. and {Blecka}, M.~I. and {Carlson}, R.~W. and {Coradini}, A. and {Di Lellis}, A. and {Encrenaz}, T. and {Erard}, S. and {Fonti}, S. and {Formisano}, V. and {Fouchet}, T. and {Garcia}, R. and {Haus}, R. and {Helbert}, J. and {Ignatiev}, N.~I. and {Irwin}, P.~G.~J. and {Langevin}, Y. and {Lebonnois}, S. and {Lopez-Valverde}, M.~A. and {Luz}, D. and {Marinangeli}, L. and {Orofino}, V. and {Rodin}, A.~V. and {Roos-Serote}, M.~C. and {Saggin}, B. and {Sanchez-Lavega}, A. and {Stam}, D.~M. and {Taylor}, F.~W. and {Titov}, D. and {Visconti}, G. and {Zambelli}, M. and {Hueso}, R. and {Tsang}, C.~C.~C. and {Wilson}, C.~F. and {Afanasenko}, T.~Z. }, title = {{Scientific goals for the observation of Venus by VIRTIS on ESA/Venus express mission}}, journal = {\planss}, year = 2007, volume = 55, pages = {1653-1672}, abstract = {{The Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) on board the ESA/Venus Express mission has technical specifications well suited for many science objectives of Venus exploration. VIRTIS will both comprehensively explore a plethora of atmospheric properties and processes and map optical properties of the surface through its three channels, VIRTIS-M-vis (imaging spectrometer in the 0.3-1 {$\mu$}m range), VIRTIS-M-IR (imaging spectrometer in the 1-5 {$\mu$}m range) and VIRTIS-H (aperture high-resolution spectrometer in the 2-5 {$\mu$}m range). 
The atmospheric composition below the clouds will be repeatedly measured in the night side infrared windows over a wide range of latitudes and longitudes, thereby providing information on Venus's chemical cycles. In particular, CO, H$_{2}$O, OCS and SO$_{2}$can be studied. The cloud structure will be repeatedly mapped from the brightness contrasts in the near-infrared night side windows, providing new insights into Venusian meteorology. The global circulation and local dynamics of Venus will be extensively studied from infrared and visible spectral images. The thermal structure above the clouds will be retrieved in the night side using the 4.3 {$\mu$}m fundamental band of CO$_{2}$. The surface of Venus is detectable in the short-wave infrared windows on the night side at 1.01, 1.10 and 1.18 {$\mu$}m, providing constraints on surface properties and the extent of active volcanism. Many more tentative studies are also possible, such as lightning detection, the composition of volcanic emissions, and mesospheric wave propagation. }}, doi = {10.1016/j.pss.2007.01.003}, adsurl = {http://adsabs.harvard.edu/abs/2007P%26SS...55.1653D}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2007Icar..190...32F, author = {{Fouchet}, T. and {Lellouch}, E. and {Ignatiev}, N.~I. and {Forget}, F. and {Titov}, D.~V. and {Tschimmel}, M. and {Montmessin}, F. and {Formisano}, V. and {Giuranna}, M. and {Maturilli}, A. and {Encrenaz}, T. }, title = {{Martian water vapor: Mars Express PFS/LW observations}}, journal = {\icarus}, year = 2007, volume = 190, pages = {32-49}, abstract = {{We present the seasonal and geographical variations of the martian water vapor monitored from the Planetary Fourier Spectrometer Long Wavelength Channel aboard the Mars Express spacecraft. Our dataset covers one martian year (end of Mars Year 26, Mars Year 27), but the seasonal coverage is far from complete. The seasonal and latitudinal behavior of the water vapor is globally consistent with previous datasets, Viking Orbiter Mars Atmospheric Water Detectors (MAWD) and Mars Global Surveyor Thermal Emission Spectrometer (MGS/TES), and with simultaneous results obtained from other Mars Express instruments, OMEGA and SPICAM. However, our absolute water columns are lower and higher by a factor of 1.5 than the values obtained by TES and SPICAM, respectively. In particular, we retrieve a Northern midsummer maximum of 60 pr-{$\mu$}m, lower than the 100-pr-{$\mu$}m observed by TES. The geographical distribution of water exhibits two local maxima at low latitudes, located over Tharsis and Arabia. Global Climate Model (GCM) simulations suggest that these local enhancements are controlled by atmospheric dynamics. During Northern spring, we observe a bulge of water vapor over the seasonal polar cap edge, consistent with the northward transport of water from the retreating seasonal cap to the permanent polar cap. In terms of vertical distribution, we find that the water volume mixing ratio over the large volcanos remains constant with the surface altitude within a factor of two. However, on the whole dataset we find that the water column, normalized to a fixed pressure, is anti-correlated with the surface pressure, indicating a vertical distribution intermediate between control by atmospheric saturation and confinement to a surface layer. This anti-correlation is not reproduced by GCM simulations of the water cycle, which do not include exchange between atmospheric and subsurface water. 
This situation suggests a possible role for regolith-atmosphere exchange in the martian water cycle. }}, doi = {10.1016/j.icarus.2007.03.003}, adsurl = {http://adsabs.harvard.edu/abs/2007Icar..190...32F}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2007JGRE..112.8S17M, author = {{Montmessin}, F. and {Haberle}, R.~M. and {Forget}, F. and {Langevin}, Y. and {Clancy}, R.~T. and {Bibring}, J.-P.}, title = {{On the origin of perennial water ice at the south pole of Mars: A precession-controlled mechanism?}}, journal = {Journal of Geophysical Research (Planets)}, keywords = {Atmospheric Composition and Structure: Evolution of the atmosphere (1610, 8125), History of Geophysics: Planetology, Planetary Sciences: Solid Surface Planets: Origin and evolution, Planetary Sciences: Solid Surface Planets: Ices, Planetary Sciences: Solid Surface Planets: Meteorology (3346)}, year = 2007, volume = 112, eid = {E08S17}, pages = {E08S17}, abstract = {{The poles of Mars are known to have recorded recent ($\lt$10$^{7}$years) climatic changes. While the south polar region appears to have preserved its million-year-old environment from major resurfacing events, except for the small portion containing the CO$_{2}$residual cap, the discovery of residual water ice units in areas adjacent to the cap provides compelling evidence for recent glaciological activity. The mapping and characterization of these H$_{2}$O-rich terrains by Observatoire pour la Minéralogie, l'Eau, les Glaces et l'Activité (OMEGA) on board Mars Express, which have supplemented earlier findings by Mars Odyssey and Mars Global Surveyor, have raised a number of questions related to their origin. We propose that these water ice deposits are the relics of Mars' orbit precession cycle and that they were laid down when perihelion was synchronized with northern summer, i.e., more than 10,000 years ago. We favor precession over other possible explanations because (1) as shown by our General Circulation Model (GCM) and previous studies, current climate is not conducive to the accumulation of water at the south pole due to an unfavorable volatile transport and insolation configuration, (2) the residual CO$_{2}$ice cap, which is known to cold trap water molecules on its surface and which probably controls the current extent of the water ice units, is geologically younger, (3) our GCM shows that 21,500 years ago, when perihelion occurred during northern spring, water ice at the north pole was no longer stable and accumulated instead near the south pole with rates as high as 1 mm yr$^{-1}$. This could have led to the formation of a meters-thick circumpolar water ice mantle. As perihelion slowly shifted back to the current value, southern summer insolation intensified and the water ice layer became unstable. The layer recessed poleward until the residual CO$_{2}$ice cover eventually formed on top of it and protected water ice from further sublimation. In this polar accumulation process, water ice clouds play a critical role since they regulate the exchange of water between hemispheres. The so-called Clancy effect,'' which sequesters water in the spring/summer hemisphere coinciding with aphelion due to cloud sedimentation, is demonstrated to be comparable in magnitude to the circulation bias forced by the north-to-south topographic dichotomy. 
However, we predict that the response of Mars' water cycle to the precession cycle should be asymmetric between hemispheres not only because of the topographic bias in circulation but also because of an asymmetry in the dust cycle. We predict that under a reversed perihelion'' climate, dust activity during northern summer is less pronounced than during southern summer in the opposite perihelion configuration (i.e., today's regime). When averaged over a precession cycle, this reduced potential for dust lifting will force a significantly colder summer in the north and, by virtue of the Clancy effect, will curtail the ability of the northern hemisphere to transfer volatiles to the south. This process may have helped create the observed morphological differences in the layered deposits between the poles and could help explain the large disparity in their resurfacing ages. }}, doi = {10.1029/2007JE002902}, adsurl = {http://adsabs.harvard.edu/abs/2007JGRE..112.8S17M}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2007JGRE..112.8S16S, author = {{Spiga}, A. and {Forget}, F. and {Dolla}, B. and {Vinatier}, S. and {Melchiorri}, R. and {Drossart}, P. and {Gendrin}, A. and {Bibring}, J.-P. and {Langevin}, Y. and {Gondet}, B.}, title = {{Remote sensing of surface pressure on Mars with the Mars Express/OMEGA spectrometer: 2. Meteorological maps}}, journal = {Journal of Geophysical Research (Planets)}, keywords = {Planetary Sciences: Solar System Objects: Mars, Atmospheric Composition and Structure: Planetary atmospheres (5210, 5405, 5704), Atmospheric Processes: Mesoscale meteorology, Exploration Geophysics: Remote sensing, Atmospheric Composition and Structure: Pressure, density, and temperature}, year = 2007, volume = 112, eid = {E08S16}, pages = {E08S16}, abstract = {{Surface pressure measurements help to achieve a better understanding of the main dynamical phenomena occurring in the atmosphere of a planet. The use of the Mars Express OMEGA visible and near-IR imaging spectrometer allows us to tentatively perform an unprecedented remote sensing measurement of Martian surface pressure. OMEGA reflectances in the CO$_{2}$absorption band at 2 {$\mu$}m are used to retrieve a hydrostatic estimation of surface pressure (see companion paper by Forget et al. (2007)) with a precision sufficient to draw maps of this field and thus analyze meteorological events in the Martian atmosphere. Prior to any meteorological analysis, OMEGA observations have to pass quality controls on insolation and albedo conditions, atmosphere dust opacity, and occurrence of water ice clouds and frosts. For the selected observations, registration shifts with the MOLA reference are corrected. Sea-level'' surface pressure reduction is then carried out in order to remove the topographical component of the surface pressure field. Three main phenomena are observed in the resulting OMEGA surface pressure maps: horizontal pressure gradients, atmospheric oscillations, and pressure perturbations in the vicinity of topographical obstacles. The observed pressure oscillations are identified as possible signatures of phenomena such as inertia-gravity waves or convective rolls. The pressure perturbations detected around the Martian hills and craters may be the signatures of complex interactions between an incoming flow and topographical obstacles. Highly idealized mesoscale simulations using the WRF model enable a preliminary study of these complex interactions, but more realistic mesoscale simulations are necessary. 
The maps provide valuable insights for future synoptic and mesoscale modeling, which will in turn help in the interpretation of observations. }}, doi = {10.1029/2006JE002870}, adsurl = {http://adsabs.harvard.edu/abs/2007JGRE..112.8S16S}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2007JGRE..112.8S15F, author = {{Forget}, F. and {Spiga}, A. and {Dolla}, B. and {Vinatier}, S. and {Melchiorri}, R. and {Drossart}, P. and {Gendrin}, A. and {Bibring}, J.-P. and {Langevin}, Y. and {Gondet}, B.}, title = {{Remote sensing of surface pressure on Mars with the Mars Express/OMEGA spectrometer: 1. Retrieval method}}, journal = {Journal of Geophysical Research (Planets)}, keywords = {Planetary Sciences: Solar System Objects: Mars, Atmospheric Composition and Structure: Planetary atmospheres (5210, 5405, 5704), Atmospheric Processes: Mesoscale meteorology, Exploration Geophysics: Remote sensing, Atmospheric Composition and Structure: Pressure, density, and temperature}, year = 2007, volume = 112, eid = {E08S15}, pages = {E08S15}, abstract = {{Observing and analyzing the variations of pressure on the surface of a planet is essential to understand the dynamics of its atmosphere. On Mars the absorption by atmospheric CO$_{2}$of the solar light reflected on the surface allows us to measure the surface pressure by remote sensing. We use the imaging spectrometer OMEGA aboard Mars Express, which provides an excellent signal to noise ratio and the ability to produce maps of surface pressure with a resolution ranging from 400 m to a few kilometers. Surface pressure is measured by fitting spectra of the CO$_{2}$absorption band centered at 2 {$\mu$}m. To process the hundreds of thousands of pixels present in each OMEGA image, we have developed a fast and accurate algorithm based on a line-by-line radiative transfer model which includes scattering and absorption by dust aerosols. In each pixel the temperature profile, the dust opacity, and the surface spectrum are carefully determined from the OMEGA data set or from other sources to maximize the accuracy of the retrieval. We estimate the 1-{$\sigma$} relative error to be around 7 Pa in bright regions and about 10 Pa in darker regions, with a possible systematic bias on the absolute pressure lower than 30 Pa (4\%). The method is first tested by comparing an OMEGA pressure retrieval obtained over the Viking Lander 1 (VL1) landing site with in situ measurements recorded 30 years ago by the VL1 barometer. The retrievals are further validated using a surface pressure predictor which combines the VL1 pressure records with the MOLA topography and meteorological pressure gradients simulated with a General Circulation Model. A good agreement is obtained. In particular, OMEGA is able to monitor the seasonal variations of the surface pressure in Isidis Planitia. Such a tool can be applied to detect meteorological phenomena, as described by Spiga et al. (2007). }}, doi = {10.1029/2006JE002871}, adsurl = {http://adsabs.harvard.edu/abs/2007JGRE..112.8S15F}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2007JGRE..112.8S12L, author = {{Langevin}, Y. and {Bibring}, J.-P. and {Montmessin}, F. and {Forget}, F. and {Vincendon}, M. and {Douté}, S. and {Poulet}, F. 
and {Gondet}, B.}, title = {{Observations of the south seasonal cap of Mars during recession in 2004-2006 by the OMEGA visible/near-infrared imaging spectrometer on board Mars Express}}, journal = {Journal of Geophysical Research (Planets)}, keywords = {Planetary Sciences: Solid Surface Planets: General or miscellaneous, Planetary Sciences: Solid Surface Planets: Ices, Planetary Sciences: Solid Surface Planets: Polar regions, Planetary Sciences: Solid Surface Planets: Remote sensing}, year = 2007, volume = 112, eid = {E08S12}, pages = {E08S12}, abstract = {{The OMEGA visible/near-infrared imaging spectrometer on board Mars Express has observed the southern seasonal cap in late 2004 and 2005 and then in the summer of 2006. These observations extended from the period of maximum extension, close to the southern winter solstice, to the end of the recession at L$_{s}$325{\deg}. The spectral range and spectral resolution of OMEGA make it possible to monitor the extent and effective grain size of CO$_{2}$ice and H$_{2}$O ice on the ground, the level of contamination of CO$_{2}$ice and H$_{2}$O ice by dust, and the column density of {$\mu$}m-sized ice grains in the atmosphere. The CO$_{2}$seasonal cap is very clean and clear in early southern winter. Contamination by H$_{2}$O ice spreads eastward from the Hellas basin until the southern spring equinox. During southern spring and summer, there is a very complex evolution in terms of effective grain size of CO$_{2}$ice and contamination by dust or H$_{2}$O ice. H$_{2}$O ice does not play a significant role close to the southern summer solstice. Contamination of CO$_{2}$ice by H$_{2}$O ice is only observed close to the end of the recession, as well as the few H$_{2}$O ice patches already reported by Bibring et al. (2004a). These observations have been compared to the results of a general circulation model, with good qualitative agreement on the distribution of H$_{2}$O ice on the surface and in the atmosphere. Resolving the remaining discrepancies will improve our understanding of the water cycle on Mars. }}, doi = {10.1029/2006JE002841}, adsurl = {http://adsabs.harvard.edu/abs/2007JGRE..112.8S12L}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2007P&SS...55.1346G, author = {{Grassi}, D. and {Formisano}, V. and {Forget}, F. and {Fiorenza}, C. and {Ignatiev}, N.~I. and {Maturilli}, A. and {Zasova}, L.~V.}, title = {{The martian atmosphere in the region of Hellas basin as observed by the planetary Fourier spectrometer (PFS-MEX)}}, journal = {\planss}, year = 2007, volume = 55, pages = {1346-1357}, abstract = {{This work presents a review of the observations acquired by the planetary Fourier spectrometer (PFS) in the region of the Hellas basin. Taking advantage of the high spectral resolution of PFS, the vertical air temperature profile can be investigated with a previously unexperienced vertical resolution. Extensive comparisons with the expectations of EMCD 4.0 database highlight moderate discrepancies, strongly dependant on season. Namely, the morning observations acquired around L$_{s}\$=45{\deg} show a series of temperature deficiencies with recurrent spatial patterns in different observations, correlated with the topography profile. Trends of integrated dust loads as a function of the field of view (FOV) elevation are also described. Values are consistent with the retrieval hypothesis of a dust scale height equal to the gas one, even far from the season of main dust storms. 
}}, doi = {10.1016/j.pss.2006.12.006}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2007JGRE..112.6012L, author = {{Levrard}, B. and {Forget}, F. and {Montmessin}, F. and {Laskar}, J. }, title = {{Recent formation and evolution of northern Martian polar layered deposits as inferred from a Global Climate Model}}, journal = {Journal of Geophysical Research (Planets)}, keywords = {Planetary Sciences: Solid Surface Planets: Atmospheres (0343, 1060), Planetary Sciences: Solid Surface Planets: Orbital and rotational dynamics (1221), Planetary Sciences: Solid Surface Planets: Polar regions, Planetary Sciences: Solid Surface Planets: Meteorology (3346), Planetary Sciences: Solar System Objects: Mars}, year = 2007, volume = 112, eid = {E06012}, pages = {E06012}, abstract = {{We present a time-marching model which simulates the exchange of water ice between the Martian northern cap, the tropics, and a high-latitude surface reservoir. Net annual exchange rates of water and their sensitivity to variations in orbital/rotational parameters are examined using the Martian water cycle modeled by the LMD three-dimensional Global Climate Model. These rates are propagated over the last 10 Myr to follow the thickness of the reservoirs. The effect of a sublimation dust lag is taken account to test simple models of layer formation. Periods of high mean polar summer insolation (\~{}5-10 Ma ago) lead to a rapid exhaustion of a northern polar cap and a prolonged formation of tropical glaciers. The formation of a northern cap and of a high-latitude icy mantle may have started 4 Ma ago with the average decrease of polar insolation. Tropical ice may have disappeared around 2.7 Ma ago, but small glaciers could have formed during the last peaks of polar summer insolation. Over the last 4 Myr, most of the present cap may have formed at the expense of tropical and high-latitude reservoirs forming distinct layers at almost each \~{}51-kyr/120-kyr insolation cycle. Layers thickness ranges from 10 to 80 m, variations being produced by the modulation of the obliquity with \~{}2.4 and 1.3 Myr periods. Because only \~{}30 insolation cycles have occurred since 4 Ma ago, we found an inconsistency between the recent astronomical forcing, the observed number of layers, and simple astronomically based scenarios of layers formation. }}, doi = {10.1029/2006JE002772},
How to map short sequences to long reads, recovering all multiply-mapped high-quality matches The dilemma: I have around one million short sequences (21 bp to several 100s of basepairs) for which I need to identify all occurrences of in 20-30x coverage noisy long reads (both pacbio and ONT). All of the short sequences and long reads are derived from the same individual, and there are enough short sequences given the organism's genome size that each long read should have at least one or multiple short sequences. An example: Given three short sequences, AA & BBBB & CCCCCC , how to identify all occurrences of these sequences in long, uncorrected reads. Sequences to identify in long reads: - AA - BBBB - CCCCCC Long Reads: - 1: ------AA-----------------------CCCCCC------ - 2: ------AA--------BBBB----------------------- - 3: ---BBBB-----------CCCC-- - 4: ------------------AA----- In this example, the output sam file will contain high MAPQ hits for short sequence AA to reads (1, 2, 3), for short sequence BBBB to reads (2, 3). For short sequence CCCCCC there will be a high MAPQ hit to read 1 and a lower MAPQ hit to read 3 Limitations: • Due to the requirements of this project, I cannot perform a de novo assembly then map the short sequences to the assembly output. It is necessary to directly map the short sequences to the long reads. • I cannot correct the long reads before I map/align the short sequences to them. • The alignment/mapping should sensitively detect the short sequences in the long reads. • The short sequences may be anywhere from 1Mbp to 30Mbp in total bases. • The long reads may be anywhere from 3Gbp to 90Gbp in total bases. So the mapper/alignment technique is hopefully fast. Sensitivity is more important though. Research so far: • bwa mem does not output multiply-mapped in a predictable manner that fits this use case https://www.biostars.org/p/304614/ • blat seems to output many hits for a single short sequence, but there are no out-of-the-box parameters for mapping short reads to long reads. • Maybe multiple sequence alignment is the most sensitive, but wouldn't that entail running (no. of short seqs) * (no. of long reads) alignments? • Don't map short reads to noisy long reads. This won't work. As you already have decent coverage, assemble first (or correct long reads by themselves with canu) and then map. It will be faster and give you much better result. Sep 6 '18 at 14:08 • Hi user17818 - unfortunately that isn't an option in this scenario based on the requirements for this project. I'll update my question to better clarify that de novo assembly, then mapping the short sequences is not an option. Sep 7 '18 at 6:13 • This is probably an intermediate problem of a larger project. If so, you need to change the strategy. Mapping a 21bp oligo to 90Gb noisy reads will lead to many false positives and negatives. This is a theoretical limitation that no direct mapping algorithms can get around. Sep 7 '18 at 11:52 • You're right - there will be false negatives and positives. My goal is to minimize them by finding the best mapping strategy to get multiple mappings of each query. The size range of the illumina-derived "short" query sequences vary from 21 to hundreds/thousands of bp. The chances of correctly mapping the short queries to a long read will increase as the lengths of the illumina-derived queries increase. The kmer space of even a 31-mer is 4.61x10^18, or around 9 orders of magnitude larger than the kmer space of a human genome. 
A few indels in the long reads should still be uniquely mappable. Sep 8 '18 at 2:31 • "Uniquely mapped" doesn't mean the mapping is correct. This is particularly true when you look for all partial mappings. For >100bp sequences, you may use minimap2, taking Illumina sequences as the reference. It won't work well with shorter sequences, though. Sep 9 '18 at 0:24 1 Answer LAST has given the best results for me when I've tried to do this, although I agree with @user172818 that it's not a good idea to map really short reads. This is due to a combination of natural sequence duplication in long DNA sequences (e.g. see here), as well as abundant base calling differences present in single-molecule sequencing. Minimising error is not necessarily going to be the best option, and concentrating only on unique hits will miss a lot of signal. There are frequently multiple identically plausible positions, even at zero error, and the long reads will have their own associated errors. I also find the limitations a little odd. Canu can do read-level correction on long reads based on overlapping reads, and if assembly is possible, then nanopolish can be used to correct nanopore reads for systematic base-calling error introduced by the methylation of unamplified template. • Thanks for your comments, @gringer. The limitations in the problem are the limitations. The best I can do in this case is minimize false positive hits. I'll see what I can do with LAST! Sep 10 '18 at 3:12
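Not part of the original thread, but to prototype the approach suggested in the comments (index the longer, >100 bp Illumina-derived sequences as the reference and stream the long reads against them), here is a minimal sketch using minimap2's Python binding mappy. File names are placeholders, and how many secondary hits are kept depends on your mappy version and options, so treat this as a starting point rather than a drop-in solution.

```python
import mappy as mp  # minimap2's Python binding

# Placeholder file names -- substitute your own data.
# Build an index over the short(er) query sequences, as suggested in the
# comments above (works best for the >100 bp queries).
aligner = mp.Aligner("short_seqs.fa", preset="map-ont")  # or preset="map-pb"
if not aligner:
    raise RuntimeError("failed to load/build index")

# Stream the noisy long reads against that index and report every hit.
for name, seq, qual in mp.fastx_read("long_reads.fastq"):
    for hit in aligner.map(seq):
        # hit.ctg is the short sequence the read overlaps; hit.is_primary
        # distinguishes primary from secondary alignments. How many
        # secondaries are reported is controlled by minimap2 options (-N).
        print(name, hit.ctg, hit.r_st, hit.r_en, hit.mapq, hit.is_primary, sep="\t")
```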
# How do you multiply (4+2i)(1-5i)? $\left(4 + 2 i\right) \left(1 - 5 i\right) = 14 - 18 i$ $\left(4 + 2 i\right) \left(1 - 5 i\right) = \left(4\right) \left(1\right) + \left(2 i\right) \left(1\right) + \left(4\right) \left(- 5 i\right) + \left(2 i\right) \left(- 5 i\right)$ $= 4 + 2 i - 20 i - 10 {i}^{2}$ $= 14 - 18 i$
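A one-liner to sanity-check the arithmetic, using Python's built-in complex type (the imaginary unit is written j):

```python
print((4 + 2j) * (1 - 5j))  # (14-18j)
```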
# Solution Find the rational numbers, in lowest terms, given by each of the following continued fractions. Do you notice anything interesting? What value would have a simple continued fraction representation with an infinite string of $1$'s (i.e., $[1,\overline{1}]$)? Knowing that the continued fractions given below should better and better approximate $[1,\overline{1}]$, what might one conclude? 1. [1;1] 2. [1;1,1] 3. [1;1,1,1] 4. [1;1,1,1,1] After calculating the first several convergent values, it shouldn't be hard to see the pattern... $$\begin{array}{rcl} [1;1] &=& 2\\ [1;1,1] &=& 3/2\\ [1;1,1,1] &=& 5/3\\ [1;1,1,1,1] &=& 8/5\\ [1;1,1,1,1,1] &=& 13/8\\ \vdots \end{array}$$ We are looking at ratios of successive Fibonacci numbers! Of course, we expect the convergents to get closer and closer in value to $[1,\overline{1}]$, which we can compute in the standard way. $$[1,\overline{1}] = \frac{1+\sqrt{5}}{2}$$ The observant among you might recognize this value as the golden ratio, $\varphi$. As such, we conclude that the limiting quotient of successive Fibonacci numbers must be the golden ratio. That ties some nice things together, doesn't it!
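A short script (not part of the original solution) that computes the convergents exactly and compares them with the golden ratio:

```python
from fractions import Fraction
import math

def continued_fraction(terms):
    """Evaluate a simple continued fraction [a0; a1, ..., an] exactly."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

for n in range(2, 8):
    terms = [1] * n
    conv = continued_fraction(terms)
    print(terms, conv, float(conv))   # 2, 3/2, 5/3, 8/5, 13/8, 21/13, ...

print("golden ratio =", (1 + math.sqrt(5)) / 2)  # ~1.618034
```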
# Audio Bit Depth “mapping” from DAW to Audio card: is this correct? Let say I'm generating a 1 kHz sine wave within my DAW, using some synth (Sytrus in my case). I normalize it, so it play at max "digital" amplitude 0db. Thus, any sample of the signal would gets a Floating Point values (in 32 or 80 bit of precision, using FL Studio 32bit; but that's irrelevant right now). This means that all values (since the signal is normalized) goes from -1.0 to 1.0 max. Now, the signal go through ASIO Drive and reach my sound card (an M-Audio FireWire Solo), 24-bit (fixed-point representation). Due to ENOB, it will use 21-bit, thus the max value would be 1048576 (which is 100000000000000000000 in binary). Is correct to state that the value 1.0 is mapped to 100000000000000000000? And -1.0 to 000000000000000000000? I'm really not sure about this. Does this mapping happen? Is it correct the range I've used (-1.0/1.0 => 000000000000000000000/100000000000000000000)? Can you help me to understand this "mapping", if any? • nope, that's not the meaning of ENOB. The DAC will (most likely, I can't look inside) use all the 24 bits you send it – ENOB is just the amount of "precision" that you effectively achieve. – Marcus Müller May 2 '18 at 9:31 • @MarcusMüller: but how will happens this mapping between DAW range and audio card range? Any example? – markzzz May 2 '18 at 10:15 • You're just making assumptions on what your hardware consumes. Maybe they are right, in which case your assumptions about the conversion could be right, or they could be wrong (which I actually think is not that unlikely) and then your conversion is wrong. Ex falso quodlibet, from a wrong presumption follows arbitrary stuff. – Marcus Müller May 2 '18 at 10:48 • @MarcusMüller: that's just an example. My dubt is: I pass from a 4,294,967,296 distinguishable values (fp 32 bit) to 6,777,216 . So the audio card decide how to "map" a huge range to number to a small one. Is it the audio card builder that decide it? Or any standard? (or example) – markzzz May 2 '18 at 10:54 • There's no 32 bit floats involved anywhere here. The digital hardware designer, the driver author and the DSP engineer will define what the sound card does with the numbers it gets over the wire, and what kind of numbers these are. – Marcus Müller May 2 '18 at 11:38 You make false assumptions: • Your audio software might work on floating point numbers. In most cases, it converts these before handing them over to your operating system's sound architecture, but these architectures usually offer userland applications a wide range of sample formats – be it floating point, signed or unsigned integers. • The sound system internally converts to the sound card driver's format – again, drivers typically offer multiple sample formats to the OS • The driver then converts what it gets to the appropriate on-the-wire format. This is usually absolutely specific to your hardware. It is usually not floating point numbers. It might be signed or unsigned 2-complement integers of very different bit depths, or something else. The sky's the limit: Someone designed the driver together with the hardware with a lot of requirements that have nothing to do with "it's an easily understood standard number format". • The sound card hardware takes the on-the-wire format and converts it to an internal sample format suitable for internal DSP. Usually, you don't have any clue what that is. 
• The DSP chain will at the end talk to a DAC – and DACs can take signed or unsigned integers of bit depths between 1 and 32 bit. Oh, and throw in some endianness confusion while you're considering all this. • So its the DAW (or audio application) that convert its 32 bit floating point signal to ASIO (or Directsound or WASAPI) format, used by OS. Than that's driver communicate with soundcard and make another conversion to another format. – markzzz May 2 '18 at 12:47 If your sound card has input range 2Vp-p (2 volts peak to peak = -1V to +1v) and has 24 bits of resolution (ENOB doesn't play any role in this, you still get 24 bits for each sample for each channel, but the higher ENOB the smaller noise you get), here is what voltage represents the output code of ADC: 00000000 00000000 00000000 (lowest possible result from ADC) => -1V 10000000 00000000 00000000 (midpoint) => 0V 11111111 11111111 11111111 (highest possible value) => +1V 000 (0 dec) => -1V 001 (1 dec) 010 (2 dec) 011 (3 dec) 100 (4 dec) => (midpoint approximately) 0V 101 (5 dec) 110 (6 dec) 111 (7 dec) => +1V So the voltage is proportionally distributed in the full ADC range, to calculate the voltage correspondig to specific code, you can use following formula: U = (code/(2^bits-1)) * (Umax - Umin) + Umin where Umin = -1V, Umax = 1V, bits = 24 For conversion between binary and decimal numbers refer to this document • you probably meant "your ENOB is higher the less noise you get" – Marcus Müller May 2 '18 at 11:58 • by the way, there's ADCs that are differential and actually emit 2-complement signed numbers :) I must admit I don't know whether that's common in the field of audio ADCs. – Marcus Müller May 2 '18 at 11:59 • So converting from 32fp (from DAW) to sound card 24bit integer will make a lots of data loss. Fortunately, I think it doesn't make any significant differences, and 24bit is enough. Why does sound card (hardware) use fixed point (integer) instead of floating point? Is it more quickly on calculation? – markzzz May 2 '18 at 13:19 • All of the comparator or sigma delta based DACs/ADCs use fixed length code word. In some special applications multiple ADCs connected in parallel are used and for the result the highest non satuarted code word is taken to achieve better resolution of small signals. But this is not your case. "Floating point" is by my opinion only code word transformed into floating point number to make further numeric processing more straightforward. Can you please post the link of your DAW's specification? – gabonator May 2 '18 at 14:20
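To make the code-to-voltage formula from this answer concrete, here is a small sketch. The float-to-code direction uses one plausible convention of my own; as the answers stress, real drivers and converters may use signed two's-complement words, different scaling, or dithering.

```python
def code_to_voltage(code, bits=24, u_min=-1.0, u_max=1.0):
    """Map an unsigned ADC/DAC code word to a voltage (formula from the answer above)."""
    return (code / (2 ** bits - 1)) * (u_max - u_min) + u_min

def float_to_code(x, bits=24):
    """Map a normalized sample in [-1.0, 1.0] to an unsigned code word.

    This is only one possible convention, not what any particular driver does.
    """
    x = max(-1.0, min(1.0, x))
    return round((x + 1.0) / 2.0 * (2 ** bits - 1))

print(float_to_code(-1.0), float_to_code(0.0), float_to_code(1.0))
# 0 8388608 16777215
print(code_to_voltage(0), code_to_voltage(2 ** 23), code_to_voltage(2 ** 24 - 1))
# -1.0, ~6e-08, 1.0
```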
# Vector transformations that lead to the identity matrix 1. Nov 25, 2012 ### geert200 Hi all, I have a question that seems very simple but I just do not see it ;) Let α denote an r×1 vector with arbitrary entries; I'm trying to construct a 1×r vector m such that αm = I, where I is the r×r identity matrix... The first question is: is this possible? I tried the following: let m = α'(α α')^{-1}, but then the problem is that (α α')^{-1} is not defined (rank 1). How can I fix this?
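For what it's worth, a standard rank argument (not from the thread) shows why no such $m$ can exist for $r > 1$:

$$\operatorname{rank}(\alpha m) \le \min\{\operatorname{rank}(\alpha), \operatorname{rank}(m)\} \le 1 < r = \operatorname{rank}(I_r),$$

so $\alpha m = I$ is impossible unless $r = 1$. What does exist (for $\alpha \ne 0$) is a left inverse: taking $m = (\alpha'\alpha)^{-1}\alpha'$, where $\alpha'\alpha$ is a nonzero scalar, gives $m\alpha = 1$, the $1\times 1$ identity.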
Recently, a lot of interesting work has been done in the area of applying Machine Learning Algorithms for analyzing price patterns and predicting stock prices and index changes. Most stock traders nowadays depend on Intelligent Trading Systems which help them in predicting prices based on various situations and conditions, thereby helping them in making instantaneous investment decisions. We evaluate CHALLENGER ENERGY GROUP PLC prediction models with Supervised Machine Learning (ML) and Statistical Hypothesis Testing1,2,3,4 and conclude that the LON:CEG stock is predictable in the short/long term. According to price forecasts for (n+8 weeks) period: The dominant strategy among neural network is to Buy LON:CEG stock. Keywords: LON:CEG, CHALLENGER ENERGY GROUP PLC, stock forecast, machine learning based prediction, risk rating, buy-sell behaviour, stock analysis, target price analysis, options and futures. ## Key Points 1. Can neural networks predict stock market? 2. How useful are statistical predictions? 3. Can statistics predict the future? ## LON:CEG Target Price Prediction Modeling Methodology Stock market prediction is the act of trying to determine the future value of a company stock or other financial instrument traded on a financial exchange. The successful prediction of a stock's future price will maximize investor's gains. This paper proposes a machine learning model to predict stock market price. We consider CHALLENGER ENERGY GROUP PLC Stock Decision Process with Statistical Hypothesis Testing where A is the set of discrete actions of LON:CEG stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Statistical Hypothesis Testing)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Supervised Machine Learning (ML)) X S(n):→ (n+8 weeks) $∑ i = 1 n r i$ n:Time series to forecast p:Price signals of LON:CEG stock j:Nash equilibria k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## LON:CEG Stock Forecast (Buy or Sell) for (n+8 weeks) Sample Set: Neural Network Stock/Index: LON:CEG CHALLENGER ENERGY GROUP PLC Time series to forecast n: 13 Oct 2022 for (n+8 weeks) According to price forecasts for (n+8 weeks) period: The dominant strategy among neural network is to Buy LON:CEG stock. X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Yellow to Green): *Technical Analysis% ## Conclusions CHALLENGER ENERGY GROUP PLC assigned short-term Ba3 & long-term Ba3 forecasted stock rating. We evaluate the prediction models Supervised Machine Learning (ML) with Statistical Hypothesis Testing1,2,3,4 and conclude that the LON:CEG stock is predictable in the short/long term. According to price forecasts for (n+8 weeks) period: The dominant strategy among neural network is to Buy LON:CEG stock. 
### Financial State Forecast for LON:CEG Stock Options & Futures Rating Short-Term Long-Term Senior Outlook*Ba3Ba3 Operational Risk 3689 Market Risk3846 Technical Analysis9053 Fundamental Analysis7360 Risk Unsystematic7877 ### Prediction Confidence Score Trust metric by Neural Network: 80 out of 100 with 491 signals. ## References 1. Breiman L, Friedman J, Stone CJ, Olshen RA. 1984. Classification and Regression Trees. Boca Raton, FL: CRC Press 2. Chernozhukov V, Newey W, Robins J. 2018c. Double/de-biased machine learning using regularized Riesz representers. arXiv:1802.08667 [stat.ML] 3. M. Petrik and D. Subramanian. An approximate solution method for large risk-averse Markov decision processes. In Proceedings of the 28th International Conference on Uncertainty in Artificial Intelligence, 2012. 4. Abadie A, Diamond A, Hainmueller J. 2010. Synthetic control methods for comparative case studies: estimat- ing the effect of California's tobacco control program. J. Am. Stat. Assoc. 105:493–505 5. Nie X, Wager S. 2019. Quasi-oracle estimation of heterogeneous treatment effects. arXiv:1712.04912 [stat.ML] 6. S. Bhatnagar. An actor-critic algorithm with function approximation for discounted cost constrained Markov decision processes. Systems & Control Letters, 59(12):760–766, 2010 7. A. Tamar, Y. Glassner, and S. Mannor. Policy gradients beyond expectations: Conditional value-at-risk. In AAAI, 2015 Frequently Asked QuestionsQ: What is the prediction methodology for LON:CEG stock? A: LON:CEG stock prediction methodology: We evaluate the prediction models Supervised Machine Learning (ML) and Statistical Hypothesis Testing Q: Is LON:CEG stock a buy or sell? A: The dominant strategy among neural network is to Buy LON:CEG Stock. Q: Is CHALLENGER ENERGY GROUP PLC stock a good investment? A: The consensus rating for CHALLENGER ENERGY GROUP PLC is Buy and assigned short-term Ba3 & long-term Ba3 forecasted stock rating. Q: What is the consensus rating of LON:CEG stock? A: The consensus rating for LON:CEG is Buy. Q: What is the prediction period for LON:CEG stock? A: The prediction period for LON:CEG is (n+8 weeks)
# ADA1: Class 23, Nonparametric methods Advanced Data Analysis 1, Stat 427/527, Fall 2022, Prof. Erik Erhardt, UNM Author Published August 13, 2022 Include your answers in this document in the sections below the rubric where I have point values indicated (1 p). # Rubric Answer the questions with the data examples. library(erikmisc) ── Attaching packages ─────────────────────────────────────── erikmisc 0.1.16 ── ✔ tibble 3.1.8 ✔ dplyr 1.0.9 ── Conflicts ─────────────────────────────────────────── erikmisc_conflicts() ── erikmisc, solving common complex data analysis workflows by Dr. Erik Barry Erhardt <[email protected]> library(tidyverse) ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.2 ── ✔ ggplot2 3.3.6 ✔ purrr 0.3.4 ✔ tidyr 1.2.0 ✔ stringr 1.4.0 ✔ readr 2.1.2 ✔ forcats 0.5.1 ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ── # Paired Comparisons of Two Sleep Remedies, revisited We revisit the sleep remedy dataset (from WS16) because the normality assumption of the sampling distribution of the mean did not seem to be reasonable. The plots below remind us of the data and bootstrap estimate of the sampling distribution. Warning: Removed 2 rows containing missing values (geom_bar). Use the example in ADA1 Chapter 6 on Nonparametric Methods to state the hypothesis and results of the tests: ## Sign test (1 p) Hypotheses in words and notation. Be sure to specify which population parameter is being tested. (1 p) R code and results for test (use the BSDA package and SIGN.test()). Use $$\alpha = 0.10$$ (for a change). (1 p) Conclusion based on test. ## 90% CI for the Median (0.5 p) R code and CI (use the wilcox.test()). (0.5 p) State the assumptions for the Wilcoxon CI. (1 p) Determine the appropriate method for a CI (sign or Wilcoxon), then present and interpret the 90% CI with reference to the population parameter. # The Number of Breaks in Yarn during Weaving This data set gives the number of warp breaks per loom, where a loom corresponds to a fixed length of yarn. We will analyze wool type B and see whether the tension affects the number of breaks. A data frame with 54 observations on 3 variables. [,1] breaks numeric The number of breaks [,2] wool factor The type of wool (A or B) [,3] tension factor The level of tension (L, M, H) library(datasets) str(warpbreaks) 'data.frame': 54 obs. of 3 variables: $breaks : num 26 30 54 25 70 52 51 26 67 18 ...$ wool : Factor w/ 2 levels "A","B": 1 1 1 1 1 1 1 1 1 1 ... $tension: Factor w/ 3 levels "L","M","H": 1 1 1 1 1 1 1 1 1 2 ... # keep only wool type B, then drop wool type from data frame dat_warp_sub <- warpbreaks %>% filter( wool == "B" ) %>% select( -wool ) str(dat_warp_sub) 'data.frame': 27 obs. of 2 variables:$ breaks : num 27 14 29 19 29 31 41 20 44 42 ... $tension: Factor w/ 3 levels "L","M","H": 1 1 1 1 1 1 1 1 1 2 ... Notice that in the plot below I include the points, median, and IQR (the center 50%). Because there’s only 9 observations per group the boxplot and violin plot that I would typically also include becomes distracting and less informative. With very small sample sizes, the salience of individual values should increase relative to summaries. # A set of useful summary functions is provided from the Hmisc package: #library(Hmisc) stat_sum_df <- function(fun, geom="crossbar", ...) { stat_summary(fun.data=fun, colour="gray60", geom=geom, width=0.2, ...) 
} # Plot the data using ggplot library(ggplot2) p <- ggplot(dat_warp_sub, aes(x = tension, y = breaks)) # plot a reference line for the global mean (assuming no groups) p <- p + geom_hline(yintercept = mean(dat_warp_sub$breaks), colour = "black", linetype = "dashed", size = 0.3, alpha = 0.5) # box for IQR and median p <- p + stat_sum_df("median_hilow", fun.args = list(conf.int=0.5), show.legend=FALSE) # diamond at mean for each group p <- p + stat_summary(fun = median, geom = "point", shape = 18, size = 4, colour = "red", alpha = 0.8) # points for observed data p <- p + geom_point(alpha = 0.5) #position = position_jitter(w = 0.05, h = 0), alpha = 1) p <- p + labs(title = "Number of breaks by loom tension\nfor wool B") p <- p + expand_limits(y = 10) p <- p + theme_bw() #print(p) # log scale p2 <- p p2 <- p2 + scale_y_log10(breaks = seq(0, 50, by=10)) p2 <- p2 + labs(y = "log10(breaks)") p2 <- p2 + expand_limits(y = 10) #print(p2) library(gridExtra) Attaching package: 'gridExtra' The following object is masked from 'package:dplyr': combine grid.arrange(p, p2, ncol=2) Some numerical summaries may be helpful later. # summary of each by(dat_warp_sub$breaks, dat_warp_sub$tension, summary) dat_warp_sub$tension: L Min. 1st Qu. Median Mean 3rd Qu. Max. 14.00 20.00 29.00 28.22 31.00 44.00 ------------------------------------------------------------ dat_warp_sub$tension: M Min. 1st Qu. Median Mean 3rd Qu. Max. 16.00 21.00 28.00 28.78 39.00 42.00 ------------------------------------------------------------ dat_warp_sub$tension: H Min. 1st Qu. Median Mean 3rd Qu. Max. 13.00 15.00 17.00 18.78 21.00 28.00 # IQR, sd, and n by(dat_warp_sub$breaks, dat_warp_sub$tension, function(X) { c(IQR(X), sd(X), length(X)) }) dat_warp_sub$tension: L [1] 11.000000 9.858724 9.000000 ------------------------------------------------------------ dat_warp_sub$tension: M [1] 18.000000 9.431036 9.000000 ------------------------------------------------------------ dat_warp_sub$tension: H [1] 6.000000 4.893306 9.000000 # calculate summaries dat_warp_summary <- dat_warp_sub %>% group_by(tension) %>% summarize( mean = mean (breaks, na.rm = TRUE) , sd = sd (breaks, na.rm = TRUE) , median = median(breaks, na.rm = TRUE) , iqr = IQR (breaks, na.rm = TRUE) , n = n() , .groups = "drop_last" ) %>% ungroup() dat_warp_summary # A tibble: 3 × 6 tension mean sd median iqr n <fct> <dbl> <dbl> <dbl> <dbl> <int> 1 L 28.2 9.86 29 11 9 2 M 28.8 9.43 28 18 9 3 H 18.8 4.89 17 6 9 Nonparametric methods are available to use even when assumptions for other tests are met. Use the Kruskal-Wallis rank sum test to assess whether there are differences in medians between these three groups. If there are, determine which pairs differ using the Wilcox-Mann-Whitney test. (1 p) State the hypothesis. Perform the KW test. # KW ANOVA fit_kruskal_bt <- kruskal.test( breaks ~ tension , data = dat_warp_sub ) fit_kruskal_bt Kruskal-Wallis rank sum test data: breaks by tension Kruskal-Wallis chi-squared = 6.905, df = 2, p-value = 0.03167 (1 p) State the assumptions of the KW test and provide the support that the assumptions are either met or not. (1 p) Interpret the test result above. Use $$\alpha = 0.10$$. Follow-up pairwise comparisons. 
wilcox.test(breaks ~ tension, data = dat_warp_sub, subset = (tension %in% c("L", "M"))) Warning in wilcox.test.default(x = DATA[[1L]], y = DATA[[2L]], ...): cannot compute exact p-value with ties Wilcoxon rank sum test with continuity correction data: breaks by tension W = 41.5, p-value = 0.9647 alternative hypothesis: true location shift is not equal to 0 wilcox.test(breaks ~ tension, data = dat_warp_sub, subset = (tension %in% c("L", "H"))) Warning in wilcox.test.default(x = DATA[[1L]], y = DATA[[2L]], ...): cannot compute exact p-value with ties Wilcoxon rank sum test with continuity correction data: breaks by tension W = 64.5, p-value = 0.03768 alternative hypothesis: true location shift is not equal to 0 wilcox.test(breaks ~ tension, data = dat_warp_sub, subset = (tension %in% c("M", "H"))) Warning in wilcox.test.default(x = DATA[[1L]], y = DATA[[2L]], ...): cannot compute exact p-value with ties Wilcoxon rank sum test with continuity correction data: breaks by tension W = 67.5, p-value = 0.01897 alternative hypothesis: true location shift is not equal to 0 (1 p) Interpret the pairwise comparisons above (which pairs differ) at a Bonferroni-corrected significance level of $$\alpha/3 = 0.0333$$. (1 p) Summarize results by ordering the means and grouping pairs that do not differ (see notes for examples). REPLACE THIS EXAMPLE WITH YOUR RESULTS. Example: Groups A and C differ, but B is not different from either. (These groups are ordered by mean, so A has the lowest median and C has the highest.) Group A Group B Group C ----------------- -----------------
# openmc.lib.find_material openmc.lib.find_material(xyz) Find the material at a given point. Parameters: xyz (iterable of float) – Cartesian coordinates of position. Returns: Material containing the point, or None if no material is found. Return type: openmc.lib.Material or None
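A minimal usage sketch, assuming a model's XML input files are already in the working directory and that the in-memory API calls used here behave as in recent OpenMC versions; the coordinates are arbitrary:

```python
import openmc.lib

# Assumes geometry/materials/settings XML files for a model exist in cwd.
openmc.lib.init()
try:
    mat = openmc.lib.find_material((0.0, 0.0, 0.0))
    if mat is None:
        print("No material at this point (void or outside the geometry).")
    else:
        print("Found material with ID", mat.id)
finally:
    openmc.lib.finalize()
```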
# Abelian surfaces admitting an (l,l)-endomorphism 9 Jun 2011 We give a classification of all principally polarized abelian surfaces that admit an $(l,l)$-isogeny to themselves, and show how to compute all the abelian surfaces that occur. We make the classification explicit in the simplest case $l=2$...
Acta Physico-Chimica Sinica ›› 2018, Vol. 34 ›› Issue (8): 904-911. Special Issue: Green Chemistry • ARTICLE • ### Investigation on the Thermal Stability of Deep Eutectic Solvents Wenjun CHEN1,Zhimin XUE2,*(),Jinfang WANG1,Jingyun JIANG1,Xinhui ZHAO1,Tiancheng MU1,*() 1. 1 Department of Chemistry, Renmin University of China, Beijing 100872, P. R. China 2 College of Materials Science and Technology, Beijing Forestry University, Beijing 100083, P. R. China • Received:2017-12-12 Accepted:2017-12-26 Published:2018-04-03 • Contact: Zhimin XUE,Tiancheng MU E-mail:[email protected];[email protected] • Supported by: The project was supported by the National Natural Science Foundation of China(21773307);The project was supported by the National Natural Science Foundation of China(21503016) Abstract: In recent years, deep eutectic solvents (DESs) have attracted considerable attention. They have been applied in many fields such as dissolution and separation, electrochemistry, materials preparation, reaction, and catalysis. The DESs are generally formed by the hydrogen bonding interactions between hydrogen-bond donors (HBDs) and acceptors (HBAs). Knowledge of the thermal stability of DESs is very important for their application at high temperatures. However, there have been relatively few studies on the thermal stability of DESs. Herein, a systematic investigation on the thermal stability of 40 DESs was carried out using thermal gravimetric analysis (TGA), and the onset decomposition temperatures (Tonset) of these solvents were obtained. The most important conclusion drawn from this work is that the thermal behavior of DESs is quite different from that of ionic liquids. The anions or cations of ionic liquids decompose first, followed by the decomposition of the opposite ion at elevated temperatures. On the other hand, the DESs generally first decompose to HBDs and HBAs at high temperatures through the weakening of the hydrogen bond interactions. Subsequently, the HBDs with relatively low boiling points or poor stabilities undergo volatilization or decomposition; the HBAs also undergo volatilization or decomposition but at a higher temperature. For example, the most commonly used HBA choline chloride (ChCl) begins to decompose at around 250 ℃. The hydrogen bond plays an important role in the thermal stability of DESs. It hinders the "escape" of molecules and requires greater energy to break than pure HBAs and HBDs, which causes the Tonset of DESs to shift to higher temperatures. Note that the thermal stability of HBDs has a crucial effect on the Tonset of DESs. The HBDs would decompose or volatilize first during TGA because of their relatively poor thermal stability or lower boiling points. The more stable the HBDs are, the greater would be the Tonset values of the corresponding DESs. Further, the effects of anions on HBAs, molar ratio of HBAs to HBDs, and heating rate in fast scan TGA have been discussed. As the heating rate increased, the TGA curves of DESs shifted to higher temperatures gradually, and the temperature hysteretic effect became prominent when the rate reached 10 ℃?min?1. From an industrial application point of view, there is an overestimation of the onset decomposition temperatures of DESs by Tonset, so the long-term stability of DESs was investigated at the end of the study. This study could help understand the thermal behavior of DESs (progressive decomposition) and provide guidance for designing DESs with appropriate thermal stability for practical applications. MSC2000: • O642
# Hopf-Rinow theorem If $(M,g)$ is a Riemannian manifold and $M$ is (geodesically) complete, then any two points can be joined by a geodesic. A geodesic $\gamma(t)$ is a smooth curve such that $\nabla_{\gamma'(t)}\gamma'(t)=0$, where $\nabla$ is the Levi-Civita connection of the metric $g$. Now the question is: is such a curve (a geodesic) always real analytic? The Hopf-Rinow theorem shows that there is a smooth curve, a geodesic, joining any two points.
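A standard observation (not from the thread) that frames the regularity question: in local coordinates the geodesic condition is the second-order ODE system

$$\ddot\gamma^k(t) + \Gamma^k_{ij}\big(\gamma(t)\big)\,\dot\gamma^i(t)\,\dot\gamma^j(t) = 0,$$

so geodesics are as regular as ODE theory allows for the Christoffel symbols $\Gamma^k_{ij}$: smooth when $g$ is smooth, and real analytic when $g$ is real analytic (by the existence theorem for analytic ODEs). For a metric that is smooth but not analytic there is no reason to expect the geodesics to be analytic.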
# Math Induction Proof: $(1+\frac1n)^n < n$ So I have to prove: For each natural number greater than or equal to 3, $$(1+\frac1n)^n<n$$ My work: Basis step: $n=3$ $$\left(1+\frac13\right)^3<3$$ $$\left(\frac43\right)^3<3$$ $$\left(\frac{64}{27}\right)<3$$ which is true. Now the inductive step, assume $P(k)=\left(1+\frac1k\right)^k<k$ to be true and prove $P(k+1)=\left(1+\frac1{k+1}\right)^{k+1}<k+1$. This is where I am stuck because usually you add or multiply by $k+1$ or some similar term. • Are you sure you're not supposed to show $$\left(1 + \frac1n\right)^n < 3\,?$$ – Daniel Fischer Nov 26 '13 at 21:30 • @DanielFischer, that would be harder. – dfeuer Nov 26 '13 at 22:03 • @dfeuer For sure. But it would not be an exceptional exercise. – Daniel Fischer Nov 26 '13 at 22:05 • Is it possible to prove something with limits? Because $\left( 1\; +\; \frac{1}{n} \right)^{n}$ tends to $e$ as $n$ approaches infinity...? – 1110101001 Nov 29 '13 at 0:27 Hint: $$\left( 1+\frac{1}{k+1} \right)^{k+1} = \left( 1 + \frac{1}{k+1}\right) \left( 1 + \frac{1}{k+1}\right)^{k} < \left( 1 + \frac{1}{k+1}\right) \left( 1 + \frac{1}{k}\right)^{k}\\ < \left( 1 + \frac{1}{k+1}\right)k$$ where the last inequality comes from your induction hypothesis. • Alright I understand how you broke down the $$\left(1+\frac1{k+1}\right)^{k+1}=\left(1+\frac1{k+1}\right)\left(1+\frac1{k+1}\right)^k$$ but I do not see how you got the inequality from the hypothesis – user111702 Nov 26 '13 at 21:42 • The first inequality is simply because $\frac{1}{k+1} < \frac{1}{k}$. The second inequality was by the induction hypothesis that $$\left( 1 + \frac{1}{k} \right)^k < k$$ – Tom Nov 26 '13 at 21:44 • So from the hypothesis you multiply both sides by a factor of $$\left(1+\frac1{k+1}\right)$$ and get $$\left(1+\frac1{k+1}\right)\left(1+\frac1k\right)^k<\left(1+\frac1{k+1}\right)k$$ right? – user111702 Nov 26 '13 at 21:50 • That's one way to see it. I just considered the following $$\left( 1+\frac{1}{k+1} \right) \underbrace{ \left(1+\frac{1}{k} \right)^k }_{<k \text{ by hypothesis}} < \left(1 + \frac{1}{k+1} \right)k$$ – Tom Nov 26 '13 at 21:51 • Alright so then how do you know that $$\left(1+\frac1{k+1}\right)\left(1+\frac1{k+1}\right)^k<\left(1+\frac1{k+1}\right)\left(1+\frac1{k}\right)^k$$ – user111702 Nov 26 '13 at 21:56 $\left(1+\frac 1 n\right)^n=1+1+\binom n 2\frac 1 {n^2}+\binom n 3 \frac 1 {n^3}+\dotsb+\frac 1 {n^n}$ But $\binom n k \frac 1 {n^k}=\frac {n(n-1)\dotsm(n-k+1)}{k!n^k}<\frac 1 {k!}$. So the expression we're interested in is less than $$1+1+\frac 1 {2!}+\frac 1{3!}+\dotsb+\frac 1 {n!}<1+1+\frac 1 2 +\frac 1 4 +\frac 1 8+\dotsb=3.$$ This is equivalent to proving $$P(n)\colon\qquad (n+1)^n <n^{n+1}$$ for $n\ge3$. Inductive step: We assume that $P(k-1)$ holds, i.e., that $$k^{k-1}<(k-1)^k.$$ We will prove $P(k)$ by contradiction. Assume that $P(k)$ does not hold, i.e., $$(k+1)^k \ge k^{k+1}.$$ By multiplying the inequalities \begin{align*} (k-1)^k &> k^{k-1}\\ (k+1)^k &\ge k^{k+1} \end{align*} we get $$(k^2-1)^k \ge k^{2k},$$ i.e., $(k^2-1)^k \ge (k^2)^k$, which is a contradiction. This can be used to show that the sequence $\sqrt[n]{n}$ is eventually decreasing. I guess there are a few posts about this question, but I was not able to find some such post quickly.
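To finish the inductive step from the first answer's hint (a one-line completion, not part of the original exchange): by the hint and the induction hypothesis,

$$\left(1+\frac{1}{k+1}\right)^{k+1} < \left(1+\frac{1}{k+1}\right)k = k + \frac{k}{k+1} < k + 1,$$

since $\frac{k}{k+1} < 1$, which is exactly $P(k+1)$.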
# In a 2-columned document, how do you get a table to stretch across the entire width? I'm working on a document that runs two columns of text down the page, like many scientific conference proceedings. However, I'd like for a table to run across the entire width of the page, as it is far too wide to fit into one column. How do I do this? - It may depend on your document class, but the usual way is to use table* in place of table. \documentclass[twocolumn]{article} \usepackage{lipsum} \begin{document} \begin{table*} \centering \begin{tabular}{|c|} \Large This table is so wide that it needs to use both columns. \end{tabular} \end{table*} \lipsum[1-10] \end{document} - It might be worth adding that the only permitted position for table* and figure* floats is at the top of the page. Thus, if one tries to specify h or b to indicate the preferred location for "starred" floats, things aren't going to turn out well... – Mico Jul 27 '12 at 13:46
North-facing rocks that soak up polar sunlight [duplicate] Chapter 155 in the book '365 Surprising Scientific Facts, Breakthroughs, and Discoveries' of Sharon Bertsch McGrayne (Wiley, 1994) states that: North-facing rocks that soak up polar sunlight can be 15°C (60°F) warmer than the surrounding air. • Simply that polar sunlight is not really a thing? Or is it? – fffred May 22 '16 at 12:25 It is strange because The formula to convert Celsius to Fahrenheit is: $ºC = \frac{ºF - 32}{ 1.8}$ so a difference a difference of $1°C$ corresponds to a difference of $1.8 °F$ so a difference of $15°C$ is a difference of $27°F$ This book is confusing the conversion $15°C$ = $60°F$ with the conversion of $\Delta ºC$ to $\Delta ºF$. • I always tried to distinguish in my teaching between 5 Celsius degrees, and 5 degrees Celsius. Similar to 5 miles on Interstate 94, and Mile 5 on Interstate 94 – DJohnM May 22 '16 at 18:26 Celsius and Fahrenheit have different starting points, so converting between absolute temperatures is not the same as converting between relative temperatures (i.e. differences). This book converted an absolute temperature of 15 degrees Celsius rather than a relative temperature difference: the correct value would be only about 27 degrees. Hmmm...where should I start? "north facing rocks". how can rocks have a face? But let's say they do. It is kind of strange to ... receive sunlight when you are facing north. But this can still happen if you are at the south poll. Then it's north in every direction • But the question does not ask whether it could be true. It simply asks what is strange. Which is strange btw – fffred May 22 '16 at 12:30 • @fffred. I explained what I think is strange. I also added my personal thoughts that this can still happen. Is it strange that I did that? – Marius May 22 '16 at 12:32 • Your answer is fine. But your reason why it could be true sounds like a puzzle solution, not real science, and the question cites something real it seems. Well, I guess this could still be the right answer – fffred May 22 '16 at 12:36 • Rocks that are close enough to the north-pole can receive sunlight on the north-facing side, since earths axial tilt is currently ~23.4°. Inside the arctic circle there's a season, when the sun doesn't set for a few days to month, depending on how far north the position is and thus as well shines, when it's north from the viewer. And we can define "north facing" as "the largest side of the rock is on the north-side of the rock" – Paul May 22 '16 at 15:03 If we take the direction absolute: The sun is never in the North. It comes up in the East, travels over South to the West. North-facing rocks will not soak up any (direct) light because the sun is never there. • @Lordofdark is probably right though, that's the biggest flaw. – Mast May 23 '16 at 10:20 • The north pole is ice, and under the ice is sea. there are no rocks at the north pole. so clearly the rocks are at the south pole. – Jasen May 24 '16 at 8:49 • @Jasen Is polar light something only happening at the exact poles? I'd assume Greenland and Svalbard would be possible locations as well. – Mast May 24 '16 at 8:57 It is strange because rocks that are 15 degrees Celsius warmer will be above freezing, thereby eliminating permafrost and melting ice multiple months out of the year.
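A tiny script making the distinction in the accepted answers explicit (absolute temperatures versus temperature differences):

```python
def c_to_f(c):
    """Convert an absolute temperature in degrees Celsius to Fahrenheit."""
    return c * 1.8 + 32

def dc_to_df(dc):
    """Convert a temperature *difference* in Celsius degrees to Fahrenheit degrees."""
    return dc * 1.8

print(c_to_f(15))    # 59.0 -- the absolute conversion (roughly the book's "60 F")
print(dc_to_df(15))  # 27.0 -- the difference the book should have quoted
```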
# What is the square root of 130? Sep 17, 2015 The actual answer is a number between 11 and 12, as $121 < 130 < 144$ so $\sqrt{{11}^{2}} < \sqrt{130} < \sqrt{{12}^{2}}$. But it's usually bad form to evaluate the root, as it'll just give us an ugly number and we'd have to treat everything as approximate (you can't write the exact value of an irrational root as a decimal), so it's often not worth the trouble. What we can do is factor the number to see if there's a way to get a smaller number under the root. While factoring we only check for primes and work from the smallest (2) to the biggest. You don't have to do it that way, but this way is the simplest, as you'll cover every base and won't forget a number. To factor, we list the number and put a bar next to it: 130 | Then we put the smallest prime that divides 130 exactly on the other side of the bar, and the quotient under the number: 130 | 2 65 | And so on until we reach 1. Remembering the divisibility shortcuts is helpful here (i.e.: all even numbers are divisible by 2, all numbers that end in 5 or 0 are divisible by 5, a number whose digits sum to a multiple of 3 is divisible by 3, and so on.) In the end it comes out to 130 | 2 65 | 5 13 | 13 1 | so $130 = 2 \cdot 5 \cdot 13$. Since none of these prime factors is repeated, there is no square factor and we can't take anything out of the root. So for most cases just leaving it as $\sqrt{130}$ should suffice. If your teacher really wants a value, you can use that range above and start estimating values, if you don't have a calculator. I.e.: $11 < \sqrt{130} < 12$. Since 130 is closer to 121 than to 144, we can guess that its root will be closer to 11 than to 12. We then check with 11.5: $11.5 \cdot 11.5 = 132.25$, and $132.25 > 130$, so $11 < \sqrt{130} < 11.5$. We found a better upper bound; now, since 132.25 is closer to 130 than 121 is, we can guess that the root will be closer to 11.5 than to 11. So we test with 11.4: $11.4 \cdot 11.4 = 129.96$, and $129.96 < 130$, so $11.4 < \sqrt{130} < 11.5$. And so on, until we get a good enough estimate. If you have a calculator you can just put it in and find the value, which is approximately $11.401754$
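The interval-narrowing in the answer is just bisection; a short script (mine, not the answerer's) automates it and checks against the library value:

```python
import math

def sqrt_bisect(n, lo, hi, tol=1e-6):
    """Halve the bracketing interval [lo, hi] around sqrt(n), as in the manual estimates above."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid * mid > n:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(sqrt_bisect(130, 11, 12))  # ~11.401754
print(math.sqrt(130))            # 11.40175425099138
```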
# Technical Studies Reference ### Ratio (Single Line) This study calculates the ratio of the data specified by the Input Data Input of two charts. This is useful for comparing the relative strength of one chart to another. Refer to the Multiple Chart Studies description for instructions to use this study. Let $$X_t^{(1)}$$ and $$X_t^{(2)}$$ denote the value of the Input Data at Index $$t$$ of Chart 1 and Chart 2, respectively. Chart 1 (aka the Source Chart) is always the chart to which the Ratio (Single Line) study is applied, and Chart 2 (aka the Destination Chart) is the chart specified by the Chart 2 Number Input. $$X_t^{(1)}$$ and $$X_t^{(2)}$$ are both determined by a single Input Data Input. Let $$v_1$$ and $$v_2$$ denote the Inputs Chart 1 Multiplier and Chart 2 Multiplier, respectively. Then the Ratio (Single Line) study calculates the following Ratio. $$\displaystyle{\frac{v_1 \cdot X_t^{(1)}}{v_2 \cdot X_t^{(2)}}}$$ If $$X_t^{(1)} = 0$$ or $$X_t^{(2)} = 0$$, then a value of $$0$$ is used for the Ratio. If the Use Latest Source Data For Last Bar Input is set to Yes, and a replay is not running and at the last bar in the Destination Chart, then the study uses the data from the very latest bar in the Source Chart. If this Input is set to No, then this is not done. #### Inputs • Input Data • Chart 2 Number: Determines which chart is to be used in the divisor of the calculated Ratio. • Chart 1 Multiplier • Chart 2 Multiplier • Use Latest Source Data For Last Bar: Determines whether the data from the last bar in the Source Chart is to be used.
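A sketch of the arithmetic in plain Python (not Sierra Chart's implementation; the function and variable names are mine):

```python
def ratio_single_line(x1, x2, v1=1.0, v2=1.0):
    """Ratio of the Input Data of Chart 1 to Chart 2 with per-chart multipliers.

    Returns 0 when either input is 0, matching the rule stated above.
    """
    if x1 == 0 or x2 == 0:
        return 0.0
    return (v1 * x1) / (v2 * x2)

# e.g. relative strength of Chart 1 vs. Chart 2 at one bar index
print(ratio_single_line(101.5, 98.2))  # ~1.0336
```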
# Time reversal transformation of the complex scalar field consider a complex scalar field $$\phi$$ $$\phi(t,x)=\int\frac{d^3k}{\sqrt{2\omega_k(2\pi)^{3}}} \big(a_ke^{i\vec{k}\cdot\vec{x}-i\omega_kt} +b^\dagger_ke^{-i\vec{k}\cdot\vec{x}+i\omega_kt}\big)$$ By definition, time reversal operator $$T$$ is anti-unitary, so we have $$T\phi(t,x)T^\dagger =\int\frac{d^3k}{\sqrt{2\omega_k(2\pi)^{3}}} \big(Ta_k T^\dagger e^{-i\vec{k}\cdot\vec{x}+i\omega_kt} +T b^\dagger_k T^\dagger e^{+i\vec{k}\cdot\vec{x}-i\omega_kt}\big) \\ =\int\frac{d^3k}{\sqrt{2\omega_k(2\pi)^{3}}} \big(a_{-k} e^{-i\vec{k}\cdot\vec{x}+i\omega_kt} + b^\dagger_{-k} e^{+i\vec{k}\cdot\vec{x}-i\omega_kt}\big) \\ =\int\frac{d^3k}{\sqrt{2\omega_k(2\pi)^{3}}} \big(a_{k} e^{i\vec{k}\cdot\vec{x}+i\omega_kt} + b^\dagger_{k} e^{-i\vec{k}\cdot\vec{x}-i\omega_kt}\big)=\phi(-t,x)$$ But this seem contradict with the expextation that for complex scalar field $$\phi$$, we should have $$T\phi T^\dagger \sim \phi^*$$ Anything wrong with the time reversal or the expectation of the field? The paradox can be resolved by being clear about how we want to define time-reversal. In the simplest case of a non-interacting, single-component complex scalar field, a typical definition takes time-reversal to be the antilinear transformation that replaces the field operator $$\phi(t,x)$$ with $$\phi(-t,x)$$. This is consistent with the calculation shown in the question. There is no contradiction, because antilinearity doesn't imply that the result should be $$\sim\phi^*$$. Whether or not a given transformation is a symmetry of the given model is a separate question, not addressed here; that would require specifying the whole model. The key message here is simply that if we're clear with definitions, then no contradictions arise. Quantum (field) theory is formulated using operators on a Hilbert space. For the question being asked here, the Hilbert space is not important, so we can think of the operators as elements of an abstract algebra (a C*-algebra) instead of as operators on a Hilbert space, but to be concise, I'll still call them "operators." Every operator $$A$$ has an adjoint, often denoted $$A^\dagger$$ by physicists and often denoted $$A^*$$ by mathematicians. I'll use the notation $$A^\dagger$$ here. The adjoint satisfies $$(zA)^\dagger=z^* A^\dagger \tag{1}$$ for all complex numbers $$z$$, where $$z^*$$ denotes the complex conjugate. It also satisfies $$(AB)^\dagger=B^\dagger A^\dagger. \tag{2}$$ We can think of the adjoint as an extension of complex conjugation from complex numbers to operators. An antilinear transformation takes each operator $$A$$ and returns a new operator, which I'll denote $$\sigma(A)$$, subject to these rules: $$\begin{gather} \sigma(zA)=z^*\sigma(A) \tag{3} \\ \sigma(AB)=\sigma(A)\sigma(B). \tag{4} \end{gather}$$ Notice that (4) preserves the order of multiplication but (2) reverses it. The quantum "complex" scalar field is a collection of operators $$\phi(t,x)$$, one per point in spacetime. (We can make this well-defined by treating spacetime as a very fine discrete lattice, but that level of detail won't be needed here.) Saying that the field operator is "complex" is a common but dangerous way of saying that it is not self-adjoint: $$\phi^\dagger(t,x)\neq\phi(t,x)$$. The field operator $$\phi(t,x)$$ is an operator, not a complex number. 
With all of that in mind, suppose that we define time-reversal to be the antilinear transformation $$\sigma_T$$ whose effect on the field operator is $$\sigma_T\big(\phi(t,x)\big)\equiv\phi(-t,x). \tag{5}$$ For any other operator $$A$$ that can be expressed in terms of $$\phi(t,x)$$, such as the operators $$a_k$$ or $$b_k$$ used in the question, the effect of $$\sigma_T$$ on $$A$$ can be derived from this definition. In particular, the requirement that $$\sigma_T$$ be antilinear implies $$\sigma_T\big(z\,\phi(t,x)\big)=z^*\phi(-t,x) \tag{6}$$ for all complex numbers $$z$$. This definition of time-reversal is consistent with the calculation shown in the question. The key message here is that antilinearity does not imply that the result must be $$\sim\phi^*$$. The structure of a given model might also motivate defining charge conjugation to be an ordinary linear transformation $$\sigma_C$$ that satisfies $$\sigma_C\big(\phi(t,x)\big)\equiv\phi^*(t,x). \tag{7}$$ (This is typically done in scalar QED, for example.) The requirement that $$\sigma_C$$ be linear implies $$\begin{gather} \sigma_C(zA)=z\,\sigma_C(A) \tag{8} \\ \sigma_C(AB)=\sigma_C(A)\sigma_C(B). \tag{9} \end{gather}$$ In particular, $$\sigma_C\big(z\,\phi(t,x)\big)=z\,\phi^*(t,x) \tag{10}$$ for all complex numbers $$z$$. Although this charge conjugation transformation takes the adjoint of the field operator, it does not take the adjoint of most other operators. It is a linear transformation, which preserves the order of multiplication as shown in (9), in contrast to (2). The transformations $$\sigma_T$$ and $$\sigma_C$$ were defined above by specifying two things: • whether they are linear or antilinear • how they affect the field operator. Their effect on all other operators may be deduced from these, assuming that all other operators may be expressed in terms of the field operators (as usual in QFT). More general definitions of such transformations can allow them to mix different components or different fields with each other; only simple representative possibilities were considered here. P.S. - An antilinear transformation $$\sigma_T$$ can be written as $$\sigma_T(A)=TAT^{-1} \tag{11}$$ for an antilinear operator $$T$$. This can be a useful way to write it when working with vectors in the Hilbert space, but this fact does not affect the answer to the question. • Time-reversal sometimes maps $\phi(t)\to\phi(-t)$, and sometimes $\phi(t)\to\phi^*(-t)$. And there are much more exotic alternatives, such as mixing different fields, multiplying them by matrices in Lorentz space, etc. In principle, they are all valid, but they correspond to different notions of time-reversal. In any case, it is not correct to say that time-reversal maps $\phi(t)$ to $\phi(-t)$: that is only one possibility, but there are many more. – AccidentalFourierTransform Dec 7 '18 at 0:31 • @AccidentalFourierTransform Hmmm, that's a very good comment. I unwittingly had only the simplest model in mind when I wrote this answer, which is presumably the kind of model the OP had in mind (based on the calculation in the question); but since neither I nor the OP actually specified the model.... Yes, good point. I'll try to adjust the wording to account for this. Thanks! – Chiral Anomaly Dec 7 '18 at 0:34 • @AccidentalFourierTransform One question, though: Doesn't it make sense to first define the form that a C, P, or T transformation should have, and then ask whether or not the given model has any symmetries with that form? 
If that does make sense, then wouldn't we want to define T to be a transformation that replaces $\phi(t,x)$ with $M\phi(-t,x)$, modulo continuous Lorentz transf's of course, where $M$ is some arbitrary matrix that allows for mixing among the various fields? An wouldn't we want to define C as replacing $\phi(t,x)$ with $B\phi^*(t,x)$ for some matrix B (mod contin LTs)? – Chiral Anomaly Dec 7 '18 at 0:49 • You may very well do that, but my point is that you don't have to. What you call $CT$, I may call $T$, and vice versa. The defining property of time-reversal is anti-unitarity; you can always compose any given $T$ with some (typically $\mathbb Z_2$-valued) unitary transformation, and the result will stay anti-unitary, and so it will define a valid new time-reversal $T'$. There is no a priori reason to impose that $T$ is to map $\phi(t)\to\phi(-t)$. It is certainly a valid possibility, but there are many more, and any of them can be called $T$. (Other transformations would be $T',T'',\dots$). – AccidentalFourierTransform Dec 7 '18 at 1:19 • Yeah, exactly. There is no canonical way of how $T$ should act. While $\phi\to\phi$ is a very natural possibility, the alternative $\phi\to\phi^*$ is also perfectly valid, and it is used very often. One of them is $T$, and the other one $CT$. But which is which is a matter of conventions. So OP is allowed to call $\phi\to\phi^*$ a $T$ transformation. – AccidentalFourierTransform Dec 7 '18 at 1:30
## What is a Transitive Relation – Definition and Examples Here you will learn what a transitive relation on a set is, with a definition and examples based on it. Let's begin – What is a Transitive Relation? Definition : Let A be any set. A relation R on A is said to be a transitive relation iff (a, b) $$\in$$ R and (b, c) $$\in$$ R $$\implies$$ (a, c) $$\in$$ R for all a, b, c $$\in$$ A. For example, on A = {1, 2, 3} the relation R = {(1, 2), (2, 3), (1, 3)} is transitive, while {(1, 2), (2, 3)} is not, since (1, 3) is missing.
# 1 Linear OLS Regression

## 1.1 Fit a Line through Origin to Two Points

Fit a line from the origin through two points: given the equation $$Y=a\cdot X$$, we have two (x, y) pairs.

rng(3);
[x1, x2] = deal(rand(),rand());
[y1, y2] = deal(rand(),rand());
ar_x = [x1,x2]';
ar_y = [y1,y2]';

Fit a line through the two points, passing through the origin. Three formulas provide the same answer.

% simple formula
fl_slope_basic = (1/(x1*x1 + x2*x2))*(x1*y1 + x2*y2);
% (X'X)^(-1)(X'Y)
fl_slope_matrix = inv(ar_x'*ar_x)*(ar_x'*ar_y);
% Use matlab function
tb_slope_fitlm = fitlm(ar_x, ar_y, 'Intercept',false);
fl_slope_fitlm = tb_slope_fitlm.Coefficients{1, 1};

Visualize results.

figure();
hold on;
scatter([x1,x2], [y1,y2]);
xlim([0, 1]);
ylim([0, 1]);
refline(fl_slope_basic, 0);
grid on;
grid minor;
title('Best fit line through origin with two points');
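For readers following along outside Matlab, here is a rough NumPy equivalent of the slope computation (my own sketch, not part of the MEconTools examples; `numpy.linalg.lstsq` stands in for `fitlm` with no intercept): it draws two points and confirms that the scalar formula and the matrix formula $(X'X)^{-1}X'Y$ agree with the least-squares routine.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.random(2)          # two x draws
y = rng.random(2)          # two y draws

# Scalar formula: a = (x1*y1 + x2*y2) / (x1^2 + x2^2)
slope_basic = (x @ y) / (x @ x)

# Matrix formula: (X'X)^(-1) X'Y with X a single column and no intercept
X = x.reshape(-1, 1)
slope_matrix = np.linalg.solve(X.T @ X, X.T @ y)[0]

# Library routine: least squares without an intercept column
slope_lstsq = np.linalg.lstsq(X, y, rcond=None)[0][0]

assert np.isclose(slope_basic, slope_matrix)
assert np.isclose(slope_basic, slope_lstsq)
print(slope_basic)
```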
research on obesity and weight control indicates that # research on obesity and weight control indicates that S 9.9k points Homeostasis, which is the goal of drive reduction, is defined as:the body's tendency to maintain a constant internal state. Mary loves hang-gliding. It would be most difficult to explain Mary's behavior according to:drive-reduction theory. Positive and negative environmental stimuli that motivate behavior are called:incentives. Which of the following is inconsistent with the drive-reduction theory of motivation?Monkeys will work puzzles even if not given a food reward. For a thirsty person, drinking water serves to reduce:a drive. Which of the following is not an example of homeostasis?feeling hungry at the sight of an appetizing food By motivating us to satisfy our physical needs, hunger and thirst serve to:maintain homeostasis. One problem with the idea of motivation as drive reduction is that:because some motivated behaviors do not seem to be based on physiological needs, they cannot be explained in terms of drive reduction. Food deprivation is to ________ as hunger is to ________.need; drive The role of learning in motivation is most obvious from the influence of:incentives. Basal metabolic rate is the body's resting rate of:energy expenditure. Electrical stimulation of the lateral hypothalamus will cause an animal to:begin eating. Research on the physiological basis of hunger has indicated that:hunger continues in humans whose cancerous stomachs have been removed. Destruction of the ventromedial hypothalamus of a rat is most likely to:cause the rat to become extremely fat. The brain area that when stimulated suppresses eating is the:ventromedial hypothalamus. Leptin, a hunger-dampening protein, is secreted by:fat cells. Dr. Milosz electrically stimulates the lateral hypothalamus of a well-fed laboratory rat. This procedure is likely to:cause the rat to begin eating. Hunger is increased by ________ but it is decreased by ________.cause the rat to begin eating. Ancel Keys and his colleagues observed that men on a semistarvation diet:lost interest in sex and social activities. An explanation of motivation in terms of homeostasis is best illustrated by the concept of:set point. Need is to ________ as drive is to ________.food deprivation; hunger Lack of body fluids is to cold water as ________ is to ________.need; incentive On some college football teams, players are rewarded for outstanding performance with a gold star on their helmets. This practice best illustrates the use of:incentives. Victims of a famine will often eat unappetizing and nutritionally poor foods simply to relieve their constant hunger. Their behavior is best explained in terms of:drive-reduction theory. For a hungry person, the consumption of food serves to:maintain homeostasis. Randy, who has been under a lot of stress lately, has intense cravings for sugary junk foods, which tend to make him feel more relaxed. Which of the following is the most likely explanation for his craving?The extra sugar tends to lower blood insulin level, which promotes relaxation. A starving rat will lose all interest in food if its ________ is destroyed.lateral hypothalamus Destruction of the ventromedial hypothalamus of a rat is most likely to:cause the rat to become extremely fat. An animal's stomach and intestines will process food more rapidly and the animal will become extremely fat if its:ventromedial hypothalamus is destroyed. 
Hunger is increased by ________ but it is decreased by ________.stimulation of the lateral hypothalamus; stimulation of the ventromedial hypothalamus Two rats have escaped from their cages in the neurophysiology lab. The technician needs your help in returning them to their proper cages. One rat is grossly overweight; the other is severely underweight. You confidently state that the overweight rat goes in the "________-destruction" cage, while the underweight rat goes in the "________-destruction" cage.ventromedial hypothalamus; lateral hypothalamus Increases in the hormone insulin lead to:decreasing blood glucose levels. The concept of a set point is relevant to understanding the experience of:hunger. A need refers to:a physiological state that usually triggers motivational arousal. An aroused or activated state that is often triggered by a physiological need is called a(n):drive. Some students work hard in school in order to attain high grades. This best illustrates the importance of:incentives. Ali's parents have tried hard to minimize their son's exposure to sweet, fattening foods. If Ali has the occasion to taste sweet foods in the future, which of the following is likely:He will display a preference for sweet tastes. Increases in insulin will:lower blood sugar and trigger hunger. The set point is:the specific body weight maintained automatically by most adults over long periods of time. The relative risk of death among healthy nonsmokers is highest for:overweight men. In one experiment, professional actors played the role of either normal-weight or overweight job applicants. Research participants' willingness to hire the applicants revealed:greater discrimination against overweight women than against overweight men. By dramatically reducing her daily caloric intake, Marilyn plans to reduce her normal body weight by 10 to 15 percent. Research suggests that after three or four weeks of sustained dieting, Marilyn will:have a lower resting metabolic rate. Given an obese parent, boys are at an ________ risk for obesity and girls are at a ________ risk for obesity.increased; increased The number of fat cells a person has is influenced by:all of the alternatives. In a classic experiment, obese patients whose daily caloric intake was dramatically reduced lost only 6 percent of their weight. This limited weight loss was due, at least in part, to the fact that their dietary restriction led to a(n):decrease in their metabolic rate. In animals, destruction of the lateral hypothalamus results in ________, whereas destruction of the ventromedial hypothalamus results in ________.loss of hunger; overeating Research on obesity and weight control indicates that:no matter how carefully people diet, they can never lose fat cells. According to health psychologists, which of the following would be the best advice or encouragement to offer someone who wants to lose excess weight?Reduce your weight gradually over a period of several months." The correct order of the stages of Masters and Johnson's sexual response cycle is:excitement; plateau; orgasm; resolution Many sexually active American adolescents fail to avoid pregnancy because:they have mistaken ideas about effective birth control methods. The refractory period is the:time span after orgasm during which a male cannot be aroused to another orgasm. Research on sex hormones and human sexual behavior indicates that:women's sexual interests are only loosely linked to the phases of their menstrual cycles. 
Of the following parts of the world, teen intercourse rates are highest in:Western Europe. Kamil, a 33-year-old lawyer, experiences premature ejaculation. Through behaviorally oriented therapy, he would most likely learn to minimize his problem by:learning ways to control his urge to ejaculate. Which of the following was not identified as a contributing factor in the high rate of unprotected sex among adolescents?thrill-seeking Evidence that brain differences between homosexuals and heterosexuals influence sexual orientation is provided by the fact that these differences:originate at about the time of birth. Research on the environmental conditions that influence sexual orientation indicates that:the reported backgrounds of homosexuals and heterosexuals are similar. Why do people with a high need for achievement prefer moderately difficult tasks?Moderately difficult tasks present an attainable goal in which success is attributable to their own skill. Psychologist Henry Murray asked research participants to invent stories about ambiguous pictures. The stories were then scored for content related to acts of heroism, pride, and other signs of:achievement motivation. The health risks associated with obesity are generally the greatest for those who carry their excess weight around their:bellies. New research has linked women's obesity to their risk of late-life:Alzheimer's disease. Lucille has been sticking to a strict diet but can't seem to lose weight. What is the most likely explanation for her difficulty?Her prediet weight was near her body's set point. Research on obesity and weight control indicates that: Student Responseno matter how carefully people diet, they can never lose fat cells. The text suggests that a neophobia for unfamiliar tastes:protected our ancestors from potentially toxic substances. Having lost weight, formerly obese individuals have ________ fat cells and ________ metabolic rates.smaller; slower After an initial rapid weight loss, a person on a diet loses weight much more slowly. This slowdown occurs because:when a person diets, metabolism decreases. During which phase of the sexual response cycle does the refractory period begin?the resolution phase Research on the sexual response cycle indicates that:enough sperm may be released prior to male orgasm to enable conception. During the resolution phase of the sexual response cycle, people are most likely to experience a rapid decrease in physiological arousal if:they have just experienced orgasm. Compared to European teens, American teens have ________ rates of sexual intercourse and ________ rates of abortion.lower; higher A problem that consistently interferes with one's ability to complete the sexual response cycle is called:a sexual disorder. Which of the following is not true with respect to sexual orientation?With the help of a therapist, most people find it easy to change their sexual orientation A homosexual orientation is:very persistent and difficult to change. People who are high in achievement motivation prefer ________ tasks; people who are low in achievement motivation prefer ________ tasks.moderately difficult; very easy or very difficult Researchers use biological, psychological, and social-cultural levels of analysis to understand hunger motivation. The social-cultural level of analysis is especially likely to emphasize that eating disorders are influenced by:mass media standards of appearance. 
The tendency to overeat when food is plentiful:emerged in our prehistoric ancestors as an adaptive response to alternating periods of feast and famine. Researchers have observed that the incidence of obesity and diabetes among 50,000 nurses was predicted by their:TV viewing habits. The descriptions of orgasm written by men and women are ________, and the subcortical brain regions active in men and women during orgasm are ________:similar; similar Which of the following statements concerning homosexuality is true?New research indicates that sexual orientation may be at least partly physiological. Sexual orientation refers to:a person's enduring sexual attraction toward members of a particular gender. Which of the following suggestions would be the worst advice for a dieter?"Avoid eating during the day so you can enjoy a big meal in the evening." When an organism's weight falls below its set point, the organism is likely to experience a(n):increase in hunger and a decrease in its metabolic rate. According to Masters and Johnson, the sexual response of males is most likely to differ from that of females during:the resolution phase. The removal of a woman's ovaries may contribute to decreasing sexual interest because her natural ________ level is _______.testosterone; lowered Teens who use alcohol prior to sexual intercourse experience:reduced self-awareness and are less likely to use condoms. Ted is an amateur golfer who has a high need for achievement. Research suggests that Ted most likely prefers playing golf on courses that for him are:moderately difficult. Homeostasis refers to:the tendency to maintain a steady internal state. In order to encourage achievement motivation in the classroom, students should be taught to attribute their good grades to:their own hard work. Which of the following is not necessarily a reason that obese people have trouble losing weight?Obese people tend to lack willpower. Although John has been obese for as long as he can remember, he is determined to lose excess weight with a special low-calorie diet. John is likely to have difficulty losing weight while dieting because:fat tissue can be maintained by fewer calories than can other body tissues. Evidence that obesity is influenced by factors in addition to genetics includes the fact that:obesity is more common today than it was 40 years ago.
# Equilibrium and Center of Gravity problem

1. Nov 27, 2006

### p0ke

I'm having some trouble finding a definitive answer for this problem of my physics lab. The large quadricep muscles in the upper leg terminate at the lower end in a tendon attached to the upper end of the tibia. The forces on the lower leg when the leg is extended are modeled below (attached jpg), where T is the tension in the tendon, w is the force of gravity acting on the lower leg, and F is the weight of the foot. Find T when the tendon is at an angle of 20 degrees with the tibia, assuming that w = 180 N, F = 30.0 N, and the leg is extended at an angle of 45.0 degrees with the vertical. Assume that the center of gravity of the lower leg is at its center, and that the tendon attaches to the lower leg at a point one fifth of the way down the leg. I've been looking through some examples to try to come up with a general idea, but this problem seems somewhat different as no lengths or distances are given except that the tendon attaches 1/5 of the way down the leg. So far, I've resolved the forces into x and y components to get $$\Sigma F_x = T_x + w_x + F_x = 0$$, which gives $$T_x = 0$$. Then in the y direction, I'm not sure if I can do this, but I found $$T\sin 65^\circ - 180 - 30 = 0$$ to get T = 231.7. Knowing that this is supposed to be a torque-related problem I also used $$\Sigma \tau = \tau_T + \tau_w + \tau_F = 0$$, with $$\tau_T = r_T\, T\sin\theta$$. This is where I am stuck. How would I use the torque equilibrium equation if I am not given any distances? Any hints or suggestions would be really helpful!

Attached Files: equilibrium.JPG

2. Nov 28, 2006

### andrevdh

Welcome to PF p0ke. The length of the leg will appear in the calculation of all three torques, so you can cancel it out in "the sum of the torques = zero". You can then take moments about the knee (pivot point), taking $$l$$ as the length of the tibia, which then cancels in the equation. The only unknown in the torque equation will then be the tension in the muscle, so you only need to consider this equation. There will also be a reaction force at the knee acting on the tibia, which need not be included in the torque equation if you take the torques about the knee.

Last edited: Nov 28, 2006
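Picking up andrevdh's hint, here is a hedged numerical sketch (not a verified solution; the geometry is my own reading of the missing figure): taking torques about the knee with tibia length l, and assuming the lever arms of the vertical forces w and F are l/2 and l reduced by sin 45° because the tibia is tilted 45° from the vertical, while the tendon pulls at 20° to the tibia at l/5, the length l cancels as promised.

```python
import math

# Given data from the problem statement
w = 180.0                            # weight of the lower leg (N)
F = 30.0                             # weight of the foot (N)
theta_tendon = math.radians(20.0)    # tendon angle with the tibia
theta_leg = math.radians(45.0)       # leg angle with the vertical

# Torque balance about the knee (tibia length l cancels):
#   T * sin(20 deg) * (l/5) = [w*(l/2) + F*l] * sin(45 deg)
# The sin(45 deg) factor on the gravity torques is an assumption about the
# figure's geometry, not something stated explicitly in the thread.
T = (0.5 * w + F) * math.sin(theta_leg) / (0.2 * math.sin(theta_tendon))
print(f"T = {T:.0f} N")   # roughly 1.2e3 N under these assumptions
```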
# Converting Sigma notation.... by Pranav-Arora Tags: converting, notation, sigma HW Helper P: 6,164 Quote by Pranav-Arora I like Serena said that define a function of x and integrate it but i still don't get how he got x2k? I defined an arbitrary function s(x) that looks a bit like your problem with the special property that if you substitute x=1, it is identical to your problem. It's a trick to solve your problem. The choice of the power 2k was inspired so that the factor (2k+1) would disappear during integration. P: 3,060 Quote by I like Serena I defined an arbitrary function s(x) that looks a bit like your problem with the special property that if you substitute x=1, it is identical to your problem. It's a trick to solve your problem. The choice of the power 2k was inspired so that the factor (2k+1) would disappear during integration. Ok i got it!! But how i would integrate the expression. It involves sigma notation and i have never done integration of any expression which involves sigma notation. HW Helper P: 6,164 Quote by Pranav-Arora Ok i got it!! But how i would integrate the expression. It involves sigma notation and i have never done integration of any expression which involves sigma notation. So write out the terms of the summation, do the integration, and combine the resulting terms back into sigma notation. P: 3,060 Quote by I like Serena So write out the terms of the summation, do the integration, and combine the resulting terms back into sigma notation. I did as you said. After integrating, i got $$x+nx^3+\frac{n(n-1)x^5}{2!}+\frac{n(n-1)(n-2)x^7}{3!}........x^{2n+1}$$ Now what should i do next? HW Helper P: 3,324 To be frank, I have never answered such questions myself either, but I took I like Serena's advice and it worked out wonderfully. We'll start again just to make things clear, To solve $$\sum_{k=0}^{n}(2k+1)^{n}C_k$$ we define a function $$f(x)=\sum_{k=0}^{n}(2k+1){^n}C_kx^{2k}$$ we make it x2k because the integral of that is $\frac{x^{2k+1}}{2k+1}$ and notice how that denominator will cancel with the (2k+1) factor in the original question. So we have, $$\int{f(x)dx}=\sum_{k=0}^{n}{^n}C_kx^{2k+1}$$ And here is the tricky part, we need to convert the right side into a binomial expression using the formula $$\sum_{k=0}^{n}{^n}C_ka^kb^{n-k}=(a+b)^n$$ It is clear that the b is again missing, which it is just hidden as b=1, but we need to convert the x2k+1 in such a way that it is equivalent to ak. Use your rules for indices to convert it in such a way. HW Helper P: 3,324 What might have been easier for you is you could've split the summation into $$2\sum k{^n}C_k+\sum {^n}C_k$$ and then defined $$s(x)=2\sum k{^n}C_kx^{k+1}+\sum {^n}C_kx^{k+1}$$ HW Helper P: 6,164 Quote by Pranav-Arora I did as you said. After integrating, i got $$x+nx^3+\frac{n(n-1)x^5}{2!}+\frac{n(n-1)(n-2)x^7}{3!}........x^{2n+1}$$ Now what should i do next? $$\sum_{k=0}^n {^n}C_k (2k+1) x^{2k} = {^n}C_0 + {^n}C_1 \cdot 3 x^2 + {^n}C_2 \cdot 5 x^4 + ... + {^n}C_k (2k + 1) x^{2k} + ...$$ Integration would give: $${^n}C_0 \cdot x + {^n}C_1 \cdot x^3 + {^n}C_2 \cdot x^5 + ... + {^n}C_k x^{2k+1} + ...$$ Converting back to sigma notation: $$\sum_{k=0}^n {^n}C_k x^{2k+1}$$ P: 3,060 Quote by Mentallic To be frank, I have never answered such questions myself either, but I took I like Serena's advice and it worked out wonderfully. 
We'll start again just to make things clear, To solve $$\sum_{k=0}^{n}(2k+1)^{n}C_k$$ we define a function $$f(x)=\sum_{k=0}^{n}(2k+1){^n}C_kx^{2k}$$ we make it x2k because the integral of that is $\frac{x^{2k+1}}{2k+1}$ and notice how that denominator will cancel with the (2k+1) factor in the original question. So we have, $$\int{f(x)dx}=\sum_{k=0}^{n}{^n}C_kx^{2k+1}$$ And here is the tricky part, we need to convert the right side into a binomial expression using the formula $$\sum_{k=0}^{n}{^n}C_ka^kb^{n-k}=(a+b)^n$$ It is clear that the b is again missing, which it is just hidden as b=1, but we need to convert the x2k+1 in such a way that it is equivalent to ak. Use your rules for indices to convert it in such a way. Would it be like this (x2k+1+1)n. P: 3,060 Quote by I like Serena Hmm, what I intended was this: $$\sum_{k=0}^n {^n}C_k (2k+1) x^{2k} = {^n}C_0 + {^n}C_1 \cdot 3 x^2 + {^n}C_2 \cdot 5 x^4 + ... + {^n}C_k (2k + 1) x^{2k} + ...$$ Integration would give: $${^n}C_0 \cdot x + {^n}C_1 \cdot x^3 + {^n}C_2 \cdot x^5 + ... + {^n}C_k x^{2k+1} + ...$$ Converting back to sigma notation: $$\sum_{k=0}^n {^n}C_k x^{2k+1}$$ I did the same way. HW Helper P: 3,324 Quote by Pranav-Arora Would it be like this (x2k+1+1)n. Noo... $$\sum{^n}C_kx^k1^{n-k}=(x+1)^n$$ and not $$(x^k+1)^n$$ Use the fact that $$a^{b+1}=a\cdot a^b$$ and $$a^{2b}=\left(a^2\right)^b$$ P: 3,060 Quote by Mentallic Noo... $$\sum{^n}C_kx^k1^{n-k}=(x+1)^n$$ and not $$(x^k+1)^n$$ Use the fact that $$a^{b+1}=a\cdot a^b$$ and $$a^{2b}=\left(a^2\right)^b$$ I tried it but got stuck again. I did it like this:- $$x^{2k+1}=x^{2k}.x=(x^2)^k.x$$ HW Helper P: 3,324 Quote by Pranav-Arora I tried it but got stuck again. I did it like this:- $$x^{2k+1}=x^{2k}.x=(x^2)^k.x$$ Why did you get stuck? That's exactly what it should be! So now we have $$\int{f(x)dx}=\sum_{k=0}^{n}{^n}C_kx\cdot \left(x^2\right)^k$$ And since x is independent of k, it can move out the front of the summation, so we have $$x\sum_{k=0}^{n}{^n}C_k\left(x^2\right)^k1^{n-k}$$ And now apply the formula to convert it into a binomial. And since you need to find f(1), take the derivative of both sides to get the expression for f(x). P: 3,060 Quote by Mentallic Why did you get stuck? That's exactly what it should be! So now we have $$\int{f(x)dx}=\sum_{k=0}^{n}{^n}C_kx\cdot \left(x^2\right)^k$$ And since x is independent of k, it can move out the front of the summation, so we have $$x\sum_{k=0}^{n}{^n}C_k\left(x^2\right)^k1^{n-k}$$ And now apply the formula to convert it into a binomial. And since you need to find f(1), take the derivative of both sides to get the expression for f(x). If i convert it into binomial, i get $$x.(x^2+1)^n$$ I substitute the value 1 and i get $$2^n$$ But then how i would find out the derivative?? Do i have to first take the derivative and substitute the value 1? HW Helper P: 6,164 Quote by Pranav-Arora If i convert it into binomial, i get $$x.(x^2+1)^n$$ Good! Quote by Pranav-Arora But then how i would find out the derivative?? Do i have to first take the derivative and substitute the value 1? Yep! HW Helper P: 3,324 Quote by Pranav-Arora But then how i would find out the derivative?? Do i have to first take the derivative and substitute the value 1? Yep, that's what I meant by Quote by Mentallic And since you need to find f(1), take the derivative of both sides to get the expression for f(x). You're nearly there! P: 3,060 Thanks!! I think that this time i am right. 
I took the derivative and found it to be $$2nx(x^2+1)^{n-1}$$ Now i substituted the value 1 and i got:- $$2n.2^{n-1}$$ Right...? HW Helper P: 6,164 No, not quite. What you have is not the derivative of $x.(x^2+1)^n$. You need to apply the so called product rule. That is: (u v)' = u' v + u v' And you have to apply the so called chain rule. That is: (u(v))' = u'(v) v' Are you familiar with those rules? P: 3,060 I am not familiar with the product rule but when i calculated the derivative on Wolfram, it was the same as i got?
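To close the loop on this thread, here is a quick numerical check (my own sketch): applying the product rule to $x(x^2+1)^n$ gives $(x^2+1)^n + 2nx^2(x^2+1)^{n-1}$, which at $x=1$ equals $(n+1)2^n$, and that value really does match the original sum $\sum_{k=0}^{n}(2k+1)\,{}^nC_k$.

```python
from math import comb

def lhs(n):
    """The original sum: sum over k of (2k+1) * C(n, k)."""
    return sum((2 * k + 1) * comb(n, k) for k in range(n + 1))

def rhs(n):
    """Value of d/dx [x*(x^2+1)^n] at x = 1, worked out with the product rule:
    (x^2+1)^n + 2*n*x^2*(x^2+1)^(n-1)  ->  2^n + n*2^n = (n+1)*2^n."""
    return (n + 1) * 2 ** n

for n in range(1, 12):
    assert lhs(n) == rhs(n)
print("sum_k (2k+1) C(n,k) = (n+1) 2^n verified for n = 1..11")
```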
# Score 7660 49220 Accepted 100 Subtask no. Testdata Range Constraints Score 1 0~9 $N, M \leq 5000$ 15 / 15 2 0~29 $N, M \leq 30000$ 20 / 20 3 0~44 no additional limits 65 / 65 # Testdata Results Testdata no. Subtasks Time (ms) Memory (KiB) Verdict Score 0 1 2 3 24 24792 Accepted 100 1 1 2 3 24 25012 Accepted 100 2 1 2 3 24 25012 Accepted 100 3 1 2 3 24 24968 Accepted 100 4 1 2 3 24 24900 Accepted 100 5 1 2 3 20 24800 Accepted 100 6 1 2 3 20 24956 Accepted 100 7 1 2 3 24 24876 Accepted 100 8 1 2 3 20 24820 Accepted 100 9 1 2 3 20 24804 Accepted 100 10 2 3 48 26524 Accepted 100 11 2 3 52 26536 Accepted 100 12 2 3 48 26564 Accepted 100 13 2 3 48 26544 Accepted 100 14 2 3 48 26572 Accepted 100 15 2 3 52 26484 Accepted 100 16 2 3 52 26520 Accepted 100 17 2 3 48 26484 Accepted 100 18 2 3 52 26540 Accepted 100 19 2 3 48 26568 Accepted 100 20 2 3 52 26532 Accepted 100 21 2 3 52 26632 Accepted 100 22 2 3 48 26576 Accepted 100 23 2 3 48 26512 Accepted 100 24 2 3 48 26516 Accepted 100 25 2 3 48 26440 Accepted 100 26 2 3 52 26572 Accepted 100 27 2 3 48 26572 Accepted 100 28 2 3 40 26172 Accepted 100 29 2 3 36 25636 Accepted 100 30 3 504 49220 Accepted 100 31 3 500 49120 Accepted 100 32 3 500 49048 Accepted 100 33 3 504 49204 Accepted 100 34 3 596 49140 Accepted 100 35 3 596 49144 Accepted 100 36 3 500 49128 Accepted 100 37 3 504 49144 Accepted 100 38 3 500 49064 Accepted 100 39 3 500 49216 Accepted 100 40 3 252 47444 Accepted 100 41 3 252 47412 Accepted 100 42 3 256 47308 Accepted 100 43 3 252 47272 Accepted 100 44 3 252 47248 Accepted 100 Submitter: Compiler: c++14 Code Length: 2.52 KB
Shifting the phase of an Arduino PWM This is a fairly simple question. Is it possible to use additional circuitry, connected to an Arduino PWM pin, to shift it's phase? As far as I can tell, the Arduino PWM pins can be configured for frequency and duty cycle, but I do not believe that phase can be controlled...at least, not out of the box. I had an Arduino kit, however I'm taking it back today (too expensive, with a ton of parts that I don't need). If I need to use something other than an Arduino to support phase shifted PWM, then I'm open to that. • What type of frequency do you want to shift the phase of, and relative to what? An external circuit can be possible, but depending on the timing different techniques come into play. Is it possible you could accomplish a shift by commanding a single shorter or longer frame period in software? Note that the actual functional element of an "arduino" is a microcontroller costing a few dollars; the rest is convenient packaging, programming interface and accessories. – Chris Stratton Aug 4 '14 at 16:21 • Also want to point out that as long as we are potentially talking about an external phase shift circuit this belongs here and not the Arduino site. – Chris Stratton Aug 4 '14 at 16:23 • As others here have said, if you want more precise control of the timing functions on your microcontroller then you must step outside the arduino library and configure it yourself. Read the datasheet section about timer/counters. – sherrellbc Aug 4 '14 at 17:25 • How much phase shift do you need and how accurate does it have to be? – EM Fields Aug 4 '14 at 19:18 • I need 180 degrees...it does not have to be super accurate, as far as I understand. – jrista Aug 4 '14 at 19:24 • Thanks, I hadn't looked at the datasheet before. If it's possible to shift the phase of the Arduino PWM, then that will work. I'm able to pick up the Arduino Uno R3 for about $19, so it isn't that expensive for the convenience of a handy package that can adapt arduino shields and all that. – jrista Aug 4 '14 at 17:29 • BTW, since this answers the question Arduino-style, I'm ok if you guys want to move this to the Arduino forum. I forgot about that one, but this probably belongs there. – jrista Aug 4 '14 at 17:30 • You can pick up an ATmega328P for about$3, so don't feel compelled to turn this into an Arduino question. Especially since you can't do all this with the Arduino libraries regardless. – Ignacio Vazquez-Abrams Aug 4 '14 at 17:31
# GENLASSO: Generalized LASSO In ADMM: Algorithms using Alternating Direction Method of Multipliers

## Description

Generalized LASSO solves the following problem, $$\textrm{min}_x ~ \frac{1}{2}\|Ax-b\|_2^2 + \lambda \|Dx\|_1$$ where the choice of regularization matrix D leads to different problem formulations.

## Usage

admm.genlasso(A, b, D = diag(length(b)), lambda = 1, rho = 1, alpha = 1, abstol = 1e-04, reltol = 0.01, maxiter = 1000)

## Arguments

A: an (m\times n) regressor matrix
b: a length-m response vector
D: a regularization matrix of n columns
lambda: a regularization parameter
rho: an augmented Lagrangian parameter
alpha: an overrelaxation parameter in [1,2]
abstol: absolute tolerance stopping criterion
reltol: relative tolerance stopping criterion
maxiter: maximum number of iterations

## Value

a named list containing
x: a length-n solution vector
history: dataframe recording iteration numerics. See the section below for more details.

## Iteration History

When you run the algorithm, the output returns not only the solution, but also the iteration history recording the following fields over iterates,
objval: objective (cost) function value
r_norm: norm of primal residual
s_norm: norm of dual residual
eps_pri: feasibility tolerance for primal feasibility condition
eps_dual: feasibility tolerance for dual feasibility condition
In accordance with the paper, iteration stops when both r_norm and s_norm values become smaller than eps_pri and eps_dual, respectively.

## Author

Xiaozhi Zhu

## Examples

## generate sample data
m = 100
n = 200
p = 0.1   # percentage of non-zero elements
x0 = matrix(Matrix::rsparsematrix(n,1,p))
A = matrix(rnorm(m*n),nrow=m)
for (i in 1:ncol(A)){
  A[,i] = A[,i]/sqrt(sum(A[,i]*A[,i]))
}
b = A%*%x0 + sqrt(0.001)*matrix(rnorm(m))
D = diag(n);

## set regularization lambda value
regval = 0.1*Matrix::norm(t(A)%*%b, 'I')

## solve LASSO via reducing from Generalized LASSO
output = admm.genlasso(A,b,D,lambda=regval) # set D as identity matrix

## visualize
## report convergence plot
niter = length(output$history$s_norm)
par(mfrow=c(1,3))
plot(1:niter, output$history$objval, "b", main="cost function")
plot(1:niter, output$history$r_norm, "b", main="primal residual")
plot(1:niter, output$history$s_norm, "b", main="dual residual")
Where does precisely the dificulty in exponentiating a Hamiltonian $H$ in the quantum simulation problem lay? I've read in the Nielsen's, Chuang's "Quantum Computation and Quantum Information": Classical simulation begins with the realization that in solving a simple differential equation such as $$dy/dt = f(y)$$, to first order, it is known that $$y(t + \Delta t) \approx y(t) + f (y)\Delta t$$. Similarly, the quantum case is concerned with the solution of $$id|\psi \rangle/dt = H|\psi \rangle$$, which, for a time-independent $$H$$, is just $$|\psi(t)\rangle = e^{-iHt}|\psi(0)\rangle.\ \ \ \ \ \ (4.96)$$ Since H is usually extremely difficult to exponentiate (it may be sparse, but it is also exponentially large), a good beginning is the first order solution $$|\psi(t + \Delta t)\rangle \approx (I − iH \Delta t)|\psi(t)\rangle$$. This is tractable, because for many Hamiltonians $$H$$ it is straightforward to compose quantum gates to efficiently approximate $$I − iH \Delta t$$. However, such first order solutions are generally not very satisfactory. Efficient approximation of the solution to Equation (4.96), to high order, is possible for many classes of Hamiltonian. For example, in most physical systems, the Hamiltonian can be written as a sum over many local interactions. Specifically, for a system of $$n$$ particles, $$H = \sum_{k=1}^L H_k,\ \ \ \ \ \ \ (4.97)$$ where each $$H_k$$ acts on at most a constant c number of systems, and L is a polynomial in $$n$$. For example, the terms $$H_k$$ are often just two-body interactions such as $$X_i X_j$$ and one-body Hamiltonians such as $$X_i$$. [...] The important point is that although $$e^{−iHt}$$ is difficult to compute, $$e^{−iH_kt}$$ acts on a much smaller subsystem, and is straightforward to approximate, using quantum circuits. This may be a silly question, but I'm stuck with this one. Does the difficulty of obtaining $$e^{-iHt}$$ lies only in its size? Both $$e^{-iHt}$$ and $$e^{-iH_kt}$$ can be seen as matrices (of course, the first one is immensely larger than the latter one) and a Taylor series can be used to approximate both of them. This in turn boils down to just making a number of multiplications of $$H$$ (with different coefficients standing by the consecutive matrices). So, it makes sense for a sparse matrix to be easier to obtain, because we just don't have to do a number of multiplications, which would at the end give 0. There are two things that come to my mind. First of which is a divide-and-conquer approach, where obtainment of $$e^{-iH_kt}$$ is simple and all "small" results are combined to get a big one. In fact, I think that Trotterization is this kind of approach. The second thing is a guess, that maybe $$e^{-iH_kt}$$ can be computed in some different way, than using Taylor series (it's a really wild guess)? TL;DR: Hamiltonian simulation does not just mean "exponentiating $$H$$". It means finding a quantum circuit $$U$$ that approximates the matrix exponentiation $$e^{-iHt}$$. More importantly, the size of the Hamiltonian matrix $$H$$ isn't the key concern here. The gate complexity (or query complexity, in case the Hamiltonian is described as an oracle) of matrix exponentiation is. Simulating arbitrary matrix exponentiations using quantum circuits is computationally very expensive. By imposing specific restrictions on the structure of the local interaction $$H_k$$ matrices (like, $$H_k$$ can act on at most constant $$c$$ number of systems), the complexity of the simulation can be reduced. 
Definitions

The basic goal of the Hamiltonian simulation problem, given a Hamiltonian $$H$$ (a $$2^n\times 2^n$$ Hermitian matrix acting on $$n$$ qubits), is to find an algorithm that approximates $$U$$ such that $$||U - e^{-iHt}|| \leq \epsilon$$, where $$e^{-iHt}$$ is the ideal evolution and $$||.||$$ is the operator norm (aka spectral norm) $$||A|| := \mathrm{max}_{|\psi\rangle\neq 0}\frac{||A|\psi\rangle||}{|||\psi\rangle||},$$ where $$||\psi\rangle|| = \sqrt{\langle \psi|\psi\rangle}$$ is the usual Euclidean norm of $$|\psi\rangle$$.

The key that determines the hardness of this problem is not so much the size of $$H$$ but rather the gate complexity (gate complexity and time complexity are more or less proportional). In order for the Hamiltonian simulation to be efficient, we need $$U$$ to be approximable by a quantum circuit containing $$\mathrm{poly}(n)$$ gates. Most Hamiltonians $$H$$ do not satisfy this criterion and are thus not efficiently simulable (you may ask for the proof for this as a separate question if you're curious!).

Commuting Hamiltonians

Fortunately, most physically occurring Hamiltonians can indeed be simulated efficiently. A very special case is the class of $$k$$-local Hamiltonians: these Hamiltonians can be expressed as $$H = \sum_{j=1}^{m} H_j$$ where each $$H_j$$ acts non-trivially on at most $$k$$ qubits. We generally assume that $$m \leq \binom{n}{k} = \mathcal{O}(n^k)$$. Since $$k$$ is supposed to be a constant, $$m$$ is polynomial in $$n$$. It can be proved from the Solovay-Kitaev theorem that each of the individual $$H_j$$ operators can be simulated efficiently (cf. Ashley Montanaro's lecture notes). Now if the $$H_j$$'s commute we have $$\exp(-iHt) = \exp\Big(-i\Big(\sum_{j=1}^m H_j\Big)t\Big) = \prod_{j=1}^{m}\exp(-iH_jt).$$ From here it can also be shown that for any $$t$$ there exists a quantum algorithm that approximates the operator $$e^{-iHt}$$ to within $$\epsilon$$ in time $$\mathcal{O}(m \ \mathrm{polylog}(m/\epsilon))$$.

Non-Commuting Hamiltonians

However, this technique does not work for non-commuting $$H_j$$'s, since the formula $$e^{-i(A+B)t} = e^{-iAt}e^{-iBt}$$ does not hold for non-commuting matrices $$A$$ and $$B$$. But the Lie-Trotter product formula comes to our rescue, which says

Let $$A$$ and $$B$$ be Hermitian matrices such that $$||A|| \leq K$$ and $$||B|| \leq K$$, for some real $$K$$. Then $$e^{-iA}e^{-iB} = e^{-i(A+B)} + \mathcal{O}(K^2).$$

Applying this formula multiple times shows that, for any Hermitian matrices $$H_1, H_2, \ldots, H_m$$ satisfying $$||H_j|| \leq K \leq 1$$ for all $$j$$, $$e^{-iH_1}e^{-iH_2}\ldots e^{-iH_m} = e^{-i(H_1+\ldots + H_m)} + \mathcal{O}(m^3K^2).$$ Therefore, there is a universal constant $$C$$ such that if $$r \geq Cm^3(Kt)^2/\epsilon$$, $$||e^{-iH_1t/r}e^{-iH_2t/r}\ldots e^{-iH_mt/r} - e^{-i(H_1 + \ldots + H_m)t/r}|| \leq \epsilon/r.$$ Hence, for any such $$r$$, $$||(e^{-iH_1t/r}e^{-iH_2t/r}\ldots e^{-iH_mt/r})^r - e^{-i(H_1 + \ldots + H_m)t}|| \leq \epsilon$$ follows from the lemma which states that if $$(U_i), (V_i)$$ are sequences of $$m$$ unitary operators satisfying $$||U_i - V_i|| \leq \epsilon$$ for all $$1\leq i \leq m$$, then $$||U_m\ldots U_1 - V_m\ldots V_1||\leq m\epsilon$$.

Given this result, any $$k$$-local Hamiltonian can be simulated simply by simulating the evolution of each term for time $$t/r$$ to high enough accuracy and concatenating the individual simulations. The larger the $$r$$, the more accurate the simulation.
This can be formalized as

Let $$H$$ be a Hamiltonian which can be written as the sum of $$m$$ terms $$H_j$$, each acting non-trivially on $$k = \mathcal{O}(1)$$ qubits and satisfying $$||H_j|| \leq K$$ for some $$K$$. Then, for any $$t$$, there exists a quantum circuit which approximates the operator $$e^{-iHt}$$ to within $$\epsilon$$ in time $$\mathcal{O}(m^3(Kt)^2/\epsilon)$$, up to polylogarithmic factors.

The $$t^2$$ dependency can be further lowered, and the complexity can be improved to $$\mathcal{O}(mkt)$$. You're right that there are other techniques of Hamiltonian simulation like the Taylor series ($$\mathcal{O}\big(\tfrac{t\log^2(t/\epsilon)}{\log \log (t/\epsilon)}\big)$$) and quantum walk ($$\mathcal{O}\big(\tfrac{t}{\sqrt{\epsilon}}\big)$$). With the quantum signal processing algorithm it is $$\mathcal{O}(t + \log \frac{1}{\epsilon})$$.

• Is it the case that we have a proof that 'most Hamiltonians' aren't efficiently simulatable or that we don't have a proof that they are? Dec 25 '19 at 22:43
• @Mithrandir24601 Ashley Montanaro claims in his lecture notes that there is a simple counting-based proof, but I haven't seen that so far (you could try asking him directly...). Otherwise, there's only evidence. Dec 26 '19 at 5:54
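As a concrete illustration of the Lie-Trotter idea in the answer above, here is a small NumPy/SciPy sketch (an illustration only, using tiny random Hermitian matrices rather than a physical, local Hamiltonian): it compares the exact evolution $e^{-i(H_1+H_2)t}$ with the first-order Trotter approximation $\big(e^{-iH_1 t/r}e^{-iH_2 t/r}\big)^r$ and shows the operator-norm error shrinking as $r$ grows.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_hermitian(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

d, t = 8, 1.0
H1, H2 = random_hermitian(d), random_hermitian(d)
exact = expm(-1j * (H1 + H2) * t)

for r in (1, 4, 16, 64, 256):
    step = expm(-1j * H1 * t / r) @ expm(-1j * H2 * t / r)
    trotter = np.linalg.matrix_power(step, r)
    # Spectral (operator) norm of the difference
    err = np.linalg.norm(trotter - exact, ord=2)
    print(f"r = {r:4d}   ||Trotter - exact|| = {err:.2e}")
```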
# What is the conjugate of 7 + 4i? Nov 11, 2015 7 - 4$i$ The conjugate of any complex number $a + b i$ is $a - b i$; likewise, for a complex exponential ${e}^{i a x}$ the conjugate is ${e}^{-i a x}$.
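For anyone checking this sort of thing numerically, Python's built-in complex type will do it directly (a trivial sketch, not part of the original answer):

```python
z = 7 + 4j
print(z.conjugate())   # prints (7-4j)
```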
IEVref: 103-06-01 Language: en Status: Standard Term: period Symbol: T Definition: smallest positive difference between two values of the independent variable at which the values of a periodic quantity are identically repeated. Note 1 to entry: If $f\left(t\right)$ denotes a periodic quantity, then $f\left(t+T\right)=f\left(t\right)$. Note 2 to entry: The term "period duration" is sometimes used in the case of a function of time. Note 3 to entry: The symbol T is mainly used for the period when the independent variable is time. Publication date: 2009-12
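A small numerical illustration of Note 1 (my own sketch, not part of the IEV entry): for $f(t)=\sin t$ the period is $T = 2\pi$, and $f(t+T)=f(t)$ holds to floating-point accuracy on a grid of sample times.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
T = 2 * np.pi                      # period of sin
assert np.allclose(np.sin(t + T), np.sin(t))
print("f(t + T) = f(t) verified for f = sin, T = 2*pi")
```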
# David Udell comments on David Udell’s Shortform

• In the 1920s when λ and CL began, logicians did not automatically think of functions as sets of ordered pairs, with domain and range given, as mathematicians are trained to do today. Throughout mathematical history, right through to computer science, there has run another concept of function, less precise at first but strongly influential always; that of a function as an operation-process (in some sense) which may be applied to certain objects to produce other objects. Such a process can be defined by giving a set of rules describing how it acts on an arbitrary input-object. (The rules need not produce an output for every input.) A simple example is the permutation-operation defined by . Nowadays one would think of a computer program, though the ‘operation-process’ concept was not originally intended to have the finiteness and effectiveness limitations that are involved with computation.

Perhaps the most important difference between operators and functions is that an operator may be defined by describing its action without defining the set of inputs for which this action produces results, i.e., without defining its domain. In a sense, operators are ‘partial functions.’ A second important difference is that some operators have no restriction on their domain; they accept any inputs, including themselves. The simplest example is I, which is defined by the operation of doing nothing at all. If this is accepted as a well-defined concept, then surely the operation of doing nothing can be applied to it. We simply get I. Of course, it is not claimed that every operator is self-applicable; this would lead to contradictions. But the self-applicability of at least such simple operators as , , and seems very reasonable.

The operator concept can be modelled in standard ZF set theory if, roughly speaking, we interpret operators as infinite sequences of functions (satisfying certain conditions), instead of as single functions. This was discovered by Dana Scott in 1969 (pp. 45-6).

--Hindley and Seldin, Lambda-Calculus and Combinators (2008)
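The self-application point in the quoted passage is easy to poke at in code (a loose illustration of the idea in an untyped language, not of the λ-calculus formalism itself; the `rotate` operator is my own example, not the book's): a rule-defined operation can be written down without ever specifying its domain, and the "do nothing at all" operator happily accepts itself as input.

```python
# The "do nothing at all" operator
identity = lambda x: x

# A permutation-style operator: a rule for turning one triple into another
rotate = lambda triple: (triple[1], triple[2], triple[0])

print(rotate((1, 2, 3)))                 # (2, 3, 1)

# Self-application: the identity operator applied to itself is itself
print(identity(identity) is identity)    # True
```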
# Change the decimal point of every value in an R data frame column. R ProgrammingServer Side ProgrammingProgramming To change the decimal point of every value in an R data frame column, we can use round function. For Example, if we have a data frame called df that contains a column say X and we want to have each value with 3 decimal places then we can use the below command − df$X<-round(df$X,3) ## Example 1 Following snippet creates a sample data frame − x<-rnorm(20) df1<-data.frame(x) df1 The following dataframe is created x 1 -0.91562005 2 -0.71486966 3 -1.35440791 4 -0.86207755 5 -0.48550958 6 0.43145743 7 0.20498938 8 -1.06666846 9 0.42006706 10 -1.58312323 11 -3.17485910 12 0.86979277 13 0.51422397 14 0.10609016 15 1.76677390 16 0.37099348 17 -0.09970752 18 -0.44883679 19 -0.78389296 20 -0.60084347 To change the decimal point of every value in column x of df1 on the above created data frame, add the following code to the above snippet − x<-rnorm(20) df1<-data.frame(x) df1$x<-round(df1$x,2) df1 ## Output If you execute all the above given snippets as a single program, it generates the following Output − x 1 -0.92 2 -0.71 3 -1.35 4 -0.86 5 -0.49 6 0.43 7 0.20 8 -1.07 9 0.42 10 -1.58 11 -3.17 12 0.87 13 0.51 14 0.11 15 1.77 16 0.37 17 -0.10 18 -0.45 19 -0.78 20 -0.60 ## Example 2 Following snippet creates a sample data frame − y<-rexp(20,3.25) df2<-data.frame(y) df2 The following dataframe is created y 1 0.12846498 2 0.45411494 3  0.07496508 4  0.32808533 5 0.11909036 6  0.29416546 7  0.12022920 8  0.21379528 9  0.10379913 10 0.32190311 11 0.52390563 12 0.20316711 13 0.03514671 14 0.11567971 15 0.44197119 16 0.17787958 17 0.03580091 18 0.25273254 19 0.09771133 20 0.04789005 To change the decimal point of every value in column y of df2 on the above created data frame, add the following code to the above snippet − y<-rexp(20,3.25) df2<-data.frame(y) df2$y<-round(df2$y,4) df2 ## Output If you execute all the above given snippets as a single program, it generates the following Output − y 1 0.1285 2 0.4541 3 0.0750 4 0.3281 5 0.1191 6 0.2942 7 0.1202 8 0.2138 9 0.1038 10 0.3219 11 0.5239 12 0.2032 13 0.0351 14 0.1157 15 0.4420 16 0.1779 17 0.0358 18 0.2527 19 0.0977 20 0.0479 Published on 05-Nov-2021 05:54:26
# 6.4.pdf - Zachary Dorff Assignment Section 6.4 due at...

Zachary Dorff Zhu MAT 266 ONLINE B Fall 2019 Assignment Section 6.4 due 10/27/2019 at 11:59pm MST

1. (1 point) Use the Table of Integrals in the back of your textbook to evaluate the integral: $$\int \sec^3(5x)\,dx$$

Solution: From a table of integrals, we have $$\int \sec^3(u)\,du = \frac{1}{2}\sec(u)\tan(u) + \frac{1}{2}\ln\left|\sec(u)+\tan(u)\right| + C$$ For $\int \sec^3(5x)\,dx$, we let $u = 5x$, $du = 5\,dx \implies \frac{1}{5}\,du = dx$, and our integral can be written as $$\int \sec^3(5x)\,dx = \frac{1}{5}\int \sec^3(u)\,du = \frac{1}{5}\left[\frac{1}{2}\sec(u)\tan(u) + \frac{1}{2}\ln\left|\sec(u)+\tan(u)\right|\right] + C = \frac{1}{10}\sec(5x)\tan(5x) + \frac{1}{10}\ln\left|\sec(5x)+\tan(5x)\right| + C$$

Correct Answers: (1/(2*5))*(sec(5*x)*tan(5*x)+ln(abs(sec(5*x)+tan(5*x))))

2. (1 point) Use the Table of Integrals in the back of your textbook to evaluate the integral: $$\int \sqrt{6 - 4x - 4x^2}\,dx$$ Note: Use an upper-case "C" for the constant of integration.

Solution: From a table of integrals, we have $$\int \sqrt{a^2 - u^2}\,du = \frac{1}{2}u\sqrt{a^2 - u^2} + \frac{1}{2}a^2\sin^{-1}\frac{u}{a} + C$$ To evaluate $\int \sqrt{6 - 4x - 4x^2}\,dx$ we need to complete the square inside the radical to use the above formula.
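A quick symbolic check of the first antiderivative (my own verification sketch, independent of the assignment system, and written without the absolute value, which is valid on intervals where $\sec(5x)+\tan(5x) > 0$): differentiating the reported answer with SymPy and simplifying should recover the integrand $\sec^3(5x)$.

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Antiderivative reported above, without the absolute value
F = sp.sec(5*x) * sp.tan(5*x) / 10 + sp.log(sp.sec(5*x) + sp.tan(5*x)) / 10

# d/dx of the antiderivative minus the integrand should simplify to 0
difference = sp.diff(F, x) - sp.sec(5*x) ** 3
print(sp.simplify(difference))   # should print 0
```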
When a recipe calls for a certain vegetable, trimmed, do you break out the scissors and prepare to recreate that greasy sideways wig Justin Bieber always sports or possibly rev up the chainsaw? It's actually referring to neither of those kinds of "trim," believe it or not. (A typical reader question: "Recipe I want to make says 'squash should be trimmed and cut'. Does this mean to peel it?") Here's how to trim several kinds of vegetables.

String beans, sugar snap peas, okra, cucumbers, carrots: these get the "top 'n tail" treatment, with both ends sliced off.

Turnips, beets, celery root: Top 'n tail these guys too, but to keep them from rolling around while you're trying to do so, risking life and fingers, carefully slice off a piece of the side to square the vegetable and keep it sitting still on the cutting board while you're trimming the edges.

Broccoli, cauliflower, asparagus: Cut the bottom inch to two inches off the stem. If the stalks still seem a little tough, a quick swipe with a vegetable peeler should thin them out enough to be tender when cooked.

Leafy greens like romaine lettuce, chard and kale: Slice off the top inch to inch and a half as well as the thicker, fibrous stems.

Brussels sprouts: Just slice off the bottom-most portion of the base where it met the stem. Don't trim too much or the leaves will fall off the sprout.

Courgette definition is: zucchini (French, diminutive of courge gourd, from Middle French, from Latin cucurbita). How to use courgette in a sentence: "She quartered three green tomatoes and sliced three courgettes …" (Alice Thomas Ellis); "… for centuries Italian peasants have made courgettes into fritters or squash-blossom sandwiches." (Noel Vietmeyer); "… a 'fritto misto' type selection of battered and deep-fried courgette, celeriac and carrot …". Source: "Courgette." Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/courgette. Accessed 26 Nov. 2020.

Trimmed Mean: a method of averaging that removes a small percentage of the largest and smallest values before calculating the mean. Example: figure out the 20% trimmed mean for the number set {8, 3, 7, 1, 3, 9}.

First find the trimmed count g, the number of values to be trimmed from each end of the set: Trimmed Mean Percent = $\frac{20}{100} = 0.2$ and Sample Size = 6, so $g = \lfloor 0.2 \times 6 \rfloor = \lfloor 1.2 \rfloor = 1$.

Arrange the given numbers in ascending order: 1, 3, 3, 7, 8, 9. Since the trimmed count is 1, remove one number from each end, i.e. the first number (1) and the last number (9), leaving 3, 3, 7, 8. The trimmed mean is then the ordinary mean $\mu = \frac{\sum X_i}{n}$ applied to this trimmed set:

$$\mu_{\text{trimmed}} = \frac{\text{sum of the trimmed set}}{\text{number of values in the trimmed set}} = \frac{3 + 3 + 7 + 8}{4} = \frac{21}{4} = 5.25$$

The trimmed mean of the given numbers is 5.25.
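The same worked example can be reproduced with SciPy (a small sketch to confirm the arithmetic above): `scipy.stats.trim_mean` cuts the given proportion from each end after sorting, which matches the hand calculation.

```python
import numpy as np
from scipy.stats import trim_mean

data = np.array([8, 3, 7, 1, 3, 9])

# 20% trimmed mean: floor(0.2 * 6) = 1 value removed from each end after sorting
print(trim_mean(data, proportiontocut=0.2))   # 5.25

# The same thing by hand
trimmed = np.sort(data)[1:-1]                 # drop the smallest and largest value
print(trimmed.mean())                         # 5.25
```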
# What is dark matter? 1. Aug 19, 2009 ### Baboon What is dark matter? How was dark matter formed? Any replies would be greatly appreciated. Last edited by a moderator: Aug 19, 2009 2. Aug 19, 2009 ### fatra2 Simply put, dark matter is matter that is dark. hahaha!!! A bit more details would probably be helpful. Let's start from the beginning. By looking at galaxies around us, we can see them spinning around a center. The speed of rotation depends on the amount of matter in the galaxy. The more matter the faster the spinning. Nothing too complicated up to now. Problem comes here. Calculating the rotational speed of galaxies from the only mass we see does not explain the speed observed in the telescops. Two solutions are possible: 1. our laws of phyiscs are wrong, 2. some matter is hidden somewhere. Of course, we could never imagine that we made a mistake drawing the laws of physics, therefore we had to look into matter that is not seen (therefore dark matter). 3. Aug 19, 2009 ### Baboon What about ...Dark matter is matter that cannot be detected by its emitted radiation but whose presence can be inferred from gravitational effects on visible matter such as stars and galaxies. Estimates of the amount of matter in the universe based on gravitational effects consistently suggest that there is far more matter than is directly observable. 4. Aug 19, 2009 ### fatra2 That's the beauty of this subject. By definition of "dark", dark matter does not emit any radiation. Now the gravitational pull is the reason why we started wondering about dark matter. Since then, we came a long way. We know that there are many candidates that could explain this missing matter. First, let's not forget the massive objects (starts) at the end of the life, like white dwarfs, neutron stars, black holes don't emit radiation (or very few). They partly explain this rotational overspeed. Therefore, scientists attention turned to find what is the rest. You have the choice, between neutrinos, WIMP (weakly interactive massive particles) and many more. Since this is not my field, I can only give few details. Your next comment might be on the neutrinos, thinking "how can such small particles explains missing mass of the universe???" My answer would be quite simple. Of course one does make that much of a difference in our Universe. But remember that more than $$10^{11}$$ particle pass through every kg of your body every second of your life. To make the little story complete. These neutrinos have no electric charge (no electric field, and don't interact with matter, except for direct collision), very little mass (from what I remember less than 1/1000 the mass of the electron). They seem to be very good candidate for this dark matter. Cheers 5. Aug 19, 2009 ### George Jones Staff Emeritus The most massive neutrinos (there are three flavours) have mass less than 1/1000000 the mass of the electron (page 396 of the second edition of Introduction to Elementary Particles by David Griffiths). Neutrinos likely account for only a small fraction of dark matter mass. Also, neutrino dark matter cannot account for structure formation in the early universe that leads to the galaxies and clusters of galaxies that we observe. Neutrinos move too fast to allow this to happen. 6. Aug 19, 2009 ### fatra2 Thank you for the clarifications on neutrinos. I gave the numbers from the top of my head. You might be right to say that neutrino account for only a fraction of the dark matter. 
From my understanding, we are just at the beginning of this field, and discoveries will most likely enlighten us in the near future. Facts are that dark matter seems to be out there. We just need to find the right place to look for it. 7. Aug 19, 2009 ### Baboon Im the beginner in the physics and it is simply interesting to me Forgive for a silly or simple question tnx http://www.u-n-i-v-e-r-s-e.com/the_Universe.html" [Broken] Last edited by a moderator: May 4, 2017 8. Aug 19, 2009 ### Heisenberg. I have a question that regards to some of the comments posted above - According to string theory dark matter might possibly be a higher vibration of the superstring -Since string theory claims that us three dimensional beings can only see the lowest vibration of the superstring (e.g atoms, light) then dark matter might be the next set in vibrations - how popular is this string theory interpretation of dark matter, for I noticed it was not mentioned above? Is the reason neutrinos are so seemingly elusive to us the fact they have a higher vibration or are neutrions seperate from string theory altogether? 9. Aug 20, 2009 ### Chronos Dark matter is like neutrinos, we know its there but is mighty hard to directly detect. Most scientists doubt it is a Baskin-Robbins collage of neutrinos, rather suspecting it is a fundamentally different family of particles [I suspect there is more than one flavor, as is the case with neutrinos]. I usually generally avoid string discussions. The music is lovely, but, there are no lyrics. 10. Aug 24, 2009 ### Hippasos 11. Aug 24, 2009 ### Chronos Expanding on [repeating?] what George said, neutrinos travel nearly at the speed of light. This is not conducive to large scale structure formation. Dark matter appears to travel around the same speed as ordinary matter. 12. Aug 24, 2009 ### ideasrule I haven't read the paper, but it seems to address why the galaxies are accelerating away from us, and not why galaxies have the observed rotation curve. 13. Aug 24, 2009 ### George Jones Staff Emeritus 14. Aug 24, 2009 ### DaveC426913 It might be more more intuitive to see it as a form of matter (it could be much like protons and electrons for all we know) that simply does not interact with photons - neither absorbing them nor emitting them. If it does not intereact with EM radiation, then it is invisible to all our sensory apparati yet still interacts normally with gravity. 15. Aug 24, 2009 ### kldickson What kind of technology would we need to positively detect dark matter and go beyond inferring its existence? 16. Aug 24, 2009 ### DaveC426913 Why, a Dark Matter Detector of course.:tongue: (Go head. Ask what a DMD is and how it works.) Seriously. You do realize that, since we don't know what it is or why we can't see it, there is no way of knowing what it would take... 17. Aug 25, 2009 ### Chronos Particle physics is the current search method. Even dark matter particles have a probability of interacting with normal matter, or other dark matter particles, if you observe a sufficient number of collisions. 18. Aug 25, 2009 ### George Jones Staff Emeritus I think that if the LHC finds evidence of supersymmetry, the case for non-baryonic dark matter will be greatly strengthened. 19. Aug 25, 2009 ### kldickson Yes, dark matter particles don't emit electromagnetic radiation, but surely there are other ways of positively identifying them. What is known from their interactions with particles that do emit radiation? 20. 
Aug 26, 2009

### fatra2

The only way of identifying dark matter is through indirect effects. For example, when a neutrino makes a direct hit on a nucleus, we can only measure the recoil of the nucleus and deduce that it was caused by a neutrino. If there were a direct way of detecting dark matter, it would become "visible" in some way, and could not be called "dark" matter anymore.

Cheers

21. Aug 26, 2009

### azzkika

Am I correct in thinking the term 'dark' implies no emission of electromagnetic radiation whatsoever? If that is correct, does that imply that dark matter does not interact with 'fields' the same way as normal matter, if at all? Referring to the neutrino discussion, does a particle's gravitational effect on another particle it passes increase the faster it travels? I'm very much an amateur in physics, but if this were so, then maybe neutrinos could constitute more of the dark matter if they are all travelling in the right direction.

22. Aug 26, 2009

### DaveC426913

Its behaviour is dictated by what we've given it as a nickname? :uhh: Seems kind of like the tail wagging the dog, wouldn't you say?

23. Aug 27, 2009

### Chronos

'Dark' was coined in reference to its resistance to detection by means of kinetic reactions and EM emissions. DM is very much like neutrinos. It took us many years to confirm the existence of neutrinos; it will take us many more to detect DM particles. Neutrinos travel at nearly light speed, making them relatively easy to detect. DM does not, making it much harder to detect.

24. Aug 30, 2009

### _PJ_

I must admit, I've never liked the ideas of Dark Matter (or Dark Energy for that matter - pun unintentional). However, neutrinos perhaps account for some of it. It has occurred to me that gravitational effects of some bodies may be 'concealed' by other matter, for example the consensus that there are black holes at the centre of many galaxies (perhaps all), including the Milky Way. The Milky Way has what is called the Great Attractor near its centre, and around this, many stars and their associated families no doubt are pulled into tight orbits. All this mass, which we cannot detect individually but only infer from the motion and the radiation emitted, may therefore be 'hiding' greater mass behind it?

25. Aug 31, 2009

### Chronos

The 'bullet cluster' study is the smoking gun in the case for dark matter. See:

A direct empirical proof of the existence of dark matter
Douglas Clowe (1), Marusa Bradac (2), Anthony H. Gonzalez (3), Maxim Markevitch (4), Scott W. Randall (4), Christine Jones (4), Dennis Zaritsky (1) ((1) Steward Observatory, Tucson, (2) KIPAC, Stanford, (3) Department of Astronomy, Gainesville, (4) CfA, Cambridge)
http://arxiv.org/abs/astro-ph/0608407

We present new weak lensing observations of 1E0657-558 (z=0.296), a unique cluster merger, that enable a direct detection of dark matter, independent of assumptions regarding the nature of the gravitational force law. Due to the collision of two clusters, the dissipationless stellar component and the fluid-like X-ray emitting plasma are spatially segregated. By using both wide-field ground based images and HST/ACS images of the cluster cores, we create gravitational lensing maps which show that the gravitational potential does not trace the plasma distribution, the dominant baryonic mass component, but rather approximately traces the distribution of galaxies.
An 8-sigma significance spatial offset of the center of the total mass from the center of the baryonic mass peaks cannot be explained with an alteration of the gravitational force law, and thus proves that the majority of the matter in the system is unseen.
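The rotation-curve argument made earlier in this thread can be illustrated numerically: for a circular orbit, $v(r) = \sqrt{G\,M(<r)/r}$, so once you are outside most of the visible mass the speed should fall off roughly as $1/\sqrt{r}$, whereas observed curves stay approximately flat, which is exactly what the extra unseen mass is invoked to explain. The sketch below is only a toy point-mass model with illustrative numbers, not data for any particular galaxy.

```python
# Toy illustration of the rotation-curve argument: Keplerian fall-off expected
# from the visible mass vs. a roughly flat observed curve. Numbers are illustrative.
import numpy as np

G = 6.674e-11                  # m^3 kg^-1 s^-2
M_visible = 1e41               # kg, ~5e10 solar masses treated as a central point mass
kpc = 3.086e19                 # m

r = np.linspace(2, 30, 8) * kpc            # radii outside most of the visible mass
v_keplerian = np.sqrt(G * M_visible / r)   # speed expected from visible mass alone
v_observed = np.full_like(r, 2.2e5)        # ~220 km/s, roughly flat as observed

# Mass needed inside each radius to support the flat curve: M(<r) = v^2 r / G
M_needed = v_observed**2 * r / G
for ri, vk, Mn in zip(r / kpc, v_keplerian / 1e3, M_needed / M_visible):
    print(f"r = {ri:5.1f} kpc   v_Kepler = {vk:6.1f} km/s   M_needed/M_visible = {Mn:5.1f}")
```

The growing ratio in the last column is the "missing mass": far more gravitating matter is needed at large radii than the visible component provides.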
# How do you evaluate e^( ( pi)/4 i) - e^( ( 11 pi)/6 i) using trigonometric functions? May 26, 2017 ${e}^{\frac{\pi}{4} i} - {e}^{\frac{11 \pi}{6} i} = \frac{\sqrt{2} - \sqrt{3}}{2} + \frac{\sqrt{2} + 1}{2} i$ #### Explanation: There is a special formula for complex numbers that we will use to solve this problem: ${e}^{i \theta} = \cos \theta + i \sin \theta$ Therefore, we can rewrite this problem as: ${e}^{\frac{\pi}{4} i} - {e}^{\frac{11 \pi}{6} i}$ $= \cos \left(\frac{\pi}{4}\right) + i \sin \left(\frac{\pi}{4}\right) - \cos \left(\frac{11 \pi}{6}\right) - i \sin \left(\frac{11 \pi}{6}\right)$ Now all we have to do is simplify. $= \frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2} i - \frac{\sqrt{3}}{2} - \left(- \frac{1}{2} i\right)$ $= \frac{\sqrt{2} - \sqrt{3}}{2} + \frac{\sqrt{2} + 1}{2} i$ So ${e}^{\frac{\pi}{4} i} - {e}^{\frac{11 \pi}{6} i} = \frac{\sqrt{2} - \sqrt{3}}{2} + \frac{\sqrt{2} + 1}{2} i$
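A quick numerical cross-check of the result above (not part of the original answer) can be done with Python's complex-number support; the closed form $\frac{\sqrt{2}-\sqrt{3}}{2} + \frac{\sqrt{2}+1}{2}i$ should agree with ${e}^{\frac{\pi}{4} i} - {e}^{\frac{11\pi}{6} i}$ to floating-point precision.

```python
# Numerical check of e^(i*pi/4) - e^(i*11*pi/6) against the closed form above.
import cmath
import math

lhs = cmath.exp(1j * math.pi / 4) - cmath.exp(1j * 11 * math.pi / 6)
rhs = complex((math.sqrt(2) - math.sqrt(3)) / 2, (math.sqrt(2) + 1) / 2)

print(lhs, rhs, abs(lhs - rhs) < 1e-12)   # the last value should print True
```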
# Finding homotopy equivalence

This is part of a problem from Hatcher: Show that the space in $\mathbb R^2$ which is the union (for $n \in \mathbb N$) of circles $C_n$, where $C_n$ is the circle centered at $(n,0)$ with radius $n$, is not homeomorphic to the wedge sum of infinitely many circles, but that the two spaces are homotopy equivalent.

I was able to prove that these spaces are not homeomorphic by considering how open sets around $(0,0)$ in the first space differ from the open sets around the wedge point. But I have no idea how to prove (in this case, and in general) that two spaces are homotopy equivalent when the spaces under consideration are not simple CW complexes. Thanks!

-
## Motivic cohomology vs. K-theory for singular varieties

As far as I understand, for a smooth variety $X$ its motivic cohomology can be described as the corresponding piece of the $\gamma$-filtration of (Quillen's) $K^*(X)$; this is completely true for $\mathbb{Q}$-coefficients, and true up to bounded denominators for $\mathbb{Z}$-coefficients. My question is: is there a similar result for singular varieties? Here for motivic cohomology I would like to take $Hom_{DM}(M(X),\mathbb{Z}(p)[q])$; $DM$ is the category of Voevodsky's motives, and $M(X)$ is the motif of $X$ (I don't want to take the motif with compact support instead). These cohomology theories satisfy cdh-descent, but have no easy descriptions in terms of complexes of algebraic cycles. Unfortunately, I don't know much about the $\gamma$-filtration of $K$-theory.

Actually, I would like to prove the following fact: if a morphism $X\to Y$ of varieties induces an isomorphism on $K^*$, then the exponents of the kernels and cokernels of the corresponding morphisms on motivic cohomology are bounded (by a constant that depends only on the dimensions of $X$ and $Y$). Any hints and/or references would be very welcome!

-

The precise relationship between K-theory and motivic cohomology for smooth schemes is the (analog of the) Atiyah-Hirzebruch spectral sequence. This generalizes to non-smooth schemes if one uses K'-theory and higher Chow groups. It is easy to see that motivic cohomology cannot be used to recover K-theory, because it does not detect nilpotents. For example, $k[t]/t^2$ has K-theory different from $k$, but its motivic cohomology is the same. cdh-descent then gives examples for reduced schemes (look at a cusp). There are some approaches (Bloch-Esnault) to beef up cycles to get results in the above examples. As for your original question, recovering motivic cohomology from K-theory, this might be possible, but I have no result to offer.

-
## 60.20 Divided power Poincaré lemma Just the simplest possible version. Lemma 60.20.1. Let $A$ be a ring. Let $P = A\langle x_ i \rangle$ be a divided power polynomial ring over $A$. For any $A$-module $M$ the complex $0 \to M \to M \otimes _ A P \to M \otimes _ A \Omega ^1_{P/A, \delta } \to M \otimes _ A \Omega ^2_{P/A, \delta } \to \ldots$ is exact. Let $D$ be the $p$-adic completion of $P$. Let $\Omega ^ i_ D$ be the $p$-adic completion of the $i$th exterior power of $\Omega _{D/A, \delta }$. For any $p$-adically complete $A$-module $M$ the complex $0 \to M \to M \otimes ^\wedge _ A D \to M \otimes ^\wedge _ A \Omega ^1_ D \to M \otimes ^\wedge _ A \Omega ^2_ D \to \ldots$ is exact. Proof. It suffices to show that the complex $E : (0 \to A \to P \to \Omega ^1_{P/A, \delta } \to \Omega ^2_{P/A, \delta } \to \ldots )$ is homotopy equivalent to zero as a complex of $A$-modules. For every multi-index $K = (k_ i)$ we can consider the subcomplex $E(K)$ which in degree $j$ consists of $\bigoplus \nolimits _{I = \{ i_1, \ldots , i_ j\} \subset \text{Supp}(K)} A \prod \nolimits _{i \not\in I} x_ i^{[k_ i]} \prod \nolimits _{i \in I} x_ i^{[k_ i - 1]} \text{d}x_{i_1} \wedge \ldots \wedge \text{d}x_{i_ j}$ Since $E = \bigoplus E(K)$ we see that it suffices to prove each of the complexes $E(K)$ is homotopic to zero. If $K = 0$, then $E(K) : (A \to A)$ is homotopic to zero. If $K$ has nonempty (finite) support $S$, then the complex $E(K)$ is isomorphic to the complex $0 \to A \to \bigoplus \nolimits _{s \in S} A \to \wedge ^2(\bigoplus \nolimits _{s \in S} A) \to \ldots \to \wedge ^{\# S}(\bigoplus \nolimits _{s \in S} A) \to 0$ which is homotopic to zero, for example by More on Algebra, Lemma 15.28.5. $\square$ An alternative (more direct) approach to the following lemma is explained in Example 60.25.2. Lemma 60.20.2. Let $A$ be a ring. Let $(B, I, \delta )$ be a divided power ring. Let $P = B\langle x_ i \rangle$ be a divided power polynomial ring over $B$ with divided power ideal $J = IP + B\langle x_ i \rangle _{+}$ as usual. Let $M$ be a $B$-module endowed with an integrable connection $\nabla : M \to M \otimes _ B \Omega ^1_{B/A, \delta }$. Then the map of de Rham complexes $M \otimes _ B \Omega ^*_{B/A, \delta } \longrightarrow M \otimes _ P \Omega ^*_{P/A, \delta }$ is a quasi-isomorphism. Let $D$, resp. $D'$ be the $p$-adic completion of $B$, resp. $P$ and let $\Omega ^ i_ D$, resp. $\Omega ^ i_{D'}$ be the $p$-adic completion of $\Omega ^ i_{B/A, \delta }$, resp. $\Omega ^ i_{P/A, \delta }$. Let $M$ be a $p$-adically complete $D$-module endowed with an integral connection $\nabla : M \to M \otimes ^\wedge _ D \Omega ^1_ D$. Then the map of de Rham complexes $M \otimes ^\wedge _ D \Omega ^*_ D \longrightarrow M \otimes ^\wedge _ D \Omega ^*_{D'}$ is a quasi-isomorphism. Proof. Consider the decreasing filtration $F^*$ on $\Omega ^*_{B/A, \delta }$ given by the subcomplexes $F^ i(\Omega ^*_{B/A, \delta }) = \sigma _{\geq i}\Omega ^*_{B/A, \delta }$. See Homology, Section 12.15. This induces a decreasing filtration $F^*$ on $\Omega ^*_{P/A, \delta }$ by setting $F^ i(\Omega ^*_{P/A, \delta }) = F^ i(\Omega ^*_{B/A, \delta }) \wedge \Omega ^*_{P/A, \delta }.$ We have a split short exact sequence $0 \to \Omega ^1_{B/A, \delta } \otimes _ B P \to \Omega ^1_{P/A, \delta } \to \Omega ^1_{P/B, \delta } \to 0$ and the last module is free on $\text{d}x_ i$. 
It follows from this that $F^ i(\Omega ^*_{P/A, \delta }) \to \Omega ^*_{P/A, \delta }$ is a termwise split injection and that $\text{gr}^ i_ F(\Omega ^*_{P/A, \delta }) = \Omega ^ i_{B/A, \delta } \otimes _ B \Omega ^*_{P/B, \delta }$ as complexes. Thus we can define a filtration $F^*$ on $M \otimes _ B \Omega ^*_{P/A, \delta }$ by setting $F^ i(M \otimes _ B \Omega ^*_{P/A, \delta }) = M \otimes _ B F^ i(\Omega ^*_{P/A, \delta })$ and we have $\text{gr}^ i_ F(M \otimes _ B \Omega ^*_{P/A, \delta }) = M \otimes _ B \Omega ^ i_{B/A, \delta } \otimes _ B \Omega ^*_{P/B, \delta }$ as complexes. By Lemma 60.20.1 each of these complexes is quasi-isomorphic to $M \otimes _ B \Omega ^ i_{B/A, \delta }$ placed in degree $0$. Hence we see that the first displayed map of the lemma is a morphism of filtered complexes which induces a quasi-isomorphism on graded pieces. This implies that it is a quasi-isomorphism, for example by the spectral sequence associated to a filtered complex, see Homology, Section 12.24. The proof of the second quasi-isomorphism is exactly the same. $\square$
# Probability about random variables with exponential distribution

Suppose we have $n$ batteries, each of which has a lifetime that is exponentially distributed with parameter $\lambda$. The batteries' lifetimes are independent. We initially put 2 batteries in use, and every time a battery fails we replace it with a fresh one, until only one working battery remains. What is the probability that these $n$ batteries can be used for more than $x$ years? And what is the expectation of the total time until there is only one battery left working?

I realize that we need to use the memoryless property of exponentially distributed random variables, but I kind of got stuck there.

-

Hints:

- The failures of the batteries form a Poisson process
- Having two batteries on at the same time doubles the intensity of the process
- You can use the $n$ batteries up to the point at which there have been $n-1$ failures

-

Could you please explain a little bit more about this? Can I simply assume that the process in the question can be modeled as a Gamma distributed random variable with parameters $(n-1,\lambda)$? @Henry –  geraldgreen Nov 17 '11 at 19:30

@John: You have to double the intensity, so the second parameter is $2\lambda$ –  Henry Nov 19 '11 at 10:01

I believe whenever you change a battery, the remaining one just acts like it has been freshly changed too, according to the memoryless property. Thus the question can be simplified into: how long could $(n-1)$ batteries last? Because the batteries' lifespans are all iid, the probability for their sum should be: $$P(x)=\int_0^\infty \cdots \int_0^{t_i} \prod_{i=1}^{n-1} f(t_i-x)f(x)dx\,.$$

-

Not quite - the distribution of failure times of a pair of batteries is the minimum of two exponential distributions with rate $\lambda$, which is another exponential with rate $2\lambda$. –  Chris Taylor Nov 17 '11 at 8:32
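Putting the hints together: by memorylessness, each inter-failure gap is the minimum of two $\text{Exp}(\lambda)$ lifetimes, i.e. $\text{Exp}(2\lambda)$, and the total time until only one battery remains is the sum of $n-1$ such gaps, i.e. $\text{Gamma}(n-1, \text{rate}=2\lambda)$ with mean $(n-1)/(2\lambda)$. The sketch below is a quick numerical check of this; the values of $n$, $\lambda$ and $x$ are chosen only for illustration, not taken from the question.

```python
# Numerical check of the hints: gaps between failures are Exp(2*lambda), so the
# total time until only one battery is left is Gamma(n-1, rate=2*lambda).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, lam, x = 10, 0.5, 8.0           # illustrative values
trials = 200_000

# Each gap simulated as the minimum of two fresh Exp(lambda) lifetimes, which by
# memorylessness has the same distribution as the actual gap in the scheme.
gaps = rng.exponential(1 / lam, size=(trials, n - 1, 2)).min(axis=2)
total_time = gaps.sum(axis=1)       # time until only one battery remains

gamma = stats.gamma(a=n - 1, scale=1 / (2 * lam))
print("simulated mean:", total_time.mean(), "  theory:", (n - 1) / (2 * lam))
print("simulated P(T > x):", (total_time > x).mean(), "  theory:", gamma.sf(x))
```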
Warning: This documents an unmaintained version of NetworkX. Please upgrade to a maintained version and see the current NetworkX documentation.

# is_weakly_connected

is_weakly_connected(G)

Test directed graph for weak connectivity.

A directed graph is weakly connected if, and only if, the graph is connected when the direction of the edges between nodes is ignored.

Parameters: G (NetworkX Graph) – A directed graph.

Returns: connected – True if the graph is weakly connected, False otherwise.

Return type: bool

See also: is_strongly_connected(), is_semiconnected(), is_connected()
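A minimal usage sketch (the graphs below are illustrative; `is_weakly_connected` and `DiGraph` are the standard NetworkX API):

```python
# Weak connectivity ignores edge directions: a directed cycle is weakly
# connected, while two disjoint edges are not.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([(1, 2), (2, 3), (3, 1)])   # a directed cycle
print(nx.is_weakly_connected(G))             # True

H = nx.DiGraph()
H.add_edges_from([(1, 2), (3, 4)])           # two separate components
print(nx.is_weakly_connected(H))             # False
```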
# Spacy-NP Annotator¶ ## Chunker based noun phrase annotator¶ The noun phrase annotator is a plug-in that can be used with Spacy pipeline structure. The annotator loads a trained SequenceChunker model that is able to predict chunk labels, creates Spacy based Span objects and applies a sequence of filtering to produce a set of noun phrases, finally, it attaches it to the document object. The annotator implementation can be found in NPAnnotator. ### Usage example¶ Loading a Spacy pipeline and adding a sentence breaker (required) and NPAnnotator annotator as the last annotator in the pipeline: nlp = spacy.load('en') Parse documents regularly and get the noun phrase annotations using a dedicated method: doc = nlp('The quick brown fox jumped over the fence') noun_phrases = nlp_architect.pipelines.spacy_np_annotator.get_noun_phrases(doc) ## Standalone Spacy-NPAnnotator¶ For use cases in which the user is not interested in specialized Spacy pipelines we have implemented SpacyNPAnnotator which will run a Spacy pipeline internally and provide string based noun phrase chunks given documents in string format. ### Usage example¶ Just as in NPAnnotator, we need to provide a trained SequenceChunker model and its parameters file. It is also possible to provide a specific Spacy model to base the pipeline on. The following example shows how to load a model/parameters using the default Spacy English model (en) and how to get the noun phrase annotations. spacy_np = SpacyNPAnnotator(<model_path>, <model_parameters_path>, spacy_mode='en') noun_phrases = spacy_np('The quick brown fox jumped over the fence')
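The NPAnnotator usage example above appears to have lost its pipeline-construction lines in extraction, so here is a hedged sketch of one plausible way to wire it up. The spaCy 2.x `create_pipe`/`add_pipe` calls are standard; the `NPAnnotator.load(<model_path>, <settings_path>)` loader is an assumption and should be checked against the NLP Architect documentation, and the two paths are placeholders.

```python
# Hedged sketch only: the NPAnnotator loading call is an assumed signature,
# and <model_path>/<settings_path> are placeholders for a trained SequenceChunker
# model and its parameters file.
import spacy
from nlp_architect.pipelines.spacy_np_annotator import NPAnnotator, get_noun_phrases

nlp = spacy.load('en')
nlp.add_pipe(nlp.create_pipe('sentencizer'), first=True)            # sentence breaker (required)
nlp.add_pipe(NPAnnotator.load('<model_path>', '<settings_path>'),   # assumed loader API
             last=True)                                             # must be the last annotator

doc = nlp('The quick brown fox jumped over the fence')
noun_phrases = get_noun_phrases(doc)
print(noun_phrases)
```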
# LHC

2021-01-14 17:42
Particles of the Universe: an ATLAS Experiment Colouring Book in Greek | Σωματίδια του Σύμπαντος: Ένα Βιβλίο Ζωγραφικής του Πειράματος ATLAS / Anthony, Katarina (Universita degli Studi di Udine (IT)) ; Dantas Oliveira Ribeiro Velho, Mariana (Universidade Nova de Lisboa (PT))
Language: Greek - The ATLAS Colouring Book (Ages 4+): Explore the world of particles in this free-to-download colouring book! Meet the elementary particles that scientists have discovered – so far! – and learn about the role they play in our Universe.
ATLAS-OUTREACH-2021-004. - 2021. - 18. Language: Modern Greek

2021-01-14 10:23
Different approaches for minimising proton beam losses on the 11 T dipole in the IR7 dispersion suppressor / Belli, Eleonora ; Bruce, Roderik (CERN) ; Giovannozzi, Massimo (CERN) ; Mereghetti, Alessio ; Mirarchi, Daniele (CERN) ; Redaelli, Stefano (CERN)
The High Luminosity Large Hadron Collider (HL–LHC) project aims at increasing the integrated luminosity by a factor of 10 beyond the LHC design value. [...]
CERN-ACC-NOTE-2021-0002. - 2021.

2021-01-13 16:20
Development of an LHC model in BDSIM to study collimation cleaning and beam-induced backgrounds at ATLAS / Walker, Stuart Derek (University of London (GB))
The Large Hadron Collider (LHC) is at the frontier of high energy physics. At 27 km in circumference and operating at the highest achieved energy to date at 6.5 TeV, it is reliant on cold superconducting magnets throughout the machine to steer and control the beam. [...]
CERN-THESIS-2019-410.- 194 p. Fulltext: PDF; External link: Approve this document (restricted)

2021-01-12 16:28
ATLAS Experiment Colouring Book in Swedish | ATLASExperimentets Målarbok / Anthony, Katarina (Universita degli Studi di Udine (IT))
Language: Swedish - The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. [...]
ATLAS-OUTREACH-2021-003. - 2021. Language: Swedish

2021-01-12 15:34
ATLAS Experiment Colouring Book in Hebrew | חוברת הצביעה של ניסוי אטלס / Anthony, Katarina (Universita degli Studi di Udine (IT))
Language: Hebrew - The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. [...]
ATLAS-OUTREACH-2021-002. - 2021. - 16. Language: Hebrew

2021-01-12 06:31
A possible LHCb Luminosity Monitor based on the Muon System / Kotriakhova, S (St. Petersburg, INP ; INFN, Rome) /LHCb MUON group Collaboration
The Muon System of the LHCb experiment, after the ongoing upgrade, will be composed of 4 stations which comprise 1104 multi-wire-proportional-chambers (MWPC) with of order $10^{5}$ readout channels. We are investigating the possibility of using the rates recorded on the Muon chambers to measure the luminosity. [...]
2020 - 13 p. - Published in : JINST 15 (2020) C09039 Fulltext: PDF; In : Instrumentation for Colliding Beam Physics, Novosibirsk, Russia, 24 - 28 Feb 2020, pp.C09039

2021-01-05 11:16
Test of Lepton Flavour Universality using the $B^0 \to D^{*-} \tau^+ \nu_\tau$ decays at LHCb / Gerstel, Dawid (Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France)
This thesis presents the measurement of the $R(D^*) \equiv \frac{\mathcal{B}(B^0 \to D^{*-} \tau^+ \nu_\tau)}{\mathcal{B}(B^0 \to D^{*-} \mu^+ \nu_\mu)}$ ratio with 2 fb$^{-1}$ of $pp$ collisions collected at $\sqrt{s}=13\text{ TeV}$ by LHCb during 2015-2016 using 3-prong tau decays.
The study comprises a test of Lepton Flavour Universality in $b \to c \ell \nu$ decays to help resolve the tension between the Standard Model $R(D^*)$ estimation and the experimental results from the B-factories and LHCb. [...]
CERN-THESIS-2020-247.- 187 p. Fulltext: PDF; External link: Approve this document (restricted)

2020-12-17 10:32
Beam Loss Monitors in the Large Hadron Collider / Morales Vigo, Sara
The LHC at CERN is the largest and most powerful particle accelerator ever built [...]
CERN-THESIS-2020-237 - Full text

2020-12-16 17:28
LHC MD 4505: Forced 3D beam oscillations / Malina, Lukas (CERN) ; Tomas Garcia, Rogelio (CERN) ; Timko, Helga (CERN) ; Louro Alves, Diogo Miguel (CERN) ; Coello De Portugal - Martinez Vazquez, Jaime Maria
This note summarises the detailed programme of an MD testing the new chromaticity and fast optics measurement methods. [...]
CERN-ACC-NOTE-2020-0065. - 2020. - 14 p. Full text

2020-12-04 16:41
Particle Collider Probes of Dark Energy, Dark Matter and Generic Beyond Standard Model Signatures in Events With an Energetic Jet and Large Missing Transverse Momentum Using the ATLAS Detector at the LHC / Lindon, Jack
Various Beyond Standard Model signatures are probed using a monojet analysis with the ATLAS experiment using $\sqrt{s} =$ 13 TeV proton-proton collision data, and model-independent limits on generic Beyond Standard Model signatures are set [...]
CERN-THESIS-2020-219 - 204 p. Full text
Feynman-Kac formulas

$n\geq 1$, $d\geq 1$. Let ${\cal O}$ be an open subset of $(0,T)\times \mathbb{R}^n$. For $W$ a $d$-dimensional Brownian motion, we construct the process $X$ as the solution of the following SDE:

(1)
$$\left\{ \begin{array}{l} dX^{t,x}_s=b(s,X_s^{t,x})\,ds+\sigma(s,X^{t,x}_s)\,dW_s,\quad t\leq s\leq \tau, \\ X^{t,x}_t=x \in \left\{y \mid (t,y) \in {\cal O}\right\}, \end{array} \right.$$

where $\tau=\inf\left\{s\in \,]t,T] \mid (s,X_s^{t,x})\not\in{\cal O} \right\}$ is the first exit time of $X$ from the domain ${\cal O}$.

We then consider two other processes $Y$ and $Z$ defined by the following BSDE:

$$\left\{ \begin{array}{l} -dY^{t,x}_s=F(s,X_s^{t,x},Y_s^{t,x},Z_s^{t,x})\,ds-Z^{t,x}_s\,dW_s,\quad t\leq s\leq \tau, \\ Y^{t,x}_{\tau}=g(X^{t,x}_{\tau}). \end{array} \right.$$

Writing $L\phi(t,x)=b(t,x)D\phi(t,x)+\frac{1}{2}\mbox{trace}\left[\sigma\sigma^*(t,x)D^2\phi(t,x)\right]$ for a regular function $\phi$, we have, under some conditions (see \cite{magdalena} p. 580), for every $t\leq s\leq \tau$:

$$\left\{ \begin{array}{l} Y^{t,x}_s=u(s,X^{t,x}_s), \\ Z^{t,x}_s=\sigma^* Du(s,X^{t,x}_s), \end{array} \right.$$

where $u$ is the solution (possibly in a certain generalized sense) of the PDE:

$$\left\{ \begin{array}{l} -\displaystyle\frac{\partial u}{\partial t}-Lu-F(t,x,u,\sigma^*(t,x)Du)=0 \mbox{ in } {\cal O}, \\ u(t,x)=g(t,x) \mbox{ on } \partial {\cal O}. \end{array} \right.$$
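To make the probabilistic representation concrete, here is a minimal Monte Carlo sketch of the simplest case of the formulas above: $F \equiv 0$, ${\cal O} = (0,T)\times\mathbb{R}^n$ and hence $\tau = T$, so that $u(t,x) = \mathbb{E}\left[g(X^{t,x}_T)\right]$. The drift $b$, diffusion $\sigma$, terminal function $g$ and all numerical parameters below are illustrative choices, not taken from the text; the forward SDE is discretized with a plain Euler-Maruyama scheme.

```python
# Minimal sketch: Monte Carlo evaluation of u(t, x) = E[g(X_T^{t,x})] for the
# linear case F = 0 and tau = T (no early exit from the domain), d = n = 1.
# b, sigma, g and the numerical parameters are illustrative assumptions.
import numpy as np

def b(s, x):            # drift: a mild pull towards the origin
    return -0.5 * x

def sigma(s, x):        # constant scalar diffusion coefficient
    return 1.0

def g(x):               # terminal condition
    return x ** 2

def feynman_kac_mc(t, x, T, n_steps=200, n_paths=100_000, seed=0):
    """Euler-Maruyama simulation of dX = b ds + sigma dW started at X_t = x."""
    rng = np.random.default_rng(seed)
    dt = (T - t) / n_steps
    X = np.full(n_paths, float(x))
    s = t
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        X = X + b(s, X) * dt + sigma(s, X) * dW
        s += dt
    return g(X).mean()   # Monte Carlo estimate of u(t, x)

print(feynman_kac_mc(t=0.0, x=1.0, T=1.0))
```

The nonlinear case (general $F$) requires simulating the backward pair $(Y,Z)$ as well, which is beyond this sketch.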
# Heat death of the universe For the album by Off Minor, see The Heat Death of the Universe. The heat death of the universe is a historically suggested ultimate fate of the universe in which the universe has diminished to a state of no thermodynamic free energy and therefore can no longer sustain processes that consume energy (including computation and life). Heat death does not imply any particular absolute temperature; it only requires that temperature differences or other processes may no longer be exploited to perform work. In the language of physics, this is when the universe reaches thermodynamic equilibrium (maximum entropy). The hypothesis of heat death stems from the ideas of William Thomson, 1st Baron Kelvin, who in the 1850s took the theory of heat as mechanical energy loss in nature (as embodied in the first two laws of thermodynamics) and extrapolated it to larger processes on a universal scale. In a more recent view than Kelvin's, it has been recognized by a respected authority on thermodynamics, Max Planck, that the phrase 'entropy of the universe' has no meaning because it admits of no accurate definition.[1][2] Kelvin's speculation falls with this recognition. ## Origins of the idea The idea of heat death stems from the second law of thermodynamics, which states that entropy tends to increase in an isolated system. If the universe lasts for a sufficient time, it will asymptotically approach a state where all energy is evenly distributed. In other words, in nature there is a tendency to the dissipation (energy loss) of mechanical energy (motion); hence, by extrapolation, there exists the view that the mechanical movement of the universe will run down, as work is converted to heat, in time because of the second law. The idea of heat death was first proposed in loose terms beginning in 1851 by William Thomson, 1st Baron Kelvin, who theorized further on the mechanical energy loss views of Sadi Carnot (1824), James Joule (1843), and Rudolf Clausius (1850). Thomson’s views were then elaborated on more definitively over the next decade by Hermann von Helmholtz and William Rankine. ### History The idea of heat death of the universe derives from discussion of the application of the first two laws of thermodynamics to universal processes. Specifically, in 1851 William Thomson outlined the view, as based on recent experiments on the dynamical theory of heat, that "heat is not a substance, but a dynamical form of mechanical effect, we perceive that there must be an equivalence between mechanical work and heat, as between cause and effect."[3] Lord Kelvin originated the idea of universal heat death in 1852. In 1852, Thomson published his "On a Universal Tendency in Nature to the Dissipation of Mechanical Energy" in which he outlined the rudiments of the second law of thermodynamics summarized by the view that mechanical motion and the energy used to create that motion will tend to dissipate or run down, naturally.[4] The ideas in this paper, in relation to their application to the age of the sun and the dynamics of the universal operation, attracted the likes of William Rankine and Hermann von Helmholtz. 
The three of them were said to have exchanged ideas on this subject.[5] In 1862, Thomson published "On the age of the sun’s heat", an article in which he reiterated his fundamental beliefs in the indestructibility of energy (the first law) and the universal dissipation of energy (the second law), leading to diffusion of heat, cessation of useful motion (work), and exhaustion of potential energy through the material universe while clarifying his view of the consequences for the universe as a whole. In a key paragraph, Thomson wrote: The result would inevitably be a state of universal rest and death, if the universe were finite and left to obey existing laws. But it is impossible to conceive a limit to the extent of matter in the universe; and therefore science points rather to an endless progress, through an endless space, of action involving the transformation of potential energy into palpable motion and hence into heat, than to a single finite mechanism, running down like a clock, and stopping for ever.[6] In the years to follow both Thomson’s 1852 and the 1865 papers, Helmholtz and Rankine both credited Thomson with the idea, but read further into his papers by publishing views stating that Thomson argued that the universe will end in a "heat death" (Helmholtz) which will be the "end of all physical phenomena" (Rankine).[5][7] ## Current status Inflationary cosmology suggests that in the early universe, before cosmic inflation, energy was uniformly distributed,[8] and the universe was thus in a state superficially similar to heat death. However, these two states are actually very different: in the early universe, gravity was a very important force, and in a gravitational system, if energy is uniformly distributed, entropy is quite low, compared to a state in which most matter has collapsed into black holes. Thus, such a state is not in thermodynamic equilibrium, as it is thermodynamically unstable.[9][10] Proposals about the final state of the universe depend on the assumptions made about its ultimate fate, and these assumptions have varied considerably over the late 20th century and early 21st century. In a hypothesized "open" or "flat" universe that continues expanding indefinitely, a heat death is also expected to occur,[11] with the universe cooling to approach absolute zero temperature and approaching a state of maximal entropy over a very long time period. There is dispute over whether or not an expanding universe can approach maximal entropy; it has been proposed that in an expanding universe, the value of maximum entropy increases faster than the universe gains entropy, causing the universe to move progressively further away from heat death.[citation needed] There is much doubt about the definition of the entropy of the universe. In a view more recent than Kelvin's, it has been recognized by a respected authority on thermodynamics, Max Planck, that the phrase 'entropy of the universe' has no meaning because it admits of no accurate definition.[1][2] Kelvin's speculation falls with this recognition. More recently, Grandy writes: "It is rather presumptuous to speak of the entropy of a universe about which we still understand so little, and we wonder how one might define thermodynamic entropy for a universe and its major constituents that have never been in equilibrium in their entire existence."[12] In Landsberg's opinion, "The third misconception is that thermodynamics, and in particular, the concept of entropy, can without further enquiry be applied to the whole universe. ... 
These questions have a certain fascination, but the answers are speculations, and lie beyond the scope of this book."[13] Discussing the question of entropy for non-equilibrium states in general, Lieb and Yngvason express their opinion as follows: "Despite the fact that most physicists believe in such a nonequilibrium entropy, it has so far proved impossible to define it in a clearly satisfactory way."[14] In the opinion of Čápek and Sheehan, "no known formulation [of entropy] applies to all possible thermodynamic regimes."[15] A recent analysis of entropy states that "The entropy of a general gravitational field is still not known," and that "gravitational entropy is difficult to quantify." The analysis considers several possible assumptions that would be needed for estimates, and suggests that the visible universe has more entropy than previously thought. This is because the analysis concludes that supermassive black holes are the largest contributor.[16] Another writer goes further; "It has long been known that gravity is important for keeping the universe out of thermal equilibrium. Gravitationally bound systems have negative specific heat—that is, the velocities of their components increase when energy is removed. ... Such a system does not evolve toward a homogeneous equilibrium state. Instead it becomes increasingly structured and heterogeneous as it fragments into subsystems."[17] In other words, this writer is saying that when gravity is taken into account (which Kelvin did not), a prediction of heat death is not justified.

## Time frame for heat death

From the Big Bang through the present day and well into the future, matter and dark matter in the universe are thought to be concentrated in stars, galaxies, and galaxy clusters. Therefore, the universe is not in thermodynamic equilibrium and objects can do physical work.[18], §VID. The decay time for a supermassive black hole of roughly 1 galaxy-mass ($10^{11}$ solar masses) due to Hawking radiation is on the order of $10^{100}$ years,[19] so entropy can be produced until at least that time. After that time, the universe enters the so-called dark era, and is expected to consist chiefly of a dilute gas of photons and leptons.[18], §VIA. With only very diffuse matter remaining, activity in the universe will have tailed off dramatically, with extremely low energy levels and extremely long time scales. Speculatively, it is possible that the universe may enter a second inflationary epoch, or, assuming that the current vacuum state is a false vacuum, the vacuum may decay into a lower-energy state.[18], §VE. It is also possible that entropy production will cease and the universe will achieve heat death.[18], §VID. Over infinite time there could also be a spontaneous entropy decrease, via the Poincaré recurrence theorem, thermal fluctuations,[21][22] and the fluctuation theorem.[23][24] Possibly another universe could be created by random quantum fluctuations or quantum tunneling in roughly $10^{10^{56}}$ years.[20]

## References

1. ^ a b 2. ^ a b Uffink, J. (2003). Irreversibility and the Second Law of Thermodynamics, Chapter 7 of Entropy, p. 129 of Greven, A., Keller, G., Warnecke (editors) (2003), Entropy, Princeton University Press, Princeton NJ, ISBN 0-691-11338-6. Uffink asserts the authority of Planck's text. 3. ^ Thomson, William. (1851). "On the Dynamical Theory of Heat, with numerical results deduced from Mr Joule's equivalent of a Thermal Unit, and M. Regnault's Observations on Steam." Excerpts.
[§§1–14 & §§99–100], Transactions of the Royal Society of Edinburgh, March, 1851; and Philosophical Magazine IV. 1852. [from Mathematical and Physical Papers, vol. i, art. XLVIII, pp. 174] 4. ^ Thomson, William (1852). "On a Universal Tendency in Nature to the Dissipation of Mechanical Energy" Proceedings of the Royal Society of Edinburgh for April 19, 1852, also Philosophical Magazine, Oct. 1852. [This version from Mathematical and Physical Papers, vol. i, art. 59, pp. 511.] 5. ^ a b Smith, Crosbie & Wise, Matthew Norton. (1989). Energy and Empire: A Biographical Study of Lord Kelvin. (pg. 500). Cambridge University Press. 6. ^ Thomson, William. (1862). "On the age of the sun’s heat", Macmillan’s Mag., 5, 288–93; PL, 1, 394–68. 7. ^ Physics Timeline (Helmholtz and Heat Death, 1854) 8. ^ Andrew R Liddle; Andrew R Liddle (1999). "An introduction to cosmological inflation". arXiv:astro-ph/9901124 [astro-ph]. 9. ^ Hawking, S.; S. W. Hawking (1976). "Black holes and thermodynamics". Physical Review D 13 (2): 191. Bibcode:1976PhRvD..13..191H. doi:10.1103/PhysRevD.13.191. 10. ^ S. W. Hawking and Don N. Page. "Thermodynamics of black holes in anti-de Sitter space". Comm. Math. Phys. 87, no. 4 (1982), 577–588. Retrieved 2006-09-09. 11. ^ Plait, Philip Death From the Skies!, Viking Penguin, NY, ISBN 978-0-670-01997-7, p. 259 12. ^ Grandy, W.T. (Jr) (2008). Entropy and the Time Evolution of Macroscopic Systems, Oxford University Press, Oxford UK, ISBN 978-0-19-954617-6, p. 151. 13. ^ Landsberg, P.T. (1961). Thermodynamics, with Quantum Statistical Illustrations, Wiley, New York, p. 391. 14. ^ Lieb, E.H., Yngvason, J. (2003). The entropy of classical thermodynamics, Chapter 8 of Greven, A., Keller, G., Warnecke (editors) (2003). Entropy, Princeton University Press, Princeton NJ, ISBN 0-691-11338-6, page 190. 15. ^ Čápek, V., Sheehan, D.P. (2005). Challenges to the Second Law of Thermodynamics: Theory and Experiment, Springer, Dordrecht, ISBN 1-4020-3015-0, 26. 16. ^ Egan; Chas A. Egan and Charles H. Lineweaver (2009). "A Larger Estimate of the Entropy of the Universe". arXiv:0909.3983 [astro-ph.CO]. 17. ^ Smolin, L. (2014). Time, laws, and future of cosmology, Physics Today, 67: 38–43, page 42. 18. ^ a b c d Fred C. Adams and Gregory Laughlin (1997). "A dying universe: the long-term fate and evolution of astrophysical objects". Reviews of Modern Physics 69 (2): 337–372. arXiv:astro-ph/9701131. Bibcode:1997RvMP...69..337A. doi:10.1103/RevModPhys.69.337.. 19. ^ Particle emission rates from a black hole: Massless particles from an uncharged, nonrotating hole, Don N. Page, Physical Review D 13 (1976), pp. 198–206. doi:10.1103/PhysRevD.13.198. See in particular equation (27). 20. ^ Carroll, Sean M. and Chen, Jennifer (2004). "Spontaneous Inflation and Origin of the Arrow of Time". arXiv:hep-th/0410270. 21. ^ http://arxiv.org/pdf/astro-ph/0302131.pdf?origin=publication_detail 22. ^ http://arxiv.org/abs/1205.1046 23. ^ http://www.researchgate.net/publication/2215242_Spontaneous_entropy_decrease_and_its_statistical_formula 24. ^ http://iopscience.iop.org/1475-7516/2007/01/022
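As a rough, back-of-the-envelope check on the decay time quoted in the "Time frame for heat death" section above (not taken from the article or its references), the standard Hawking evaporation estimate $t \approx 5120\,\pi\,G^2 M^3 / (\hbar c^4)$ for a Schwarzschild black hole indeed gives a timescale of order $10^{100}$ years for a mass of $10^{11}$ solar masses:

```python
# Back-of-the-envelope Hawking evaporation time t ~ 5120*pi*G^2*M^3/(hbar*c^4),
# evaluated for a ~10^11 solar-mass ("galaxy mass") black hole.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
hbar = 1.055e-34     # J s
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
year = 3.156e7       # s

M = 1e11 * M_sun
t_evap = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
print(f"{t_evap / year:.1e} years")   # roughly 1e100 years, matching the quoted order of magnitude
```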