# Templates

EC3 recipes are described in a superset of RADL, which is a specification of virtual machines (e.g., instance type, disk images, networks, etc.) and contextualization scripts.

## Basic structure

An RADL document has the following general structure:

```
network <network_id> (<features>)
system <system_id> (<features>)
configure <configure_id> (<Ansible recipes>)
deploy <system_id> <num> [<cloud_id>]
```

The keywords network, system and configure assign some features or recipes to an identity <id>. The features are a list of constraints separated by `and`, and a constraint is formed by `<feature name> <operator> <value>`. For instance:

```
system tomcat_node (
    cpu.count = 4 and
    memory.size >= 1024M and
    net_interface.0.connection = 'net'
)
```

This RADL defines a system with the feature cpu.count equal to four, the feature memory.size greater than or equal to 1024M, and the feature net_interface.0.connection bound to 'net'.

The deploy keyword is a request to deploy a number of virtual machines. Optionally, the identity of a cloud provider can be specified to deploy on a particular cloud.

## EC3 types of Templates

In EC3, there are three types of templates:

• images, which include the system section of the basic template. They describe the main features of the machines that will compose the cluster, such as the operating system or the CPU and RAM required;
• main, which include the deploy section of the frontend. They also include the configuration of the chosen LRMS;
• component, for all the recipes that install and configure software packages that can be useful for the cluster.

In order to deploy a cluster with EC3, it is mandatory to indicate in the ec3 launch command one recipe of kind main and one recipe of kind image. The component recipes are optional, and you can include all that you need. To consult the type (kind) of the templates offered with EC3, simply use the ec3 templates command, as in the example below:

```
$ ./ec3 templates
        name       kind       summary
---------------------------------------------------------------------------------------------------------------------
        blcr       component  Tool for checkpoint the applications.
  centos-ec2       images     CentOS 6.5 amd64 on EC2.
     ckptman       component  Tool to automatically checkpoint applications running on Spot instances.
      docker       component  An open-source tool to deploy applications inside software containers.
     gnuplot       component  A program to generate two- and three-dimensional plots.
         nfs       component  Tool to configure shared directories inside a network.
      octave       component  A high-level programming language, primarily intended for numerical computations
     openvpn       component  Tool to create a VPN network.
         sge       main       Install and configure a cluster SGE from distribution repositories.
       slurm       main       Install and configure a cluster SLURM 14.11 from source code.
      torque       main       Install and configure a cluster TORQUE from distribution repositories.
ubuntu-azure       images     Ubuntu 12.04 amd64 on Azure.
  ubuntu-ec2       images     Ubuntu 14.04 amd64 on EC2.
```

## Network Features

Under the keyword network there are the features describing a Local Area Network (LAN) that some virtual machines can share in order to communicate among themselves and with other external networks. The supported features are:

outbound = yes|no
    Indicate whether the IPs of the virtual machines in this network will be public (accessible from any external network) or private. If yes, IPs will be public; if no, they will be private. The default value is no.
## System Features

Under the keyword system there are the features describing a virtual machine. The supported features are:

image_type = vmdk|qcow|qcow2|raw
    Constrain the virtual machine image disk format.
virtual_system_type = '<hypervisor>-<version>'
    Constrain the hypervisor and the version used to deploy the virtual machine.
price <=|=|=> <positive float value>
    Constrain the price per hour that will be paid, if the virtual machine is deployed in a public cloud.
cpu.count <=|=|=> <positive integer value>
    Constrain the number of virtual CPUs in the virtual machine.
cpu.arch = i686|x86_64
    Constrain the CPU architecture.
cpu.performance <=|=|=> <positive float value>ECU|GCEU
    Constrain the total computational performance of the virtual machine.
memory.size <=|=|=> <positive integer value>B|K|M|G
    Constrain the amount of RAM memory (main memory) in the virtual machine.
net_interface.<netId>
    Features under this prefix refer to a virtual network interface attached to the virtual machine.
net_interface.<netId>.connection = <network id>
    Set the virtual network interface to be connected to the LAN with ID <network id>.
net_interface.<netId>.ip = <IP>
    Set a static IP for the interface, if it is supported by the cloud provider.
net_interface.<netId>.dns_name = <string>
    Set the string as the DNS name for the IP assigned to this interface. If the string contains #N#, it is replaced by a number that is distinct for every virtual machine deployed with this system description.
instance_type = <string>
    Set the instance type name of this VM.
disk.<diskId>.<feature>
    Features under this prefix refer to virtual storage devices attached to the virtual machine. disk.0 refers to the system boot device.
disk.<diskId>.image.url = <url>
    Set the source of the disk image. The URI designates the cloud provider:
    • one://<server>:<port>/<image-id>, for OpenNebula;
    • ost://<server>:<port>/<ami-id>, for OpenStack;
    • aws://<region>/<ami-id>, for Amazon Web Services;
    • gce://<region>/<image-id>, for Google Cloud;
    • azr://<image-id>, for Microsoft Azure Classic;
    • azr://<publisher>/<offer>/<sku>/<version>, for Microsoft Azure;
    • <fedcloud_endpoint_url>/<image_id>, for the FedCloud OCCI connector;
    • appdb://<site_name>/<apc_name>?<vo_name>, for the FedCloud OCCI connector using AppDB info (from ver. 1.6.0);
    • docker://<docker_image>, for Docker images;
    • fbw://<fogbow_image>, for FogBow images.
    Either disk.0.image.url or disk.0.image.name must be set.
disk.<diskId>.image.name = <string>
    Set the source of the disk image by its name in the VMRC server. Either disk.0.image.url or disk.0.image.name must be set.
disk.<diskId>.type = swap|iso|filesystem
    Set the type of the image.
disk.<diskId>.device = <string>
    Set the device name, if it is a disk with no source set.
disk.<diskId>.size = <positive integer value>B|K|M|G
    Set the size of the disk, if it is a disk with no source set.
disk.0.free_size = <positive integer value>B|K|M|G
    Set the free space available in the boot disk.
disk.<diskId>.os.name = linux|windows|mac os x
    Set the operating system associated to the content of the disk.
disk.<diskId>.os.flavour = <string>
    Set the operating system distribution, like ubuntu, centos, windows xp and windows 7.
disk.<diskId>.os.version = <string>
    Set the version of the operating system distribution, like 12.04 or 7.1.2.
disk.0.os.credentials.username = <string> and disk.0.os.credentials.password = <string>
    Set a valid username and password to access the operating system.
disk.0.os.credentials.public_key = <string> and disk.0.os.credentials.private_key = <string>
    Set a valid public-private keypair to access the operating system.
disk.<diskId>.applications contains (name=<string>, version=<string>, preinstalled=yes|no)
    Set that the disk must have the application with name name installed. Optionally, a version can be specified. Also, if preinstalled is yes, the application must already be installed; if no, the application can be installed during the contextualization of the virtual machine if it is not already installed.

## Special EC3 Features

There are also other special features related to EC3. These features make it possible to customize the behaviour of EC3:

ec3_max_instances = <integer value>
    Set the maximum number of nodes with this system configuration; a negative value means no constraint. The default value is -1. This parameter is used to set the maximum size of the cluster.
ec3_destroy_interval = <positive integer value>
    Some cloud providers require paying in advance by the hour, like AWS. Therefore, the node will be destroyed only when it is idle and at the end of the interval expressed by this option (in seconds). The default value is 0.
ec3_destroy_safe = <positive integer value>
    This value (in seconds) stands for a security margin to avoid incurring a new charge for the next hour. The instance will be destroyed (if idle) in up to (ec3_destroy_interval - ec3_destroy_safe) seconds. The default value is 0.
ec3_if_fail = <string>
    Set the name of the next system configuration to try when no more instances can be allocated from a cloud provider. Used for hybrid clusters. The default value is ''.
ec3_inherit_from = <string>
    Name of an already defined system from which to inherit its characteristics. For example, if we have already defined a system wn where we have specified cpu and os, and we want to change only memory for a new system, instead of writing the values for cpu and os again, we inherit them from the specified system with ec3_inherit_from = system wn. The default value is 'None'.
ec3_reuse_nodes = <boolean>
    Indicates that you want to stop/start working nodes instead of powering them off/on. The default value is 'false'.
ec3_golden_images = <boolean>
    Indicates that you want to use the golden images feature. See golden images for more info. The default value is 'false'.
ec3_additional_vm = <boolean>
    Indicates that you want this VM to be treated as an additional VM of the cluster, for example, to install server services that you do not want to put in the front machine. The default value is 'false'.
ec3_node_type = <string>
    Indicates the type of the node. Currently the only supported value is wn. It makes it possible to distinguish the WNs from the rest of the nodes. The default value is 'None'.
ec3_node_keywords = <string>
    Comma-separated list of key=value pairs that specify some specific features supported by this type of node (e.g. gpu=1,infiniband=1). The default value is 'None'.
ec3_node_queues_list = <string>
    Comma-separated list of queues this type of node belongs to. The default value is 'None'.
ec3_node_pattern = <string>
    A pattern (as a Python regular expression) to match the names of the virtual nodes of the current node type. The value of this variable must be set according to the value of the variable ec3_max_instances. For example, if ec3_max_instances is set to 5, a valid value can be 'wn[1-5]'.
    This variable takes preference over ec3_if_fail, so if a virtual node to be switched on matches the specified pattern, the ec3_if_fail variable will be ignored. The default value is 'None'.

## System and network inheritance

It is possible to create a copy of a system or a network and to change and add some features. If the feature ec3_inherit_from is present, ec3 replaces that object by a copy of the object pointed to in ec3_inherit_from and appends the rest of the features. The next example shows a system wn_ec2 that inherits features from system wn:

```
system wn (
    ec3_if_fail = 'wn_ec2' and
    disk.0.image.url = 'one://myopennebula.com/999' and
    net_interface.0.connection = 'public'
)
system wn_ec2 (
    ec3_inherit_from = system wn and
    disk.0.image.url = 'aws://us-east-1/ami-e50e888c' and
    spot = 'yes' and
    ec3_if_fail = ''
)
```

The system wn_ec2 that ec3 finally sends to IM is:

```
system wn_ec2 (
    net_interface.0.connection = 'public' and
    disk.0.image.url = 'aws://us-east-1/ami-e50e888c' and
    spot = 'yes' and
    ec3_if_fail = ''
)
```

In the case of systems, if system A inherits features from system B, the new configure section is composed of the one from system A followed by the one from system B. Following the previous example, these are the configures named after the systems:

```
configure wn (
@begin
- user: name=user1 password=1234
@end
)
configure wn_ec2 (
@begin
- apt: name=pkg
@end
)
```

Then the configure wn_ec2 that ec3 finally sends to IM is:

```
configure wn_ec2 (
@begin
- user: name=user1 password=1234
- apt: name=pkg
@end
)
```

## Configure Recipes

Contextualization recipes are specified under the keyword configure. Only Ansible recipes are currently supported. They are enclosed between the tags @begin and @end, like this:

```
configure add_user1 (
@begin
---
  - user: name=user1 password=1234
@end
)
```

### Exported variables from IM

To ease some contextualization tasks, IM publishes a set of variables that can be accessed by the recipes and contain information about the virtual machine:

IM_NODE_HOSTNAME
    Hostname of the virtual machine (without the domain).
IM_NODE_DOMAIN
    Domain name of the virtual machine.
IM_NODE_FQDN
    Complete FQDN of the virtual machine.
IM_NODE_NUM
    The value of the substitution #N# in the virtual machine.
IM_MASTER_HOSTNAME
    Hostname (without the domain) of the virtual machine performing the master role.
IM_MASTER_DOMAIN
    Domain name of the virtual machine performing the master role.
IM_MASTER_FQDN
    Complete FQDN of the virtual machine performing the master role.

### Including a recipe from another

The next RADL defines two recipes, one of which (add_user1) is called by the other (add_torque):

```
configure add_user1 (
@begin
---
  - user: name=user1 password=1234
@end
)
configure add_torque (
@begin
---
  - yum: name=torque-client,torque-server state=installed
@end
)
```

### Including file content

If, in a vars map, a variable has a map with key ec3_file, ec3 replaces the map by the content of the file named in the value. For instance, suppose there is a file slurm.conf with content:

```
ControlMachine=slurmserver
AuthType=auth/munge
CacheGroups=0
```

The next Ansible recipe will copy the content of slurm.conf into /etc/slurm-llnl/slurm.conf:

```
configure front (
@begin
  - vars:
      SLURM_CONF_FILE:
        ec3_file: slurm.conf
  - copy:
      dest: /etc/slurm-llnl/slurm.conf
      content: "{{SLURM_CONF_FILE}}"
@end
)
```

Warning

Avoid using variables with file content in compact expressions like this:

```
- copy: dest=/etc/slurm-llnl/slurm.conf content={{SLURM_CONF_FILE}}
```

### Include RADL content

Maps with keys ec3_xpath and ec3_jpath are useful to refer to RADL objects and features from Ansible vars.
The difference is that ec3_xpath prints the object in RADL format as a string, while ec3_jpath prints objects as YAML maps. Both keys support the following paths:

• /<class>/*: refers to all objects of that <class> and their references; e.g., /system/* and /network/*.
• /<class>/<id>: refers to an object of class <class> with id <id>, including its references; e.g., /system/front, /network/public.
• /<class>/<id>/*: refers to an object of class <class> with id <id>, without references; e.g., /system/front/*, /network/public/*.

Consider the next example:

```
network public ( )
system front (
    net_interface.0.connection = 'public' and
    net_interface.0.dns_name = 'slurmserver' and
    queue_system = 'slurm'
)
system wn (
    net_interface.0.connection = 'public'
)
configure slurm_rocks (
@begin
  - vars:
      JFRONT_AST:
        ec3_jpath: /system/front/*
      XFRONT:
        ec3_xpath: /system/front
    content: "{{XFRONT}}"
    when: JFRONT_AST.queue_system == "slurm"
@end
)
```

The RADL configure slurm_rocks is transformed into:

```
configure slurm_rocks (
@begin
  - vars:
      JFRONT_AST:
        class: system
        id: front
        net_interface.0.connection:
          class: network
          id: public
          reference: true
        net_interface.0.dns_name: slurmserver
        queue_system: slurm
      XFRONT: |
        network public ()
        system front (
            net_interface.0.connection = 'public' and
            net_interface.0.dns_name = 'slurmserver' and
            queue_system = 'slurm'
        )
    content: '{{XFRONT}}'
@end
)
```

When adding your own templates, take the following into account:

• For image templates, respect the frontend and working node naming conventions. The system section for the frontend must be named front, while at least one type of working node must be named wn.
• For component templates, add a configure section with the name of the component. You also need to add an include statement to import the configure into the system where you want it. See Including a recipe from another for more details.

Also, it is important to provide a description section in each new template, so that it is considered by the ec3 templates command.
## Rivet analyses reference

### CMS_2011_S9086218

Measurement of the inclusive jet cross-section in $pp$ collisions at $\sqrt{s} = 7$ TeV

Experiment: CMS (LHC)
Inspire ID: 902309
Status: VALIDATED
Authors:
• Rasmus Sloth Hansen <[email protected]>
References:
• http://cdsweb.cern.ch/record/1355680
Beams: p+ p+
Beam energies: (3500.0, 3500.0) GeV
Run details:
• Inclusive QCD at 7 TeV comEnergy, ptHat (or equivalent) greater than 10 GeV

The inclusive jet cross section is measured in pp collisions with a center-of-mass energy of 7 TeV at the LHC using the CMS experiment. The data sample corresponds to an integrated luminosity of 34 inverse picobarns. The measurement is made for jet transverse momenta in the range 18-1100 GeV and for absolute values of rapidity less than 3. Jets are anti-kt with $R=0.5$, $p_\perp>18$ GeV and $|y|<3.0$.

Source code: CMS_2011_S9086218.cc

```cpp
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/FastJets.hh"
#include "Rivet/Tools/BinnedHistogram.hh"

namespace Rivet {

  // Inclusive jet pT
  class CMS_2011_S9086218 : public Analysis {
  public:

    // Constructor
    CMS_2011_S9086218() : Analysis("CMS_2011_S9086218") {}

    // Book histograms and initialize projections:
    void init() {
      const FinalState fs;

      // Initialize the projectors:
      declare(FastJets(fs, FastJets::ANTIKT, 0.5), "Jets");

      // Book histograms:
      {Histo1DPtr tmp; _hist_sigma.add(0.0, 0.5, book(tmp, 1, 1, 1));}
      {Histo1DPtr tmp; _hist_sigma.add(0.5, 1.0, book(tmp, 2, 1, 1));}
      {Histo1DPtr tmp; _hist_sigma.add(1.0, 1.5, book(tmp, 3, 1, 1));}
      {Histo1DPtr tmp; _hist_sigma.add(1.5, 2.0, book(tmp, 4, 1, 1));}
      {Histo1DPtr tmp; _hist_sigma.add(2.0, 2.5, book(tmp, 5, 1, 1));}
      {Histo1DPtr tmp; _hist_sigma.add(2.5, 3.0, book(tmp, 6, 1, 1));}
    }

    // Analysis
    void analyze(const Event& event) {
      const double weight = 1.0;
      const FastJets& fj = apply<FastJets>(event, "Jets");
      const Jets& jets = fj.jets(Cuts::ptIn(18*GeV, 1100.0*GeV) && Cuts::absrap < 4.7);

      // Fill the relevant histograms:
      for (const Jet& j : jets) {
        _hist_sigma.fill(j.absrap(), j.pT(), weight);
      }
    }

    // Finalize
    void finalize() {
      _hist_sigma.scale(crossSection()/sumOfWeights()/2.0, this);
    }

  private:

    BinnedHistogram _hist_sigma;

  };

  // This global object acts as a hook for the plugin system.
  DECLARE_RIVET_PLUGIN(CMS_2011_S9086218);

}
```
# Uniqueness of extension of a norm on a ring to its completion This is with reference to theorem 2.18 that appears here http://www.maths.gla.ac.uk/~ajb/dvi-ps/padicnotes.pdf Essentially the author says that if $N$ is a norm on a ring $R$ and $\hat{R}$ is the completion of $R$ with respect to this norm, then there is a unique way to extend the norm to $\hat{R}$. I don't quite understand the uniqueness part and I don't think the author actually proves this. - It seems the uniqueness is essentially implied in the statement since R is dense in R^. –  Fredrik Meyer May 5 '11 at 1:04
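For what it's worth, here is a sketch of the density argument the comment is hinting at (my own wording; it assumes, as I read the statement, that the extended norm is required to be continuous for the topology of $\hat{R}$):

```latex
Let $\hat{N}_1, \hat{N}_2 \colon \hat{R} \to [0,\infty)$ both restrict to $N$ on $R$
and both be continuous for the topology of $\hat{R}$. Given $x \in \hat{R}$, choose
$x_n \in R$ with $x_n \to x$, which is possible because $R$ is dense in $\hat{R}$.
Then, for $i = 1, 2$,
\[
  \hat{N}_i(x) \;=\; \lim_{n\to\infty} \hat{N}_i(x_n) \;=\; \lim_{n\to\infty} N(x_n),
\]
and the right-hand side does not depend on $i$, so $\hat{N}_1 = \hat{N}_2$.
In other words, the value of any continuous extension is already forced by its values
on the dense subring $R$, which is exactly what the comment above is pointing out.
```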
Battery size calculator The battery size calculator calculates the required battery rating in ampere-hours (Ah) based on load, duration and discharge level. Parameters: • Load type (ampere or watt): Select the unit in which the load will be specified, ampere or watt. • Load (watt): If the load type is watt, specify the load in watt, for example 100 W. Use an average value if it is a cyclical load. • Load (ampere): If the load type is ampere, specify the current in ampere, for example 10 A. • Voltage (V): Specify the battery voltage, if the load type is watt. • Required duration (hours): Specify the duration that the load must be supplied for. • Battery type: Select the battery type, lead-acid or lithium-ion. • Required battery size (Ah): The recommended battery size in Ah. Current calculation: • If the load is specified in watt, the current I is calculated as: $$I=\dfrac{P}{V}$$ where P is the power in watts and V is the voltage in volts. Lithium-ion batteries: • The required battery size $$B_{li-ion}$$ for lithium-ion batteries is calculated by the battery size calculator as: $$B_{li-ion}=\dfrac {100 \cdot I \cdot t}{100 - Q}$$ where I is the current in ampere, t is the duration in hours and Q is the required remaining charge in percent. Lead-acid batteries: • The required battery size $$B_{lead-acid}$$ for lead-acid batteries is calculated by the battery size calculator as: $$B_{lead-acid}=\dfrac{100 \cdot I \cdot t}{(100-Q) \cdot (0.02 \cdot t+0.6)}$$
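To make the two formulas concrete, here is a short Python sketch of the same calculation (the function and argument names are my own, not part of the calculator):

```python
def battery_size_ah(load, duration_h, battery_type, remaining_charge_pct, voltage=None):
    """Sketch of the battery sizing formulas above; names are illustrative only."""
    # I = P / V when the load is given in watts; otherwise the load already is a current in A.
    current = load / voltage if voltage is not None else load
    q = remaining_charge_pct
    if battery_type == "li-ion":
        return 100.0 * current * duration_h / (100.0 - q)
    if battery_type == "lead-acid":
        # The extra (0.02*t + 0.6) factor reflects the reduced usable capacity
        # of lead-acid batteries at faster discharge rates.
        return 100.0 * current * duration_h / ((100.0 - q) * (0.02 * duration_h + 0.6))
    raise ValueError("battery_type must be 'li-ion' or 'lead-acid'")

# Example: a 100 W load at 12 V for 8 hours, discharging down to 50 % remaining charge
print(round(battery_size_ah(100, 8, "lead-acid", 50, voltage=12), 1))  # ~175.4 Ah
print(round(battery_size_ah(100, 8, "li-ion", 50, voltage=12), 1))     # ~133.3 Ah
```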
frequency_weights() creates a vector of frequency weights which allow you to compactly repeat an observation a set number of times. Frequency weights are supplied as a non-negative integer vector, where only whole numbers are allowed.

Usage

frequency_weights(x)

Arguments

x: An integer vector.

Value

A new frequency weights vector.

See also

importance_weights()

Examples

# Record that the first observation has 10 replicates, the second has 12
# replicates, and so on
frequency_weights(c(10, 12, 2, 1))
#> <frequency_weights[4]>
#> [1] 10 12  2  1

# Fractional values are not allowed
try(frequency_weights(c(1.5, 2.3, 10)))
#> Error in frequency_weights(c(1.5, 2.3, 10)) :
#>   Can't convert x <double> to <integer>.
Chapter 68

Pneumonia is an infection of the alveolar or gas-exchanging portions of the lung. Community-acquired pneumonia (CAP) accounts for approximately 4 million cases and 1 million hospitalizations per year in the U.S.1,2 It is the sixth leading cause of death, particularly among older adults. The incidence of pneumonia caused by atypical or opportunistic infections is increasing. Recurring epidemics of severe acute respiratory syndrome or pandemic influenza may alter future recommendations for the evaluation and management of this disease. Pneumococcal pneumonia produces typical symptoms of fever, cough, and rigors, but atypical infections, infections in compromised hosts, and infections in patients at the extremes of age may produce atypical findings, such as a change in mental status or a decline in function. Patients with health care–associated pneumonia are at risk for infection with resistant organisms. The patient’s environment must be considered when predicting the causative organism and selecting treatment choices (Table 68-1).3,4 The most important environments to be considered in the ED are CAP and health care–associated pneumonia.

Table 68-1 Acquisition Environment Classification for Pneumonia

### Pathophysiology

Pathogenic organisms may be inhaled or aspirated directly into the lungs. Some bacteria, such as Staphylococcus aureus or Pneumococcus, can produce pneumonia as a result of hematogenous seeding. Patients most at risk for pneumonia are those with a predisposition to aspiration, impaired mucociliary clearance, or risk of bacteremia (Table 68-2).

Table 68-2 Risk Factors for Pneumonia

Some forms of pneumonia produce an intense inflammatory response within the alveoli that leads to filling of the air space with organisms, exudate, and white blood cells. Organisms can distribute throughout the lung by spreading along the bronchial tree or through pores between adjacent alveoli (pores of Kohn). Bacterial pneumonia results in an intense inflammatory response and tends ...
# Browse results ## You are looking at 1 - 10 of 2,951 items for : • Continental Philosophy • Search level: All Clear All The Legacy of Jan Łukasiewicz In 1906, Jan Łukasiewicz, a great logician, published his classic dissertation on the concept of cause, not yet available in English, containing not only a thorough reconstruction of the title concept, but also a systematization of the analytical method. It sparked an extremely inspiring discussion among the other representatives of the Lvov-Warsaw School. The main voices of this discussion are supplemented here with texts of contemporary Polish philosophers. They show how the concept of cause is presently functioning in various disciplines and point to the topicality of Łukasiewicz’s method of analysis. This book makes the attempt to wed reason and the poetic. The tool for this attempt is Rational Poetic Experimentalism (RPE), which is being introduced and explored in this book. According to RPE, it makes sense to look for poetic elements in human reality (including reason), outside of the realm of imaginative literature. Provocatively, RPE contends that philosophy’s search for truth has not been a great success so far. So, why not experiment with philosophical concepts and look for thought-provoking ideas by employing the principles of RPE, instead of fruitlessly searching for truths using conventional methods? This book intends to present Mamardashvili’s philosophical perspective on modern society by exemplifying in different ways its distinctive contribution to the greater philosophical landscape. The authors aim to define both Mamardashvili’s place in the history of philosophy—among the currents of twentieth-century European thought and, in particular, phenomenology—and his relations with authors like Hegel, Proust, Deleuze, and Wittgenstein, while identifying the basic methodological instruments and substantive concepts of his thought—language, migration, citizenship, or “the freedom of complaint.” The volume will be useful both for preparatory courses (by supplying an introduction to Mamardashvili’s thought and forming the key necessary concepts) and for advanced research exigencies, allowing a professional audience to discover the remarkable insights of Mamardashvili’s philosophy. Cet ouvrage est la première étude systématique du rapport entre communauté et littérature dans la pensée de Jean-Luc Nancy. L'auteure développe la thèse originale que cette relation doit être comprise comme une refonte du mythe. Traversant l’œuvre de Nancy dans son intégralité, elle démontre de façon incomparable comment s’articulent les questions centrales de la communauté et de la littérature. De plus, en faisant ce lien en termes de « mythe », ce livre situe l’œuvre de Nancy dans une tradition plus large, allant du romantisme allemand aux théories contemporaines de la pertinence sociale de la littérature. This is the first book to provide a systematic investigation of the relation between community and literature in the work of Jean-Luc Nancy. It develops the original claim that this relation has to be understood as a rethinking of myth. Traversing the entirety of Nancy’s vast oeuvre, the author offers an incomparable account of the ways in which Nancy’s central questions of community and literature are linked together. Moreover, by putting this linkage in terms of ‘myth’, this book situates Nancy’s work within a larger tradition, leading from German Romanticism to contemporary theories of the social relevance of literature. Gregory P. 
Floyd and Stephanie Rumpza, eds., The Catholic Reception of Continental Philosophy in North America (Toronto: University of Toronto Press, 2020), pp. ix + 335, $60.00, ISBN 978-1-4875-0649-0 (cloth). This collection of twelve essays investigates the origins of the popularity of continental philosophy in North American Catholic universities, especially when compared to its relative scarcity in non-Catholic institutions. In examining how continental philosophy came to have wide currency in North American Catholic schools, the focus of The Catholic Reception is primarily historical. As such, many of the essays, including those of Daniel Dahlstrom, John Caputo, In: Journal for Continental Philosophy of Religion Three essays in the present issue result from a recent conference focused on the thought of Jean-Luc Marion, hosted online by the Academia Nacional de Ciencias de Buenos Aires and the Universidad del Salvador. The theme this year addressed feelings and moods, a topic seldom considered in the secondary literature, despite Marion occasionally being recognized as a “philosopher of love.” Primarily known for his notions of givenness, saturated phenomena, revelation, and his rigorous and detailed scholarship on Descartes, Marion’s contributions to a phenomenology of affectivity may seem ancillary to these predominant concepts. Yet, an attentive inspection of his complete corpus Free access In: Journal for Continental Philosophy of Religion Free access In: Journal for Continental Philosophy of Religion ## Abstract From Jean-Luc Marion’s examination of Gustave Courbet and his painting, it is shown how grief can operate as a model for the hermeneutics of love. This hermeneutics of grief, in turn, makes possible a consideration of all phenomena, and not only the human, as saturated phenomena. In: Journal for Continental Philosophy of Religion Rossano Zas Friz De Col, S.J., Ignatian Christian Life: A New Paradigm trans. Susan Dawson Vásquez, (Chestnut Hill, MA: Institute for Advanced Jesuit Studies, 2021), pp. xiii + 187,$ 19.95, ISBN 978-1-947617-12-4 (pbk). Ignatian Christian Life: A New Paradigm is the latest contribution by Rossano Zas Friz De Col, S.J., to his ongoing interdisciplinary project in spiritual theology. He begins from the shift to a secularized culture, which demands a new paradigm in the Ignatian tradition. Despite this initial proposal, in subsequent chapters he only minimally develops his account of secularism, shifting his focus to issues In: Journal for Continental Philosophy of Religion Author: William Large ## Abstract Kant argues against the ontological argument for the existence of God but replaces it with a moral theism. This article analyses Kant’s moral proof with emphasis on the Critique of the Power of Judgement, and his historical and political writings. It argues that at the heart of this argument is the idea of progress. The concrete content of the moral law is the idea of a just world. Such a just world would be impossible without the idea of God, since there would be no harmony between nature and freedom. It contrasts Kant’s concept of time and history with Heidegger’s. The difference between them is a reversal of modality. For Kant, actuality determines possibility. If I cannot imagine a just word as actual, then I would fall into moral despair. The idea of God grounds this actuality. For Heidegger, possibility is higher than actuality. Since history has no teleology, then no idea of God is required. In: Journal for Continental Philosophy of Religion
## CryptoDB

### Paper: Logarithmic-Size (Linkable) Threshold Ring Signatures in the Plain Model

Authors:
Abida Haque, North Carolina State University
Stephan Krenn, Austrian Institute of Technology
Daniel Slamanig, Austrian Institute of Technology
Christoph Striecks, Austrian Institute of Technology

PKC 2022

A $1$-out-of-$N$ ring signature scheme, introduced by Rivest, Shamir, and Tauman-Kalai (ASIACRYPT '01), allows a signer to sign a message as part of a set of size $N$ (the so-called "ring") which are anonymous to any verifier, including other members of the ring. Threshold ring (or "thring") signatures generalize ring signatures to $t$-out-of-$N$ parties, with $t \geq 1$, who anonymously sign messages and show that they are distinct signers (Bresson et al., CRYPTO '02). Until recently, there was no construction of ring signatures that both $(i)$ had logarithmic signature size in $N$, and $(ii)$ was secure in the plain model. The work of Backes et al. (EUROCRYPT '19) resolved both these issues. However, threshold ring signatures have their own particular problem: with a threshold $t \geq 1$, signers must often reveal their identities to the other signers as part of the signing process. This is an issue in situations where a ring member has something controversial to sign; he may feel uncomfortable requesting that other members join the threshold, as this reveals his identity. Building on the Backes et al. template, in this work we present the first construction of a thring signature that is logarithmic-sized in $N$, in the plain model, and does not require signers to interact with each other to produce the thring signature. We also present a linkable counterpart to our construction, which supports fine-grained control of linkability. Moreover, our thring signatures can easily be adapted to achieve the recent notions of claimability and repudiability (Park and Sealfon, CRYPTO '19).

##### BibTeX

@inproceedings{pkc-2022-31719,
  title={Logarithmic-Size (Linkable) Threshold Ring Signatures in the Plain Model},
  publisher={Springer-Verlag},
  author={Abida Haque and Stephan Krenn and Daniel Slamanig and Christoph Striecks},
  year=2022
}
## The Annals of Probability

### Path Properties of Index-$\beta$ Stable Fields

John P. Nolan

#### Abstract

We examine the paths of the stable fields that are the analogs of index-$\beta$ Gaussian fields. We find Hölder conditions on their paths and find the Hausdorff dimension of the image, graph and level sets when we have local nondeterminism, generalizing the Gaussian results.

#### Article information

Source: Ann. Probab., Volume 16, Number 4 (1988), 1596-1607.

Dates: First available in Project Euclid: 19 April 2007

https://projecteuclid.org/euclid.aop/1176991586

Digital Object Identifier: doi:10.1214/aop/1176991586

Mathematical Reviews number (MathSciNet): MR958205

Zentralblatt MATH identifier: 0673.60043

#### Citation

Nolan, John P. Path Properties of Index-$\beta$ Stable Fields. Ann. Probab. 16 (1988), no. 4, 1596--1607. doi:10.1214/aop/1176991586. https://projecteuclid.org/euclid.aop/1176991586

• See Correction: John P. Nolan. Correction: Path Properties of Index-$\beta$ Stable Fields. Ann. Probab., Volume 20, Number 3 (1992), 1601--1602.
# What are some applications outside of mathematics for algebraic geometry? Are there any results from algebraic geometry that have led to an interesting "real world" application? - community-wiki? Genuine question. –  Jamie Banks Jul 23 '10 at 23:42 Algebraic geometry is incredibly important in bioinformatics and DNA/taxonomy stuff. I don't actually understand the details myself which is why I haven't written an answer. –  Noah Snyder Jul 24 '10 at 0:03 @Katie, good call on the community-wiki –  Jonathan Fischoff Jul 24 '10 at 1:16 The following slideshow gives an explanation of how algebraic geometry can be used in phylogenetics. See also this post of Charles Siegel on Rigorous Trivialties. This is not an area I've looked at in much detail at all, but it appears that the idea is to use a graph to model evolutionary processes, and such that the "transition function" for these processes is given by a polynomial map. In particular, it'd be of interest to look at the potential outcomes, namely the image of the transition function; that corresponds to the image of a polynomial map (which is not necessarily an algebraic variety, but it is a constructible set, so not that badly behaved either). (In practice, though, it seems that one studies the closure, which is a legitimate algebraic set.) - Bernd Sturmfels at Berkeley has done quite a bit of work on applying algebraic geometry to just these sort of phylogenetics problems (but I don't know enough about the work to comment intelligently). –  Jamie Banks Jul 24 '10 at 0:33 @Katie Here is a summary article in this direction math.berkeley.edu/~bernd/ClayBiology.pdf –  BBischof Jul 24 '10 at 1:15 The slideshow is down. Can you provide another link? –  Troy Woo Apr 25 at 15:15 Broadly speaking, algebraic geometry is used a lot in some areas of robotics and mechanical engineering. Real algebraic geometry, for example, is important to the development of CAD systems (think NURBS, computing intersections of primitives, etc.) And AG comes up in robotics when it is important to figure out, say, what motions a robotic arm in a given configuration is capable of, or to construct some kind of linkage that draws a prescribed curve. Something specific in that vein: Kempe's Universality Theorem gives that any bounded algebraic curve in $\mathbb{R}^2$ is the locus of some linkage. The "locus of a linkage" being the path drawn out by all the vertices of a graph, where the edge lengths are all specified and one or more vertices remains still. Interestingly, Kempe's orginal proof of the theorem was flawed, and more recent proofs have been more involved. However, Timothy Abbott's MIT masters thesis gives a simpler proof that gives a working linkage for a given curve, and makes for interesting reading concerning the problem in general. Edit: The NURBS connection is, in part, that can construct a B-spline that approximates a given real algebraic curve, which is crucial in displaying intersection curves, for example. See here for more details (I'm afraid I don't know many on this.) - I was not aware there was a connection between NURBS and AG, could you expand on that? –  Jonathan Fischoff Jul 24 '10 at 0:33
# Alternatives to Wage Labour Wage labour — the renting out of an employee's time to an employer — is the only type of work arrangement most of us are used to. In fact, it's so prevalent that it's almost unthinkable to conceive of it as being fundamentally harmful, or to think of an alternative. I'd like to argue that wage labour is indeed harmful, and that there exist alternatives that can provide much greater collective happiness and a much more efficient organisation of work. As the curious reader can find these ideas and others in works by philosophers and sociologists such as Marx [marx-wage-labour] and Graeber [graeber-debt], I have tried to keep the terminology in this essay to a minimum, so as to ensure it is accessible to a general audience. Virtually all of the working population at the present moment can be split into two classes: employers and employees. I believe a better description would treat class as a continuum, with gradations among both employees and employers as a function of their dependence upon the capital owned by others, but for simplicity's sake, let us content ourselves with this dichotomy for the present moment. Employers are shareholders of a business that sells a product or a service to generate revenue. Part of this revenue is used to pay employees who create the actual product or service and, therefore, also create the entirety of the value that the business earns its revenue for. The rest of the revenue remains as profit, which the employer is entitled to. In addition, employers possess two privileges that employees do not, which is the key distinguishing feature of the two classes. The first privilege is power over the company's actions, usually enshrined either as shareholders' voting rights, or direct control by an executive. The second privilege is ownership of the company's capital, usually as ownership of shares. Through the combination of these two privileges, employers are able to decide both the salaries of employees, as well as the price of the product or service sold. The net result is that an employer can, solely by virtue of their power over the company, and without necessarily providing any contribution of their own whatsoever to the value of the company's products or services, extract as much profit out of the employees' labour as they see fit, while simultaneously compensating the employees as little as they desire. Again, let us assume that the employer or shareholder provides no economic value and does not contribute to the company's output in any way. Taking this into account, I can scarcely believe that anyone would consider this arrangement reasonable and equitable in the slightest, except perhaps if one's judgement was clouded by the sheer force of habit resulting from taking this present system for granted. Suppose I walked over to a car repair shop and declared myself the owner of it, demanding 50\% of the shop's revenue from that day forward, without my making a contribution of any kind — I would be laughed at and thrown out immediately. The reader will no doubt object to this comparison on the grounds that a forcible takeover of an existing business is not the same as the founding of a company and the subsequent hiring of employees. However, it is not immediately obvious where exactly the crux of this difference lies. I have been able to identify three classes of argument supporting the alleged fairness of ownership of the labour of others. 
I have called these the “work argument”, the “idea argument” and the “risk argument”, and they cover the three main contributions one can offer to a new business, namely work, ideas and capital, respectively. I will consider each argument separately. The work argument centers around the idea that employers, especially when they are ambitious entrepreneurs, do incomparably more work than employees, and that this work is both significantly more difficult and much more crucial to the business than that of employees. Whether or not this claim is true obviously varies from company to company — there are certainly companies where the founders work many times more than employees, but the founders of many other companies do nothing particularly important and instead spend their time trying to justify their position. In my professional experience, I have come into contact with more companies of the latter than the former category, but I cannot claim to have conducted any kind of meaningful survey. I think it is true and reasonable that those contributing to a company should be paid in proportion to the quantity, difficulty and importance of their work. I find this self-evident and I doubt this is a controversial view. I can, however, see no justification for differential treatment of employers and employees from this point of view. If employee A works twice as much as employee B, and we are judging pay by the quality of one's work, we should naturally want employee A to be paid somewhat more than employee B. Perhaps employee A also works twice as much as employer E, or perhaps the work of employee A is twice as difficult or important as that of employer E. Say employer E has founded a data science company but has no experience with data science themselves, and most of the work is done by data scientist A. In this situation, one can scarcely use the argument of work to justify paying employer E more than employee A. Therefore, while the criterion of work is an important one, it can simply be applied uniformly to everyone involved in a certain enterprise, with no need for preferential treatment. This is a very important point, because the salaries of CEOs and the earnings of shareholders are often tens, hundreds or thousands of times higher than those of even the most essential “ordinary” employees, regardless of the work actually done by the former, if any. Even with heroic efforts, a founder could hardly work a hundred times more in a day than a regular employee, to justify a hundredfold pay increase. Despite this, in any average company, if employees gathered together to object to a CEO who had no particularly measurable contribution being paid more than them, they would not have much of a chance of success. We think it self-evident that CEOs are highly paid simply for their being CEOs. The clearest real-life example of this that I can think of is the classification of workers as “essential” during the coronavirus pandemic. The essential workers were in no cases the most highly paid — as far as I know, CEOs, high-level corporate managers, fund managers and so on were not classified as essential. Some of the jobs classified by the UK Health Security Agency as “essential” include: all NHS \footnote{National Health Service} and social care staff, transport workers, education and childcare workers, and those involved in supply chains (i.e. supermarket workers). [uk-essential-workers] We can conclude two things. 
Firstly, the work argument hardly offers much justification for paying employers many times more than employees. Secondly, we would benefit from a system that allows pay to vary more closely with labour, and I will provide a sketch of such a system momentarily. The second argument we will consider is the idea argument, which entails the assumption that, because employers provide the underlying “mission and vision” of a company, they should earn many times more than those that work under them. The trouble with this argument is that ideas are not nearly as valuable to a company as the work and, most importantly, expertise, needed to execute those ideas. Venture capitalist Paul Graham agrees: “A lot of would-be startup founders think the key to the whole process is the initial idea, and from that point all you have to do is execute. Venture capitalists know better. If you go to VC firms with a brilliant idea that you'll tell them about if they sign a nondisclosure agreement, most will tell you to get lost”. [graham] Furthermore, many successful companies shift ideas multiple times, which they gracefully term a “pivot”. “Microsoft's original plan was to make money selling programming languages, of all things. Their current business model didn't occur to them until IBM dropped it in their lap five years later”. [graham] Therefore, we can hardly use the idea argument to justify an employer with a “good idea” being paid many times more than the people actually implementing this idea. Lastly, we will examine the risk argument, which involves the claim that employers, in their entrepreneurial efforts of founding a business, are exposed to a much greater degree of personal risk than they would have had they lived the comparatively stable life of an employee. Instead, entrepreneurial employers incur a sort of opportunity cost because they could have instead been earning money at a stable job, and should therefore be rewarded for carrying this burden of risk that employees never carry. A second aspect to this argument is that company founders often also risk the capital they used to start the company. I shall consider these two aspects separately. Firstly, the risk of starting a venture should be proportional to the expected gain. If these two are in proportion, the founder of a risky venture should already have the potential for significant gain without having to also claim significantly more earnings than everyone else carrying out the work. Additionally, the risk employees are exposed to is also proportional to the likelihood of the company they work for to fail, so the risk incurred does not exhibit a dichotomy between employers and employees. We are left, finally, with the financial aspect. A founder will often put forward some significant amount of capital on starting a business, which is something an employee does not do. If this, however, is the only remaining salient grounds for differentiation between employees and employers, it is just as well to treat this infusion of capital as a loan to the corporation, the repayment of which should be prioritised. Capitalists might object, saying that their contribution of seed capital means they should indefinitely receive a higher proportion of future profits than those who did not contribute this capital. I object to this on the grounds that it amounts to what Marx represented as $M \to M'$: that is, the creation of money ($M'$) not from work or something of intrinsic value, but simply from other money ($M$). 
An in-depth discussion of this topic is beyond the scope of this essay, but my view is that the ownership of an asset (future profits) simply on the basis of having owned some existing asset (capital) is not the basis of equitable collaboration. Since the arguments we've surveyed have failed to hold water, we can only conclude that one cannot, at least based on the above arguments, describe wage labour as an equitable system. Therefore, the position of the employer has the possibility, and often the probability, to amount to nothing more than rent-seeking, with no checks or balances in place. The downsides are plain to see. The average employee's work ends up amounting to half of their waking hours across the entirety of their adult life, assuming they work 8 hours a day. For this reason, and because our work is one of the key components of our personal and social identity, the way one views one's work is heavily tied into how one views oneself. However, when one has no control over one's own work (because the employer decides what work must be done and how to do it) and no ownership of one's own work (because the employer decides the employee's wages and keeps the rest for himself), the default psychological relationship of today's employees to their work is one of alienation and intense dissonance. [price] It is not difficult to see that a more equitable system would contribute greatly both to general productivity and to the population's mental health. It is worth asking why this system is so prevalent if it is so damaging. I believe this is largely because of philosophical, not economic, reasons. In our wider culture and general consciousness, we fetishise the role of the entrepreneur, attributing to them extraordinary intellect and Herculean efforts. We imagine the rich providing “a ladder upon which the aspiring poor may climb” [carnegie]. We turn a blind eye to employees who struggle to rise up the hierarchy of work despite this often happening precisely because wage labour demands daily effort simply for survival. Employers have no incentive to change the status quo, because this status quo benefits them greatly. I think it would however be foolish to imply that employers are “evil” — the simpler explanation is that the current system benefits employers greatly, and it takes a great deal of intentional psychological effort to fight, on moral grounds, against an arrangement that gives you riches. Therefore, change has to happen in the collective philosophy and consciousness of society. Of course, in the absence of a better alternative, change is unlikely. In the remainder of this essay, I will endeavour to propose some general principles along which a better system might be designed. We value the principle of democracy in the spheres of government, and we should consider it in our present question as well, because it can serve as a guiding beacon. I propose that companies should, by default, be organised using the principle of democracy. More concretely, this has two main implications. Firstly, regarding executive power, all decisions made which have an impact on the whole of the company and its employees, should not be made by a CEO or board, but instead by the entire body of employees, subject to conditions I will describe presently. Secondly, on the topic of ownership, all employees should hold shares in the company and be compensated according to the company's profits, and in this way they will directly benefit from the value generated by their own work. 
Because the distinction between employers and employees is largely eliminated by this change, I shall henceforth simply refer to all those working as part of a company as “members”. There are, of course, many questions as to the specifics. The most obvious is the question of proportionality. If all members should have a share of executive power and ownership, should they all have an equal share? No. As I have described above, I believe that both control and pay should be proportional to the value a certain member offers to the company, and this is chiefly decided by the quantity, difficulty and importance of their work. Therefore, a member whose work is vital to the company may have greater voting rights and a greater share in the profits than a member who simply assists with certain matters on a part-time basis. There are other possible criteria that could be considered, such as length of membership in the company, but I believe these additional criteria are not fundamental and each company should have some flexibility in deciding its own principles, so long as they are still in line with the principle of democracy. Importantly, there has been a long tradition and legal basis for this kind of system dating back to the 15th century, today represented as the cooperative form of incorporation. The question that naturally follows is: who should decide the relative importance of each member's contribution? The answer follows from our new organisational structure. The members themselves should decide each others' relative contribution, and this should happen as part of periodic reviews conducted between peers. This system has been in place at several large companies for some time \footnote{Video game developer Valve Corporation uses a system almost identical to the one I described to decide salaries}, and Google has a “peer bonus” system where employees can offer a small bonus to their colleagues. It may not always be realistic for every single member of a company to show up to a vote when the company is very large. In this scenario, a system of representation comparable to that of parliamentary representation may be used. In questions of company-wide governance, members can create a periodically-elected council of representatives that votes on their behalf. In smaller-scale issues such as that of peer-to-peer voting on member contributions just described, members can be organised into working groups that settle issues among themselves, with periodic checks to ensure decisions made are in line with wider company policy. People have humorously described capitalist corporations as “little Soviet Unions” due to their totalitarian nature — why wouldn't we want to make them just as fair and democratic as we want our political system to be? One might object to these principles on the grounds that there often exist people who make some contribution to an organisation, but that contribution is so sporadic or incidental that they cannot be categorised as regular “members” of the organisation in any meaningful way. This, however, is not a problem — the system I am proposing is entirely compatible with the notion of external collaborators, which can work together with a company under the well-established tradition of contract work. There is then full flexibility on both sides, enabling the collaborator to set their own terms of collaboration and negotiate a fair agreement. 
Another benefit of the above proposal is that it mitigates a very pernicious problem that has caused much suffering: the vacuum of responsibility between executives and shareholders. If a company organised around the typical principles of wage labour causes harm to a person or class of people, it is very difficult to determine responsibility and correct its actions, since the executives of the company often deflect responsibility by saying that they have a responsibility towards shareholders to maximise profits, and shareholders deflect responsibility by saying that it is not them, but the executives, who carried out the harmful actions. This exact problem has been covered in the media numerous times, especially in the instance of real estate companies that create miserable living conditions for tenants because the executives cut costs to an extreme in order to satisfy the shareholders. A cooperative corporation avoids this problem because the “executives” and “shareholders” are one and the same, therefore those members of a company who voted for a certain policy can be held responsible. While I can think of many open questions and many directions in which this proposal can be (and has been) further developed, I firmly believe that even gradual, piecemeal change can make a significant positive impact not only on productivity, but also widespread mental health in today's society. Above all, I stress the importance of questioning current systems of labour, even when they seem so natural to us as to be almost unquestionable. In the 18th century, slavery was viewed by most not necessarily in a positive light, but as a sort of necessary evil, a fundamental reality of nature. Change was viewed as impossible and utopian. However, the systems around us are nothing more than our own inventions. Let us learn from the lessons of history, and take joy in the power to alter the course of our lives for the better. • Marx, Karl; Joynes, J L. “Wage-labour and Capital”, London: Printed by the Twentieth Century Press, 1893. • Graeber, David. “Debt: The First 5,000 Years”, Brooklyn, N.Y: Melville House, 2011. • UK Health Security Agency. “Essential workers prioritised for COVID-19 testing”, GOV.UK. https://www.gov.uk/guidance/essential-workers-prioritised-for-covid-19-testing. • Graham, Paul. “How to Start a Startup”, March 2005. http://paulgraham.com/start.html • Price, Richard H.; Friedland, Daniel S.; Vinokur, Anuram D. “Job loss: Hard times and eroded identity”, Perspectives on loss: A sourcebook (1998): 303-316. • Carnegie, Andrew. “The Gospel of Wealth”, June 1889.
# What does the $I$ mean when measuring ${\rm Tr}(\rho (I\otimes\sigma\otimes\cdots))$ in quantum tomography? In Nielsen and Chuang's QCQI, I learned that quantum tomography for $n$ qubits can be described concisely in math: we need to measure $$Tr(\rho W_k),\forall k$$ where $$W_k\in\{I,\sigma_x,\sigma_y,\sigma_z\}^{\otimes n}.$$ My understanding of $$Tr(\rho \hat{o})$$ for some observable $$\hat{o}$$ is that we measure $$\rho$$ with the measurement set obtained from the spectral decomposition of $$\hat{o}$$. E.g., the term $$Tr(\rho\sigma_x)$$ means we measure with this set of projective measurements: $$|+\rangle\langle+|,|-\rangle\langle-|$$. But since $$I$$ has infinitely many spectral decompositions (not unique, because of the multiplicity of its eigenvalues), what does $$Tr(\rho I)$$ mean in the language of measurement? • why do you think the degeneracy of $I$ should be a problem here? – glS Oct 8 '21 at 14:28 • In addition to @glS 's comment, $I$ only has an infinite spectrum when you are considering infinite-dimensional Hilbert spaces. Oct 8 '21 at 14:29 • Generally, it means "don't measure that qubit". Oct 8 '21 at 14:30 • Not having a unique eigenbasis also means that you can just pick your favorite one. Besides, @DaftWullie is right. Recall that the reduced state $\rho_B$ of a composite state $\rho_{AB}$ is uniquely defined by the equation $\mathrm{tr}((I \otimes X) \rho_{AB}) = \mathrm{tr}(X\rho_B)$ for all operators $X$. Oct 10 '21 at 7:41
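A small numerical illustration of the point made in the last comment (my own sketch, not part of the original thread): for a two-qubit state, the expectation value of $I\otimes X$ on the joint state equals the expectation value of $X$ on the reduced state of the second qubit, which is why the $I$ factor amounts to "don't measure that qubit".

    import numpy as np

    # Identity and a Pauli operator
    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)

    # A random two-qubit density matrix rho = A A† / Tr(A A†)
    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    rho = A @ A.conj().T
    rho /= np.trace(rho)

    # Expectation value of I ⊗ X on the joint state
    lhs = np.trace(rho @ np.kron(I2, X))

    # Reduced state of the second qubit (partial trace over the first qubit)
    rho_B = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
    rhs = np.trace(rho_B @ X)

    print(np.allclose(lhs, rhs))  # True: the I factor just means "ignore that qubit"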
# Evaluate : ∫₀³ dx/(9 + x²) - CBSE (Science) Class 12 - Mathematics
Concept: Evaluation of Simple Integrals of the Following Types and Problems
#### Question
Evaluate : $$\int_0^3 \frac{dx}{9+x^2}$$
#### Solution
Given, $$I=\int_0^3 \frac{dx}{9+x^2}=\int_0^3 \frac{dx}{3^2+x^2}.$$ We know that $$\int \frac{dx}{x^2+a^2}=\frac{1}{a}\tan^{-1}\left(\frac{x}{a}\right)+C.$$ Therefore, $$I=\int_0^3 \frac{dx}{x^2+3^2}=\frac{1}{3}\Big[\tan^{-1}\Big(\frac{x}{3}\Big)\Big]_0^3=\frac{1}{3}\big[\tan^{-1}1-\tan^{-1}0\big]=\frac{1}{3}\Big[\frac{\pi}{4}-0\Big]=\frac{\pi}{12}.$$
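As a quick numerical cross-check (my own addition, not part of the original solution), the result $\pi/12\approx 0.2618$ can be confirmed with SciPy:

    from math import atan, pi
    from scipy.integrate import quad

    value, error = quad(lambda x: 1.0 / (9.0 + x**2), 0, 3)
    print(value)            # ≈ 0.261799
    print(pi / 12)          # 0.261799...
    print(atan(3 / 3) / 3)  # (1/a)·arctan(x/a) with a = 3, evaluated from 0 to 3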
# Five equations to solve 6-variables question, is it possible? I was given a task to find the value of variable a,b,c,d,e and f. But I'm not sure it is even possible, given that only 5 equations are available. Can anybody point out how to solve these: a+b+c=164.35; d+e+f=94.44; a^2+d^2=20.06^2; b^2+e^2=74.34^2; c^2+f^2=123.27^2 Thank you. EDIT 1: If you know additional constraints for each variable, you can add them in to the list of equations. sols = Solve[{ a + b + c == 164.35, d + e + f == 94.44, a^2 + d^2 == 20.06^2, b^2 + e^2 == 74.34^2, c^2 + f^2 == 123.27^2, a >= 0, b >= 0, c >= 0, d >= 0, e >= 0, f >= 0 }, {a, b, c, d, e, f}, Reals ] sols[[3]] (* {a->19.7827,b->23.001,c->121.566,d->3.32366,e->70.6922,f->20.4241} *) sols[[3]] remains relatively simple, however the first 2 are still rather complicated expressions. Original: If you look up Solve or NSolve in the documentation centre, it should demonstrate some basic examples of how to use those functions. sols = Solve[{ a + b + c == 164.35, d + e + f == 94.44, a^2 + d^2 == 20.06^2, b^2 + e^2 == 74.34^2, c^2 + f^2 == 123.27^2 }, {a, b, c, d, e, f}, Reals ] The first 7 solutions returned are complicated ConditionalExpressions, but if you look as sols[[8 ;; 17]], those are fairly simple solutions with a numerical value for each of the variables. Of course, I'm assuming you're only looking for answers where all $$a$$ through $$f$$ are real. If that's not the case, you can drop the Reals part from the Solve command. In general, there are many values of $$a, b, c, d, e,$$ and $$f$$ that satisfy the equations. I don't know if you have any further constraints, or if you simply need some solution. • Wow thanks. This is great. Yes, a through f are real values, but I forgot to mention that the values are positive number. Is there a way to add this constraint inside this? Another question is, the value may be a positive number, but it can also be zero, is there any way to represent my statement here? Thanks again. – Azlan Jun 12 at 2:49 • @Azlan No problem. I've updated by answer to include your conditions. – MassDefect Jun 12 at 2:58 • Great, thanks! This definitely solves my problem. – Azlan Jun 12 at 3:04 • @Azlan Note that your intuition was correct: in general it is not possible to solve $n$ variables with only $n-1$ equations. In many cases, you would still be able to reduce complicated expressions to arrive at a family of solutions. Take, for example Reduce[{x+y+z==2,x+y==1},{x,y,z},Reals] which gives $z=1$ and $y=1-x$. This last part describes a line of points for which the equations hold. – LBogaardt Jun 12 at 14:33 • @LBogaardt thanks for your comments, I was thinking about some similar argument when I got the assignment thus I came to ask the question here, in case anybody have any better idea. However, by applying the code that MassDefect has shown above, the answer did come out (with a few warnings). Further, by applying a few conditions, I can get one single answer that fits all of the equations and conditions. I believe (my assumption) Mathematica did go beyond reasonable measure to get that answer ie. "trial and error", which I am so thankful that I get to apply my theory on the assignment. – Azlan Jun 14 at 0:55
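A quick way to sanity-check the quoted solution outside Mathematica (my own sketch, not part of the original answer) is to substitute the reported values of sols[[3]] back into the five constraints; the small residuals are just the rounding in the printed solution:

    # Values reported by Solve for sols[[3]] above
    a, b, c = 19.7827, 23.001, 121.566
    d, e, f = 3.32366, 70.6922, 20.4241

    checks = {
        "a + b + c": (a + b + c, 164.35),
        "d + e + f": (d + e + f, 94.44),
        "a^2 + d^2": (a**2 + d**2, 20.06**2),
        "b^2 + e^2": (b**2 + e**2, 74.34**2),
        "c^2 + f^2": (c**2 + f**2, 123.27**2),
    }
    for name, (lhs, rhs) in checks.items():
        print(f"{name}: {lhs:.2f} vs {rhs:.2f}")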
# How do you find the bond enthalpy from an equation? Which equation represents the $$\ce{N-H}$$ bond enthalpy in $$\ce{NH3}?$$ \begin{align} &\textbf{A.} &\ce{NH3(g) &-> N(g) + 3 H(g)}\\ &\textbf{B.} &\ce{1/3 NH3(g) &-> 1/3 N2(g) + H(g)}\\ &\textbf{C.} &\ce{NH3(g) &-> 1/2 N2(g) + 3/2 H(g)}\\ &\textbf{D.} &\ce{NH3(g) &-> .NH2(g) + .H(g)} \end{align} From what I've learnt, I understand that the bond enthalpy is defined as the energy required to break one mole of a specific bond. In the question above, I opted for answer C as it was the only one with the products in the form of $$\ce{N2}$$ and $$\ce{H2}.$$ But since there are three $$\ce{N-H}$$ bonds in $$\ce{NH3},$$ I am unsure about answer B. Can you clarify this? • you missed $H_2$ in option c. – Tapi Dec 30 '19 at 20:08
# My Data Science Blogs ## January 20, 2018 ### Distilled News DataScience: Elevate is a full-day event dedicated to data science best practices. Register today to hear from experts at Uber, Facebook, Salesforce, and more. DataScience: Elevate provides a closer look at how today’s top companies use machine learning and artificial intelligence to do better business. Free to attend, this multi-city event features presentations, panels, and networking sessions designed to elevate data science work and connect you with the companies that are driving change in enterprise data science. Imagine a world where machines understand what you want and how you are feeling when you call at a customer care – if you are unhappy about something, you speak to a person quickly. If you are looking for a specific information, you may not need to talk to a person (unless you want to!). This is going to be the new order of the world – you can already see this happening to a good degree. Check out the highlights of 2017 in the data science industry. You can see the breakthroughs that deep learning was bringing in a field which were difficult to solve before. One such field that deep learning has a potential to help solving is audio/speech processing, especially due to its unstructured nature and vast impact. So for the curious ones out there, I have compiled a list of tasks that are worth getting your hands dirty when starting out in audio processing. I’m sure there would be a few more breakthroughs in time to come using Deep Learning. The article is structured to explain each task and its importance. There is also a research paper that goes in the details of that specific task, along with a case study that would help you get started in solving the task. So let’s get cracking! We’ve compiled a list of the hottest events and conferences from the world of Data Science, Machine Learning and Artificial Intelligence happening in 2018. Below are all the links you need to get yourself to these great events! In this article, we have outlined some of the Scala libraries that can be very useful while performing major data scientific tasks. They have proved to be highly helpful and effective for achieving the best results. How are you monitoring your Python applications? Take the short survey – the results will be published on KDnuggets and you will get all the details. Propensity scores are an alternative method to estimate the effect of receiving treatment when random assignment of treatments to subjects is not feasible. Tensorflow 1.4 was released a few weeks ago with an implementation of Gradient Boosting, called TensorFlow Boosted Trees (TFBT). Unfortunately, the paper does not have any benchmarks, so I ran some against XGBoost. For many Kaggle-style data mining problems, XGBoost has been the go-to solution since its release in 2006. It’s probably as close to an out-of-the-box machine learning algorithm as you can get today, as it gracefully handles un-normalized or missing data, while being accurate and fast to train. In a previous post, I outlined emerging applications of reinforcement learning (RL) in industry. I began by listing a few challenges facing anyone wanting to apply RL, including the need for large amounts of data, and the difficulty of reproducing research results and deriving the error estimates needed for mission-critical applications. Nevertheless, the success of RL in certain domains has been the subject of much media coverage. 
This has sparked interest, and companies are beginning to explore some of the use cases and applications I described in my earlier post. Many tasks and professions, including software development, are poised to incorporate some forms of AI-powered automation. In this post, I’ll describe how RISE Lab’s Ray platform continues to mature and evolve just as companies are examining use cases for RL. Assuming one has identified suitable use cases, how does one get started with RL? Most companies that are thinking of using RL for pilot projects will want to take advantage of existing libraries. Any programming environment should be optimized for its task, and not all tasks are alike. For example, if you are exploring uncharted mountain ranges, the portability of a tent is essential. However, when building a house to weather hurricanes, investing in a strong foundation is important. Similarly, when beginning a new data science programming project, it is prudent to assess how much effort should be put into ensuring the code is reproducible. Note that it is certainly possible to go back later and “shore up” the reproducibility of a project where it is weak. This is often the case when an “ad-hoc” project becomes an important production analysis. However, the first step in starting a project is to make a decision regarding the trade-off between the amount of time to set up the project and the probability that the project will need to be reproducible in arbitrary environments. Another simple yet powerful technique we can pair with pipelines to improve performance is grid search, which attempts to optimize model hyperparameter combinations. ### Document worth reading: “An Overview on Data Representation Learning: From Traditional Feature Learning to Recent Deep Learning” Since about 100 years ago, to learn the intrinsic structure of data, many representation learning approaches have been proposed, including both linear ones and nonlinear ones, supervised ones and unsupervised ones. Particularly, deep architectures are widely applied for representation learning in recent years, and have delivered top results in many tasks, such as image classification, object detection and speech recognition. In this paper, we review the development of data representation learning methods. Specifically, we investigate both traditional feature learning algorithms and state-of-the-art deep learning models. The history of data representation learning is introduced, while available resources (e.g. online course, tutorial and book information) and toolboxes are provided. Finally, we conclude this paper with remarks and some interesting research directions on data representation learning. An Overview on Data Representation Learning: From Traditional Feature Learning to Recent Deep Learning ## January 19, 2018 ### Because it's Friday: Principles and Values Most companies publish mission and vision statements, and some also publish a detailed list of principles that underlie the company ethos. But what makes a good collection of principles, and does writing them down really matter? At the recent Monktoberfest conference, Bryan Cantrill argued that yes, they do matter, mostly by way of some really egregious counterexamples. That's all from the blog for this week. We'll be back on Monday — have a great weekend! 
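As a concrete companion to the scikit-learn pipelines item above (my own illustration, not code from the linked post), here is a minimal sketch that wraps preprocessing and a classifier in a Pipeline and tunes one hyperparameter with GridSearchCV; the clf__C naming is how grid search addresses a parameter of a named pipeline step:

    from sklearn.datasets import load_iris
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    X, y = load_iris(return_X_y=True)

    pipe = Pipeline([
        ("scale", StandardScaler()),               # preprocessing step
        ("clf", LogisticRegression(max_iter=500))  # model step
    ])

    search = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1, 10]}, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)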
### Book Memo: “Advances in Hybridization of Intelligent Methods” Models, Systems and Applications This book presents recent research on the hybridization of intelligent methods, which refers to combining methods to solve complex problems. It discusses hybrid approaches covering different areas of intelligent methods and technologies, such as neural networks, swarm intelligence, machine learning, reinforcement learning, deep learning, agent-based approaches, knowledge-based system and image processing. The book includes extended and revised versions of invited papers presented at the 6th International Workshop on Combinations of Intelligent Methods and Applications (CIMA 2016), held in The Hague, Holland, in August 2016. The book is intended for researchers and practitioners from academia and industry interested in using hybrid methods for solving complex problems. Google has recently released a Jupyter Notebook platform called Google Colaboratory. You can run Python code in a browser, share results, and save your code for later. It currently does not support R code. &utm&utm&utm ### The Friday #rstats PuzzleR : 2018-01-19 (This article was first published on R – rud.is, and kindly contributed to R-bloggers) Peter Meissner (@marvin_dpr) released crossword.r to CRAN today. It’s a spiffy package that makes it dead simple to generate crossword puzzles. He also made a super spiffy javascript library to pair with it, which can turn crossword model output into an interactive puzzle. I thought I’d combine those two creations with a way to highlight new/updated packages from the previous week, cool/useful packages in general, and some R functions that might come in handy. Think of it as a weekly way to get some R information while having a bit of fun! This was a quick, rough creation and I’ll be changing the styles a bit for next Friday’s release, but Peter’s package is so easy to use that I have absolutely no excuse to not keep this a regular feature of the blog. I’ll release a static, ggplot2 solution to each puzzle the following Monday(s). If you solve it before then, tweet a screen shot of your solution with the tag #rstats #puzzler and I’ll pick the first time-stamped one to highlight the following week. I’ll also get a GitHub setup for suggestions/contributions to this effort + to hold the puzzle data. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... ### If you did not already know Robust Multiple Signal Classification (MUSIC) In this paper, we introduce a new framework for robust multiple signal classification (MUSIC). The proposed framework, called robust measure-transformed (MT) MUSIC, is based on applying a transform to the probability distribution of the received signals, i.e., transformation of the probability measure defined on the observation space. In robust MT-MUSIC, the sample covariance is replaced by the empirical MT-covariance. By judicious choice of the transform we show that: 1) the resulting empirical MT-covariance is B-robust, with bounded influence function that takes negligible values for large norm outliers, and 2) under the assumption of spherically contoured noise distribution, the noise subspace can be determined from the eigendecomposition of the MT-covariance. 
Furthermore, we derive a new robust measure-transformed minimum description length (MDL) criterion for estimating the number of signals, and extend the MT-MUSIC framework to the case of coherent signals. The proposed approach is illustrated in simulation examples that show its advantages as compared to other robust MUSIC and MDL generalizations. … Cumulative Gains Model Quality Metric In developing risk models, developers employ a number of graphical and numerical tools to evaluate the quality of candidate models. These traditionally involve numerous measures including the KS statistic or one of many Area Under the Curve (AUC) methodologies on ROC and cumulative Gains charts. Typical employment of these methodologies involves one of two scenarios. The first is as a tool to evaluate one or more models and ascertain the effectiveness of that model. Second however is the inclusion of such a metric in the model building process itself such as the way Ferri et al. proposed to use Area Under the ROC curve in the splitting criterion of a decision tree. However, these methods fail to address situations involving competing models where one model is not strictly above the other. Nor do they address differing values of end points as the magnitudes of these typical measures may vary depending on target definition making standardization difficult. Some of these problems are starting to be addressed. Marcade Chief Technology officer of the software vendor KXEN gives an overview of several metric techniques and proposes a new solution to the problem in data mining techniques. Their software uses two statistics called KI and KR. We will examine the shortfalls he addresses more thoroughly and propose a new metric which can be used as an improvement to the KI and KR statistics. Although useful in a machine learning sense of developing a model, these same issues and solutions apply to evaluating a single model’s performance as related by Siddiqi and Mays with respect to risk scorecards. We will not specifically give examples of each application of the new statistics but rather make the claim that it is useful in most situations where an AUC or model separation statistic (such as KS) is used. … Probabilistic D-Clustering We present a new iterative method for probabilistic clustering of data. Given clusters, their centers and the distances of data points from these centers, the probability of cluster membership at any point is assumed inversely proportional to the distance from (the center of) the cluster in question. This assumption is our working principle. The method is a generalization, to several centers, of theWeiszfeld method for solving the Fermat-Weber location problem. At each iteration, the distances (Euclidean, Mahalanobis, etc.) from the cluster centers are computed for all data points, and the centers are updated as convex combinations of these points, with weights determined by the above principle. Computations stop when the centers stop moving. Progress is monitored by the joint distance function, a measure of distance from all cluster centers, that evolves during the iterations, and captures the data in its low contours. The method is simple, fast (requiring a small number of cheap iterations) and insensitive to outliers. … ### Tracking America in the age of Trump DURING his first year as America’s president Donald Trump attempted to redefine what it means to be leader of the free world. 
He has seen White House staffers come and go; been embroiled in scandal; waged war against “fake news”; and offended friends and foes alike. ### Curb your imposterism, start meta-learning (This article was first published on That’s so Random, and kindly contributed to R-bloggers) Recently, there has been a lot of attention for the imposter syndrome. Even seasoned programmers admit they suffer from feelings of anxiety and low self-esteem. Some share their personal stories, which can be comforting for those suffering in silence. I focus on a method that helped me grow confidence in recent years. It is a simple, yet very effective way to deal with being overwhelmed by the many things a data scientis can acquaint him or herself with. ## Two Faces of the Imposter Demon I think imposterism can be broken into two, related, entities. The first is feeling you are falling short on a personal level. That is, you think you are not intelligent enough, you think you don’t have perseverance, or any other way to describe you are not up to the job. Most advice for overcoming imposterism focuses on this part. I do not. Rather, I focus on the second foe, the feeling that you don’t know enough. This can be very much related to the feeling of failing on a personal level, you might feel you don’t know enough because you are too slow a learner. However, I think it is helpful to approach it as objective as possible. The feeling of not knowing enough can be combated more actively. Not by learning as much you can, but by considering not knowing a choice, rather than an imperfection. ## You can’t have it all The field of data science is incredibly broad. Comprising, among many others, getting data out of computer systems, preparing data in databases, principles of distributed computing, building and interpreting statistical models, data visualization, building machine learning pipelines, text analysis, translatingbusiness problems into data problems and communicating results to stakeholders. To make matters worse, for each and every topic there are several, if not dozens, databases, languages, packages and tools. This means, by definition, no one is going to have mastery of everything the field comprises. And thus there are things you do not and never will know. ## Learning new stuff To stay effective you have to keep up with developments within the field. New packages will aid your data preparations, new tools might process data in a faster way and new machine learning models might give superior results. Just to name a few. I think a great deal of impostering comes from feeling you can’t keep up. There is a constant list in the back of your head with cool new stuff you still have to try out. This is where meta-learning comes into play, actively deciding what you will and will not learn. For my peace of mind it is crucial to decide the things I am not going to do. I keep a log (Google Sheets document) that has two simple tabs. The first a collector of stuff I come across in blogs and on twitter. These are things that do look interesting, but it needs a more thorough look. I also add things that I come across in the daily job, such as a certain part of SQL I don’t fully grasp yet. Once in a while I empty the collector, trying to pick up the small stuff right away and moving the larger things either to second tab or to will-not-do. The second tab holds the larger things I am actually going to learn. With time at hand at work or at home I work on learning the things on the second tab. More about this later. 
## Define Yourself So you cannot have it all, you have to choose. What can be of good help when choosing is to have a definition of your unique data science profile. Here is mine: I have thorough knowledge of statistical models and know how to apply them. I am a good R programmer, both in interactive analysis and in software development. I know enough about data bases to work effectively with them, if necessary I can do the full data preparation in SQL. I know enough math to understand new models and read text books, but I can’t derive and proof new stuff on my own. I have a good understanding of the principles of machine learning and can apply most of the algorithms in practice. My oral and written communication are quite good, which helps me in translating back and forth between data and business problems. That’s it, focused on what I do well and where I am effective. Some things that are not in there; building a full data pipeline on an Hadoop cluster, telling data stories with d3.js, creating custom algorithms for a business, optimizing a database, effective use of python, and many more. If someone comes to me with one of these task, it is just “Sorry, I am not your guy”. I used to feel that I had to know everything. For instance, I started to learn python because I thought a good data scientist should know it as well as R. Eventually, I realized I will never be good at python, because I will always use R as my bread-and-butter. I know enough python to cooperate in a project where it is used, but that’s it and that it will remain. Rather, I spend time and effort now in improving what I already do well. This is not because I think because R is superior to python. I just happen to know R and I am content with knowing R very well at the cost of not having access to all the wonderful work done in python. I will never learn d3.js, because I don’t know JavaScript and it will take me ages to learn. Rather, I might focus on learning Stan which is much more fitting to my profile. I think it is both effective and alleviating stress to go deep on the things you are good at and deliberately choose things you will not learn. ## The meta-learning I told you about the collector, now a few more words about the meta-learning tab. It has three simple columns. what it is I am going to learn and how I am going to do that are the first two obvious categories. The most important, however, is why I am going to learn it. For me there are only two valid reasons. Either I am very interested in the topic and I envision enjoying doing it, or it will allow me to do my current job more effectively. I stressed current there because scrolling the requirements of job openings is about the perfect way to feed your imposter monster. Focus on what you are doing now and have faith you will pick-up new skills if a future job demands it. Meta-learning gives me focus, relaxation and efficiency. At its core it is defining yourself as a data scientist and deliberately choose what you are and, more importantly, what you are not going to learn. I experienced, that doing this with rigor actively fights the imposterism. Now, what works for me might not work for you. Maybe a different system fits you better. However, I think everybody benefits from defining the data scientist he/she is and actively choose what not to learn. 
R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... ### Learn Data Science Without a Degree But how do you learn data science? Let’s take a look at some of the steps you can take to begin your journey into data science without needing a degree, including Springboard’s Data Science Career Track. ### R Packages worth a look Allows you to retrieve information from the ‘Google Knowledge Graph’ API <https://…/knowledge.html> and process it in R in various forms. The ‘Knowledge Graph Search’ API lets you find entities in the ‘Google Knowledge Graph’. The API uses standard ‘schema.org’ types and is compliant with the ‘JSON-LD’ specification. Fast Region-Based Association Tests on Summary Statistics (sumFREGAT) An adaptation of classical region/gene-based association analysis techniques that uses summary statistics (P values and effect sizes) and correlations between genetic variants as input. It is a tool to perform the most common and efficient gene-based tests on the results of genome-wide association (meta-)analyses without having the original genotypes and phenotypes at hand. Monotonic Association on Zero-Inflated Data (mazeinda) Methods for calculating and testing the significance of pairwise monotonic association from and based on the work of Pimentel (2009) <doi:10.4135/9781412985291.n2>. Computation of association of vectors from one or multiple sets can be performed in parallel thanks to the packages ‘foreach’ and ‘doMC’. Computing Envelope Estimators (Renvlp) Provides a general routine, envMU(), which allows estimation of the M envelope of span(U) given root n consistent estimators of M and U. The routine envMU() does not presume a model. This package implements response envelopes (env()), partial response envelopes (penv()), envelopes in the predictor space (xenv()), heteroscedastic envelopes (henv()), simultaneous envelopes (stenv()), scaled response envelopes (senv()), scaled envelopes in the predictor space (sxenv()), groupwise envelopes (genv()), weighted envelopes (weighted.env(), weighted.penv() and weighted.xenv()), envelopes in logistic regression (logit.env()), and envelopes in Poisson regression (pois.env()). For each of these model-based routines the package provides inference tools including bootstrap, cross validation, estimation and prediction, hypothesis testing on coefficients are included except for weighted envelopes. Tools for selection of dimension include AIC, BIC and likelihood ratio testing. Background is available at Cook, R. D., Forzani, L. and Su, Z. (2016) <doi:10.1016/j.jmva.2016.05.006>. Optimization is based on a clockwise coordinate descent algorithm. Model Based Random Forest Analysis (mobForest) Functions to implements random forest method for model based recursive partitioning. The mob() function, developed by Zeileis et al. (2008), within ‘party’ package, is modified to construct model-based decision trees based on random forests methodology. The main input function mobforest.analysis() takes all input parameters to construct trees, compute out-of-bag errors, predictions, and overall accuracy of forest. The algorithm performs parallel computation using cluster functions within ‘parallel’ package. 
### President Trump’s first year, through The Economist’s covers SATURDAY January 20th marks one year since Donald Trump’s inauguration as the 45th President of the United States. Over the intervening months the world has been forced to come to terms with—and repeatedly adjust to—having Mr Trump in the White House. His first 365 days have hurtled by like an out-of-control fairground ride. ### Porn traffic before and after the missile alert in Hawaii PornHub compared minute-to-minute traffic on their site before and after the missile alert to an average Saturday (okay for work). Right after the alert there was a dip as people rushed for shelter, but not long after the false alarm notice, traffic appears to spike. Some interpret this as people rushed to porn after learning that a missile was not headed towards their home. Maybe that’s part of the reason, but my guess is that Saturday morning porn consumers woke earlier than usual. Tags: , ### Edelweiss: Data Scientist Seeking a Data Scientist for building, validating and deploying machine learning models on unstructured data for various business problems. ### Plot2txt for quantitative image analysis Plot2txt converts images into text and other representations, helping create semi-structured data from binary, using a combination of machine learning and other algorithms. ### The Trumpets of Lilliput Gur Huberman pointed me to this paper by George Akerlof and Pascal Michaillat that gives an institutional model for the persistence of false belief. The article begins: This paper develops a theory of promotion based on evaluations by the already promoted. The already promoted show some favoritism toward candidates for promotion with similar beliefs, just as beetles are more prone to eat the eggs of other species. With such egg-eating bias, false beliefs may not be eliminated by the promotion system. Our main application is to scientific revolutions: when tenured scientists show favoritism toward candidates for tenure with similar beliefs, science may not converge to the true paradigm. We extend the statistical concept of power to science: the power of the tenure test is the probability (absent any bias) of denying tenure to a scientist who adheres to the false paradigm, just as the power of any statistical test is the probability of rejecting a false null hypothesis. . . . It was interesting to see a mathematical model for the persistence of errors, and I agree that there must be something to their general point that people are motivated to support work that confirms their beliefs and to discredit work that disconfirms their beliefs. We’ve seen a lot of this sort of analysis at the individual level (“motivated reasoning,” etc.) and it makes sense to think of this at an interpersonal or institutional level too. There were, however, some specific aspects of their model that I found unconvincing, partly on statistical grounds and partly based on my understanding of how social science works within society: 1. Just as I don’t think it is helpful to describe statistical hypotheses as “true” or “false,” I don’t think it’s helpful to describe scientific paradigms as “true” or “false.” Also, I’m no biologist, but I’m skeptical of a statement such as, “With the beetles, the more biologically fit species does not always prevail.” What does it mean to say a species is “more biologically fit”? If they survive and reproduce, they’re fit, no? And if a species’ eggs get eaten before they’re hatched, that reduces the species’s fitness. 
In the article, they modify “true” and “false” to “Better” and “Worse,” but I have pretty much the same problem here, which is that different paradigms serve different purposes, so I don’t see how it typically makes sense to speak of one paradigm as giving “a more correct description of the world,” except in some extreme cases. For example, a few years ago I reviewed a pop-science book that was written from a racist paradigm. Is that paradigm “more correct” or “less correct” than a non-racist paradigm? It depends on what questions are being asked, and what non-racist paradigm is being used as a comparison. Beyond all this—or perhaps explaining my above comments—is my irritation at people who use university professors as soft targets. Silly tenured professors ha ha. Bad science is a real problem but I think it’s ludicrous to attribute that to the tenure system. Suppose there was no such thing as academic tenure, then I have a feeling that social and biomedical science research would be even more fad-driven. I sent the above comments to the authors, and Akerlof replied: I think that your point of view and ours are surprisingly on the same track; in fact the paper answers Thomas Kuhn’s question: what makes science so successful. The point is rather subtle and is in the back pages: especially regarding the differences between promotions of scientists and promotion of surgeons who did radical mastectomies. The post The Trumpets of Lilliput appeared first on Statistical Modeling, Causal Inference, and Social Science. ### Registration and talk proposals open Monday for useR!2018 Registration will open on Monday (January 22) for useR! 2018, the official R user conference to be held in Brisbane, Australia July 10-13. If you haven't been to a useR! conference before, it's a fantastic opportunity to meet and mingle with other R users from around the world, see talks on R packages and applications, and attend tutorials for deep dives on R-related topics. This year's conference will also feature keynotes from Jenny Bryan, Steph De Silva, Heike Hofmann, Thomas Lin Pedersen, Roger Peng and Bill Venables. It's my favourite conference of the year, and I'm particularly looking forward to this one. This video from last year's conference in Brussels (a sell-out with over 1,1000 attendees) will give you a sense of what a useR! conference is like: The useR! conference brought to you by the R Foundation and is 100% community-led. That includes the content: the vast majority of talks come directly from R users. If you've written an R package, performed an interesting analysis with R, or simply have something to share of interest to the R community, consider proposing a talk by submitting an abstract. (Abstract submissions are open now.) Most talks are 20 minutes, but you can also propose a 5-minute lightning talk or a poster. If you're not sure what kind of talk you might want to give, check out the program from useR!2017 for inspiration. R-Ladies, which promotes gender diversity in the R community, can also provide guidance on abstracts. Note that all proposals must comply with the conference code of conduct. Early-bird registrations close on March 15, and while general registration will be open until June my advice is to get in early, as this year's conference is likely to sell out once again. If you want to propose a talk, submissions are due by March 2 (but early submissions have a better chance of being accepted). 
Follow @user!2018_conf on Twitter for updates about the conference, and click the links below to register or submit an abstract. I look forward to seeing you in Brisbane! Update Jan 19: Registrations will now open January 22 useR! 2018: Registration; Abstract submission ### Managing Machine Learning Workflows with Scikit-learn Pipelines Part 2: Integrating Grid Search Another simple yet powerful technique we can pair with pipelines to improve performance is grid search, which attempts to optimize model hyperparameter combinations. ### A lesson from the Charles Armstrong plagiarism scandal: Separation of the judicial and the executive functions Charles Armstrong is a history professor at Columbia University who, so I’ve heard, has plagiarized and faked references for an award-winning book about Korean history. The violations of the rules of scholarship were so bad that the American Historical Association “reviewed the citation issue after being notified by a member of the concerns some have about the book” and, shortly after that, Armstrong relinquished the award. More background here. To me, the most interesting part of the story is that Armstrong was essentially forced to give in, especially surprising given how aggressive his original response was, attacking the person whose work he’d stolen. It’s hard to imagine that Columbia University could’ve made Armstrong return the prize, given that the university gave him a “President’s Global Innovation Fund Grant” many months after the plagiarism story had surfaced. The reason why, I think, is that the American Historical Association had this independent committee. And that gets us to the point raised in the title of this post. Academic and corporate environments are characterized by an executive function with weak to zero legislative or judicial functions. That is, decisions are made based on consequences, with very few rules. Yes, we have lots of little rules and red tape, but no real rules telling the executives what to do. Evaluating every decision based on consequences seems like it could be a good idea, but it leads to situations where wrongdoers are left in place, as in any given situation it seems like too much trouble to deal with the problem. An analogy might be with the famous probability-matching problem. Suppose someone gives shuffles a deck with 100 cards, 70 red and 30 black, and then starts pulling out cards, one at a time, asking you to guess. You’ll maximize your expected number of correct answers by simply guessing Red, Red, Red, Red, Red, etc. In each case, that’s the right guess, but put it together and your guesses are not representative. Similarly, if for each scandal the university makes the locally optimal decision to do nothing, the result is that nothing is ever done. This analogy is not perfect: I’m not recommending that the university sanction 30% of its profs at random—for one thing, that could be me! But it demonstrates the point that a series of individually reasonable decisions can be unreasonable in aggregate. Anyway, one advantage of a judicial branch—or, more generally, a fact-finding institution that is separate from enforcement and policymaking—is that its members can feel free to look for truth, damn the consequences, because that’s their role. 
So, instead of the university weighing the negatives of having an barely-repentant plagiarist on faculty or having the embarrassment of sanctioning a tenured professor, there can be an independent committee of the American History Association just judging the evidence. it’s a lot easier to judge the evidence if you don’t have direct responsibility for what will be done by the evidence. Or, to put it another way, it’s easier to be a judge if you don’t also have to play the roles of jury and executioner. P.S. I see that Armstrong was recently quoted in Newsweek regarding Korea policy. Maybe they should’ve interviewed the dude he copied from instead. Why not go straight to the original, no? THE threat of nuclear holocaust, familiar to Americans who grew up during the cold war, is alien to most today. On Saturday January 13th fears of annihilation reemerged. ### Introducing RLlib: A composable and scalable reinforcement learning library RISE Lab’s Ray platform adds libraries for reinforcement learning and hyperparameter tuning. In a previous post, I outlined emerging applications of reinforcement learning (RL) in industry. I began by listing a few challenges facing anyone wanting to apply RL, including the need for large amounts of data, and the difficulty of reproducing research results and deriving the error estimates needed for mission-critical applications. Nevertheless, the success of RL in certain domains has been the subject of much media coverage. This has sparked interest, and companies are beginning to explore some of the use cases and applications I described in my earlier post. Many tasks and professions, including software development, are poised to incorporate some forms of AI-powered automation. In this post, I’ll describe how RISE Lab’s Ray platform continues to mature and evolve just as companies are examining use cases for RL. Assuming one has identified suitable use cases, how does one get started with RL? Most companies that are thinking of using RL for pilot projects will want to take advantage of existing libraries. There are several open source projects that one can use to get started. From a technical perspective, there are a few things to keep in mind when considering a library for RL: • Support for existing machine learning libraries. Because RL typically uses gradient-based or evolutionary algorithms to learn and fit policy functions, you will want it to support your favorite library (TensorFlow, Keras, PyTorch, etc.). • Scalability. RL is computationally intensive, and having the option to run in a distributed fashion becomes important as you begin using it in key applications. • Composability. RL algorithms typically involve simulations and many other components. You will want a library that lets you reuse components of RL algorithms (such as policy graphs, rollouts), that is compatible with multiple deep learning frameworks, and that provides composable distributed execution primitives (nested parallelism). ## Introducing Ray RLlib Ray is a distributed execution platform (from UC Berkeley’s RISE Lab) aimed at emerging AI applications, including those that rely on RL. RISE Lab recently released RLlib, a scalable and composable RL library built on top of Ray: RLlib is designed to support multiple deep learning frameworks (currently TensorFlow and PyTorch) and is accessible through a simple Python API. 
It currently ships with the following popular RL algorithms (more to follow): It’s important to note that there is no dominant pattern for computing and composing RL algorithms and components. As such, we need a library that can take advantage of parallelism at multiple levels and physical devices. RLlib is an open source library for the scalable implementation of algorithms that connect the evolving set of components used in RL applications. In particular, RLlib enables rapid development because it makes it easy to build scalable RL algorithms through the reuse and assembly of existing implementations (“parallelism encapsulation”). RLlib also lets developers use neural networks created with several popular deep learning frameworks, and it integrates with popular third-party simulators. Software for machine learning needs to run efficiently on a variety of hardware configurations, both on-premise and on public clouds. Ray and RLlib are designed to deliver fast training times on a single multi-core node or in a distributed fashion, and these software tools provide efficient performance on heterogeneous hardware (whatever the ratio of CPUs to GPUs might be). ## Examples: Text summarization and AlphaGo Zero The best way to get started is to apply RL on some of your existing data sets. To that end, a relatively recent application of RL is in text summarization. Here’s a toy example to try—use RLlib to summarize unstructured text (note that this is not a production-grade model):

    # Complete notebook available here: https://goo.gl/n6f43h
    document = """Insert your sample text here"""
    summary = summarization.summarize(agent, document)
    print("Original document length is {}".format(len(document)))
    print("Summary length is {}".format(len(summary)))

Text summarization is just one of several possible applications. A recent RISE Lab paper provides other examples, including an implementation of the main algorithm used in AlphaGo Zero in about 70 lines of RLlib pseudocode. ## Hyperparameter tuning with RayTune Another common example involves model building. Data scientists spend a fair amount of time conducting experiments, many of which involve tuning parameters for their favorite machine learning algorithm. As deep learning (and RL) become more popular, data scientists will need software tools for efficient hyperparameter tuning and other forms of experimentation and simulation. RayTune is a new distributed hyperparameter search framework for deep learning and RL. It is built on top of Ray and is closely integrated with RLlib. RayTune is based on grid search and uses ideas from early stopping, including the Median Stopping Rule and HyperBand. There is a growing list of open source software tools available to companies wanting to explore deep learning and RL. We are in an empirical era, and we need tools that enable quick experiments in parallel, while letting us take advantage of popular software libraries, algorithms, and components. Ray just added two libraries that will let companies experiment with reinforcement learning and also efficiently search through the space of neural network architectures. Reinforcement learning applications involve multiple components, each of which presents opportunities for distributed computation. Ray RLlib adopts a programming model that enables the easy composition and reuse of components, and takes advantage of parallelism at multiple levels and physical devices.
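RayTune's API has evolved considerably since this post; as a rough sketch of the idea only (the names below, tune.run, tune.grid_search, and tune.report, follow a later ray.tune interface and are my assumption rather than code from the original article), a grid search over a toy objective looks something like this:

    import ray
    from ray import tune

    def trainable(config):
        # Toy objective: pretend the score depends only on the learning rate.
        score = -(config["lr"] - 0.1) ** 2
        tune.report(score=score)  # report the metric back to Tune

    ray.init(ignore_reinit_error=True)
    analysis = tune.run(
        trainable,
        config={"lr": tune.grid_search([0.001, 0.01, 0.1, 0.5])},
    )
    print(analysis.get_best_config(metric="score", mode="max"))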
Over the short term, RISE Lab plans to add more RL algorithms, APIs for integration with online serving, support for multi-agent scenarios, and an expanded set of optimization strategies. Related resources: ### Four short links: 19 January 2018 Pricing, Windows Emulation, Toxic Tech Culture, and AI Futures 1. Pricing Summary -- quick and informative read. Three-part tariff (3PT)—Again, the software has a base platform fee, but the fee is $25,000 because it includes the first 150K events free. Each marginal event costs$0.15. In academic research and theory, the three-part tariff is proven to be best. It provides many different ways for the sales team to negotiate on price and captures the most value. 2. Wine 3.0 Released -- the Windows emulator now runs Photoshop CC 2018! Astonishing work. 3. Getting Free of Toxic Tech Culture (Val Aurora and Susan Wu) -- We didn’t realize how strongly we’d unconsciously adopted this belief that people in tech were better than those who weren’t until we started to imagine ourselves leaving tech and felt a wave of self-judgment and fear. Early on, Valerie realized that she unconsciously thought of literally every single job other than software engineer as “for people who weren’t good enough to be a software engineer” – and that she thought this because other software engineers had been telling her that for her entire career. This. 4. The Future Computed: Artificial Intelligence and its Role in Society -- Microsoft's book on the AI-enabled future. Three chapters: The Future of Artificial Intelligence; Principles, Policies, and Laws for the Responsible Use of AI; and AI and the Future of Jobs and Work. ### What can Text Mining tell us about Snapchat’s new update? Last week, Snapchat unveiled a major redesign of their app that received quite a bit of negative feedback. As a video-sharing platform that has integrated itself into users’ daily lives, Snapchat relies on simplicity and ease of use. So when large numbers of these users begin to express pretty serious frustration about the app’s new design, it’s a big threat to their business. You can bet that right now Snapchat are analyzing exactly how big a threat this backlash is by monitoring the conversation online. This is a perfect example of businesses leveraging the Voice of their Customer with tools like Natural Language Processing. Businesses that track their product’s reputation online can quantify how serious events like this are and make informed decisions on their next steps. In this blog, we’ll give a couple of examples of how you can dive into online chatter and extract important insights on customer opinion. This TechCrunch article pointed out that 83% of Google Play Store reviews in the immediate aftermath of the update gave the app one or two stars. But as we mentioned in a blog last week, star rating systems aren’t enough – they don’t tell you why people feel the way they do and most of the time people base their star rating on a lot more than how they felt about a product or service. To get accurate and in-depth insights, you need to understand exactly what a reviewer is positive or negative about, and to what degree they feel this way. This can only be done effectively with text mining. So in this short blog, we’re going to use text mining to: 1. Analyze a sample of the Play Store reviews to see what Snapchat users mentioned in reviews posted since the update. 2. Gather and analyze a sample of 1,000 tweets mentioning “Snapchat update” to see if the reaction was similar on social media. 
In each of these analyses, we’ll use the AYLIEN Text Analysis API, which comes with a free plan that’s ideal for testing it out on small datasets like the ones we’ll use in this post. ## What did the app reviewers talk about? As TechCrunch pointed out, 83% of reviews since the update shipped received one or two stars, which gives us a high-level overview of the sentiment shown towards the redesign. But to dig deeper, we need to look into the reviews and see what people were actually talking about in all of these reviews. As a sample, we gathered the 40 reviews readily available on the Google Play Store and saved them in a spreadsheet. We can analyze what people were talking about in them by using our Text Analysis API’s Entities feature. This feature analyzes a piece of text and extracts the people, places, organizations and things mentioned in it. One of the types of entities returned to us is a list of keywords. To get a quick look into what the reviewers were talking about in a positive and negative light, we visualized the keywords extracted along with the average sentiment of the reviews they appeared in. From the 40 reviews, our Text Analysis API extracted 498 unique keywords. Below you can see a visualization of the keywords extracted and the average sentiment of the reviews they appeared in from most positive (1) to most negative (-1). First of all, you’ll notice that keywords like “love” and “great” are high on the chart, while “frustrating” and “terrible” are low on the scale – which is what you’d expect. But if you look at keywords that refer to Snapchat, you’ll see that “Bitmoji” appears high on the chart, while “stories,” “layout,” and “unintuitive” all appear low down the chart, giving an insight into what Snapchat’s users were angry about. ## How did Twitter react to the Snapchat update? Twitter is such an accurate gauge of what the general public is talking about that the US Geological Survey uses it to monitor for earthquakes – because the speed at which people react to earthquakes on Twitter outpaces even their own seismic data feeds! So if people Tweet about earthquakes during the actual earthquakes, they are absolutely going to Tweet their opinions of Snapchat updates. To get a snapshot of the Twitter conversation, we gathered 1,000 Tweets that mentioned the update. To gather the Tweets, we ran a search on Twitter using the Twitter Search API (this is really easy – take a look at our beginners’ guide to doing this in Python). After we gathered our Tweets, we analyzed them with our Sentiment Analysis feature and as you can see, the Tweets were overwhelmingly negative: [Figure: Sentiment of 1,000 Tweets about Snapchat] Quantifying the positive, negative, and neutral sentiment shown towards the update on Twitter is useful, but using Text Mining we can go one further and extract the keywords mentioned in every one of these Tweets. To do this, we use the Text Analysis API’s Entities feature.
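The post does not show the client code behind these calls, and the AYLIEN endpoints themselves are not reproduced here. Purely as an illustrative stand-in (my own sketch using the open-source TextBlob library, not the API used by the authors), a sentiment tally plus a crude keyword count over a handful of hypothetical tweets could look like this:

    # Illustrative only: TextBlob stands in for the Text Analysis API used in the post.
    # Requires: pip install textblob
    from collections import Counter
    from textblob import TextBlob

    tweets = [  # hypothetical examples; the post analysed 1,000 real tweets
        "The new Snapchat update is awful, I can't find my friends' stories anymore",
        "Honestly the snapchat update layout is so confusing",
        "I actually like the new update, the Bitmoji placement is neat",
    ]

    stopwords = {"the", "is", "so", "i", "my", "new", "a", "can't", "like", "actually"}
    sentiment, keywords = Counter(), Counter()

    for text in tweets:
        polarity = TextBlob(text).sentiment.polarity  # -1 (negative) .. +1 (positive)
        sentiment["positive" if polarity > 0 else "negative" if polarity < 0 else "neutral"] += 1
        keywords.update(w for w in text.lower().split() if w not in stopwords)

    print(sentiment)
    print(keywords.most_common(5))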
Disclaimer: this being Twitter, there was quite a bit of opinion expressed in an NSFW manner 😉 [Figure: Most mentioned keywords on Twitter in Tweets about "Snapchat update"] The number of expletives we identified as keywords reinforces the severity of the opinion expressed towards the update. You can see that “stories” and “story” are two of the few prominently featured keywords that referred to feature updates while keywords like “awful” and “stupid” are good examples of the most-mentioned keywords in reaction to the update as a whole. It’s clear that using text mining processes like sentiment analysis and entity extraction can provide a detailed overview of public reaction to an event by extracting granular information from product reviews and social media chatter. If you can think of insights you could extract with text mining about topics that matter to you, our Text Analysis API allows you to analyze 1,000 documents per day free of charge and getting started with our tools couldn’t be easier – click on the image below to sign up. The post What can Text Mining tell us about Snapchat’s new update? appeared first on AYLIEN. ### On Random Weights for Texture Generation in One Layer Neural Networks Continuing on the use of random projections (which in the context of DNNs is really about NN with random weights), today we have: Recent work in the literature has shown experimentally that one can use the lower layers of a trained convolutional neural network (CNN) to model natural textures. More interestingly, it has also been experimentally shown that only one layer with random filters can also model textures although with less variability. In this paper we ask the question as to why one layer CNNs with random filters are so effective in generating textures? We theoretically show that one layer convolutional architectures (without a non-linearity) paired with an energy function used in previous literature, can in fact preserve and modulate frequency coefficients in a manner so that random weights and pretrained weights will generate the same type of images. Based on the results of this analysis we question whether similar properties hold in the case where one uses one convolution layer with a non-linearity. We show that in the case of ReLU non-linearity there are situations where only one input will give the minimum possible energy whereas in the case of no nonlinearity, there are always infinite solutions that will give the minimum possible energy. Thus we can show that in certain situations adding a ReLU non-linearity generates less variable images. Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there! Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
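To make the abstract's setup concrete, here is a minimal sketch of a single convolution layer with random, untrained filters producing texture features (my own illustration; the Gram-matrix statistic is an assumption borrowed from the texture-synthesis literature the paper builds on, and the paper's exact energy function may differ):

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    image = torch.rand(1, 3, 128, 128)    # stand-in for an RGB texture image
    filters = torch.randn(64, 3, 11, 11)  # one layer of random, untrained filters

    features = F.relu(F.conv2d(image, filters, padding=5))  # single conv layer + ReLU
    flat = features.flatten(2)                              # (1, 64, H*W)
    gram = flat @ flat.transpose(1, 2) / flat.shape[-1]     # (1, 64, 64) texture statistic
    print(gram.shape)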
### 501 days of Summer (school)

(This article was first published on Gianluca Baio's blog, and kindly contributed to R-bloggers)

As I anticipated earlier, we’re now ready to open registration for our Summer School in Florence (I was waiting for UCL to set up the registration system and thought it may take much longer than it actually did $-$ so well done UCL!). We’ll probably have a few changes here and there in the timetable $-$ we’re thinking of introducing some new topics and I think I’ll certainly merge a couple of my intro lectures, to leave some time for those… Nothing is fixed yet and we’re in the process of deliberating all the changes $-$ but I’ll post as soon as we have a clearer plan for the revised timetable. Here’s the advert (which I’ve also sent out to some relevant mailing lists).

Summer school: Bayesian methods in health economics
Date: 4-8 June 2018
Venue: CISL Study Center, Florence (Italy)
COURSE ORGANISERS: Gianluca Baio, Chris Jackson, Nicky Welton, Mark Strong, Anna Heath
OVERVIEW: This summer school is intended to provide an introduction to Bayesian analysis and MCMC methods using R and MCMC sampling software (such as OpenBUGS and JAGS), as applied to cost-effectiveness analysis and typical models used in health economic evaluations. We will present a range of modelling strategies for cost-effectiveness analysis as well as recent methodological developments for the analysis of the value of information. The course is intended for health economists, statisticians, and decision modellers interested in the practice of Bayesian modelling and will be based on a mixture of lectures and computer practicals, although the emphasis will be on examples of applied analysis: software and code to carry out the analyses will be provided. Participants are encouraged to bring their own laptops for the practicals. We shall assume a basic knowledge of standard methods in health economics and some familiarity with a range of probability distributions, regression analysis, Markov models and random-effects meta-analysis. However, statistical concepts are reviewed in the context of applied health economic evaluations in the lectures.

The summer school is hosted in the beautiful complex of the Centro Studi Cisl, overlooking, and a short distance from, Florence (Italy). The registration fees include full board accommodation in the Centro Studi. More information can be found at the summer school webpage. Registration is available from the UCL Store. For more details or enquiries, email Dr Gianluca Baio.

### R Packages worth a look

Parallel Runs of Reverse Depends (prrd)
Reverse depends for a given package are queued such that multiple workers can run the tests in parallel.

Critical Line Algorithm in Pure R (CLA)
Implements ‘Markovitz’ Critical Line Algorithm (‘CLA’) for classical mean-variance portfolio optimization. Care has been taken for correctness in light of previous buggy implementations.

Extension for ‘R6’ Base Class (r6extended)
Useful methods and data fields to extend the bare bones ‘R6’ class provided by the ‘R6’ package – ls-method, hashes, warning- and message-method, general get-method and a debug-method that assigns self and private to the global environment.
Run Predictions Inside the Database (tidypredict)
It parses a fitted ‘R’ model object and returns a formula in ‘Tidy Eval’ code that calculates the predictions. It works with several database back-ends because it leverages ‘dplyr’ and ‘dbplyr’ for the final ‘SQL’ translation of the algorithm. It currently supports lm(), glm() and randomForest() models.

Bayesian Structure Learning in Graphical Models using Birth-Death MCMC (BDgraph)
Provides statistical tools for Bayesian structure learning in undirected graphical models for continuous, discrete, and mixed data. The package implements recent improvements in the Bayesian graphical models literature, including Mohammadi and Wit (2015) <doi:10.1214/14-BA889> and Mohammadi et al. (2017) <doi:10.1111/rssc.12171>. To speed up the computations, the BDMCMC sampling algorithms are implemented in parallel using OpenMP in C++.

### Distilled News

In this article a few simple applications of Markov chains are going to be discussed as a solution to a few text processing problems. These problems appeared as assignments in a few courses; the descriptions are taken directly from the courses themselves.

Remember that I told you last time that Python if statements are similar to how our brain processes conditions in our everyday life? That’s true for for loops too. You go through your shopping list until you’ve collected every item from it. The dealer gives a card to each player until everyone has five. The athlete does push-ups until reaching one-hundred… Loops everywhere! As for for loops in Python: they are perfect for processing repetitive programming tasks. In this article, I’ll show you everything you need to know about them: the syntax, the logic and best practices too!

This post shows you how to label hundreds of thousands of images in an afternoon. You can use the same approach whether you are labeling images or labeling traditional tabular data (e.g., identifying cyber security attacks or potential part failures).

I’m contemplating the idea of teaching a course on simulation next fall, so I have been exploring various topics that I might include. (If anyone has great ideas either because you have taught such a course or taken one, definitely drop me a note.) Monte Carlo (MC) simulation is an obvious one. I like the idea of talking about importance sampling, because it sheds light on the idea that not all MC simulations are created equal. I thought I’d do a brief blog to share some code I put together that demonstrates MC simulation generally, and shows how importance sampling can be an improvement.

Microsoft R Open (MRO), Microsoft’s enhanced distribution of open source R, has been upgraded to version 3.4.3 and is now available for download for Windows, Mac, and Linux. This update upgrades the R language engine to the latest R (version 3.4.3) and updates the bundled packages (specifically: checkpoint, curl, doParallel, foreach, and iterators) to new versions. MRO is 100% compatible with all R packages. MRO 3.4.3 points to a fixed CRAN snapshot taken on January 1, 2018, and you can see some highlights of new packages released since the prior version of MRO on the Spotlights page. As always, you can use the built-in checkpoint package to access packages from an earlier date (for reproducibility) or a later date (to access new and updated packages).

Making deep learning simple and accessible to enterprises: Polyaxon aims to be an enterprise-grade open source platform for building, training, and monitoring large scale deep learning applications.
It includes an infrastructure, a set of tools, proven algorithms, and industry models to enable your organization to innovate faster. Polyaxon is platform-agnostic with no lock-in. You keep full ownership and control of sensitive data on-premise or in the cloud.

When approaching problems with sequential data, such as natural language tasks, recurrent neural networks (RNNs) typically top the choices. While the temporal nature of RNNs is a natural fit for these problems with text data, convolutional neural networks (CNNs), which are tremendously successful when applied to vision tasks, have also demonstrated efficacy in this space. In our LSTM tutorial, we took an in-depth look at how long short-term memory (LSTM) networks work and used TensorFlow to build a multi-layered LSTM network to model stock market sentiment from social media content. In this post, we will briefly discuss how CNNs are applied to text data while providing some sample TensorFlow code to build a CNN that can perform binary classification tasks similar to our stock market sentiment model.

### Book Memo: “Stochastic Modelling in Production Planning”

Methods for Improvement and Investigations on Production System Behaviour
Alexander Hübl develops models for production planning and analyzes performance indicators to investigate production system behaviour. He extends the existing literature by considering the uncertainty of customer required lead time and processing times, as well as by increasing the complexity of multi-machine multi-item production models. Results are, on the one hand, a decision support system for determining capacity and the further development of the production planning method Conwip. On the other hand, the author develops the JIT intensity and analytically proves the effects of dispatching rules on production lead time.

### The difference between me and you is that I’m not on fire

“Eat what you are while you’re falling apart and it opened a can of worms. The gun’s in my hand and I know it looks bad, but believe me I’m innocent.” – Mclusky

While the next episode of Madam Secretary buffers on terrible hotel internet, I (the other other white meat) thought I’d pop in to say a long, convoluted hello. I’m in New York this week visiting Andrew and the Stan crew (because it’s cold in Toronto and I somehow managed to put all my teaching on Mondays. I’m Garfield without the spray tan.). So I’m in a hotel on the Upper West Side (or, like, maybe the upper upper west side. I’m in the 100s. Am I in Harlem yet? All I know is that I’m a block from my favourite bar [which, as a side note, Aki does not particularly care for] where I am currently not sitting and writing this because last night I was there reading a book about the rise of the surprisingly multicultural anti-immigration movement in Australia and, after asking what my book was about, some bloke started asking me for my genealogy and “how Australian I am” and really I thought that it was both a bit much and a serious misunderstanding of what someone who is reading a book with headphones on was looking for in a social interaction.) going through the folder of emails I haven’t managed to answer in the last couple of weeks looking for something fun to pass the time. And I found one. Ravi Shroff from the Department of Applied Statistics, Social Science and Humanities at NYU (side note: applied statistics gets short shrift in a lot of academic stats departments around the world, which is criminal. So I will always love a department that leads with it in the title.
I’ll also say that my impression when I wandered in there for a couple of hours at some point last year was that, on top of everything else, this was an uncommonly friendly group of people. Really, it’s my second favourite statistics department in North America, obviously after Toronto, who agreed to throw a man into a volcano every year as part of my startup package after I got really into both that Tori Amos album from 1996 and cultural appropriation. Obviously I’m still processing the trauma of being 11 in 1996 and singularly unable to sacrifice any young men to the volcano goddess.) sent me an email a couple of weeks ago about constructing interpretable decision rules. (Meta-structural diversion: I started writing this with the new year, new me idea that every blog post wasn’t going to devolve into, say, 500 words on how Medúlla is Björk’s Joanne, but that resolution clearly lasted for less time than my tenure as an Olympic torch relay runner. But if you’ve not learnt to skip the first section of my posts by now, clearly reinforcement learning isn’t for you.)

#### To hell with good intentions

Ravi sent me his paper Simple rules for complex decisions by Jongbin Jung, Connor Concannon, Ravi Shroff, Sharad Goel and Daniel Goldstein, and it’s one of those deals where the title really does cover the content. This is my absolute favourite type of statistics paper: it eschews the bright shiny lights of ultra-modern methodology in favour of the much harder road of taking a collection of standard tools and shaping them into something completely new. Why do I prefer the latter? Well it’s related to the age old tension between “state-of-the-art” methods and “stuff-people-understand” methods. The latter are obviously preferred as they’re much easier to push into practice. This is in spite of the former being potentially hugely more effective. Practically, you have to balance “black box performance” with “interpretability”. Where you personally land on that Pareto frontier is between you and your volcano goddess. This paper proposes a simple decision rule for binary classification problems and shows fairly convincingly that it can be almost as effective as much more complicated classifiers.

#### There ain’t no fool in Ferguson

The paper proposes a Select-Regress-and-Round method for constructing decision rules that works as follows:

1. Select a small number $k$ of features $\mathbf{x}$ that will be used to build the classifier.
2. Regress: Use a logistic-lasso to estimate the classifier $h(\mathbf{x}) = (\mathbf{x}^T\mathbf{\beta} \geq 0 \text{ ? } 1 \text{ : } 0)$.
3. Round: Choose $M$ possible levels of effect and build weights $w_j = \text{Round} \left( \frac{M \beta_j}{\max_i|\beta_i|}\right)$.

The new classifier (which chooses between options 1 and 0) selects 1 if $\sum_{j=1}^k w_j x_j > 0$. In the paper they use $k=10$ features and $M = 3$ levels. To interpret this classifier, we can consider each level as a discrete measure of importance. For example, when we have $M=3$ we have seven levels of importance from “very high negative effect”, through “no effect”, to “very high positive effect”. In particular

• $w_j=0$: The $j$th feature has no effect
• $w_j= \pm 1$: The $j$th feature has a low effect (positive or negative)
• $w_j = \pm 2$: The $j$th feature has a medium effect (positive or negative)
• $w_j = \pm 3$: The $j$th feature has a high effect (positive or negative).
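As a rough sketch of the regress-and-round steps in R (this is my toy version using glmnet, not the authors’ code; it assumes the $k$ features have already been selected into a numeric matrix X with a binary outcome y, and that at least one lasso coefficient is non-zero):

```r
library(glmnet)

select_regress_round <- function(X, y, M = 3) {
  # Regress: logistic lasso on the k pre-selected features
  fit  <- cv.glmnet(X, y, family = "binomial", alpha = 1)
  beta <- as.matrix(coef(fit, s = "lambda.min"))[-1, 1]   # drop the intercept
  # Round: map each coefficient onto the integers -M, ..., 0, ..., M
  w <- round(M * beta / max(abs(beta)))
  # The resulting rule: predict 1 when the weighted sum of features is positive
  predict_rule <- function(x_new) as.integer(x_new %*% w > 0)
  list(weights = setNames(w, colnames(X)), predict = predict_rule)
}

# Hypothetical usage:
# rule <- select_regress_round(X, y, M = 3)
# rule$weights          # the interpretable -3..3 weights
# rule$predict(X[1, ])  # decision for one case
```

How the $k$ features get selected in the first place, and how the evaluation is done, is where the paper’s actual details matter.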
A couple of key things here make this idea work. Firstly, the initial selection phase allows people to “sense check” the initial group of features while also forcing the decision rule to only depend on a small number of features, which greatly improves the ability for people to interpret the rule. The regression phase then works out which of those features are actually used (the number of active features can be less than $k$), and the final rounding phase gives a qualitative weight to each feature. This is a transparent way of building a decision rule, as the effect of each feature used to make the decision is clearly specified. But does it work?

#### She will only bring you happiness

The most surprising thing in this paper is that this very simple strategy for building a decision rule works fairly well. Probably unsurprisingly, complicated, uninterpretable decision rules constructed through random forests typically do work better than this simple decision rule. But the select-regress-round strategy doesn’t do too badly. It might be possible to improve the performance by tweaking the first two steps to allow for some low-order interactions. For binary features, this would allow for classifiers where neither X nor Y is a strong indicator of success, but their co-occurrence (XY) is. Even without this tweak, the select-regress-round classifier performs about as well as logistic regression and logistic lasso models that use all possible features (see the figure from the paper), although it performs worse than the random forest. It also doesn’t appear that the rounding process has too much of an effect on the quality of the classifier.

#### This man will not hang

The substantive example in this paper has to do with whether or not a judge decides to grant bail, where the event you’re trying to predict is a failure to appear at trial. The results in this paper suggest that the select-regress-round rule leads to a consistently lower rate of failure compared to the “expert judgment” of the judges. It also works, on this example, almost as well as a random forest classifier. There’s some cool methodology stuff in here about how to actually build, train, and evaluate classification rules when, for any particular experimental unit (person getting or not getting bail in this case), you can only observe one of the potential outcomes. This paper uses some ideas from the causal analysis literature to work around that problem. I guess the real question I have about this type of decision rule for this sort of example is around how these sorts of decision rules would be applied in practice. In particular, would judges be willing to use this type of system? The obvious advantage of implementing it in practice is that it is data driven and, therefore, the decisions are potentially less likely to fall prey to implicit and unconscious biases. The obvious downside is that I am personally more than the sum of my demographic features (or other measurable quantities) and this type of system would treat me like the average person who shares the $k$ features with me.

### Sketchnotes from TWiML&AI #92: Learning State Representations with Yael Niv

(This article was first published on Shirin's playgRound, and kindly contributed to R-bloggers)

These are my sketchnotes for Sam Charrington’s podcast This Week in Machine Learning and AI about Learning State Representations with Yael Niv: https://twimlai.com/twiml-talk-92-learning-state-representations-yael-niv/ You can listen to the podcast here.
In this interview Yael and I explore the relationship between neuroscience and machine learning. In particular, we discuss the importance of state representations in human learning, some of her experimental results in this area, and how a better understanding of representation learning can lead to insights into machine learning problems such as reinforcement and transfer learning. Did I mention this was a nerd alert show? I really enjoyed this interview and I know you will too. Be sure to send over any thoughts or feedback via the show notes page. https://twimlai.com/twiml-talk-92-learning-state-representations-yael-niv/

### General Linear Models: The Basics

(This article was first published on Bluecology blog, and kindly contributed to R-bloggers)

# General Linear Models: The Basics

General linear models are one of the most widely used statistical tools in the biological sciences. This may be because they are so flexible: they can address many different problems, they provide useful outputs about statistical significance AND effect sizes, and they are easy to run in many common statistical packages. The maths underlying General Linear Models (and Generalized linear models, which are a related but different class of model) may seem mysterious to many, but is actually pretty accessible. You would have learned the basics in high school maths. We will cover some of those basics here.

## Linear equations

As the name suggests, General Linear Models rely on a linear equation, which in its basic form is simply:

$y_i = \alpha + \beta x_i + \epsilon_i$

This is the equation for a straight line, with some error added on. If you aren’t that familiar with mathematical notation, notice a few things: I used normal characters for variables (i.e. things you measure) and Greek letters for parameters, which are estimated when you fit the model to the data. The $y_i$ are your response data; I indexed the $y$ with $i$ to indicate that there are multiple observations. $x_i$ is variously known as a covariate, predictor variable or explanatory variable. $\alpha$ is an intercept that will be estimated. $\alpha$ has the same units as $y$ (e.g. if $y$ is a number of animals, then $\alpha$ is the expected number of animals when $x = 0$). $\beta$ is a slope parameter that will also be estimated. $\beta$ is also termed the effect size because it measures the effect of $x$ on $y$. $\beta$ has units of ‘$y$ per $x$’. For instance, if $x$ is temperature, then $\beta$ has units of number of animals per degree C. $\beta$ thus measures how much we expect $y$ to change if $x$ were to increase by 1. Finally, don’t forget $\epsilon_i$, which is the error. $\epsilon_i$ measures the distance between each prediction of $y_i$ made by the model and the observed value of $y_i$. These predictions are simply calculated as $y_i = \alpha + \beta x_i$ (notice I just removed the $\epsilon_i$ from the end). You can think of the linear predictions as the mean or ‘expected’ value a new observation $y_i$ would take if we only knew $x_i$, and also as the ‘line of best fit’.

## Simulating ideal data for a general linear model

Now we know the model, we can generate some idealized data. Hopefully this will then give you a feel for how we can fit a model to data.
Open up R and we will create these parameters: n <- 100 beta <- 2.2 alpha <- 30 Where n is the sample size and alpha and beta are as above. We also need some covariate data; we will just generate a sequence of n numbers from 0 to 1: x <- seq(0, 1, length.out = n) The model’s expectation is thus this straight line: y_true <- beta * x + alpha plot(x, y_true) Because we made the model up, we can say this is the true underlying relationship. Now we will add error to it and see if we can recover that relationship with a general linear model. Let’s generate some error: sigma <- 2.4 set.seed(42) error <- rnorm(n, sd = sigma) y_obs <- y_true + error plot(x, y_obs) lines(x, y_true) Here sigma is our standard deviation, which measures how much the observations y vary around the true relationship. We then used rnorm to generate n random normal numbers, which we just add to our predicted line y_true to simulate observing this relationship. Congratulations, you just created a (modelled) reality and simulated an ecologist going out and measuring that reality. Note the set.seed() command. This just ensures the random number generator produces the same set of numbers every time it is run in R and it is good practice to use it (so your code is repeatable). Here is a great explanation of seed setting and why 42 is so popular. Also, check out the errors: hist(error) Looks like a normal distribution hey? That’s because we generated them from a normal distribution. That was a handy trick, because the basic linear model assumes the errors are normally distributed (but not necessarily the raw data). Also note that sigma is constant (e.g. it doesn’t get larger as x gets larger). That is another assumption of basic linear models called ‘homogeneity of variance’.

## Fitting a model

To fit a basic linear model in R we can use the lm() function: m1 <- lm(y_obs ~ x) It takes a formula argument, which simply says here that y_obs depends on (the tilde ~) x. R will do all the number crunching to estimate the parameters now. To see what it came up with try: coef(m1) ## (Intercept) x ## 30.163713 2.028646 This command tells us the estimate of the intercept ((Intercept)) and the slope on x under x. Notice they are close to, but not exactly the same as, alpha and beta. So the model has done a pretty decent job of recovering our original process. The reason the values are not identical is that we simulated someone going and measuring the real process with error (that was when we added the normal random numbers). We can get slightly more details about the model fit like this: summary(m1) ## ## Call: ## lm(formula = y_obs ~ x) ## ## Residuals: ## Min 1Q Median 3Q Max ## -7.2467 -1.5884 0.1942 1.5665 5.3433 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 30.1637 0.4985 60.503 <2e-16 *** ## x 2.0286 0.8613 2.355 0.0205 * ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 2.511 on 98 degrees of freedom ## Multiple R-squared: 0.05357, Adjusted R-squared: 0.04391 ## F-statistic: 5.547 on 1 and 98 DF, p-value: 0.0205 I’m not going to go overboard with explaining this output now, but notice a few key things. With the summary, we get standard errors for the parameter estimates (which are a measure of how much they might vary). Also notice the R-squared, which can be handy. Finally, notice that the Residual standard error is close to the value we used for sigma, which is because it is an estimate of sigma from our simulated data.
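As a quick extra check (a minimal sketch, assuming the m1, x, y_obs and y_true objects created above are still in your workspace), you can also look at interval estimates for the parameters and compare the fitted line with the true line we simulated:

```r
confint(m1)                  # interval estimates for the intercept (alpha) and slope (beta)

plot(x, y_obs)
lines(x, y_true, lwd = 2)    # the true relationship we simulated
abline(m1, lty = 2)          # the fitted line of best fit
legend("topleft", legend = c("true", "fitted"), lty = c(1, 2))
```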
Your homework is to play around with the model and sampling process. Try changing alpha, beta, n and sigma, then refit the model and see what happens.

## Final few points

So did you do the homework? If you did, well done, you just performed a simple power analysis (in the broad sense). In a more formal power analysis (which is what you might have come across previously) you could systematically vary n or beta across, say, 1000 randomised data-sets and then calculate the proportion of those data-sets in which your p-value was ‘significant’ (e.g. less than a critical threshold like the ever-popular 0.05); a minimal sketch of that loop is included at the end of this post. This number tells you how good you are at detecting ‘real’ effects. Here’s a great intro to power analysis in the broad sense: Bolker, Ecological Models and Data in R. One more point. Remember what we said above about some ‘assumptions’? Well, we can check those in R quite easily: plot(m1, 1) This shows a plot of the residuals (A.K.A. errors) versus the predicted values. We are looking for ‘heteroskedasticity’, which is a fancy way of saying the errors aren’t equal across the range of predictions (remember I said sigma is a constant?). Another good plot: plot(m1, 2) Here we are looking for deviations of the points from the line. Points on the line mean the errors are approximately normally distributed, which was a key assumption. Points far from the line could indicate the errors are skewed left or right, too fat in the middle, or too skinny in the middle. More on that issue here.

## The end

So the basics might belie the true complexity of situations we can address with General Linear Models and their relatives, Generalized Linear Models. But, just to get you excited, here are a few things you can do by adding more terms to the right hand side of the linear equation:

1. Model multiple, interacting covariates.
2. Include factors as covariates (instead of continuous variables). Got a factor and a continuous variable? Don’t bother with the old-school ANCOVA method, just use a linear model.
3. Include a spline to model non-linear effects (that’s a GAM).
4. Account for hierarchies in your sampling, like transects sampled within sites (that’s a mixed effects model).
5. Account for spatial or temporal dependencies.
6. Model varying error variance (e.g. when the variance increases with the mean).

You can also change the left-hand side, so that it no longer assumes normality (then that’s a Generalized Linear Model). Or even add chains of models together to model pathways of cause and effect (that’s a ‘path analysis’ or ‘structural equation model’). If this taster has left you keen to learn more, then check out any one of the zillion online courses or books on GLMs with R, or if you can get to Brisbane, come to our next course (which as of writing was in Feb 2018, but we do them regularly). Now you know the basics, practice, practice, practice and pretty soon you will be running General Linear Models behind your back while you watch your 2 year old, which is what I do for kicks.
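To make that power-analysis homework concrete, here is a minimal sketch of the loop, assuming the alpha, beta, sigma, n and x defined earlier in the post are still in your workspace:

```r
n_sims <- 1000
p_vals <- replicate(n_sims, {
  y_sim <- alpha + beta * x + rnorm(n, sd = sigma)          # simulate a new data-set
  summary(lm(y_sim ~ x))$coefficients["x", "Pr(>|t|)"]      # p-value for the slope
})
mean(p_vals < 0.05)   # proportion of simulated data-sets that detect the effect
```

Varying beta or n inside this loop traces out the power curve described above.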
### Mapping a list of functions to a list of datasets with a list of columns as arguments

(This article was first published on Econometrics and Free Software, and kindly contributed to R-bloggers)

This week I had the opportunity to teach R at my workplace, again. This course was the “advanced R” course, and unlike the one I taught at the end of last year, I had one more day (so 3 days in total) where I could show my colleagues the joys of the tidyverse and R. To finish the section on programming with R, which was the very last section of the whole 3-day course, I wanted to blow their minds; I had already shown them packages from the tidyverse in the previous days, such as dplyr, purrr and stringr, among others. I taught them how to use ggplot2, broom and modelr. They also liked janitor and rio very much. I noticed that it took them a bit more time and effort to digest purrr::map() and purrr::reduce(), but they all seemed to see how powerful these functions were. To finish on a very high note, I showed them the ultimate purrr::map() use case.

Consider the following; imagine you have a situation where you are working on a list of datasets. These datasets might be the same, but for different years, or for different countries, or they might be completely different datasets entirely. If you used rio::import_list() to read them into R, you will have them in a nice list. Let’s consider the following list as an example: library(tidyverse) data(mtcars) data(iris) data_list = list(mtcars, iris) I made the choice to have completely different datasets. Now, I would like to map some functions to the columns of these datasets. If I only worked on one, for example on mtcars, I would do something like: my_summarise_f = function(dataset, cols, funcs){ dataset %>% summarise_at(vars(!!!cols), funs(!!!funcs)) } And then I would use my function like so: mtcars %>% my_summarise_f(quos(mpg, drat, hp), quos(mean, sd, max)) ## mpg_mean drat_mean hp_mean mpg_sd drat_sd hp_sd mpg_max drat_max ## 1 20.09062 3.596563 146.6875 6.026948 0.5346787 68.56287 33.9 4.93 ## hp_max ## 1 335 my_summarise_f() takes a dataset, a list of columns and a list of functions as arguments and uses tidy evaluation to apply mean(), sd(), and max() to the columns mpg, drat and hp of mtcars. That’s pretty useful, but not useful enough! Now I want to apply this to the list of datasets I defined above. For this, let’s define the list of columns I want to work on: cols_mtcars = quos(mpg, drat, hp) cols_iris = quos(Sepal.Length, Sepal.Width) cols_list = list(cols_mtcars, cols_iris) Now, let’s use some purrr magic to apply the functions I want to the columns I have defined in cols_list: map2(data_list, cols_list, my_summarise_f, funcs = quos(mean, sd, max)) ## [[1]] ## mpg_mean drat_mean hp_mean mpg_sd drat_sd hp_sd mpg_max drat_max ## 1 20.09062 3.596563 146.6875 6.026948 0.5346787 68.56287 33.9 4.93 ## hp_max ## 1 335 ## ## [[2]] ## Sepal.Length_mean Sepal.Width_mean Sepal.Length_sd Sepal.Width_sd ## 1 5.843333 3.057333 0.8280661 0.4358663 ## Sepal.Length_max Sepal.Width_max ## 1 7.9 4.4 That’s pretty useful, but not useful enough! I also want to use different functions for different datasets!
Well, let’s define a list of functions then: funcs_mtcars = quos(mean, sd, max) funcs_iris = quos(median, min) funcs_list = list(funcs_mtcars, funcs_iris) Because there is no map3(), we need to use pmap(): pmap( list( dataset = data_list, cols = cols_list, funcs = funcs_list ), my_summarise_f) ## [[1]] ## mpg_mean drat_mean hp_mean mpg_sd drat_sd hp_sd mpg_max drat_max ## 1 20.09062 3.596563 146.6875 6.026948 0.5346787 68.56287 33.9 4.93 ## hp_max ## 1 335 ## ## [[2]] ## Sepal.Length_median Sepal.Width_median Sepal.Length_min Sepal.Width_min ## 1 5.8 3 4.3 2 Now I’m satisfied! Let me tell you, this blew their minds! To be able to use things like that, I told them to always solve a problem for a single example, and from there, try to generalize their solution using functional programming tools found in purrr. If you found this blog post useful, you might want to follow me on twitter for blog post updates.

### SatRday in South Africa

(This article was first published on R on The Jumping Rivers Blog, and kindly contributed to R-bloggers)

Jumping Rivers is proud to be sponsoring the upcoming SatRday conference in Cape Town, South Africa on 17th March 2018.

## What is SatRday?

SatRdays are a collection of free/cheap accessible R conferences organised by members of the R community at various locations across the globe. Each SatRday looks to provide talks and/or workshops by R programmers covering the language and its applications, and is run as a not-for-profit event. They provide a great place to meet like-minded people, be it researchers, data scientists, developers or enthusiasts, to discuss your passion for R programming.

## SatRday in Cape Town

This year’s SatRday in Cape Town has a collection of workshops on the days running up to the conference on the Saturday. For more detailed information concerning speakers, workshop topics and registration head on over to http://capetown2018.satrdays.org/.

## Be in it to win it

In addition to sponsoring the conference this year, Jumping Rivers is also giving you the chance to win a free ticket. To be in with a chance just respond to the tweet below:

## January 18, 2018

### The Generalization Mystery: Sharp vs Flat Minima

I set out to write about the following paper I saw people talk about on twitter and reddit: It's related to this pretty insightful paper: Inevitably, I started thinking more generally about flat and sharp minima and generalization, so rather than describing these papers in detail, I ended up dumping some thoughts of my own. Feedback and pointers to literature are welcome, as always.

#### Summary of this post

• Flatness of minima is hypothesized to have something to do with generalization in deep nets.
• As Dinh et al (2017) show, flatness is sensitive to reparametrization and thus cannot predict generalization ability alone.
• Li et al (2017) use a form of parameter normalization to make their method more robust to reparametrization and produce some fancy plots comparing deep net architectures.
• While this analysis is now invariant to the particular type of reparametrizations considered by Dinh et al, it may still be sensitive to other types of invariances, so I'm not sure how much to trust these plots and conclusions.
• Then I go back to square one and ask how one could construct indicators of generalization that are invariant by construction, for example by considering ratios of flatness measures.
• Finally, I have a go at developing a local measure of generalization from first principles. The resulting metric depends on the data and on statistical properties of gradients calculated from different minibatches.

## Flatness, Generalization and SGD

The loss surface of deep nets tends to have many local minima. Many of these might be equally good in terms of training error, but they may have widely different generalization performance, i.e. a network with minimal training loss might perform very well, or very poorly, on a held-out test set. Interestingly, stochastic gradient descent (SGD) with small batch sizes appears to locate minima with better generalization properties than large-batch SGD. So the big question is: what measurable property of a local minimum can we use to predict generalization properties? And how does this relate to SGD? There is speculation dating back to at least Hochreiter and Schmidhuber (1997) that the flatness of the minimum is a good measure to look at. However, as Dinh et al (2017) pointed out, flatness is sensitive to reparametrizations of the neural network: we can reparametrize a neural network without changing its outputs while making sharp minima look arbitrarily flat and vice versa. As a consequence, flatness alone cannot explain or predict good generalization. Li et al (2017) proposed a normalization scheme which scales the space around a minimum in such a way that the apparent flatness in 1D and 2D plots is kind of invariant to the type of reparametrization Dinh et al used. This, they say, allows us to produce more faithful visualizations of the loss surfaces around a minimum. They even use 1D and 2D plots to illustrate differences between different architectures, such as a VGG and a ResNet. I personally do not buy the conclusions of this paper, and it seems the reviewers of the ICLR submission largely agreed on this. The proposed method is weakly motivated and only addresses one possible type of reparametrization.

## Contrastive Flatness measures

Following the thinking by Dinh et al, if generalization is a property which is invariant under reparametrization, the quantity we use to predict generalization should also be invariant. My intuition is that a good way to achieve invariance is to consider the ratio between two quantities - maybe two flatness measures - which are affected by reparametrization in the same way. One thing I think would make sense to look at is the average flatness of the loss in a single minibatch vs the flatness of the average loss. Why would this make sense? The average loss can be flat around a minimum in different ways: it can be flat because it is the average of flat functions which all look very similar and whose minima are very close to the same location; or it can be flat because it is the average of many sharp functions with minima at locations scattered around the minimum of the average.
Intuitively, the former solution is more stable with respect to subsampling of the data, therefore it should be more favourable from a generalization viewpoint. The latter solution is very sensitive to which particular minibatch we are looking at, so presumably it may give rise to worse generalization. As a conclusion of this section, I don't think it makes sense to look at only the flatness of the average loss; looking at how that flatness is affected by subsampling the data somehow feels more key to generalization.

## A local measure of generalization

After Jorge Nocedal's ICLR talk on large-batch SGD, Léon Bottou had a comment which I think hit the nail on its head. The process of sampling minibatches from training data kind of simulates the effect of sampling the training set and the test set from some underlying data distribution. Therefore, you might think of generalization from one minibatch to another as a proxy for how well a method would generalize from a training set to a test set. How can we use this insight to come up with some sort of measure of generalization based on minibatches, especially along the lines of sharpness or local derivatives? First of all, let's consider the stochastic process $f(\theta)$ which we obtain by evaluating the loss function on a random minibatch. The randomness comes from subsampling the data. This is a probability distribution over loss functions over $\theta$. I think it's useful to seek an indicator of generalization ability as a local property of this stochastic process at any given $\theta$ value. Let's pretend for a minute that each draw $f(\theta)$ from this process is convex or at least has a unique global minimum. How would one describe a model's generalization from one minibatch to another in terms of this stochastic process? Let's draw two functions $f_1(\theta)$ and $f_2(\theta)$ independently (i.e. evaluate the loss on two separate minibatches). I propose that the following would be a meaningful measure: $$R = f_2 (\operatorname{argmin}_\theta f_1(\theta)) - \min_\theta f_2(\theta)$$ Basically: you care about finding low error according to $f_2$ but all you have access to is $f_1$. You therefore look at what the value of $f_2$ is at the location of the minimum of $f_1$ and compare that to the global minimal value of $f_2$. This is a sort of regret expression, hence my use of $R$ to denote it. Now, in deep learning the loss functions $f_1$ and $f_2$ are not convex and have many local minima, so this definition is not particularly useful in general. However, it makes sense to calculate this value locally, in a small neighbourhood of a particular parameter value $\theta$. Let's consider fitting a restricted neural network model, where only parameters within a certain $\epsilon$ distance from $\theta$ are allowed. If $\epsilon$ is small enough, we can assume the loss functions have a unique global minimum within this $\epsilon$-ball. Furthermore, if $\epsilon$ is small enough, one can use a first-order Taylor approximation to $f_1$ and $f_2$ to analytically find approximate minima within the $\epsilon$-ball. To do this, we just need to evaluate the gradient at $\theta$. This is illustrated in the figure below: The left-hand panel shows an imaginary loss function evaluated on some minibatch $f_1$, restricted to the $\epsilon$-ball around $\theta$. We can assume $\epsilon$ is small enough so $f_1$ is linear within this local region.
Unless the gradient is exactly $0$, the minimum will fall on the surface of the $\epsilon$-ball, exactly at $\theta - \epsilon \frac{g_1}{\|g_1\|}$ where $g_1$ is the gradient of $f_1$ at $\theta$. This is shown by the yellow star. On the right-hand panel I show $f_2$. This is also locally linear, but its gradient $g_2$ might be different. The minimum of $f_2$ within the $\epsilon$-ball is at $\theta - \epsilon \frac{g_2}{\|g_2\|}$, shown by the red star. We can consider the regret-type expression as above, by evaluating $f_2$ at the yellow star, and subtracting its value at the red star. This can be expressed as follows (I divided by $\epsilon$): $$\frac{R(\theta, f_1, f_2)}{\epsilon} \rightarrow - \frac{g_2^\top g_1}{\|g_1\|} + \frac{g_2^\top g_2}{\|g_2\|} = \|g_2\| - \frac{g_2^\top g_1}{\|g_1\|} = \|g_2\|(1 - \cos(g_1, g_2))$$ In practice one would consider taking an expectation with respect to the two minibatches to obtain an expression that depends on $\theta$. So, we have just come up with a local measure of generalization ability, which is expressed in terms of expectations involving gradients over different minibatches. The measure is local as it is specific to each value of $\theta$. It is data-dependent in that it depends on the distribution $p_\mathcal{D}$ from which we sample minibatches. This measure depends on two things:

• the expected similarity of gradients which come from different minibatches: $1 - \cos(g_1, g_2)$ looks at whether various minibatches of data push $\theta$ in similar directions. In regions where the gradients are sampled from a mostly spherically symmetric distribution, this term would be close to $1$ most of the time.
• the magnitude of the gradients $\|g_2\|$. Interestingly, one can express this as $\sqrt{\operatorname{trace}\left(g_2 g_2^\top\right)}$. When we take the expectation over this, assuming that the cosine similarity term is mostly $1$, we end up with the expression $\mathbb{E}_g \sqrt{\operatorname{trace}\left(g g^\top\right)}$ where the expectation is taken over minibatches. Note that the trace-norm of the empirical Fisher information matrix, $\sqrt{ \operatorname{trace}\, \mathbb{E}_g \left(g g^\top\right)}$, can be used as a measure of flatness of the average loss around minima, so there may be some interesting connections there. However, due to Jensen's inequality the two things are not actually the same.

Update - thanks to reddit user bbsome for pointing this out: Note that R is not invariant under reparametrization either. The source of this sensitivity is the fact that I considered an $\epsilon$-ball in Euclidean norm around $\theta$. The right way to get rid of this is to consider an $\epsilon$-ball using the symmetrized KL divergence instead of the Euclidean norm, similarly to how natural gradient methods can be derived. If we do this, the formula becomes dependent only on the functions the neural network implements, not on the particular choice of parametrization. I leave it as homework for people to work out how this would change the formulae.
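As a toy illustration of the quantity itself (a minimal sketch; the gradient vectors here are made up rather than computed from a real network), $\|g_2\|(1 - \cos(g_1, g_2))$ can be evaluated for a pair of minibatch gradients like this:

```r
# Local generalization measure ||g2|| * (1 - cos(g1, g2)) for two gradient vectors
local_regret_rate <- function(g1, g2) {
  cos_sim <- sum(g1 * g2) / (sqrt(sum(g1^2)) * sqrt(sum(g2^2)))
  sqrt(sum(g2^2)) * (1 - cos_sim)
}

set.seed(1)
g1 <- rnorm(10)
g2 <- g1 + rnorm(10, sd = 0.1)   # gradients from two similar minibatches -> small value
g3 <- rnorm(10)                  # gradients from unrelated minibatches   -> larger value

local_regret_rate(g1, g2)
local_regret_rate(g1, g3)
```

Averaging this over many pairs of minibatches at a given $\theta$ gives the expectation referred to above.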
# Summary

This post started out as a paper review, but in the end I didn't find the paper too interesting and instead resorted to sharing ideas about tackling the generalization puzzle a bit differently. It's entirely possible that people have done this analysis before, or that it's completely useless. In any case, I welcome feedback. The first observation here was that a good indicator may involve not just the flatness of the average loss around the minimum, but a ratio between two flatness indicators. Such metrics may end up invariant under reparametrization by construction. Taking this idea further I attempted to develop a local indicator of generalization performance which goes beyond flatness. It also includes terms that measure the sensitivity of gradients to data subsampling. Because data subsampling is something that occurs both in generalization (training vs test set) and in minibatch-SGD, it may be possible that these kinds of measures might shed some light on how SGD enables better generalization.

### JupyterCon 2018: Call For Proposal

Dear fellow Jovyans,

It is with great pleasure that we are opening the Call For Proposals (CFP) for JupyterCon 2018! Last August, Project Jupyter, the NumFOCUS Foundation, and O’Reilly Media came together to host JupyterCon 2017. For its first offering we attracted over 700 attendees and 23 scholarship recipients across 4 days of talks and tutorials, with access to 5 parallel session tracks totaling 11 keynotes, 55 talks, 8 tutorials, and 2 training courses. In addition, Community Day, open to everyone, was held at the end of the conference. The conference also featured 33 poster sessions as a starting point for further discussion and was a huge success. Videos of the event have been made available on Safari Online and YouTube.

### JupyterCon 2018, CFP Open

JupyterCon 2017 was a huge success and we’ve been working hard since then to make JupyterCon 2018 even better. It will be held in New York City in August from Tuesday the 21st to Friday the 24th. We’ll also host an open Community Day on August 25th, which will be open to everyone. Today we are happy to open the conference website and open the Call For Proposals, with submissions due by early March. A couple of changes have been made to the CFP since last year. In particular, if your talk is not accepted, you can ask us to automatically consider the proposal for the poster session. We encourage you to submit a proposal, and reach out to us if you have any questions. We’ll do our best to help you and give you feedback on your proposal. Like last year, we will have diversity and student scholarships available; further information will be provided on the website. We also encourage you to follow the JupyterCon Twitter account for announcements or corrections.

### Community Day

The final day of JupyterCon 2017 was a blast, with a large number of people making their first contribution to the Jupyter codebase, to the documentation, editing the wiki, or deploying it in the cloud. During the conference days, a separate room was also reserved for user testing of different Jupyter software, which proved to be a fantastic source of feedback for User Experience (UX) and for driving various Jupyter tools forward. We are happy to offer this “Community Day” experience again. At JupyterCon 2017, the Saturday was branded “Sprints” with the connotation of a code-centric experience. While we’re happy to see users coming to “Sprint” on code, we want to let you know that the Community Day will be open to anyone. Whether you are a teacher, coder, researcher, or user of Jupyter, the Community Day will have something for you. The Community Day is not limited to attendees of the main JupyterCon event, and it’s intended to be a “grass-roots” celebration of Jupyter and its community. We hope to see you at JupyterCon 2018.
### Thanks

JupyterCon 2018 would not be possible without O’Reilly Media, NumFOCUS, as well as our sponsors. JupyterCon 2018: Call For Proposal was originally published in Jupyter Blog on Medium.

### Registration and talk proposals now open for useR!2018

(This article was first published on Revolutions, and kindly contributed to R-bloggers)

Registration is now open for useR! 2018, the official R user conference to be held in Brisbane, Australia July 10-13. If you haven't been to a useR! conference before, it's a fantastic opportunity to meet and mingle with other R users from around the world, see talks on R packages and applications, and attend tutorials for deep dives on R-related topics. This year's conference will also feature keynotes from Jenny Bryan, Steph De Silva, Heike Hofmann, Thomas Lin Pedersen, Roger Peng and Bill Venables. It's my favourite conference of the year, and I'm particularly looking forward to this one. This video from last year's conference in Brussels (a sell-out with over 1,100 attendees) will give you a sense of what a useR! conference is like: The useR! conference is brought to you by the R Foundation and is 100% community-led. That includes the content: the vast majority of talks come directly from R users. If you've written an R package, performed an interesting analysis with R, or simply have something to share of interest to the R community, consider proposing a talk by submitting an abstract. Most talks are 20 minutes, but you can also propose a 5-minute lightning talk or a poster. If you're not sure what kind of talk you might want to give, check out the program from useR!2017 for inspiration. R-Ladies, which promotes gender diversity in the R community, can also provide guidance on abstracts. Note that all proposals must comply with the conference code of conduct. Early-bird registrations close on March 15, and while general registration will be open until June, my advice is to get in early, as this year's conference is likely to sell out once again. If you want to propose a talk, submissions are due by March 2 (but early submissions have a better chance of being accepted). Follow the links below to register or submit an abstract, and I look forward to seeing you in Brisbane! useR! 2018: Registration; Abstract submission

### How to Write a Bootcamp Review that Actually Helps People

Editor's note: This post was written as part of a collaboration with SwitchUp, an online platform for researching and reviewing technology learning programs. Erica Freedman is a Content and Client Services Specialist at SwitchUp.

Data Science is a rapidly growing industry. From university programs to week-long cohorts, it can be difficult to decide where to start. Much like the “Best coffee in all of America” sign in your local diner’s window, every boot camp or school website tells you they are the best in the game. Based on SwitchUp’s research, “there are currently over 120 in-person bootcamps and hundreds of part-time and online programs available worldwide.” While choice can be good, it can also be daunting. How can you be sure you’re picking the right program?
For quality control, students have taken to using reviews and ratings from graduates to eliminate the less-than-satisfactory programs saturating the market. Detailed reviews take students beyond marketing materials or publicity, and provide valuable first-hand experience. On-the-ground perspectives are often a deciding factor when students are looking to change careers. They can help students understand the big picture, from the beginning of their research through to a career in tech. If you are a bootcamp grad (or soon-to-be grad), your perspective can help “pay it forward” to the next cohort of students, and give your school helpful feedback as well. Think back to when you were trying to find the best program possible and write a review from that perspective. What do you wish you had seen or heard before entering a bootcamp? We suggest the following tips to write a review that is valuable to future students.

## Weigh the Pros and Cons

Even if your bootcamp was the most perfect experience of your life, there is always room for improvement. Do researching students a favor and cover the positive aspects of your program experience while balancing this out with constructive criticism. This feedback not only helps those looking to join the school, but also the school itself. At SwitchUp, we’ve found that prospective students are most interested in the quality of the curriculum, teaching staff, and job support, so be sure to mention your thoughts on these areas. If your school has multiple campuses then you’ll want to list the campus you attended, as these variables change from campus to campus.

## Talk About Your Complete Experience: Before, During, and After The Bootcamp

Have you ever seen a review that says, “It was great!” or “I hated it”? Although these are technically reviews, neither is helpful to prospective students. What made the bootcamp great? Was it the teachers? The length of the courses? The location of the campus? There are so many variables to consider when thinking about the application process straight through to a job offer. As you write a review, include how the program helped you to become immersed in the world of Data Science as well as how it helped you succeed after graduation. For example: Did the pre-work give you a useful introduction to the Data Science industry? Did career services help you ace an interview with your dream company? The complete picture will show future bootcampers how the program can help them both learn to code and meet their career goals. SwitchUp has interviewed a wide range of bootcamp students. What is your story? Maybe you embarked on a career change into Data Science from a completely different background. Or maybe you took a semester off from college to simply gain skills at a bootcamp. Whatever the case may be, your path will show other students what’s possible. This perspective is especially helpful if you do not have a Data Science, Coding or Computer Science background, since many bootcamp students come from different fields. Your story will show future students that as long as they are committed, they too can switch to a tech career.

## Where to write your review

Many bootcamp alumni are choosing to leave reviews on sites like Quora and Medium, or on a review site like SwitchUp. If you are interested in writing a review of Dataquest, check out their SwitchUp reviews page here.
Plus, you will automatically be entered to win one of five $100 Amazon gift cards or one $500 Amazon gift card grand-prize from SwitchUp once you submit a verified review. This sweepstakes ends in March, so get going!

### Document worth reading: “Fairness in Supervised Learning: An Information Theoretic Approach”

Automated decision making systems are increasingly being used in real-world applications. In these systems, for the most part, the decision rules are derived by minimizing the training error on the available historical data. Therefore, if there is a bias related to a sensitive attribute such as gender, race, religion, etc. in the data, say, due to cultural/historical discriminatory practices against a certain demographic, the system could continue discrimination in decisions by including the said bias in its decision rule. We present an information theoretic framework for designing fair predictors from data, which aim to prevent discrimination against a specified sensitive attribute in a supervised learning setting. We use equalized odds as the criterion for discrimination, which demands that the prediction should be independent of the protected attribute conditioned on the actual label. To ensure fairness and generalization simultaneously, we compress the data to an auxiliary variable, which is used for the prediction task. This auxiliary variable is chosen such that it is decontaminated from the discriminatory attribute in the sense of equalized odds. The final predictor is obtained by applying a Bayesian decision rule to the auxiliary variable. Fairness in Supervised Learning: An Information Theoretic Approach

### Are you monitoring your machine learning systems?

How are you monitoring your Python applications? Take the short survey - the results will be published on KDnuggets and you will get all the details.

### Check your prior posterior overlap (PPO) – MCMC wrangling in R made easy with MCMCvis

(This article was first published on R – Lynch Lab, and kindly contributed to R-bloggers)

When fitting a Bayesian model using MCMC (often via JAGS/BUGS/Stan), a number of checks are typically performed to make sure your model is worth interpreting without further manipulation (remember: all models are wrong, some are useful!):

• R-hat (AKA Gelman-Rubin statistic) – used to assess convergence of chains in the model
• Visual assessment of chains – used to assess whether posterior chains mixed well (convergence)
• Visual assessment of posterior distribution shape – used to determine if the posterior distribution is constrained
• Posterior predictive check (predicting data using estimated parameters) – used to make sure that the model can generate the data used in the model

## PPO

One check, however, is often missing: a robust assessment of the degree to which the prior is informing the posterior distribution. Substantial influence of the prior on the posterior may not be apparent through the use of R-hat and visual checks alone. Version 0.9.2 of MCMCvis (now available on CRAN) makes quantifying and plotting the prior posterior overlap (PPO) simple. MCMCvis is an R package designed to streamline analysis of Bayesian model results derived from MCMC samplers (e.g., JAGS, BUGS, Stan). It can be used to easily visualize, manipulate, and summarize MCMC output. The newest version is full of new features – a full tutorial can be found here.

## An example

To check PPO for a model, we will use the function MCMCtrace.
As the function is used to generate trace and density plots, checking for PPO is barely more work than just doing the routine checks that one would ordinarily perform. The function plots trace plots on the left and density plots for both the posterior (black) and prior (red) distributions on the right. The function calculates the percent overlap between the prior and posterior and prints this value on the plot. See ?MCMCtrace in R for details regarding the syntax. #install package install.packages('MCMCvis', repos = "http://cran.case.edu") require(MCMCvis) data(MCMC_data) #simulate data from the prior used in your model #number of iterations should equal the number of draws times the number of chains (although the function will adjust if the correct number of iterations is not specified) #in JAGS: parameter ~ dnorm(0, 0.001) PR <- rnorm(15000, 0, 32) #run the function for just beta parameters MCMCtrace(MCMC_data, params = 'beta', priors = PR, pdf = FALSE) ## Why check? Checking the PPO has particular utility when trying to determine if the parameters in your model are identifiable. If substantial PPO exists, the prior may simply be dictating the posterior distribution – the data may have little influence on the results. If a small degree of PPO exists, the data was informative enough to overcome the influence of the prior. In the field of ecology, nonidentifiability is a particular concern in some types of mark-recapture models. Gimenez (2009) developed quantitative guidelines to determine when parameters are robustly identifiable using PPO. While a large degree of PPO is not always a bad thing (e.g., substantial prior knowledge about the system may result in very informative priors used in the model), it is important to know where data was and was not informative for parameter estimation. The degree of PPO that is acceptable for a particular model will depend on a great number of factors, and may be somewhat subjective (but see Gimenez [2009] for a less subjective case). Like other checks, PPO is just one of many tools to be used for model assessment. Finding substantial PPO when unexpected may suggest that further model manipulation is needed. Happy model building! ## Other MCMCvis improvements Check out the rest of the new package freatures, including the option to calculate the number of effective samples for each parameter, ability to take arguments in the form of a ‘regular expression’ for the params argument, ability to retain the structure of all parameters in model output (e.g., parameters specified as matrices in the model are summarized as matrices). ## Follow Casey Youngflesh on Twitter @caseyyoungflesh. The MCMCvis source code can be found on GitHub. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... ### If you did not already know We present a probabilistic model with discrete latent variables that control the computation time in deep learning models such as ResNets and LSTMs. A prior on the latent variables expresses the preference for faster computation. The amount of computation for an input is determined via amortized maximum a posteriori (MAP) inference. MAP inference is performed using a novel stochastic variational optimization method. 
The recently proposed Adaptive Computation Time mechanism can be seen as an ad-hoc relaxation of this model. We demonstrate training using the general-purpose Concrete relaxation of discrete variables. Evaluation on ResNet shows that our method matches the speed-accuracy trade-off of Adaptive Computation Time, while allowing for evaluation with a simple deterministic procedure that has a lower memory footprint. … Neo4j Neuro-Index The article describes a new data structure called neuro-index. It is an alternative to well-known file indexes. The neuro-index is fundamentally different because it stores weight coefficients in neural network. It is not a reference type like ‘keyword-position in a file’. … ### Personality Tests Are Failing American Workers My newest Bloomberg View article just came out: #### Personality Tests Are Failing American Workers ##### All too often, they filter people out for the wrong reasons. Read all of my Bloomberg View pieces here. ### We were measuring the speed of Stan incorrectly—it’s faster than we thought in some cases due to antithetical sampling Aki points out that in cases of antithetical sampling, our effective sample size calculations were unduly truncated above at the number of iterations. It turns out the effective sample size can be greater than the number of iterations if the draws are anticorrelated. And all we really care about for speed is effective sample size per unit time. NUTS can be antithetical The desideratum for a sampler Andrew laid out to Matt was to maximze expected squared transition distance. Why? Because that’s going to maximize effective sample size. (I still hadn’t wrapped my head around this when Andrew was laying it out.) Matt figured out how to achieve this goal by building an algorithm that simulated the Hamiltonian forward and backward in time at random, doubling the time at each iteration, and then sampling from the path with a preference for the points visited in the final doubling. This tends to push iterations away from their previous values. In some cases, it can lead to anticorrelated chains. Removing this preference for the second half of the chains drastically reduces NUTS’s effectiveness. Figuring out how to include it and satisfy detailed balance was one of the really nice contributions in the original NUTS paper (and implementation). Have you ever seen 4000 as the estimated n_eff in a default Stan run? That’s probably because the true value is greater than 4000 and we truncated it. The fix is in What’s even cooler is that the fix is already in the pipeline and it just happens to be Aki’s first C++ contribution. Here it is on GitHub: Aki’s also done simulations, so the new version is actually better calibrated as far as MCMC standard error goes (posterior standard deviation divided by the square root of the effective sample size). A simple example Consider three Markov processes for drawing a binary sequence y[1], y[2], y[3], …, where each y[n] is in { 0, 1 }. Our target is a uniform stationary distribution, for which each sequence element is marginally uniformly distributed, Pr[y[n] = 0] = 0.5 Pr[y[n] = 1] = 0.5 Process 1: Independent. This Markov process draws each y[n] independently. Whether the previous state is 0 or 1, the next state has a 50-50 chance of being either 0 or 1. 
Here are the transition probabilities: Pr[0 | 1] = 0.5 Pr[1 | 1] = 0.5 Pr[0 | 0] = 0.5 Pr[1 | 0] = 0.5 More formally, these should be written in the form Pr[y[n + 1] = 0 | y[n] = 1] = 0.5 For this Markov chain, the stationary distribution is uniform. That is, some number of steps after initialization, there’s a probability of 0.5 of being in state 0 and a probability of 0.5 of being in state 1. More formally, there exists an m such that for all n > m, Pr[y[n] = 1] = 0.5 The process will have an effective sample size exactly equal to the number of iterations because each state in a chain is independent. Process 2: Correlated. This one makes correlated draws and is more likely to emit sequences of the same symbol. Pr[0 | 1] = 0.01 Pr[1 | 1] = 0.99 Pr[0 | 0] = 0.99 Pr[1 | 0] = 0.01 Nevertheless, the stationary distribution remains uniform. Chains drawn according to this process will be slow to mix in the sense that they will have long sequences of zeroes and long sequences of ones. The effective sample size will be much smaller than the number of iterations when drawing chains from this process. Process 3: Anticorrelated. The final process makes anticorrelated draws. It’s more likely to switch back and forth after every output, so that there will be very few repeating sequences of digits. Pr[0 | 1] = 0.99 Pr[1 | 1] = 0.01 Pr[0 | 0] = 0.01 Pr[1 | 0] = 0.99 The stationary distribution is still uniform. Chains drawn according to this process will mix very quickly. With an anticorrelated process, the effective sample size will be greater than the number of iterations. Visualization If I had more time, I’d simulate, draw some traceplots, and also show correlation plots at various lags and the rate at which the estimated mean converges. This example’s totally going in the Coursera course I’m doing on MCMC, so I’ll have to work out the visualizations soon. ### Online MSc in Applied Data Science, Big Data – part-time, small, private DSTI mission is simple: training executive students to become ready-to-go Data Scientists and Big Data Analysts. Check our small private online course programme. ### Magister Dixit “There are 18 million developers in the world, but only one in a thousand have expertise in artificial intelligence. To a lot of developers, AI is inscrutable and inaccessible. We’re trying to ease the burden.” Mark Hammond ( 2017 ) ### An Intuitive Introduction to Generative Adversarial Networks This article was jointly written by Keshav Dhandhania and Arash Delijani, bios below. 1. A brief review of Deep Learning 2. The image generation problem 3. Key issue in generative tasks 5. Challenges 7. Conclusion A brief review of Deep Learning Sketch of a (feed-forward) neural network, with input layer in brown, hidden layers in yellow, and output layer in red. Let’s begin with a brief overview of deep learning. Above, we have a sketch of a neural network. The neural network is made of up neurons, which are connected to each other using edges. The neurons are organized into layers - we have the hidden layers in the middle, and the input and output layers on the left and right respectively. Each of the edges is weighted, and each neuron performs a weighted sum of values from neurons connected to it by incoming edges, and thereafter applies a nonlinear activation such as sigmoid or ReLU. For example, neurons in the first hidden layer, calculate a weighted sum of neurons in the input layer, and then apply the ReLU function. 
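To make that last step concrete, here is a minimal NumPy sketch of one hidden layer computing a weighted sum of its inputs followed by a ReLU; the layer sizes and random weights are purely illustrative and are not taken from the article.

```python
import numpy as np

def relu(z):
    # ReLU activation: keep positive values, zero out the rest
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # 4 input neurons (illustrative)
W = rng.normal(size=(3, 4))   # edge weights from the 4 inputs to 3 hidden neurons
b = np.zeros(3)               # biases of the hidden neurons

hidden = relu(W @ x + b)      # weighted sum over incoming edges, then ReLU
print(hidden)
```

Stacking more layers simply repeats this weighted-sum-plus-activation step, using the previous layer's outputs as the next layer's inputs.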
The activation function introduces a nonlinearity which allows the neural network to model complex phenomena (multiple linear layers would be equivalent to a single linear layer). Given a particular input, we sequentially compute the values outputted by each of the neurons (also called the neurons’ activity). We compute the values layer by layer, going from left to right, using already computed values from the previous layers. This gives us the values for the output layer. Then we define a cost, based on the values in the output layer and the desired output (target value). For example, a possible cost function is the mean-squared error, $C = \frac{1}{n}\sum_{x}\left(h(x) - y\right)^2$, where x is the input, h(x) is the output and y is the target; the sum is over the n data points in our dataset. At each step, our goal is to nudge each of the edge weights by the right amount so as to reduce the cost function as much as possible. We calculate a gradient, which tells us how much to nudge each weight. Once we compute the cost, we compute the gradients using the backpropagation algorithm. The main result of the backpropagation algorithm is that we can exploit the chain rule of differentiation to calculate the gradients of a layer given the gradients of the weights in the layer above it. Hence, we calculate these gradients backwards, i.e. from the output layer to the input layer. Then, we update each of the weights by an amount proportional to the respective gradients (i.e. gradient descent). If you would like to read about neural networks and the back-propagation algorithm in more detail, I recommend reading this article by Nikhil Buduma on Deep Learning in a Nutshell. ### The image generation problem In the image generation problem, we want the machine learning model to generate images. For training, we are given a dataset of images (say 1,000,000 images downloaded from the web). During testing, the model should generate images that look like they belong to the training dataset, but are not actually in the training dataset. That is, we want to generate novel images (in contrast to simply memorizing), but we still want them to capture patterns in the training dataset so that the new images look similar to those in the training dataset. Image generation problem: There is no input, and the desired output is an image. One thing to note: there is no input in this problem during the testing or prediction phase. Every time we ‘run the model’, we want it to generate (output) a new image. This can be achieved by saying that the input is going to be sampled randomly from a distribution that is easy to sample from (say the uniform distribution or Gaussian distribution). The crucial issue in a generative task is: what is a good cost function? Let’s say you have two images that are outputted by a machine learning model. How do we decide which one is better, and by how much? The most common solution to this question in previous approaches has been the distance between the output and its closest neighbor in the training dataset, where the distance is calculated using some predefined distance metric. For example, in the language translation task, we usually have one source sentence, and a small set of (about 5) target sentences, i.e. translations provided by different human translators.
When a model generates a translation, we compare the translation to each of the provided targets, and assign it the score based on the target it is closest to (in particular, we use the BLEU score, which is a distance metric based on how many n-grams match between the two sentences). That kind of works for single sentence translations, but the same approach leads to a significant deterioration in the quality of the cost function when the target is a larger piece of text. For example, our task could be to generate a paragraph length summary of a given article. This deterioration stems from the inability of the small number of samples to represent the wide range of variation observed in all possible correct answers. GANs’ answer to the above question is: use another neural network! This scorer neural network (called the discriminator) will score how realistic the image outputted by the generator neural network is. These two neural networks have opposing objectives (hence, the word adversarial). The generator network’s objective is to generate fake images that look real, while the discriminator network’s objective is to tell apart fake images from real ones. This puts generative tasks in a setting similar to the 2-player games in reinforcement learning (such as chess, Atari games or Go) where we have a machine learning model improving continuously by playing against itself, starting from scratch. The difference here is that often in games like chess or Go, the roles of the two players are symmetric (although not always). In the GAN setting, the objectives and roles of the two networks are different: one generates fake samples, the other distinguishes real ones from fake ones. Sketch of Generative Adversarial Network, with the generator network labelled as G and the discriminator network labelled as D. Above, we have a diagram of a Generative Adversarial Network. The generator network G and discriminator network D are playing a 2-player minimax game. First, to better understand the setup, notice that D’s inputs can be sampled from the training data or the output generated by G: half the time from one and half the time from the other. To generate samples from G, we sample the latent vector from the Gaussian distribution and then pass it through G. If we are generating a 200 x 200 grayscale image, then G’s output is a 200 x 200 matrix. The objective function, which is essentially the standard log-likelihood for the predictions made by D, is $$\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{data}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]$$ The generator network G is minimizing the objective, i.e. reducing the log-likelihood, or trying to confuse D. It wants D to identify the inputs it receives from G as real whenever samples are drawn from its output. The discriminator network D is maximizing the objective, i.e. increasing the log-likelihood, or trying to distinguish generated samples from real samples. In other words, if G does a good job of confusing D, then it will minimize the objective by increasing D(G(z)) in the second term. If D does its job well, then samples drawn from the training data add to the objective function via the first term (because D(x) would be large), and generated samples keep the second term close to its maximum (because D(G(z)) would be small). Training proceeds as usual, using random initialization and backpropagation, with the addition that we alternately update the discriminator and the generator and keep the other one fixed. The following is a description of the end-to-end workflow for applying GANs to a particular problem:
1. Decide on the GAN architecture: What is the architecture of G? What is the architecture of D?
2. Train: Alternately update D and G for a fixed number of updates (see the code sketch after this article):
   1. Update D (freeze G): Half the samples are real, and half are fake.
   2. Update G (freeze D): All samples are generated (note that even though D is frozen, the gradients flow through D).
3. Manually inspect some fake samples. If quality is high enough (or if quality is not improving), then stop. Else repeat step 2.

When both G and D are feed-forward neural networks, the results we get are as follows (trained on the MNIST dataset). Results from Goodfellow et al. Rightmost column (in yellow boxes) are the closest images from the training dataset to the image on its direct left. All other images are generated samples. Using a more sophisticated architecture for G and D, with strided convolutions, the Adam optimizer instead of stochastic gradient descent, and a number of other improvements in architecture, hyperparameters and optimizers (see paper for details), we get the following results: Results from Alec Radford et al. Images are of ‘bedrooms’. ### Challenges If you would like to learn about GANs in much more depth, I suggest checking out the ICCV 2017 tutorials on GANs. There are multiple tutorials, each focusing on a different aspect of GANs, and they are quite recent. I’d also like to mention the concept of Conditional GANs. Conditional GANs are GANs where the output is conditioned on the input. For example, the task might be to output an image matching the input description. So if the input is “dog”, then the output should be an image of a dog. Below are results from some recent research (along with links to those papers). Results for ‘Text to Image synthesis’ by Reed et al. Results for Image Super-resolution by Ledig et al. Results for Image to Image translation by Isola et al. Generating high resolution ‘celebrity like’ images by Karras et al. Last but not least, if you would like to do a lot more reading on GANs, check out this list of GAN papers categorized by application and this list of 100+ different GAN variations. ### Conclusion I hope that in this article, you have understood a new technique in deep learning called Generative Adversarial Networks. They are one of the few successful techniques in unsupervised machine learning, and they are quickly revolutionizing our ability to perform generative tasks. Over the last few years, we’ve come across some very impressive results. There is a lot of active research in the field to apply GANs for language tasks, to improve their stability and ease of training, and so on. They are already being applied in industry for a variety of applications ranging from interactive image editing, 3D shape estimation, drug discovery, and semi-supervised learning to robotics. I hope this is just the beginning of your journey into adversarial machine learning. ### Author Bios: Keshav Dhandhania: Keshav is a cofounder of Compose Labs (commonlounge.com) and has spoken on GANs at international conferences including DataSciCon.Tech, Atlanta and DataHack Summit, Bengaluru, India. He did his master's in Artificial Intelligence at MIT, and his research focused on natural language processing, and before that, computer vision and recommendation systems. Arash Delijani: Arash previously worked on data science at MIT and is the cofounder of Orderly, an SF-based startup using machine learning to help businesses with customer segmentation and feedback analysis.
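Returning to the training workflow listed above, the following is a minimal, self-contained PyTorch sketch of the alternating update loop on a toy one-dimensional data distribution rather than images. Every architecture choice, hyperparameter, and variable name here is an illustrative assumption for the sketch, not the setup used in the papers referenced in this article.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

def real_batch(n):
    # Toy "real" data: samples from N(3, 0.5) stand in for real images.
    return 3.0 + 0.5 * torch.randn(n, 1)

G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Update D (G frozen): half the batch is real, half is generated.
    real = real_batch(64)
    fake = G(torch.randn(64, latent_dim)).detach()  # detach so this step does not update G
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Update G (D not updated): all samples are generated; gradients still flow through D.
    fake = G(torch.randn(64, latent_dim))
    g_loss = bce(D(fake), torch.ones(64, 1))  # G wants D to label its output as real
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, latent_dim)).detach().squeeze())  # samples should drift toward ~3.0
```

Note that the generator update above uses the common non-saturating variant (train G to make D output "real") rather than directly minimizing the second term of the minimax objective; both follow the same alternating scheme described in the workflow.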
### Announcing the 2018 Facebook Fellows and Emerging Scholars Facebook is proud to announce the 2018 Facebook Fellowship and Emerging Scholar award winners. “This year we received over 800 applications from promising and talented PhD students from around the world,” said Sharon Ayalde, Fellowship Program Manager. “We are pleased and excited to award 17 Fellows and 6 Emerging Scholars – a significant increase from last year.” The program, now in it’s 7th year, has supported over 70 top PhD candidates. The Facebook Fellowship program is designed to encourage and support promising doctoral students engaged in innovative and relevant research across computer science and engineering. The research topics from this year’s cohort range from natural language processing, computer vision, machine learning, commAI, networking and connectivity hardware, economics and computation, distributed systems, and security/privacy. Launched in 2017, Emerging Scholar Awards are open to first or second year PhD students. The program is specifically designed to support talented students from under-represented minority groups in the technology sector to encourage them to continue their PhD studies, pursue innovative research, and engage with the broader research community. Congratulations to this year’s talented group of Fellows and Emerging Scholars! We are excited to engage deeper with them, learn more about their research, and support their continued studies. ## 2018 Emerging Scholar Award recipients ### A simple way to set up a SparklyR cluster on Azure The SparklyR package from RStudio provides a high-level interface to Spark from R. This means you can create R objects that point to data frames stored in the Spark cluster and apply some familiar R paradigms (like dplyr) to the data, all the while leveraging Spark's distributed architecture without having to worry about memory limitations in R. You can also access the distributed machine-learning algorithms included in Spark directly from R functions. If you don't happen to have a cluster of Spark-enabled machines set up in a nearby well-ventilated closet, you can easily set one up in your favorite cloud service. For Azure, one option is to launch a Spark cluster in HDInsight, which also includes the extensions of Microsoft ML Server. While this service recently had a significant price reduction, it's still more expensive than running a "vanilla" Spark-and-R cluster. If you'd like to take the vanilla route, a new guide details how to set up Spark cluster on Azure for use with SparklyR. All of the details are provided in the link below, but the guide basically provides the Azure Distributed Data Engineering Toolkit shell commands to provision a Spark cluster, connect SparklyR to the cluster, and then interact with it via RStudio Server. This includes the ability to launch the cluster with pre-emptable low-priority VMs, a cost-effective option (up to 80% cheaper!) for non-critical workloads. Check out the details at the link below. Github (Azure): How to use SparklyR on Azure with AZTK ### Deep Learning & Computer Vision in the Microsoft Azure Cloud This is the first in a multi-part series by guest blogger Adrian Rosebrock. Adrian writes at PyImageSearch.com about computer vision and deep learning using Python, and he recently finished authoring a new book on deep learning for computer vision and image recognition. Introduction I had two goals when I set out to write my new book, Deep Learning for Computer Vision with Python. 
The first was to create a book/self-study program that was accessible to both novices and experienced researchers and practitioners — we start off with the fundamentals of neural networks and machine learning and by the end of the program you’re training state-of-the-art networks on the ImageNet dataset from scratch. My second goal was to provide a book that included: • Practical walkthroughs that present solutions to actual, real-world deep learning classification problems. • Hands-on tutorials (with accompanying code) that not only show you the algorithms behind deep learning for computer vision but their implementations as well. • A no-nonsense teaching style that cuts through all the cruft and helps you on your path to deep learning + computer vision mastery for visual recognition. Along the way I quickly realized that a stumbling block for many readers is configuring their development environment — especially true for those who wanted to utilize their GPU(s) and train deep neural networks on massive image datasets (such as ImageNet). Of course, some readers may not want to invest in physical hardware and instead utilize the cloud where it’s easy to spin-up and tear-down instances. I spent some time researching different cloud-based solutions. Some of them worked well, others either outright didn’t work as claimed or involved too much setup. When Microsoft reached out to me to take a look at their Data Science Virtual Machine (DSVM), I was incredibly impressed. The DSVM included TensorFlow, Keras, mxnet, Caffe, CUDA/cuDNN all out of the box, pre-configured and ready to go. Best of all, I could run the DSVM on a CPU instance (great if you’re just getting started with deep learning) or I could switch to a GPU instance and seamlessly watch my networks train orders of magnitude faster (excellent if you’re a deep learning practitioner looking to train deep neural networks on larger datasets). In the remainder of this post, I’ll be discussing why I chose to use the DSVM — and even adapted my entire deep learning book to run on it. Why I like Microsoft’s Data Science Virtual Machine Microsoft’s Data Science Virtual Machine (DSVM) runs in the Azure cloud and supports either Windows or Linux (Ubuntu). For nearly all deep learning projects I recommend Linux; however, there are some applications where Windows is appropriate — you can choose either. The list of packages installed in the DSVM is complete and comprehensive (you can find the full list here), but from a deep learning + computer vision perspective you’ll find: • TensorFlow • Keras • mxnet • Caffe/Caffe2 • Microsoft Cognitive Toolkit • Torch • OpenCV • Jupyter • CUDA/cuDNN • Python 3 Again, the complete list is very extensive and is a huge testament to not only the DSVM team for keeping this instance running seamlessly, but also to Microsoft’s desire to have their users utilize and even enjoy working in their environment. As I mentioned above, you can run the DSVM with either a CPU only or one or more GPUs. Once you have your DSVM up and running you’ll find many sample Jupyter notebooks for various machine learning, deep learning, and data science projects. These sample Jupyter notebooks will help you get up and running and familiarize yourself with the DSVM. If you prefer not to use Jupyter Notebooks you can also access your DSVM instance via SSH and VNC. To spin up your first DSVM instance (including a free $200 credit) you’ll want to follow this link: https://azure.microsoft.com/en-us/free/.
I recommend reading through the DSVM docs as well. Finally, be sure to read through the Tips, Tricks, and Suggestions section of this post, where I discuss additional advice and hacks you can use to better your experience with the DSVM. Your First Convolutional Neural Network I have put together a Jupyter Notebook demonstrating how to train your first Convolutional Neural Network using the following toolset: • Python (2.7 and 3). • TensorFlow. • Keras. You can find the Jupyter Notebook here: Note: Make sure (1) you install Jupyter Notebooks on your local system or (2) use a DSVM instance to open the notebook. Inside the notebook you’ll learn how to train the classic “LeNet” architecture to recognize handwritten digits: And obtain over 98% classification accuracy after only 20 epochs: Be sure to take a look at the Jupyter Notebook for a full explanation of the code and training process. I also want to draw attention to the code associated with this tutorial is the exact same code that I used when writing Deep Learning for Computer Vision with Python — with only two modifications: 1. Using %matplotlib inline to display plots inside the Jupyter Notebook. 2. Swapping out argument parsing for using a built-in Python args dictionary. There are no other required changes to the code, which again is a huge testament to the DSVM team. Tips, Tricks, and Suggestions In this section I detail some additional tips and tricks I found useful when working with the DSVM. Some of these suggestions are specific to my book, Deep Learning for Computer Vision with Python, while others are more general. Additional Python Packages I installed both imutils and progressbar2 in the DSVM once it was up and running: $ sudo /anaconda/envs/py35/bin/pip install imutils $sudo /anaconda/envs/py35/bin/pip install progressbar2 The imutils library is a series of convenience functions used to make basic image processing and computer vision operations easier using Python and the OpenCV library. The progressbar2 package is used to make nice progress bars when running tasks that take a long time to complete (such as building and packing an image dataset). Updating MKL I ran into a small issue when trying to work with the Intel Math Kernel Library (MKL): Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so. Process ForkPoolWorker-14: Process ForkPoolWorker-12: Process ForkPoolWorker-13: This was resolved by running the following command to update the mkl package: $ sudo /anaconda/envs/py35/bin/conda update mkl -n py35 Avoiding Accidental ResourceExhaustedError When leaving your Jupyter notebook open/running for long periods of time you may run into a ResourceExhaustedError when training your networks. This can be solved by inserting the following lines in a cell at the end of the notebook: %%javascript Jupyter.notebook.session.delete(); Command Line Arguments and Jupyter Notebooks Many deep learning and machine learning Python scripts require command line arguments… …but Jupyter notebooks do not have a concept of command line arguments. So, what do you do? Inside Deep Learning for Computer Vision with Python I made sure all command line arguments parsed into a built-in Python dictionary. 
This means you can change command line argument parsing code from this: # construct the argument parse and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--image", required=True,     help="path to input image") To this: args = {     "image": "/path/to/your/input/image.png" } Here we have swapped out the command line arguments for a hard-coded dictionary that points to the relevant parameters/file paths. Not all command line arguments can be swapped out so easily, but for all examples in my book I opted to use a Python dictionary to make this a near seamless experience for Jupyter notebook users. SSH and Remote Desktop You can access your Azure DSVM via SSH or remote desktop. If you opt for remote desktop make sure you install the X2Go Client as discussed in the DSVM docs. If you’re using macOS, make sure you install XQuartz as well. Where to Next? In this post you learned about the Microsoft Data Science Virtual Machine (DSVM). We discussed how the DSVM can be used for deep learning, and in particular, how the code from my book and self-study program, Deep Learning for Computer Vision with Python can executed on the DSVM. My goal when writing this book was to make it accessible to both novices and experienced researchers and practitioners — I have no doubt that the DSVM facilitates this accessibility by removing frustration with deep learning development environment configuration and getting you up and running quickly. If you’re interested in learning more about the Microsoft Data Science Virtual Machine, be sure to click here. Once you’re up and running with the DSVM take a look at my deep learning book — this self-study program is engineered from the ground up to help you master deep learning for computer vision (you’ll also find more detailed walkthroughs of LeNet and other popular network architectures, including ResNet, SqueezeNet, GoogLeNet, VGGNet, to name a few). Stay tuned for my next post in this series, where I will share my experience and tips for running more advanced deep learning techniques for computer vision on the Data Science Virtual Machine. ### Visual Aesthetics: Judging photo quality using AI techniques We built a deep learning system that can automatically analyze and score an image for aesthetic quality with high accuracy. Check the demo and see your photo measures up! ### BigML Release and Webinar: Operating Thresholds and Organizations! BigML’s first release of the year is here! Join us on Wednesday, January 31, 2018, at 10:00 AM PT (Portland, Oregon. GMT -08:00) / 07:00 PM CET (Valencia, Spain. GMT +01:00) for a FREE live webinar to discover the latest version of the BigML platform. We will be presenting two new features: operating thresholds for classification models to fine tune the performance of […] ### Data Science in 30 Minutes: Alan Schwarz, Former NYTimes Journalist, on Numbers-Based Journalism This FREE webinar will be on February 27th at 5:30 PM ET. Register below now, space is limited! Join The Data Incubator and former NY Times journalist Alan Schwarz for the next installment of our free online webinar series, Data Science in 30 minutes: Numbers-Based Journalism. Alan Schwarz, former N.Y. Times investigative reporter and Pulitzer finalist, discusses numbers-based journalism that shook industries from the National Football League to Big Pharma. Alan used data analysis to expose the NFL’s cover-up of concussions as well as issues in child psychiatry. 
Alan Schwarz is a Pulitzer Prize-nominated journalist best known for his reportage of public health issues for The New York Times. His 130-article series on concussions in sports is roundly credited with revolutionizing the handling of head injuries in professional and youth sports, and was a finalist for the 2011 Pulitzer Prize for Public Service. He followed that work with a series on A.D.H.D. and other psychiatric disorders in children, which also was considered for a Pulitzer and led to his book “A.D.H.D. NATION: Children, Doctors, Big Pharma and the Making of an American Epidemic.” A recognized expert on the use of mathematics and probability in journalism — statistical analysis formed the backbone of his major series — Mr. Schwarz has lectured at dozens of universities and professional conferences about these subjects, including at the 2015 SAS national convention and a keynote at the Andrew Wiles Mathematical Institute at the University of Oxford. Mr. Schwarz, who holds a bachelor of arts degree in Mathematics from the University of Pennsylvania, was honored by the American Statistical Association in 2013 with its Lifetime Excellence in Statistical Reporting Award and serves on editorial boards of the ASA and the Royal Statistical Society. Michael Li founded The Data Incubator, a New York-based training program that turns talented PhDs from academia into workplace-ready data scientists and quants. The program is free to Fellows, employers engage with the Incubator as hiring partners. Previously, he worked as a data scientist (Foursquare), Wall Street quant (D.E. Shaw, J.P. Morgan), and a rocket scientist (NASA). He completed his PhD at Princeton as a Hertz fellow and read Part III Maths at Cambridge as a Marshall Scholar. At Foursquare, Michael discovered that his favorite part of the job was teaching and mentoring smart people about data science. He decided to build a startup to focus on what he really loves. Michael lives in New York, where he enjoys the Opera, rock climbing, and attending geeky data science events. ### Propensity Score Matching in R Propensity scores are an alternative method to estimate the effect of receiving treatment when random assignment of treatments to subjects is not feasible. ### Bitcoin (World Map) Bubbles (This article was first published on R – rud.is, and kindly contributed to R-bloggers) We’re doing some interesting studies (cybersecurity-wise, not finance-wise) on digital currency networks at work-work and — while I’m loathe to create a geo-map from IPv4 geolocation data — we: • do get (often, woefully inaccurate) latitude & longitude data from our geolocation service (I won’t name-and-shame here); and, • there are definite geo-aspects to the prevalence of mining nodes — especially Bitcoin; and, • I have been itching to play with the nascent nord palette in a cartographical context… so I went on a small diversion to create a bubble plot of geographical Bitcoin node-prevalence. I tweeted out said image and someone asked if there was code, hence this post. You’ll be able to read about the methodology we used to capture the Bitcoin node data that underpins the map below later this year. For now, all I can say is that wasn’t garnered from joining the network-proper. I’m including the geo-data in the gist, but not the other data elements (you can easily find Bitcoin node data out on the internets from various free APIs and our data is on par with them). 
I’m using swatches for the nord palette since I was hand-picking colors, but you should use @jakekaupp’s most excellent nord package if you want to use the various palettes more regularly. I’ve blathered a bit about nord, so let’s start with that (and include the various other packages we’ll use later on): library(swatches) library(ggalt) # devtools::install_github("hrbrmstr/ggalt") library(hrbrthemes) # devtools::install_github("hrbrmstr/hrbrthemes") library(tidyverse) show_palette(nord) It may not be a perfect palette (accounting for all forms of vision issues and other technical details) but it was designed very well (IMO). The rest is pretty straightforward: • read in the bitcoin geo-data • count up by lat/lng • figure out which colors to use (that took a bit of trial-and-error) • tweak the rest of the ggplot2 canvas styling (that took a wee bit longer) I’m using development versions of two packages due to their added functionality not being on CRAN (yet). If you’d rather not use a dev-version of hrbrthemes just use a different ipsum theme vs the new theme_ipsum_tw(). read_csv("bitc.csv") %>% count(lng, lat, sort = TRUE) -> bubbles_df world <- map_data("world") world <- world[world$region != "Antarctica", ] ggplot() + geom_cartogram( data = world, map = world, aes(x = long, y = lat, map_id = region), color = nord["nord3"], fill = nord["nord0"], size = 0.125 ) + geom_point( data = bubbles_df, aes(lng, lat, size = n), fill = nord["nord13"], shape = 21, alpha = 2/3, stroke = 0.25, color = "#2b2b2b" ) + coord_proj("+proj=wintri") + scale_size_area(name = "Node count", max_size = 20, labels = scales::comma) + labs( x = NULL, y = NULL, title = "Bitcoin Network Geographic Distribution (all node types)", subtitle = "(Using bubbles seemed appropriate for some, odd reason)", caption = "Source: Rapid7 Project Sonar" ) + theme_ipsum_tw(plot_title_size = 24, subtitle_size = 12) + theme(plot.title = element_text(color = nord["nord14"], hjust = 0.5)) + theme(plot.subtitle = element_text(color = nord["nord14"], hjust = 0.5)) + theme(panel.grid = element_blank()) + theme(plot.background = element_rect(fill = nord["nord3"], color = nord["nord3"])) + theme(panel.background = element_rect(fill = nord["nord3"], color = nord["nord3"])) + theme(legend.position = c(0.5, 0.05)) + theme(axis.text = element_blank()) + theme(legend.title = element_text(color = "white")) + theme(legend.text = element_text(color = "white")) + theme(legend.key = element_rect(fill = nord["nord3"], color = nord["nord3"])) + theme(legend.background = element_rect(fill = nord["nord3"], color = nord["nord3"])) + theme(legend.direction = "horizontal") As noted, the RStudio project associated with this post in in this gist. Also, upon further data-inspection by @jhartftw, we’ve discovered yet-more inconsistencies in the geo-mapping service data (there are way too many nodes in Paris, for example), but the main point of the post was to mostly show and play with the nord palette. To leave a comment for the author, please follow the link and comment on their blog: R – rud.is. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... Continue Reading… ### Return of the Mac The Economist’s Big Mac index gives a flavour of how far currency values are out of whack. 
It is based on the idea of purchasing-power parity, which says exchange rates should move towards the level that would make the price of a basket of goods the same everywhere. Continue Reading… ### Gradient Boosting in TensorFlow vs XGBoost For many Kaggle-style data mining problems, XGBoost has been the go-to solution since its release in 2016. It's probably as close to an out-of-the-box machine learning algorithm as you can get today. Continue Reading… ### (What’s So Funny ‘Bout) Evidence, Policy, and Understanding [link] Kevin Lewis asked me what I thought of this article by Oren Cass, “Policy-Based Evidence Making.” That title sounds wrong at first—shouldn’t it be “evidence-based policy making”?—but when you read the article you get the point, which is that Cass argues that so-called evidence-based policy isn’t so evidence-based at all, that what is considered “evidence” in social science and economic policy is often so flexible that it can be taken to support whatever position you want. Hence, policy-based evidence making. I agree with Cass that the whole “evidence-based policy” thing has been oversold. For an extreme example, see this story of some “evidence-based design” in architecture that could well be little more than a billion-dollar pseudoscientific scam. More generally, I agree that there are problems with a lot of these studies, both in their design and in their interpretation. Here’s an story from a few years ago that I’ve discussed a bit; for a slightly more formal treatment of that example, see section 2.1 of this article. So I’m sympathetic with the points that Cass is making, and I’m glad this article came out; I think it will generally push the discussion in the right direction. But there are two places where I disagree with Cass. 1. First, despite all the problems with controlled experiments, they can still tell us something, as long as they’re not overinterpreted. If we forget about statistical significance and all that crap, a controlled experiment is, well, it’s a controlled experiment, you’re learning about something under controlled conditions, which can be useful. This is a point that has been made many times by my colleague Don Green: yes, controlled experiments have problems with realism, but the same difficulties can arise when trying to generalize observational comparisons to new settings. To put it another way, recall Bill James’s adage that alternative to good statistics is not “no statistics,” it’s “bad statistics.” Consider Cass’s article. He goes through lots of legitimate criticism of overinterpretations of results from that Oregon experiment, but then, what does he do? He gives lots of weight to an observational study from Yale that compares across states. One point that Cass makes very well is that you can’t rely too much on any single study. Any single study is limited in scope, it occurs at a particular time and place and with a particular set of treatments, outcomes, and time horizon. To make decisions, we have to do our best with the studies we have, which sometimes means discarding them completely if they are too noisy. And I think Cass is right that we should take studies more seriously when they are large and occur under realistic conditions. 2. The political slant is just overwhelming. Cass throws in the kitchen sink. For example, “When Denmark began offering generous maternity leave, so many nurses made use of it that mortality rates in nursing homes skyrocketed.” Whaaaa? Maybe they could hire some more help in those nursing homes? 
Any policy will have issues when rolling out on a larger scale, but it seems silly to say that therefore it’s not a good idea to evaluate based on what evidence is available. Then he refers to “This evidence ratchet, in which findings can promote but not undermine a policy, is common.” This makes no sense to me, given that anything can be a policy. The$15 minimum wage is a policy. So is the $5 minimum wage, or for that matter the$0 minimum wage. High taxes on the rich is a policy, low taxes on the rich is a policy, etc. Also this: “Grappling with such questions is frustrating and unsettling, as the policymaking process should be. It encourages humility and demands that the case for government action clear a high bar.” This may be Cass’s personal view, but it has nothing to do with evidence. He’s basically saying that if the evidence isn’t clear, we should make decisions based on his personal preference for less government spending, which I think means lower taxes on the rich. One could just as well say the opposite: “It encourages humility and demands that the riches of our country be shared more equally.” Or, to give it a different spin: “It encourages humility and demands that we live by the Islamic principles that have stood the test of time.” Or whatever. When evidence is weak, you have to respect uncertainty; it should not be treated as a rationale for sneaking in your own policy preferences as a default. But I hate to end it there. Overall I liked Cass’s article, and we should be able to get value from it, subtracting the political slant which muddles his legitimate points. The key point, which Cass makes well, is that there is no magic to evidence-based decision making: You can do a controlled experiment and still learn nothing useful. The challenge is where to go next. I do think evidence is important, and I think that, looking forward, our empirical studies of policies should be as realistic as possible, close to the ground, as it were. Easier said than done, perhaps, but we need to do our best, and I think that critiques such as Cass’s are helpful. ### Convolutional neural networks for language tasks Though they are typically applied to vision problems, convolution neural networks can be very effective for some language tasks. When approaching problems with sequential data, such as natural language tasks, recurrent neural networks (RNNs) typically top the choices. While the temporal nature of RNNs are a natural fit for these problems with text data, convolutional neural networks (CNNs), which are tremendously successful when applied to vision tasks, have also demonstrated efficacy in this space. In our LSTM tutorial, we took an in-depth look at how long short-term memory (LSTM) networks work and used TensorFlow to build a multi-layered LSTM network to model stock market sentiment from social media content. In this post, we will briefly discuss how CNNs are applied to text data while providing some sample TensorFlow code to build a CNN that can perform binary classification tasks similar to our stock market sentiment model. We see a sample CNN architecture for text classification in Figure 1. First, we start with our input sentence (of length seq_len), represented as a matrix in which the rows are our words vectors and the columns are the dimensions of the distributed word embedding. In computer vision problems, we typically see three input channels for RGB; however, for text we have only a single input channel. 
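To make the shape difference concrete, here is a small illustrative comparison of the two input layouts; the sizes are arbitrary and are not taken from the article.

```python
import numpy as np

# Vision input: height x width x 3 colour channels (e.g., a 32 x 32 RGB image)
image = np.zeros((32, 32, 3))

# Text input: seq_len x embed_dim word-embedding matrix with a single channel
seq_len, embed_dim = 20, 128   # 20 words, 128-dimensional embeddings (illustrative)
sentence = np.zeros((seq_len, embed_dim, 1))

print(image.shape, sentence.shape)   # (32, 32, 3) vs (20, 128, 1)
```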
When we implement our model in TensorFlow, we first define placeholders for our inputs and then build the embedding matrix and embedding lookup.

# Define Inputs
inputs_ = tf.placeholder(tf.int32, [None, seq_len], name='inputs')
labels_ = tf.placeholder(tf.float32, [None, 1], name='labels')
training_ = tf.placeholder(tf.bool, name='training')

# Define Embeddings
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)

Notice how the CNN processes the input as a complete sentence, rather than word by word as we did with the LSTM. For our CNN, we pass a tensor with all word indices in our sentence to our embedding lookup and get back the matrix for our sentence that will be used as the input to our network. Now that we have our embedded representation of our input sentence, we build our convolutional layers. In our CNN, we will use one-dimensional convolutions, as opposed to the two-dimensional convolutions typically used on vision tasks. Instead of defining a height and a width for our filters, we will only define a height, and the width will always be the embedding dimension. This makes sense intuitively, when compared to how images are represented in CNNs. When we deal with images, each pixel is a unit for analysis, and these pixels exist in both dimensions of our input image. For our sentence, each word is a unit for analysis and is represented by the dimension of our embeddings (the width of our input matrix), so words exist only in the single dimension of our rows. We can include as many one-dimensional kernels as we like with different sizes. Figure 1 shows a kernel size of two (red box over input) and a kernel size of three (yellow box over input). We also define a uniform number of filters (in the same fashion as we would for a two-dimensional convolutional layer) for each of our layers, which will be the output dimension of our convolution. We apply a ReLU activation and add a max-over-time pooling to our output that takes the maximum output for each filter of each convolution—resulting in the extraction of a single model feature from each filter.

# Define Convolutional Layers with Max Pooling
convs = []
for filter_size in filter_sizes:
    conv = tf.layers.conv1d(inputs=embed, filters=128, kernel_size=filter_size, activation=tf.nn.relu)
    pool = tf.layers.max_pooling1d(inputs=conv, pool_size=seq_len - filter_size + 1, strides=1)
    convs.append(pool)

We can think of these layers as “parallel”—i.e., one convolution layer doesn’t feed into the next, but rather they are all functions on the input that result in a unique output. We concatenate and flatten these outputs to combine the results.

# Concat Pooling Outputs and Flatten
pool_concat = tf.concat(convs, axis=-1)
pool_flat = tf.layers.flatten(pool_concat)

Next, we build a single fully connected layer with a sigmoid activation to make predictions from our concatenated convolutional outputs. Note that we can use a tf.nn.softmax activation function here as well if the problem has more than two classes. We also include a dropout layer here to regularize our model for better out-of-sample performance (the rate argument of tf.layers.dropout is the fraction of units to drop, so we pass 1 - keep_prob).

drop = tf.layers.dropout(inputs=pool_flat, rate=1 - keep_prob, training=training_)
dense = tf.layers.dense(inputs=drop, units=1, activation=tf.nn.sigmoid)

Finally, we can wrap this code into a custom tf.Estimator using the model_fn for a simple API for training, evaluating and making future predictions.
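The article stops short of showing that wrapper, so here is a rough sketch of what such a model_fn could look like with the TensorFlow 1.x Estimator API. All parameter names, feature keys, and hyperparameters below are illustrative assumptions rather than code from the article, and the final layer is kept as logits with the sigmoid applied separately, a common variation on the snippet above.

```python
import tensorflow as tf

def model_fn(features, labels, mode, params):
    # features["inputs"]: [batch, seq_len] integer word ids (the feature key is an assumption)
    embedding = tf.get_variable("embedding", [params["vocab_size"], params["embed_size"]])
    embed = tf.nn.embedding_lookup(embedding, features["inputs"])

    pools = []
    for k in params["filter_sizes"]:
        conv = tf.layers.conv1d(embed, filters=128, kernel_size=k, activation=tf.nn.relu)
        pools.append(tf.reduce_max(conv, axis=1))  # max-over-time pooling

    flat = tf.concat(pools, axis=-1)
    drop = tf.layers.dropout(flat, rate=0.5,
                             training=(mode == tf.estimator.ModeKeys.TRAIN))
    logits = tf.layers.dense(drop, units=1)

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions={"prob": tf.sigmoid(logits)})

    loss = tf.losses.sigmoid_cross_entropy(labels, logits)
    if mode == tf.estimator.ModeKeys.TRAIN:
        train_op = tf.train.AdamOptimizer(1e-3).minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
    return tf.estimator.EstimatorSpec(mode, loss=loss)

# Usage (illustrative):
# estimator = tf.estimator.Estimator(
#     model_fn=model_fn,
#     params={"vocab_size": 20000, "embed_size": 128, "filter_sizes": [2, 3, 4]})
```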
And there we have it: a convolutional neural network architecture for text classification. As with any model comparison, there are some trade offs between CNNs and RNNs for text classification. Even though RNNs seem like a more natural choice for language, CNNs have been shown to train up to 5x faster than RNNs and perform well on text where feature detection is important. However, when long-term dependency over the input sequence is an important factor, RNN variants typically outperform CNNs. Ultimately, language problems in various domains behave differently, so it is important to have multiple techniques in your arsenal. This is just one example of a trend we are seeing in applying techniques successfully across different areas of research. While convolutional neural networks have traditionally been the star of the computer vision world, we are starting to see more breakthroughs in applying them to sequential data. This post is a collaboration between O'Reilly and TensorFlow. See our statement of editorial independence. ### Build the right thing; build the thing right How design thinking supports delivering product. Continue reading Build the right thing; build the thing right. ### Getting started with spatial data in R – EdinbR talk (This article was first published on R – scottishsnow, and kindly contributed to R-bloggers) Last night (2018-01-17) I spoke at the EdinbR user group alongside Susan Johnston. Susan talked about writing R packages and you see her slides here. I gave an introduction to working with spatial data in R. You can see my slides below:
# Homework Help: Calculating the resistance when the switch is off 1. Sep 25, 2012 1. The problem statement, all variables and given/known data EDIT: The chain is connected to a battery, thus the current in this chain is direct. When the electric chain's switch (J) is on, resistance Rab (from dots A to B) is equal to 80 Ohms. What is the electric chain's resistance when the switch (J) is off? 2. Relevant equations 3. The attempt at a solution I thought that the resistors are connected in parallel , but my teacher said that when the switch if on resistors are connected in parallel and when it's off it's in series. Is this true, cause I think it's parallel, because the wires split either way, no matter if the switch is on or not... Don't know for sure what's the connection type in this chain, so... Last edited: Sep 25, 2012 2. Sep 25, 2012 ### Staff: Mentor When the switch J is closed, isn't there a direct path from A to B passing right through it? How can the resistance then be 80Ω? Can you confirm the question statement and diagram? 3. Sep 25, 2012 Well the book says exactly that. In my view this is reasonable because only some electricity is flowing through the switch when it's on, not all of it... Anyway, does anyone has the answer to my original questions? 4. Sep 25, 2012 ### Staff: Mentor No, an ideal switch and wires have zero resistance. The switch will form a short circuit, and ALL of the current will pass through it. So, either the text is wrong, or there is more to this problem than has been stated. Is the switch non-ideal with some resistance of its own? Could it be that the switch should really be oriented horizontally across the circuit rather than vertically? 5. Sep 25, 2012 ### Staff: Mentor No one knows, because you have got the diagram wrong, kakadas. As gneill observed. 6. Sep 25, 2012 ### CWatters I agree with the other posters. The original question, the diagram and your teachers comment are all inconsistent with each other. Can you scan that page of the book? PS Are you sure the switch shouldn't be between the other two corners of the square? 7. Sep 25, 2012 I'm 100% sure of that. I will upload a scan ASAP. Oh I think I forgot to mention a crucial part of this chain (Or maybe not, I'm just learning..): It's connected to a battery, thus the current is direct. EDIT: Well maybe You're a bit confused because English is not my native language, so... But I really think that I translated the original question correctly... Here's the Photo of the chain from my book: I really hope someone will answer my questions. Last edited: Sep 25, 2012 8. Sep 25, 2012 ### azizlwl When the electric chain's switch (J) is on, resistance Rab (from dots A to B) is equal to 80 Ohms. What is the electric chain's resistance when the switch (J) is off? 2. Relevant equations 3. The attempt at a solution I thought that the resistors are connected in parallel , but my teacher said that when the switch if on resistors are connected in parallel and when it's off it's in series. -------------------------- Parallel circuit $R_{total}=\frac{R_1 R_2 R_3}{R_1 + R_2 +R_3}$ Any factor in the numerator with a zero value will result in total zero resistance. 9. Sep 25, 2012 ### Staff: Mentor According to the diagram, RAB must be zero when the switch (j) is closed (on). Is it possible that there is a language confusion regarding the switch positions? "closed" or "on" means that the contacts are touching and current can flow. 
"Open" or "off" means that the contacts are not touching and current cannot flow --- open circuit. By the way, out of curiosity, what is the original language of the text? 10. Sep 26, 2012 That was exactly what i meant to say.. But does the current not flow when the switch is off? I think it still flows, because IMO the resistors are connected in parallel, so I think that the current flows either way, no matter in what state the switch is. If I'm wrong, Could anyone prove to me that the current doesn't flow when the switch is off? 11. Sep 26, 2012 ### Staff: Mentor Yes the current flows when the switch is off (open) --- It then must flow through the resistor network. But your problem statement contends that the resistance is 80 Ohms when the switch is closed (on). That is not possible with the given information. 12. Sep 26, 2012 ### CWatters I agree. Looks like ON and OFF may have been confused but... If it's 80 Ohms with the switch OFF (open) that would allow R to be calculated but when the switch is ON (conducting) the equivalent resistance is ZERO. The teachers comment about it changing from serial to parallel depending on the switch position makes no sense. 13. Sep 26, 2012 Why is that so, please explain in more detail? BTW does it make a difference if the current is DC or AC? In my case it's DC. 14. Sep 26, 2012 ### CWatters With the switch OFF you have two resistors in series, in parallel with, two resistors in series. With the switch ON the resistors are irrelevant. All the current flows through the switch. You could replace the resistors with ones that were much bigger, much smaller, or a short circuit and it would make no difference. The equivalent resistance is zero Ohms. Makes no difference if it's AC or DC. 15. Sep 27, 2012 I Just want to clearinfy that By telling that the switch is ON I meant that it's CONDUCTING. OFF means that it's open and NOT CONDUCTING. 16. Sep 27, 2012 ### azizlwl http://imageshack.us/a/img688/1161/14883781.png [Broken]' As others said, there are errors in the data and in the drawing. It is waste of time computing on it. I guess the problem meant for Wheatstone Bridge. http://en.wikipedia.org/wiki/Wheatstone_bridge Last edited by a moderator: May 6, 2017 17. Sep 27, 2012 ### CWatters That's what I had assumed as well. 18. Sep 27, 2012 ### Staff: Mentor As it is drawn the resistance of the network must be zero with the switch closed. The only way the resistance can be 80 Ohms with the switch closed is if there's some other resistance hiding somewhere, such as internal resistance of a voltage source (which is not shown), or if the switch itself has resistance. Without knowing specifics the problem is not solvable.
Consider the following expression.
$$\displaystyle \lim_{x\rightarrow-3}\frac{\sqrt{2x+22}-4}{x+3}$$
The value of the above expression (rounded to 2 decimal places) is ___________.

We can apply L'Hôpital's rule. Ans is 0.25.

Copy-paste from Ruturaj sir's mock test: https://gateoverflow.in/285459/go2019-flt1-19

For the square root of 16, isn't -4 also possible?

$\displaystyle \lim_{x \to -3} \frac{\sqrt{2x+22}\,-4}{x+3}\; \left(\frac{0}{0}\ \text{form}\right)$

Using L'Hôpital's rule:

$\displaystyle \lim_{x \to -3} \frac{\frac{1}{2\sqrt{2x+22}}(2)\,-0}{1+0} =\lim_{x \to -3}\frac{1}{\sqrt{2x+22}} =\frac{1}{\sqrt{2(-3)+22}} =\frac{1}{4}=0.25$

Ans is 0.25; applying L'Hôpital's rule we can easily get this: https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule

1 comment: Where do you study calculus for GATE from? Please suggest resources.
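As a cross-check that avoids L'Hôpital's rule (and, regarding the comment above: the radical sign denotes the principal, non-negative square root, so only +4 is relevant here), one can rationalize the numerator with its conjugate:

$\displaystyle \lim_{x \to -3} \frac{\sqrt{2x+22}-4}{x+3}\cdot\frac{\sqrt{2x+22}+4}{\sqrt{2x+22}+4} = \lim_{x \to -3} \frac{(2x+22)-16}{(x+3)\left(\sqrt{2x+22}+4\right)} = \lim_{x \to -3} \frac{2(x+3)}{(x+3)\left(\sqrt{2x+22}+4\right)} = \frac{2}{\sqrt{16}+4} = \frac{2}{8} = 0.25$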
astSphMap

Create a SphMap

Description:
This function creates a new SphMap and optionally initialises its attributes. A SphMap is a Mapping which transforms points from a 3-dimensional Cartesian coordinate system into a 2-dimensional spherical coordinate system (longitude and latitude on a unit sphere centred at the origin). It works by regarding the input coordinates as position vectors and finding their intersection with the sphere surface. The inverse transformation always produces points which are a unit distance from the origin (i.e. unit vectors).

Synopsis
AstSphMap *astSphMap( const char *options, ... )

Parameters:
options
Pointer to a null-terminated string containing an optional comma-separated list of attribute assignments to be used for initialising the new SphMap. The syntax used is identical to that for the astSet function and may include "printf" format specifiers identified by "%" symbols in the normal way.
...
If the "options" string contains "%" format specifiers, then an optional list of additional arguments may follow it in order to supply values to be substituted for these specifiers. The rules for supplying these are identical to those for the astSet function (and for the C "printf" function).

Returned Value
astSphMap()
A pointer to the new SphMap.

Notes:
• The spherical coordinates are longitude (positive anti-clockwise looking from the positive latitude pole) and latitude. The Cartesian coordinates are right-handed, with the x axis (axis 1) at zero longitude and latitude, and the z axis (axis 3) at the positive latitude pole.
• At either pole, the longitude is set to the value of the PolarLong attribute.
• If the Cartesian coordinates are all zero, then the longitude and latitude are set to the value AST__BAD.
• A null Object pointer (AST__NULL) will be returned if this function is invoked with the AST error status set, or if it should fail for any reason.

Status Handling
The protected interface to this function includes an extra parameter at the end of the parameter list described above. This parameter is a pointer to the integer inherited status variable: "int *status".
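To make the synopsis above concrete, here is a minimal usage sketch (my own illustration, not taken from the AST documentation): it creates a SphMap with no extra attributes and pushes a single 3-dimensional Cartesian position through the forward transformation with astTranN. The header name, build details, the in[coordinate][point] array layout and the use of radians for the returned angles are assumptions to be checked against your AST installation.

```c
/* Minimal sketch: transform one 3-D Cartesian vector to (longitude, latitude)
 * with a SphMap. Header name and build details are assumptions; check your
 * AST installation. */
#include <stdio.h>
#include "ast.h"

int main( void ) {
   astBegin;                              /* open an AST object context */

   AstSphMap *sphmap = astSphMap( "" );   /* no attribute assignments */

   /* One point, stored here as in[coordinate][point]: the position vector
      (1, 1, 0). It need not be of unit length, since the SphMap treats the
      input as a position vector and intersects it with the unit sphere. */
   double in[ 3 ][ 1 ] = { { 1.0 }, { 1.0 }, { 0.0 } };
   double out[ 2 ][ 1 ];

   /* Forward transformation: 3 input coordinates -> 2 output coordinates. */
   astTranN( sphmap, 1, 3, 1, (const double *) in, 1, 2, 1, (double *) out );

   if ( astOK ) {
      /* For (1, 1, 0) one expects longitude ~ pi/4 and latitude 0 (radians). */
      printf( "longitude = %g, latitude = %g\n", out[ 0 ][ 0 ], out[ 1 ][ 0 ] );
   }

   astEnd;                                /* annul objects created since astBegin */
   return 0;
}
```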
# Margin of Safety

Written by Jerry Ratzlaff. Posted in Safety

Margin of safety, abbreviated as MS, expresses how much excess capacity a system has beyond what is required. It is obtained from the factor of safety (the ratio of the system's structural capacity to the required capacity) by subtracting one.

## Margin of safety formula

$$\large{ MS = FS \; - 1 }$$

Where:

$$\large{ MS }$$ = margin of safety

$$\large{ FS }$$ = factor of safety

Tags: Equations for Safety
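For illustration (values assumed here, not from the original page): a component designed with a factor of safety of 1.25 can carry 25% more than its required load, so

$$\large{ MS = 1.25 - 1 = 0.25 }$$

i.e. a margin of safety of 0.25, or 25% excess capacity.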
# source:trunk/Documents/DrivePaper/paper.tex@9189 Last change on this file since 9189 was 9189, checked in by tbretz, 12 years ago *** empty log message *** File size: 63.3 KB Line 1% Template article eor Elsevier's document class elsarticle' 2% with harvard style bibliographic references 3% SP 2008/03/01 4 5%\documentclass[authoryear,preprint,12pt]{elsarticle} 6% 7\documentclass[5p,twocolumn,12pt]{elsarticle} 8%\documentclass[12pt,final]{elsart5p} 9 10% Use the option review to obtain double line spacing 11% \documentclass[authoryear,preprint,review,12pt]{elsarticle} 12 13% Use the options 1p,twocolumn; 3p; 3p,twocolumn; 5p; or 5p,twocolumn 14% for a journal layout: 15% \documentclass[authoryear,final,1p,times]{elsarticle} 16% \documentclass[authoryear,final,1p,times,twocolumn]{elsarticle} 17% \documentclass[authoryear,final,3p,times]{elsarticle} 18% \documentclass[authoryear,final,3p,times,twocolumn]{elsarticle} 19% \documentclass[authoryear,final,5p,times]{elsarticle} 20% \documentclass[authoryear,final,5p,times,twocolumn]{elsarticle} 21 22% if you use PostScript figures in your article 23% use the graphics package for simple commands 24% \usepackage{graphics} 25% or use the graphicx package for more complicated commands 26% \usepackage{graphicx} 27% or use the epsfig package if you prefer to use the old commands 28% \usepackage{epsfig} 29 30% The amssymb package provides various useful mathematical symbols 31\usepackage{amssymb} 32% The amsthm package provides extended theorem environments 33% \usepackage{amsthm} 34 35% The lineno packages adds line numbers. Start line numbering with 36% \begin{linenumbers}, end it with \end{linenumbers}. Or switch it on 37% for the whole article with \linenumbers. 38%\usepackage{lineno} 39%\linenumbers 40 41%\usepackage[numbers]{natbib} 42 43\usepackage{textcomp,graphicx,times,wrapfig,xspace,url,amssymb,amsmath,wasysym,stmaryrd} 44 45% Hyperref has the disadvantage that it dsoen't correctly like-break urls 46% in the bibliography 47%\usepackage{hyperref} 48 49 50\journal{Astroparticle Physics} 51 52\begin{document} 53\begin{frontmatter} 54 55% Graphics at 1000dpi 56 57\title{The drive system of the\\Major Atmospheric Gamma-ray Imaging Cherenkov Telescope} 58 59%\newcommand{\corref}{\thanksref} 60%\newcommand{\cortext}{\thanks} 61\author[tb]{T.~Bretz\corref{cor1}} 62\author[tb]{D.~Dorner} 63\author[rw]{R.~M.~Wagner} 64\author[rw]{P.~Sawallisch} 65\address[tb]{Universit\"{a}t W\"{u}rzburg, Am Hubland, 97074 W\"{u}rzburg, Germany} 66\address[rw]{Max-Planck-Institut f\"ur Physik, F\"ohringer Ring 6, 80805 M\"{u}nchen, Germany} 67\cortext[cor1]{Corresponding author: [email protected]} 68 69%\input library.def 70 71\newcommand{\mylesssim}{{\apprle}} 72\newcommand{\mygtrsim} {{\apprge}} 73\newcommand{\degree}{{\textdegree{}}} 74 75\begin{abstract} 76The MAGIC telescope is an imaging atmospheric Cherenkov telescope, 77designed to observe very high energy gamma-rays while achieving a low 78energy threshold. One of the key science goals is fast follow-up of the 79enigmatic and short lived gamma-ray bursts. The drive system for the 80telescope has to meet two basic demands: (1)~During normal 81observations, the 72-ton telescope has to be positioned accurately, and 82has to track a given sky position with high precision at a typical 83rotational speed in the order of one revolution per day. 
(2)~For 84successfully observing GRB prompt emission and afterglows, it has to be 85powerful enough to position to an arbitrary point on the sky within a 86few ten seconds and commence normal tracking immediately thereafter. To 87meet these requirements, the implementation and realization of the 88drive system relies strongly on standard industry components to ensure 89robustness and reliability. In this paper, we describe the mechanical 90setup, the drive control and the calibration of the pointing, as well 91as present measurements of the accuracy of the system. We show that the 92drive system is mechanically able to operate the motors with an 93accuracy even better than the feedback values from the axes. In the 94context of future projects, envisaging telescope arrays comprising 95about 100 individual instruments, the robustness and scalability of the 96concept is emphasized. 97\end{abstract} 98 99\begin{keyword} 100MAGIC\sep drive system\sep IACT\sep scalability\sep calibration\sep fast positioning 101\end{keyword} 102 103\end{frontmatter} 104 105%\maketitle 106 107\section{Introduction} 108 109The MAGIC telescope on the Canary Island of La~Palma, located 2200\,m 110above sea level at 28\textdegree{}45$^\prime$\,N and 17\textdegree{}54$^\prime$\,W, is 111an imaging atmospheric Cherenkov telescope designed to achieve a low 112energy threshold, fast positioning, and high tracking 113accuracy~\cite{Lorenz:2004, Cortina:2005}. The MAGIC design, and the 114currently ongoing construction of a second telescope 115(MAGIC\,II;~\cite{Goebel:2007}), pave the way for ground-based 116detection of gamma-ray sources at cosmological distances down to less 117than 25\,GeV~\cite{Sci}. After the discovery of the distant blazars 1ES\,1218+304 118at a redshift of $z$\,=\,0.182~\citep{2006ApJ...642L.119A} and 1191ES\,1011+496 at $z$\,=\,0.212~\citep{2007ApJ...667L..21A}, the most 120recent breakthrough has been the discovery of the first quasar at very 121high energies, the flat-spectrum radio source 3C\,279 at a redshift of 122$z$\,=\,0.536~\cite{2008Sci...320.1752M}. These observational results 123were somewhat surprising, since the extragalactic background radiation 124in the mid-infrared to near-infrared wavelength range was believed to 125be strong enough to inhibit propagation of gamma-rays across 126cosmological distances~\citep{2001MNRAS.320..504S, 2007arXiv0707.2915K, Hauser:2001}. 127%However, it could 128%be shown that the results of deep galaxy surveys with the Hubble and 129%Spitzer Space telescopes are consistent with these findings, if the 130%spurious feature at one micron is attributed to a foreground effect 131%resulting from an inaccurate subtraction of zodiacal 132%light~\citep{Hauser:2001, 2007arXiv0707.2915K, Kneiske:2004}. 133The 134apparent low level of pair attenuation of gamma-rays greatly improves 135the prospects of searching for very high energy gamma-rays from 136gamma-ray bursts (GRBs), cf.~\citep{Kneiske:2004}. Their remarkable similarities with blazar 137flares, albeit at much shorter timescales, presumably arise from the 138scaling behavior of relativistic jets, the common physical cause of 139these phenomena. Since most GRBs reside at large redshifts, their 140detection at very high energies relies on the low level of 141absorption~\citep{1996ApJ...467..532M}. Moreover, the cosmological 142absorption decreases with photon energy, favoring MAGIC to discover 143GRBs due to its low energy threshold. 
144 145Due to the short life times of GRBs and the limited field of view of 146imaging atmospheric Cherenkov telescopes, the drive system of the MAGIC 147telescope has to meet two basic demands: during normal observations, 148the 72-ton telescope has to be positioned accurately, and has to 149track a given sky position, i.e., counteract the apparent rotation of 150the celestial sphere, with high precision at a typical rotational speed 151in the order of one revolution per day. For catching the GRB prompt 152emission and afterglows, it has to be powerful enough to position the 153telescope to an arbitrary point on the sky within a very short time 154%($\apprle$\,60\,s) 155and commence normal tracking immediately 156thereafter. To keep the system simple, i.e., robust, both requirements 157should be achieved without an indexing gear. The telescope's total 158weight of 72~tons is comparatively low, reflecting the 159use of low-weight materials whenever possible. 160For example, the mount consists of a space frame of carbon-fiber 161reinforced plastic tubes, and the mirrors are made of polished 162aluminum. 163 164In this paper, we describe the basic properties of the MAGIC drive 165system. In section~\ref{sec2}, the hardware components and mechanical 166setup of the drive system are outlined. The control loops and 167performance goals are described in section~\ref{sec3}, while the 168implementation of the positioning and tracking algorithms and the 169calibration of the drive system are explained in section~\ref{sec4}. 170The system can be scaled to meet the demands of other telescope designs 171as shown in section~\ref{sec5}. Finally, in section~\ref{outlook} and 172section~\ref{conclusions} we draw conclusions from our experience of 173operating the MAGIC telescope with this drive system for four years. 174 175\section{General design considerations}\label{design} 176 177The drive system of the MAGIC telescope is quite similar to that of 178large, alt-azimuth-mounted optical telescopes. Nevertheless there are 179quite a few aspects that influenced the design of the MAGIC drive 180system in comparison to optical telescopes and small-diameter Imaging 181Atmospheric Cherenkov telescopes (IACT). 182 183Although IACTs have optical components, the tracking and stability 184requirements for IACTs are much less demanding than for optical 185telescopes. Like optical telescopes, IACTs track celestial 186objects, but observe quite different phenomena: Optical telescopes 187observe visible light, which originates at infinity and is parallel. 188Consequently, the best-possible optical resolution is required and in 189turn, equal tracking precision due to comparably long integration 190times, i.e., seconds to hours. In contrast, IACTs record the Cherenkov 191light produced by an electromagnetic air-shower in the atmosphere, 192induced by a primary gamma-ray, i.e., from a close by 193(5\,km\,-\,20\,km) and extended event with a diffuse transverse 194extension and a typical extension of a few hundred meters. Due to the 195stochastic nature of the shower development, the detected light will 196have an inherent limitation in explanatory power, improving normally 197with the energy, i.e., shower-particle multiplicity. As 198the Cherenkov light is emitted under a small angle off the particle 199tracks, these photons do not even point directly to the source like in 200optical astronomy. Nevertheless, the shower points towards the 201direction of the incoming gamma-ray and thus towards its source on the 202sky. 
For this reason its origin can be reconstructed analyzing its 203image. Modern IACTs achieve an energy-dependent pointing resolution for 204individual showers of 6$^\prime$\,-\,0.6$^\prime$. These are the 205predictions from Monte Carlo simulations assuming, amongst other 206things, ideal tracking. This sets the limits achievable in practical 207cases. Consequently, the required tracking precision must be at least 208of the same order or even better. Although the short integration times, 209on the order of a few nanoseconds, would allow for an offline 210correction, this should be avoided since it may give rise to 212 213%MAGIC, as other large IACTs, has no protective dome. It is constantly 214%exposed to daily changing weather conditions and intense sunlight, and 215%therefore suffers much more material aging than optical telescopes. A 216%much simpler mechanical mount had to be used, resulting in a design of 217%considerably less stiffness, long-term irreversible deformations, and 218%small unpredictable deformations due to varying wind pressure. The 219%tracking system does not need to be more precise than the mechanical 220%structure and, consequently, can be much simpler and hence cheaper as 221%compared to that of large optical telescopes. 222 223To meet one of the main physics goals, the observation of prompt and 224afterglow emission of GRBs, positioning of the telescope to their 225assumed sky position is required in a time as short as possible. 226Alerts, provided by satellites, arrive at the MAGIC site typically 227within 10\,s after the outburst~\citep{2007ApJ...667..358A}. 228Since the life times of GRBs show a bimodal distribution~\cite{Paciesas:1999} 229with a peak between 10\,s and 100\,s. To achieve 230a positioning time to any position on the sky within a reasonable 231time inside this window, i.e. less than a minute, 232a very light-weight but sturdy telescope and a fast-acting and 233powerful drive system is required. 234\begin{figure*}[htbp] 235\begin{center} 236 \includegraphics*[width=0.91\textwidth,angle=0,clip]{figure1.eps} 237\caption{The top picture shows the MAGIC\,I telescope with the major 238components of the drive system. The elevation drive unit, from its back 239side, is enlarged in the bottom left picture. Visible is the actuator 240of the safety holding brake and its corresponding brake disk mounted on 241the motor-driven axis. The motor is attached on the opposite side. The 242picture on the bottom right shows one of the azimuth bogeys with its 243two railway rails. The motor is housed in the grey box on the 244yellow drive unit. It drives the tooth double-toothed wheel gearing 245into the chain through a gear and a clutch.} 246% \includegraphics*[width=0.48\textwidth,angle=0,clip]{figure1a.eps} 247% \hfill 248% \includegraphics*[width=0.48\textwidth,angle=0,clip]{figure1b.eps} 249%\caption{{\em Left}: One of the bogeys with its two railway rails. The 250%motor is encapsulated in the grey box on the yellow drive unit. It drives 251%%the tooth double-toothed wheel gearing into the chain through a gear 252%and a clutch. {\em Right}: The drive unit driving the elevation axis 253%from the back side. Visible is the actuator of the safety holding brake 254%and its corresponding brake disk mounted on the motor-driven axis. The 255%motor is attached on the opposite side. 
} 256\label{figure1} 257\end{center} 258\end{figure*} 259 260\section{Mechanical setup and hardware components}\label{sec2} 261 262The implementation of the drive system relies strongly on standard 263industry components to ensure robustness, reliability and proper 264technical support. Its major drive components, described hereafter, are 265shown on the pictures in fig.~\ref{figure1}. 266 267The azimuth drive ring of 20\,m diameter is made from a normal railway 268rail, which was delivered in pre-bent sections and welded on site. Its 270the concrete foundation uses standard rail-fixing elements, and allows 271for movements caused by temperature changes. The maximum allowable 272deviation from the horizontal plane as well as deviation from flatness 273is $\pm 2$\,mm, and from the ideal circle it is $\Delta$r\,=\,8\,mm. 274The rail support was leveled with a theodolite every 60\,cm 275with an overall tolerance of $\pm 1.5$\,mm every 60\,cm. In between the 276deviation is negligible. Each of the six bogeys holds two standard 277crane wheels of 60\,cm diameter with a rather broad wheel tread of 278110\,mm. This allows for deviations in the 11.5\,m-distance to the 279central axis due to extreme temperature changes, which can even be 280asymmetric in case of different exposure to sunlight on either side. 281For the central bearing of the azimuth axis, a high-quality ball 282bearing was installed fairly assuming that the axis is vertically 283stable. For the elevation axis, due to lower weight, a less expensive 284sliding bearing with a teflon layer was used. These sliding bearings 285have a slightly spherical surface to allow for small misalignments 286during installation and some bending of the elevation axis stubs under 288 289The drive mechanism is based on duplex roller chains and sprocket 290wheels in a rack-and-pinion mounting. The chains have a breaking 291strength of 19~tons and a chain-link spacing of 2.5\,cm. The initial 2933\,mm\,-\,5\,mm, according to the data sheet, corresponding to much 294less than an arcsecond on the telescope axes. The azimuth drive chain 295is fixed on a dedicated ring on the concrete foundation, but has quite 296some radial distance variation of up to 5\,mm. The elevation drive 297chain is mounted on a slightly oval ring below the mirror dish, because 298the ring forms an integral part of the camera support mast structure. 299 300% Bosch Rexroth AG, 970816 Lohr am Main, Germany 301% www.boschrexroth.de 302 303% Wittenstein alpha GmbH, Walter-Wittenstein-Stra"se 1, D-97999 Igersheim 304% www.wittenstein-alpha.de 305 306% updated 307Commercial synchronous motors (type designation Bosch 308Rexroth\footnote{\url{http://www.boschrexroth.de}\\Bosch Rexroth AG, 30997816 Lohr am Main, Germany} MHD\,112C-058) are used together with 310low-play planetary gears (type designation 311alpha\footnote{\url{http://www.wittenstein-alpha.de}\\Wittenstein alpha 312GmbH, 97999 Igersheim, Germany} GTS\,210-M02-020\,B09, ratio 20) linked 313to the sprocket wheels. These motors intrinsically allow for a 314positional accuracy better than one arcsecond of the motor axis. Having 315a nominal power of 11\,kW, they can be overpowered by up to a factor 316five for a few seconds. It should be mentioned that due to the 317installation height of more than 2200\,m a.s.l., due to lower air 318pressure and consequently less efficient cooling, the nominal values 319given must be reduced by about 20\%. 
Deceleration is done operating the 320motors as generator which is as powerful as acceleration. The motors 321contain 70\,Nm holding brakes which are not meant to be used as driving 322brakes. The azimuth motors are mounted on small lever arms. In order to 323follow the small irregularities of the azimuthal drive chain, the units 324are forced to follow the drive chain, horizontally and  vertically, by 325guide rolls. The elevation-drive motor is mounted on a nearly 1\,m long 326lever arm to be able to compensate the oval shape of the chain and the 327fact that the center of the circle defined by the drive chain is 328shifted 356\,mm away from the axis towards the camera. The elevation 329drive is also equipped with an additional brake, operated only as 330holding brake, for safety reasons in case of extremely strong wind 331pressure. No further brake are installed on the telescope. 332 333The design of the drive system control, c.f.~\citet{Bretz:2003drive}, 334is based on digitally controlled industrial drive units, one for each 335motor. The two motors driving the azimuth axis are coupled to have a 336more homogeneous load transmission from the motors to the structure 337compared to a single (more powerful) motor. The modular design allows 338to increase the number of coupled devices dynamically if necessary, 339c.f.~\citet{Bretz:2005drive}. 340 341At the latitude of La Palma, the azimuth track of stars can exceed 342180\textdegree{} in one night. To allow for continuous observation of a 343given source at night without reaching one of the end positions in 344azimuth. the allowed range for movements in azimuth spans from 345$\varphi$\,=\,-90\textdegree{} to $\varphi$\,=\,+318\textdegree, where 346$\varphi$\,=\,0\textdegree{} corresponds to geographical North, and 347$\varphi$\,=\,90\textdegree{} to geographical East. To keep slewing 348distances as short as possible (particularly in case of GRB alerts), 349the range for elevational movements spans from 350$\theta$\,=\,+100\textdegree{} to $\theta$\,=\,-70\textdegree{} where 351the change of sign implies a movement {\em across the zenith}. This 352so-called {\it reverse mode} is currently not in use, as it might result 353in hysteresis effects of the active mirror control system, still under 354investigation, due to shifting of weight at zenith. The accessible 355range in both directions and on both axes is limited by software to the 356mechanically accessible range. For additional safety, hardware end 357switches are installed directly connected to the drive controller 358units, initiating a fast, controlled deceleration of the system when 359activated. To achieve an azimuthal movement range exceeding 360360\textdegree{}, one of the two azimuth end-switches needs to be 361deactivated at any time. Therefore, an additional {\em direction 362switch} is located at $\varphi$\,=\,164\textdegree{}, short-circuiting 363the end switch currently out of range. 364 365\section{Setup of the motion control system}\label{sec3} 366 367The motion control system similarly uses standard industry components. 368The drive is controlled by the feedback of encoders measuring the 369angular positions of the motors and the telescope axes. The encoders on 370the motor axes provide information to micro controllers dedicated for 371motion control, initiating and monitoring every movement. Professional 372built-in servo loops take over the suppression of oscillations. 
The 373correct pointing position of the system is ensured by a computer 374program evaluating the feedback from the telescope axes and initiating 375the motion executed by the micro controllers. Additionally, the 376motor-axis encoders are also evaluated to increase accuracy. The 377details of this system, as shown in figure~\ref{figure2}, are discussed 378below. 379 380\begin{figure*}[htb] 381\begin{center} 382 \includegraphics*[width=0.48\textwidth,angle=0,clip]{figure2a.eps} 383 \hfill 384 \includegraphics*[width=0.48\textwidth,angle=0,clip]{figure2b.eps} 385\caption{Schematics of the MAGIC\,I ({\em left}) and MAGIC\,II ({\em 386right}) drive system. The sketches shows the motors, the motor-encoder 387feedback as well as the shaft-encoder feedback, and the motion-control 388units, which are themselves controlled by a superior control, receiving 389commands from the control PC, which closes the position-control loop. 390The system is described in more details in section~\ref{sec3}.} 391\label{figure2} 392\end{center} 393\end{figure*} 394 395\subsection{Position feedback system} 396 397The angular telescope positions are measured by three shaft-encoders 398(type designation Hengstler\footnote{\url{http://www.hengstler.de}\\Hengstler GmbH, 39978554 Aldingen, Germany} AC61/1214EQ.72OLZ). These absolute multi-turn 400encoders have a resolution of 4\,096 (10\,bit) revolutions and 16\,384 401(14\,bit) steps per revolution, corresponding to an intrinsic angular 402resolution of 1.3$^\prime$ per step. One shaft encoder is located on 403the azimuth axis, while two more encoders are fixed on either side of 404the elevation axis, increasing the resolution and allowing for 405measurements of the twisting of the dish (fig.~\ref{figure3}). All 406shaft encoders used are watertight (IP\,67) to withstand the extreme 407weather conditions occasionally encountered at the telescope site. The 408motor positions are read out at a frequency of 1\,kHz from 10\,bit 409relative rotary encoders fixed on the motor axes. Due to the gear ratio 410of more than one thousand between motor and load, the 14\,bit 411resolution of the shaft encoder system on the axes can be interpolated 412further using the position readout of the motors. For communication 413with the axis encoders, a CANbus interface with the CANopen protocol is 414in use (operated at 125\,kbps). The motor encoders are directly 416 417\subsection{Motor control} 418 419The three servo motors are connected to individual motion controller 420units ({\em DKC}, type designation Bosch Rexroth, 421DKC~ECODRIVE\,03.3-200-7-FW), serving as intelligent frequency 422converters. An input value, given either analog or 423digital, is converted to a predefined output, e.g., command position, 424velocity or torque. All command values are processed through a chain of 425built-in controllers, cf. fig.~\ref{figure4}, resulting in a final 426command current applied to the motor. This internal chain of control 427loops, maintaining the movement of the motors at a frequency of 4281\,kHz, fed back by the rotary encoders on the corresponding motor axes. 429Several safety limits ensure damage-free operation of the system even 430under unexpected operation conditions. These safety limits are, e.g., 431software end switches, torque limits, current limits or 432control-deviation limits. 433 434To synchronize the two azimuth motors, a master-slave setup is 435used. While the master is addressed by a command velocity, the 436slave is driven by the command torque output of the master. 
This 437operation mode ensures that both motors can apply their combined force 438to the telescope structure without oscillations. In principle it is 439possible to use a bias torque to eliminate play. This feature 440was not used because the play is negligible anyhow. 441 442\begin{figure}[htb] 443\begin{center} 444 \includegraphics*[width=0.48\textwidth,angle=0,clip]{figure3.eps} 445\caption{The measured difference between the two shaft-encoders fixed 446on either side of the elevation axis versus zenith angle. Negative 447zenith angles mean that the telescope has been flipped over the zenith 448position to the opposite side. The average offset from zero 449corresponds to a the twist of the two shaft encoders with respect to 450each other. The error bars denote the spread of several measurements. 451Under normal conditions the torsion between both ends of 452the axis is less than the shaft-encoder resolution.} 453\label{figure3} 454\end{center} 455\end{figure} 456 457\subsection{Motion control} 458\begin{figure*}[htb] 459\begin{center} 460 \includegraphics*[width=0.78\textwidth,angle=0,clip]{figure4.eps} 461 \caption{The internal flow control between the individual controllers 462inside the drive control unit. Depending on the type of the command 463value, different controllers are active. The control loops are closed by 464the feedback of the rotary encoder on the motor, and a possible 465controller on the load axis, as well as the measurement of the current.} 466\label{figure4} 467\end{center} 468\end{figure*} 469 470% zub machine control AG, Kastaniensteif 7, CH-6047 Kastanienbaum 471% www.zub.de 472 473The master for each axis is controlled by presetting a rotational 474speed defined by $\pm$10\,V on its analog input. The input voltage is 475produced by a programmable micro controller dedicated to analog motion 476control, produced by Z\&B\footnote{\url{http://www.zub.de}\\zub machine control AG, 6074 477Kastanienbaum, Switzerland} ({\em MACS}, type 478designation MACS). The feedback is realized through a 500-step 479emulation of the motor's rotary encoders by the DKCs. Elevation and azimuth movement is regulated by individual 480MACSs. The MACS controller itself communicates with the control 481software (see below) through a CANbus connection. 482 483It turned out that in particular the azimuth motor system seems to be 484limited by the large moment of inertia of the telescope 485($J_{\mathrm{az}}$\,$\approx$\,4400\,tm$^2$, for comparison 486$J_{\mathrm{el}}$\,$\approx$\,850\,tm$^2$; note that the exact numbers depend 487on the current orientation of the telescope). At the same time, the 488requirements on the elevation drive are much less demanding.\\ 489 490\noindent {\em MAGIC\,II}\quad For the drive system 491several improvements have been provided:\\\vspace{-2ex} 492\begin{itemize} 493\item 13\,bit absolute shaft-encoders (type designation Heidenhain\footnote{\url{http://www.heidenhain.de}\\Dr.~Johannes Heidenhain GmbH, 83301 Traunreut, Germany} 494ROQ\,425) are installed, providing an additional sine-shaped 495$\pm$1\,Vss output within each step. This allows for a more accurate 496interpolation and hence a better resolution than a simple 14\,bit 497shaft-encoder. These shaft-encoders are also water tight (IP\,64), and 498they are read out via an EnDat\,2.2 interface. 499\item All encoders are directly connected to the DKCs, providing 500additional feedback from the telescope axes itself. 
The DKC can control 501the load axis additionally to the motor axis providing a more accurate 502positioning, faster movement by improved oscillation suppression and a 503better motion control of the system. 504\item The analog transmission of the master's command torque to the 505slave is replaced by a direct digital communication (EcoX) 506of the DKCs. This allows for more robust and precise slave control. 507Furthermore the motors could be coupled with relative angular synchronism 508allowing to suppress deformations of the structure by keeping the 509axis connecting both motors stable. 510\item A single professional programmable logic controller (PLC), in German: 511{\em Speicherprogammierbare Steuerung} (SPS, type designation Rexroth 512Bosch, IndraControl SPS L\,20) replaces the two MACSs. Connection between 513the SPS and the DKCs is now realized through a digital Profibus DP 514interface substituting the analog signals. 515\item The connection from the SPS to the control PC is realized via 516Ethernet connection. Since Ethernet is more commonly in use than 517CANbus, soft- and hardware support is much easier. 518\end{itemize} 519 520\subsection{PC control} 521 522The drive system is controlled by a standard PC running a Linux 523operating system, a custom-designed software based on 524ROOT~\citep{www:root} and the positional astronomy library 525{\em slalib}~\citep{slalib}. 526 527Algorithms specialized for the MAGIC tracking system are imported from 528the Modular Analysis and Reconstruction Software package 529(MARS)~\citep{Bretz:2003icrc, Bretz:2005paris, Bretz:2008gamma} also 530used in the data analysis~\citep{Bretz:2005mars, Dorner:2005paris}. 531 532\subsubsection{Positioning} 533 534Whenever the telescope has to be positioned, the relative distance to 535the new position is calculated in telescope coordinates and then 536converted to motor revolutions. Then, the micro controllers are 537instructed to move the motors accordingly. Since the motion is 538controlled by the feedback of the encoders on the motor axes, not on 539the telescope axes, backlash and other non-deterministic irregularities 540cannot easily be taken into account. Thus it may happen that the final 541position is still off by a few shaft-encoder steps, although the motor 542itself has reached its desired position. In this case, the procedure is 543repeated up to ten times. After ten unsuccessful iterations, the system 544would go into error state. In almost all cases the command position is 545reached after at most two or three iterations. 546 547If a slewing operation is followed by a tracking operation of a 548celestial target position, tracking is started immediately after the 549first movement without further iterations. Possible small deviations, 550normally eliminated by the iteration procedure, are then corrected by 551the tracking algorithm. 552 553\subsubsection{Tracking} 554To track a given celestial target position (RA/Dec, J\,2000.0, 555FK\,5~\citep{1988VeARI..32....1F}), astrometric and misalignment 556corrections have to be taken into account. While astrometric 557corrections transform the celestial position into local coordinates as 558seen by an ideal telescope (Alt/Az), misalignment corrections convert 559them further into the coordinate system specific to the real telescope. 560In case of MAGIC, this coordinate system is defined by the position 561feedback system. 
562 563The tracking algorithm controls the telescope by applying a command 564velocity for the revolution of the motors, which is re-calculated every 565second. It is calculated from the current feedback position and the 566command position required to point at the target five seconds ahead in 567time. The timescale of 5\,s is a compromise between optimum tracking 568accuracy and the risk of oscillations in case of a too short timescale. 569 570As a crosscheck, the ideal velocities for the two telescope axes are 571independently estimated using dedicated astrometric routines of slalib. 572For security reasons, the allowable deviation between the determined 573command velocities and the estimated velocities is limited. If an 574extreme deviation is encountered the command velocity is set to zero, 575i.e., the movement of the axis is stopped. 576 577\subsection{Fast positioning} 578 579The observation of GRBs and their afterglows in very-high energy 580gamma-rays is a key science goal for the MAGIC telescope. Given that 581alerts from satellite monitors provide GRB positions a few seconds 582after their outburst via the {\em Gamma-ray Burst Coordination 583Network}~\cite{www:gcn}, typical burst durations of 10\,s to 584100\,s~\cite{Paciesas:1999} demand a fast positioning of the 585telescope. The current best value for the acceleration has been set to 58611.7\,mrad\,s$^{-2}$. It is constrained by the maximum constant force 587which can be applied by the motors. Consequently, the maximum allowed 588velocity can be derived from the distance between the end-switch 589activation and the position at which a possible damage to the telescope 590structure, e.g.\ ruptured cables, would happen. From these constraints, 591the maximum velocity currently in use, 70.4\,mrad\,s$^{-1}$, was 592determined. Note that, as the emergency stopping distance evolves 593quadratically with the travel velocity, a possible increase of the 594maximum velocity would drastically increase the required braking 595distance. As safety procedures require, an emergency stop is completely 596controlled by the DKCs itself with the feedback of the motor encoder, 597ignoring all other control elements. 598 599Currently, automatic positioning by 600$\Delta\varphi$\,=\,180\textdegree{} in azimuth to the target position 601is achieved within 45\,s. The positioning time in elevation is not 602critical in the sense that the probability to move a longer path in 603elevation than in azimuth is negligible. Allowing the telescope drive 604to make use of the reverse mode, the requirement of reaching any 605position in the sky within 30\,s is well met, as distances in azimuth 606are substantially shortened. The motor specifications allow for a 607velocity more than four times higher. In practice, the maximum possible 608velocity is limited by the acceleration force, at slightly more than 609twice the current value. The actual limiting factor is the braking 610distance that allows a safe deceleration without risking any damage to 611the telescope structure. 612 613With the upgraded MAGIC\,II drive system, during commissioning in 2008 August, a 614maximum acceleration and deceleration of $a_{az}$\,=\,30\,mrad\,s$^{-2}$ 615and $a_{zd}$\,=\,90\,mrad/s$^{-2}$ and a maximum velocity of 616$v_{az}$\,=\,290\,mrad\,s$^{-1}$ and $v_{zd}$\,=\,330\,mrad\,s$^{-1}$ 617could be reached. With these values the limits of the motor power are 618exhausted. 
This allowed a movement of 619$\Delta\varphi$\,=\,180\textdegree/360\textdegree{} in azimuth within 20\,s\,/\,33\,s. 620 621\subsection{Tracking precision} 622 623The intrinsic mechanical accuracy of the tracking system is determined 624by comparing the current command position of the axes with the feedback 625values from the corresponding shaft encoders. These feedback values 626represent the actual position of the axes with highest precision 627whenever they change their feedback values. At these instances, the 628control deviation is determined, representing the precision with which 629the telescope axes can be operated. In the case of an ideal mount this 630would define the tracking accuracy of the telescope. 631 632In figure~\ref{figure5} the control deviation measured for 10.9\,h of 633data taking in the night of 2007 July 22/23 and on the evening of July 63423 is shown, expressed as absolute deviation on the sky taking both 635axes into account. In almost all cases it is well below the resolution 636of the shaft encoders, and in 80\% of the time it does not exceed 1/8 637of this value ($\sim$10$^{\prime\prime}$). This means that the accuracy of the 638motion control, based on the encoder feedback, is much better than 6391$^\prime$ on the sky, which is roughly a fifth of the diameter of a 640pixel in the MAGIC camera (6$^\prime$, c.f.~\cite{Beixeras:2005}). 641\begin{figure}[htb] 642\begin{center} 643 \includegraphics*[width=0.48\textwidth,angle=0,clip]{figure5.eps} 644 \caption{Control deviation between the expected, i.e. calculated, 645position, and the feedback position of the shaft encoders in the moment 646at which one change their readout values. For simplicity, the control 647deviation is shown as absolute control deviation projected on the sky. 648The blue lines correspond to fractions of the shaft-encoder resolution. 649The peak at half of the shaft-encoder resolution results from cases 650in which one of the two averaged elevation encoders is off by one step. 651} 652\label{figure5} 653\end{center} 654\end{figure} 655 656In the case of a real telescope ultimate limits of the tracking 657precision are given by the precision with which the correct command 658value is known. Its calibration is discussed hereafter. 659 660\section{Calibration}\label{sec4} 661 662To calibrate the position command value, astrometric corrections 663(converting the celestial target position into the target position of 664an ideal telescope) and misalignment corrections (converting it further 665into the target position of a real telescope), have to be taken into 666account. 667 668\subsection{Astrometric corrections} 669 670The astrometric correction for the pointing and tracking algorithms is 671based on a library for calculations usually needed in positional 672astronomy, {\em slalib}~\cite{slalib}. Key features of this library are 673the numerical stability of the algorithms and their well-tested 674implementation. The astrometric corrections in use 675(fig.~\ref{figure6}) -- performed when converting a celestial position 676into the position as seen from Earth's center (apparent position) -- 677take into account precession and nutation of the Earth and annual 678\begin{figure}[htb] 679\begin{center} % Goldener Schnitt 680 \includegraphics*[width=0.185\textwidth,angle=0,clip]{figure6.eps} 681\caption{The transformation applied to a given set of catalog source 682coordinates to real-telescope coordinates. 
These corrections include 683all necessary astrometric corrections, as well as the pointing 684correction to transform from an ideal-telescope frame to the frame of a 685real telescope. 686%A detailed description of all corrections and the 687%calibration of the pointing model is given in section~\ref{sec4}. 688} 689\label{figure6} 690\end{center} 691\end{figure} 692aberration, i.e., apparent displacements caused by the finite speed of 693light combined with the motion of the observer around the Sun during 694the year. Next, the apparent position is transformed to the observer's 695position, taking into account atmospheric refraction, the Earth's 696rotation, and diurnal aberration, i.e., the motion of the observer 697around the Earth's rotation axis. Some of these effects are so small 698that they are only relevant for nearby stars and optical astronomy. But 699as optical observations of such stars are used to {\em train} the 700misalignment correction, all these effects are taken into account. 701 702\subsection{Pointing model} 703 704Imperfections and deformations of the mechanical construction lead to 705deviations from an ideal telescope, including the non-exact 706alignment of axes, and deformations of the telescope structure. 707 708%new 709In the case of the MAGIC telescopes the optical axis of the mirror is 710defined by an automatic alignment system. This active mirror control 711is programmed not to change the optical axis once defined, but only 712controls the optical point spread function of the mirror, i.e., it does 713not change the center of gravity of the light distribution. 714This procedure is applied whenever the telescope is observing including 715any kind of calibration measurement for the drive system. The precision 716of the axis alignment of the mirrors is better than 0.2$^\prime$ and can 717therefor be neglected. 718%new 719 720%To assure reliable pointing and tracking accuracy, such effects have to 721%be taken into account. 722Consequently, to assure reliable pointing and tracking accuracy, 723mainly the mechanical effects have to be taken into account. 724Therefore the tracking software employs an analytical pointing model 725based on the {\rm TPOINT}\texttrademark{} telescope modeling 726software~\cite{tpoint}, also used for optical telescopes. This model, 727called {\em pointing model}, parameterizes deviations from the ideal 728telescope. Calibrating the pointing model by mispointing measurements 729of bright stars, which allows to determine the necessary corrections, 730is a standard procedure. Once calibrated, the model is applied online. 731Since an analytical model is used, the source of any deviation can be 732identified and traced back to components of the telescope mount.\\ 733 734Corrections are parameterized by alt-azimuthal terms~\cite{tpoint}, 735i.e., derived from vector transformations within the proper coordinate 736system. The following possible misalignments are taken into account:\\\vspace{-2ex} 737\begin{description} 738\item[Zero point corrections ({\em index errors})] Trivial offsets 739between the zero positions of the axes and the zero positions of the 740shaft encoders. 741\item[Azimuth axis misalignment] The misalignment of the azimuth axis 742in north-south and east-west direction, respectively. For MAGIC these 743corrections can be neglected. 
744\item[Non-perpendicularity of axes] Deviations from right angles 745between any two axes in the system, namely (1) non-perpendicularity of 746azimuth and elevation axes and (2) non-perpendicularity of elevation 747and pointing axes. In the case of the MAGIC telescope these corrections 748are strongly bound to the mirror alignment defined by the active mirror 749control. 750\item[Non-centricity of axes] The once-per-revolution cyclic errors 751produced by de-centered axes. This correction is small, and thus difficult 752to measure, but the most stable correction throughout the years. 753\end{description} 754 755\noindent{\bf Bending of the telescope structure} 756\begin{itemize} 757\item A possible constant offset of the mast bending. 758\item A zenith angle dependent correction. It describes the camera mast 759bending, which originates by MAGIC's single thin-mast camera support 760strengthened by steel cables. 761\item Elevation hysteresis: This is an offset correction introduced 762depending on the direction of movement of the elevation axis. It is 763necessary because the sliding bearing, having a stiff connection with 764the encoders, has such a high static friction that in case of reversing 765the direction of the movement, the shaft-encoder will not indicate any 766movement for a small and stable rotation angle, even though the 767telescope is rotating. Since this offset is stable, it can easily 768be corrected after it is fully passed. The passage of the hysteresis 769is currently corrected offline only. 770\end{itemize} 771\vspace{1em} 772 773Since the primary feedback is located on the axis itself, corrections 774for irregularities of the chain mounting or sprocket wheels are 775unnecessary. Another class of deformations of the telescope-support 776frame and the mirrors are non-deterministic and, consequently, pose an 777ultimate limit of the precision of the pointing. 778 779% Moved to results 780 781\subsection{Determination} 782 783%\subsubsection{Calibration concept} 784\begin{figure*}[htb] 785\begin{center} 786 \includegraphics*[width=0.48\textwidth,angle=0,clip]{figure7a.eps} 787 \hfill 788 \includegraphics*[width=0.48\textwidth,angle=0,clip]{figure7b.eps} 789 \caption{A single frame (left) and an average of 125 frames (right) of 790the same field of view taken with the high sensitivity PAL CCD camera 791used for calibration of the pointing model. The frames were taken 792with half moon. } 793\label{figure7} 794\end{center} 795\end{figure*} 796 797To determine the coefficients of a pointing model, calibration 798data is recorded. It consists of mispointing measurements depending 799on altitude and azimuth angle. Bright stars are tracked with the 800telescope at positions uniformly distributed in local coordinates, 801i.e., in altitude and azimuth angle. The real pointing 802position is derived from the position of the reflection of a bright 803star on a screen in front of the MAGIC camera. The center 804of the camera is defined by LEDs mounted on an ideal ($\pm$1\,mm) 805circle around the camera center, cf.~\citet{Riegel:2005icrc}. 806 807Having enough of these datasets available, correlating ideal and real 808pointing position, a fit of the coefficients of the model can be made, 809minimizing the pointing residual. 
810 811\subsubsection{Hardware and installations} 812 813A 0.0003\,lux, 1/2$$^{\prime\prime}$$ high-sensitivity standard PAL CCD 814camera (type designation \mbox{Watec}~WAT-902\,H) equipped with a zoom lens (type: Computar) is used 815for the mispointing measurements. The camera is read out at a rate of 81625\,frames per second using a standard frame-grabber card in a standard 817PC. The camera has been chosen providing adequate performance and 818easy readout, due to the use of standard components, for a very cheap price 819($<$\,500\,Euro). The tradeoff for the high sensitivity of the camera is its high 820noise level in each single frame recorded. Since there are no rapidly 821moving objects within the field of view, a high picture quality can be 822achieved by averaging typically 125\,frames (corresponding to 5\,s). An 823example is shown in figure~\ref{figure7}. This example also illustrates 824the high sensitivity of the camera, since both pictures of the 825telescope structure have been taken with the residual light of less 826than a half-moon. In the background individual stars can be seen. 827Depending on the installed optics, stars up to 12$^\mathrm{m}$ are 828visible. With our optics and a safe detection threshold the limiting 829magnitude is typically slightly above 9$^\mathrm{m}$ for direct 830measurements and on the order of 5$^\mathrm{m}$\dots4$^\mathrm{m}$ for 831images of stars on the screen. 832 833%\begin{table}[htb] 834%\begin{center} 835%\small 836%\begin{tabular}{|l|l|}\hline 837%Model&Watec WAT-902H (CCIR)\\\hline\hline 838%Pick-up Element&1/2" CCD image sensor\\ 839%&(interline transfer)\\\hline 840%Number of total pixels&795(H)x596(V)\\\hline 841%Minimum Illumination&0.0003\,lx. F/1.4 (AGC Hi)\\ 842%&0.002\,lx. F/1.4 (AGC Lo)\\\hline 843%Automatic gain&High: 5\,dB\,-\,50\,dB\\ 844%&Low: 5\,dB\,-\,32\,dB\\\hline 845%S/N&46\,dB (AGC off)\\\hline 846%Shutter Speed&On: 1/50\,-\,1/100\,000\,s\\ (electronic iris)&Off: 1/50\,s\\\hline 847%Backlight compensation&On\\\hline 848%Power Supply&DC +10.8\,V\,-\,13.2\,V\\\hline 849%Weight&Approx. 90\,g\\\hline 850%\end{tabular} 851%\end{center} 852%\caption{Technical specifications of the CCD camera used for 853%measuring of the position of the calibration stars on the PM camera lid.} 854%\label{table} 855%\end{table} 856 857\subsubsection{Algorithms} 858 859An example of a calibration-star measurement is shown in 860figure~\ref{figure8}. Using the seven LEDs mounted on a circle around 861the camera center, the position of the camera center is determined. 862Only the upper half of the area instrumented is visible, since 863the lower half is covered by the lower lid, holding a special 864reflecting surface in the center of the camera. The LED positions are 865evaluated by a simple cluster-finding algorithm looking at pixels more 866than three standard deviations above the noise level. The LED position 867is defined as the center of gravity of its light distribution, its 868search region by the surrounding black-colored boxes. For 869simplicity the noise level is determined just by calculating the mean 870and the root-mean-square within the individual search regions below a 871fixed threshold dominated by noise. 872 873\begin{figure}[htb] 874\begin{center} 875 \includegraphics*[width=0.48\textwidth,angle=0,clip]{figure8.eps} 876\caption{A measurement of a star for the calibration of the pointing 877model. 
Visible are the seven LEDs and their determined center of gravity, as well as the reconstructed circle on which the LEDs are located. The LEDs on the bottom part are hidden by the lower lid, holding a screen in front of the camera.
%For calibration, the center of gravity of the measured star, as visible in the center, is compared
%to the center of the circle given by the LEDs, coinciding with the center of the PM camera.
The black regions are the search regions for the LEDs and the calibration star. A few dead pixels in the CCD camera can also be recognized.}
\label{figure8}
\end{center}
\end{figure}

Since three points are enough to define a circle, from all possible combinations of detected spots, the corresponding circle is calculated. In case of misidentified LEDs, which sometimes occur due to light reflections from the telescope structure, the determination can be improved further by applying small offsets of the non-ideal LED positions. The radius distribution is Gaussian and its resolution is $\sigma$\,$\apprle$\,1\,mm ($\mathrm{d}r/r\approx0.3$\textperthousand) on the camera plane, corresponding to $\sim$1$^{\prime\prime}$.

The center of the ring is calculated as the average of all circle centers after quality cuts. Its resolution is $\sim$2$^{\prime\prime}$. In this setup, the large number of LEDs guarantees operation even in case one LED could not be detected due to damage or scattered light.

To find the spot of the reflected star itself, the same cluster-finder is used to determine its center of gravity. This gives reliable results even in case of saturation. Only very bright stars, brighter than 1.0$^m$, are found to saturate the CCD camera asymmetrically.

Using the position of the star with respect to the camera center, the pointing position corresponding to the camera center is calculated. This position is stored together with the readout from the position feedback system. The difference between the telescope pointing position and the feedback position is described by the pointing model. Investigating the dependence of these differences on zenith and azimuth angle, the correction terms of the pointing model can be determined. Its coefficients are fitted by minimizing the absolute residuals on the celestial sphere.

\subsection{Results}

Figure~\ref{figure9} shows the residuals, taken between 2006 October and 2007 July, before and after application of the fit of the pointing model. For convenience, offset corrections are applied to the residuals before correction. Thus, the red curve is a measurement of the alignment quality of the structure, i.e., the pointing accuracy with offset corrections only. By fitting a proper model, the pointing accuracy can be improved to a value below the intrinsic resolution of the system, i.e., below the shaft-encoder resolution. In more than 83\% of all cases the tracking accuracy is better than 1.3$^\prime$ and it hardly ever exceeds 2.5$^\prime$. The few datasets exceeding 2.5$^\prime$ are very likely due to imperfect measurement of the real pointing position of the telescope, i.e., of the center of gravity of the star light.

The average absolute correction applied (excluding the index error) is on the order of 4$^\prime$. Given the size, weight and structure of the telescope this proves a very good alignment and low sagging of the structure. The elevation hysteresis, which is intrinsic to the structure, the non-perpendicularity and the non-centricity of the axes are all of the order of 3$^\prime$, while the azimuth axis misalignment is $<$\,0.6$^\prime$. These numbers are well in agreement with the design tolerances of the telescope.

\begin{figure}[htb]
\begin{center}
 \includegraphics*[width=0.48\textwidth,angle=0,clip]{figure9.eps}
 \caption{Distribution of the absolute pointing residual on the sky between the measured position of calibration stars and their nominal position, with only offset corrections for both axes (red) and with a fitted pointing model (blue) applied. Here in total 1162 measurements were used, homogeneously distributed over the local sky. After application of the pointing model the residuals are well below the shaft-encoder resolution, i.e., the knowledge of the mechanical position of the axes.
\label{figure9}
}
\end{center}
\end{figure}

\subsubsection{Limitations}

The ultimate limit on the achievable pointing precision is set by effects which are difficult to correlate or measure, and by non-deterministic deformations of the structure or mirrors. For example, the azimuth support consists of a railway rail with some small deformations in height due to the load, resulting in a wavy movement that is difficult to parameterize. For the wheels on the six bogeys, simple, not precisely machined crane wheels were used, which may amplify horizontal deformations. Other deformations are caused by temperature changes and wind loads, which are difficult to control for telescopes without a dome, and which cannot be modeled. It should be noted that the azimuth structure can change its diameter by up to 3\,cm due to day-night temperature differences, indicating that thermal effects have a non-negligible and non-deterministic influence.

Like every two-axis mount, an alt-azimuth mount has a blind spot near its upward position, resulting from misalignments of the axes which are impossible to correct by moving one axis or the other. From the size of the applied correction it can be derived that the blind spot must be on the order of $\lesssim$\,6$^\prime$ around zenith. Although the MAGIC drive system is powerful enough to keep on track pointing about 6$^\prime$ away from zenith, for safety reasons, i.e., to avoid fast movements under normal observation conditions, the observation limit has been set to $\theta$\,$<$\,30$^\prime$. Such fast movements are necessary to change the azimuth position from moving the telescope upwards in the East to downwards in the South. In the case of an ideal telescope, pointing at zenith, even an infinitely fast movement would be required.

\subsubsection{Stability}

With each measurement of a calibration star, the present pointing uncertainty is also recorded. This allows for monitoring of the pointing quality and for offline correction. In figure~\ref{figure10} the
\begin{figure}[htb]
\begin{center}
 \includegraphics*[width=0.48\textwidth,angle=0,clip]{figure10.eps}
\caption{The distribution of mispointing measurements. Each measurement is a direct measurement of the pointing accuracy; the plot shows its time-evolution. Details on the bin edges and the available statistics are given in the caption of table~\ref{table2}. Since the distribution is asymmetric, quantiles are shown, from bottom to top, at 5\%, 13\%, 32\%, 68\%, 87\% and 95\%. The dark grey region belongs to the region between the 32\% and 68\% quantiles.}
\label{figure10}
\end{center}
\end{figure}
evolution of the measured residuals over the years is shown. The continuous monitoring was started in March 2005 and is still ongoing. Quantiles are shown since the distribution can be asymmetric depending on how the residuals are distributed on the sky. The points have been grouped, where the grouping reflects data taken under the same conditions (pointing model, mirror alignment, etc.). It should be noted that the measured residuals depend on zenith and azimuth angle, i.e., the distributions shown are biased due to inhomogeneous distributions on the sky in case of low statistics. Therefore the available statistics are given in table~\ref{table2}.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|c|}\hline
Begin&Counts\\\hline\hline
2005/03/20&29\\
2005/04/29&43\\
2005/05/25&30\\
2005/06/08&26\\
2005/08/15&160\\
2005/09/12&22\\\hline
\end{tabular}
\hfill
\begin{tabular}{|l|c|}\hline
Begin&Counts\\\hline\hline
2005/11/24&38\\
2006/03/19&502\\
2006/10/17&827\\
2007/07/31&87\\
2008/01/14&542\\
2008/06/18&128\\\hline
\end{tabular}\hfill
\end{center}
\caption{Available statistics corresponding to the distributions shown in figure~\ref{figure10}. Especially in cases of low statistics the shown distribution can be influenced by an inhomogeneous distribution of the measurements on the local sky. The dates given correspond to dates for which a change in the pointing accuracy, as for example a change to the optical axis or the application of a new pointing model, is known.}
\label{table2}
\end{table}

The mirror focusing can influence the alignment of the optical axis of the telescope, i.e., it can modify the pointing model. Therefore a recalibration of the mirror focusing can worsen the tracking accuracy, which is later corrected by a new pointing model. Although the automatic mirror control is programmed such that a new calibration should not change the center of gravity of the light distribution, this happened sometimes in the past due to software errors.

The determination of the pointing model also relies on a good statistical basis, because the measured residuals are of a similar magnitude as the accuracy of a single calibration-star measurement. The visible improvements and deteriorations are mainly a consequence of new mirror focusing and the subsequent implementation of new pointing models. The improvement over the past year is explained by the gain in statistics.

On average the systematic pointing uncertainty was always better than three shaft-encoder steps (corresponding to 4$^\prime$), most of the time better than 2.6$^\prime$, and well below one shaft-encoder step, i.e.\ 1.3$^\prime$, in the past year. Except for changes to the pointing model or the optical axis, as indicated by the bin edges, no degradation or change with time of the pointing model or worsening of the limit given by the telescope mechanics could be found.

\section{Scalability}\label{sec5}

With the aim to reach lower energy thresholds, the next generation of Cherenkov telescopes will also include larger and heavier ones. Therefore more powerful drive systems will be needed. The scalable drive system of the MAGIC telescope is suited to meet this challenge. With its synchronous motors and their master-slave setup, it can easily be extended to larger telescopes at moderate costs, or even scaled down to smaller ones using less powerful components. Consequently, telescopes in future projects, with presumably different sizes, can be driven by similar components, resulting in a major simplification of maintenance. With the current setup, a tracking accuracy at least of the order of the shaft-encoder resolution is guaranteed. The pointing accuracy -- already including all possible pointing corrections -- is dominated by dynamic and unpredictable deformations of the mount, e.g., temperature expansion.

\section{Outlook}\label{outlook}

Currently, efforts are ongoing to implement the astrometric subroutines as well as the application of the pointing model directly into the Programmable Logic Controller. A first test will be carried out within the DWARF project soon~\cite{DWARF}. The direct advantage is that the need for a control PC is eliminated. Additionally, with a more direct communication between the algorithms calculating the nominal position of the telescope mechanics and the control loop of the drive controller, a real-time, and thus more precise, position control can be achieved. As a consequence, the position controller can be addressed directly, even when tracking, and the outermost position control-loop is closed internally in the drive controller. This will ensure an even more accurate and stable motion. Interferences from external sources, e.g., wind gusts, could be counteracted at the moment of appearance by the control on very short timescales, on the order of milliseconds. An indirect advantage is that with a proper setup of the control loop parameters, such a control is precise and flexible enough that a cross-communication between the master and the slaves can also be omitted. Since all motors act as their own master, in such a system a broken motor can simply be switched off or mechanically decoupled without influencing the general functionality of the system.

An upgrade of the MAGIC\,I drive system according to the improvements applied for MAGIC\,II is ongoing.

\section{Conclusions}\label{conclusions}

The scientific requirements demand a powerful, yet accurate drive system for the MAGIC telescope. From its hardware installation and software implementation, the installed drive system exceeds its design specifications as given in section~\ref{design}. At the same time the system performs reliably and stably, showing no deterioration after five years of routine operation. The mechanical precision of the motor movement is almost ten times better than the readout on the telescope axes. The tracking accuracy is dominated by random deformations and hysteresis effects of the mount, but is still negligible compared to the measurement of the position of the telescope axes. The system features integrated tools, like an analytical pointing model. Fast positioning for gamma-ray burst follow-up is achieved on average within less than 45 seconds, or, if movements {\em across the zenith} are allowed, 30 seconds. Thus, the drive system makes MAGIC the best suited telescope for observations of these phenomena at very high energies.

For the second phase of the MAGIC project and particularly for the second telescope, the drive system has been further improved. By design, the drive system is easily scalable from its current dimensions to larger and heavier telescope installations as required for future projects. The improved stability is also expected to meet the stability requirements necessary when operating a larger number of telescopes.

\section[]{Acknowledgments}
The authors acknowledge the support of the MAGIC collaboration, and thank the IAC for providing excellent working conditions at the Observatorio del Roque de los Muchachos. The MAGIC project is mainly supported by BMBF (Germany), MCI (Spain), and INFN (Italy). We thank the construction department of the MPI for Physics for their help in the design and installation of the drive system, as well as Eckart Lorenz for some important comments concerning this manuscript. R.M.W.\ acknowledges financial support by the MPG. His research is also supported by the DFG Cluster of Excellence ``Origin and Structure of the Universe''.

\begin{thebibliography}{00}
\providecommand{\natexlab}[1]{#1}
\providecommand{\url}[1]{\texttt{#1}} \expandafter\ifx\csname urlstyle\endcsname\relax
  \providecommand{\doi}[1]{doi: #1}\else
  \providecommand{\doi}{doi: \begingroup \urlstyle{rm}\Url}\fi

\bibitem[{Lorenz}(2004)]{Lorenz:2004} E.~{Lorenz}, New Astron.~Rev. 48 (2004) 339.
\bibitem[Cortina et~al.(2005)]{Cortina:2005} J.~Cortina et~al. (MAGIC Collab.), in: Proc. 29th Int. Cosm. Ray Conf., Pune, India, August 2005, Vol. 5, 359.
\bibitem[Goebel(2007)]{Goebel:2007} F.~Goebel (MAGIC Collab.), in: Proc. 30th Int. Cosm. Ray Conf., July 2007, Merida, Mexico, preprint (arXiv:0709.2605).
\bibitem[{Aliu} et~al.(2008)]{Sci} E.~{Aliu} et~al. (MAGIC Collab.), Science, 16 October 2008 (10.1126/science.1164718).
\bibitem[{Albert} et~al.(2006)]{2006ApJ...642L.119A} J.~{Albert} et~al. (MAGIC Collab.), ApJ 642 (2006) L119.
\bibitem[{Albert} et~al.(2007{\natexlab{b}})]{2007ApJ...667L..21A} J.~{Albert} et~al. (MAGIC Collab.), ApJ 667 (2008) L21.
\bibitem[{Albert} et~al.(2008)]{2008Sci...320.1752M} J.~{Albert} et~al. (MAGIC Collab.), Science 320 (2008) 1752.
\bibitem[{Somerville} et~al.(2001){Somerville}, {Primack}, and {Faber}]{2001MNRAS.320..504S} R.~S. {Somerville}, J.~R. {Primack}, and S.~M. {Faber}, Mon. Not. R. Astr. Soc. 320 (2001) 504.
\bibitem[{Kneiske}(2007)]{2007arXiv0707.2915K} Kneiske, T.~M., 2008, Chin. J. Astron. Astrophys. Suppl.~8, 219.
\bibitem[{Hauser} and {Dwek}(2001)]{Hauser:2001} M.~G. {Hauser} and E.~{Dwek}, ARA\&A 39 (2001) 249.
\bibitem[{Kneiske} et~al.(2004){Kneiske}, {Bretz}, {Mannheim}, and {Hartmann}]{Kneiske:2004} T.~M. {Kneiske}, T.~{Bretz}, K.~{Mannheim}, and D.~H. {Hartmann}, A\&A 413 (2004) 807.
\bibitem[{Mannheim} et~al.(1996){Mannheim}, {Hartmann}, and {Funk}]{1996ApJ...467..532M} K.~{Mannheim}, D.~{Hartmann}, and B.~{Funk}, ApJ 467 (1996) 532.
\bibitem[{Albert} et~al.(2007{\natexlab{a}})]{2007ApJ...667..358A} J.~{Albert} et~al. (MAGIC Collab.), ApJ 667 (2007) 358.
\bibitem[{Paciesas} et~al.(1999)]{Paciesas:1999} W.~S. {Paciesas} et~al., ApJS 122 (1999) 465.
\bibitem[Bretz et~al.(2003)Bretz, Dorner, and Wagner]{Bretz:2003drive} T.~Bretz, D.~Dorner, and R.~Wagner, in: Proc. 28th Int. Cosm. Ray Conf., August 2003, Tsukuba, Japan, Vol. 5, 2943.
\bibitem[Bretz et~al.(2005)Bretz, Dorner, Wagner, and Riegel]{Bretz:2005drive} T.~Bretz, D.~Dorner, R.~M.~Wagner, and B.~Riegel, in: Proc. Towards a Network of Atmospheric Cherenkov Detectors VII, April 2005, Palaiseau, France.
\bibitem[{Wallace}(2005)]{slalib} P.~T.~{Wallace}, {{SLALIB} -- {P}ositional {A}stronomy {L}ibrary 2.5-3, {P}rogrammer's {M}anual}, 2005, \url{http://star-www.rl.ac.uk/star/docs/sun67.htx/sun67.html}.
\bibitem[{Bretz \& Wagner}(2003)]{Bretz:2003icrc} T.~{Bretz} and R.~Wagner, in: Proc. 28th Int. Cosm. Ray Conf., August 2003, Tsukuba, Japan, Vol. 5, 2947.
\bibitem[{Bretz} and Dorner(2005)]{Bretz:2005paris} T.~{Bretz} and D.~Dorner, in: Proc. Towards a Network of Atmospheric Cherenkov Detectors VII, April 2005, Palaiseau, France.
\bibitem[{Bretz} and {Dorner}(2008)]{Bretz:2008gamma} T.~{Bretz} and D.~{Dorner}, in: International Symposium on High Energy Gamma-Ray Astronomy, July 2008.
\bibitem[Bretz(2005)]{Bretz:2005mars} T.~Bretz, in: Proc. 29th Int. Cosm. Ray Conf., Pune, India, August 2005, Vol. 4, 315.
\bibitem[{Dorner} and Bretz(2005)]{Dorner:2005paris} D.~{Dorner} and T.~Bretz, in: Proc. Towards a Network of Atmospheric Cherenkov Detectors VII, April 2005, Palaiseau, France, p. 571--575.
\bibitem[{Fricke} et~al.(1988){Fricke}, {Schwan}, {Lederle}, et~al.]{1988VeARI..32....1F} W.~{Fricke}, H.~{Schwan}, T.~{Lederle}, et~al., Ver{\"o}ffentlichungen des Astronomischen Rechen-Instituts Heidelberg 32 (1988) 1.
\bibitem[{GCN}()]{www:gcn} Gamma-ray Burst Coordination Network, \url{http://gcn.gsfc.nasa.gov}.
\bibitem[Baixeras et~al.(2005)]{Beixeras:2005} C.~Baixeras et~al. (MAGIC Collab.), in: Proc. 29th Int. Cosm. Ray Conf., Pune, India, August 2005, Vol. 5, 227.
\bibitem[{Wallace}(2001)]{tpoint} P.~T.~{Wallace}, {{TPOINT} -- {A} {T}elescope {P}ointing {A}nalysis {S}ystem}, 2001.
\bibitem[{Riegel} and Bretz(2005)]{Riegel:2005icrc} B.~Riegel, T.~Bretz, D.~Dorner, and R.~M.~Wagner, in: Proc. 29th Int. Cosm. Ray Conf., Pune, India, August 2005, Vol. 5, 219.
\bibitem[{Bretz} et~al.(2008)]{DWARF} T.~Bretz et~al., in: Proc. of the Workshop on Blazar Variability across the Electromagnetic Spectrum, PoS\,(BLAZARS2008)\,074, Palaiseau, France.

\end{thebibliography}

\end{document}
# Log functions in Python

In this article, we will learn about log functions in Python 3.x or earlier. We will look at the different forms of log values with different bases, and discuss using the log functions in the Python standard library.

Here is an example to illustrate the different forms of log functions available in Python. First, let's look at how to use the math module:

>>> import math

After importing we are able to use all functions available in the math module. Now let's look at the implementation.

## Example

```python
import math

# log base e
print("Natural log of 56 is : ", math.log(56))

# log base 8
print("Log base 8 of 64 is : ", math.log(64, 8))

# log base 2
print("Log base 2 of 12 is : ", math.log2(12))

# log base 10
print("Log base 10 of 64 is : ", math.log10(64))

# log of value+1 (log1p)
print("Natural log of 1+4 is : ", math.log1p(4))
```

## Output

Natural log of 56 is : 4.02535169073515
Log base 8 of 64 is : 2.0
Log base 2 of 12 is : 3.584962500721156
Log base 10 of 64 is : 1.806179973983887
Natural log of 1+4 is : 1.6094379124341003

Error handling in case of log functions − when we specify a negative value inside a log function, a ValueError is raised. This is because the logarithm of a negative value is not defined in the real domain. Let's try executing the function for a negative value −

## Example

```python
import math

# log base e of a negative number is not defined
print("Natural log of -5 is : ", math.log(-5))
```

Running this raises ValueError: math domain error.
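To handle such inputs gracefully, the error can be caught. A minimal sketch using only the standard library (the helper name `safe_log` is just an illustration, not a standard function):

```python
import math

def safe_log(value, base=math.e):
    """Return the logarithm of value, or None if it is not defined."""
    try:
        return math.log(value, base)
    except ValueError:
        # math.log raises ValueError ("math domain error") for value <= 0
        print("Logarithm is undefined for", value)
        return None

print(safe_log(56))    # about 4.025
print(safe_log(-5))    # prints the warning and returns None
```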
# Tag Info ## Hot answers tagged bias ### Why underfitting is called high bias and overfitting is called high variance? How can one understand it intuitively? Underfitting is called "Simplifying assumption" (Model is HIGHLY BIASED towards its assumption). your model will think linear hyperplane is good enough to ... Accepted ### Question on bias-variance tradeoff and means of optimization There are a lot of ways bias and variance can be minimized and despite the popular saying it isn't always a tradeoff. The two main reasons for high bias are insufficient model capacity and ... • 7,618 Accepted ### What are bias and variance in machine learning? What are Bias and Variance? Let's start with some basic definitions: Bias: it's the difference between average predictions and true values. Variance: it's the variability of our predictions, i.e. how ... • 5,697 ### Why underfitting is called high bias and overfitting is called high variance? Let us assume our model to be described by $y = f(x) +\epsilon$, with $E[\epsilon]=0, \sigma_{\epsilon}\neq 0$. Let furthermore $\hat{f}(x)$ be our regression function, i.e. the function whose ... • 546 Accepted ### Why is there a trade-off between bias and variance in supervised learning? Why can't we have best of both worlds? The tradeoff between bias and variance summarizes the "tug of war" game between fitting a model that predicts the underlying training dataset well (low bias) and producing a model that doesn't change ... • 1,834 ### Bagging vs Boosting, Bias vs Variance, Depth of trees Question 1: Bagging (Random Forest) is just an improvement on Decision Tree; Decision Tree has lot of nice properties, but it suffers from overfitting (high variance), by taking samples and ... • 886 Accepted ### Trade off between Bias and Variance You want to decide this based on how well your model performs and generalizes. If your model is underfitting, you want to increase your model's complexity, increasing variance and decreasing bias. If ... ### How does C have effects on bias and variance of a Support Vector Machine? The C being a regularized parameter, controls how much you want to punish your model for each misclassified point for a given curve. If you put large value to C it will try to reduce errors but at the ... • 1,401 Accepted ### Bagging vs Boosting, Bias vs Variance, Depth of trees why we are supposed to use weak learners for boosting (high bias) whereas we have to use deep trees for bagging (very high variance) Clearly it wouldn't make sense to bag a bunch of shallow trees/... • 6,035 ### Bias-variance tradeoff in practice (CNN) Normally, the training loss is lower than the validation one. This does not indicate any overfitting. Indeed, it is even suspicious when you training loss is higher than the validation loss. From ... ### Whether add bias or not in a perceptron Suppose bias as a threshold. Using threshold, your activation function moves across the $x$ axis which may get complicated. Consequently, people usually use the bias term and always centre the ... • 13.5k ### Linear machine learning algorithms "often" have high bias/low variance? The "often" is the key here - the way that linear models are built, especially compared to other types of models, are more likely to favor certain types of errors.... in this case, they are more ... • 31 ### Bagging vs pasting in ensemble learning Let's say we have a set of 40 numbers from 1 to 40. We have to pick 4 subsets of 10 numbers. 
Case 1 - Bagging - We will pick the first number, put it back, and then pick the next. This makes all the ... • 5,224 Accepted ### Which between random forest or extra tree is best in a unbalance dataset? Both Random Forest Classifier and Extra Trees randomly sample the features at each split point, but because Random Forest is greedy it will try to find the optimal split point at each node whereas ... • 354 ### Why is it okay to set the bias vector up with zeros, and not the weight matrices? As per Efficient Backprop from Lecun (§4.6) weight should be initialized in the linear region of the activation function. If they are too big, activation function will saturate and provide small ... • 2,204 ### Difference between ethics and bias in Machine Learning The term bias is, to my knowledge, not related to ethics in the context of ML. Instead, it usually refers either to the bias–variance tradeoff or to a learnable parameter of a model, e.g. bias in a ... • 5,020 Accepted ### How is the equation for the relation between prediction error, bias, and variance defined? If: $$Err(x)=E[(Y-\hat{f}(x))^2]$$ Then, by adding and substracting $f(x)$, $$Err(x)=E[(Y-f(x)+f(x)-\hat{f}(x))^2]$$ $$= E[(Y-f(x))^2] + E[(\hat{f}(x)-f(x))^2] + 2E[(Y-f(x))(\hat{f}(x)-f(x))]$$ The ... • 5,714 ### svm optimization problem I think that this system of equation is incorrect. If you know that (3, -1), (3, 1) and (1, 0) are support vectors then you need to solve the next system: ... ### Why underfitting is called high bias and overfitting is called high variance? Check out the answer provided by Brando Miranda in the following Quora question: "High variance means that your estimator (or learning algorithm) varies a lot depending on the data that you give it." ... • 1,181 ### Model biased towards low frequency data? With structured data, you have in general 4 challenges: (1) Missing data (2) Outliers (3) Cardinality (4) Rare values (as a rule of thumb <5%) Rare values in categorical variables tend to ... • 997 ### Bias-variance tradeoff and the uncertainty principle No, the uncertainty principle describes a property that is specific to electrons. That electrons don't display their wave and particle properties simultaneously. Here from Wikibooks: The Heisenberg ... • 3,888 ### Is normalizing the validation set of time series a kind of look ahead bias? To block the data leakage from the validation set to the training set in step (2), We should first split the data to training and validation sets, then Calculate the mean and standard deviation only ... • 8,667 Accepted ### Predictive modeling when output affects future input I am afraid that such situations are fundamentally inherent in predicting/forecasting contexts; quoting from the very recent paper by Taleb et al., On single point forecasts for fat-tailed variables (... • 1,820 Accepted ### Neural network: does bias equal to zero, is the same as, a layer without bias? No, they are not the same: In MLP_without_bias the bias will be zero after training, because of bias=False. In ... • 16.3k ### What do "Under fitting" and "Over fitting" really mean? They have never been clearly defined You can look into the following figure to get an graphical intuition. Visit the source for detailed illustration. Source : https://www.kaggle.com/getting-started/166897 • 743 Accepted ### Does class weighting encourage overfitting when the true class distribution is imbalanced? Your assessment is right. 
You must first determine the data distribution in real-time (production) and only after that proceed with train_set, ... • 74 ### Beginner Question on Understanding Linear Classifier I think it is safe to state that the a $3\times 4$ matrix is used for ease of notation, in actuality the picture will for example be a $400 \times 300$ matrix of pixel values. In this case it would be ... • 121 1 vote ### What is the defining Set in NLP If you read the following sentence at the first line of section 6: The debiasing algorithms are defined in terms of sets of words rather than just pairs, for generality, so that we can consider ... • 1,166 1 vote ### Learning curve using micro F-score and macro F-score Micro calculates F score globally by counting the total true positives, false negatives and false positives. Macro calculates F score for each label and find their unweighted mean. Macro F score does ... • 17.4k
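The bias–variance decomposition discussed in the answers above, $Err(x)=\mathrm{Bias}^2+\mathrm{Variance}+\sigma^2$, can also be checked numerically. A small simulation sketch (illustrative only; the polynomial degrees, noise level, and test point are arbitrary choices) fits a low-degree and a high-degree polynomial to repeated noisy samples of the same function and estimates bias and variance at one test point:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)      # true underlying function
x_test, n, sigma = 0.3, 30, 0.2          # test point, sample size, noise std

def predictions(degree, trials=2000):
    """Refit a polynomial of the given degree on fresh noisy samples, predict at x_test."""
    preds = []
    for _ in range(trials):
        x = rng.uniform(0, 1, n)
        y = f(x) + rng.normal(0, sigma, n)
        preds.append(np.polyval(np.polyfit(x, y, degree), x_test))
    return np.array(preds)

for degree in (1, 10):                   # underfitting vs. overfitting model
    p = predictions(degree)
    bias2, var = (p.mean() - f(x_test)) ** 2, p.var()
    print(f"degree {degree:2d}: bias^2 = {bias2:.4f}, variance = {var:.4f}")
```

The low-degree fit shows large bias and small variance, the high-degree fit the opposite, which is the intuition the answers describe.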
# Energy Conservation

I'm working on a time integration scheme for my research. As a result, I have come across an interesting phenomenon. Somehow, the total energy of the scheme oscillates. At any given time the total energy of the system could be increasing or decreasing, but it does so in an oscillatory fashion. Interestingly enough, for what I'm modeling, this integration scheme works great. I'm curious if there is a way to prove whether the energy of this scheme is stable throughout the time history.

FYI, I'm a Ph.D. student in structural engineering, so this topic is a little out of my field. I'm just curious whether the scheme is stable or not, and if it can be proven. Are there other time integration schemes that don't satisfy the conservation of energy but remain stable? Thanks.

• Like @Qmechanic, I think that this would be better at scicomp. There, you would be able to post more details about the integration scheme, and they would be more competent to help you analyze it than we would. – Colin McFaul May 14 '13 at 15:06
• You'll have to describe your scheme, or there's little anyone can provide in terms of feedback. As for schemes that are stable but do not preserve energy: start with $\ddot x(t)=-x(t)$ and define the energy as $E(t)=x(t)^2 + \dot x(t)^2$. This is the equation of a harmonic oscillator. Reformulate this as a first order system: $\dot x_1 = x_2, \dot x_2=-x_1$. If you apply the Crank-Nicolson scheme, energy is preserved. But for the implicit Euler scheme, the energy decreases. Both are stable. – Wolfgang Bangerth May 16 '13 at 1:43
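Wolfgang Bangerth's example is easy to reproduce. The sketch below (a minimal illustration, not the asker's scheme) integrates the harmonic oscillator $\dot x_1 = x_2,\ \dot x_2 = -x_1$ with implicit Euler and with Crank-Nicolson and tracks $E = x_1^2 + x_2^2$: implicit Euler damps the energy monotonically, Crank-Nicolson conserves it, and both runs remain stable.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator: z' = A z

def implicit_euler_step(z, h):
    # (I - h*A) z_new = z
    return np.linalg.solve(np.eye(2) - h * A, z)

def crank_nicolson_step(z, h):
    # (I - h/2*A) z_new = (I + h/2*A) z
    return np.linalg.solve(np.eye(2) - 0.5 * h * A, (np.eye(2) + 0.5 * h * A) @ z)

h, steps = 0.1, 200
for name, step in [("implicit Euler", implicit_euler_step),
                   ("Crank-Nicolson", crank_nicolson_step)]:
    z = np.array([1.0, 0.0])              # x(0) = 1, x'(0) = 0, so E(0) = 1
    for _ in range(steps):
        z = step(z, h)
    print(f"{name}: E(T) = {z @ z:.6f}")  # < 1 for implicit Euler, ~1 for Crank-Nicolson
```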
#### Question

Air flowing through a tube of 75-mm diameter passes over a 150-mm-long roughened section that is constructed from naphthalene having the properties $\mathcal{M}=128.16\ \mathrm{kg/kmol}$ and $p_{\text{sat}}(300\ \mathrm{K})=1.31 \times 10^{-4}$ bar. The air is at 1 atm and 300 K, and the Reynolds number is $Re_{D}=35{,}000$. In an experiment for which flow was maintained for 3 h, the mass loss due to sublimation from the roughened surface was determined to be 0.01 kg. What is the associated convection mass transfer coefficient? What would be the corresponding convection heat transfer coefficient? Contrast these results with those predicted by conventional smooth tube correlations.

#### Step 1

Air properties at the temperature $T=300$ K are taken from Table A.4:

* Prandtl number, $Pr=0.707$
* Density, $\rho=1.1614 \frac{\mathrm{kg}}{\mathrm{m^3}}$
* Specific heat, $c_p=1007 \frac{\mathrm{J}}{\mathrm{kg\,K}}$
* Kinematic viscosity, $\nu=15.89 \cdot 10^{-6} \frac{\mathrm{m^2}}{\mathrm{s}}$
* Thermal conductivity, $k=0.0263 \frac{\mathrm{W}}{\mathrm{m\,K}}$

From Table A.8, for a naphthalene-air mixture at $300$ K, $1$ atm: \begin{align*} D_{AB}&=0.62 \cdot 10^{-5} \frac{\mathrm{m^2}}{\mathrm{s}}\\ Sc &=\frac{\nu}{D_{AB}}=2.563 \end{align*}
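Only the property look-up is shown in the excerpt above; the remaining algebra is short. The sketch below is my own illustration of the standard naphthalene-sublimation analysis (assuming the free stream carries negligible naphthalene and using the Chilton–Colburn heat–mass analogy $h = h_m\,\rho\,c_p\,(Sc/Pr)^{2/3}$), not the textbook's worked solution:

```python
from math import pi

# Given data (SI units)
D, L = 0.075, 0.150              # tube diameter, roughened length [m]
M = 128.16                       # naphthalene molar mass [kg/kmol]
p_sat = 1.31e-4 * 1e5            # saturation pressure at 300 K [Pa]
T = 300.0                        # temperature [K]
dm, dt = 0.01, 3 * 3600          # sublimated mass [kg] over 3 h [s]

# Air properties at 300 K and Sc for the naphthalene-air mixture (tables above)
rho, cp, Pr, Sc = 1.1614, 1007.0, 0.707, 2.563

R_u = 8314.0                     # universal gas constant [J/(kmol*K)]
A = pi * D * L                   # sublimating surface area [m^2]

rho_As = p_sat * M / (R_u * T)   # naphthalene vapor density at the surface [kg/m^3]
h_m = (dm / dt) / (A * rho_As)   # mass transfer coefficient [m/s], driving potential rho_As - 0

# Chilton-Colburn analogy to convert to a heat transfer coefficient
h = h_m * rho * cp * (Sc / Pr) ** (2.0 / 3.0)

print(f"h_m = {h_m:.3f} m/s")    # on the order of 0.04 m/s
print(f"h   = {h:.0f} W/m^2-K")  # on the order of 100 W/m^2-K
```

These values can then be contrasted with the Dittus–Boelter-type smooth-tube predictions at the given Reynolds number, as the problem statement asks.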
Question

# If the square difference of the zeros of the quadratic polynomial F(x) = x² + px + 45 is equal to 144, find the value of p.

## Solution

Let 'a' and 'b' be the roots of the quadratic polynomial f(x) = x² + px + 45.

∴ we can write a + b = −p (sum of the roots)
and ab = 45 (product of the roots)

It is given that (a − b)² = 144, and since (a − b)² = (a + b)² − 4ab,

∴ (a + b)² − 4ab = 144
⇒ (−p)² − 4 × 45 = 144
⇒ p² − 180 = 144
⇒ p² = 144 + 180
⇒ p² = 324
⇒ p = ±√324
∴ p = ±18
### Study on the Initial Decomposition Mechanism of Energetic Co-Crystal 2, 4, 6, 8, 10, 12-Hexanitro-2, 4, 6, 8, 10, 12-Hexaazaisowurtzitane (CL-20)/1, 3, 5, 7-Tetranitro-1, 3, 5, 7-Tetrazacyclooctane (HMX) under a Steady Shock Wave

Hai LIU 1,*, Yi LI 1, Zhaoxia MA 1, Zhixuan ZHOU 1, Junling LI 1, Yuanhang HE 2,*

1 Hypervelocity Impact Research Center, China Aerodynamics Research and Development Center, Mianyang 621000, Sichuan Province, P. R. China
2 State Key Laboratory of Explosion Science and Technology, Beijing Institute of Technology, Beijing 100081, P. R. China

• Received: 2018-12-03; Accepted: 2019-01-25; Published: 2019-02-28
• Contact: Hai LIU, Yuanhang HE; E-mail: [email protected]; [email protected]
• Supported by: the Advanced Research Fund of the 13th Five-Year Plan, China (6140656020204)

Abstract: CL-20 exhibits high energy density, but its high sensitivity limits its use in various applications. A high-energy and low-sensitivity co-crystal high explosive prepared around CL-20 has the potential to widen the application scope of CL-20 single crystals. The initial physical and chemical responses along different lattice vectors of the energetic co-crystal CL-20/HMX impacted by 4-10 km·s-1 steady shock waves were simulated using the ReaxFF molecular dynamics method combined with the multiscale shock technique (MSST). The temperature, pressure, density, particle velocity, initial decomposition paths, final stable reaction products, and shock Hugoniot curves were obtained. The results show that after application of the shock wave, the energetic co-crystals successively undergo an induction period, fast compression, slow compression, and expansion processes. The fast and slow compression processes correspond to the fast and slow decomposition of the reactants, respectively. An exponential function was adopted to fit the decay curve of the reactants and the decay rates of CL-20 and HMX were compared. Overall, with increasing shock wave velocity, the response time of the reactants was gradually advanced and that of CL-20 molecular decomposition in the co-crystal occurred earlier than that of HMX after the shock wave incident along each lattice vector. The decay rate of CL-20 was highest during the fast decomposition stage, followed by that of HMX. However, the decay rate of the reactants during the slow decomposition stage was similar. The initial reaction path of the energetic co-crystal involves the dimerization of CL-20, while the initial reaction path of the shock-induced co-crystal decomposition involves fracturing of the N-NO2 bond in CL-20 to form NO2. Then, small intermediate molecules such as N2O, NO, HONO, OH, and H are formed. The final stable products are N2, H2O, CO2, CO, and H2. The shock sensitivities of the lattice vector in the b and c directions were the same, but lower than that of the lattice vector a direction. The minimum velocities (us) of the shock wave inducing CL-20 and HMX decomposition were 6 and 7 km·s-1, respectively. Moreover, the particle velocities behind the shock waves on the three lattice vectors showed only minor differences.
The shock-induced initiation pressures of CL-20/HMX along lattice vectors a, b, and c were 16.52, 17.41, and 17.41 GPa, respectively, as determined by the shock Hugoniot relation. The detonation pressure ranged from 36.75 to 47.43 GPa. MSC2000: • O382
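The abstract mentions fitting an exponential function to the reactant decay curves. A generic sketch of such a fit (purely illustrative — the synthetic data, the onset parameter t0, and the functional form N(t) = N0·exp(−k(t−t0)) are my assumptions, not taken from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, n0, k, t0):
    """Exponential decay of the reactant count after an onset time t0."""
    return n0 * np.exp(-k * np.clip(t - t0, 0.0, None))

# Hypothetical trajectory: time [ps] vs. number of intact reactant molecules
t = np.linspace(0.0, 10.0, 101)
n_obs = decay(t, 64, 1.2, 2.0) + np.random.default_rng(1).normal(0.0, 1.0, t.size)

popt, _ = curve_fit(decay, t, n_obs, p0=(60.0, 1.0, 1.5))
n0, k, t0 = popt
print(f"fitted N0 = {n0:.1f}, decay rate k = {k:.2f} ps^-1, onset t0 = {t0:.2f} ps")
```

Comparing the fitted decay rate k for CL-20 and HMX along each lattice vector is the kind of analysis the abstract describes.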
• state refers to the United States state (including the District of Columbia) • year refers to the year • raw refers to the raw amount spent • inf_adj refers to the amount transformed to be in 2016 dollars for each year spent • inf_adj_per_child refers to the amount transformed to be in 2016 dollars for each year per child in \$1000s spent Detailed descriptions of the variables in the dataset (see the variable column) are below. Note that the descriptions are for the raw amount spent; int_adj and int_adj_per_child are based upon that amount. #> Parsed with column specification: #> cols( #> variable = col_character(), #> measurement_unit = col_character(), #> allowed_values = col_character(), #> description = col_character() #> )
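A small sketch of how these columns might be inspected (hedged: the file name `kids_spending.csv` is a placeholder, and pandas is used here purely for illustration — the parsing output above comes from R's readr):

```python
import pandas as pd

# Placeholder file name for the dataset described above
df = pd.read_csv("kids_spending.csv")

# Average spending per child (2016 dollars, in $1000s) by year
print(df.groupby("year")["inf_adj_per_child"].mean().head())

# Raw vs. inflation-adjusted amounts for a single state
print(df[df["state"] == "District of Columbia"][["year", "raw", "inf_adj"]].head())
```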
# Math Help - Problem Geometric progression

1. ## Problem Geometric progression

Write down the n-th term of a geometric progression if a1*a3 = 144 and a4 - a2 = 15.

I don't know how to solve this...

2. Originally Posted by Nforce
Write down the n-th term of a geometric progression if a1*a3 = 144 and a4 - a2 = 15.

I don't know how to solve this...
$a_1 \cdot a_3 = a_1 \cdot a_1 \cdot q^2 = 144$

and

$a_4 - a_2 = a_1 \cdot q^3 - a_1 \cdot q = 15$

Now, we have a system:

$a_1 \cdot a_1\cdot q^2 = 144$

$a_1 \cdot q^3 - a_1 \cdot q = 15$

Now, it is easy to solve this. $a_1$ and $q$ determine the geometric progression.
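With the signs as above, the system gives $a_1 q = \pm 12$ and $a_1 q\,(q^2 - 1) = 15$; taking the positive branch, $q = 3/2$ and $a_1 = 8$, so $a_n = 8\,(3/2)^{n-1}$. A quick numerical check of that candidate (my own verification, not part of the original thread):

```python
# Verify a1 = 8, q = 3/2 against both conditions of the problem
a1, q = 8, 1.5
a = lambda n: a1 * q ** (n - 1)   # n-th term of the geometric progression

print(a(1) * a(3))   # 144.0
print(a(4) - a(2))   # 15.0
```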
# ols estimator in matrix form The OLS estimators From previous lectures, we know the OLS estimators can be written as βˆ=(X′X)−1 X′Y βˆ=β+(X′X)−1Xu′ In the matrix form, we can examine the probability limit of OLS ′ = + ′ it possesses an inverse) then we can multiply by .X0X/1 to get b D.X0X/1X0y (1) This is a classic equation in statistics: it gives the OLS coefficients as a function of the data matrices, X and y. matrix b = b' . I have the following equation: B-hat = (X'X)^-1(X'Y) I would like the above expression to be expressed as B-hat = HY. Since the completion of my course, I have long forgotten how to solve it using excel, so I wanted to brush up on the concepts and also write this post so that it could be useful to others as well. Assume the population regression function is Y = Xβ + ε where Y is of dimension nx1, X is of dimension nx (k+1) and ε is of dimension n x1 (ii) Explain what is meant by the statement “under the Gauss Markov assumptions, OLS estimates are BLUE”. 3.1.1 Introduction More than one explanatory variable In the foregoing chapter we considered the simple regression model where the dependent variable is related A Roadmap Consider the OLS model with just one regressor yi= βxi+ui. 2. Can someone help me to write down the matrices? OLS estimator (matrix form) Hot Network Questions What's the point of learning equivalence relations? In my post about the , I explain 4 IVs x 2 cannot be used as IV. This column should be treated exactly the same as any other column in 1This only works if the functional form is correct. The Super Mario Effect - Tricking Your Brain into Learning More | Mark Rober | TEDxPenn - Duration: 15:09. matrix xpxi = syminv(xpx) . matrix list b b[1,3] mpg trunk _cons price -220 I transposed b to make it a row vector because point estimates in Stata are stored as row vectors. 3.1 Least squares in matrix form E Uses Appendix A.2–A.4, A.6, A.7. (You can check that this subtracts an n 1 matrix from an n 1 matrix.) Variable: TOTEMP R-squared: 0.995 Model The condition number is large, 4.86e+09. View 1_Basic OLS.pdf from ECON ECEU601301 at Universitas Indonesia. Then I have to write this as matrix problem and find the OLS estimator $\beta$ ^. File Type PDF Ols In Matrix Form Stanford University will exist. OLS becomes biased. The Y is the same Y as in the View Session 1 Basic OLS.pdf from ECON ECEU601301 at Universitas Indonesia. Online Library Ols In Matrix Form Stanford University our model will usually contain a constant term, one of the columns in the X matrix will contain only ones. matrix b = xpxi*xpy . OLS Regression Results ===== Dep. OLS Estimator Properties and Sampling Schemes 1.1. matrix list b b[3,1] price mpg -220.16488 trunk 43.55851 _cons 10254.95 . OLS estimator (matrix form) Hot Network Questions What's the point of learning equivalence relations? I know that $\beta^=(X^tX)^{-1}X^ty$. We will consider the linear regression model in matrix form. 
Let us make explicit the dependence of the estimator on the sample size and denote by the OLS estimator obtained when the sample size is equal to By Assumption 1 and by the Continuous Mapping theorem, we have that the probability limit of is Now, if we pre-multiply the regression equation by and we take expected values, we get But by Assumption 3, it becomes or which implies that ORDINARY LEAST SQUARES (OLS) ESTIMATION Multiple regression model in matrix form Consider the multiple regression model in 3 OLS in Matrix Form Setup 3.1 Purpose 3.2 Matrix Algebra Review 3.2.1 Vectors 3.2.2 Matrices 3.3 Matrix Operations 3.3.1 Transpose 3.4 Matrices as vectors 3.5 Special matrices 3.6 Multiple linear regression in matrix form 3.7 I like the matrix form of OLS Regression because it has quite a simple closed-form solution (thanks to being a sum of squares problem) and as such, a very intuitive logic in its derivation (that most statisticians should be familiar The Estimation Problem: The estimation problem consists of constructing or deriving the OLS coefficient estimators 1 for any given sample of N observations (Yi, Xi), i = 1, ..., N on the observable variables Y and X. since IV is another linear (in y) estimator, its variance will be at least as large as the OLS variance. Wir gehen nur in einem The purpose of this page is to provide supplementary materials for the ordinary least squares article, reducing the load of the main article with mathematics and improving its accessibility, while at the same time retaining the completeness of exposition. Kapitel 6 Das OLS Regressionsmodell in Matrixnotation “What I cannot create, I do not under-stand.” (Richard P. Feynman) Dieses Kapitel bietet im wesentlichen eine Wiederholung der fr¨uheren Kapitel. When we derived the least squares estimator, we used the mean squared error, MSE( ) = 1 n Xn i=1 e2 i ( ) (7) How might we express this in terms of our form is Multiply the inverse matrix of (X′X)−1on the both sides, and we have: βˆ =(X′X)−1 X′Y (1) This is the least squared estimator for the multivariate regression linear model in matrix form… Otherwise this substititution would not be valid. The OLS estimator βb = ³P N i=1 x 2 i ´−1 P i=1 xiyi 1 Most economics models are structural forms. (i) Derive the formula for the OLS estimator using matrix notation. Before that, I have always used statmodel OLS in python or lm() command on R to get the intercept and coefficients and a glance at the R Square value will tell how good a fit it is. TEDx Talks Recommended for you An estimator of a population parameter is a rule, formula, or procedure for OLS can be applied to the reduced form This is structural form if x 1 is endogenous. Recall that the following matrix equation is used to calculate the vector of estimated coefficients of an OLS regression: where the matrix of regressor data (the first column is all 1’s for the intercept), and the vector of the dependent variable data. ORDINARY LEAST SQUARES (OLS) ESTIMATION Multiple regression model in matrix form Consider the multiple regression model in matrix OLS estimator in matrix form Ask Question Asked 9 months ago Active 8 months ago Viewed 36 times 1 0 $\begingroup$ I am new to liner algebra and currently looking at … Learning mathematics in an "independent and idiosyncratic" way Who resurrected Jesus - … Hello, I am having trouble with matrix algebra, and hope to find some explanation here. So I think it's possible for me to find if I know the matrices. 
Premultiplying (2.3) by this inverse gives the expression for the OLS estimator b:

$b = (X'X)^{-1}X'y.$ (2.4)

OLS predictor and residuals: for the regression equation $y = Xb + e$, the sum of squared residuals can be written in matrix form as $S(b) = (y - Xb)'(y - Xb)$. The first order condition for a minimum is that the gradient of $S$ with respect to $b$ is equal to zero. If the matrix $X'X$ is non-singular (i.e., $X$ has full column rank), it is invertible, and solving the first order condition $X'(y - X\hat{b}) = 0$ leads to the well-known OLS estimator $\hat{b} = (X'X)^{-1}X'y$. The Gauss-Markov theorem does not state that these are just the best possible estimates for the OLS procedure, but the best possible estimates among all linear unbiased estimators; to prove that OLS is the best in this class it is necessary to show that the matrix $\mathrm{var}(\tilde{\beta}) - \mathrm{var}(\hat{b})$ is positive semi-definite for any other linear unbiased estimator $\tilde{\beta}$. When a regressor is endogenous, OLS becomes biased and we may instead need to find an IV estimator; IV estimators are asymptotically normal under some regularity conditions, with an asymptotic covariance matrix that is at least as large as the OLS variance. In statistics, ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model.
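To make the hat-matrix question above concrete, here is a small numpy sketch (illustrative, with simulated data) that computes $\hat\beta=(X'X)^{-1}X'y$ and the matrix $H = X(X'X)^{-1}X'$ for which the fitted values satisfy $\hat y = Hy$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # regressors incl. intercept
beta_true = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y        # OLS coefficients (X'X)^{-1} X'y
H = X @ XtX_inv @ X.T               # hat matrix: y_hat = H y

print(np.round(beta_hat, 3))
print(np.allclose(H @ y, X @ beta_hat))   # True: two equivalent ways to get fitted values

# In practice, prefer np.linalg.lstsq(X, y, rcond=None) over forming the explicit inverse.
```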
# What are some applications outside of mathematics for algebraic geometry? Are there any results from algebraic geometry that have led to an interesting "real world" application? - community-wiki? Genuine question. –  Jamie Banks Jul 23 '10 at 23:42 Algebraic geometry is incredibly important in bioinformatics and DNA/taxonomy stuff. I don't actually understand the details myself which is why I haven't written an answer. –  Noah Snyder Jul 24 '10 at 0:03 @Katie, good call on the community-wiki –  Jonathan Fischoff Jul 24 '10 at 1:16 The following slideshow gives an explanation of how algebraic geometry can be used in phylogenetics. See also this post of Charles Siegel on Rigorous Trivialties. This is not an area I've looked at in much detail at all, but it appears that the idea is to use a graph to model evolutionary processes, and such that the "transition function" for these processes is given by a polynomial map. In particular, it'd be of interest to look at the potential outcomes, namely the image of the transition function; that corresponds to the image of a polynomial map (which is not necessarily an algebraic variety, but it is a constructible set, so not that badly behaved either). (In practice, though, it seems that one studies the closure, which is a legitimate algebraic set.) - Bernd Sturmfels at Berkeley has done quite a bit of work on applying algebraic geometry to just these sort of phylogenetics problems (but I don't know enough about the work to comment intelligently). –  Jamie Banks Jul 24 '10 at 0:33 @Katie Here is a summary article in this direction math.berkeley.edu/~bernd/ClayBiology.pdf –  BBischof Jul 24 '10 at 1:15 Something specific in that vein: Kempe's Universality Theorem gives that any bounded algebraic curve in $\mathbb{R}^2$ is the locus of some linkage. The "locus of a linkage" being the path drawn out by all the vertices of a graph, where the edge lengths are all specified and one or more vertices remains still.
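One concrete computation behind the remark that the closure of the image of a polynomial map is an algebraic set is implicitization by elimination. A small sympy sketch (my own illustration, not from the thread) recovers the implicit equation of the parametrized curve t ↦ (t², t³):

```python
from sympy import symbols, groebner

t, x, y = symbols('t x y')

# Ideal generated by x - t^2 and y - t^3; eliminate t with a lex Groebner basis
G = groebner([x - t**2, y - t**3], t, x, y, order='lex')

# Basis elements not involving t cut out the closure of the image of the map
implicit = [g for g in G.exprs if t not in g.free_symbols]
print(implicit)   # one generator equivalent to y**2 - x**3 = 0 (the cuspidal cubic)
```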
# Reasoning about privacy in smart contracts blog Smart contracts have become a standard way to implement complex online interaction patterns involving the exchange of both cryptographic currency and data on blockchain in a unified and generalised manner. This approach was popularised by Ethereum, and examples include ERC20 token contracts, decentralised exchange contracts (Idex, EtherDelta), and even contract-based games, like CryptoKitties. Smart contracts allow both developers and regular users to express complex behavioural requirements and patterns using a high-level programming language. At the same time, the public nature of most popular smart contract systems restricts the expressivity of contracts. Public fields of the contract are visible to everyone, and transactions reveal a lot of private data -- which address triggered the contract change, and what this execution was like. Many privacy-preserving communication patterns that need this information to be private and accessible to only specific parties run into resource or design complexity limits, even though it is theoretically possible to implement them. Moreover, the security proofs for these types of systems are cumbersome because of additional non-standard and non-unified layers of abstractions introduced by implementing the privacy-preserving logic on top of the public smart contract functionality instead of designing smart contracts environments with privacy support in mind. In this blog post we will look at the two proposals for private smart contract systems, namely Zkay [1] and Kachina [2], both of which provide a way to express different privacy properties in smart contracts, and to reason about them. While Zkay serves as an illustrative example of a practical smart contract framework, showing the end user’s or programmer’s perspective, Kachina offers a more theoretical take on what privacy is, allowing for detailed security analysis of the functionalities involved in the smart contract. Zkay system consists of the language, more precisely an extension of the Solidity language used in Ethereum, and a compiler that transforms the smart contract code into two independent pieces of code -- the Solidity contract that can be run, for example, on the Ethereum blockchain, and a circuit description of the code that is supposed to be run inside a zk-SNARK. The zkay language is different from its base Solidity language in having fine-grained privacy annotations that specify the ownership of the internal contract variables -- if T is a type of variable, then T@A is a T data structure that belongs to the address A. These privacy annotations allow limiting the access of the concrete script variable to the concrete single user’s address, so that the variable can only be read by the owner of that address. Another important language feature is careful handling of statements that reclassify the secret information -- whenever one wants to publish a secret value, or give it away to another party, a call to the reveal function allows it to do that. In this way, semantics of the language separates private computations from the public ones, and forces the contract author to explicitly declare this boundary. The main example in the paper, presented on Figure 1, presents a smart contract MedStats that emulates a simplistic medical database. 
This contract stores a variable mapping each user to a boolean value that indicates whether this user is in a risk group or not (called risk), and a global private counter of all the users in the risk group. The entity hospital can then assign these values and increase a private counter (that is not visible to anyone else), and users can read their own risk assignment from their own private element of the mapping risk (and again, nobody else can read from it). The smart contract guarantees that the counter is consistent with the risk information given to users. In this example, we clearly see that privacy is ensured in a way that is understandable for programmers -- adding an additional language feature which induces extra privacy semantics. This kind of semantics could be (and is) achieved within more powerful languages by using strict types; one good analogy is private/public object fields, or state and reader monads in functional languages. These language features restrict the behaviour that we want to avoid, namely revealing secret information to other parties, by performing type-checking. Under the hood, ZKay extracts the private parts of the contract into arithmetic circuits, that update the encrypted values, corresponding to the private variables. And the correctness of transition from one private state (with one set of variables) into another is guaranteed by the zk-SNARK which is parameterized by this extracted circuit. Now how privacy is achieved makes perfect sense -- we update certain encrypted values in zero-knowledge, but does it allow us to express all the private contracts we would like to work with? Is there a simple way to concisely present the exact places we might leak information from? For instance, if we update a certain encrypted value, the very fact it was updated is publicly visible (because the ciphertext changes whenever the underlying plaintext changes), so this absence of unlinkability seemingly prevents us from implementing zcash-like transactions using just the tools provided. Kachina [2] approaches these issues from the opposite direction. Instead of defining a programming language extension it provides a most general abstraction for public and private state separation, together with a framework in which contracts can be proven secure. At the base of Kachina lies the Universal Composability (UC) framework [3] -- a set of formal specification and proof techniques that allow proving cryptographic protocols secure. Kachina builds on UC, and suggests the following contract design flow. First, we need to formally specify, in the UC pseudocode language, the contract transition function Γ(w) -- the core of the interaction of users and the contract. This function, although may look quite similar to any other contract one may come across, has a distinctive restriction -- it must be written in such a way that interactions with the public state of the contract and a set of private states (one per user) are abstracted as communication with separate programs, called public or private state oracles respectively. This forces us to separate publicly accessible information, such as public contract variables, from the private states of users, which are generally stored on the user side, and may include secret keys, nonces, etc -- whatever is needed to act as a contract party. Moreover, the transition function itself doesn’t maintain any state, since it is all delegated to the oracles. 
This state-oracle approach may sound reasonably familiar to programmers, as it is ubiquitous in software engineering (imagine the oracles being internal objects of the functionality object, accessible only through calls), but the difficult problems solved by Kachina lie more in the precise definitions and abstractions that allow us to later reason about this transition function. Figure 2 shows the transition function Γ transforming the smart contract input w (which can be thought of as a query, or an RPC with arguments) into the output y, while communicating with a public oracle on the right side, transforming the public state σ into a new σ', and correspondingly with a set of private oracles, each changing a private state ρ into ρ'. The transition function is itself a minimal piece of logic that describes the behaviour of the contract. It answers the main question "given a certain request to the contract, how will our public and private states change?". But to proceed with the proof and to reason about system security guarantees, we will define two more objects -- an ideal behaviour function Δ and a leakage function Λ. The first one expresses a high-level intuition about the behaviour of Γ -- as if there were no concrete cryptographic objects in the contract, and it were allowed only to emulate the behaviour of the contract, allowing for adversarial input. The leakage function maps every possible input to the contract to the leakage that the contract produces on that input. After defining these two objects, which precisely specify our expectations about the functionality we provide, the possible adversarial interference, and the leakage, we can proceed with the security proof in the UC manner. Kachina then guarantees the soundness of the system -- that is, it won't be possible to prove the system secure if our leakage expectations Λ are strictly weaker than the real leakage, or if the modelled behaviour and possible attacks in Δ are less general than the ones allowed by the transition function defined earlier. In other words, security is guaranteed by reducing the functional and leakage-related claims to a small, readable piece of pseudocode (that we can agree or disagree with), and then proving that the real contract corresponds to it. The real contract that can be put on the blockchain modifies our contract transition function Γ, transforming our abstract communication-with-oracles pattern into zero-knowledge proofs of the fact that this communication (together with its private parts) corresponds to the contract's code. Many more parts of Kachina help us to understand and write the contract; for instance, it provides a way to formalise public and private state consistency, allows for proper transaction reordering, and so on. Thus, Kachina serves as a general framework in which security properties can be concisely expressed, and a concrete smart contract system can be proven secure with respect to these properties. Being a theoretical work, Kachina does not provide a language or a compiler, but relies on the UC language instead. At the same time, its expressive power is quite high -- it easily captures zcash-like private payments (which is the main example in the paper's body), and the zkay privacy abstraction, with its encrypted value transitions, can also be quite straightforwardly expressed in Kachina, by putting encrypted values into the public state while keeping secret keys in the private ones.
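As a rough, non-authoritative illustration of the state-oracle separation (Kachina itself is specified in UC pseudocode; the Python names below are invented), one can picture the transition function as stateless code that may only touch state by querying two oracle objects:

```python
class PublicOracle:
    """Holds the contract's public state sigma; Gamma can only query it."""
    def __init__(self):
        self.sigma = {"counter": 0}

    def increment(self):
        self.sigma["counter"] += 1
        return self.sigma["counter"]


class PrivateOracle:
    """Holds one user's private state rho (secret keys, nonces, ...)."""
    def __init__(self, secret):
        self.rho = {"secret": secret}

    def tag(self, message):
        # Stand-in for any computation that depends on private state only.
        return hash((self.rho["secret"], message))


def transition(w, public_oracle, private_oracle):
    """Stateless transition function Gamma: input w -> output y.

    All state lives in the oracles; Gamma only exchanges messages with them.
    """
    t = private_oracle.tag(w)            # touches private state rho
    total = public_oracle.increment()    # touches public state sigma
    return {"input": w, "tag": t, "total": total}


y = transition("submit", PublicOracle(), PrivateOracle(secret=42))
print(y)
```

In the construction described above, the interaction with the private oracle is the part that ends up being established in zero knowledge, so the on-chain contract only ever sees the public-state queries plus a proof that the hidden part was performed consistently with the contract's code.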
Our research focuses on bringing these ideas together -- is it possible to extend the zkay language annotations even further? What would be the best smart contract language to provide seamless integration with the state-oracle abstraction -- either by using the interface to define the oracles directly, or by transforming other approaches, such as privacy type annotations, into code that fits the state-oracle abstraction criteria? Parts of these questions are to be resolved by the ZK toolkit, which is currently under research and development.

References:

[1] Steffen, Samuel, et al. "zkay: Specifying and enforcing data privacy in smart contracts." Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 2019.

[2] Kerber, Thomas, Aggelos Kiayias, and Markulf Kohlweiss. "Kachina -- Foundations of Private Smart Contracts." https://drwx.org/papers/kachina.pdf

[3] Canetti, Ran. "Universally composable security: A new paradigm for cryptographic protocols." Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science. IEEE, 2001.

Written by Mikhail Volkhov, The University of Edinburgh
# What is the difference between the CF standard_names platform_id and platform_name

The two CF standard names platform_id and platform_name strike me as being redundantly similar. From their definition in the CF standard name table:

platform_id: alias: station_wmo_id A variable with the standard name of platform_id contains strings which help to identify the platform from which an observation was made. For example, this may be a WMO station identification number. A "platform" is a structure or vehicle that serves as a base for mounting sensors. Platforms include, but are not limited to, satellites, aeroplanes, ships, buoys, instruments, ground stations, and masts.

platform_name: alias: station_description A variable with the standard name of platform_name contains strings which help to identify the platform from which an observation was made. For example, this may be a geographical place name such as "South Pole" or the name of a meteorological observing station. A "platform" is a structure or vehicle that serves as a base for mounting sensors. Platforms include, but are not limited to, satellites, aeroplanes, ships, buoys, instruments, ground stations, and masts.

Are there contexts where these are used interchangeably? Or conversely, contexts where the distinction between them really matters?

• Consider asking this question here: github.com/cf-convention/discuss/issues Oct 29 '20 at 22:18
• I remember discussing this in person with some of the people involved in the CF conventions some years ago, unfortunately I don't remember the conclusion. – gerrit Oct 30 '20 at 10:07
• In short, when you have two similarly named fields with one ending in id and the other ending in name, the one ending in id is for computers to use (and is supposed to have unique values for each entity) and the one ending in name is for humans to use (and may or may not be unique, depending on the data model). Nov 2 '20 at 22:56
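In practice the two names usually appear together on the same dataset: platform_id for the machine-oriented identifier and platform_name for the human-readable label. A minimal sketch with the netCDF4 Python library (the file name, station IDs and station names are made up for illustration):

```python
from netCDF4 import Dataset

ds = Dataset("stations.nc", "w", format="NETCDF4")
ds.createDimension("station", 2)

pid = ds.createVariable("platform_id", str, ("station",))
pid.standard_name = "platform_id"        # machine-readable, ideally unique
pid[0] = "89009"                         # e.g. a WMO station identification number
pid[1] = "89564"

pname = ds.createVariable("platform_name", str, ("station",))
pname.standard_name = "platform_name"    # human-readable, not necessarily unique
pname[0] = "Amundsen-Scott South Pole Station"
pname[1] = "Mawson"

ds.close()
```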
If two elements of a group $G$ satisfy $aba^{-1} = b^{2}$ for $b\neq e$, then which of the following is equal to $b^{32}$?

$A)\ a^{16}ba^{-16}$ $B)\ a^{5}ba^{-5}$ $C)\ ab^{16}a^{-1}$ $D)$ Both $(B)$ and $(C)$

Comment: The answer is D. Can you explain options (B) and (A)? I got option (C).

Answer (selected): Expanding the conjugation in the same way, the answer is D (both B and C). – answered by Junior
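For completeness, here is one way to see why the selected answer is D (a standard conjugation argument, not part of the original thread): conjugating by $a$ squares $b$, so conjugating $k$ times squares it $k$ times,

$$aba^{-1} = b^{2} \;\Longrightarrow\; a^{k}ba^{-k} = b^{2^{k}} \quad (\text{by induction on } k),$$

hence $a^{5}ba^{-5} = b^{2^{5}} = b^{32}$ (option B), and $ab^{16}a^{-1} = (aba^{-1})^{16} = b^{32}$ (option C), while $a^{16}ba^{-16} = b^{2^{16}}$, which need not equal $b^{32}$, so option A fails in general.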
# Are cross-validated prediction errors i.i.d?

Say we test an arbitrary regression or classification procedure on $n$ independent samples with leave-one-out cross-validation. This results in an estimate of the prediction error $e_n$ for each sample $n$. Can these $e_n$ be assumed to be independent draws of a (probably unknown) distribution? My intuition says no, because (1) the training set is almost the same for each test sample, and (2) samples are used for both training and testing. If my intuition is wrong, and errors are independent, what about k-fold cross-validation, where the same training set is used for groups of $n/k$ samples? Disclaimer: I tried to ask this question as concisely and generally as possible. If it lacks detail or specificity, please comment and I will update the question accordingly.

I think you need to be clear what distribution you need to represent. This differs according to what the cross validation is meant for.

• In the case that the cross validation is meant to measure (approximate) the performance of the model obtained from this particular training set, the corresponding distribution would be the distribution of cases in the training set at hand. From that perspective, you draw almost the entire population, though without replacement.
• In contrast, if you are asking about the distribution of $n$ cases drawn from the population the training set was drawn from, then the cross validation resampled surrogate training sets are correlated. See e.g. Bengio, Y. and Grandvalet, Y.: No Unbiased Estimator of the Variance of K-Fold Cross-Validation, Journal of Machine Learning Research, 2004, 5, 1089-1105. This is important for comparisons of which algorithm performs better for a particular type of data.

• I was not aware of this difference. Thanks for the link to the paper, I will read it. Did I understand this correctly so far: To measure the performance of a specific data set no specific care needs to be taken other than sampling without replacement (e.g. hypergeometric distribution for classification results). If, however, we want to get an idea of the performance on the whole population the situation is more complicated? Also, I suppose that the second case applies if I want to know how the model will perform on unseen samples? – kazemakase Jan 24 '15 at 9:11
• a) You need to take more care, but those considerations were not the topic here. b) prediction about training on completely other samples is more complicated, yes. c) how the model will perform on unseen samples: no, this is case 1: that's what resampling is for. – cbeleites supports Monica Jan 24 '15 at 19:05
• Thanks for the clarification. So basically results are dependent in both cases - in case 1 because of sampling without replacement, and in case 2 because of complicated? – kazemakase Jan 25 '15 at 8:59
• I'd have said in case 1 because sampling > half the population will create dependence - but notice that this dependence is in favor of being representative, whereas the usual concern about dependence is about not being representative. Case 2 is complicated, but it is IMHO actually a good example of the "dependent" one is afraid of: in the end this dependence is one way of realizing that there is no way around the fact that only $n$ real cases are available. – cbeleites supports Monica Jan 26 '15 at 9:01

They can't be independent. Consider adding one extreme outlier sample, then many of your cross validation folds will be skewed in a correlated way.

• +1 straight and to the point.
Sounds reasonable too. Yet, I could imagine that the outlier simply changes the distribution of $e$ without making the samples correlated. Could you elaborate on that? – kazemakase Jan 23 '15 at 16:08
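One way to see the dependence numerically (a toy simulation, not from the original thread; the model, sample sizes and noise distribution are arbitrary choices) is to redraw many small datasets from the same population, compute the leave-one-out errors each time, and look at the correlation between squared errors at different positions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n, repeats = 20, 2000
sq_errors = np.empty((repeats, n))

for r in range(repeats):
    X = rng.normal(size=(n, 3))
    # Heavy-tailed noise so that occasional outliers affect many folds at once.
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=2, size=n)
    pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    sq_errors[r] = (y - pred) ** 2

# If the LOO errors were independent across positions, the off-diagonal
# correlations would scatter around zero; shared training data (e.g. a
# common outlier) tends to push them positive.
corr = np.corrcoef(sq_errors, rowvar=False)
off_diag = corr[~np.eye(n, dtype=bool)]
print("mean off-diagonal correlation of squared errors:", off_diag.mean())
```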
Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01qj72p717z Title: "Interpreting Sheepskin Effects in the Returns to Education" Authors: Flores-Lagunes, Alfonso; Light, Audrey Issue Date: Mar-2007 Series/Report no.: 22 Abstract: Researchers often identify sheepskin effects by including degree attainment (D) and years of schooling (S) in a wage model, yet the source of independent variation in these measures is not well understood. We argue that S is negatively correlated with ability among degree-holders because the most able graduate the fastest, while a negative correlation exists among dropouts because the most able benefit from increased schooling. Using data from the NLSY79, we find that wages decrease with S among degree-holders and increase with S among dropouts. The independent variation in S and D needed for identification is not due to reporting error. Instead, we conclude that skill varies systematically among individuals with a given degree status. URI: http://arks.princeton.edu/ark:/88435/dsp01qj72p717z Appears in Collections: ERS Working Papers
Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be a continuous bounded function, then: 1. $f$ has to be uniformly continuous 2. There exists an $x \in \mathbb{R}$ such that $f(x) = x$ 3. $f$ cannot be increasing 4. $\lim_{x \rightarrow \infty} f(x)$ exists.
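Of the four options, only statement 2 holds for every such $f$ (for the others, $\sin(x^{2})$, $\arctan x$ and $\sin x$ are standard counterexamples to 1, 3 and 4 respectively). A short sketch of why a fixed point must exist: if $|f| \le M$, set $g(x) = f(x) - x$; then

$$g(-M-1) = f(-M-1) + M + 1 \ge 1 > 0, \qquad g(M+1) = f(M+1) - M - 1 \le -1 < 0,$$

so by the intermediate value theorem $g$ vanishes somewhere, i.e. $f(x) = x$ for some $x \in \mathbb{R}$.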
# Divergence with Chain Rule

I am looking at the derivation for the entropy equation for a Newtonian fluid with the Fourier conduction law. At some point in the derivation I see $$\frac{1}{T} \nabla \cdot (-\kappa \nabla T) = - \nabla \cdot (\frac{\kappa \nabla T}{T}) - \frac{\kappa}{T^2}(\nabla T)^2$$ Here $\kappa$ is a constant and $T$ is a scalar field. It seems obvious that there is some way to use the chain rule on the middle term to get the left and right terms, but I frankly don't exactly understand the "rules" of how to use it with the divergence. I know you can't just factor out $$\frac{1}{T}$$ from the middle term, but I'm not sure how to actually simplify that middle expression. Any help is appreciated.
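What is needed here is really the product (Leibniz) rule for the divergence, $\nabla\cdot(\varphi\,\vec A) = \varphi\,\nabla\cdot\vec A + \vec A\cdot\nabla\varphi$, applied with $\varphi = 1/T$ and $\vec A = \kappa\nabla T$:

$$\nabla \cdot \left(\frac{\kappa \nabla T}{T}\right) = \frac{1}{T}\,\nabla\cdot(\kappa\nabla T) + \kappa\nabla T\cdot\nabla\!\left(\frac{1}{T}\right) = \frac{1}{T}\,\nabla\cdot(\kappa\nabla T) - \frac{\kappa}{T^{2}}(\nabla T)^{2}.$$

Solving for $\frac{1}{T}\nabla\cdot(\kappa\nabla T)$ and multiplying through by $-1$ gives exactly the quoted identity,

$$\frac{1}{T}\,\nabla\cdot(-\kappa\nabla T) = -\nabla\cdot\left(\frac{\kappa\nabla T}{T}\right) - \frac{\kappa}{T^{2}}(\nabla T)^{2}.$$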
# Charge moving parallel to a current with the same velocity as the particles in the wire

I have a charge $+Q$ traveling parallel to a wire, with velocity $\vec v$. The wire is composed of negative non-moving charges and a flow of positive charges moving with velocity $\vec v$, with a spacing $l$ between them. I am looking at the situation in the diagram on the left right now. The electric field is $$\vec E(r) = \frac{q}{2\pi l r \epsilon_0}\hat r.$$ The magnetic field follows from Ampère's law: $$\oint \vec B \cdot d\vec l = \mu_0 I_\text{encl} = \mu_0 \lambda v = \mu_0 v \frac{q}{l} \quad\Rightarrow\quad B = \frac{\mu_0 q v}{2\pi l r}.$$ The Lorentz force is $$\vec F = Q\left[ \vec E + (\vec v \times \vec B)\right]$$ Because the magnetic field is in the same direction as the motion of the charged particle, is $$(\vec v \times \vec B) = 0 \\ \Rightarrow \vec F = Q\vec E?$$

• Interesting question. – Declan Jun 5 '18 at 4:39
• Sorry. I had more to add. Can you now calculate the particle trajectory in the lab frame? – Declan Jun 5 '18 at 4:40

On the left the charge is moving and the wire is neutral with a current: $F = Q v B = \frac{\mu_0 Q v I}{2\pi r}$. On the right the charge is at rest, so not affected by $\vec B$. However, the wire carries a charge density $\lambda = v I /c^2$ in this reference frame. The force is the same as before: $\frac{Q\lambda}{2\pi \epsilon_0 r } = \frac{QvI}{2\pi c^2 \epsilon_0 r} = \frac{\mu_0 Q v I}{2\pi r}$.
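One piece of bookkeeping that may help (standard cylindrical-coordinate geometry, not part of the original post): the field of the wire is azimuthal, not parallel to the particle's velocity, so the cross product does not vanish in the lab frame. With the wire along $\hat{z}$ and the charge moving in the $+\hat{z}$ direction,

$$\vec B = \frac{\mu_0 I}{2\pi r}\,\hat{\varphi}, \qquad \vec v = v\,\hat{z} \;\Longrightarrow\; \vec v \times \vec B = \frac{\mu_0 I v}{2\pi r}\,(\hat{z}\times\hat{\varphi}) = -\frac{\mu_0 I v}{2\pi r}\,\hat{r},$$

i.e. the magnetic part of the Lorentz force is radial (directed toward the wire for these sign conventions), with magnitude $QvB$ as used in the answer above.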
# A characterization of real hypersurfaces in complex space forms in terms of the Ricci tensor

We study real hypersurfaces of a complex space form $M_n(c)$, $c\ne 0$, under certain conditions on the Ricci tensor on the orthogonal distribution $T_o$.
# Q1

1. Two blocks of masses 20 kg and 50 kg are lying on a horizontal floor (coefficient of friction $\mu$ = 0.5). Initially the string is stretched and the blocks are at rest. Now two forces, 300 N and 150 N, are applied to the two blocks as shown in the figure. The acceleration of the 20 kg block is 2.5K m/s². Find the value of K.

Let the tension in the rope be T and the acceleration of the masses be a. The frictional force acting on the 50 kg block is 250 N, and that on the 20 kg block is 100 N. Therefore we can say that 300 - (250 + T) = 50a, and 150 + T - 100 = 20a. From here we get a = 100/70 ≈ 1.43 m/s² = 2.5K, so K = 4/7.
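The same calculation written out a little more explicitly (taking $g = 10\ \text{m/s}^2$ and limiting kinetic friction on each block, as the numbers above imply, with tension $T$ in the string):

$$f_{50} = \mu m_{50} g = 0.5 \cdot 50 \cdot 10 = 250\ \text{N}, \qquad f_{20} = \mu m_{20} g = 0.5 \cdot 20 \cdot 10 = 100\ \text{N},$$
$$300 - 250 - T = 50a, \qquad 150 + T - 100 = 20a.$$

Adding the two equations eliminates $T$: $100 = 70a$, so $a = \tfrac{10}{7}\ \text{m/s}^2$, and since $a = 2.5K$, $K = \tfrac{4}{7}$.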
## Intermediate Algebra (6th Edition) $f(x)=x+7$ We are given that $y=x+7$, which is an example of a relation between the variables x and y. However, to note that y is a function of x, we can rewrite this as $y=f(x)=x+7$. This function notation, read as "f of x," tells us that y is a function of x (or that y depends on x).
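For example, evaluating the function at a particular input simply means substituting that input for $x$: $f(2) = 2 + 7 = 9$, read as "f of 2 equals 9."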
In mathematics, a tuple is a finite ordered list (sequence) of elements. An $n$-tuple is a sequence (or ordered list) of $n$ elements, where $n$ is a non-negative integer. There is only one 0-tuple, referred to as the empty tuple. An $n$-tuple is defined inductively using the construction of an ordered pair.

Mathematicians usually write tuples by listing the elements within parentheses "( )" and separated by a comma and a space; for example, (2, 7, 4, 1, 7) denotes a 5-tuple. Sometimes other symbols are used to surround the elements, such as square brackets "[ ]" or angle brackets "⟨ ⟩". Braces "{ }" are used to specify arrays in some programming languages but not in mathematical expressions, as they are the standard notation for sets. The term tuple can often occur when discussing other mathematical objects, such as vectors.

In computer science, tuples come in many forms. Most typed functional programming languages implement tuples directly as product types, tightly associated with algebraic data types, pattern matching, and destructuring assignment. Many programming languages offer an alternative to tuples, known as record types, featuring unordered elements accessed by label. A few programming languages combine ordered tuple product types and unordered record types into a single construct, as in C structs and Haskell records. Relational databases may formally identify their rows (records) as tuples. Tuples also occur in relational algebra; when programming the semantic web with the Resource Description Framework (RDF); in linguistics; and in philosophy.

Etymology

The term originated as an abstraction of the sequence: single, couple/double, triple, quadruple, quintuple, sextuple, septuple, octuple, ..., $n$-tuple, ..., where the prefixes are taken from the Latin names of the numerals. The unique 0-tuple is called the null tuple or empty tuple. A 1-tuple is called a single (or singleton), a 2-tuple is called an ordered pair or couple, and a 3-tuple is called a triple (or triplet). The number $n$ can be any nonnegative integer. For example, a complex number can be represented as a 2-tuple of reals, a quaternion can be represented as a 4-tuple, an octonion can be represented as an 8-tuple, and a sedenion can be represented as a 16-tuple.

Although these uses treat -uple as the suffix, the original suffix was -ple as in "triple" (three-fold) or "decuple" (ten-fold). This originates from medieval Latin plus (meaning "more") related to Greek -πλοῦς, which replaced the classical and late antique -plex (meaning "folded"), as in "duplex".

Names for tuples of specific lengths

Note that for $n \geq 3$, the tuple name in the table above can also function as a verb meaning "to multiply [the direct object] by $n$"; for example, "to quintuple" means "to multiply by 5". If $n = 2$, then the associated verb is "to double". There is also a verb "sesquiple", meaning "to multiply by 3/2". Theoretically, "monuple" could be used in this way too.

Properties

The general rule for the identity of two $n$-tuples is

$(a_1, a_2, \ldots, a_n) = (b_1, b_2, \ldots, b_n)$ if and only if $a_1=b_1,\ a_2=b_2,\ \ldots,\ a_n=b_n$.

Thus a tuple has properties that distinguish it from a set:

1. A tuple may contain multiple instances of the same element, so the tuple $(1,2,2,3) \neq (1,2,3)$; but the set $\{1,2,2,3\} = \{1,2,3\}$.
2. Tuple elements are ordered: the tuple $(1,2,3) \neq (3,2,1)$, but the set $\{1,2,3\} = \{3,2,1\}$.
3. A tuple has a finite number of elements, while a set or a multiset may have an infinite number of elements.

Definitions

There are several definitions of tuples that give them the properties described in the previous section.

Tuples as functions

The 0-tuple may be identified as the empty function. For $n \geq 1$, the $n$-tuple $(a_1, \ldots, a_n)$ may be identified with the (surjective) function

$F : \{1, \ldots, n\} \to \{a_1, \ldots, a_n\}$

with domain $\{1, \ldots, n\}$ and codomain $\{a_1, \ldots, a_n\}$, defined at $i \in \{1, \ldots, n\}$ by $F(i) := a_i$. That is, $F$ is the function defined by

$1 \mapsto a_1, \quad \ldots, \quad n \mapsto a_n,$

in which case the equality

$(a_1, a_2, \dots, a_n) = (F(1), F(2), \dots, F(n))$

necessarily holds.

Tuples as sets of ordered pairs: Functions are commonly identified with their graphs, which is a certain set of ordered pairs. Indeed, many authors use graphs as the definition of a function. Using this definition of "function", the above function $F$ can be defined as

$F := \{(1, a_1), (2, a_2), \ldots, (n, a_n)\}.$

Tuples as nested ordered pairs

Another way of modeling tuples in set theory is as nested ordered pairs. This approach assumes that the notion of ordered pair has already been defined.

1. The 0-tuple (i.e. the empty tuple) is represented by the empty set $\emptyset$.
2. An $n$-tuple, with $n > 0$, can be defined as an ordered pair of its first entry and an $(n-1)$-tuple (which contains the remaining entries when $n > 1$):

$(a_1, a_2, a_3, \ldots, a_n) = (a_1, (a_2, a_3, \ldots, a_n))$

This definition can be applied recursively to the $(n-1)$-tuple:

$(a_1, a_2, a_3, \ldots, a_n) = (a_1, (a_2, (a_3, (\ldots, (a_n, \emptyset)\ldots))))$

Thus, for example:

$(1, 2, 3) = (1, (2, (3, \emptyset)))$
$(1, 2, 3, 4) = (1, (2, (3, (4, \emptyset))))$

A variant of this definition starts "peeling off" elements from the other end:

1. The 0-tuple is the empty set $\emptyset$.
2. For $n > 0$:

$(a_1, a_2, a_3, \ldots, a_n) = ((a_1, a_2, a_3, \ldots, a_{n-1}), a_n)$

This definition can be applied recursively:

$(a_1, a_2, a_3, \ldots, a_n) = ((\ldots(((\emptyset, a_1), a_2), a_3), \ldots), a_n)$

Thus, for example:

$(1, 2, 3) = (((\emptyset, 1), 2), 3)$
$(1, 2, 3, 4) = ((((\emptyset, 1), 2), 3), 4)$

Tuples as nested sets

Using Kuratowski's representation for an ordered pair, the second definition above can be reformulated in terms of pure set theory:

1. The 0-tuple (i.e. the empty tuple) is represented by the empty set $\emptyset$;
2. Let $x$ be an $n$-tuple $(a_1, a_2, \ldots, a_n)$, and let $x \rightarrow b \equiv (a_1, a_2, \ldots, a_n, b)$. Then, $x \rightarrow b \equiv \{\{x\}, \{x, b\}\}$. (The right arrow, $\rightarrow$, could be read as "adjoined with".)

In this formulation:

$() = \emptyset$
$(1) = () \rightarrow 1 = \{\{()\}, \{(), 1\}\} = \{\{\emptyset\}, \{\emptyset, 1\}\}$
$(1, 2) = (1) \rightarrow 2 = \{\{(1)\}, \{(1), 2\}\}$
$(1, 2, 3) = (1, 2) \rightarrow 3 = \{\{(1, 2)\}, \{(1, 2), 3\}\}$

$n$-tuples of $m$-sets

In discrete mathematics, especially combinatorics and finite probability theory, $n$-tuples arise in the context of various counting problems and are treated more informally as ordered lists of length $n$. $n$-tuples whose entries come from a set of $m$ elements are also called arrangements with repetition, permutations of a multiset and, in some non-English literature, variations with repetition. The number of $n$-tuples of an $m$-set is $m^n$. This follows from the combinatorial rule of product. If $S$ is a finite set of cardinality $m$, this number is the cardinality of the $n$-fold Cartesian power $S \times S \times \cdots \times S$. Tuples are elements of this product set.

Type theory

In type theory, commonly used in programming languages, a tuple has a product type; this fixes not only the length, but also the underlying types of each component. Formally:

$(x_1, x_2, \ldots, x_n) : \mathsf{T}_1 \times \mathsf{T}_2 \times \ldots \times \mathsf{T}_n$

and the projections are term constructors:

$\pi_1(x) : \mathsf{T}_1,~\pi_2(x) : \mathsf{T}_2,~\ldots,~\pi_n(x) : \mathsf{T}_n$

The tuple with labeled elements used in the relational model has a record type. Both of these types can be defined as simple extensions of the simply typed lambda calculus. The notion of a tuple in type theory and that in set theory are related in the following way: if we consider the natural model of a type theory, and use the Scott brackets to indicate the semantic interpretation, then the model consists of some sets $S_1, S_2, \ldots, S_n$ (note: the use of italics here that distinguishes sets from types) such that the product type is interpreted as the Cartesian product $S_1 \times S_2 \times \ldots \times S_n$, and the interpretation of a tuple term is the tuple of the interpretations of its components. The $n$-tuple of type theory thus has the natural interpretation as an $n$-tuple of set theory (Steve Awodey, From sets, to types, to categories, to sets, 2009, preprint). The unit type has as semantic interpretation the 0-tuple.

See also

* Arity
* Coordinate vector
* Exponential object
* Formal language
* OLAP: Multidimensional Expressions
* Prime k-tuple
* Relation (mathematics)
* Sequence
* Tuplespace

Sources

* Keith Devlin, The Joy of Sets. Springer Verlag, 2nd ed., 1993, pp. 7–8
* Abraham Adolf Fraenkel, Yehoshua Bar-Hillel, Azriel Lévy, Foundations of Set Theory, Elsevier Studies in Logic Vol. 67, 2nd Edition, revised, 1973, p. 33
* Gaisi Takeuti, W. M. Zaring, Introduction to Axiomatic Set Theory, Springer GTM 1, 1971, p. 14
* George J. Tourlakis, Lecture Notes in Logic and Set Theory. Volume 2: Set Theory, Cambridge University Press, 2003, pp. 182–193
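As a quick practical illustration of the properties listed above (repetition matters, order matters) and of the nested-pair encoding, here is a short snippet using Python's built-in tuples (any language with tuples would do):

```python
# Tuples may repeat elements and are ordered; sets are not.
assert (1, 2, 2, 3) != (1, 2, 3)
assert {1, 2, 2, 3} == {1, 2, 3}

assert (1, 2, 3) != (3, 2, 1)
assert {1, 2, 3} == {3, 2, 1}

# The "nested ordered pairs" encoding of (1, 2, 3), peeling from the left,
# with the empty tuple () standing in for the empty set.
nested = (1, (2, (3, ())))
print(nested)                    # (1, (2, (3, ())))

# A 0-tuple (empty tuple) and a 1-tuple (note the trailing comma).
empty, single = (), (1,)
print(len(empty), len(single))   # 0 1
```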
# All Questions 143 views ### Does looking into a mirror relieve eye strain in a similar way to looking at distant objects? Sometimes when having an eye exam the eye chart is viewed through a mirror to increase the distance between the person and the chart. When using consoles, it is recommended to look away from the ... 158 views ### How do memories come up for no apparent reason? Is this evidence that we remember everything? As I was driving, all of a sudden the name "Holden Caufield" came to my mind. It sounded really familiar. I googled the name and it was the main character in The Catcher in the Rye. The last time I ... 1k views ### What is the optimal length of a training session? When a practice session is too long, there will presumably be a point where no further significant gains can be made without a break. At what point will this be? Update: This question was originally ... 93 views ### Neuroplasticity and Treatment of Depression After reading 'The Brain That Changes Itself' by Norman Doidge, 'The Mind and The Brain' by Jeffrey Schwartz and a few other books, I've become curious about the science of neuroplasticity, which, as ... 210 views ### Comprehensive list of cognitive techniques in CBT I'm not a cognitive science student, but I'm interested in CBT (Cognitive Behavior Therapy) and I'm using it to overcome some of the problems I have. However, I have difficulty in finding resources ... 162 views ### How much information on the “Identical Strangers” experiment was actually released? I am in the process of researching the Nature vs. Nurture debate. While I was searching for articles on it, I discovered something known as the "Identical Strangers" experiment (that probably was ... 64 views ### Split Brain Experiments on Animals I was wondering what would happen if you took a split brain person and stimulated the pleasure center of one hemisphere and the pain center of the other hemisphere in conjunction with an external ... 261 views ### How does personality moderate or mediate the relationship between stress and burnout? My research is on work related stress and burnout amongst healthcare professionals and the role that personality play in either reducing or increasing levels of stress and burnout. I'm using the big ... 62 views ### Can a strong lack of empathy be considered a disorder? Premise from Wikipedia: More recently, popular science writer and Psychologist Daniel Goleman has drawn on social neuroscience research to propose that social intelligence is made up of social ... 77 views ### What are ways to explore metamathematics from a cognitive science/neuroscience lens? What are ways to explore metamathematics from a cognitive science/neuroscience lens to understand the evolution of mathematics based on structural and perceptual processing biases introduced due to ... 23 views ### How is individual morality linked to psychological profiling? This is the third in a series of question about morality. The first question : Which morals are universal across the species? The second question: What is the psychology of morality? How can an ... 31 views ### What is the psychology of morality? This is the second question in a series of three questions on morality. The first question Which morals are universal across the species? The third question How is individual morality linked to ... 4k views ### Does writing something down help memorize it? This is a question inspired by this recent question on the Chinese Language & Usage website. 
Someone asked why they needed to learn how to write Chinese characters, since today we mostly use ... 6k views ### Why do humans have sex in private? Human couples usually have sex in private, hidden not only from predators, but also - other humans. It is unlike behavior of most species, including our relatives: bonobos, chimpanzees and gorillas. ... 927 views ### Is The magical number 7 still valid? George A. Miller published "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information" in 1956 and is one of the most highly cited papers in psychology. It ... 1k views ### Why do higher incentives lead to lower performance for non-rudimentary tasks? I have watched a number of times this excellent video where Dan Pink discusses the science of motivation. The video states that the higher incentives, the lower performance for non-rudimentary (not ... 755 views ### Is there experimental support for John Perry's “Theory of Structured Procrastination”? John Perry's theory of structured procrastination can be summed up as follows: Some people are inherently predisposed to be procrastinators across a wide range of domains Such procrastinators are ... 218 views ### How can I use gamification to encourage people to complete workplace training? I am looking for ways to improve the likelihood people will perform mundane but required workplace training. I am looking into gamification techniques. My organisation requires that employees ... 891 views ### Effect of words highlighting on reading comprehension I'm interested if there are studies dealing with text understanding and POS (part of speech) coloring, or coloring syntactic/semantic information. The studies should solve the questions like: Which ... 5k views ### Does evidence support Maslow's Hierarchy of Needs? Maslow's Heirarchy of Needs (shown below) is a popular concept and is often taught in basic psychology courses, and often less objectively taught in Business and Marketing courses. A common problem ... 408 views ### Medium-term effects of polyphasic sleep on performance Typical sleep patterns of one big block of 6 to 9 hours with no naps is usually referred to as monophasic sleep. A second natural sleep pattern is biphasic sleep which breaks up your sleep into two ... 417 views ### How long does it take to read a sentence with X number of characters? How does the time needed to read a sentence scale with the number of characters? Or does this time scaling depend on something more than just character count? For example, let $X$ be the number of ... 2k views ### Is the Neanderthal Theory of Autistic brain a reasonable scientific theory? I've recently learned about The Neanderthal Theory, that explains autistic (and especially Asperger's) brain functioning as the effect of genetical similarity with Neanderthals. The author gives a ... 168 views ### Why does being in a natural environment induce some kind of “peace” state while mecha/tech ones induce the opposite? Usually, natural environments (being out in "nature") tend to induce a happy and peaceful state, while mechanical and technological ones tend to induce stress and sadness. Why does this happen? Are ... 541 views ### Can response time be incorporated into signal detection theory? In signal detection theory, one typically uses "signal" and "no signal" responses to analyze the data (that is, the analysis is based on a discrete choice for each trial, effectively generating the ... 758 views ### Why are people inclined to praise or fear the unknown? 
Human beings are inclined to "praise" the unknown, and are often afraid of the unknown. This inclination has led to the creation of mythology and many gods. To this date we are still carrying this ... 270 views ### How are newly created neurons recruited into existing networks? As far as I understand, the basics of neurogenesis (abstracted down to the level that makes sense to a computer scientist) is as follows: Neural progenitor cells differentiate into new neurons that ... 261 views ### What factors improve mood and increase cognitive functioning when people wake up? I'm a software engineer doing some research in order to figure out if developing some applications are worth the effort. This work is applied to computational devices including mobile devices and even ... 1k views ### Are there any cognitive test (or test suites) available on the iPad? I find the IPad to be a great piece of hardware that is easy to bring along and that has an intuitive touch interface. This would make it an ideal platform for many cognitive tests such as n-back. As ... 234 views ### References for biologically plausible models of knowledge representation? I'm looking for references that deal with the issue of how various kinds of semantic knowledge are (or might be) represented neurally. Most of the discussion of this topic seems skewed by social ... 340 views ### What is the effect of imagining doing an exercise on muscle growth? I recently read a study by Yue, G et al. (1992) which I found incredibly interesting. I have no background in physiology and I was hoping that someone who does can clear things up. If I understood it ... 379 views ### Introductory resources on bayesian modeling for cognitive sciences On Cross Validated there is a great question about best introductory books for bayesian statistics. Also, Jeromy Anglim blogged recently about use of JAGS, rjags, and Bayesian Modelling, with some ... 1k views ### What is the psychology behind trolling? For those new to the internet, trolling is an activity where one person intentionally tries to upset other members of the same community, presumably for entertainment. This has been informally ... 910 views ### Do people wake up faster with inconsistent alarm sounds? Many people have one alarm clock sound that wakes them up every morning. Is having this consistent sound the optimal way to wake someone up? Or can you startle someone faster by changing to a ... 2k views ### How to analyze reaction times and accuracy together? Is there a good way to analyze reaction times and accuracy together, other than MANOVA? I have data from an experiment in which participants had to respond to stimuli in two different conditions in a ... 275 views ### Why is training better when following an easy-to-difficult schedule? As suggested in the answer to this question, experimental results show that training is most effective when it follows an easy-to-difficult schedule. What theories and specifically computational ... 791 views ### Is it possible to improve reading speed and visual comprehension by doing exercises? Background I'm trying to capture detailed information from the images in my visual memory, mainly text. My daily life requires reading many documents on varying topics. I want to increase my reading ... 148 views ### Have the core CBT “Thinking Errors” been found to NOT be effective in evidence-based clinical practice? This is his chart of the ten "thinking errors." It's taken from The Feeling Good Handbook. Table 3–1. 
Definitions of Cognitive Distortions ALL-OR-NOTHING THINKING: You see things in black-and-white ... 455 views ### What is the most complex artificial neural network created to date? A few years ago I wrote a research paper for college on neural networks. At the time IBM's Blue Brain was the clear winner. Some rumor went around that they were close to emulating a brain the ... 412 views ### What part of the brain locks up when a man is in the presence of an extremely attractive female? This question is specifically about the male brain and its lack of cognition in the presence of a highly attractive female. This is in contrast to regaining cognitive ability in the presence of an ... 420 views ### Where is the visual “image” that we “see” finally assembled? David Hubel's online book, Eye, Brain and Vision describes in great detail our early visual system. The image that we are conscious of when we open our eyes goes through a complex path: The final ... 442 views ### Why do we prefer visually aligned objects? We all know visual alignment is one of the foundations of design. Everything must be aligned with everything else. We also know that when things are aligned it is easier to process information. My ... 377 views ### Longitudinal mobile mood tracking app with random reminders The goal is to take simple measurements of mood using Likert scale over an extended period of time (e.g. two months). I know there is a large number of mobile apps for tracking mood on every ... 265 views ### What are the minimal requirements for successful gamification? I am very interested in the concept of Gamification, the idea (used here on Stack Exchange) that by making mundane tasks into a game, you can elicit desired behavior from users of software. (For ... 659 views ### Does super-intelligence necessary lead to consciousness, self awareness, freewill or emotion After seeing this talk, the question popped in my mind. The idea is that as soon as a system is complex enough or intelligent enough, it able to act on its own. It seems to be a common belief. For ... 283 views ### Does dream recall disturb the processes of memory consolidation? Psychology in the time of Freud was occupied with dreams. Relaying these to one's analyst was an important part of treatment. Fast-forward to less than 100 years later, and we know so much about the ... 365 views ### Why is storytelling an effective way to transmit information between people? Parables, fables, myths, whatever you might call them, stories have always been part of human consciousness. Within recent decades, storytelling is recognized as a big component of advertising and ... 1k views ### Why are most people not persuaded by rational arguments? The sentence could look as a provocation for many people, but the thoughts are not always rational and linear. But why this happens? Is it possible to give a short answer? "Counter to what you might ...
# Quantum echo dynamics in the Sherrington-Kirkpatrick model ### Submission summary As Contributors: Silvia Pappalardi · Anatoli Polkovnikov Arxiv Link: https://arxiv.org/abs/1910.04769v3 (pdf) Date accepted: 2020-07-31 Date submitted: 2020-07-28 09:07 Submitted by: Pappalardi, Silvia Submitted to: SciPost Physics Academic field: Physics Specialties: Quantum Physics Approach: Theoretical ### Abstract Understanding the footprints of chaos in quantum-many-body systems has been under debate for a long time. In this work, we study the echo dynamics of the Sherrington-Kirkpatrick (SK) model with transverse field under effective time reversal. We investigate numerically its quantum and semiclassical dynamics. We explore how chaotic many-body quantum physics can lead to exponential divergence of the echo of observables and we show that it is a result of three requirements: i) the collective nature of the observable, ii) a properly chosen initial state and iii) the existence of a well-defined chaotic semi-classical (large-$N$) limit. Under these conditions, the echo grows exponentially up to the Ehrenfest time, which scales logarithmically with the number of spins $N$. In this regime, the echo is well described by the semiclassical (truncated Wigner) approximation. We also discuss a short-range version of the SK model, where the Ehrenfest time does not depend on $N$ and the quantum echo shows only polynomial growth. Our findings provide new insights on scrambling and echo dynamics and how to observe it experimentally. ### Ontology / Topics See full Ontology or Topics database. Published as SciPost Phys. 9, 021 (2020) Dear Editor, thank you for handling our submission. We apologize for the delay in this resubmission. We are pleased to thank the referees for recommending the publication of the paper on SciPost Physics. In this resubmission, we believe to have addressed the final comments and suggestions raised by Referee 2. Yours sincerely, Silvia Pappalardi, Anatoli Polkovnikov, and Alessandro Silva ## Reply to the Report of Prof. Jalabert: We thank the referee for the comments and careful reading. Here, we reply to the corresponding points, specifying the changes included in the resubmitted manuscript. The referee writes: In answering the point 1 of my original report, the authors claim that the echo observable $\mu(t)$ can be referred to as OTOC, like “all the time-dependent correlation functions which have an unusual time-ordering”. I was not questioning the “legal right” that the authors claim to have in choosing such nomenclature. But it’s just not common sense to induce such confusion by using a name that means something different for almost all practitioners. If the authors were to define the product of an arbitrary even or an odd number of operators at various times, would they also call this object OTOC? They dubbed “square commutator” what everybody else calls OTOC. But such a choice is also questionable within their line of thought, since “square commutator” does not carry the notion of time, and could eventually refer to the square of any pair of commutators (not necessarily at different times). Our response: Following the suggestion of the referee, we have changed the sentences where we refer to the echo $\mu(t)$ as the OTOC, but we kept the one in the introduction where we say contains an OTOC. 
In fact, while we agree about the fact that all practitioners define $\langle \hat B(t) \hat A(0) \hat B(t) \hat A(0)\rangle$ as the OTOC, we still believe that objects such as $\langle \hat B(t) \hat A(0) \hat B(t) \rangle$ lie in the same class, sharing the unusual time-ordering and a similar classical limit. More precisely, OTOCs are defined as multi-point and multi-time correlation functions (of three or more operators) which cannot be represented on a single Keldysh contour, following the work by Aleiner, Faoro, and Ioffe, Annals of Physics (2016). They are characterized by an unusual time-ordering which prevents them from appearing in standard causal response functions. In order to be more accurate, we have added this definition in the introduction of the revised manuscript. Furthermore, for the particular initial state we are considering, i.e. $\hat A|\psi_0 \rangle = \alpha_0 |{\psi_0}\rangle$, the OTOC appearing in the echo $\mu(t)$ is equal to the "standard" OTOC up to a constant factor, i.e. $$\langle \hat B(t) \hat A(0) \hat B(t) \rangle = \frac 1{\alpha_0} \langle \hat B(t) \hat A(0) \hat B(t) \hat A(0)\rangle$$ To conclude, we would like to reiterate that our motivation for choosing the echo $\mu(t)$ is very similar to what is discussed by Boris Fine and collaborators in Refs. [23-24]. There, they propose the echo as a simple and easy way to measure observables and to encode irreversible dynamics via some particular OTOC.

The referee writes: Concerning my points 2 and 3, the authors state in the manuscript and in the response that “whenever the classical limit is chaotic” … the square commutator, and then $\mu(t)$, are expected to grow exponentially in time because they encode “the square of the derivatives of the classical trajectory to respect to the initial conditions”. However, a quantum system might have a classical analogue, while the derivatives of the classical trajectory with respect to the initial condition might not be exponential because the dynamics are not fully chaotic. Therefore, having a classical analogue is not a sufficient condition for observing an intermediate time-window of exponential growth. Moreover, it is not obvious to me the identification made between a system having a semi-classical analogue and the existence of a classical limit for the “square commutator” (the latter based on the validity of the saddle point mean-field approximation ensured by the large N-limit). Some spin $1/2$ chains are shown to fulfil the second definition, but none of them fulfils the first one.

Our response: We thank the referee for raising this point. More precisely, we meant that every time the classical limit is well defined, all quantum observables (including the echo $\mu(t)$) shall display semi-classical dynamics before $t_{\text{Ehr}}$. At this point, if the classical limit is sufficiently chaotic, i.e. the derivatives of the trajectory with respect to the initial conditions immediately grow exponentially fast, then the quantum echo observable should also reproduce such exponential growth. In the example of the SK model under consideration, the classical limit is fully chaotic, as witnessed by the exponential growth of the echo within TWA. Hence, one could predict that the quantum $\mu(t)$ should grow exponentially fast in time before $t_{\text{Ehr}}$. Following the comment of the referee, in the revised manuscript, we have specified *chaotic* when referring to the semi-classical limit as a condition for the exponential growth.
Let us now briefly comment on the second part of the referee's observation. As the referee points out, quantum spin 1/2 chains with short-range interactions typically have neither a classical limit nor a semiclassical correspondence for the square-commutator. However, in the case we are studying, the validity of the semiclassical limit is ensured by the presence of long-range interactions and, in particular, of the large $N$ limit. Even if this is not an "obvious" classical limit (as it would be for large $S$ spin chains), the large $N$ limit is nevertheless enough to have a semi-classical description and semi-classical dynamics. Physically, this follows from the fact that each spin is subject to a slowly changing magnetic field generated by interactions with many other spins. So effectively each spin evolves in an external field, for which the semiclassical description is exact. This semiclassical behaviour is manifested both in the dynamics of local observables (Figures 1 and 2) and in more complicated observables, like the echo (Figure 4). This has been shown numerically in Section 5 and analytically in Appendix B. The referee writes: The tutorials about the Bopp formalism, included in Sec. 5 of the manuscript, in Appendix A, and in the response to Referee I, could be indeed helpful. But, other than the correction later implemented by one of the authors, I notice that in the paragraph before Eq. (23), $\alpha$ and $\beta$ are not boson operators, but complex phase-space variables, that in Ref. [70] "Vacational" has to be changed to "Variational", and that Ref. [71] deals with the non-linear $\sigma$-model. Our response: We would like to thank the referee for pointing out these mistakes. We have corrected them in the resubmitted version of the manuscript.
### List of changes
- We added the definition of OTOC in the introduction as "multi-point and multi-time correlation functions which cannot be represented on a single Keldysh contour".
- We have specified *chaotic* when referring to the semi-classical limit as a condition for the exponential growth.
- We have corrected the typos in the references [70-71].
- We implemented the corrections to the Bopp formalism.
# Are Fourier series of length 2 'asymmetric enough' to generate all crossing patterns? - A reformulation of the Fourier-(1,1,2) knot question Given $N$ pairs of distinct real numbers $t_i, t'_i \in [0,1]$, $i = 1,\ldots,N$, we ask if there is a function $f(x) = \cos(2\pi mx+\alpha) + \gamma\cdot \cos(2\pi nx+\beta)$, with $m, n \in \mathbb{N}$, $\alpha, \beta, \gamma \in \mathbb{R}$, so that for all $i$: $f(t_i) > f(t'_i)$? If yes, this proves that all knots are Fourier-(1,1,2) knots, that is, possess a parametrization with Fourier series of length 1,1,2 in coordinates $x,y,z$. Because: a) the parametrization in the x-y plane is rich enough to generate all knots (see Lamm, 1998) and b) by interchanging $t_i$ and $t'_i$ every crossing pattern can be achieved. As references see the articles http://arxiv.org/abs/q-alg/9711013 (Kauffman, 'Fourier knots', 1997) http://arxiv.org/abs/1210.4543 (Lamm, 'Fourier knots', 1998) http://arxiv.org/abs/0707.4210 (Boocher et. al, 'Sampling Lissajous and Fourier knots', 2007) http://arxiv.org/abs/0708.3590 (Hoste, 'Torus knots are Fourier-(1,1,2) knots', 2007) If the answer is no, we ask more generally if functions with a bounded number of cosine terms suffice (e.g. with bound 3). We remark that Fourier-(1,1,1) knots are Lissajous knots. These are too symmetric to yield all knots (see also Tying knots with reflecting lightrays). Edit1 (12Feb14). We give an example with 4 pairs of distinct numbers for which a single cosine function $\cos(2\pi mx+\alpha)$ does not suffice: Choose $t_1, t'_1$ and $t_2, t'_2$ arbitrarily (but distinct) in $[0,1]$. Let $t_3 = t_1 + 0.5$, $t'_3 = t'_1 + 0.5$ and $t_4 = t'_2 + 0.5$, $t'_4 = t_2 + 0.5$ (note the interchanged ' in $t_2$). We then have: if $f(t_1) > f(t'_1)$ then for odd $m$: $f(t_3) < f(t'_3)$ and for even $m$: $f(t_4) < f(t'_4)$. This shows how the symmetries $\cos(x+2\pi) = \cos(x)$ and $\cos(x+\pi) = -\cos(x)$ prevent a solution in a Fourier series of length 1. The same argument applies for a series of length two (or more) if the frequencies are all odd or all even. Edit2 (09July15). The article http://arxiv.org/abs/1507.00880 (Marc Soret, Marina Ville) solves the Fourier-(1,1,2) knot problem using Kronecker's theorem! For almost all $(t_1, t_2, \ldots, t_N, t'_1, t'_2, \ldots, t'_N)$, just one Fourier term suffices. As I will explain below, I think this good enough for your goal of showing that all knots are Fourier-$(1,1,2)$. Claim Suppose that $t_1$, $t_2$, ..., $t_N$, $t'_1$, $t'_2$, ..., $t'_N$ and $\pi$ are linearly independent over $\mathbb{Q}$. Then there is an integer $n$ so that $\sin(n t_i) < \sin(n t'_i)$ for all $i$. Proof We write $\{ x \}$ for the fractional part $x - \lfloor x \rfloor$. We use Kronecker's theorem on diophantine approximation to conclude that the fractional parts $(\{n t_1/2\pi \} , \{n t_2/2\pi \}, \ldots, \{n t_N/2\pi \}, \{ n t'_1/2 \pi \}, \ldots, \{n t'_N/2 \pi \})$ are dense in the torus $(\mathbb{R}/\mathbb{Z})^{2N}$. In particularly, there is some $n$ so that $$0 < \{n t_1/2\pi \} < \{n t_2/2\pi \} < \cdots < \{n t_N/2\pi \} < \{n t'_1/2\pi \} < \{n t'_2/2\pi \} < \cdots < \{n t'_N/2\pi \} < 1/4$$ and so $$0 < \sin(n t_1) < \sin(n t_2) < \cdots < \sin(n t_N) < \sin(n t'_1) < \cdots < \sin(n t'_N) < 1. \quad \square$$ I went to look at your first paper on Fourier knots to see whether this is good enough to prove that all knots are $(1,1,1)$. 
The answer is no, because the checkerboard diagrams you create do not give values of $t_i$ and $t'_i$ which are linearly independent over $\mathbb{Q}$. For example, you make a checker-board for the trefoil by $$x = \cos(2t + 6) \quad y = \cos(3t + 0.15),$$ and the nodes of this checkerboard are at $$(t,t') = (-0.05+\pi/2, -0.05-\pi/2), (-0.05+\pi/2+\pi/3, -0.05-\pi/2+\pi/3), (-0.05+\pi/2+2 \pi/3, -0.05-\pi/2+ 2 \pi/3), (-3+ \pi/3, -3 - \pi/3), (-3+ 2\pi/3, -3 - 2\pi/3), (-3+\pi/3+\pi/2, -3-\pi/3+\pi/2), (-3+2\pi/3+\pi/2, -3-2\pi/3+\pi/2)$$ which are very obviously dependent. However, suppose you take one of your checkerboard parameterizations $(\cos(jt+\alpha), \cos(kt + \beta))$ and replace it by $(r \cos(jt+\alpha)+\epsilon, s \cos(kt + \beta))$ for some very small transcendental $\epsilon$. This won't change the topology of the checkerboard, and I bet (no proof) it will make the $t$ values of the nodes become linearly independent. So this would show that every knot can be parametrized as $(r \sin(jt+\alpha)+\epsilon, s \sin(kt + \beta), \sin(n t))$. UPDATE This doesn't work, see comments below. I continue to believe some small modification of this strategy could work, but I don't have a concrete proposal at the current time. • Thank you for your answer. It highlights that a deformation method can be useful because then Kronecker's theorem applies. This was used successfully in the proof of Daniel Pecker that every knot is a billiard knot in an elliptic prism (= deformation of the too symmetric situation of cylinders), see also the MO discussion on Tying knots with reflecting lightrays. Note, however, that the idea of scaling and adding a small constant seems not to work: if the t-values of intersections are calculated by setting equal two such terms, the small constant and the scaling will cancel. Feb 11 '14 at 8:02 • You're right, the small constant isn't good enough. Feb 12 '14 at 14:37 If all $t_i$, $t'_i$ are rational numbers with common denominator $d$, then it is enough to examine frequencies $m, n \le d$. After David's answer I decided that it could be worthwhile to look for counterexamples for small $d$ and used an optimization routine to minimize the sum over $\max(0,f(t'_i)-f(t_i))$ for each combination of $m, n$. Counterexamples seem to exist for 10 or more pairs. I found the following for $d=20$ and $d=23$: [3,13,2,12,16,18,8,9,10,19;1,17,14,7,4,15,20,1,6,5]/20 [7,9,5,4,14,16,10,20,6,15;17,19,8,12,3,1,18,2,11,13]/20 [9,8,12,19,1,16,18,10,5,17;7,13,4,20,6,21,2,14,11,15]/23 This should be read as $t_1=3/20$, $t'_1=1/20, \ldots$ for the first example. Open questions: a) Show that the above examples are indeed counterexamples for Fourier series of length 2. b) Can you find counterexamples with less than 10 pairs? c) Study the question for Fourier series of length 3. I would like to add that in a numerical study a small tolerance should be introduced: e.g. sum over $\max(0,f(t'_i)-f(t_i)+10^{-6})$ to avoid functions which are zero at $t_i$, $t'_i$. These occur because of the identity $\cos(x)-\cos(y)=-2 \sin(\frac{x+y}{2})\sin(\frac{x-y}{2})$ for frequencies with $m+n=d$ and $\gamma=-1$. Update (6Mar14): I found potential counterexamples with 8, 7 and 6 pairs: [10,16,11,3,9,20,15,6;12,4,19,5,1,2,13,8]/20, [12,5,10,14,4,11,7;8,1,3,6,2,9,13]/14, [10,11,3,2,1,7;8,9,4,5,6,12]/12.
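For readers who want to reproduce the search described above, here is a minimal sketch of the idea (my own reconstruction, not the code actually used): for each frequency pair $(m,n)$ with $m,n \le d$, minimise $\sum_i \max(0, f(t'_i)-f(t_i)+10^{-6})$ over $\alpha, \beta, \gamma$ from several random starts; a minimum that stays above zero for every $(m,n)$ supports the candidate being a counterexample, though it does not prove it.

```python
# Sketch of the counterexample search; pairs taken from the first d = 20 list above.
import numpy as np
from scipy.optimize import minimize

t  = np.array([3, 13, 2, 12, 16, 18, 8, 9, 10, 19]) / 20.0
tp = np.array([1, 17, 14, 7, 4, 15, 20, 1, 6, 5]) / 20.0

def penalty(params, m, n):
    alpha, beta, gamma = params
    f = lambda x: np.cos(2*np.pi*m*x + alpha) + gamma*np.cos(2*np.pi*n*x + beta)
    return np.sum(np.maximum(0.0, f(tp) - f(t) + 1e-6))

rng = np.random.default_rng(1)
best = np.inf
for m in range(1, 21):
    for n in range(m, 21):
        for start in rng.normal(size=(5, 3)):
            res = minimize(penalty, start, args=(m, n), method="Nelder-Mead")
            best = min(best, res.fun)
print("smallest penalty found:", best)   # stays > 0 if this is indeed a counterexample
```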
Number theory (or arithmetic or higher arithmetic in older usage) is a branch of pure mathematics devoted primarily to the study of the integers and integer-valued functions. German mathematician Carl Friedrich Gauss (1777–1855) said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics." (German original: "Die Mathematik ist die Königin der Wissenschaften, und die Arithmetik ist die Königin der Mathematik.") Number theorists study prime numbers as well as the properties of mathematical objects made out of integers (for example, rational numbers) or defined as generalizations of the integers (for example, algebraic integers). Integers can be considered either in themselves or as solutions to equations (Diophantine geometry). Questions in number theory are often best understood through the study of analytical objects (for example, the Riemann zeta function) that encode properties of the integers, primes or other number-theoretic objects in some fashion (analytic number theory). One may also study real numbers in relation to rational numbers, for example, as approximated by the latter (Diophantine approximation). The older term for number theory is ''arithmetic''. By the early twentieth century, it had been superseded by "number theory". (Already in 1921, T. L. Heath had to explain: "By arithmetic, Plato meant, not arithmetic in our sense, but the science which considers numbers in themselves, in other words, what we mean by the Theory of Numbers.") (The word "arithmetic" is used by the general public to mean "elementary calculations"; it has also acquired other meanings in mathematical logic, as in ''Peano arithmetic'', and computer science, as in ''floating point arithmetic''.) The use of the term ''arithmetic'' for ''number theory'' regained some ground in the second half of the 20th century, arguably in part due to French influence. In 1952, Davenport still had to specify that he meant ''The Higher Arithmetic''. Hardy and Wright wrote in the introduction to ''An Introduction to the Theory of Numbers'' (1938): "We proposed at one time to change [the title] to ''An introduction to arithmetic'', a more novel and in some ways a more appropriate title; but it was pointed out that this might lead to misunderstandings about the content of the book." In particular, ''arithmetical'' is commonly preferred as an adjective to ''number-theoretic''. History Origins Dawn of arithmetic The earliest historical find of an arithmetical nature is a fragment of a table: the broken clay tablet Plimpton 322 (Larsa, Mesopotamia, ca. 1800 BCE) contains a list of "Pythagorean triples", that is, integers $\left(a,b,c\right)$ such that $a^2+b^2=c^2$. The triples are too many and too large to have been obtained by brute force. The heading over the first column reads: "The ''takiltum'' of the diagonal which has been subtracted such that the width..." The table's layout suggests that it was constructed by means of what amounts, in modern language, to the identity $$\left(\frac{1}{2}\left(x - \frac{1}{x}\right)\right)^2 + 1 = \left(\frac{1}{2}\left(x + \frac{1}{x}\right)\right)^2,$$ which is implicit in routine Old Babylonian exercises. If some other method was used, the triples were first constructed and then reordered by $c/a$, presumably for actual use as a "table", for example, with a view to applications.
It is not known what these applications may have been, or whether there could have been any; Babylonian astronomy, for example, truly came into its own only later. It has been suggested instead that the table was a source of numerical examples for school problems. (This is controversial; see Plimpton 322. Robson's article is written polemically with a view to "perhaps ... knocking [Plimpton 322] off its pedestal"; at the same time, it settles to the conclusion that the question "how was the tablet calculated?" does not have to have the same answer as the question "what problems does the tablet set?" The first can be answered most satisfactorily by reciprocal pairs, as first suggested half a century ago, and the second by some sort of right-triangle problems. Robson takes issue with the notion that the scribe who produced Plimpton 322 (who had to "work for a living", and would not have belonged to a "leisured middle class") could have been motivated by his own "idle curiosity" in the absence of a "market for new mathematics".) While Babylonian number theory—or what survives of Babylonian mathematics that can be called thus—consists of this single, striking fragment, Babylonian algebra (in the secondary-school sense of "algebra") was exceptionally well developed. Late Neoplatonic sources (Iamblichus, ''Life of Pythagoras''; see also Porphyry, ''Life of Pythagoras'', paragraph 6) state that Pythagoras learned mathematics from the Babylonians; Van der Waerden sustains the view that Thales knew Babylonian mathematics. Much earlier sources (Herodotus II. 81 and Isocrates, ''Busiris'' 28) state that Thales and Pythagoras traveled and studied in Egypt. (On Thales, see Eudemus ap. Proclus, 65.7; Proclus was using a work by Eudemus of Rhodes, now lost, the ''Catalogue of Geometers''.) Euclid IX 21–34 is very probably Pythagorean; it is very simple material ("odd times even is even", "if an odd number measures [divides] an even number, then it also measures [divides] half of it"), but it is all that is needed to prove that $\sqrt{2}$ is irrational. Pythagorean mystics gave great importance to the odd and the even. The discovery that $\sqrt{2}$ is irrational is credited to the early Pythagoreans (pre-Theodorus). (Plato, ''Theaetetus'', p. 147 B: "Theodorus was writing out for us something about roots, such as the roots of three or five, showing that they are incommensurable by the unit;..." ''See also'' Spiral of Theodorus.) By revealing (in modern terms) that numbers could be irrational, this discovery seems to have provoked the first foundational crisis in mathematical history; its proof or its divulgation are sometimes credited to Hippasus, who was expelled or split from the Pythagorean sect. This forced a distinction between ''numbers'' (integers and the rationals—the subjects of arithmetic), on the one hand, and ''lengths'' and ''proportions'' (which we would identify with real numbers, whether rational or not), on the other hand. The Pythagorean tradition spoke also of so-called polygonal or figurate numbers. While square numbers, cubic numbers, etc., are seen now as more natural than triangular numbers, pentagonal numbers, etc., the study of the sums of triangular and pentagonal numbers would prove fruitful in the early modern period (17th to early 19th century).
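A small sketch of how the identity quoted above produces Pythagorean triples (my own illustration of the standard "reciprocal pairs" reading, not a claim about how the tablet was actually computed): take $x = p/q$ and clear denominators.

```python
# From ((x - 1/x)/2)^2 + 1 = ((x + 1/x)/2)^2 with x = p/q, scaling by (2pq)^2
# gives the integer triple (p^2 - q^2, 2pq, p^2 + q^2).
from fractions import Fraction

def triple_from_reciprocal_pair(p, q):
    x = Fraction(p, q)
    assert ((x - 1/x) / 2) ** 2 + 1 == ((x + 1/x) / 2) ** 2   # the identity itself
    return p*p - q*q, 2*p*q, p*p + q*q

for p, q in [(2, 1), (3, 2), (9, 4), (64, 27)]:
    a, b, c = triple_from_reciprocal_pair(p, q)
    print((a, b, c), a*a + b*b == c*c)   # e.g. (65, 72, 97) for (9, 4)
```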
We know of no clearly arithmetical material in ancient Egyptian or Vedic sources, though there is some algebra in both. The Chinese remainder theorem appears as an exercise in ''Sunzi Suanjing'' (3rd, 4th or 5th century CE). (The date of the text has been narrowed down to 220–420 CE (Yan Dunjie) or 280–473 CE (Wang Ling) through internal evidence, namely the taxation systems assumed in the text.) There is one important step glossed over in Sunzi's solution; it is the problem that was later solved by Āryabhaṭa's kuṭṭaka – see below. ''Sunzi Suanjing'', Ch. 3, Problem 26: "Now there are an unknown number of things. If we count by threes, there is a remainder 2; if we count by fives, there is a remainder 3; if we count by sevens, there is a remainder 2. Find the number of things. ''Answer'': 23. ''Method'': If we count by threes and there is a remainder 2, put down 140. If we count by fives and there is a remainder 3, put down 63. If we count by sevens and there is a remainder 2, put down 30. Add them to obtain 233 and subtract 210 to get the answer. If we count by threes and there is a remainder 1, put down 70. If we count by fives and there is a remainder 1, put down 21. If we count by sevens and there is a remainder 1, put down 15. When [the number] exceeds 106, the result is obtained by subtracting 105." There is also some numerical mysticism in Chinese mathematics, but, unlike that of the Pythagoreans, it seems to have led nowhere. (See, for example, ''Sunzi Suanjing'', Ch. 3, Problem 36: "Now there is a pregnant woman whose age is 29. If the gestation period is 9 months, determine the sex of the unborn child. ''Answer'': Male. ''Method'': Put down 49, add the gestation period and subtract the age. From the remainder take away 1 representing the heaven, 2 the earth, 3 the man, 4 the four seasons, 5 the five phases, 6 the six pitch-pipes, 7 the seven stars [of the Dipper], 8 the eight winds, and 9 the nine divisions [of China under Yu the Great]. If the remainder is odd, [the sex] is male and if the remainder is even, [the sex] is female." This is the last problem in Sunzi's otherwise matter-of-fact treatise.) Like the Pythagoreans' perfect numbers, magic squares have passed from superstition into recreation. Classical Greece and the early Hellenistic period Aside from a few fragments, the mathematics of Classical Greece is known to us either through the reports of contemporary non-mathematicians or through mathematical works from the early Hellenistic period. In the case of number theory, this means, by and large, ''Plato'' and ''Euclid'', respectively. While Asian mathematics influenced Greek and Hellenistic learning, it seems to be the case that Greek mathematics is also an indigenous tradition. (Eusebius, PE X, chapter 4, mentions Pythagoras: "In fact the said Pythagoras, while busily studying the wisdom of each nation, visited Babylon, and Egypt, and all Persia, being instructed by the Magi and the priests: and in addition to these he is related to have studied under the Brahmans (these are Indian philosophers); and from some he gathered astrology, from others geometry, and arithmetic and music from others, and different things from different nations, and only from the wise men of Greece did he get nothing, wedded as they were to a poverty and dearth of wisdom: so on the contrary he himself became the author of instruction to the Greeks in the learning which he had procured from abroad.")
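As an aside, Sunzi's remainder problem quoted above (remainders 2, 3, 2 modulo 3, 5, 7) is easy to replay in a few lines; the sketch below is purely illustrative and simply follows the stated method with the constants 70, 21, 15 and the reduction by 105.

```python
# Sunzi Suanjing, Ch. 3, Problem 26: n ≡ 2 (mod 3), n ≡ 3 (mod 5), n ≡ 2 (mod 7).
brute = next(n for n in range(1, 106) if n % 3 == 2 and n % 5 == 3 and n % 7 == 2)

n = 2*70 + 3*21 + 2*15        # "put down" 140, 63 and 30, i.e. 233
while n > 105:                # reduce modulo 105, as the quoted method prescribes
    n -= 105
print(brute, n)               # 23 23
```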
Aristotle claimed that the philosophy of Plato closely followed the teachings of the Pythagoreans, and Cicero repeats this claim: ''Platonem ferunt didicisse Pythagorea omnia'' ("They say Plato learned all things Pythagorean"). Plato had a keen interest in mathematics, and distinguished clearly between arithmetic and calculation. (By ''arithmetic'' he meant, in part, theorising on number, rather than what ''arithmetic'' or ''number theory'' have come to mean.) It is through one of Plato's dialogues—namely, ''Theaetetus''—that we know that Theodorus had proven that $\sqrt{3}, \sqrt{5}, \dots, \sqrt{17}$ are irrational. Theaetetus was, like Plato, a disciple of Theodorus's; he worked on distinguishing different kinds of incommensurables, and was thus arguably a pioneer in the study of number systems. (Book X of Euclid's Elements is described by Pappus as being largely based on Theaetetus's work.) Euclid devoted part of his ''Elements'' to prime numbers and divisibility, topics that belong unambiguously to number theory and are basic to it (Books VII to IX of Euclid's Elements). In particular, he gave an algorithm for computing the greatest common divisor of two numbers (the Euclidean algorithm; ''Elements'', Prop. VII.2) and the first known proof of the infinitude of primes (''Elements'', Prop. IX.20). In 1773, Lessing published an epigram he had found in a manuscript during his work as a librarian; it claimed to be a letter sent by Archimedes to Eratosthenes. The epigram proposed what has become known as Archimedes's cattle problem; its solution (absent from the manuscript) requires solving an indeterminate quadratic equation (which reduces to what would later be misnamed Pell's equation). As far as we know, such equations were first successfully treated by the Indian school. It is not known whether Archimedes himself had a method of solution. [Image: Diophantus's ''Arithmetica'' in the edition translated by Claude Gaspard Bachet de Méziriac.] Very little is known about Diophantus of Alexandria; he probably lived in the third century CE, that is, about five hundred years after Euclid. Six out of the thirteen books of Diophantus's ''Arithmetica'' survive in the original Greek and four more survive in an Arabic translation. The ''Arithmetica'' is a collection of worked-out problems where the task is invariably to find rational solutions to a system of polynomial equations, usually of the form $f\left(x,y\right)=z^2$ or $f\left(x,y,z\right)=w^2$. Thus, nowadays, we speak of ''Diophantine equations'' when we speak of polynomial equations to which rational or integer solutions must be found. One may say that Diophantus was studying rational points—that is, points whose coordinates are rational—on curves and algebraic varieties; however, unlike the Greeks of the Classical period, who did what we would now call basic algebra in geometrical terms, Diophantus did what we would now call basic algebraic geometry in purely algebraic terms. In modern language, what Diophantus did was to find rational parametrizations of varieties; that is, given an equation of the form (say) $f\left(x_1,x_2,x_3\right)=0$, his aim was to find (in essence) three rational functions $g_1, g_2, g_3$ such that, for all values of $r$ and $s$, setting $x_i = g_i\left(r,s\right)$ for $i=1,2,3$ gives a solution to $f\left(x_1,x_2,x_3\right)=0.$ Diophantus also studied the equations of some non-rational curves, for which no rational parametrisation is possible.
He managed to find some rational points on these curves (elliptic curves, as it happens, in what seems to be their first known occurrence) by means of what amounts to a tangent construction: translated into coordinate geometry (which did not exist in Diophantus's time), his method would be visualised as drawing a tangent to a curve at a known rational point, and then finding the other point of intersection of the tangent with the curve; that other point is a new rational point. (Diophantus also resorted to what could be called a special case of a secant construction.) While Diophantus was concerned largely with rational solutions, he assumed some results on integer numbers, in particular that every integer is the sum of four squares (though he never stated as much explicitly). Āryabhaṭa, Brahmagupta, Bhāskara While Greek astronomy probably influenced Indian learning, to the point of introducing trigonometry, it seems to be the case that Indian mathematics is otherwise an indigenous tradition;Any early contact between Babylonian and Indian mathematics remains conjectural . in particular, there is no evidence that Euclid's Elements reached India before the 18th century. Āryabhaṭa (476–550 CE) showed that pairs of simultaneous congruences $n\equiv a_1 \bmod m_1$, $n\equiv a_2 \bmod m_2$ could be solved by a method he called ''kuṭṭaka'', or ''pulveriser''; this is a procedure close to (a generalisation of) the Euclidean algorithm, which was probably discovered independently in India. Āryabhaṭa seems to have had in mind applications to astronomical calculations. Brahmagupta (628 CE) started the systematic study of indefinite quadratic equations—in particular, the misnamed Pell equation, in which Archimedes may have first been interested, and which did not start to be solved in the West until the time of Fermat and Euler. Later Sanskrit authors would follow, using Brahmagupta's technical terminology. A general procedure (the chakravala, or "cyclic method") for solving Pell's equation was finally found by Jayadeva (cited in the eleventh century; his work is otherwise lost); the earliest surviving exposition appears in Bhāskara II's Bīja-gaṇita (twelfth century). Indian mathematics remained largely unknown in Europe until the late eighteenth century; Brahmagupta and Bhāskara's work was translated into English in 1817 by Henry Colebrooke. Arithmetic in the Islamic golden age In the early ninth century, the caliph Al-Ma'mun ordered translations of many Greek mathematical works and at least one Sanskrit work (the ''Sindhind'', which may or may not, and , cited in . be Brahmagupta's Brāhmasphuṭasiddhānta). Diophantus's main work, the ''Arithmetica'', was translated into Arabic by Qusta ibn Luqa (820–912). Part of the treatise ''al-Fakhri'' (by al-Karajī, 953 – ca. 1029) builds on it to some extent. According to Rashed Roshdi, Al-Karajī's contemporary Ibn al-Haytham knew what would later be called Wilson's theorem. Western Europe in the Middle Ages Other than a treatise on squares in arithmetic progression by Fibonacci—who traveled and studied in north Africa and Constantinople—no number theory to speak of was done in western Europe during the Middle Ages. Matters started to change in Europe in the late Renaissance, thanks to a renewed study of the works of Greek antiquity. A catalyst was the textual emendation and translation into Latin of Diophantus' ''Arithmetica''. 
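To make the ''kuṭṭaka'' discussion above concrete, here is a modern extended-Euclid sketch (not Āryabhaṭa's own tabular procedure) that solves a pair of simultaneous congruences with coprime moduli; the Euclidean algorithm of ''Elements'' VII.2 is the engine inside `extended_gcd`.

```python
# Solve n ≡ a1 (mod m1), n ≡ a2 (mod m2) for coprime m1, m2.
def extended_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_pair(a1, m1, a2, m2):
    g, p, q = extended_gcd(m1, m2)       # p*m1 + q*m2 = g
    assert g == 1, "moduli must be coprime"
    # q*m2 ≡ 1 (mod m1) and p*m1 ≡ 1 (mod m2), so:
    return (a1 * q * m2 + a2 * p * m1) % (m1 * m2)

print(solve_pair(2, 3, 3, 5))   # 8: indeed 8 ≡ 2 (mod 3) and 8 ≡ 3 (mod 5)
```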
Early modern number theory Fermat Pierre de Fermat (1607–1665) never published his writings; in particular, his work on number theory is contained almost entirely in letters to mathematicians and in private marginal notes. In his notes and letters, he scarcely wrote any proofs; he had no models in the area. Over his lifetime, Fermat made the following contributions to the field:
* One of Fermat's first interests was perfect numbers (which appear in Euclid, ''Elements'' IX) and amicable numbers. (Perfect and especially amicable numbers are of little or no interest nowadays. The same was not true in medieval times—whether in the West or the Arab-speaking world—due in part to the importance given to them by the Neopythagorean (and hence mystical) Nicomachus (ca. 100 CE), who wrote a primitive but influential "Introduction to Arithmetic".) These topics led him to work on integer divisors, which were from the beginning among the subjects of the correspondence (1636 onwards) that put him in touch with the mathematical community of the day.
* In 1638, Fermat claimed, without proof, that all whole numbers can be expressed as the sum of four squares or fewer.
* Fermat's little theorem (1640): if ''a'' is not divisible by a prime ''p'', then $a^{p-1} \equiv 1 \bmod p$. (Here, as usual, given two integers ''a'' and ''b'' and a non-zero integer ''m'', we write $a \equiv b \bmod m$ (read "''a'' is congruent to ''b'' modulo ''m''") to mean that ''m'' divides ''a'' − ''b'', or, what is the same, ''a'' and ''b'' leave the same residue when divided by ''m''. This notation is actually much later than Fermat's; it first appears in section 1 of Gauss's Disquisitiones Arithmeticae. Fermat's little theorem is a consequence of the fact that the order of an element of a group divides the order of the group. The modern proof would have been within Fermat's means (and was indeed given later by Euler), even though the modern concept of a group came long after Fermat or Euler. It helps to know that inverses exist modulo ''p'': that is, given ''a'' not divisible by a prime ''p'', there is an integer ''x'' such that $x a \equiv 1 \bmod p$; this fact (which, in modern language, makes the residues mod ''p'' into a group, and which was already known to Āryabhaṭa; see above) was familiar to Fermat thanks to its rediscovery by Bachet. Weil goes on to say that Fermat would have recognised that Bachet's argument is essentially Euclid's algorithm.)
* If ''a'' and ''b'' are coprime, then $a^2 + b^2$ is not divisible by any prime congruent to −1 modulo 4; and every prime congruent to 1 modulo 4 can be written in the form $a^2 + b^2$. These two statements also date from 1640; in 1659, Fermat stated to Huygens that he had proven the latter statement by the method of infinite descent.
* In 1657, Fermat posed the problem of solving $x^2 - N y^2 = 1$ as a challenge to English mathematicians. The problem was solved in a few months by Wallis and Brouncker. Fermat considered their solution valid, but pointed out they had provided an algorithm without a proof (as had Jayadeva and Bhaskara, though Fermat wasn't aware of this). He stated that a proof could be found by infinite descent.
* Fermat stated and proved (by infinite descent) in the appendix to ''Observations on Diophantus'' (Obs. XLV) that $x^4 + y^4 = z^4$ has no non-trivial solutions in the integers. Fermat also mentioned to his correspondents that $x^3 + y^3 = z^3$ has no non-trivial solutions, and that this could also be proven by infinite descent.
The first known proof is due to Euler (1753; indeed by infinite descent). * Fermat claimed ("Fermat's last theorem") to have shown there are no solutions to $x^n + y^n = z^n$ for all $n\geq 3$; this claim appears in his annotations in the margins of his copy of Diophantus. Euler ]] The interest of Leonhard Euler (1707–1783) in number theory was first spurred in 1729, when a friend of his, the amateurUp to the second half of the seventeenth century, academic positions were very rare, and most mathematicians and scientists earned their living in some other way . (There were already some recognisable features of professional ''practice'', viz., seeking correspondents, visiting foreign colleagues, building private libraries . Matters started to shift in the late 17th century ; scientific academies were founded in England (the Royal Society, 1662) and France (the Académie des sciences, 1666) and Russia (1724). Euler was offered a position at this last one in 1726; he accepted, arriving in St. Petersburg in 1727 ( and ). In this context, the term ''amateur'' usually applied to Goldbach is well-defined and makes some sense: he has been described as a man of letters who earned a living as a spy ; cited in ). Notice, however, that Goldbach published some works on mathematics and sometimes held academic positions. Goldbach, pointed him towards some of Fermat's work on the subject. This has been called the "rebirth" of modern number theory, after Fermat's relative lack of success in getting his contemporaries' attention for the subject. Euler's work on number theory includes the following: *''Proofs for Fermat's statements.'' This includes Fermat's little theorem (generalised by Euler to non-prime moduli); the fact that $p = x^2 + y^2$ if and only if $p\equiv 1 \bmod 4$; initial work towards a proof that every integer is the sum of four squares (the first complete proof is by Joseph-Louis Lagrange (1770), soon improved by Euler himself); the lack of non-zero integer solutions to $x^4 + y^4 = z^2$ (implying the case ''n=4'' of Fermat's last theorem, the case ''n=3'' of which Euler also proved by a related method). *''Pell's equation'', first misnamed by Euler.. Euler was generous in giving credit to others , not always correctly. He wrote on the link between continued fractions and Pell's equation. *''First steps towards analytic number theory.'' In his work of sums of four squares, partitions, pentagonal numbers, and the distribution of prime numbers, Euler pioneered the use of what can be seen as analysis (in particular, infinite series) in number theory. Since he lived before the development of complex analysis, most of his work is restricted to the formal manipulation of power series. He did, however, do some very notable (though not fully rigorous) early work on what would later be called the Riemann zeta function. *''Quadratic forms''. Following Fermat's lead, Euler did further research on the question of which primes can be expressed in the form $x^2 + N y^2$, some of it prefiguring quadratic reciprocity. *''Diophantine equations''. Euler worked on some Diophantine equations of genus 0 and 1. In particular, he studied Diophantus's work; he tried to systematise it, but the time was not yet ripe for such an endeavour—algebraic geometry was still in its infancy. He did notice there was a connection between Diophantine problems and elliptic integrals, whose study he had himself initiated. 
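Several of the statements above (Fermat's little theorem and the two-squares criterion) are easy to spot-check by machine; the sketch below is illustration only, not a proof of either statement.

```python
from math import isqrt

# Fermat's little theorem: a^(p-1) ≡ 1 (mod p) for p prime and p not dividing a.
primes = [3, 5, 7, 11, 13, 97]
print(all(pow(a, p - 1, p) == 1 for p in primes for a in range(1, p)))   # True

# Sums of two squares: brute-force search for p = a^2 + b^2.
def two_squares(p):
    for a in range(isqrt(p) + 1):
        b = isqrt(p - a * a)
        if a * a + b * b == p:
            return a, b
    return None

print([(p, two_squares(p)) for p in [5, 13, 29, 97, 101]])   # all ≡ 1 (mod 4): found
print([(p, two_squares(p)) for p in [7, 11, 19, 23]])        # all ≡ 3 (mod 4): None
```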
Lagrange, Legendre, and Gauss 's Disquisitiones Arithmeticae, first edition]] Joseph-Louis Lagrange (1736–1813) was the first to give full proofs of some of Fermat's and Euler's work and observations—for instance, the four-square theorem and the basic theory of the misnamed "Pell's equation" (for which an algorithmic solution was found by Fermat and his contemporaries, and also by Jayadeva and Bhaskara II before them.) He also studied quadratic forms in full generality (as opposed to $m X^2 + n Y^2$)—defining their equivalence relation, showing how to put them in reduced form, etc. Adrien-Marie Legendre (1752–1833) was the first to state the law of quadratic reciprocity. He also conjectured what amounts to the prime number theorem and Dirichlet's theorem on arithmetic progressions. He gave a full treatment of the equation $a x^2 + b y^2 + c z^2 = 0$ and worked on quadratic forms along the lines later developed fully by Gauss. In his old age, he was the first to prove "Fermat's last theorem" for $n=5$ (completing work by Peter Gustav Lejeune Dirichlet, and crediting both him and Sophie Germain). In his ''Disquisitiones Arithmeticae'' (1798), Carl Friedrich Gauss (1777–1855) proved the law of quadratic reciprocity and developed the theory of quadratic forms (in particular, defining their composition). He also introduced some basic notation (congruences) and devoted a section to computational matters, including primality tests. The last section of the ''Disquisitiones'' established a link between roots of unity and number theory: The theory of the division of the circle...which is treated in sec. 7 does not belong by itself to arithmetic, but its principles can only be drawn from higher arithmetic. In this way, Gauss arguably made a first foray towards both Évariste Galois's work and algebraic number theory. Maturity and division into subfields ]] ]] Starting early in the nineteenth century, the following developments gradually took place: * The rise to self-consciousness of number theory (or ''higher arithmetic'') as a field of study. * The development of much of modern mathematics necessary for basic modern number theory: complex analysis, group theory, Galois theory—accompanied by greater rigor in analysis and abstraction in algebra. * The rough subdivision of number theory into its modern subfields—in particular, analytic and algebraic number theory. Algebraic number theory may be said to start with the study of reciprocity and cyclotomy, but truly came into its own with the development of abstract algebra and early ideal theory and valuation theory; see below. A conventional starting point for analytic number theory is Dirichlet's theorem on arithmetic progressions (1837), whose proof introduced L-functions and involved some asymptotic analysis and a limiting process on a real variable. The first use of analytic ideas in number theory actually goes back to Euler (1730s), who used formal power series and non-rigorous (or implicit) limiting arguments. The use of ''complex'' analysis in number theory comes later: the work of Bernhard Riemann (1859) on the zeta function is the canonical starting point; Jacobi's four-square theorem (1839), which predates it, belongs to an initially different strand that has by now taken a leading role in analytic number theory (modular forms). The history of each subfield is briefly addressed in its own section below; see the main article of each subfield for fuller treatments. 
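A numerical spot-check (no substitute for Gauss's proof) of the law of quadratic reciprocity mentioned above, with Legendre symbols computed via Euler's criterion:

```python
def legendre(a, p):                      # Euler's criterion: (a|p) ≡ a^((p-1)/2) (mod p)
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29]
ok = all(legendre(p, q) * legendre(q, p) == (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
         for p in odd_primes for q in odd_primes if p != q)
print(ok)   # True: (p|q)(q|p) = (-1)^((p-1)/2 * (q-1)/2)
```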
Many of the most interesting questions in each area remain open and are being actively worked on. Main subdivisions Elementary number theory The term ''elementary'' generally denotes a method that does not use complex analysis. For example, the prime number theorem was first proven using complex analysis in 1896, but an elementary proof was found only in 1949 by Erdős and Selberg. The term is somewhat ambiguous: for example, proofs based on complex Tauberian theorems (for example, Wiener–Ikehara) are often seen as quite enlightening but not elementary, in spite of using Fourier analysis, rather than complex analysis as such. Here as elsewhere, an ''elementary'' proof may be longer and more difficult for most readers than a non-elementary one. Number theory has the reputation of being a field many of whose results can be stated to the layperson. At the same time, the proofs of these results are not particularly accessible, in part because the range of tools they use is, if anything, unusually broad within mathematics. Analytic number theory [Figure: the Riemann zeta function ζ(''s'') in the complex plane; the color of a point ''s'' gives the value of ζ(''s''): dark colors denote values close to zero and the hue gives the value's argument.] ''Analytic number theory'' may be defined
* in terms of its tools, as the study of the integers by means of tools from real and complex analysis; or
* in terms of its concerns, as the study within number theory of estimates on size and density, as opposed to identities.
Some subjects generally considered to be part of analytic number theory, for example, sieve theory (sieve theory figures as one of the main subareas of analytic number theory in many standard treatments), are better covered by the second rather than the first definition: some of sieve theory, for instance, uses little analysis (this is the case for small sieves, in particular some combinatorial sieves such as the Brun sieve, rather than for large sieves; the study of the latter now includes ideas from harmonic and functional analysis), yet it does belong to analytic number theory. The following are examples of problems in analytic number theory: the prime number theorem, the Goldbach conjecture (or the twin prime conjecture, or the Hardy–Littlewood conjectures), the Waring problem and the Riemann hypothesis. Some of the most important tools of analytic number theory are the circle method, sieve methods and L-functions (or, rather, the study of their properties). The theory of modular forms (and, more generally, automorphic forms) also occupies an increasingly central place in the toolbox of analytic number theory. One may ask analytic questions about algebraic numbers, and use analytic means to answer such questions; it is thus that algebraic and analytic number theory intersect. For example, one may define prime ideals (generalizations of prime numbers in the field of algebraic numbers) and ask how many prime ideals there are up to a certain size. This question can be answered by means of an examination of Dedekind zeta functions, which are generalizations of the Riemann zeta function, a key analytic object at the roots of the subject. This is an example of a general procedure in analytic number theory: deriving information about the distribution of a sequence (here, prime ideals or prime numbers) from the analytic behavior of an appropriately constructed complex-valued function.
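To give a feel for the kind of estimate analytic number theory is after, here is a tiny sketch comparing the prime-counting function $\pi(x)$ with the rough size $x/\ln x$ predicted by the prime number theorem (illustrative only; the sieve is the naïve one).

```python
from math import log

def count_primes(n):                     # sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    return sum(sieve)

for x in (10**3, 10**4, 10**5, 10**6):
    pi_x = count_primes(x)
    print(x, pi_x, round(x / log(x)), round(pi_x / (x / log(x)), 3))
```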
Algebraic number theory An ''algebraic number'' is any complex number that is a solution to some polynomial equation $f\left(x\right)=0$ with rational coefficients; for example, every solution $x$ of $x^5 + \left(11/2\right) x^3 - 7 x^2 + 9 = 0$ (say) is an algebraic number. Fields of algebraic numbers are also called ''algebraic number fields'', or shortly ''number fields''. Algebraic number theory studies algebraic number fields. Thus, analytic and algebraic number theory can and do overlap: the former is defined by its methods, the latter by its objects of study. It could be argued that the simplest kind of number fields (viz., quadratic fields) were already studied by Gauss, as the discussion of quadratic forms in ''Disquisitiones arithmeticae'' can be restated in terms of ideals and norms in quadratic fields. (A ''quadratic field'' consists of all numbers of the form $a + b \sqrt{d}$, where $a$ and $b$ are rational numbers and $d$ is a fixed rational number whose square root is not rational.) For that matter, the 11th-century chakravala method amounts—in modern terms—to an algorithm for finding the units of a real quadratic number field. However, neither Bhāskara nor Gauss knew of number fields as such. The grounds of the subject as we know it were set in the late nineteenth century, when ''ideal numbers'', the ''theory of ideals'' and ''valuation theory'' were developed; these are three complementary ways of dealing with the lack of unique factorisation in algebraic number fields. (For example, in the field generated by the rationals and $\sqrt{-5}$, the number $6$ can be factorised both as $6 = 2 \cdot 3$ and $6 = \left(1 + \sqrt{-5}\right) \left( 1 - \sqrt{-5}\right)$; all of $2$, $3$, $1 + \sqrt{-5}$ and $1 - \sqrt{-5}$ are irreducible, and thus, in a naïve sense, analogous to primes among the integers.) The initial impetus for the development of ideal numbers (by Kummer) seems to have come from the study of higher reciprocity laws, that is, generalisations of quadratic reciprocity. Number fields are often studied as extensions of smaller number fields: a field ''L'' is said to be an ''extension'' of a field ''K'' if ''L'' contains ''K''. (For example, the complex numbers ''C'' are an extension of the reals ''R'', and the reals ''R'' are an extension of the rationals ''Q''.) Classifying the possible extensions of a given number field is a difficult and partially open problem. Abelian extensions—that is, extensions ''L'' of ''K'' such that the Galois group Gal(''L''/''K'') of ''L'' over ''K'' is an abelian group—are relatively well understood. (The Galois group of an extension ''L/K'' consists of the operations (isomorphisms) that send elements of ''L'' to other elements of ''L'' while leaving all elements of ''K'' fixed. Thus, for instance, Gal(''C''/''R'') consists of two elements: the identity element (taking every element ''x'' + ''iy'' of ''C'' to itself) and complex conjugation (the map taking each element ''x'' + ''iy'' to ''x'' − ''iy''). The Galois group of an extension tells us many of its crucial properties. The study of Galois groups started with Évariste Galois; in modern language, the main outcome of his work is that an equation ''f''(''x'') = 0 can be solved by radicals (that is, ''x'' can be expressed in terms of the four basic operations together with square roots, cubic roots, etc.) if and only if the extension of the rationals by the roots of the equation ''f''(''x'') = 0 has a Galois group that is solvable in the sense of group theory. "Solvable", in the sense of group theory, is a simple property that can be checked easily for finite groups.)
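Returning to the factorisation example above: a quick, purely illustrative check in $\mathbf{Z}[\sqrt{-5}]$ uses the multiplicative norm $N(a+b\sqrt{-5}) = a^2 + 5b^2$. Since no element has norm 2 or 3, none of $2$, $3$, $1\pm\sqrt{-5}$ (norms 4, 9, 6, 6) can factor non-trivially, so all four are irreducible even though $2\cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5}) = 6$.

```python
def norm(a, b):                     # N(a + b*sqrt(-5)) = a^2 + 5*b^2, multiplicative
    return a * a + 5 * b * b

print(norm(2, 0), norm(3, 0), norm(1, 1), norm(1, -1))   # 4 9 6 6

# any element of norm 2 or 3 would need 5*b^2 <= 3, hence b = 0 and a^2 in {2, 3}:
attained = {norm(a, 0) for a in range(0, 3)}
print(2 in attained, 3 in attained)                      # False False
```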
The classification of abelian extensions was the object of the programme of class field theory, which was initiated in the late 19th century (partly by Kronecker and Eisenstein) and carried out largely in 1900–1950. An example of an active area of research in algebraic number theory is Iwasawa theory. The Langlands program, one of the main current large-scale research plans in mathematics, is sometimes described as an attempt to generalise class field theory to non-abelian extensions of number fields. Diophantine geometry The central problem of ''Diophantine geometry'' is to determine when a Diophantine equation has solutions, and if it does, how many. The approach taken is to think of the solutions of an equation as a geometric object. For example, an equation in two variables defines a curve in the plane. More generally, an equation, or system of equations, in two or more variables defines a curve, a surface or some other such object in ''n''-dimensional space. In Diophantine geometry, one asks whether there are any ''rational points'' (points all of whose coordinates are rationals) or ''integral points'' (points all of whose coordinates are integers) on the curve or surface. If there are any such points, the next step is to ask how many there are and how they are distributed. A basic question in this direction is whether there are finitely or infinitely many rational points on a given curve (or surface). In the Pythagorean equation $x^2+y^2 = 1,$ we would like to study its rational solutions, that is, its solutions $\left(x,y\right)$ such that ''x'' and ''y'' are both rational. This is the same as asking for all integer solutions to $a^2 + b^2 = c^2$; any solution to the latter equation gives us a solution $x = a/c$, $y = b/c$ to the former. It is also the same as asking for all points with rational coordinates on the curve described by $x^2 + y^2 = 1$. (This curve happens to be a circle of radius 1 around the origin.) [Figure: an elliptic curve, that is, a curve of genus 1 having at least one rational point; either graph can be seen as a slice of a torus in four-dimensional space.] The rephrasing of questions on equations in terms of points on curves turns out to be felicitous. The finiteness or not of the number of rational or integer points on an algebraic curve—that is, rational or integer solutions to an equation $f\left(x,y\right)=0$, where $f$ is a polynomial in two variables—turns out to depend crucially on the ''genus'' of the curve.
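Before the genus is defined, here is a sketch of the genus-0 case just discussed: lines of rational slope $t$ through $(-1,0)$ sweep out all rational points of the circle $x^2+y^2=1$, and clearing denominators turns them into integer solutions of $a^2+b^2=c^2$. (This is a standard construction, added here for illustration.)

```python
from fractions import Fraction

def point_on_circle(t):             # second intersection of y = t(x + 1) with the unit circle
    t = Fraction(t)
    return (1 - t*t) / (1 + t*t), 2*t / (1 + t*t)

for t in (Fraction(1, 2), Fraction(2, 3), Fraction(3, 7)):
    x, y = point_on_circle(t)
    print((x, y), x*x + y*y == 1)   # rational points: (3/5, 4/5), (5/13, 12/13), (20/29, 21/29)
```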
The ''genus'' can be defined as follows: allow the variables in $f\left(x,y\right)=0$ to be complex numbers; then $f\left(x,y\right)=0$ defines a 2-dimensional surface in (projective) 4-dimensional space (since two complex variables can be decomposed into four real variables, that is, four dimensions). If we count the number of (doughnut) holes in the surface, we call this number the ''genus'' of $f\left(x,y\right)=0$. (For example, if we want to study the curve $y^2 = x^3 + 7$, we allow ''x'' and ''y'' to be complex numbers: $\left(a + b i\right)^2 = \left(c + d i\right)^3 + 7$. This is, in effect, a set of two equations on four variables, since both the real and the imaginary part on each side must match. As a result, we get a surface (two-dimensional) in four-dimensional space. After we choose a convenient hyperplane on which to project the surface (meaning that, say, we choose to ignore the coordinate ''a''), we can plot the resulting projection, which is a surface in ordinary three-dimensional space. It then becomes clear that the result is a torus, loosely speaking, the surface of a doughnut (somewhat stretched). A doughnut has one hole; hence the genus is 1.) Other geometrical notions turn out to be just as crucial. There is also the closely linked area of Diophantine approximations: given a number $x$, one asks how well it can be approximated by rationals. (We are looking for approximations that are good relative to the amount of space that it takes to write the rational: call $a/q$ (with $\gcd\left(a,q\right)=1$) a good approximation to $x$ if $|x-a/q|<\frac{1}{q^c}$, where $c$ is large.) This question is of special interest if $x$ is an algebraic number. If $x$ cannot be well approximated, then some equations do not have integer or rational solutions. Moreover, several concepts (especially that of height) turn out to be critical both in Diophantine geometry and in the study of Diophantine approximations. This question is also of special interest in transcendental number theory: if a number can be better approximated than any algebraic number, then it is a transcendental number. It is by this argument that $\pi$ and $e$ have been shown to be transcendental. Diophantine geometry should not be confused with the geometry of numbers, which is a collection of graphical methods for answering certain questions in algebraic number theory. ''Arithmetic geometry'', however, is a contemporary term for much the same domain as that covered by the term ''Diophantine geometry''. The term ''arithmetic geometry'' is arguably used most often when one wishes to emphasise the connections to modern algebraic geometry (as in, for instance, Faltings's theorem) rather than to techniques in Diophantine approximations.
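A concrete illustration of the "good approximation" notion above: the continued-fraction convergents $a/q$ of $\sqrt{2} = [1; 2, 2, 2, \dots]$ already satisfy $|\sqrt{2} - a/q| < 1/q^2$. This is a standard worked example added for illustration, not tied to any source cited in the text.

```python
from math import sqrt

a_prev, q_prev = 1, 0        # convergents of sqrt(2) = [1; 2, 2, 2, ...]
a, q = 1, 1
for _ in range(8):
    a, a_prev = 2*a + a_prev, a
    q, q_prev = 2*q + q_prev, q
    print(f"{a}/{q}: |sqrt(2) - a/q| = {abs(sqrt(2) - a/q):.2e} < 1/q^2 = {1/q**2:.2e}")
```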
Arithmetic combinatorics If we begin from a fairly "thick" infinite set $A$, does it contain many elements in arithmetic progression: $a$, $a+b, a+2 b, a+3 b, \ldots, a+10b$, say? Should it be possible to write large integers as sums of elements of $A$? These questions are characteristic of ''arithmetic combinatorics''. This is a presently coalescing field; it subsumes ''additive number theory'' (which concerns itself with certain very specific sets $A$ of arithmetic significance, such as the primes or the squares) and, arguably, some of the ''geometry of numbers'', together with some rapidly developing new material. Its focus on issues of growth and distribution accounts in part for its developing links with ergodic theory, finite group theory, model theory, and other fields. The term ''additive combinatorics'' is also used; however, the sets $A$ being studied need not be sets of integers, but rather subsets of non-commutative groups, for which the multiplication symbol, not the addition symbol, is traditionally used; they can also be subsets of rings, in which case the growth of $A+A$ and $A$·$A$ may be compared. Computational number theory [Figure: a Lehmer sieve, a primitive digital computer once used for finding primes and solving simple Diophantine equations.] While the word ''algorithm'' goes back only to certain readers of al-Khwārizmī, careful descriptions of methods of solution are older than proofs: such methods (that is, algorithms) are as old as any recognisable mathematics—ancient Egyptian, Babylonian, Vedic, Chinese—whereas proofs appeared only with the Greeks of the classical period. An interesting early case is that of what we now call the Euclidean algorithm. In its basic form (namely, as an algorithm for computing the greatest common divisor) it appears as Proposition 2 of Book VII in ''Elements'', together with a proof of correctness. However, in the form that is often used in number theory (namely, as an algorithm for finding integer solutions to an equation $a x + b y = c$, or, what is the same, for finding the quantities whose existence is assured by the Chinese remainder theorem) it first appears in the works of Āryabhaṭa (5th–6th century CE) as an algorithm called ''kuṭṭaka'' ("pulveriser"), without a proof of correctness. There are two main questions: "Can we compute this?" and "Can we compute it rapidly?" Anyone can test whether a number is prime or, if it is not, split it into prime factors; doing so rapidly is another matter. We now know fast algorithms for testing primality, but, in spite of much work (both theoretical and practical), no truly fast algorithm for factoring.
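The asymmetry just described is easy to feel in code. The sketch below (illustrative only) tests primality with a Miller–Rabin-style probabilistic test, which stays fast even for large inputs, and factors by trial division, which is fine for small numbers but hopeless at the sizes used in cryptography.

```python
import random

def probably_prime(n, rounds=20):            # Miller-Rabin probabilistic primality test
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def trial_division(n):                       # slow, but enough for small n
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    return factors + ([n] if n > 1 else [])

print(probably_prime(2**89 - 1))             # True (a Mersenne prime), and quickly
print(trial_division(2**32 + 1))             # [641, 6700417], Euler's factorisation of F_5
```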
The difficulty of a computation can be useful: modern protocols for encrypting messages (for example, RSA) depend on functions that are known to all, but whose inverses are known only to a chosen few, and would take one too long a time to figure out on one's own. For example, these functions can be such that their inverses can be computed only if certain large integers are factorized. While many difficult computational problems outside number theory are known, most working encryption protocols nowadays are based on the difficulty of a few number-theoretical problems. Some things may not be computable at all; in fact, this can be proven in some instances. For instance, in 1970, it was proven, as a solution to Hilbert's 10th problem, that there is no Turing machine which can solve all Diophantine equations. In particular, this means that, given a computably enumerable set of axioms, there are Diophantine equations for which there is no proof, starting from the axioms, of whether the set of equations has or does not have integer solutions. (We would necessarily be speaking of Diophantine equations for which there are no integer solutions, since, given a Diophantine equation with at least one solution, the solution itself provides a proof of the fact that a solution exists. We cannot prove that a particular Diophantine equation is of this kind, since this would imply that it has no solutions.) Applications The number-theorist Leonard Dickson (1874–1954) said "Thank God that number theory is unsullied by any application". Such a view is no longer applicable to number theory. In 1974, Donald Knuth said "...virtually every theorem in elementary number theory arises in a natural, motivated way in connection with the problem of making computers do high-speed numerical calculations". Elementary number theory is taught in discrete mathematics courses for computer scientists; on the other hand, number theory also has applications to the continuous in numerical analysis. As well as the well-known applications to cryptography, there are also applications to many other areas of mathematics. Prizes The American Mathematical Society awards the ''Cole Prize in Number Theory''. Moreover, number theory is one of the three mathematical subdisciplines rewarded by the ''Fermat Prize''. See also: Algebraic function field, Finite field, p-adic number.
# M26-06

Math Expert (Bunuel), 16 Sep 2014:

If $$x$$ is a positive number and equals $$\sqrt{6+{\sqrt{6+\sqrt{6+\sqrt{6+...}}}}}$$, where the given expression extends to an infinite number of roots, then what is the value of $$x$$?

A. $$\sqrt{6}$$
B. 3
C. $$1+\sqrt{6}$$
D. $$2\sqrt{3}$$
E. 6

Official Solution (Bunuel):

Given: $$x \gt 0$$ and $$x=\sqrt{6+{\sqrt{6+\sqrt{6+\sqrt{6+...}}}}}$$. Re-write it as $$x=\sqrt{6+({\sqrt{6+\sqrt{6+\sqrt{6+...})}}}}$$. As the expression under the square root extends infinitely, the expression in brackets equals $$x$$ itself, so we can safely replace it with $$x$$ and rewrite the given expression as $$x=\sqrt{6+x}$$. Square both sides: $$x^2=6+x$$. Now re-arrange and factorize: $$(x+2)(x-3)=0$$, so $$x=-2$$ or $$x=3$$, but since $$x \gt 0$$, we have $$x=3$$.

Answer: B

Senior Manager, 31 Aug 2016: I think this is a high-quality question and I agree with the explanation.

Intern (inter), 01 Jan 2017: I am missing something. The square root of 6 is about 2.45, the double square root of 6 is about 1.57, the triple square root of 6 is about 1.25. The sum of these three numbers is greater than 5. How come the answer is 3?

Intern (inter), 06 Jan 2017: Any takers?

Senior CR Moderator (nguyendinhtuong), 06 Jan 2017, replying to inter:
$$\sqrt{6} \approx 2.4495$$
$$\sqrt{6+\sqrt{6}} \approx 2.9068$$
$$\sqrt{6+\sqrt{6+\sqrt{6}}} \approx 2.98443$$
$$\sqrt{6+\sqrt{6+\sqrt{6+\sqrt{6}}}} \approx 2.9974$$

You could see that the value is getting nearer to 3.

Intern (inter), 07 Jan 2017: Thanks - for some reason I didn't sum it up before taking root.

Manager, 14 Jul 2017: I do understand this (nguyendinhtuong: "You could see that the value is getting nearer to 3"). Is this some kind of rule or something? (Bunuel: "$$x=\sqrt{6+({\sqrt{6+\sqrt{6+\sqrt{6+...})}}}}$$, as the expression under the square root extends infinitely then expression in brackets would equal to $$x$$ itself and we can safely replace it with $$x$$")

Intern, 09 Jun 2018: Hi Bunuel, can you expand upon why we can assume the infinite addition of square root 6 is equal to x? Is this because the infinite sum can never actually surpass x? Thanks in advance.
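The question raised at the end of the thread (why the bracketed tail can be replaced by $$x$$) can also be seen numerically: starting from 0, the sequence $$x_{n+1}=\sqrt{6+x_n}$$ is increasing and bounded above by 3, so it converges, and its limit must satisfy $$x=\sqrt{6+x}$$. A small sketch of that iteration (ours, not from the thread):

```haskell
-- Sketch (ours, not from the thread): iterating x -> sqrt(6 + x).
-- From 0 the sequence increases, stays below 3, and converges to the
-- fixed point x = 3, which is exactly the substitution in the solution.
approximations :: [Double]
approximations = iterate (\x -> sqrt (6 + x)) 0

main :: IO ()
main = mapM_ print (take 8 approximations)
-- 0.0, 2.449..., 2.906..., 2.984..., 2.997..., 2.9995..., ... -> 3
```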
# How does “do in parallel” work currently i'm preparing for an exam in a high performance computing course. In this course we discuss several common parallel algorithm patterns called "dwarfs". The first dwarfs we had was the "dense linear algebra", so basically matrix multiplication. In one of the first slides there is a example for a basic matrix multiplication: MatrixMult PRAM(A:Matrix[n], B:Matrix[n], C:Matrix[n]) { for i from 1 to n do in parallel for k from 1 to n do in parallel for j from 1 to n do C[i,k] = C[i,k] + A[i,j]∗B[j,k] } Nothing really fancy, just the basic matrix multiplication. But below this piece of code the slide states: • synchronized execution of the for-j-loop; → in the first step: • all processors $$P_{i,k}$$ simultaneously access A[i,1] • all processors $$P_{i,k}$$ simultaneously access B[1,k] I don't understand why all processors access these elements at the same time. I thought every processor has a different value for i and k, so every processor should access a different element. For example, n=3: 1. Time step (execute first for loop in parallel on all P): • P1 -> i = 1 • P2 -> i = 2 • P3 -> i = 3 2. Time step (execute second for loop in parallel on all P): • P1 -> k = 1 • P2 -> k = 2 • P3 -> k = 3 3. Time step (execute third for loop sequential on all P): • P1 -> j = 1, i=1, k=1 ==> A[1,1] B[1,1] • P2 -> j = 1, i=2, k=2 ==> A[2,1] B[1,2] • P3 -> j = 1, i=3, k=3 ==> A[3,1] B[1,3] 4. Time step: • P1 -> j = 2 • P2 -> j = 2 • P3 -> j = 3 5. etc.. So in the third time step every processor would access another element. So i'm not quite sure if i understand the "do in parallel" statement correct. I thought that every statement in this loop would be executed on every processor but with different index as Joseph JáJá defines the "pardo statement" (i think it's the same like "do in parallel") as follows: for l $$\leq$$ i $$\leq$$ u pardo statement The statement (which can be a sequence of statements) following the pardo depends on the index i. The statement corresponding to all the values of i between l and u are executed concurrently (from "An Introduction to parallel algorithms", Jospeh JáJá page 27) So the question is, why do all processors access the same element and how does the "do in parallel" statement work? I'm not sure if i understood this statement. • As I remember parallel computing, you shoudn't use for operation. 'For' is for not parallel computing when you access every element in array in order. In paraller computing you don't need 'for' couse you access every element in same moment. – Mr Jedi Jan 17 '15 at 16:38 • Yes, i know. And "for ... do in parallel" is exactly this kind of "access every element in same moment" (see my example with the time steps). My question is more related to the observation of the slide, that all processors access the same element. – Sven Lauterbach Jan 18 '15 at 12:59 • In fact, last "for" is not described as "... do in parallel" so j be same for every processor but code shows you access elements using two variables [j, X] so it will be different elements. – Mr Jedi Jan 18 '15 at 15:39 Two consequent parallel loops maybe joined like "product", so this code can be effectively parallelized up to $n^2$ (9 in your example) processors. Each such processor can be indexed by $i,k$ pair. all processors $P_{i,k}$ access simultaneously $A[i,1]$ means that all processors which have the first index equal to $i$ and second index equal to anything (to some $k$ but it does not restrict the cell), will simultaneously access $A[i,1]$. 
In your example you parallelize only over $i$ (the first for loop), so only one processor accesses each such cell. If you parallelize completely (over the pairs $(i,k)$), then $n$ processors access each such cell.
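To make the distinction concrete, here is a tiny enumeration (our illustration, not part of the course slides), assuming the fully parallelised version with $n^2$ processors $P_{i,k}$: at the first iteration $j=1$ of the sequential inner loop, every processor in row $i$ reads the same entry A[i,1] and every processor in column $k$ reads the same entry B[1,k], which is exactly the simultaneous (concurrent-read) access the slide describes.

```haskell
-- Sketch (ours, not from the slides): for n = 3 and the first step j = 1,
-- list which A and B entries each processor P(i,k) reads.
n :: Int
n = 3

accessesAtStep :: Int -> [((Int, Int), (Int, Int), (Int, Int))]
accessesAtStep j = [ ((i, k), (i, j), (j, k)) | i <- [1 .. n], k <- [1 .. n] ]
                   -- (processor P(i,k), index read in A, index read in B)

main :: IO ()
main = mapM_ print (accessesAtStep 1)
-- ((1,1),(1,1),(1,1)), ((1,2),(1,1),(1,2)), ((1,3),(1,1),(1,3)), ...
-- processors (1,1), (1,2), (1,3) all read A[1,1] at the same time step.
```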
# I am stuck on this one AM GM problem, help appreciated :)

Problem: Let $x^3 + ax^2 + bx + c$ be a polynomial with real nonnegative roots. Prove that $b \ge 3c^{2/3}$.

I turned the inequality into $b/3 \ge \sqrt[3]{c^2}$ so that it's clearly a use of AM-GM, but I'm not sure where to go from here. Any help or a small hint would be appreciated, thanks :)

#1: Your idea of using the inequality between the AM and GM is correct. Let $\alpha, \beta, \gamma$ be the real nonnegative roots of the given polynomial; thus $x^3+ax^2+bx+c = (x-\alpha)(x-\beta)(x-\gamma)$. By Vieta's formulas, $\alpha\beta+\beta\gamma+\gamma\alpha = b$ and $\alpha\beta\gamma = -c$ (so $(\alpha\beta\gamma)^2 = c^2$). Can you do the rest?

#2 (Guest): Ohhh thank you so much, so the three terms that were used in the AM-GM are the three things that sum to $b$. Thanks so much :)
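For completeness, one way to finish the hint (a sketch added here, not part of the original thread): apply AM-GM to the three nonnegative products $\alpha\beta$, $\beta\gamma$, $\gamma\alpha$.

```latex
% Sketch of the remaining step (not from the thread): AM-GM applied to the
% three nonnegative numbers \alpha\beta, \beta\gamma, \gamma\alpha gives
\[
  \frac{b}{3}
  \;=\; \frac{\alpha\beta + \beta\gamma + \gamma\alpha}{3}
  \;\ge\; \sqrt[3]{(\alpha\beta)(\beta\gamma)(\gamma\alpha)}
  \;=\; \sqrt[3]{(\alpha\beta\gamma)^{2}}
  \;=\; \sqrt[3]{c^{2}}
  \;=\; c^{2/3},
\]
% hence b \ge 3 c^{2/3}, with equality e.g. when \alpha = \beta = \gamma.
```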
## Dissertations from ProQuest

#### Title
A high-resolution comparison of late Quaternary upwelling records from the Cariaco Basin and Arabian Sea: Coccolith paleoecology and paleoclimatic investigations (1998, Article)

#### Degree Name
Doctor of Philosophy (Ph.D.)

#### First Committee Member
Larry C. Peterson, Committee Chair

#### Abstract
Coccoliths, the minute calcareous plates produced by photosynthetic algae, were studied in high-resolution, AMS $^{14}$C-dated sediment cores from two sites of tropical upwelling located in climatically-sensitive regions, the Cariaco Basin (Venezuela) and the Arabian Sea. The Cariaco Basin is ideally situated to record changes in the strength of upwelling induced by migration of the Intertropical Convergence Zone (ITCZ) in the tropical Atlantic, while the Arabian Sea is strongly influenced by the broader dynamics of the Indian Ocean monsoon. Scanning Electron Microscopy has revealed that two coccolith species, Emiliania huxleyi and Gephyrocapsa oceanica, dominate both assemblages for the last $\sim$30,000 years. Time series of relative abundance for these and other species show large variations that can be related to both local and regional climate changes, including the Younger Dryas cold interval, the onset of anoxia in the Cariaco Basin, and the reduced upwelling strength in the Arabian Sea during the Last Glacial Maximum.

Coccolith accumulation rate data, derived from modified direct settling techniques, complement and in some cases improve upon the relative abundance data, while also comparing favorably with modern flux estimates derived from sediment trap studies. Accumulation rates at both locations are on the order of $10^9$ to $10^{11}$ coccoliths/cm$^2$/kyr, roughly equivalent to $10^8$–$10^9$ coccoliths/m$^2$/day. Stable isotope time series from the $<38\,\mu$m sediment fraction in the Cariaco Basin compare favorably with foraminiferal $\delta^{18}$O records from the same core and provide further insights into the climatic and oceanographic history of the region. The fine-fraction $\delta^{18}$O time series from the Oman margin records large influxes of eolian carbonate dust which overwhelm the glacial-interglacial signal of the coccoliths.

Interest in coccolithophore productivity has increased in recent years because of their potential impacts on global climate as producers of DMS and CaCO$_3$. Some species also produce alkenones, which are increasingly extracted from sediments and used as indicators of paleo-sea surface temperatures. The findings from this study have implications relevant to studies of past SST variations, global carbon cycling, and aerosol-induced climate change, in addition to the more direct insights into the climate history of the tropical Atlantic and Arabian Sea.

#### Keywords
Geology; Paleoecology
# John von Neumann John von Neumann (around 1940) John von Neumann (born December 28, 1903 in Budapest , Austria-Hungary as János Lajos Neumann von Margitta ; † February 8, 1957 in Washington, DC , United States ) was a Hungarian-American mathematician . He made significant contributions to mathematical logic , functional analysis , quantum mechanics and game theory and is considered one of the fathers of computer science . He later published as Johann von Neumann ; Nowadays he is best known under his name, chosen in the USA, John von Neumann. ## life and work ### Origin and beginnings of the career János Neumann came from a Jewish banking family. His father, the royal Hungarian councilor Max Neumann, was raised to the Hungarian nobility on July 1, 1913. Even as a child, John Neumann showed that above-average intelligence that later  amazed even Nobel Prize winners - for example Eugene Paul Wigner . As a six-year-old he could divide eight-digit numbers in his head at high speed. He had an extraordinary memory that enabled him, for example, to accurately reproduce the contents of a book page after a brief glance at it. Later he was able to memorize entire books such as Goethe's Faust and thus, for example, also shine through detailed historical knowledge. He attended the humanistic German-speaking Lutheran grammar school in Budapest , as did Eugene Paul Wigner at the same time, with the Abitur in 1921. The political situation in Hungary at that time was very uncertain because of the regime of the Soviet republic of Béla Kun , in which the von Neumanns were capitalists threatened by persecution, in 1919 the reactionary anti-Semitic regime of Miklós Horthy followed. Even as a high school student, von Neumann shone through his mathematical achievements and published his first mathematical article with his teacher Michael Fekete , which he conceived when he was not quite 18 years old. Following the wishes of his parents, however, he initially studied chemical engineering in Berlin from 1921 to 1923 and then at the ETH Zurich until his diploma in 1925 . At the same time he was enrolled at the University of Budapest, but only passed the exams there. However, his real interest was always in mathematics, to which he dedicated himself to a certain extent as a "hobby". He attended mathematics courses in Berlin and those of Hermann Weyl and George Pólya at the ETH Zurich and soon attracted attention. From 1928 to 1933 von Neumann was the (youngest) private lecturer at the University of Berlin and in the summer semester of 1929 at the University of Hamburg. Before that, he worked with David Hilbert in Göttingen in 1926/1927 . Von Neumann in the course catalogs of the Friedrich-Wilhelms-Universität zu Berlin The first three excerpts come from the summer semester 1928, the fourth excerpt from the winter semester 1928/29. Well-known colleagues mentioned here were Georg Feigl , Issai Schur , Erhard Schmidt , Leó Szilárd , Heinz Hopf , Adolf Hammerstein and Ludwig Bieberbach . At the beginning of his career as a mathematician, von Neumann worked, among other things, on the development of axiomatic set theory , for which he found a new approach as a student (dissertation in Budapest 1926 with Leopold Fejér ), the Neumann-Bernays-Gödel set theory (NBG) , and with Hilbert's theory of proof . These topics were the current research area of ​​Hilbert's group in Göttingen, at that time one of the world centers of mathematics. 
His definition of ordinal numbers is now a standard: a new ordinal number is defined by the set of those already introduced. His preoccupation with mathematical logic ended when Gödel's incompleteness theorem became known , which dealt Hilbert's program a severe blow. Gödel was later a close friend and colleague of John von Neumann and Albert Einstein at Princeton. ### Work on quantum mechanics Von Neumann was also the author of the first mathematically well-thought-out book on quantum mechanics , in which he dealt with the measurement process and the thermodynamics of quantum mechanics (see density matrix , introduced by him in 1927, Von Neumann entropy , Von Neumann equation ). The then "hot" topic of rapidly developing quantum mechanics was also the main reason why he turned to functional analysis and developed the theory of linear operators in Hilbert spaces , more precisely that of the unconstrained self-adjoint operators. The mathematicians in Göttingen objected to the new quantum mechanics that the canonical commutation relations could not be fulfilled with the linear restricted operators investigated up to then . Von Neumann clarified this and at the same time made numerous other contributions to this area. However, when later Werner Heisenberg was asked whether he was not grateful to von Neumann for this, he only asked the counter-question, where was the difference between limited and unlimited. Von Neumann's book on quantum mechanics enjoyed such a reputation that even his “proof” of the impossibility of hidden variable theories, which was correct but based on false assumptions, was not questioned for a long time. To von Neumann's chagrin, however, the physicists preferred the Principles of Quantum mechanics by Paul Dirac , which was published almost simultaneously , in which the mathematical problem addressed was circumvented by introducing distributions that were initially frowned upon by mathematicians before they were also used there in the late 1940s Triumphant advance ( Laurent Schwartz ). With Eugene Wigner , von Neumann published a series of papers in 1928/29 on the application of group theory in the atomic spectra. Here, too, the enthusiasm of the physicists was subdued, there was even talk of the “group plague”, which mathematicians tried to spread in quantum mechanics. The Stone-von-Neumann-Theorem expresses the uniqueness of the canonical commutators of, for example, position and momentum operators in quantum mechanics and shows the equivalence of their two fundamental formulations by Schrödinger (wave function) and Heisenberg (matrices). His work on quantum mechanics established his reputation in America - and, not least, with a view to changing to better-paying positions in the USA, he has dealt with it so intensively. In the fall of 1929 he was invited by Oswald Veblen to come to Princeton University in New Jersey and give lectures on it, and he switched between Princeton and Germany in the years that followed. From 1933 he worked at the newly founded, demanding Institute for Advanced Study in Princeton as a professor of mathematics. Some of his colleagues there were Albert Einstein and Hermann Weyl . Like them, von Neumann emigrated permanently to the USA after Hitler came to power . ### America, game theory and math John von Neumann made outstanding contributions in many areas of mathematics . 
As early as 1928, an essay by the mathematician Émile Borel on minimax properties had led him to ideas that later resulted in one of his most original designs, game theory . In 1928 von Neumann proved the Min-Max theorem for the existence of an optimal strategy in " zero-sum games ". With the economist Oskar Morgenstern , he wrote the classic book The Theory of Games and Economic Behavior (3rd edition 1953) in 1944 , which also deals with the generalization to n-person games that is important for economics. He became the founder of game theory, which he applies less to classic games than to everyday conflict and decision-making situations with an imperfect knowledge of the opponent's intentions (as in poker). In economics, a seminar lecture from 1936 on the mathematical modeling of expanding economies is also frequently cited. In the second edition of The Theory of Games and Economic Behavior (1947), Morgenstern and von Neumann presented the Von-Neumann-Morgenstern expected utility and thus made significant contributions to utility theory . In the 1930s, in a series of works with Francis Murray , von Neumann developed a theory of algebras of bounded operators in Hilbert spaces, which Jacques Dixmier later called Von Neumann algebras . These are now a current research area (for example Alain Connes , Vaughan F. R. Jones ), which - as von Neumann predicted - has applications in physics, albeit less in quantum mechanics than in quantum field theory and quantum statistics . Von Neumann and Murray proved a classification theorem for operator algebras as a direct sum of “factors” (with a trivial center) of type I, II, III, each with subdivisions. Operator algebras were part of his search for a generalization of the quantum mechanical formalism, for he said in a letter to Garrett Birkhoff in 1935 that he would no longer believe in Hilbert dreams. Further attempts in this direction were the investigation of "lattice theory" ( theory of associations ), initially as an algebra of projection operators in the Hilbert space (in which Birkhoff was also involved), later interpreted as an extension of the logic to " quantum logic ", and continuous geometries, which in the end turned out to be no progress compared to operator algebras. Another field of work in Princeton in the 1930s was the famous ergodic problem , which deals with the mathematical foundation of statistical mechanics in classical systems (equal distribution of orbits in phase space ). Von Neumann had already dealt with these questions from the quantum mechanical side in Germany. After Bernard Koopman had put the problem into operator form, von Neumann took it up and involuntarily engaged in a "duel" with the well-known American mathematician George David Birkhoff . As he later said, he would have preferred a collaboration. ### Manhattan Project and Government Advisor Von Neumann worked on the Manhattan project in Los Alamos from 1943 . In previous years he was a sought-after consultant for the Army and Navy, for example for ballistics issues, shaped charges, operations research, fighting German magnetic mines or optimizing the effect of bombs with “oblique shock waves ”. One of his main areas of work was the theory of shock waves, which became relevant for supersonic flight in the 1950s and which he used, among other things, for the development of explosive lenses for the implosion mechanism of the plutonium bomb. 
His development of the first numerical method for solving hyperbolic partial differential equations , the Monte Carlo method with Stanislaw Ulam , the Von Neumann stability analysis and his pioneering achievements in computer architecture also belong in this context . Incidentally, with his expertise in the theory of shock waves during the Second World War, he also optimized British air mines over Germany. Von Neumann was also involved in the further development of the American nuclear bomb program through to the hydrogen bomb . Von Neumann was valued on the one hand because he freely passed on his ideas and helped colleagues (when visiting Los Alamos he was often surrounded by a cluster of scientists who wanted quick advice), on the other hand he was feared because he took up ideas quickly and developed his own at breathtaking speed Theories developed from it. In addition to his mathematical achievements, von Neumann was also politically influential as a government advisor. Before the atomic bombs were dropped on Japan, he was a member of the Target Committee , which helped determine the exact targets of the bombs. He also calculated the optimal detonation height of the atomic bombs in order to achieve the greatest possible damage from the explosion on the ground. The idea of ending the East-West confrontation with the explosion of a hydrogen bomb over uninhabited Soviet territory, preventing the Soviet Union from developing its own bomb and permanently intimidating it is also allegedly associated with the name John von Neumann . Whether US President Eisenhower was actually urged to take such a step by von Neumann is controversial. But he was instrumental in getting the US military missile program off the ground. ### Computers and cybernetics Von Neumann is considered one of the fathers of computer science. The Von Neumann architecture (also known as the Von Neumann computer ) was named after him, a computer in which data and program are binary-coded in the same memory . The program itself can thus be changed while the computing process is running and the specified sequence of the stored instructions can be deviated from by conditional jump commands. Loosely analogous to the human brain (as he writes in the report), it defines a computer architecture consisting of a control unit and an arithmetic unit as well as a storage unit. The commands are processed serially. He described this principle in 1945 in the First Draft of a Report on the EDVAC . The report was intended as a discussion report with the ENIAC group and initially remained unpublished, but quickly circulated in scientific circles. Almost all modern computers are based on von Neumann's idea. Von Neumann's role as the sole inventor of the modern computer architecture named after him has been disputed and has long been the subject of disputes. Nowadays, the term “stored program computer” is therefore preferably used instead of “Von Neumann computer”. This applies in particular to the claims of the actual builders of the first tube computer ENIAC and its successor model EDVAC, John Presper Eckert and John William Mauchly from the Moore School of the University of Pennsylvania in Philadelphia, with whom von Neumann and Herman Goldstine initially worked closely. Von Neumann came across the computer developers at the Moore School, where Goldstine was a liaison officer in the US Army, through a chance encounter on a platform with the previously unknown mathematician Goldstine. 
As Goldstine reported, the liberal dissemination of the Edvac Report, which he himself pursued, ended the close relationship between himself and von Neumann with Eckert and Mauchly, who did not see their contribution in the Edvac Report (actually not intended for the public) and for essential parts of the Von Neumann computer asserted priority claims. For Eckert and Mauchly, patent considerations were in the foreground, which led them to leave the Moore School in 1946 to start their own company, and which later led to a decade-long dispute in court (they brought in patent attorneys as early as 1945). Von Neumann, however, initially saw a need for further research and development and advocated an open discussion and wide dissemination of the results. Parts of the concept were also developed independently by other computer pioneers - including Konrad Zuse in Germany. a. the idea of ​​separating memory and processor, which was already carried out in Zuse's still purely mechanical Z1 in 1938. Zuse's early computers, which were designed for special tasks, lacked the essential concept of conditional branching , although he knew it and used it in his plan calculus . At the time, Von Neumann strongly advocated the further development of calculating machines. The merits of von Neumann are based in particular on the mathematization and scientification of calculating machines. Together with Norbert Wiener , at the end of the winter of 1943/44 in Princeton, he organized an interdisciplinary meeting with engineers, neuroscientists and mathematicians on commonalities between the brain and computers and thus the basics of cybernetics , which Wiener first described in detail in 1948. Schematic representation of the Von Neumann architecture , 1947 From 1949 Von Neumann led his own computer project at the Institute for Advanced Study, the IAS computer , in which he was able to realize his ideas, including many programming concepts. Walk on it subroutines various methods for generating random numbers (including those with parameter passing a reference to a memory location, middle-square method and the rejection sampling ) and the Mergesort back. He contributed significantly to the use of binary codes in the computer systems and propagated the use of flowcharts , in which he also provided a kind of assertions that can be viewed as precursors for loop invariants in the Hoare calculus . Goldstine, whom he took over from the ENIAC group, became a close employee. He also let the reports from Princeton circulate freely from 1949 onwards, and computers based on these models were soon built all over the USA and England. The IAS calculator and the ENIAC, modified according to von Neumann's ideas, were used primarily for military calculations (ballistics). Von Neumann also used the Princeton computer for pioneering work in numerical weather forecasting, such as the first computer-aided 24-hour weather forecast. In 1953 he also developed the theory of self-reproducing automata or self-replication , for which he gave a complicated example. Today much simpler ones emerge from the theory of cellular automata (for example John Horton Conway's Game of Life ). He is said to have tried out ideas for this while playing with a game of building blocks ( Tinkertoy ). Science fiction authors imagined the colonization of our galaxy with such automatons and coined the name Von Neumann probes for it . 
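Since both this paragraph and the next touch on cellular automata, here is a minimal sketch (ours; it implements Conway's two-state Game of Life mentioned above, not von Neumann's original 29-state self-reproducing automaton) of one synchronous update step, representing the board as a set of live cells:

```haskell
import qualified Data.Set as Set

type Cell  = (Int, Int)
type Board = Set.Set Cell

-- The eight neighbours of a cell.
neighbours :: Cell -> [Cell]
neighbours (x, y) = [ (x + dx, y + dy) | dx <- [-1, 0, 1], dy <- [-1, 0, 1]
                                       , (dx, dy) /= (0, 0) ]

-- One synchronous update of Conway's Game of Life: a cell is alive in the
-- next generation if it has exactly 3 live neighbours, or 2 and was alive.
step :: Board -> Board
step board = Set.fromList
  [ cell
  | cell <- candidates
  , let n = length (filter (`Set.member` board) (neighbours cell))
  , n == 3 || (n == 2 && cell `Set.member` board) ]
  where
    candidates = Set.toList board ++ concatMap neighbours (Set.toList board)

main :: IO ()
main = mapM_ print (take 4 (iterate step blinker))
  where blinker = Set.fromList [(0, 1), (1, 1), (2, 1)]  -- oscillates with period 2
```

Running it on the three-cell "blinker" shows the familiar period-two oscillation between a horizontal and a vertical bar.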
Von Neumann's cellular automata form an important basis for the research discipline Artificial Life and enable the simulation of biological organization, self-reproduction and the evolution of complexity. ### Appreciation and end Numerous anecdotes have circulated about von Neumann (Halmos collected some in the article cited in the literature). For example, someone tried to test it with the following riddle: “The end points of a route move towards each other with speed , a runner whizzes back and forth between the two end points at one speed . What distance does he cover? ”There is a simple and a somewhat more complicated solution method (summation of the partial distances). Von Neumann gave the answer at lightning speed and, when asked, stated that he had added up the series - so he had chosen the complicated route, which, however, did not mean that he needed more time. ${\ displaystyle s}$${\ displaystyle v}$${\ displaystyle w> v}$ Because of his ability to quickly break down complex issues into simple questions and often to find a solution straight away, as well as his strictly factual attitude, which avoids any unnecessary disputes, von Neumann was happy to be hired as a technical consultant; like IBM , Standard Oil or the RAND Corporation . His name is therefore a household name in a wide variety of areas. In 1952 he published the Von Neumann law , which describes the change in the size of cells of two-dimensional foam over time . For Standard Oil he helped develop methods to better exploit oil deposits. His death prevented a planned larger collaboration with IBM. For the RAND Corporation he applied game theory to strategic thinking games, as did other mathematicians such as John Nash and John Milnor . In an unpublished paper in 1953, he also set out the principles of the semiconductor laser. John von Neumann was a fun-loving and sociable person (nickname "Good Time Johnny"); he was married twice - to Marietta Kövesi and Klára Dán - and had a daughter ( Marina ). His Princeton home was the focus of academic circles at the legendary Princeton parties. Von Neumann also loved fast cars like Cadillac or Studebaker, but his driving style was feared because he quickly got bored with quiet traffic and then fell into absent-mindedness. Even in the middle of a party, he could suddenly say goodbye to think through a math problem. Some of his alcohol consumption was only faked, as one guest's child was surprised to find out. Another aspect of the "entertainer" von Neumann was his inexhaustible reservoir of often slippery jokes and his fondness for limericks . Von Neumann died in the Walter Reed Military Hospital in Washington after suffering from excruciating cancer, possibly caused by his participation in nuclear tests . A soldier kept watch in front of the room so that when he was delirious - the cancer ultimately also attacked his brain - he would not divulge any state secrets. While still on his deathbed, he wrote his book “The Calculator and the Brain”, in which he investigated the special features of the “computer” in the human head. Most recently he professed to be a Catholic again (the family had converted in 1929/30) and at the end of his life he had an intensive exchange of ideas with a priest. He is buried in Princeton next to his mother, his second wife Klari (who drowned in the sea in 1963, probably suicide) and Karl Dan, Klari's father, who committed suicide in 1939 after moving from Hungary to the USA. It is located in Princeton Cemetery in Mercer County. 
## Honors and memberships After Neumann is IEEE John von Neumann Medal of the IEEE , the John von Neumann Theory Prize in Operations Research , the John von Neumann Lecture of SIAM and the Von Neumann lunar crater named. The institutes for computer science and mathematics at the Humboldt University in Berlin are located in the Johann von Neumann House. ## Quotes Von Neumann in a discussion with Jacob Bronowski in 1943 while studying bomb craters on aerial photographs: “No, no, you don't see it correctly. Your visualizing mind cannot see this properly. You have to think abstractly. What happens is that the first differential quotient disappears identically and therefore what becomes visible is the trace of the second differential quotient. " Bronowski reports that on this advice he rethought the discussed problem and found it confirmed from Neumann's point of view late at night - when he informed him of this the next morning, von Neumann only asked him to ask him for one of these the next time Only to disturb von Neumann early in the morning if he was wrong and not if he was right. Von Neumann described the problem of overfitting mathematical models very clearly using the example of an elephant: "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." "Four parameters are enough for me to fit an elephant, and at five he can still wiggle his trunk." - John von Neumann, quoted from Freeman Dyson , quoted from Enrico Fermi : Nature Even if a rough sketch of an elephant is actually possible with the help of four complex numbers, the statement is aimed at critically questioning excessive adjustments of a model to existing data. ## Fonts • Collected works , 6 volumes. Pergamon Press, from 1961 • Brody, Vamos (Ed.): The von Neumann compendium . World Scientific (reprint of important articles by Neumanns) • The computer and the brain (Silliman Lectures). Yale University Press, 2000 (German The Calculator and the Brain , 1958) • The mathematician . In: Heywood (Ed.): The works of the mind . 1948. Reprinted in: Kasner, Newman (Ed.): The world of mathematics , Vol. 4 • Mathematical foundations of quantum mechanics . 2nd Edition. Springer Verlag, 1996, ISBN 978-3-540-59207-5 (first 1932) • Theory of games and economic behavior , together with Oskar Morgenstern. Princeton Univ. Press, 1944, Theory of Games and Economic Behavior. (PDF; 31.6 MB). German translation: Game theory and economic behavior , ISBN 3-7908-0134-8 . Some essays and books online: Some of Neumann's works that were created in Los Alamos (for example on shock waves, detonation waves) are available online from the Federation of American Scientists . Some other work, for example on continuous geometries, operator rings, or ergodic theory, is available online at the National Academy of Sciences . chronologically ## Documentaries • John von Neumann. The thinker of the computer age. Documentary, France, 2014, 56:44 min., Script and director: Philippe Calderon, production: arte France, BFC Productions, first broadcast: 4th August 2015 on arte, synopsis by ARD , online video by Internet Archive . • The Struggle for Freedom: Six Friends and Their Mission - From Budapest to Manhattan. Documentary, Germany, 2013, 88:42 min., Book: Thomas Ammann and Judith Lentze, director: Thomas Ammann, production: Prounen Film, Mythberg Films, Agenda Media, MDR , arte , first broadcast: December 17th, 2013 by arte, summary from ARD . 
Commons : János Lajos Neumann  - Album with pictures, videos and audio files ## Individual evidence 1. See Ulf Hashagens article about the habilitation in Berlin (p. 265). It was completed on December 13, 1927. 2. In the winter semester 1928/29 Neumann von Margitta is mentioned as in the summer semester 1928 also at the mathematical colloquium and when discussing recent work on quantum theory with Leó Szilárd . Further lecturers in the discussion of more recent work on quantum theory were Hartmut Kallman and Fritz London in the winter semester of 1928/29 . 3. ^ John (Janos) von Neumann in the Mathematics Genealogy Project (English) 4. The operators for measured quantities used in quantum mechanics are linear (superposition principle for solutions to the linear Schrödinger equation, for example) and self-adjoint, since the eigenvalues, the possible measured values, are then real. 5. The anecdote comes from Kurt Friedrichs, cf. Peter Lax Mathematics and Physics , Bulletin American Mathematical Society, Vol. 45, 2008, pp. 135-152. 6. First probably used by Paul Ehrenfest in a letter to Wolfgang Pauli in September 1928, see Martina Schneider, Between two disciplines. BL van der Waerden and the development of quantum mechanics, Springer 2011, p. 63. 7. Von Neumann was like Edward Teller and a number of other theoretical physicists after the end of the war in Los Alamos during his visits (they worked on the hydrogen bomb) a member of a poker round. Stanislaw Ulam Adventures of a Mathematician , Scribners 1976, p. 169. 8. In the Menger Colloquium, translated as A model of general equilibrium . In: Review of Economic Studies , Vol. 13, 1945, 1, also in Brody, Vamos (Ed.): The von Neumann compendium . Among other things, the use of inequalities instead of just equations as with Walras was new, cf. McRae, pp. 217ff. 9. Poundstone “Prisoners dilemma”, p. 4 quotes an obituary in Life Magazine 1957, in which von Neumann even spoke out in 1950 for a preventive nuclear war against the Soviet Union, as did other personalities at the same time, such as the pacifist Bertrand, who was transformed by contemporary history Russell. 10. ^ Friedrich L. Bauer : Historical Notes on Computer Science . Springer Verlag, 2009, p. 139. 11. Nicholas Metropolis , J. Worlton: A trilogy on errors in the history of computing . In: IEEE Annals of the history of computing , Volume 2., 1980, pp. 49-55, take the view that the concept of stored program was developed by Eckert and Mauchly before the involvement of von Neumann. See also Friedrich L. Bauer : Historical Notes on Computer Science . Springer Verlag, 2009, chapter Who invented the von Neumann calculator? Reprinted from Informatik Spektrum , Volume 21, 1998, p. 84. Also Joel Shurkin: Engines of the Mind. The history of the computer . Norton, 1984, sees the contributions by Eckert and Mauchly as central to Edvac, and von Neumann's important role only begins with his own IAS computer, Goldstine: The Computer from Pascal to von Neumann . 1993, pp. 186f. speaks against it for a central role of Neumanns, who after Goldstine was already involved in the discussions in the Moore School at the beginning of August 1944. 12. ^ Goldstine: The Computer from Pascal to von Neumann . 1993, p. 182. 13. ^ Goldstine: The Computer from Pascal to von Neumann . Princeton University Press, 1993, p. 229. 14. ^ Bauer: Historical notes on computer science . P. 138. 15. ^ Raúl Rojas : The architecture of Konrad Zuse's early computing machines . In: Rojas, Hashagen: The first computers . 
MIT Press, 2000. According to Rojas, the logical structure of the Z1 was very similar to the later Z3 relay computer and both could be used as a universal calculating machine, even if that was not practical. 16. ^ Raúl Rojas : Zuse and Turing. The wire of Mephistopheles. In: Telepolis , December 21, 2011. 17. Thomas Rid : Machine Dawn . A Brief History of Cybernetics . Propylaen, Berlin 2016, ISBN 978-3-549-07469-5 (492 pages, American English: Rise of the Machines. A Cybernetic History . New York 2016. Translated by Michael Adrian, first edition: WW Norton & Company). 18. ^ Norbert Wiener: Cybernetics. Regulation and communication in living beings and in machines . With the addition of 1961 learning and self-reproducing machines. Second, revised and expanded edition. Econ-Verlag, Düsseldorf 1963 (287 pages, American English: Cybernetics or Control and Communication in the Animal and the Machine . 1948. Translated by EH Serr, E. Henze, first edition: MIT-Press). 19. ^ John von Neumann: Theory of Self-reproducing Automata . published posthumously. Ed .: Arthur W. Burks . University of Illinois Press, 1967, ISBN 978-0-252-72733-7 (English, 388 pages). 20. ^ Poundstone: Prisoner's Dilemma . P. 24 21. ^ Howard Eves: Return to Mathematical Circles , PWS-Kent Publishing, 1988, p. 140. 22. Stephen Dunwell from IBM reports about it in his Oral History Interview 1989.  ( Page no longer available , search in web archivesInfo: The link was automatically marked as broken. Please check the link according to the instructions and then remove this notice. (PDF) Babbage Institute. After Dunwell, his role as a consultant at IBM was very limited, but IBM was aware that as the father of the modern computer, they owed him a lot. This was also due to von Neumann's general attitude - he was of the opinion, according to Dunwell, that the problem with computers is not low memory space, but rather unimaginative programmers. 23. ^ Russell Dupuis: The Diode Laser - the first 30 days 40 years ago . In: Optics and Photonics News , Vol. 15, 2004, p. 30, The Diode Laser — the First Thirty Days Forty Years Ago ( Memento from June 19, 2010 in the Internet Archive ) 24. According to other statements, he also used to sing loudly in the car with appropriate steering movements. He wrecked a car almost every year. Poundstone Prisoner's Dilemma , p. 25. 25. ^ McRae, John von Neumann, Birkhäuser, p. 328 26. ^ Find a grave , John von Neumann 27. ^ Members of the American Academy. Listed by election year, 1900-1949 ( PDF ). Retrieved October 8, 2015 28. McRae: Von Neumann , p. 186, after Jacob Bronowski: The Ascent of man , BBC book, 1973 and in his BBC television series of the same name, episode 13. McRae says “differential coefficient”, obviously a translation error. 29. ^ Freeman Dyson: A meeting with Enrico Fermi . In: Nature . 427, No. 297, 2004. 30. ^ Review of Neumann's Selected Letters by George Dyson: Review. In: Notices AMS , June / July 2007. 31. ^ Review of George Dyson, Turing Cathedral by Brian Blank: Review. In: Notices AMS , August 2014. Like Regis' book, much about the history of the IAS. He evaluates Klara von Neumann's unpublished memoirs.
# Is an ideal band-pass filter (brick-wall filter) linear phase? [duplicate]

I'm very new to digital signal processing. I have multiple sensors, and the way I filter the signals in post-processing is:

1. take the FFT of the signals.
2. zero out everything outside the frequency range of interest (like a brick-wall filter).
3. take the IFFT of the spectrum.

I repeat this for each sensor and compare the phase relation between them in the time traces. Thus, the phase must not shift during the filtering process. I wonder whether the above method distorts the phase or not. Is the ideal band-pass filter linear phase?

• this is not a process running continuously, is it? Or are you doing these three steps once to a piece of data?
• I did this in post-processing; it's not real time.

## 1 Answer

Assuming you are referring to LTI (linear time-invariant) systems to implement the filter. The impulse response $$h[n]$$ of the ideal brick-wall bandpass filter

$$H(\omega) = \begin{cases} 1, & |\omega-\omega_c| < W \\ 0, & \text{otherwise} \end{cases}$$

is

$$h[n] = 2 \cos(\omega_c n) \frac{ \sin(W n) }{ \pi n },$$

which is real and even symmetric about the origin, i.e. $$h[n] = h[-n]$$; hence it's a zero-phase (in the passband) but noncausal filter. When this impulse response is truncated and shifted right enough to make it causal,

$$h_c[n] = h[n-d],$$

then the resulting bandpass filter will be linear phase in the passband:

$$H_c(\omega)= e^{-j\omega d} H(\omega)$$

with a linear phase term of $$\phi(\omega) = -\omega d.$$

The phase is not defined for those frequencies where the frequency response is zero. Note that the truncated filter is no longer the ideal filter. Also note that you cannot shift the ideal filter to make it causal (that would require an infinite shift). And finally note that the zero-phase ideal filter is also a linear-phase filter.
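A small numerical check of the answer (our sketch; the constants `wc`, `w` and the delay `d` are arbitrary choices, not values from the question): build the truncated, shifted impulse response $$h_c[n]=h[n-d]$$ and verify that its taps are symmetric about $$n=d$$, which is the linear-phase property discussed above.

```haskell
-- Sketch (ours): truncated, shifted ideal bandpass impulse response and a
-- check of its symmetry about n = d (i.e. linear phase in the passband).
wc, w :: Double
wc = 1.0   -- assumed center frequency (rad/sample)
w  = 0.3   -- assumed half-bandwidth (rad/sample)

d :: Int
d = 20     -- assumed delay / half-length of the truncated filter

h :: Int -> Double
h 0 = 2 * w / pi   -- limit of 2*cos(wc*n)*sin(w*n)/(pi*n) as n -> 0
h n = 2 * cos (wc * fromIntegral n) * sin (w * fromIntegral n) / (pi * fromIntegral n)

hc :: [Double]
hc = [ h (n - d) | n <- [0 .. 2 * d] ]   -- h_c[n] = h[n - d], 2*d + 1 taps

main :: IO ()
main = print (and [ abs (hc !! (d + m) - hc !! (d - m)) < 1e-12 | m <- [0 .. d] ])
-- True: the taps are symmetric about n = d, hence the FIR filter is linear phase.
```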
# Tag Info 0 This is a great place to apply Abel-Plana's formula $$\sum_{n\geqslant 0} f(n)=\int_0^\infty f(x)\, dx+\frac{f(0)}{2}+i\int_0^\infty\frac{f(ix)-f(-ix)}{e^{2\pi x}-1}\, dx$$ Allow $f(x)=\left(\frac{\sin x}{x}\right)^2$ We note that, with the fact that $f$ is even \sum_{n\geqslant 0} \left(\frac{\sin ... 2 The short answer to your question is that there is another symbol for 2\pi, namely \tau. Some people promote that notation, but it hasn't caught on in the mainstream of mathematics, at least yet (and I don't think it will). The longer answer to your question comes from history. The reason that 3.1415... got its own symbol \pi before 6.28... did ... 1 Surely you can find this discussed in many place on the web. The diameter of a circle is much easier to measure than the radius. The ratio of the circumference to the diameter was taken as the fundamental constant describing the geometry of the circle. This is from nearly 2000 years ago in Greek geometry. (And similar time-frame for other places, perhaps ... 1 This series was obtained by Chudnovsky brothers through the theory of modular forms. To describe their formula let us define functions P, Q, R via the equations \begin{align} P(q) &= 1 - 24\left(\frac{q}{1 - q} + \frac{2q^{2}}{1 - q^{2}} + \frac{3q^{3}}{1 - q^{3}} + \cdots\right)\notag\\ Q(q) &= 1 + 240\left(\frac{q}{1 - q} + \frac{2^{3}q^{2}}{1 - ... 2 This is from the Chudnovsky algorithm. See this for more information https://en.wikipedia.org/wiki/Chudnovsky_algorithm 0 The most usable "geometric" interpretation for theoretical purposes is in fact from very highbrow algebra (the algebraic topology of algebraic varieties as it appears in algebraic geometry). The incantation, for anyone who might find it informative, is "the zeta value (e.g. at 2) is a period integral associated to a motive or a Hodge structure thereupon". ... 0 I do not know if this is the kind of geometric interpretation you are after, but a \zeta(2) proof by Beukers, Kolk, and Calabi has some geometry involved (relating to a triangle). The double integral:\int_{0}^{1}\int_{0}^{1}\frac{1}{1-x^2y^2}dydx,$$evaluates a similar sum:$$\sum_{n=0}^{\infty} \frac{1}{(2n+1)^2}.$$If you make the change of variables ... 0 This is a well known double integral proof by Beukers, Kolk, and Calabi. First consider the double integral:$$\int_{0}^{1}\int_{0}^{1} \frac{1}{1-x^2y^2} dydx.$$Since 0<x,y<1, rewrite the integrand as a geometric series:$$\frac{1}{1-x^2y^2}=\sum_{n=0}^{\infty}(xy)^{2n}.$$Now notice: ... 1 Your teachers claim that the irrationality of the number 2\pi means 360 degrees and 2\pi radians is false. They are the same because of the way we define the radian. Just because a number is irrational, does not mean it does not perfectly exists. The claim is like saying \pi is not perfectly the ratio of a circles circumfrence to diameter because it ... 3 Your teacher reasoned incorrectly. If his/her argument was right, you could similarly argue as follows: \pi is irrational. 2 halves of the circle makes a circle. Since no multiple of \pi can be a whole number, 2\pi radians does not equal 2 halves of the circle. This is clearly wrong because two halves make the whole circle. We define, ... 1 The unit degree is defined as two pi divided by 360 hence Steamyroot's comment. The unit degree is an irrational number in radians. 23 Your teacher is wrong! The key point is that 360^\circ is not merely a whole number, but a whole number together with a unit, namely "degrees". 
That is, it is not true that 2\pi=360 (in fact, this is obviously false, since \pi<4 so 2\pi<8). Rather, it is true that$$2\pi\text{ radians }=360\text{ degrees.} This is similar to how $1$ foot ... 1 Shortly $\,v(p,k)\,$ is the multiplicity of the prime $p$ in $k$ (the exponent of $p$ in the multiplicative decomposition of $k$ in prime powers, another name is "p-adic valuation"). Example : $v(3,18)= 2\,$ since $\,18=2\cdot 3^2$. In your case we are concerned with $\,a$-adic valuations since the $n$ in $\,v(n,k)$ appears to be a typo for '$a$' as you ... 3 The Bailey-Borwein-Plouffle formula does not allow you to find a desired sequence of digits in $\pi$. As the Wikipedia page says, it allows you to find the hex digits starting as a desired place without calculating the preceding ones. So if you want the digits of $\pi$ starting at the billionth, this is your friend. This would be used in the decryption ... 15 The project you've found is a (deliberate!) joke. It is true that $\pi$ is suspected to be normal in all bases, which would imply that every finite sequence of hex digits appears somewhere (indeed, many times) in the hexadecimal expansion of $\pi$. But this cannot be used for compression -- the trouble is that the number $A$ that tells you where to find ... 1 They somehow then look up where the sequence starts in pi. This is the step I don't understand how they do. But they use a formula called [Bailey–Borwein–Plouffe][1] formula to do it. Yes, that is a fantastic discovery, it allows to calculate the $k$-th digit behind the period without needing to know the prior digits. This would benefit the ... 1 All circles are geometrically similar. There is a single parameter that determines the size. However the ratio of certain lengths is independent of its size, is a constant. This happens for all parabolas, catenaries, cycloids, Cornu spiral.. which are built on single parameters latus rectum, parameter $c$, whose ratios of length upto specific locations is ... 2 I don't know why you flat earthers believe that the ratio of circumference to diameter is pi. That is only a limiting case for very small circles. I checked on my globe, and found that, for example, a circle of diameter 12,000 miles has a circumference of approximately 24,000 miles. That is a ratio of 2, not pi. Of course, even larger circles are ... 6 Very interesting question. I reckon that any answer will have to do with what the perimeter is even defined as. If we take it as limit of n-glons, then it is easy to see that ratio of perimeters of polygons (at least $2n$-polygons) to their diameter is constant. Thus in limit, too, the ratio will remain constant. Note: Early Greek mathematicians did have an ... 0 We have $\sin \pi/4=\cos \pi /4=1/\sqrt 2.$ And $\sin \pi /6=1/2$ and $\cos \pi /6=\sqrt 3 /2.$ You can go from case $n$ to case $2 n$ with $|\sin x/2|=\sqrt {(1-\cos x)/2}$ and $|\cos x/2|=\sqrt {(1+\cos x)/2}.$ We have $n \sin \pi /n<\pi <n \tan \pi /n$ for $n>2.$ To go from case $n$ to case $2 n$ for the $\tan$ , we have $|\tan x/2|=|1-\cos ... 0 Let's take the problem in parts. First: sin(x) is a function, and its inverse is the arcsin function. See Inverse Trigonometric Functions for definitions and properties. Second: 180 degrees = pi radians. The equality you wrote turns to be a limit: lim [n -> oo] n * sin(pi/n) = pi In other words: when n grows to infinity, n * sin(pi/n) approaches pi. See ... 2 It turns out that the "equation"$n \sin(180/n))=\pi$is not true. 
But, if$n$is a large number then it is approximately true. That is to say, it is still not true on the nose, but as$n$gets larger and larger, the difference between the two sides$n \sin(180/n) - \pi$gets closer and closer to zero. So the equation you deduced, namely$\sin(a) = \pi * a ... 9 To show that $\pi$ is constant we must show that given two circles of diameters $d_1$ and $d_2$ and circumferences $c_1$ and $c_2$, respectively, that $\frac{c_1}{d_1}=\frac{c_2}{d_2}$. If $d_1=d_2$ then the two circles are congruent because one can be placed upon the other and they will line up. Without loss of generality we can assume $d_1\lt d_2$. Draw ... 15 This is not a very rigorous proof, but it is how I was taught the fact that the circumference of a circle is proportional to its radius. Consider two concentric circles as in the diagram above. The radius of the smaller one is $r$, while that of the larger one, $R$; their circumferences are $c$ and $C$ respectively. We draw two lines through the center ... Top 50 recent answers are included
Canadian Mathematical Society, Canadian Mathematical Bulletin (CMB)

A Combinatorial Reciprocity Theorem for Hyperplane Arrangements

Christos A. Athanasiadis

Published: 2009-12-04. Printed: Mar 2010.

Abstract: Given a nonnegative integer $m$ and a finite collection $\mathcal A$ of linear forms on $\mathcal Q^d$, the arrangement of affine hyperplanes in $\mathcal Q^d$ defined by the equations $\alpha(x) = k$ for $\alpha \in \mathcal A$ and integers $k \in [-m, m]$ is denoted by $\mathcal A^m$. It is proved that the coefficients of the characteristic polynomial of $\mathcal A^m$ are quasi-polynomials in $m$ and that they satisfy a simple combinatorial reciprocity law.

MSC Classifications: 52C35 (Arrangements of points, flats, hyperplanes [see also 32S22]); 05E99 (None of the above, but in this section).
## Friday, January 3, 2014 ### I've moved! I've moved to http://5outh.github.io/! ## Wednesday, November 13, 2013 ### Parsing and Negating Boolean Strings in Haskell It appears that the dailyprogrammer subreddit is back after a pretty long hiatus, and they kicked back into gear with a really interesting problem. The problem was, paraphrasing: Given a Boolean expression as a string S, compute and print the negation of S as a string using DeMorgan's laws. The problem is also detailed in full here. I completed the challenge and posted my solution to reddit, but wanted to share it here as well, so here it is, with slight modifications: This is a problem that is heavily suited to three major things that Haskell advocates: Algebraic Data Types, Pattern Matching, and Monadic Parsing. First off, if you've had any experience with automata theory, it's pretty clear that the input language of Boolean expressions can be represented by a context free grammar. It just so happens that Haskell makes it incredibly easy to model CFGs right out of the box using Algebraic Data Types. Let's take a look at this data type representing Boolean expressions: Simple. Now, the main problem of this challenge was actually performing the simplification of the not operation. Using pattern matching, we can directly encode these rules in the following manner: Here we're giving a literal definition of rules for negating Boolean expressions. If you use Haskell, this is really easy to read. If you don't: stare at it for a second; you'll see what it's doing! That's the brunt of the challenge, right there. That's it. Encode a Boolean expression into an Expr and call not on it, and it will spit out a new Expr expressing the negation of your original expression. DeMorgan's laws are represented in the And and Or rules.We can also do this in a slightly modified way, using a function simplify :: Expr -> Expr that simplifies expressions and another function not = simplify . Not. Not to compute the same thing. It's a similar solution so I won't post it, but if you'd like to, feel free to experiment and/or add more simplification rules (e.g. simplify e@(And a b) = if a == b then a else e). We can also display our expressions as a string by declaring Expr an instance of Show in the following way: Now we can type in Boolean expressions using our data type, not them, and print them out as nice expressions. But, now we are faced with, in my opinion, the tougher part of the challenge. We're able to actually compute everything we need to, but what about parsing a Boolean expression (as a string) into an Expr? We can use a monadic parsing library, namely Haskell's beloved Parsec, to do this in a rather simple way. We'll be using Parsec's Token and Expr libraries, as well as the base, in this example. Let's take a look. We essentially define the structure of our input here and parse it into an Expr using a bunch of case-specific parsing rules. variable parses a single char into a Varparens matches and returns a SubExpr, and everything else is handled by using the convenience function buildExpressionParser along with a list of operator strings, the types they translate to and their operator precedence. Here we're using applicative style to do our parsing, but monadic style is fine too. Check this out for more on applicative style parsing. Given that, we can define a main function to read in a file of expressions and output the negation of each of the expressions, like so: Concise and to the point. 
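The post's code was embedded as gists that do not survive in this copy. The following self-contained sketch is a reconstruction from the prose, not the author's exact source; in particular it uses only core Parsec combinators (`chainl1`, `try`, `string`) instead of the Token and Expr helper libraries the post mentions, restricts variables to lowercase letters, and its parenthesisation of sub-expressions in the output may differ slightly from the post's examples.

```haskell
module Main where

-- Reconstruction sketched from the post's description, not the original gists.
import Prelude hiding (not)
import Text.Parsec
import Text.Parsec.String (Parser)

-- The grammar of Boolean expressions as an algebraic data type.
data Expr
  = Var Char
  | Not Expr
  | And Expr Expr
  | Or  Expr Expr
  | SubExpr Expr          -- a parenthesised sub-expression
  deriving (Eq)

-- Negation rules; De Morgan's laws live in the And/Or cases.
not :: Expr -> Expr
not (Var c)     = Not (Var c)
not (Not e)     = e
not (And a b)   = Or  (not a) (not b)
not (Or  a b)   = And (not a) (not b)
not (SubExpr e) = SubExpr (not e)

-- Print expressions back in the input syntax.
instance Show Expr where
  show (Var c)     = [c]
  show (Not e)     = "NOT " ++ show e
  show (And a b)   = show a ++ " AND " ++ show b
  show (Or  a b)   = show a ++ " OR "  ++ show b
  show (SubExpr e) = "(" ++ show e ++ ")"

-- A small lexer layer: every token eats trailing whitespace.
lexeme :: Parser a -> Parser a
lexeme p = p <* spaces

symbol :: String -> Parser String
symbol = lexeme . try . string

-- Precedence: OR binds loosest, then AND, then NOT, then atoms.
expr, orExpr, andExpr, notExpr, atom, variable, parens :: Parser Expr
expr     = orExpr
orExpr   = chainl1 andExpr (Or  <$ symbol "OR")
andExpr  = chainl1 notExpr (And <$ symbol "AND")
notExpr  = (Not <$> (symbol "NOT" *> notExpr)) <|> atom
atom     = parens <|> variable
variable = Var <$> lexeme lower
parens   = SubExpr <$> (symbol "(" *> expr <* symbol ")")

parseExpr :: String -> Either ParseError Expr
parseExpr = parse (spaces *> expr <* eof) ""

-- Read expressions line by line, negate each one, print the result.
main :: IO ()
main = do
  contents <- readFile "inputs.txt"
  mapM_ (putStrLn . either show (show . not) . parseExpr) (lines contents)
```

With this in place, `NOT a AND b` parses to `And (Not (Var 'a')) (Var 'b')` and negates to `a OR NOT b`, matching the sample output shown below.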
We make sure that each line gets parsed properly, not the expressions, and print them. Here's what we get when we run the program: inputs.txt --- output a --- NOT a NOT a --- a a AND b --- NOT a OR NOT b NOT a AND b --- a OR NOT b NOT (a AND b) --- a AND b NOT (a OR b AND c) OR NOT(a AND NOT b) --- (a OR b AND c) AND (a AND NOT b) Finally, here's the full (40 line!) source on Github! ## Tuesday, August 27, 2013 ### Ludum Dare #27 (10 Seconds): A post-mortem WTF is Ludum Dare? Ludum Dare is a friendly competition among game developers to see who can make the best game based on a given theme in 48 hours. This was the 27th competition, and the theme was "10 seconds." Why did I want to participate? For a long time, I've been looking at the game development community from the outside with a lens that I feel a lot of people tend to see through. I've always considered game development an extremely intricate and difficult field to enter, and I was scared of trying my hand at it for a long time. I have made small clones of other games (pong, sokoban, etc) and I have enjoyed making those games, but they're both relatively simple (exercise-sized, almost, to a certain extent). I have been wanting to get a little further into the game development scene for a while, and Ludum Dare seemed like the perfect opportunity. I've se en several Ludum Dare competitions go by over the past couple of years and I never thought I was good enough to actually do it. This time, I just went for it, and I'm really glad I did. What did you make? When the theme "10 seconds" was announced, I had a billion and one ideas going through my head, because it's so open-ended. Everyone's first idea was to make a 10-second game, but that doesn't really work, so you had to move on to the next thing. I bounced a couple of ideas around in my head, but I ultimately decided to make a twin-stick roguelike-like because: 1.  Movement and collision detection is relatively simple to implement, and 2.  As all of my friends know, I'm a big fan of The Binding of Isaac, and I'm always thinking about how that game and games similar in nature to it are put together. I ended up calling my game "Wappansai!" because it was one of the random strings I stuck in as a debug message as I was developing. It doesn't mean anything other than that. Anyway, in Wappansai!, you're a big fat square ninja man who is stuck in a dungeon with some red business-men enemies. They don't move, but if you don't murder all of the enemies in the room within 10 seconds, you die. Running into enemies also kills you, as does falling into the many pits laid out around the room. Later on, moving spike tiles get added, and those kill you as well. When you clear a room under 10 seconds, a door opens up and you move on to the next room where things get progressively more difficult. I think my high score in the final version of the game was somewhere around 17 rooms, so I definitely challenge everyone to beat that! Oh, and the tools I used were C++ and SDL 2.0, sfxr for sound bytes, and photoshop for graphics. I didn't get around to making music because I don't even know where to start with that! What did you learn? I have been reading Programming Game AI by Example on and off for a little while, and I decided to use Ludum Dare as an opportunity to start learning a little more about the State Design Pattern and Singletons, two concepts introduced early on in that book. 
I worked on throwing together a shell of a program that implemented these things before the competition, so I wouldn't have to figure everything out as I was trying to get things done over the 48 hours, which was definitely a good idea. I was able to start working right when the clock started ticking instead of worrying about getting the design right as I was trying to implement the program. I'd recommend doing this if you plan to participate in a game jam; it really helped me out. That said, there were still some design pitfalls. For example, when you open chests in the game, there's a (very slim) chance that you'll get a laser gun from inside of it (usually it just takes all of your currently held items -- haha!), but the gun doesn't spawn anywhere when this happens; it just magically goes into your character's weapon slot and suddenly you're firing lasers. The way I wanted this to work was, if you got lucky and got the laser gun from the chest, the gun would spawn somewhere on the map and you could go pick it up and feel awesome that that chest that normally screws you over by stealing all of your items just happened to spawn a freakin' laser gun for you to play with. But, the way I organized my project, Tile objects had no way to modify Level objects, so I was unable to trigger an event when the chest (a type of Tile) was touched to modify the level (a Level), so I had to make a compromise there. There are certainly workarounds that I considered, but I couldn't really do them within the time limit. It was a design flaw, and now I know that it was a design flaw. I can structure my code better now -- that was one major learning point. Randomly Generated...everything! I also learned a lot about randomly generated content, that was crazy! It actually went a lot smoother than I thought it would, but just seeing everything so randomized was amazing. Items, weapons, enemies, pits, and walls all get generated randomly based on how many rooms you've traversed, and just seeing that in play working (relatively) nicely was so cool. So I learned a lot there. The final, really huge thing that I discovered was that...holy crap, there are a lot of assets that go into games! I must have made over 50 sprites for Wappansai!, on top of tons of different sound clips (sfxr made this easy). A lot more time went into art than I expected, and I didn't even animate anything...you don't really think about it, but games have tons and tons of assets involved. Would I do it again? Finishing a game in two days was really hard work and it could be really frustrating at times (especially using C++ with VS, those error messages are completely incomprehensible). But, it was totally worth the effort. I can say that I've made a game that is completely mine, and even though it's not very good, it's something that I made and I love that I made it and I'm excited to make more things. It's also really amazing that people actually do play your games. I've had several comments on my ludum dare game's page already discussing the game that I made, and it's so cool to see that. Playing everyone else's games is also really awesome because you know that they went through the same thing that you did. Everyone's really proud of their work, everyone worked hard to get something done, and it's just a really nice community event that I'm so happy to have been a part of. I would absolutely do it again, and plan to! Who would I recommend Ludum Dare to? As you can see, my game had Dark Souls elements. If you want to make games, do it. 
If you don't know where to start, there are plenty of websites that can teach you the basics, and even if you don't want to or don't know how to program, there are tools out there like GameMaker Studio that will allow you to program your game's logic without actually writing code. If you don't think you can do it alone, team up with someone. But if you want to make video games and have at least a basic knowledge of how they work, I think that Ludum Dare is an excellent way to test your skills and build something that you'll be proud of. You can download, rate, and comment on Wappansai! over at the Ludum Dare page for it. 5outh

## Wednesday, May 1, 2013

### Motivation

It's relatively simple to write a program to approximate derivatives. We simply look at the limit definition of a derivative:

$$\frac{d}{dx} f(x) = \lim_{h \rightarrow 0} \frac{f(x+h) - f(x)}{h}$$

and choose some small decimal number to use for $h$ instead of calculating the limit of the function as it goes to $0$. Generally, this works pretty well. We get good approximations as long as $h$ is small enough. However, these approximations are not quite exact, and they can only be evaluated at specific points (for example, we can find $\frac{d}{dx}f(5)$, but not the actual symbolic function). So, what if we want to know what $\frac{d}{dx}f(x)$ is for all $x$? Calculating derivatives with this approximation method won't do the trick for us -- we'll need something a little more powerful, which is what we will build in this post.

Admittedly, this idea didn't come to me through necessity of such a system; I was just curious. It ended up being pretty enlightening, and it's amazing how simple Haskell makes this type of thing. Most of the code in this post didn't take too long to write; it's relatively short (only 108 lines total, including whitespace) and has a lot of capability. With that, let's get started!

### The Data Type

I don't want to over-complicate the process of taking derivatives, so my data type for algebraic expressions is kept minimal. We support six different types of expressions:

• Variables (denoted by a single character)
• Constant values
• Addition
• Multiplication
• Exponentiation
• Division

This can be expanded upon, but this is adequate for the purpose of demonstration. Here is our Expr a type, with a sample expression representing $3x^2$:

Haskell allows infix operators to act as data constructors, which allows us to express algebraic expressions cleanly and concisely without too much mental parsing. Also note that, since we explicitly defined operator precedence above the data type declaration, we can write expressions without parentheses according to order of operations, like we did in the sample expression.

### The Algebra

Taking derivatives simply by the rules can get messy, so we might as well go ahead and set up an algebra simplification function that cleans up our final expressions for us. This is actually incredibly simple. As long as you know algebraic laws, this kind of thing basically writes itself in Haskell. We just need to pattern match against certain expressions and meld them together according to algebraic law. This function ends up being lengthy due to the fact that symbolic manipulation is mostly just a bunch of different cases, but we can encode algebraic simplification rules for our above data type in a straightforward way. The following simplify function takes an expression and spits out a simpler one that means the same thing. It's really just a bunch of pattern-matching cases, so feel free to skim it.
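The gists with the actual Expr declaration and the simplify function are missing from this copy. The sketch below is a hedged reconstruction that follows the prose: fixity declarations above the data type, a sample expression for $3x^2$, and a representative subset of simplification rules (the original had many more). The constructor symbols (:+:, :*:, :/:, :^:) are taken from the GHCi transcripts later in the post; the particular cases shown are my own selection.

```haskell
-- Hedged reconstruction, not the author's exact code.
infixl 4 :+:
infixl 5 :*:, :/:
infixr 6 :^:

data Expr a = Var Char
            | Const a
            | Expr a :+: Expr a
            | Expr a :*: Expr a
            | Expr a :/: Expr a
            | Expr a :^: Expr a
            deriving (Eq, Show)

-- 3x^2, written without parentheses thanks to the fixities above.
sampleExpr :: Expr Float
sampleExpr = Const 3 :*: Var 'x' :^: Const 2

-- A representative subset of the simplification rules.
simplify :: (Eq a, Num a) => Expr a -> Expr a
simplify (Const a :+: Const b)         = Const (a + b)
simplify (Const a :*: Const b)         = Const (a * b)
simplify (Const 0 :+: e)               = simplify e
simplify (e :+: Const 0)               = simplify e
simplify (Const 0 :*: _)               = Const 0
simplify (_ :*: Const 0)               = Const 0
simplify (Const 1 :*: e)               = simplify e
simplify (e :*: Const 1)               = simplify e
simplify (Const a :*: (Const b :*: e)) = Const (a * b) :*: simplify e
simplify (Const 0 :/: _)               = Const 0          -- ignoring 0/0
simplify (e :/: Const 1)               = simplify e
simplify (e :^: Const 1)               = simplify e
simplify (_ :^: Const 0)               = Const 1          -- ignoring 0^0
simplify (a :+: b)                     = simplify a :+: simplify b
simplify (a :*: b)                     = simplify a :*: simplify b
simplify (a :/: b)                     = simplify a :/: simplify b
simplify (a :^: b)                     = simplify a :^: simplify b
simplify e                             = e

-- Run simplify until the output stops changing (the fixed point described below).
fullSimplify :: (Eq a, Num a) => Expr a -> Expr a
fullSimplify e = let e' = simplify e in if e' == e then e else fullSimplify e'
```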
I've also included a fullSimplify function that runs simplify on an expression until the current input matches the last output of simplify (which ensures an expression is completely simplified)*. Note that in the simplify function, I've covered a lot of bases, but not all of them. Specifically, division simplification is lacking because it gets complicated quickly and I didn't want to focus on that in this blog post. We should also note that we don't have a data type expressing subtraction or negative numbers, so we'll deal with that now. In order to express the negation of expressions, we define the negate' function, which basically multiplies expressions by $-1$ and outputs the resultant expression. Now we have a relatively robust system for computing symbolic expressions. However, we aren't able to actually plug anything into these expressions yet, so we'll fix that now. *Thanks to zoells on Reddit for the suggestion! ### Evaluating Expressions The first thing we'll need to do to begin the process of evaluating expressions is write a function to plug in a value for a specific variable. We do this in terms of a function called mapVar, implemented as follows: mapVar searches through an expression for a specific variable and performs a function on each instance of that variable in the function. plugIn takes a character and a value, and is defined using mapVar to map variables with a specific name to a constant provided by the user. Now that we have plugIn, we can define two functions: One that takes an expression full of only constants and outputs a result (evalExpr'), and one that will replace a single variable with a constant and output the result (evalExpr): What we're doing here is simple. With evalExpr', we only need to replace our functional types (:+:, :*:, etc) with the actual functions (+, *, etc). When we run into a Const, we simply replace it with it's inner number value. When we run into a Var, we note that it's not possible to evaluate, and tell the user that there is still a variable in the expression that needs to be plugged in. With evalExpr, we just plug in a value for a specific variable before evaluating the expression. Simple as that! Here are some examples of expressions and their evaluations: *Main> evalExpr 'x' 2 (Var 'x') 2.0 *Main> evalExpr 'a' 3 (Const 3 :+: Var 'a' :*: Const 6) 21.0 *Main> evalExpr 'b' 2 (Var 'b' :/: (Const 2 :*: Var 'b')) 0.5 We can even evaluate multivariate expressions using plugIn: *Main> evalExpr' . plugIn 'x' 1 $plugIn 'y' 2 (Var 'x' :+: Var 'y') 3.0 Now that we've extended our symbolic expressions to be able to be evaluated, let's do what we set out to do -- find derivatives! ### Derivatives We aren't going to get into super-complicated derivatives involving logorithmic or implicit differentiation, etc. Instead, we'll keep it simple for now, and only adhere to some of the 'simple' derivative rules. We'll need one of them for each of our six expression types: constants, variables, addition, multiplication, division, and exponentiation. 
We already know the following laws for these from calculus:

Differentiation of a constant: $\frac{d}{dx}k = 0$

Differentiation of a variable: $\frac{d}{dx}x = 1$

Addition differentiation: $\frac{d}{dx}\left(f(x) + g(x)\right) = \frac{d}{dx}f(x) + \frac{d}{dx}g(x)$

Power rule (/chain rule): $\frac{d}{dx}f(x)^n = nf(x)^{n-1} \cdot \frac{d}{dx}f(x)$

Product rule: $\frac{d}{dx}\left(f(x) \cdot g(x)\right) = \frac{d}{dx}f(x) \cdot g(x) + f(x) \cdot \frac{d}{dx}g(x)$

Quotient rule: $\frac{d}{dx}\frac{f(x)}{g(x)} = \frac{\frac{d}{dx}f(x) \cdot g(x) - \frac{d}{dx}g(x) \cdot f(x)}{g(x)^2}$

As it turns out, we can almost directly represent this in Haskell. There should be no surprises here -- following along with the above rules, it is relatively easy to see how this function calculates derivatives. We will still error out if we get something like $x^x$ as input, as it will require a technique we haven't implemented yet. However, this will suffice for many different expressions.

So, what can we do with this? Well, let's take a look: The first, most obvious thing we'll note when running the derivative function on an expression is that what it produces is rather ugly. To fix this, we'll write a function ddx that will simplify the derivative expression three times to make our output cleaner. (Remember sampleExpr = $3x^2$)

*Main> derivative sampleExpr
Const 3.0 :*: ((Const 2.0 :*: Var 'x' :^: Const 1.0) :*: Const 1.0) :+: Var 'x' :^: Const 2.0 :*: Const 0.0
*Main> ddx sampleExpr
Const 6.0 :*: Var 'x' -- 6x

Another thing we can do is get a list of derivatives. The iterate function from the Prelude suits this type of thing perfectly -- we can generate an (infinite!) list of derivatives of a function just by calling iterate ddx. Simple, expressive, and incredibly powerful.

*Main> take 3 $ ddxs sampleExpr
[Const 3.0 :*: Var 'x' :^: Const 2.0,Const 6.0 :*: Var 'x',Const 6.0] -- [3x^2, 6x, 6]
*Main> let ds = take 4 $ ddxs sampleExpr
*Main> fmap (evalExpr 'x' 2) ds
[12.0,12.0,6.0,0.0]

We're also able to grab the $n^{th}$ derivative of an expression. We could simply grab the $n^{th}$ term of ddxs, but we'll do it without the wasted memory by repeatedly composing ddx $n$ times using foldr1 and replicate.

*Main> nthDerivative 2 sampleExpr
Const 6.0

There's one last thing I want to touch on. Since it's so simple to generate a list of derivatives of a function, why not use that to build functions' Taylor series expansions?

### Taylor Series Expansions

The Taylor series expansion of a function $f(x)$ about $a$ is defined as follows:

$$\sum_{n=1}^{\infty} \frac{f^{(n)}(a)}{n!} \cdot (x - a)^n$$

The Maclaurin series expansion for a function is the Taylor series of a function with $a = 0$, and we will also implement that. Given that we can:

• Have multivariate expressions
• Easily generate a list of derivatives

we can actually find the Taylor series expansion of a function easily. Again, we can almost directly implement this function in Haskell, and evaluating it is no more difficult. To produce a Taylor series, we need a couple of things:

• A list of derivatives
• A list of indices
• A list of factorials

We create these three things in where clauses in the taylor declaration. indices are simple, derivs calculates a list of derivatives (using mapVar again to change all variables into $a$s), and facts contains our factorials wrapped in Consts. We generate a list of (expression, index)s in exprs, and then map the "gluing" function series over exprs to produce a list of expressions in the series expansion.
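The gists for derivative, ddx, ddxs, nthDerivative, mapVar, and taylor are also missing here. The sketch below builds on the Expr reconstruction above (simplify, fullSimplify), and it follows the post's own term indexing — the $n$-th term pairs the $(n-1)$-th derivative with $(x-a)^n/n!$ — which is what the worked example and GHCi output quoted below produce. All names and details are assumptions chosen to line up with the prose, not the author's original gists.

```haskell
-- Builds on the Expr sketch above.

-- Negation helper described earlier in the post: multiply by -1.
negate' :: Num a => Expr a -> Expr a
negate' e = Const (-1) :*: e

-- The six differentiation rules, transcribed almost directly.
derivative :: (Eq a, Num a) => Expr a -> Expr a
derivative (Const _) = Const 0
derivative (Var _)   = Const 1
derivative (a :+: b) = derivative a :+: derivative b
derivative (a :*: b) = a :*: derivative b :+: b :*: derivative a          -- product rule
derivative (a :/: b) = (derivative a :*: b :+: negate' (derivative b :*: a))
                         :/: (b :^: Const 2)                              -- quotient rule
derivative (a :^: Const n) =
  Const n :*: a :^: Const (n - 1) :*: derivative a                        -- power/chain rule
derivative _         = error "derivative: unsupported exponent (e.g. x^x)"

-- Tidy up the raw derivative (the post simplifies three times; a fixed point
-- does the same job here).
ddx :: (Eq a, Num a) => Expr a -> Expr a
ddx = fullSimplify . derivative

-- An infinite list of successive derivatives, and the n-th one.
ddxs :: (Eq a, Num a) => Expr a -> [Expr a]
ddxs = iterate ddx

nthDerivative :: (Eq a, Num a) => Int -> Expr a -> Expr a
nthDerivative n = foldr1 (.) (replicate n ddx)

-- Apply a function to every variable in an expression (the mapVar the prose mentions).
mapVar :: (Char -> Expr a) -> Expr a -> Expr a
mapVar f (Var c)   = f c
mapVar _ (Const a) = Const a
mapVar f (a :+: b) = mapVar f a :+: mapVar f b
mapVar f (a :*: b) = mapVar f a :*: mapVar f b
mapVar f (a :/: b) = mapVar f a :/: mapVar f b
mapVar f (a :^: b) = mapVar f a :^: mapVar f b

-- Taylor terms about 'a': the n-th term pairs the (n-1)-th derivative
-- (rewritten in terms of 'a') with (x - a)^n / n!.
taylor :: (Eq a, Num a) => Expr a -> [Expr a]
taylor expr = map (fullSimplify . series) exprs   -- the post calls this last pass superSimplify
  where
    derivs  = map (mapVar (const (Var 'a'))) (ddxs expr)
    indices = [1 ..] :: [Integer]
    facts   = map (Const . fromIntegral) (scanl1 (*) indices)   -- 1!, 2!, 3!, ...
    exprs   = zip (zipWith (:/:) derivs facts) indices
    series (d, n) = d :*: (Var 'x' :+: negate' (Var 'a')) :^: Const (fromIntegral n)
```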
We then fmap superSimplify over the list in order to simplify down our expressions, and we get back a list of Taylor series terms for the given expression. The Maclaurin expansion can be defined as mentioned above in terms of the Taylor series, and again, we basically directly encode it (though we do have to re-simplify our expressions due to the plugging in of a variable). Let's take a look at the Taylor expansion for$f(x) = x$. We note that:$f(a) = af'(a) = 1f''(a) = 0$And the rest of the derivatives will be 0. So our Taylor series terms$T_n$should look something like:$T_1 = \frac{a}{1} \cdot (x - a) = a \cdot (x-a)T_2 = \frac{1}{2} \cdot (x-a)^2T_3 = \frac{0}{6} \cdot (x-a)^3 = 0$...and so on. Let's take a look at what taylor produces: *Main> take 3$ taylor (Var 'x') [Var 'a' :*: (Var 'x' :+: Const (-1.0) :*: Var 'a'), --^ a * (x-a) (Const 1.0 :/: Const 2.0) :*: (Var 'x' :+: Const (-1.0) :*: Var 'a') :^: Const 2.0, --^ 1/2 * (x-a)^2 Const 0.0] This matches what we determined earlier. To evaluate a Taylor expression, we need a value for $a$, a value for $x$, and a specified number of terms for precision. We default this precision to 100 terms in the evalTaylor function, and the logic takes place in the evalTaylorWithPrecision function. In this function, we get the Taylor expansion, take the first prec terms, plug in a and x for all values of the function, and finally sum the terms. Maclaurin evaluation is again defined in terms of Taylor evaluation. Taking a look at the above Taylor series expansion of $f(x) = x$, there is only one term where a zero-valued $a$ will produce any output (namely $\frac{1}{2} \cdot (x-a)^2$). So when we evaluate our Maclaurin series for this function at x, we should simply get back $\frac{1}{2}x^2$. Let's see how it works: *Main> evalMaclaurin 2 (Var 'x') 2.0 --1/2 2^2 = 1/2 * 4 = 2 *Main> evalMaclaurin 3 (Var 'x') 4.5 -- 1/2 * 3^2 = 1/2 * 9 = 4.5 *Main> evalMaclaurin 10 (Var 'x') 50.0 -- 1/2 * 10^2 = 1/2 * 100 = 50 Looks like everything's working! Until next time, Ben ## Thursday, January 24, 2013 ### The Handshake Problem The Handshake Problem is something of a classic in mathematics. I first heard about it in an algebra course I took in high school, and it's stuck with me through the years. The question is this: In a room containing $n$ people, how many handshakes must take place for everyone to have shaken hands with everyone else? The goal of this short blog post will be to present a solution to this problem using the concept of graphs. ### The Complete Graph fig. I: A complete graph with 7 vertices First, we must understand the idea of a complete graph. A complete graph is a graph such that each node is connected to every other node. A graphical example can be seen in fig. I. We can use the model of a complete graph to reason about the problem at hand. The nodes in the graph may represent the people in the room, while the connecting edges represent the handshakes that must take place. As such, the key to solving the Handshake Problem is to count the edges in a complete graph. But it would be silly to draw out a graph and count the edges one-by-one in order to solve this problem (and would take a very long time for large values of $n$), so we'll use math! ### The Solution To find the solution, we have to make a few key observations. The easiest way to start out in this case is to map out the first few complete graphs and try to find a pattern: Let $n$ = the number of nodes, $e$ = the number of edges. 
• $n = 1 \Rightarrow e = 0$ • $n = 2 \Rightarrow e = 1$ • $n = 3 \Rightarrow e = 3$ • $n = 4 \Rightarrow e = 6$ At this point, you may be noticing a pattern. As $n$ increases, we're adding $n -1$ edges. This makes sense -- each time a new node is introduced, it must connect to each node other than itself. In other words, $n-1$ connections must be made. It follows that the number of the edges ($e$) in a complete graph with $n$ vertices can be represented by the sum $(n-1) + (n-2) + \dots + 1 + 0$, or more compactly as: $$\sum_{i=1}^n i-1$$ You may already know from summation laws that this evaluates to $\frac{n(n-1)}{2}$. Let's prove it. ### The Proof Theorem: $\forall n \in \mathbb{N} : \sum_{i=1}^{n} i-1 = \frac{n(n-1)}{2}$ Proof:  Base case: Let $n = 1$. Then, $\frac{1(1-1)}{2} = \sum_{i=1}^1 i-1 = 0$. Inductive case: Let $n \in \mathbb{N}$. We need to prove that $\frac{n(n-1)}{2} + n = \frac{n(n+1)}{2}$. $$\begin{array} {lcl} \frac{n(n-1)}{2} + n & = & \frac{n(n-1)+2n}{2} \\ & = & \frac{n^2 - n + 2n}{2} \\ & = & \frac{n^2 + n}{2} \\ & = & \frac{n(n+1)}{2}■\end{array}$$ We can now use the theorem knowing it's correct, and, in fact, provide a solution to the original problem. The answer to the Handshake Problem involving $n$ people in a room is simply $\frac{n(n-1)}{2}$. So, the next time someone asks you how many handshakes would have to take place in order for everyone in a room with 1029 people to shake each other's hands, you can proudly answer "528906 handshakes!" Until next time, Ben ## Wednesday, January 16, 2013 We've seen in the previous post what monads are and how they work. We saw a few monad instances of some common Haskell structures and toyed around with them a little bit. Towards the end of the post, I asked about making a Monad instance for a generalized tree data type. My guess is that it was relatively difficult. But why? Well, one thing that I didn't touch on in my last post was that the monadic >>= operation can also be represented as (join . fmap f), that is, a >>= f = join \$fmap f a, as long as a has a Functor instance. join is a generalized function of the following type: $$join :: (Monad ~m) \Rightarrow m ~(m ~a) \rightarrow m ~a$$ Basically, this function takes a nested monad and "joins" it with the top level in order to get a regular m a out of it. Since you bind to functions that take as and produce m as, fmap f actually makes m (m a)s and join is just the tool to fix up the >>= function to produce what we want. Keep in mind that join is not a part of the Monad typeclass, and therefore the above definition for >>= will not work. However, if we are able to make a specific join function (named something else, since join is taken!) for whatever type we are making a Monad instance for, we can certainly use the above definition. I don't want to spend too much time on this, but I would like to direct the reader to the Monad instance for [] that I mentioned in the last post -- can you see any similarities between this and the way I structured >>= above? Now back to the Tree type. Can you devise some way to join Trees? That is, can you think of a way to flatten a Tree of Tree as into a Tree a? This actually turns out to be quite difficult, and up to interpretation as to how you want to do it. It's not as straightforward as moving Maybe (Maybe a)s into Maybe as or [[a]]s into [a]s. Maybe if we put a Monoid restriction on our a type, as such... 
$$instance ~(Monoid ~a) \Rightarrow Monad ~(Tree ~a) ~where ~\dots$$ ...we could use mappend in some way in order to concatenate all of the nested elements using some joining function. While this is a valid way to define a Tree Monad, it seems a little bit "unnatural." I won't delve too deeply into that, though, because that's not the point of this post. Let's instead take a look at a structure related to monads that may make more sense for our generalized Tree type. ### What is a comonad? Recall the type signature of >>= for Monads: $$(>>=) :: (Monad ~m) \Rightarrow m ~a \rightarrow ~(a \rightarrow m ~b) \rightarrow m ~b$$ That is, we're taking an m a and converting it to an m b by means of some function that operates on the contained type. In other words, we're producing a new value using elements contained inside the Monad. Comonads have a similar function: $$(=>>) :: (Comonad ~w) \Rightarrow w ~a \rightarrow (w ~a \rightarrow ~b) \rightarrow w ~b$$ The difference here is that the function that we use to produce a new value operates on the whole -- we're not operating on the elements contained inside the Comonad, but the Comonad itself, to produce a new value. We also have a function similar to the Monadic return: $$coreturn :: (Comonad ~w) \Rightarrow w ~a \rightarrow a$$ Whereas return puts a value into a Monadic context, coreturn extracts a value from a Comonadic context. I mentioned the Monadic join function above because I would also like to mention that there is a similar operation for Comonads: $$cojoin :: (Comonad ~w) \Rightarrow w ~a \rightarrow w ~(w ~a)$$ Instead of "removing a layer," we're "adding" a layer. And, as it turns out, just as a >>= f can be represented as join \$ fmap f a, =>> can be represented as: $$a ~=>> ~f = fmap ~f ~\ ~cojoin ~a$$ The full Comonad typeclass (as I like to define it) is as follows: $$class ~(Functor ~w) \Rightarrow Comonad ~w ~a ~where \\ \hspace{22pt}coreturn :: (Comonad ~w) \Rightarrow w ~a \rightarrow a \\ \hspace{38pt}cojoin :: (Comonad ~w) \Rightarrow w ~a \rightarrow w ~(w ~a) \\ a ~=>> ~f = fmap ~f ~\ ~cojoin ~a$$ By now, it should at least be clear that Monads and Comonads are related -- it shouldn't be hard to see why they are so similar in name! Note: There is a package called Control.Comonad on Hackage. It uses different names for the Comonad operations, but they do the same things. It's a good package, but I wanted to show how Comonads are built and use the operation names I used to make things clearer. ### What can I do with Comonads? As it turns out, the Tree a structure that I've been mentioning fits into the Comonadic context quite well, and provides a simple example as to how Comonads work. Then we'll go ahead and make a Functor instance of our Tree a data type: From here, we're able to make a Comonadic Tree like so: The only real point of confusion here is in the cojoin function for Nodes. But, all we are doing is wrapping the entire node in a new node, and then mapping cojoin over every child node of the current node to produce its children. In other words, we're turning a Node a into a Node (Node a), which is exactly what we want to do. So what can we do with a comonadic tree? Let's tackle a simple problem. Say we're hanging out in California, and we want to get to the East coast as quickly as possible. We don't really care where on the East coast we end up -- we just want to get there. We can map out the different paths that we can take, along with the time it takes to get to each one. 
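Before building out that example: the definitions referenced above (the Tree type, its Functor instance, and the Comonad instance) were embedded gists that don't survive in this copy. Here is a hedged sketch using the class exactly as laid out in the post; the rose-tree shape `Node a [Tree a]` is an assumption, and the Control.Comonad package on Hackage spells these operations extract, duplicate, and extend.

```haskell
-- Hedged sketch of the missing definitions; the tree shape is assumed.
data Tree a = Node a [Tree a]
  deriving Show

instance Functor Tree where
  fmap f (Node a children) = Node (f a) (map (fmap f) children)

-- The Comonad class as presented in the post.
class Functor w => Comonad w where
  coreturn :: w a -> a
  cojoin   :: w a -> w (w a)
  (=>>)    :: w a -> (w a -> b) -> w b
  x =>> f = fmap f (cojoin x)

instance Comonad Tree where
  coreturn (Node a _) = a
  -- Wrap the whole node in a new node, and cojoin every child to build its children.
  cojoin t@(Node _ children) = Node t (map cojoin children)
```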
In other words, we're going to model spots on the map as Nodes on a Tree and give them a weight corresponding to the time it takes to get there. We can model this situation with a Tree as follows: The number of each Node marks its distance from the previous Node. The root Node of the Tree is the starting point, so it is 0 distance away from itself. What we want to do to find the shortest path through the country is essentially, as follows. First, we're going to need to check the deepest nodes, and find their minimum distance children. We will add the distance of the Node closest to the one we're examining to its own distance, to find the shortest way to get from the node we're examining to the destination. Once all of that has been done, we'll need to traverse up the tree, repeating this as we go. By the end of the algorithm, the root Node will be marked with the shortest distance to the destination. Now, that may sound somewhat iterative in nature, but we're going to morph this into a comonadic operation. First, let's take a look at a function we can use to find the minimum distance to the next level of our tree: This is relatively simple, and does precisely what was mentioned above: adds the minimum value contained in the connected Nodes to the parent Node. Next, take a look at the type signature. We can see that this function produces a new number from a tree full of numbers. This coincides precisely with the function type we need to use with =>>, so we'll be able to use it to get exactly what we want. The rest is very simple: This produces a tree full of the minimum distances from each node to the East coast. Pulling the actual value out is as easy as calling coreturn on the resultant tree. Ben ## Wednesday, January 2, 2013 If you've used Haskell before, chances are that you've heard the term "monad" a good bit. But, you might not know what they are, or what they are really used for. It is my goal in this blog post to shed some light on what monads are, introduce and define few simple ones, and touch on their usefulness. I will assume basic familiarity with Haskell syntax in this blog post. A working knowledge of monoids and functors will also be beneficial, although not strictly necessary. We can see a Monad has two operations, return and >>= (this is commonly pronounced "bind"). You may suspect that you know what return does from other programming languages, but be warned: Haskell's return is very different! Haskell's return only operates on Monads, and essentially acts as a "wrapping" function. That is, if you call return on a value, it will turn it into a monadic value of that type. We will look at an example of this shortly. The second operation, >>=, takes two arguments: an a wrapped in a Monad m (I will refer to this as m a from now on), and a function that converts an a to b wrapped in the same type of Monadm. It produces a value of type b, wrapped in a Monad m (I will call this m b from now on). This may sound complicated, but I'll do my best to explain it after showing the most basic Monad type, namely, the Identity Monad: The Identity data declaration defines only one type: Identity a. Basically, this is just a wrapper for a value, and nothing more. return is simply defined as Identity. We can, for example, call return 3 and get an Identity 3. Turning values into Identitys is as simple as that. The bind function may look a little bit more obscure, though. 
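The gist with the Identity declaration is gone from this copy; below is a hedged sketch. Modern GHC also requires the Functor and Applicative instances (the original 2013 post predates that), and the concrete values of m1 and m2 are assumed — they are chosen so that both bind examples come out to Identity 6, as described.

```haskell
data Identity a = Identity a
  deriving Show

instance Functor Identity where
  fmap f (Identity a) = Identity (f a)

instance Applicative Identity where
  pure = Identity
  Identity f <*> Identity a = Identity (f a)

instance Monad Identity where
  return = Identity
  (Identity a) >>= f = f a

-- The two examples mentioned below; both evaluate to Identity 6.
addThree :: Int -> Identity Int
addThree x = return (x + 3)

getLength :: [a] -> Identity Int
getLength xs = return (length xs)

m1 :: Identity Int
m1 = return 3

m2 :: Identity String
m2 = return "I am 6"            -- a six-character string (assumed example data)

example1, example2 :: Identity Int
example1 = m1 >>= addThree      -- Identity 6
example2 = m2 >>= getLength     -- Identity 6
```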
Let's look closely: We first use pattern matching to be able to operate on the type that the Identity Monad wraps (Identity a), bind it (>>=) to a function (f). Since f converts normal values into monadic ones (look at the type declaration), we can simply apply f to a, and we've produced a new Identity. I've provided a couple of examples of how this works. addThree adds 3 to an Identity Int (m1) and getLength turns an Identity [a]  (m2) into an Identity Int. Both of the bind examples produce the value Identity 6. Can you see why? I want to take a little sidebar here and try to explain how I understand Monads before moving on. Basically the real tough part of understanding Monads is just understanding >>=. A little bit of dissection can explain a lot, but it took me months to really grasp the concept. On the surface, you're morphing an m a into an m b. At first, I thought that the m's in the type signature were allowed to different types of Monads. For instance, I thought that Monads made it possible to bind Identity Ints to functions that produced [String]s (we'll look at the list Monad in a minute). This is wrong, and for good reason! It was incredibly confusing to think about a function that could generalize to this degree and it is, to my knowledge, not possible to do so. The Monad (wrapper) type is retained, but the wrapped type can be changed by means of the function you bind to. The second thing that I want to stress is what >>= really does. What we're doing with the >>= function is turning an m a into an m b, but it's not so direct as you might want to think. The function argument in >>=  ( (a -> m b) ) was really confusing to me to begin with, so I'm going to try to explain it. The function you're binding your m a to must take an a and return an m b. This means that you're acting on the wrapped part of the Monad in order to produce an entirely new Monad. It took me a while to realize what exactly was happening, that I wasn't supposed to be directly converting m a to m b by means of my argument function (that's actually what >>= is for). Just remember that the function you bind to should operate on an a to produce an m b. With that knowledge, let's take a look at some more examples of Monads: If you're a Haskell user, you're likely already familiar with the Maybe data type: Defined in the Prelude The Maybe Monad is hardly an extension of the aforementioned Identity Monad. All we are really adding is the functionality to fail -- that is, to produce a Nothing value if necessary. In fact, the Just a type is exactly the same as Identity a, except that a Nothing value anywhere in the computation will produce a Nothing as a result. Next, we'll look at the Monad instance for lists: Defined in the Prelude The list Monad is a little more complicated, but it's really nothing too new. For return, we just have to wrap our value inside a list. Now, let's look at >>=concatMap actually is the >>= instance for lists. Why? Well, concatMap is, simply, (concat . map). We know that map takes a list of as and maps a function over them, and concat is a function that flattens lists, that is, turns [[a]]s into [a]s. When we compose these two functions (or use concatMap), we're making a function that must produce [[a]]s out of [a]s, and then flatten them back to [a]s. That is exactly how >>=  works for lists! Now, let's take a look at a slightly more complicated Monad: This one requires some explanation, because it's a bit of a leap from the last three, since we're defining a Monad instance for functions. 
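The gist for this section is missing too. Since base already ships the Monad instance for ((->) r), writing that instance out verbatim would not compile alongside it, so the sketch below spells the two methods out as standalone functions (retConst and bindFn are my names) and then uses base's real instance for the mean example the text mentions. Treat it as an illustration of the same definitions, not the author's original file.

```haskell
-- What base's instance Monad ((->) r) does, written out under hypothetical
-- names so this file still compiles; f, g, and r match the explanation below.
retConst :: a -> (r -> a)
retConst = const                     -- return: produce a function that ignores r

bindFn :: (r -> a) -> (a -> (r -> b)) -> (r -> b)
bindFn f g = \r -> g (f r) r         -- (>>=): feed r to f, then feed both results onward

-- mean over a list, using the real ((->) r) monad from base:
-- once with do-notation, once without.
meanDo :: [Double] -> Double
meanDo = do
  s <- sum
  l <- length
  return (s / fromIntegral l)

mean :: [Double] -> Double
mean = sum >>= \s -> length >>= \l -> return (s / fromIntegral l)
```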
I have added a couple of comments in the above code in order to further explain how everything is working in the (->) r Monad. Well, first things's first. We define return as const, which takes some arbitrary argument and produces a function that produces a constant value (what we pass in to return). This is exactly what we want; a minimal context for functions. (>>=) is a bit more complex. First, let's take a look at the type signature, specific to this Monad. We're binding a function (r -> a) to a function (a -> (r -> b)) to produce a function of type (r -> b). So in the declaration of the function, f is of type (r -> a) and g is of type (a -> (r -> b)). I've conveniently named the argument our function is going to take r, for clarity. This is what the user will pass into the function that (>>=) produces. f takes an r, so we apply the value from the lambda into it to produce an a. We then use that a to produce an (r -> b), using g. This produces our function, but it needs to be fed some value, so we finally apply it to r in order to complete everything. If any of this sounds confusing, try to work through it on paper and read the comments in my code -- after a bit of staring, it should start to make sense. I've also implemented the mean function here, using the (->) r Monad with and without do-notation. I think it's a pretty elegant way to express it. We've only seen the tip of the iceberg when it comes to Monads, but this should serve as a good basic introduction. Once you get used to seeing Monads, they become a bit less scary. The more time you spend with them, the more comfortable you'll feel with them. Before I end this post, I want to leave a question for the reader. Given the general Tree data type represented below, can you intuitively make a Monad instance? I'll explain why this is difficult when we talk about a similar mathematical type in the next post.
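The embedded definition of that "general Tree data type" is missing here; a plain rose tree like the following is presumably what is meant (it matches the shape the comonad post above works with, though the original may have carried an extra Leaf constructor).

```haskell
-- Assumed reconstruction of the teaser's Tree type.
data Tree a = Node a [Tree a]
  deriving Show
```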
Search by Topic Resources tagged with Factors and multiples similar to Mindreader: Filter by: Content type: Stage: Challenge level: There are 93 results Broad Topics > Numbers and the Number System > Factors and multiples Three Times Seven Stage: 3 Challenge Level: A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why? Stage: 3 Challenge Level: List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it? Special Sums and Products Stage: 3 Challenge Level: Find some examples of pairs of numbers such that their sum is a factor of their product. eg. 4 + 12 = 16 and 4 × 12 = 48 and 16 is a factor of 48. What Numbers Can We Make? Stage: 3 Challenge Level: Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make? Repeaters Stage: 3 Challenge Level: Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13. What Numbers Can We Make Now? Stage: 3 Challenge Level: Imagine we have four bags containing numbers from a sequence. What numbers can we make now? Take Three from Five Stage: 4 Challenge Level: Caroline and James pick sets of five numbers. Charlie chooses three of them that add together to make a multiple of three. Can they stop him? Hidden Rectangles Stage: 3 Challenge Level: Rectangles are considered different if they vary in size or have different locations. How many different rectangles can be drawn on a chessboard? Got It Stage: 2 and 3 Challenge Level: A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target. Even So Stage: 3 Challenge Level: Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why? Have You Got It? Stage: 3 Challenge Level: Can you explain the strategy for winning this game with any target? Product Sudoku Stage: 3 Challenge Level: The clues for this Sudoku are the product of the numbers in adjacent squares. Stars Stage: 3 Challenge Level: Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are hit? Stage: 3 Challenge Level: Make a set of numbers that use all the digits from 1 to 9, once and once only. Add them up. The result is divisible by 9. Add each of the digits in the new number. What is their sum? Now try some. . . . Ben's Game Stage: 3 Challenge Level: Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters. Counting Factors Stage: 3 Challenge Level: Is there an efficient way to work out how many factors a large number has? Dozens Stage: 2 and 3 Challenge Level: Do you know a quick way to check if a number is a multiple of two? How about three, four or six? For What? Stage: 4 Challenge Level: Prove that if the integer n is divisible by 4 then it can be written as the difference of two squares. Sixational Stage: 4 and 5 Challenge Level: The nth term of a sequence is given by the formula n^3 + 11n . 
Find the first four terms of the sequence given by this formula and the first term of the sequence which is bigger than one million. . . . A Biggy Stage: 4 Challenge Level: Find the smallest positive integer N such that N/2 is a perfect cube, N/3 is a perfect fifth power and N/5 is a perfect seventh power. The Remainders Game Stage: 2 and 3 Challenge Level: A game that tests your understanding of remainders. Number Rules - OK Stage: 4 Challenge Level: Can you convince me of each of the following: If a square number is multiplied by a square number the product is ALWAYS a square number... American Billions Stage: 3 Challenge Level: Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3... Remainder Stage: 3 Challenge Level: What is the remainder when 2^2002 is divided by 7? What happens with different powers of 2? A First Product Sudoku Stage: 3 Challenge Level: Given the products of adjacent cells, can you complete this Sudoku? Mod 3 Stage: 4 Challenge Level: Prove that if a^2+b^2 is a multiple of 3 then both a and b are multiples of 3. Common Divisor Stage: 4 Challenge Level: Find the largest integer which divides every member of the following sequence: 1^5-1, 2^5-2, 3^5-3, ... n^5-n. Diagonal Product Sudoku Stage: 3 and 4 Challenge Level: Given the products of diagonally opposite cells - can you complete this Sudoku? Factor Lines Stage: 2 and 3 Challenge Level: Arrange the four number cards on the grid, according to the rules, to make a diagonal, vertical or horizontal line. Really Mr. Bond Stage: 4 Challenge Level: 115^2 = (110 x 120) + 25, that is 13225 895^2 = (890 x 900) + 25, that is 801025 Can you explain what is happening and generalise? Shifting Times Tables Stage: 3 Challenge Level: Can you find a way to identify times tables after they have been shifted up? LCM Sudoku II Stage: 3, 4 and 5 Challenge Level: You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku. How Old Are the Children? Stage: 3 Challenge Level: A student in a maths class was trying to get some information from her teacher. She was given some clues and then the teacher ended by saying, "Well, how old are they?" Funny Factorisation Stage: 3 Challenge Level: Using the digits 1 to 9, the number 4396 can be written as the product of two numbers. Can you find the factors? Gabriel's Problem Stage: 3 Challenge Level: Gabriel multiplied together some numbers and then erased them. Can you figure out where each number was? Satisfying Statements Stage: 3 Challenge Level: Can you find any two-digit numbers that satisfy all of these statements? Charlie's Delightful Machine Stage: 3 and 4 Challenge Level: Here is a machine with four coloured lights. Can you develop a strategy to work out the rules controlling each light? Power Crazy Stage: 3 Challenge Level: What can you say about the values of n that make $7^n + 3^n$ a multiple of 10? Are there other pairs of integers between 1 and 10 which have similar properties? Cuboids Stage: 3 Challenge Level: Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all? Stage: 3 and 4 Challenge Level: The items in the shopping basket add and multiply to give the same amount. What could their prices be? Factoring Factorials Stage: 3 Challenge Level: Find the highest power of 11 that will divide into 1000! exactly. 
Star Product Sudoku Stage: 3 and 4 Challenge Level: The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid. Data Chunks Stage: 4 Challenge Level: Data is sent in chunks of two different sizes - a yellow chunk has 5 characters and a blue chunk has 9 characters. A data slot of size 31 cannot be exactly filled with a combination of yellow and. . . . Eminit Stage: 3 Challenge Level: The number 8888...88M9999...99 is divisible by 7 and it starts with the digit 8 repeated 50 times and ends with the digit 9 repeated 50 times. What is the value of the digit M? Digat Stage: 3 Challenge Level: What is the value of the digit A in the sum below: [3(230 + A)]^2 = 49280A Multiplication Magic Stage: 4 Challenge Level: Given any 3 digit number you can use the given digits and name another number which is divisible by 37 (e.g. given 628 you say 628371 is divisible by 37 because you know that 6+3 = 2+7 = 8+1 = 9). . . . AB Search Stage: 3 Challenge Level: The five digit number A679B, in base ten, is divisible by 72. What are the values of A and B? Expenses Stage: 4 Challenge Level: What is the largest number which, when divided into 1905, 2587, 3951, 7020 and 8725 in turn, leaves the same remainder each time? Divisively So Stage: 3 Challenge Level: How many numbers less than 1000 are NOT divisible by either: a) 2 or 5; or b) 2, 5 or 7?
# Is it correct to use RequirePackage[utf8]{inputenc}? If I create a package, is it correct to use \RequirePackage[utf8]{inputenc} to support utf8. I guess not, but how do I state that this package must (should) be used with inputenc utf8? • well inputenc is only required when using LaTeX or pdfLaTeX. if one uses the newer XeLaTeX or LuaLaTeX, UTF-8 is already required as the input and works natively. – ArTourter Nov 25 '12 at 14:49 This is what I do in newunicodechar that requires the utf8 option either to inputenc or inputenx and wouldn't work with other options and without either of those packages: \def\nuc@stop{\PackageWarningNoLine{newunicodechar} inputenc' or inputenx' with the utf8' option}% \let\newunicodechar\@gobbletwo\endinput} {\def\nuc@tempa{inputenx}} \@ifpackagewith{\nuc@tempa}{utf8}{}{\nuc@stop} \@ifpackagewith{\nuc@tempa}{utf8x}{\nuc@stop}{} The \let\newunicodechar\@gobbletwo is just to enable processing the file nonetheless. If you want to require usage of utf8 you can modify it like this: \ProvidesPackage{xyz} % Define an error message \def\xyz@stop{\PackageError{xyz} {inputenc' or inputenx' loaded with wrong option.\MessageBreak This is a fatal error} {This package won't work if either inputenc' or inputenx'\MessageBreak are loaded with the utf8' option. The LaTeX run will be terminated}% \fi\@@end} % the \fi is to match \if@tempswa % Check for inputenx or inputenc \@tempswafalse {\def\xyz@tempa{inputenx}\@tempswatrue} {\def\xyz@tempa{inputenc}\@tempswatrue} {\RequirePackage[utf8]{inputenc}}% } % Check for the right option \if@tempswa \@ifpackagewith{\xyz@tempa}{utf8}{}{\xyz@stop} \@ifpackagewith{\xyz@tempa}{utf8x}{\xyz@stop}{} \fi So if the user types \usepackage[utf8]{inputenc} \usepackage{xyz} nothing will be done. But if inputenc (or inputenx) is not loaded before it, xyz will load it. If a wrong option is passed to inputenc, the package will terminate the LaTeX run.
### Gambling at Monte Carlo

A man went to Monte Carlo to try and make his fortune. Whilst he was there he had an opportunity to bet on the outcome of rolling dice. He was offered the same odds for each of the following outcomes: At least 1 six with 6 dice. At least 2 sixes with 12 dice. At least 3 sixes with 18 dice.

### Balls and Bags

Two bags contain different numbers of red and blue balls. A ball is removed from one of the bags. The ball is blue. What is the probability that it was removed from bag A?

### Fixing the Odds

You have two bags, four red balls and four white balls. You must put all the balls in the bags although you are allowed to have one bag empty. How should you distribute the balls between the two bags so as to make the probability of choosing a red ball as small as possible and what will the probability be in that case?

# Win or Lose?

##### Stage: 4 Challenge Level:

The gambler will have less money than he started with. Suppose the amount of money before a game is $m$, then: $m \to 3m/2$ for a win and $m \to m/2$ after losing a game.

| $n$ | Amount after $2n$ games ($n$ wins, $n$ losses) |
| --- | --- |
| 1 | $3m/4$ |
| 2 | $m \times 1/2 \times 3/2 \times 1/2 \times 3/2 = (3/4)^2 m$ |
| 3 | $m \times 1/2 \times 3/2 \times 1/2 \times 3/2 \times 1/2 \times 3/2 = (3/4)^3 m$ |

After $n$ wins and $n$ losses he will have $(3/4)^n$ times the money he started with, irrespective of the order in which his wins and losses occur. Eventually he will run out of money as what he has left will be smaller than the smallest coin in circulation.

The diagram was suggested by Roderick and Michael of Simon Langton Boys' Grammar School Canterbury, who pointed out that if the gambler went on indefinitely he would, in theory, end up with an infinitely small amount which would be represented by nothing.
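A quick way to check the order-independence claim numerically — a minimal sketch, with the "smallest coin" assumed to be 0.01 of the currency unit:

```haskell
-- Wealth after a sequence of games: True = win (x 3/2), False = loss (x 1/2).
wealth :: Double -> [Bool] -> Double
wealth = foldl (\m w -> if w then m * 3/2 else m * 1/2)

-- Any ordering of n wins and n losses gives (3/4)^n of the stake, e.g.
-- wealth 100 [True,False,True,False] == wealth 100 [True,True,False,False] == 56.25

-- How many win/loss pairs until the stake first falls below the smallest coin
-- (assumed to be 0.01), starting from stake m.
roundsUntilBroke :: Double -> Int
roundsUntilBroke m = length (takeWhile (>= 0.01) (iterate (* (3/4)) m))
```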
• entries 99 279 • views 154831 # Thirteen Years of Bad Game Code 2385 views Alone on a Friday night, in need of some inspiration, you decide to relive some of your past programming conquests. The old archive hard drive slowly spins up, and the source code of the glory days scrolls by... Oh no. This is not at all what you expected. Were things really this bad? Why did no one tell you? Why were you like this? Is it even possible to have that many gotos in a single function? You quickly close the project. For a brief second, you consider deleting it and scrubbing the hard drive. What follows is a compilation of lessons, snippets, and words of warning salvaged from my own excursion into the past. Names have not been changed, to expose the guilty. # 2004 I was thirteen. The project was called Red Moon -- a wildly ambitious third-person jet combat game. The few bits of code that were not copied verbatim out of Developing Games in Java were patently atrocious. Let's look at an example. I wanted to give the player multiple weapons to switch between. The plan was to rotate the weapon model down inside the player model, swap it out for the next weapon, then rotate it back. Here's the animation code. Don't think about it too hard.public void updateAnimation(long eTime) { if(group.getGroup("gun") == null) { group.addGroup((PolygonGroup)gun.clone()); } changeTime -= eTime; if(changing && changeTime <= 0) { group.removeGroup("gun"); group.addGroup((PolygonGroup)gun.clone()); weaponGroup = group.getGroup("gun"); weaponGroup.xform.velocityAngleX.set(.003f, 250); changing = false; }} I want to point out two fun facts. First, observe how many state variables are involved: • changeTime • changing • weaponGroup • weaponGroup.xform.velocityAngleX Even with all that, it feels like something's missing... ah yes, we need a variable to track which weapon is currently equipped. Of course, that's in another file entirely. The other fun fact is that I never actually created more than one weapon model. Every weapon used the same model. All that weapon model code was just a liability. # How to Improve Remove redundant variables. In this case, the state could be captured by two variables: [font='courier new']weaponSwitchTimer [/font]and [font='courier new']weaponCurrent[/font]. Everything else can be derived from those two variables. Explicitly initialize everything. This function checks if the weapon is [font=consolas][size=1]null[/font] and initializes it if necessary. Thirty seconds of contemplation would reveal that the player always has a weapon in this game, and if they don't, the game is unplayable and might as well crash anyway. Clearly, at some point, I encountered a [font='courier new']NullPointerException [/font]in this function, and instead of thinking about why it happened, I threw in a quick [font='courier new']null [/font]check and moved on. In fact, most of the functions dealing with weapons have a check like this! Be proactive and make decisions upfront! Don't leave them for the computer to figure out. # Naming boolean noenemies = true; // why oh why Name your booleans positively. If you find yourself writing code like this, re-evaluate your life decisions:if (!noenemies) { // are there enemies or not??} # Error Handling Snippets like this are sprinkled liberally throughout the codebase:static { try { gun = Resources.parseModel("images/gun.txt"); } catch (FileNotFoundException e) {} // *shrug* catch (IOException e) {}} You might be thinking "it should handle that error more gracefully! 
Show a message to the user or something." But I actually think the opposite. You can never have too much error checking, but you can definitely have too much error handling. In this case, the game is unplayable without the weapon model, so I might as well let it crash. Don't try to gracefully recover from unrecoverable errors. Once again, this requires you to make an upfront decision as to which errors are recoverable. Unfortunately, Sun decided that almost all Java errors must be recoverable, which results in lazy error handling like the above. # 2005-2006 At this point I learned C++ and DirectX. I decided to write a reusable engine so that mankind could benefit from the vast wealth of knowledge and experience I had acquired in my fourteen years on the earth. If you thought the last trailer was cringey, just wait. By now I learned that Object-Oriented Programming is Good(TM), which resulted in monstrosities like this:class Mesh {public: static std::list meshes; // Static list of meshes; used for caching and rendering Mesh(LPCSTR file); // Loads the x file specified Mesh(); Mesh(const Mesh& vMesh); ~Mesh(); void LoadMesh(LPCSTR xfile); // Loads the x file specified void DrawSubset(DWORD index); // Draws the specified subset of the mesh DWORD GetNumFaces(); // Returns the number of faces (triangles) in the mesh DWORD GetNumVertices(); // Returns the number of vertices (points) in the mesh DWORD GetFVF(); // Returns the Flexible Vertex Format of the mesh int GetNumSubsets(); // Returns the number of subsets (materials) in the mesh Transform transform; // World transform std::vector* GetMaterials(); // Gets the list of materials in this mesh std::vector* GetCells(); // Gets the list of cells this mesh is inside D3DXVECTOR3 GetCenter(); // Gets the center of the mesh float GetRadius(); // Gets the distance from the center to the outermost vertex of the mesh bool IsAlpha(); // Returns true if this mesh has alpha information bool IsTranslucent(); // Returns true if this mesh needs access to the back buffer void AddCell(Cell* cell); // Adds a cell to the list of cells this mesh is inside void ClearCells(); // Clears the list of cells this mesh is insideprotected: ID3DXMesh* d3dmesh; // Actual mesh data LPCSTR filename; // Mesh file name; used for caching DWORD numSubsets; // Number of subsets (materials) in the mesh std::vector materials; // List of materials; loaded from X file std::vector cells; // List of cells this mesh is inside D3DXVECTOR3 center; // The center of the mesh float radius; // The distance from the center to the outermost vertex of the mesh bool alpha; // True if this mesh has alpha information bool translucent; // True if this mesh needs access to the back buffer void SetTo(Mesh* mesh);} I also learned that comments are Good(TM), which led me to write gems like this:D3DXVECTOR3 GetCenter(); // Gets the center of the mesh This class presents more serious problems though. The idea of a Mesh is a confusing abstraction that has no real-world equivalent. I was confused about it even as I wrote it. Is it a container that holds vertices, indices, and other data? Is it a resource manager that loads and unloads that data from disk? Is it a renderer that sends the data to the GPU? It's all of these things. # How to Improve The Mesh class should be a "plain old data structure". It should have no "smarts", which means we can safely trash all the useless getters and setters and make all the fields public. 
Then we can separate the resource management and rendering into separate systems which operate on the inert data. Yes, systems, not objects. Don't shoehorn every problem into an object-oriented abstraction when another abstraction might be a better fit. The comments can be improved, mostly, by deleting them. Comments easily fall out of date and become misleading liabilities, since they're not checked by the compiler. I posit that comments should be eliminated unless they fall into one of these categories: • Comments explaining why, rather than what. These are the most useful. • Comments with a few words explaining what the following giant chunk of code does. These are useful for navigation and reading. • Comments in the declaration of a data structure, explaining what each field means. These are often unnecessary, but sometimes it's not possible to map a concept intuitively to memory, and comments are necessary to describe the mapping. # 2007-2008 I call these years "The PHP Dark Ages". # 2009-2010 By now, I'm in college. I'm making a Python-based third-person multiplayer shooter called Acquire, Attack, Asplode, Pwn. I have no excuse at this point. The cringe just keeps getting worse, and now it comes with a healthy dose of copyright infringing background music. When I wrote this game, the most recent piece of wisdom I had picked up was that global variables are Bad(TM). They lead to spaghetti code. They allow function "A" to break a completely unrelated function "B" by modifying global state. They don't work with threads. However, almost all gameplay code needs access to the entire world state. I "solved" this problem by storing everything in a "world" object and passed the world into every single function. No more globals! I thought this was great because I could theoretically run multiple, separate worlds simultaneously. In practice, the "world" functioned as a de facto global state container. The idea of multiple worlds was of course never needed, never tested, and I'm convinced, would never work without significant refactoring. Once you join the strange cult of global tea-totallers, you discover a whole world of creative methods to delude yourself. The worst is the singleton:class Thing{ static Thing i = null; public static Thing Instance() { if (i == null) i = new Thing(); return i; }}Thing thing = Thing.Instance(); Poof, magic! Not a global variable in sight! And yet, a singleton is much worse than a global, for the following reasons: • All the potential pitfalls of global variables still apply. If you think a singleton is not a global, you're just lying to yourself. • At best, accessing a singleton adds an expensive branch instruction to your program. At worst, it's a full function call. • You don't know when a singleton will be initialized until you actually run the program. This is another case of a programmer lazily offloading a decision that should be made at design time. # How to Improve If something needs to be global, just make it global. Consider the whole of your project when making this decision. Experience helps. The real problem is code interdependence. Global variables make it easy to create invisible dependencies between disparate bits of code. Group interdependent code together into cohesive systems to minimize these invisible dependencies. A good way to enforce this is to throw everything related to a system onto its own thread, and force the rest of the code to communicate with it via messaging. 
# Boolean Parameters Maybe you've written code like this:class ObjectEntity: def delete(self, killed, local): # ... if killed: # ... if local: # ... Here we have four different "delete" operations that are highly similar, with a few minor differences depending on two boolean parameters. Seems perfectly reasonable. Now let's look at the client code that calls this function:obj.delete(True, False) # How to Improve This is a case-by-case thing. However, one piece of advice from Casey Muratori certainly applies here: write the client code first. I'm sure that no sane person would write the above client code. You might write something like this instead:obj.killLocal() And then go write out the implementation of the [font='courier new']killLocal()[/font] function. # Naming It may seem strange to focus so heavily on naming, but as the old joke goes, it's one of the two remaining unsolved problems in computer science. The other being cache invalidation and off-by-one errors. Take a look at these functions:class TeamEntityController(Controller): def buildSpawnPacket(self): # ... def readSpawnPacket(self): # ... def serverUpdate(self): # ... def clientUpdate(self): # ... Clearly the first two functions are related to each other, and the last two functions are related. But they are not named to reflect that reality. If I start typing [font='courier new']self.[/font] in an IDE, these functions will not show up next to each other in the autocomplete menu. Better to make each name start with the general and end with the specific, like this:class TeamEntityController(Controller): def packetSpawnBuild(self): # ... def packetSpawnRead(self): # ... def updateServer(self): # ... def updateClient(self): # ... The autocomplete menu will make much more sense with this code. # 2010-2015 After only 12 years of work, I actually finished a game. Despite all I had learned up to this point, this game featured some of my biggest blunders. # Data Binding At this time, people were just starting to get excited about "reactive" UI frameworks like Microsoft's MVVM and Google's Angular. Today, this style of programming lives on mainly in React. All of these frameworks start with the same basic promise. They show you an HTML text field, an empty [font='courier new'][/font] element, and a single line of code that inextricably binds the two together. Type in the text field, and pow! The [font='courier new'] [/font]magically updates. In the context of a game, it looks something like this:public class Player{ public Property Name = new Property { Value = "Ryu" };}public class TextElement : UIComponent{ public Property Text = new Property { Value = "" };}label.add(new Binding(label.Text, player.Name)); Wow, now the UI automatically updates based on the player's name! I can keep the UI and game code totally separate. This is appealing because we're eliminating the state of the UI and instead deriving it from the state of the game. There were some red flags, however. I had to turn every single field in the game into a Property object, which included a list of bindings that depended on it:public class Property : IProperty{ protected Type _value; protected List bindings; public Type Value { get { return this._value; } set { this._value = value; for (int i = this.bindings.Count - 1; i >= 0; i = Math.Min(this.bindings.Count - 1, i - 1)) this.bindings.OnChanged(this); } }} Every single field in the game, down to the last boolean, had an unwieldy dynamically allocated array attached to it. 
Take a look at the loop that notifies the bindings of a property change to get an idea of the issues I ran into with this paradigm. It has to iterate through the binding list backward, since a binding could actually add or delete UI elements, causing the binding list to change. Still, I loved data binding so much that I built the entire game on top of it. I broke down objects into components and bound their properties together. Things quickly got out of hand.jump.Add(new Binding(jump.Crouched, player.Character.Crouched));jump.Add(new TwoWayBinding(player.Character.IsSupported, jump.IsSupported));jump.Add(new TwoWayBinding(player.Character.HasTraction, jump.HasTraction));jump.Add(new TwoWayBinding(player.Character.LinearVelocity, jump.LinearVelocity));jump.Add(new TwoWayBinding(jump.SupportEntity, player.Character.SupportEntity));jump.Add(new TwoWayBinding(jump.SupportVelocity, player.Character.SupportVelocity));jump.Add(new Binding(jump.AbsoluteMovementDirection, player.Character.MovementDirection));jump.Add(new Binding(jump.WallRunState, wallRun.CurrentState));jump.Add(new Binding(jump.Rotation, rotation.Rotation));jump.Add(new Binding(jump.Position, transform.Position));jump.Add(new Binding(jump.FloorPosition, floor));jump.Add(new Binding(jump.MaxSpeed, player.Character.MaxSpeed));jump.Add(new Binding(jump.JumpSpeed, player.Character.JumpSpeed));jump.Add(new Binding(jump.Mass, player.Character.Mass));jump.Add(new Binding(jump.LastRollKickEnded, rollKickSlide.LastRollKickEnded));jump.Add(new Binding(jump.WallRunMap, wallRun.WallRunVoxel));jump.Add(new Binding(jump.WallDirection, wallRun.WallDirection));jump.Add(new CommandBinding(jump.WalkedOn, footsteps.WalkedOn));jump.Add(new CommandBinding(jump.DeactivateWallRun, (Action)wallRun.Deactivate));jump.FallDamage = fallDamage;jump.Predictor = predictor;jump.Bind(model);jump.Add(new TwoWayBinding(wallRun.LastWallRunMap, jump.LastWallRunMap));jump.Add(new TwoWayBinding(wallRun.LastWallDirection, jump.LastWallDirection));jump.Add(new TwoWayBinding(rollKickSlide.CanKick, jump.CanKick));jump.Add(new TwoWayBinding(player.Character.LastSupportedSpeed, jump.LastSupportedSpeed));wallRun.Add(new Binding(wallRun.IsSwimming, player.Character.IsSwimming));wallRun.Add(new TwoWayBinding(player.Character.LinearVelocity, wallRun.LinearVelocity));wallRun.Add(new TwoWayBinding(transform.Position, wallRun.Position));wallRun.Add(new TwoWayBinding(player.Character.IsSupported, wallRun.IsSupported));wallRun.Add(new CommandBinding(wallRun.LockRotation, (Action)rotation.Lock));wallRun.Add(new CommandBinding(wallRun.UpdateLockedRotation, rotation.UpdateLockedRotation));vault.Add(new CommandBinding(wallRun.Vault, delegate() { vault.Go(true); }));wallRun.Predictor = predictor;wallRun.Add(new Binding(wallRun.Height, player.Character.Height));wallRun.Add(new Binding(wallRun.JumpSpeed, player.Character.JumpSpeed));wallRun.Add(new Binding(wallRun.MaxSpeed, player.Character.MaxSpeed));wallRun.Add(new TwoWayBinding(rotation.Rotation, wallRun.Rotation));wallRun.Add(new TwoWayBinding(player.Character.AllowUncrouch, wallRun.AllowUncrouch));wallRun.Add(new TwoWayBinding(player.Character.HasTraction, wallRun.HasTraction));wallRun.Add(new Binding(wallRun.LastWallJump, jump.LastWallJump));wallRun.Add(new Binding(player.Character.LastSupportedSpeed, wallRun.LastSupportedSpeed));player.Add(new Binding(player.Character.WallRunState, wallRun.CurrentState));input.Bind(rollKickSlide.RollKickButton, settings.RollKick);rollKickSlide.Add(new Binding(rollKickSlide.EnableCrouch, 
player.EnableCrouch));rollKickSlide.Add(new Binding(rollKickSlide.Rotation, rotation.Rotation));rollKickSlide.Add(new Binding(rollKickSlide.IsSwimming, player.Character.IsSwimming));rollKickSlide.Add(new Binding(rollKickSlide.IsSupported, player.Character.IsSupported));rollKickSlide.Add(new Binding(rollKickSlide.FloorPosition, floor));rollKickSlide.Add(new Binding(rollKickSlide.Height, player.Character.Height));rollKickSlide.Add(new Binding(rollKickSlide.MaxSpeed, player.Character.MaxSpeed));rollKickSlide.Add(new Binding(rollKickSlide.JumpSpeed, player.Character.JumpSpeed));rollKickSlide.Add(new Binding(rollKickSlide.SupportVelocity, player.Character.SupportVelocity));rollKickSlide.Add(new TwoWayBinding(wallRun.EnableEnhancedWallRun, rollKickSlide.EnableEnhancedRollSlide));rollKickSlide.Add(new TwoWayBinding(player.Character.AllowUncrouch, rollKickSlide.AllowUncrouch));rollKickSlide.Add(new TwoWayBinding(player.Character.Crouched, rollKickSlide.Crouched));rollKickSlide.Add(new TwoWayBinding(player.Character.EnableWalking, rollKickSlide.EnableWalking));rollKickSlide.Add(new TwoWayBinding(player.Character.LinearVelocity, rollKickSlide.LinearVelocity));rollKickSlide.Add(new TwoWayBinding(transform.Position, rollKickSlide.Position));rollKickSlide.Predictor = predictor;rollKickSlide.Bind(model);rollKickSlide.VoxelTools = voxelTools;rollKickSlide.Add(new CommandBinding(rollKickSlide.DeactivateWallRun, (Action)wallRun.Deactivate));rollKickSlide.Add(new CommandBinding(rollKickSlide.Footstep, footsteps.Footstep)); I ran into tons of problems. I created binding cycles that caused infinite loops. I found out that initialization order is often important, and initialization is a nightmare with data binding, with some properties getting initialized multiple times as bindings are added. When it came time to add animation, I found that data binding made it difficult and non-intuitive to animate between two states. And this isn't just me. Watch this Netflix talk which gushes about how great React is before explaining how they have to turn it off any time they run an animation. I too realized the power of turning a binding on or off, so I added a new field:class Binding{ public bool Enabled;} Unfortunately, this defeated the purpose of data binding. I wanted to get rid of UI state, and this code actually added some. How can I eliminate this state? I know! Data binding!class Binding{ public Property Enabled = new Property { Value = true };} Yes, I really did try this briefly. It was bindings all the way down. I soon realized how crazy it was. How can we improve on data binding? Try making your UI actually functional and stateless. dear imgui is a great example of this. Separate behavior and state as much as possible. Avoid techniques that make it easy to create state. It should be a pain for you to create state. # Conclusion There are many, many more embarrassing mistakes to discuss. I discovered another "creative" method to avoid globals. For some time I was obsessed with closures. I designed an "entity" "component" "system" that was anything but. I tried to multithread a voxel engine by sprinkling locks everywhere. Here's the takeaway: • Make decisions upfront instead of lazily leaving them to the computer. • Separate behavior and state. • Write pure functions. • Write the client code first. • Write boring code. That's my story. What horrors from your past are you willing to share? Thanks for sharing, it's good sometimes to hear some of the bad and not just the good. 
I think most programmers go through a similar journey, but it's not something we often get to hear about. Great article. I only fully agree with the first half though - please, no globals, and immediate-mode UIs need to die in a fire. ;) Yeah, I disagree with some of your conclusions. If you are going to remove the null checks, at least replace them with asserts. Handling exceptions and having good error messages is good; at the very least, you should give a good error message and then crash. Singletons are better than globals because at least they force you to logically split them out and divvy up code and responsibilities, and it's much easier to refactor and figure out who is touching what singleton. The lazy initialization does suck, but that's not a required feature of singletons. You could easily initialize all your singletons in the order you choose. (That said, at that point you ought to sit down and figure out who needs what and pass them in.) Though I still like lazy singletons for logging. The lazy initialization does suck, but that's not a required feature of singletons. I think we're in agreement; we just have different definitions of what a singleton is. To me, a singleton is defined as a function that lazily initializes a single global instance of an object. Without that lazy check, it's just... a regular global. I am totally cool with grouping global variables into a structure or object to "logically split them out and divvy up code and responsibilities". I do it all the time. I only take issue when you put a singleton in front of it. The singleton pattern just requires that only one instance of a class be possible. The lazy part is not required, just used a lot. It's a total nit, but important because people should be able to use the same terms and understand what they mean. (And apparently there is some magic in Java that gets around most of the perf impact of the lazy initialization: https://en.wikipedia.org/wiki/Initialization-on-demand_holder_idiom) etodd, you're an inspiration. I personally appreciated your journals/blogs back a few years ago and followed your development all the way through to Steam Greenlight. I look forward to your future works. I think that reflection, as long as it is done as a study and not as a way to dwell on regret, can be a very positive and motivating exercise. As for "write boring code": sometimes simple is good. :)
# A Is there a local interpretation of Reeh-Schlieder theorem? • Featured #### Demystifier 2018 Award Non-philosophically inclined experts in relativistic QFT often insist that QFT is a local theory. They are not impressed much by arguments that quantum theory is non-local because such arguments typically rest on philosophical notions such as ontology, reality, hidden variables, or the measurement problem. But how about the Reeh-Schlieder theorem? The Reeh-Schlieder theorem (recently explained in a relatively simple way by Witten in https://arxiv.org/abs/1803.04993 ) is an old theorem, well-known in axiomatic QFT, but relatively unknown in a wider QFT community. In somewhat over-simplified terms (see Sec. 2.5 of the paper above), the theorem states that acting with a suitable (not necessarily unitary) local operator (e.g. having a support only on Earth) on the vacuum, one can create an arbitrary state (e.g. a state describing a building on Mars). As all other theorems that potentially can be interpreted as signs of quantum non-locality, this theorem is also a consequence of quantum entanglement. However, unlike other theorems about quantum non-locality, this theorem seems not to contain any philosophical concepts or assumptions. The theorem is pure mathematical physics, without any philosophical excess baggage. What I would like to see is how do non-philosophically inclined experts in QFT, who insist that relativistic QFT is local, interpret Reeh-Schlieder theorem in local terms? How can it be compatible with locality? Last edited: Related Quantum Physics News on Phys.org #### Peter Morgan Gold Member One trouble with the Reeh-Schlieder theorem is that the Wightman axioms, say, that are required to prove it are not satisfied by interacting Lagrangian fields. Hegerfeldt proves something similar with much more restricted assumptions, almost only the spectrum condition (that the energy-momentum must be in or on the forward light-cone), which an interacting quantum field ought to satisfy, but it seems often to be argued that this is not an example of nonlocality, and perhaps the proof is not compelling because it can be dismissed on a quick reading as being about nonrelativistic QM. More constructively, the 2-point Wightman function (the non-time-ordered 2-point Vacuum Expectation Value) has to have the Källén-Lehmann form even for an interacting field (the scalar boson case, to keep at least some simplicity), so it's definitely non-zero at space-like separation (and so is the time-ordered VEV). Microcausality shows up in this context as the imaginary part of the 2-point VEV being zero at space-like separation, but the real part is always non-zero except on the light-cone, where it's undefined (there's a delta function on the light-cone, as well as inverse square scaling at small invariant distance from the light-cone). Even for Gibbs' states of the classical Klein-Gordon field there are nonlocal correlations, however, even though the advanced and retarded propagators of the classical KG dynamics are definitely zero at space-like separation, so we can't just point to nonlocal correlations and say, "look, nonlocality!" For the classical KG equation, it's not the advanced or retarded propagator that shows up when we compute the Gibbs' state, but (at space-like separation) we obtain the inverse fourier transform of $(\vec k\cdot\vec k+m^2)^{-1}$, corresponding to different boundary conditions. 
The Gibbs' state, it should also be noted, is either a consequence of an infinite-time convergence to equilibrium because of non-KG dynamical perturbations (one supposes), or else it's just a superdetermined-by-minimum-free-energy initial condition. What I think is curious about the quantized KG field, and might turn at least some heads that might not be turned by any of the above discussion, is that the real part of its 2-point Wightman function is the inverse fourier transform of $\left(\sqrt{\vec k\cdot\vec k+m^2}\right)^{-1}$, where the operator $\sqrt{-\vec\partial\cdot\vec\partial+m^2}$ is said to be "anti-local" by mathematicans (see my Physics Letters A 338 (2005) 8–12, arXiv:quant-ph/0411156, and, much more definitively, I.E. Segal, R.W. Goodman, "Anti-locality of certain Lorentz-invariant operators", J. Math. Mech. 14 (1965) 629. This last might possibly be the reference for some physicists to be confronted with, because its conclusions are quite similar to Hegerfeldt's conclusions.) At the end of the day, I think all the nonlocality can be attributed to boundary and initial conditions, which, not being dynamical, for some people makes it not nonlocality. After dark, however, this somewhat pushes us towards the introduction of some form of superdeterminism, so perhaps it's just that you take your choice of poison. #### A. Neumaier how do non-philosophically inclined experts in QFT, who insist that relativistic QFT is local, interpret Reeh-Schlieder theorem in local terms? How can it be compatible with locality? The theorem is essentially a consequence of analyticity, where knowledge of a function in some neighborhood implies knowing it everywhere. The theorem is compatible with locality since it is proved from the Wightman axioms, which are based on locality. As mentioned by Witten on p.12, field creation and annihilation operators are not unitary operators, hence not physically realizable in time. So there is no interpretation problem. #### Demystifier 2018 Award As mentioned by Witten on p.12, field creation and annihilation operators are not unitary operators, hence not physically realizable in time. So there is no interpretation problem. This indeed is a good argument that there is no physical non-locality. But since the creation and annihilation operators are well defined mathematically, one could still argue that there is something non-local about QFT in its mathematical formulation. #### A. Neumaier This indeed is a good argument that there is no physical non-locality. But since the creation and annihilation operators are well defined mathematically, one could still argue that there is something non-local about QFT in its mathematical formulation. Well, mathematically, all nonlocal theories ever discussed were created locally from some desk. But this has no philosophical consequences. #### Demystifier 2018 Award The theorem is compatible with locality since it is proved from the Wightman axioms, which are based on locality. I am not convinced that all axioms are based on locality. In particular, the axiom that there exists an invariant state (called vacuum) looks quite non-local to me. Intuitively, this axiom says that there exists a state that everywhere looks the same. #### Demystifier 2018 Award The theorem is essentially a consequence of analyticity, where knowledge of a function in some neighborhood implies knowing it everywhere. Isn't then analyticity itself a kind of mathematical non-locality? #### A. 
Neumaier the axiom that there exists an invariant state (called vacuum) looks quite non-local to me. Intuitively, this axiom says that there exists a state that everywhere looks the same. All states of any field theory are nonlocal, since they specify the field everywhere. But this is an irrelevant meaning of nonlocality. The latter means either nonlocal dynamics (not valid in relativistic QFT by definition) or nonlocal correlations (valid by direct construction). Locality of the dynamics and nonlocality of the correlations are fully compatible. Isn't then analyticity itself a kind of mathematical non-locality? Yes. But arbitrarily small deviation in the given neighborhood may have arbitrarily large consequences at any given other point, so this has no physical relevance. #### A. Neumaier am not convinced that all axioms are based on locality. Fort compatibility it is sufficient that the dynamical locality axiom is among the Wightman axioms, and that there are examples satisfying these axioms. #### Demystifier 2018 Award nonlocal correlations (valid by direct construction) What do you mean by "direct construction"? #### Demystifier 2018 Award Yes. But arbitrarily small deviation in the given neighborhood may have arbitrarily large consequences at any given other point, so this has no physical relevance. This sheds some light on the origin of Reeh-Schlieder theorem, but why does it not have physical relevance? By the way, this reminds me of the butterfly effect in chaos theory. Would you say that butterfly effect also does not have physical relevance? #### Demystifier 2018 Award All states of any field theory are nonlocal, since they specify the field everywhere. But this is an irrelevant meaning of nonlocality. The latter means either nonlocal dynamics (not valid in relativistic QFT by definition) or nonlocal correlations (valid by direct construction). Locality of the dynamics and nonlocality of the correlations are fully compatible. Whether or not nonlocality of correlations requires some kind of nonlocal dynamics (perhaps at the level of hidden variables) is a part of usual philosophical discussions about the meaning of Bell theorem, which I want to avoid in this thread. #### A. Neumaier What do you mean by "direct construction"? e.g., two entangled photons a la Bell. why does it not have physical relevance? because in observable physics, consequences must depend smoothly on causes. Whether or not nonlocality of correlations requires some kind of nonlocal dynamics (perhaps at the level of hidden variables) is a part of usual philosophical discussions It involves no philosophy, only shut-up-and-calculate, since free QED (which is dynamically local) suffices to do the argument in the traditional, purely mathematical way. #### Demystifier 2018 Award because in observable physics, consequences must depend smoothly on causes. Why? #### A. Neumaier Because one can prepare and measure only to finite precision. #### Peter Morgan Gold Member because in observable physics, consequences must depend smoothly on causes The Wightman axioms enforce analyticity — in other words significantly more than smoothness, which finite precision could never confirm or deny. How significant is it that thermal and other non-vacuum states require, so to speak, less analyticity? Also, smoothness wouldn't be expected of a classical ideal model, particularly in statistical mechanics, where discontinuity is commonplace. 
We might expect that smoothness would reappear if one modeled more accurately, but in practice we work with the discontinuous model because it is accurate enough. #### Peter Morgan Gold Member A separate aspect of nonlocality in QM/QFT is the commonplace use of the vacuum projection operator by practicing physicists. Whenever a physicist discusses a transition probability, meaning $\bigl|\langle\phi|\psi\rangle\bigr|^2=\langle\psi|\phi\rangle\langle\phi|\psi\rangle$, the measurement $|\phi\rangle\langle\phi|$ in the state $\hat A\mapsto\langle\psi|\hat A|\psi\rangle$, they are implicitly using a modulated form of the vacuum projection operator $|0\rangle\langle 0|$, thereby erasing all the careful construction of microcausal locality. $|\phi\rangle\langle\phi|$ does not commute with any local observable. This is such a commonplace that I doubt it can be eradicated in general use, but it seems too free and easy with nonlocality that the implicit use of the vacuum projection operator is a step in the most common computations that show that QM violates Bell-CHSH inequalities [of course one has constructions that violate Bell-CHSH inequalities that are entirely local, such as Landau's Phys Lett A 120, 54(1987), "On the violation of Bell's inequalities in quantum theory", so this is merely a criticism of using the vacuum projection operator when it does not have to be used.] #### samalkhaiat Non-philosophically inclined experts in relativistic QFT often insist that QFT is a local theory. They are not impressed much by arguments that quantum theory is non-local because such arguments typically rest on philosophical notions such as ontology, reality, hidden variables, or the measurement problem. But how about the Reeh-Schlieder theorem? The Reeh-Schlieder theorem (recently explained in a relatively simple way by Witten in https://arxiv.org/abs/1803.04993 ) is an old theorem, well-known in axiomatic QFT, but relatively unknown in a wider QFT community. In somewhat over-simplified terms (see Sec. 2.5 of the paper above), the theorem states that acting with a suitable (not necessarily unitary) local operator (e.g. having a support only on Earth) on the vacuum, one can create an arbitrary state (e.g. a state describing a building on Mars). As all other theorems that potentially can be interpreted as signs of quantum non-locality, this theorem is also a consequence of quantum entanglement. However, unlike other theorems about quantum non-locality, this theorem seems not to contain any philosophical concepts or assumptions. The theorem is pure mathematical physics, without any philosophical excess baggage. What I would like to see is how do non-philosophically inclined experts in QFT, who insist that relativistic QFT is local, interpret Reeh-Schlieder theorem in local terms? How can it be compatible with locality? I am having trouble with your notion of locality or the lack of it in QFT. Locality in QFT refers to all (localizable) quantities that can be measured in practice, i.e., the self-adjoint elements of the polynomial algebra $\mathcal{A}(\mathcal{O})$ generated by operators of the form $$\int d^{4}x_{1} \cdots d^{4}x_{n} \ \varphi_{k_{1}} (x_{1}) \cdots \varphi_{k_{n}}(x_{n}) \ f(x_{1} , \cdots , x_{n}) , \ \ n = 0 , 1 , 2 , \cdots$$ with $\mathcal{O}$ being a (bounded) open set in $\mathbb{R}^{(1,3)}$ and $f \in \mathcal{D} (\mathcal{O}^{n})$. 
When $\mathcal{O} \subset \mathbb{R}^{(1,3)}$ is a finite space-time region, the self-adjoint elements of $\mathcal{A}(\mathcal{O})$ are called local observables. When $f \in \mathcal{S} (\mathbb{R}^{4n})$, the resulting polynomial algebra is called the field algebra $\mathcal{A}$. In theories with positive-definite Hilbert space $\mathcal{H}$, one can show that the following statements are equivalent to one another: i) Uniqueness and cyclicity of the vacuum: There exists a unique vector $\Omega$ invariant under any translation $P_{\mu}\Omega = 0$, which is cyclic with respect to $\mathcal{A}$, i.e., $\mathcal{A}\Omega$ is dense in $\mathcal{H}$ with respect to an admissible topology $\tau$: $\overline{\mathcal{A}\Omega}^{\tau} = \mathcal{H}$. ii) Irreducibility of the field algebra $\mathcal{A}$. The R-S theorem states that for any open set $\mathcal{O}$ in space-time, the equality $$\overline{\mathcal{A}(\mathcal{O})\Omega}^{\tau} = \overline{\mathcal{A}\Omega}^{\tau} = \mathcal{H} ,$$ holds for any admissible topology $\tau$. In English, the theorem asserts that if the field algebra is irreducible (or cyclic with respect to the vacuum $\Omega$) then the set of states obtained by applying a polynomial algebra of fields (taken from arbitrarily small but open set in space-time) to $\Omega$ also spans the whole Hilbert space $\mathcal{H}$. In other words, the theory says that the physical states in $\mathcal{H}$ cannot be localized although the observables can. So, the non-local features are carried by the vacuum. Of course, many aspects of particle physics can be explained using the assumption that particle states are localized excitations of the vacuum, i.e., they arise by applying local field operator to the vacuum as per Wightman. Similarly, in the algebraic setting of Haag, Roberts and others, it is assumed that a particle state cannot be distinguished from the vacuum by measurements done in the space-like complement of sufficiently large but bounded regions of space-time. Particles carrying a non-gaugable-symmetry charge, such as iso-spin, strangeness, baryon number or lepton number, fit well into this scheme. But, we know this method does not work in gauge theories: For example in QED, Gauss’ law allows the charged states to be distinguished from the vacuum in the causal complement of any bounded region, because it is possible to evaluate the charge by measuring the flux through a very large sphere. Thus, in gauge field theories charged states are not localizable. #### A. Neumaier smoothness wouldn't be expected of a classical ideal model, particularly in statistical mechanics, where discontinuity is commonplace. We might expect that smoothness would reappear if one modeled more accurately, but in practice we work with the discontinuous model because it is accurate enough. In statistical mechanics, everything is analytic until one takes the thermodynamic limit (a conventional approximation), in which case discontinuities arise in some relationships, due to phase transitions. But the sensitivity in analytic continuation (and hence in the Reeh-Schlieder nonlocality) is very different - it is a generic ill-posedness, not just on a set of measure zero. #### Peter Morgan Gold Member the theorem asserts that if the field algebra is irreducible (or cyclic with respect to the vacuum $\Omega$) then the set of states obtained by applying a polynomial algebra of fields (taken from arbitrarily small but open set in space-time) to $\Omega$ also spans the whole Hilbert space $\mathcal{H}$. 
In other words, the theory says that the physical states in $\mathcal{H}$ cannot be localized although the observables can. So, the non-local features are carried by the vacuum. "the physical states in $\mathcal{H}$ cannot be localized although the observables can" is a nice accessible summary of the Reeh-Schlieder theorem, but I think the difficulty that Demystifier has focused on is a consequence of particle physicists not believing that interacting quantum fields in the Lagrangian formalism fall under the Wightman or Haag-Kastler axioms. It can't be the case that irreducibility (or cyclicity) is sufficient on its own, because one at least needs there to be a concept of space-time, which irreducibility does not speak to, but can any theory to which a physicist's interacting quantum field can be an asymptotic expansion —with all the mathematical troubles that different types of regularization and renormalization imply— be proved to satisfy the requirements for a derivation of the Reeh-Schlieder theorem? Because of Hegerfeldt, and also because of Arnold Neumaier's point above, The theorem is essentially a consequence of analyticity, it seems that a representation of the translation group and the spectrum condition alone might be sufficient axioms to prove that "the physical states in $\mathcal{H}$ cannot be localized although the observables can", independently of irreducibility, additivity, microcausality, representation of the Lorentz group, and primitive causality. [The difficulty with Hegerfeldt is that none of his many papers are definitively enough stated or proved. Have a look at his paper arXiv:quant-ph/9809030, for example, which I believe is his most recent. Because of the way Hegerfeldt does things, one hears it said that Hegerfeldt only applies to nonrelativistic QM, however it more seems that positive frequency is sufficient, the (stronger) spectrum condition is not required. The connection of positive frequency with analyticity through the Hilbert transform is relatively more elementary than working with the spectrum condition.] It seems that indeed physicists might agree that any theory they would be willing to accept would have to satisfy positive energy/the spectrum condition, because that is associated with stability. #### A. Neumaier The connection of positive frequency with analyticity through the Hilbert transform it is crucial for causality, bot in the nonrelativistic and the relativistic case. in the relativistic case, the spectral condition of Wightman is essential for Poincare invariance. there to be a concept of space-time, which irreducibility does not speak to But the Poincare group present produces flat space-time through the Fourier transform. #### Peter Morgan Gold Member But the Poincare group present produces flat space-time through the Fourier transform. Absolutely, but if the translation group is sufficient and the proof is independent of whether the Lorentz symmetry is present, then best not to include the Lorentz group. Irreducibility is a purely analytic property of the algebra of observables that doesn't imply any geometrical properties, not even the translation group. 
One source of wiggle room for physicists is that one cannot be absolutely certain of Poincaré covariance for interacting Lagrangian quantum fields because neither regularization nor renormalization is absolutely certain to preserve Lorentz covariance; the translation group is more certain (although even that might be denied by those who say that we can't have a conversation with anything less than full quantum gravity). Last edited: #### A. Neumaier neither regularization nor renormalization is absolutely certain to preserve Lorentz covariance With causal renormalization theory one can be absolutely certain. One cannot dispense with it in elementary particle physics. #### samalkhaiat it seems that a representation of the translation group and the spectrum condition alone might be sufficient axioms to prove that "the physical states in $\mathcal{H}$ cannot be localized although the observables can", independently of irreducibility, additivity, microcausality, representation of the Lorentz group, and primitive causality. For any $|\Phi \rangle \ , |\Psi \rangle \in \mathcal{A}| \Omega \rangle$, the spectrum postulate can be written (in a form which is suitable for Hilbert space with indefinite metric) as $$\int d^{4}a \ e^{-i p \cdot a} \ \langle \Phi | U(a) | \Psi \rangle = 0, \ \ \mbox{if} \ \ \ p \not\in \overline{\mathcal{V}}_{+} = \{ q \in \mathbb{R}^{4}; q_{0} \geq 0, \ q^{2} \geq 0 \},$$ where $U(a)$ is the $U(1 , a)$ element of the group of unitary operators $U(\Lambda , a)$ on $\mathcal{H}$, i.e., the groups homomorphism $U: \ ISO(1,3) \to U( \mathcal{H})$. Using translational invariance of the vacuum, we may define a “function” $W_{k_{1} \cdots k_{n}}(y_{1}, \cdots y_{n-1})$ by $$W_{k_{1} \cdots k_{n}}(x_{1}-x_{2}, \cdots , x_{n-1}-x_{n}) = \langle \Omega | \varphi_{k_{1}}(x_{1}) \cdots \varphi_{k_{n}}(x_{n})| \Omega \rangle ,$$ and rewrite the spectrum condition in the form $$\int d^{4}y_{1} \cdots d^{4}y_{n-1} \ e^{i \sum_{j = 1}^{n-1}q_{j}y_{j}} \ W_{k_{1} \cdots k_{n}} (y_{1} \cdots y_{n-1}) = 0, \ \ \mbox{if} \ \ q_{j} \not\in \overline{\mathcal{V}}_{+}. \ \ (1)$$ Next, we would like to continue $W_{k_{1}\cdots k_{n}}(y_{1}, \cdots , y_{n-1})$ analytically to the complex analytic function $W_{k_{1}\cdots k_{n}}(z_{1}, \cdots , z_{n-1})$ in the permuted extended tube. For this we need to combine the spectrum postulate (1) with the postulate of Poincare’ covariance: $$U^{\dagger}(\Lambda , a) \varphi_{a}(x) U(\Lambda , a) = D_{a}{}^{b}(\Lambda) \varphi_{b} \left( (\Lambda , a)^{-1}x \right) ,$$ and the postulate of microcausality (the local commutation relations): $$[\varphi_{i} (x) , \varphi_{j} (y)] = 0, \ \ \ \mbox{when} \ \ (x-y)^{2} < 0 .$$ The above together with the vacuum postulate allows us to prove the Reeh-Schlieder theorem. If $\mathcal{H}$ has a positive-definite inner product, then the vacuum postulate can be replaced by the irreducibility of the field algebra $\mathcal{A}$. The theorem can also be proved for Hilbert space with indefinite metric. In this case, however, the vacuum postulate is no longer equivalent to the irreducibility of $\mathcal{A}$. #### rubi The Reeh-Schlieder theorem says that you can reach all states by repeated application of local operators to the vacuum. However, it doesn't say anything about locality, because there is no physical process that is modeled by an application of a local operator to the state. All physical processes are modeled by unitary evolution or projection and both these operations respect locality. 
The Reeh-Schlieder theorem is just a mathematical fact about the cyclicity of the vacuum state with respect to local algebras.
# Radical polymerization

## Subject - Macromolecular Chemistry

Radical polymerizations are polymerizations in which polymeric radicals are the reactive components. The most important reaction steps of radical polymerization are shown below.

Tab. 1
Initiation: generation of the starting radical $R^\bullet$, then $R^\bullet + M \to RM^\bullet$
Propagation: $RM_n^\bullet + M \to RM_{n+1}^\bullet$
Termination: $RM_n^\bullet + RM_m^\bullet \to RM_{n+m}R$ or other inactive products
# Math Help - Real quick derivative answer and maybe explain?

1. ## Real quick derivative answer and maybe explain? Yesterday in my calc class I posed a derivative problem to my professor, who spent 40 minutes on the problem... burned the whole class and didn't come up with an answer, so I'm asking you if you know. y=sqrt(x+sqrt(x+sqrt(x))) written differently y=(x+(x+(x)^(1/2))^(1/2))^(1/2) Please find the derivative, or at least explain how, so I may attempt it and show my prof.

2. See attachment -- be kind to your professor; it's not a difficult problem, but it is not easy in front of a group of people.

3. Originally Posted by mib7289 Yesterday in my calc class I posed a derivative problem to my professor, who spent 40 minutes on the problem... burned the whole class and didn't come up with an answer, so I'm asking you if you know. y=sqrt(x+sqrt(x+sqrt(x))) written differently y=(x+(x+(x)^(1/2))^(1/2))^(1/2) Please find the derivative, or at least explain how, so I may attempt it and show my prof. $y = [x + (x + x^{\frac{1}{2}})^{\frac{1}{2}}]^{\frac{1}{2}}$ It's just the chain rule. It will get messy, but it's not a hard problem: $\frac{1}{2[x + (x + x^{\frac{1}{2}})^{\frac{1}{2}}]^{\frac{1}{2}}} \cdot 1 + \frac{1}{2(x+x^\frac{1}{2})^\frac{1}{2}} \cdot 1 +\frac{1}{2x^{\frac{1}{2}}}$ Pretty sure that's it. Edit: Originally Posted by Calculus26 See attachment-- Looks like I may have missed the 3rd term. But I don't think any of the substitution is necessary.

4. Shananay, you are correct -- I differentiated one term twice. I changed the attachment to reflect this.

5. Originally Posted by Calculus26 Shananay, you are correct -- I differentiated one term twice. I changed the attachment to reflect this. Okay good, I kept looking at it and was confused why the answers differed.
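For reference, carrying the chain rule all the way through the nested square roots gives

$\dfrac{d}{dx}\sqrt{x+\sqrt{x+\sqrt{x}}} \;=\; \dfrac{1}{2\sqrt{x+\sqrt{x+\sqrt{x}}}}\left(1+\dfrac{1}{2\sqrt{x+\sqrt{x}}}\left(1+\dfrac{1}{2\sqrt{x}}\right)\right),$

i.e. each layer contributes a nested factor of the form $1 + (\text{derivative of the inner root})$ rather than a separate added term, which is the missing piece the later replies allude to.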
## Ball generated property of direct sums of Banach spaces

22 Feb 2015 · Hardtke Jan-David

A Banach space $X$ is said to have the ball generated property (BGP) if every closed, bounded, convex subset of $X$ can be written as an intersection of finite unions of closed balls. In 2002 S. Basu proved that the BGP is stable under (infinite) $c_0$- and $\ell^p$-sums for $1<p<\infty$... We will show here that for any absolute, normalised norm $\|\cdot\|_E$ on $\mathbb{R}^2$ satisfying a certain smoothness condition the direct sum $X\oplus_E Y$ of two Banach spaces $X$ and $Y$ with respect to $\|\cdot\|_E$ enjoys the BGP whenever $X$ and $Y$ have the BGP.
# On the Local Lifting Properties of Operator Spaces

In this paper, we mainly study operator spaces which have the locally lifting property (LLP). The dual of any ternary ring of operators is shown to satisfy the strongly local reflexivity, and this is used to prove that strongly local reflexivity holds also for operator spaces which have the LLP. Several homological characterizations of the LLP and weak expectation property are given. We also prove that for any operator space $V$, $V^{**}$ has the LLP if and only if $V$ has the LLP and $V^{*}$ is exact.
How do we predict electrical properties of a material using scattering data and vice-versa? I know that the band gap is related to conductivity. What I'm wondering is what it is like for an experimentalist who is trying to figure out what an unknown material, a black box, is doing. The only tool the experimentalist has is a beam of light at various wave lengths and a photo-detector which he/she can place at various positions. From the scattering data, the experimentalist is tasked to predict what the electrical properties of the material are, i.e. current as a function of the history of the voltage. I say history because in the case of things like integrated circuits, the way it responds to voltages depends on the way that past voltages were applied. So that is my question, how can I predict the electrical properties of a material using scattering data? Equally interesting, how do we do the converse, predict the scattering data using the electrical properties? **EDIT:**People talk about band gaps and how they are related to conductivity. That you need to knock an electron into a conduction band. Is that how it works? Photo-diodes work like this(I think) and also the higher the temperature the more a semi-conductor will conduct; this is consistent with the energy level picture. It has been my understanding that this energy band is described by eigenstates of a Hamiltonian. Yet as some have pointed out, strictly speaking, the Hamiltonian can have any spectrum and the conductivity, where the frequency $\omega=0$ has nothing to do when $\omega \neq 0$. However, actual Hamiltonians can only have local interactions(I assume). So that constrains things a bit. So here is my question. How do energy levels of a material, as determined by scattering experiments, correspond to a band-structure/conductivity and vice-versa? • The simple answer is that you can't. The behavior of a material at dc bears absolutely no useful relationship to its behavior at 1e15Hz. Maybe you are trying to ask a different question, and I just don't understand you correctly? – CuriousOne Sep 15 '14 at 21:30 • Well, Kramers & Kronig have a little something to add to the discussion. If you know $\epsilon$ as a function of all $\omega > 0$ you can determine $\epsilon(\omega=0)$. But that is being a little picky... – Jon Custer Sep 15 '14 at 21:40 • @JonCuster: In practice Kramers-Kronig doesn't do anything for you, because you would have to know the whole continuum of frequencies to determine the value at a single frequency. It's a very interesting mathematical theorem, though. – CuriousOne Sep 15 '14 at 23:49 • I probably should write an answer, because I work on Monte Carlo simulation of electron scattering in matter, and we do use the optical properties of the materials to determine the characteristics of slow secondary electron generation. That's not necessarily the main effect relevant here, but it can be important for modeling the behavior of the (slow secondary) electron detectors. – Thomas Klimpel Sep 16 '14 at 9:25 • Thanks for the replies. They do not quite answer my question, though they are very informative. I have edited my post to be more clear. – sebastianspiegel Sep 18 '14 at 22:55
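For reference, the Kramers–Kronig relation mentioned in the comments is the dispersion relation that, for a causal linear response with $\epsilon(\omega) \to 1$ as $\omega \to \infty$, ties the real and imaginary parts of the permittivity together:

$$\operatorname{Re}\,\epsilon(\omega) \;=\; 1 + \frac{2}{\pi}\,\mathcal{P}\!\int_0^\infty \frac{\omega'\,\operatorname{Im}\,\epsilon(\omega')}{\omega'^2-\omega^2}\,d\omega' .$$

Setting $\omega = 0$ shows how the static (DC) permittivity is fixed by the absorption spectrum at all frequencies, which is the sense of Jon Custer's comment, and also why CuriousOne notes it is of limited practical use: you would need the whole continuum of frequencies to pin down a single value.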
# Simple Explanation of Apache Flume

Can anybody explain Apache Flume to me in plain language? I'd appreciate an explanation with a practical example instead of abstract theoretical definitions, so I can understand it better. What is it used for? At which stage of a Big Data analysis is it used? And what are the prerequisites for learning it? Please explain it as you would for a non-technical person.

What is it used for? Data ingestion into a distributed datastore (e.g. HDFS). See image (I did not make the image and am only including the image as a visual aid). There are other tools that will help you with the ingestion of data as well (Storm and Sqoop are mentioned).

At which stage of a Big Data analysis is it used? It is used for data ingestion into your distributed datastore (e.g. HDFS). So for example a webserver is writing logging information into /var/logs/webserver.log. Apache Flume can look at that file, grab what it needs out of it and send it to HDFS. Once the data gets put into your datastore you can then utilize other tools to analyze the imported data (e.g. Hive, Pig, MR, etc.).

And what are the prerequisites for learning it? Understanding how to write scripts, edit configuration settings, and find your way around Linux would be the absolute minimum to get started. This set of instructions is old, but a starting point would be to look at the Hortonworks tutorial on Flume. http://hortonworks.com/hadoop-tutorial/how-to-refine-and-visualize-server-log-data/ If you would like me to elaborate more I would be glad to, but I wanted to try and meet your requirement of a simple, short explanation.

• Thank you for your nice answer Kevin! I'll wait 1-2 more days for other answers and in case I do not get better ones I'll accept yours. Cheers :) – DanielWelke Jan 12 '16 at 9:52
• Nice explanation :) Is it conceptually similar to Spark Streaming? – Rami Jan 13 '16 at 20:12
• Not quite. Spark Streaming would be analogous to Apache Storm or Apache Flink. While there might be some overlap in the same concept of ingesting data into HDFS, Spark Streaming, Storm and Flink are all more related to streaming datasets (think Twitter streams). Flume/Sqoop would be more for "static" type datasets (e.g. extracting data from log files or SQL databases). This is a good article that discusses the performance comparison of SS, Flink, Storm. Might give you a better idea of possible utilities. – Kevin Vasko Jan 13 '16 at 20:31

What is Apache Flume?
• Apache Flume is a tool designed for streaming data ingestion into HDFS.
Objective: The main objective of Flume is to capture streaming data from various web servers and deliver it to HDFS.
Applications of Flume
• Flume is used by e-commerce companies to analyze customer behavior across different regions.
• It is used to feed the huge volume of log data generated by application servers into HDFS at high speed.
What are the prerequisites for learning it?
• Basics of Hadoop and Big Data are a must.
• Basics of Linux and scripting.
• The main thing is an interest in the technology.
## Files in this item FilesDescriptionFormat application/vnd.openxmlformats-officedocument.presentationml.presentation 891443.pptx (6MB) PresentationMicrosoft PowerPoint 2007 application/pdf 2429.pdf (14kB) AbstractPDF ## Description Title: CAN INTERNAL CONVERSION BE CONTROLLED BY MODE-SPECIFIC VIBRATIONAL EXCITATION IN POLYATOMIC MOLECULES Author(s): Bar, Ilana Contributor(s): Portnov, Alexander; Epshtein, Michael Subject(s): Mini-symposium: Multiple Potential Energy Surfaces Abstract: Nonadiabatic processes, dominated by dynamic passage of reactive fluxes through conical intersections (CIs) are considered to be appealing means for manipulating reaction paths. One approach that is considered to be effective in controlling the course of dissociation processes is the selective excitation of vibrational modes containing a considerable component of motion. Here, we have chosen to study the predissociation of the model test molecule, methylamine and its deuterated isotopologues, excited to well-characterized quantum states on the first excited electronic state, S$_{1}$, by following the N-H(D) bond fission dynamics through sensitive H(D) photofragment probing. The branching ratios between slow and fast H(D) photofragments, the internal energies of their counter radical photofragments and the anisotropy parameters for fast H photofragments, confirm correlated anomalies for predissociation initiated from specific rovibronic states, reflecting the existence of a dynamic resonance in each molecule. This resonance strongly depends on the energy of the initially excited rovibronic states, the evolving vibrational mode on the repulsive S$_{1}$ part during N-H(D) bond elongation, and the manipulated passage through the CI that leads to radicals excited with C-N-H(D) bending and preferential perpendicular bond breaking, relative to the photolyzing laser polarization, in molecules containing the NH$_{2}$ group. The indicated resonance plays an important role in the bifurcation dynamics at the CI and can be foreseen to exist in other photoinitiated processes and to control their outcome. Issue Date: 6/20/2017 Publisher: International Symposium on Molecular Spectroscopy Citation Info: APS Genre: CONFERENCE PAPER/PRESENTATION Type: Text Language: English URI: http://hdl.handle.net/2142/96835 DOI: 10.15278/isms.2017.TG05 Date Available in IDEALS: 2017-07-272018-01-29 
birthdeath {ape} R Documentation ## Estimation of Speciation and Extinction Rates With Birth-Death Models ### Description This function fits by maximum likelihood a birth-death model to the branching times computed from a phylogenetic tree using the method of Nee et al. (1994). ### Usage birthdeath(phy) ## S3 method for class 'birthdeath' print(x, ...) ### Arguments phy an object of class "phylo". x an object of class "birthdeath". ... further arguments passed to the print function. ### Details Nee et al. (1994) used a re-parametrization of the birth-death model studied by Kendall (1948) so that the likelihood has to be maximized over d/b and b - d, where b is the birth rate, and d the death rate. This is the approach used by the present function. This function computes the standard-errors of the estimated parameters using a normal approximation of the maximum likelihood estimates: this is likely to be inaccurate because of asymmetries of the likelihood function (Nee et al. 1995). In addition, 95% confidence intervals of both parameters are computed using profile likelihood: they are particularly useful if the estimate of d/b is at the boundary of the parameter space (i.e. 0, which is often the case). Note that the function does not check that the tree is effectively ultrametric, so if it is not, the returned result may not be meaningful. ### Value An object of class "birthdeath" which is a list with the following components: tree the name of the tree analysed. N the number of species. dev the deviance (= -2 log lik) at its minimum. para the estimated parameters. se the corresponding standard-errors. CI the 95% profile-likelihood confidence intervals. ### References Kendall, D. G. (1948) On the generalized “birth-and-death” process. Annals of Mathematical Statistics, 19, 1–15. Nee, S., May, R. M. and Harvey, P. H. (1994) The reconstructed evolutionary process. Philosophical Transactions of the Royal Society of London. Series B. Biological Sciences, 344, 305–311. Nee, S., Holmes, E. C., May, R. M. and Harvey, P. H. (1995) Estimating extinctions from molecular phylogenies. in Extinction Rates, eds. Lawton, J. H. and May, R. M., pp. 164–182, Oxford University Press. ### See Also branching.times, diversi.gof, diversi.time, ltt.plot, yule, bd.ext, yule.cov, bd.time
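As a worked note on the re-parametrization described under Details: writing $a = d/b$ and $r = b - d$ for the two quantities actually optimized, the birth and death rates are recovered as

$$b = \frac{r}{1-a}, \qquad d = \frac{a\,r}{1-a},$$

so an estimate of $d/b$ on the boundary $a = 0$ simply corresponds to a pure-birth (Yule) model with $d = 0$.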
# ProbabilisticLinearSolver¶ class probnum.linalg.solvers.ProbabilisticLinearSolver(policy, information_op, belief_update, stopping_criterion) Compose a custom probabilistic linear solver. Class implementing probabilistic linear solvers. Such (iterative) solvers infer solutions to problems of the form $Ax=b,$ where $$A \in \mathbb{R}^{n \times n}$$ and $$b \in \mathbb{R}^{n}$$. They return a probability measure which quantifies uncertainty in the output arising from finite computational resources or stochastic input. This class unifies and generalizes probabilistic linear solvers as described in the literature. 1 2 3 4 Parameters • policy – Policy returning actions taken by the solver. • information_op – Information operator defining how information about the linear system is obtained given an action. • belief_update – Belief update defining how to update the QoI beliefs given new observations. • stopping_criterion – Stopping criterion determining when a desired terminal condition is met. References 1 Hennig, P., Probabilistic Interpretation of Linear Solvers, SIAM Journal on Optimization, 2015, 25, 234-260 2 Cockayne, J. et al., A Bayesian Conjugate Gradient Method, Bayesian Analysis, 2019, 14, 937-1012 3 Bartels, S. et al., Probabilistic Linear Solvers: A Unifying View, Statistics and Computing, 2019 4 Wenger, J. and Hennig, P., Probabilistic Linear Solvers for Machine Learning, Advances in Neural Information Processing Systems (NeurIPS), 2020 problinsolve Solve linear systems in a Bayesian framework. bayescg Solve linear systems with prior information on the solution. Examples Define a linear system. >>> import numpy as np >>> from probnum.problems import LinearSystem >>> from probnum.problems.zoo.linalg import random_spd_matrix >>> rng = np.random.default_rng(42) >>> n = 100 >>> A = random_spd_matrix(rng=rng, dim=n) >>> b = rng.standard_normal(size=(n,)) >>> linsys = LinearSystem(A=A, b=b) Create a custom probabilistic linear solver from pre-defined components. >>> from probnum.linalg.solvers import ( ... ProbabilisticLinearSolver, ... beliefs, ... information_ops, ... policies, ... stopping_criteria, ... ) >>> pls = ProbabilisticLinearSolver( ... information_op=information_ops.ProjectedResidualInformationOp(), ... stopping_criterion=( ... stopping_criteria.MaxIterationsStoppingCriterion(100) ... | stopping_criteria.ResidualNormStoppingCriterion(atol=1e-5, rtol=1e-5) ... ), ... ) Define a prior over the solution. >>> from probnum import linops, randvars >>> prior = beliefs.LinearSystemBelief( ... x=randvars.Normal( ... mean=np.zeros((n,)), ... cov=np.eye(n), ... ), ... ) Solve the linear system using the custom solver. >>> belief, solver_state = pls.solve(prior=prior, problem=linsys) >>> np.linalg.norm(linsys.A @ belief.x.mean - linsys.b) / np.linalg.norm(linsys.b) 7.1886e-06 Methods Summary solve(prior, problem[, rng]) Solve the linear system. solve_iterator(prior, problem[, rng]) Generator implementing the solver iteration. Methods Documentation solve(prior, problem, rng=None)[source] Solve the linear system. Parameters Returns • belief – Posterior belief $$(\mathsf{x}, \mathsf{A}, \mathsf{H}, \mathsf{b})$$ over the solution $$x$$, the system matrix $$A$$, its (pseudo-)inverse $$H=A^\dagger$$ and the right hand side $$b$$. • solver_state – Final state of the solver. Return type solve_iterator(prior, problem, rng=None)[source] Generator implementing the solver iteration. 
This function allows stepping through the solver iteration one step at a time and exposes the internal solver state. Parameters Yields solver_state – State of the probabilistic linear solver. Return type Generator[LinearSolverState, None, None]
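A usage sketch for solve_iterator, reusing the pls, prior and linsys objects constructed in the example above. This is illustrative only: it assumes the yielded LinearSolverState exposes step and belief attributes (as suggested by the class documentation), which should be checked against the installed probnum version.

>>> for solver_state in pls.solve_iterator(prior=prior, problem=linsys):
...     # step and belief are assumed attributes of the yielded state
...     error = np.linalg.norm(linsys.A @ solver_state.belief.x.mean - linsys.b)
...     if solver_state.step >= 10 or error < 1e-5:
...         break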
# Set theory: a model of $\mathsf{ZFC}$ with neither $P$-points nor $Q$-points

A $P$-point is an ultrafilter $\mathscr{U}$ on $\omega$ such that for each function $f\colon \omega \to \omega$ there is $x \in \mathscr{U}$ such that the restriction $f|_x$ is constant or injective. A $Q$-point is an ultrafilter $\mathscr{U}$ on $\omega$ such that for each function $f\colon \omega \to \omega$ with the property that $f^{-1}(\{m\})$ is finite for each $m \in \omega$, there is $x \in \mathscr{U}$ such that the restriction $f|_x$ is injective. $P$-points need not exist, and $Q$-points need not exist.

Question. Is it possible that neither $P$-points nor $Q$-points exist?
Line and Surface Infinitesimals

Jhenrique

I think you know the definition of the infinitesimal line element: $$[ds]^2 = \begin{bmatrix} dx & dy & dz \end{bmatrix} \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{bmatrix}^2 \begin{bmatrix} dx\\ dy\\ dz\\ \end{bmatrix} = \begin{bmatrix} dr & d\theta & dz \end{bmatrix} \begin{bmatrix} 1 & 0 & 0\\ 0 & r & 0\\ 0 & 0 & 1\\ \end{bmatrix}^2 \begin{bmatrix} dr\\ d\theta\\ dz\\ \end{bmatrix} = \begin{bmatrix} d\rho & d\phi & d\theta \end{bmatrix} \begin{bmatrix} 1 & 0 & 0\\ 0 & \rho & 0\\ 0 & 0 & \rho\sin(\phi)\\ \end{bmatrix}^2 \begin{bmatrix} d\rho\\ d\phi\\ d\theta\\ \end{bmatrix}$$ From this, is it correct if I deduce the formula for the infinitesimal surface element like this? $$[d^2S]^2 = \begin{bmatrix} dydz & dxdz & dxdy \end{bmatrix} \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{bmatrix}^2 \begin{bmatrix} dydz\\ dxdz\\ dxdy\\ \end{bmatrix} = \begin{bmatrix} d\theta dz & drdz & drd\theta \end{bmatrix} \begin{bmatrix} r & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & r\\ \end{bmatrix}^2 \begin{bmatrix} d\theta dz\\ drdz\\ drd\theta\\ \end{bmatrix} = \begin{bmatrix} d\phi d\theta & d\rho d\theta & d\rho d\phi \end{bmatrix} \begin{bmatrix} \rho^2\sin(\phi) & 0 & 0\\ 0 & \rho\sin(\phi) & 0\\ 0 & 0 & \rho\\ \end{bmatrix}^2 \begin{bmatrix} d\phi d\theta\\ d\rho d\theta\\ d\rho d\phi\\ \end{bmatrix}$$ And one more question: is $dxdy$ equal to $d^2xy$?

ChrisVer
Gold Member

You can always try the "old way" of doing things... find the infinitesimal tangent vectors on your surface, and take the cross product.
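To spell out the "old way" mentioned in the reply: for a surface parametrized by $\mathbf{r}(u,v)$, the tangent vectors are $\partial\mathbf{r}/\partial u$ and $\partial\mathbf{r}/\partial v$, and the surface element is their cross product,

$$d\mathbf{S} = \frac{\partial \mathbf{r}}{\partial u}\times\frac{\partial \mathbf{r}}{\partial v}\,du\,dv, \qquad dS = \left\lVert \frac{\partial \mathbf{r}}{\partial u}\times\frac{\partial \mathbf{r}}{\partial v}\right\rVert\,du\,dv .$$

For example, on a sphere of radius $\rho$ parametrized by $(\phi,\theta)$ this gives $dS = \rho^{2}\sin(\phi)\,d\phi\,d\theta$, which matches the first diagonal entry of the spherical matrix in the question.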
Commun. Korean Math. Soc. 2019, Vol. 34, No. 4, pp. 1049–1388

- Computing fuzzy subgroups of some special cyclic groups, by Babington Makamba, Michael M. Munywoki (pp. 1049–1067)
- Strong $P$-cleanness of trivial Morita contexts, by Mete B. Calci, Sait Halicioglu, Abdullah Harmanci (pp. 1069–1078)
- Some branching formulas for Kac--Moody Lie algebras, by Kyu-Hwan Lee, Jerzy Weyman (pp. 1079–1098)
- Centralizing and commuting involution in rings with derivations, by Abdul Nadim Khan (pp. 1099–1104)
- Generalizations of number-theoretic sums, by Narakorn Rompurk Kanasri, Patchara Pornsurat, Yanapat Tongron (pp. 1105–1115)
- A combinatorial approach to asymptotic behavior of Kirillov model for $GL_2$, by Yusuf Danisman (pp. 1117–1133)
- Computation of the matrix of the Toeplitz operator on the Hardy space, by Young-Bok Chung (pp. 1135–1143)
- A note on the value distribution of differential polynomials, by Subhas S. Bhoosnurmath, Bikash Chakraborty, Hari M. Srivastava (pp. 1145–1155)
- A remark on convergence theory for iterative processes of Proinov contraction, by Ravindra K. Bisht (pp. 1157–1162)
- Convexity of integral operators generated by some new inequalities of hyper-Bessel functions, by Muhey U. Din (pp. 1163–1173)
- A $p$-deformed $q$-inverse pair and associated polynomials including Askey scheme, by Rajesh V. Savalia (pp. 1175–1199)
- Some fixed point results for TAC-Suzuki contractive mappings, by Akindele A. Mebawondu, Oluwatosin T. Mewomo (pp. 1201–1222)
- On complete monotonicity of linear combination of finite psi functions, by Bai-Ni Guo, Feng Qi (pp. 1223–1228)
- $k^{th}$-order essentially slant weighted Toeplitz operators, by Anuradha Gupta, Shivam Kumar Singh (pp. 1229–1243)
- On growth properties of transcendental meromorphic solutions of linear differential equations with entire coefficients of higher order, by Nityagopal Biswas, Sanjib Kumar Datta, Samten Tamang (pp. 1245–1259)
- Inequalities and complete monotonicity for the gamma and related functions, by Chao-Ping Chen, Junesang Choi (pp. 1261–1278)
- Some results in $\eta$-Ricci soliton and gradient $\rho$-Einstein soliton in a complete Riemannian manifold, by Chandan Kumar Mondal, Absos Ali Shaikh (pp. 1279–1287)
- Some results on almost Kenmotsu manifolds with generalized $(k,\mu)'$-nullity distribution, by Uday Chand De, Gopal Ghosh (pp. 1289–1301)
- Characterization of warped product submanifolds of Lorentzian concircular structure manifolds, by Shyamal Kumar Hui, Tanumoy Pal, Laurian Ioan Piscoran (pp. 1303–1313)
- Certain results on contact metric generalized $(\kappa,\mu)$-space forms, by Aruna Kumara Huchchappa, Devaraja Mallesha Naik, Venkatesha Venkatesha (pp. 1315–1328)
- A note on expansive $\mathbb{Z}^k$-action and generators, by Ekta Shah (pp. 1329–1334)
- The logarithmic Kumaraswamy family of distributions: properties and applications, by Zubair Ahmad (pp. 1335–1352)
- Modified Lagrange functional for solving elastic problem with a crack in continuum mechanics, by Robert V. Namm, Georgiy I. Tsoy, Gyungsoo Woo (pp. 1353–1364)
- Long-time behavior of solutions to a nonlocal quasilinear parabolic equation, by Le Thi Thuy, Le Tran Tinh (pp. 1365–1388)
lilypond-devel

From: dak
Subject: Re: Let \time in mid-measure work without warning in some cases (issue 143450043 by address@hidden)
Date: Tue, 23 Sep 2014 14:17:11 +0000

On 2014/09/23 01:11:51, Dan Eble wrote:

> This is a simpler (easier to comprehend) change than what I proposed in
> http://article.gmane.org/gmane.comp.gnu.lilypond.bugs/39790 . I rewrote the
> regression test "time-signature-midmeasure.ly" to focus on a specific aspect
> of this feature: suppress the warning, so the output is quite different.

I find this too hard to understand anyway and the decision taken too arbitrary (probably only matching one particular music style): if we do something like

\time 4/4 c4 \time 3/4

then the decision whether or not to start a new measure/warning will likely change if some grace note occurs anywhere. So I propose we junk the warning if we cannot place it accurately: people who want whole-measure warnings are supposed to use barchecks anyway.

So what else do we need to do? When doing

\time 4/4 c1 \time 3/4 ...

or

\time 3/4 c2. \time 4/4 ...

we clearly want the new meter to start on the bar and not, say, have \time 4/4 notice that it only used up 3 beats and still has one to extend. So I'd say that \time ... should probably do an implicit \partial 1*0, meaning that you need to override it _afterwards_ with a different \partial if you don't actually want it to start a new bar.

I think that should get rid of most problems, the main remaining problem being that if \time occurs in several places at the same point of time, one \partial (if one is desired) has to follow the last \time: any \partial executed before the final \time of a time step would be ignored.

I think "\time will start a new measure, follow it with \partial if you don't want that" is a clear rule and probably not all that different from what people would expect/write without reading the manual previously.

https://codereview.appspot.com/143450043/
# Chapter 2 Review Exercises ## Chapter 2 Review Exercises ### Solve Equations using the Subtraction and Addition Properties of Equality Verify a Solution of an Equation In the following exercises, determine whether each number is a solution to the equation. Exercise $$\PageIndex{1}$$ $$10 x-1=5 x ; x=\frac{1}{5}$$ Exercise $$\PageIndex{2}$$ $$w+2=\frac{5}{8} ; w=\frac{3}{8}$$ no Exercise $$\PageIndex{3}$$ $$-12 n+5=8 n ; n=-\frac{5}{4}$$ Exercise $$\PageIndex{4}$$ $$6 a-3=-7 a, a=\frac{3}{13}$$ yes Solve Equations using the Subtraction and Addition Properties of Equality In the following exercises, solve each equation using the Subtraction Property of Equality. Exercise $$\PageIndex{5}$$ $$x+7=19$$ Exercise $$\PageIndex{6}$$ $$y+2=-6$$ $$y=-8$$ Exercise $$\PageIndex{7}$$ $$a+\frac{1}{3}=\frac{5}{3}$$ Exercise $$\PageIndex{8}$$ $$n+3.6=5.1$$ $$n=1.5$$ In the following exercises, solve each equation using the Addition Property of Equality. Exercise $$\PageIndex{9}$$ $$u-7=10$$ Exercise $$\PageIndex{10}$$ $$x-9=-4$$ $$x=5$$ Exercise $$\PageIndex{11}$$ $$c-\frac{3}{11}=\frac{9}{11}$$ Exercise $$\PageIndex{12}$$ $$p-4.8=14$$ $$p=18.8$$ In the following exercises, solve each equation. Exercise $$\PageIndex{13}$$ $$n-12=32$$ Exercise $$\PageIndex{14}$$ $$y+16=-9$$ $$y=-25$$ Exercise $$\PageIndex{15}$$ $$f+\frac{2}{3}=4$$ Exercise $$\PageIndex{16}$$ $$d-3.9=8.2$$ $$d=12.1$$ Solve Equations That Require Simplification In the following exercises, solve each equation. Exercise $$\PageIndex{17}$$ $$y+8-15=-3$$ Exercise $$\PageIndex{18}$$ $$7 x+10-6 x+3=5$$ $$x=-8$$ Exercise $$\PageIndex{19}$$ $$6(n-1)-5 n=-14$$ Exercise $$\PageIndex{20}$$ $$8(3 p+5)-23(p-1)=35$$ $$p=-28$$ Translate to an Equation and Solve In the following exercises, translate each English sentence into an algebraic equation and then solve it. Exercise $$\PageIndex{21}$$ The sum of $$-6$$ and $$m$$ is 25 Exercise $$\PageIndex{22}$$ Four less than $$n$$ is 13 $$n-4=13 ; n=17$$ Translate and Solve Applications In the following exercises, translate into an algebraic equation and solve. Exercise $$\PageIndex{23}$$ Rochelle’s daughter is 11 years old. Her son is 3 years younger. How old is her son? Exercise $$\PageIndex{24}$$ Tan weighs 146 pounds. Minh weighs 15 pounds more than Tan. How much does Minh weigh? 161 pounds Exercise $$\PageIndex{25}$$ Peter paid $9.75 to go to the movies, which was$46.25 less than he paid to go to a concert. How much did he pay for the concert? Exercise $$\PageIndex{26}$$ Elissa earned $$\ 152.84$$ this week, which was $$\ 2 . .65$$ more than she earned last week. How much did she earn last week? $$\ 131.19$$ ### Solve Equations using the Division and Multiplication Properties of Equality Solve Equations Using the Division and Multiplication Properties of Equality In the following exercises, solve each equation using the division and multiplication properties of equality and check the solution. 
Exercise $$\PageIndex{27}$$ $$8 x=72$$ Exercise $$\PageIndex{28}$$ $$13 a=-65$$ $$a=-5$$ Exercise $$\PageIndex{29}$$ $$0.25 p=5.25$$ Exercise $$\PageIndex{30}$$ $$-y=4$$ $$y=-4$$ Exercise $$\PageIndex{31}$$ $$\frac{n}{6}=18$$ Exercise $$\PageIndex{32}$$ $$\frac{y}{-10}=30$$ $$y=-300$$ Exercise $$\PageIndex{33}$$ $$36=\frac{3}{4} x$$ Exercise $$\PageIndex{34}$$ $$\frac{5}{8} u=\frac{15}{16}$$ $$u=\frac{3}{2}$$ Exercise $$\PageIndex{35}$$ $$-18 m=-72$$ Exercise $$\PageIndex{36}$$ $$\frac{c}{9}=36$$ $$c=324$$ Exercise $$\PageIndex{37}$$ $$0.45 x=6.75$$ Exercise $$\PageIndex{38}$$ $$\frac{11}{12}=\frac{2}{3} y$$ $$y=\frac{11}{8}$$ Solve Equations That Require Simplification In the following exercises, solve each equation requiring simplification. Exercise $$\PageIndex{39}$$ $$5 r-3 r+9 r=35-2$$ Exercise $$\PageIndex{40}$$ $$24 x+8 x-11 x=-7-14$$ $$x=-1$$ Exercise $$\PageIndex{41}$$ $$\frac{11}{12} n-\frac{5}{6} n=9-5$$ Exercise $$\PageIndex{42}$$ $$-9(d-2)-15=-24$$ $$d=3$$ Translate to an Equation and Solve In the following exercises, translate to an equation and then solve. Exercise $$\PageIndex{43}$$ 143 is the product of $$-11$$ and $$y$$ Exercise $$\PageIndex{44}$$ The quotient of $$b$$ and and 9 is $$-27$$ $$\frac{b}{9}=-27 ; b=-243$$ Exercise $$\PageIndex{45}$$ The sum of q and one-fourth is one. Exercise $$\PageIndex{46}$$ The difference of s and one-twelfth is one fourth. $$s-\frac{1}{12}=\frac{1}{4} ; s=\frac{1}{3}$$ Translate and Solve Applications In the following exercises, translate into an equation and solve. Exercise $$\PageIndex{47}$$ Ray paid $21 for 12 tickets at the county fair. What was the price of each ticket? Exercise $$\PageIndex{48}$$ Janet gets paid $$\ 24$$ per hour. She heard that this is $$\frac{3}{4}$$ of what Adam is paid. How much is Adam paid per hour? Answer$32 ### Solve Equations with Variables and Constants on Both Sides Solve an Equation with Constants on Both Sides In the following exercises, solve the following equations with constants on both sides. Exercise $$\PageIndex{49}$$ $$8 p+7=47$$ Exercise $$\PageIndex{50}$$ $$10 w-5=65$$ $$w=7$$ Exercise $$\PageIndex{51}$$ $$3 x+19=-47$$ Exercise $$\PageIndex{52}$$ $$32=-4-9 n$$ $$n=-4$$ Solve an Equation with Variables on Both Sides In the following exercises, solve the following equations with variables on both sides. Exercise $$\PageIndex{53}$$ $$7 y=6 y-13$$ Exercise $$\PageIndex{54}$$ $$5 a+21=2 a$$ $$a=-7$$ Exercise $$\PageIndex{55}$$ $$k=-6 k-35$$ Exercise $$\PageIndex{56}$$ $$4 x-\frac{3}{8}=3 x$$ $$x=\frac{3}{8}$$ Solve an Equation with Variables and Constants on Both Sides In the following exercises, solve the following equations with variables and constants on both sides. Exercise $$\PageIndex{57}$$ $$12 x-9=3 x+45$$ Exercise $$\PageIndex{58}$$ $$5 n-20=-7 n-80$$ $$n=-5$$ Exercise $$\PageIndex{59}$$ $$4 u+16=-19-u$$ Exercise $$\PageIndex{60}$$ $$\frac{5}{8} c-4=\frac{3}{8} c+4$$ $$c=32$$ ### Use a General Strategy for Solving Linear Equations Solve Equations Using the General Strategy for Solving Linear Equations In the following exercises, solve each linear equation. 
Exercise $$\PageIndex{61}$$ $$6(x+6)=24$$ Exercise $$\PageIndex{62}$$ $$9(2 p-5)=72$$ $$p=\frac{13}{2}$$ Exercise $$\PageIndex{63}$$ $$-(s+4)=18$$ Exercise $$\PageIndex{64}$$ $$8+3(n-9)=17$$ $$n=12$$ Exercise $$\PageIndex{65}$$ $$23-3(y-7)=8$$ Exercise $$\PageIndex{66}$$ $$\frac{1}{3}(6 m+21)=m-7$$ $$m=-14$$ Exercise $$\PageIndex{67}$$ $$4(3.5 y+0.25)=365$$ Exercise $$\PageIndex{68}$$ $$0.25(q-8)=0.1(q+7)$$ $$q=18$$ Exercise $$\PageIndex{69}$$ $$8(r-2)=6(r+10)$$ Exercise $$\PageIndex{70}$$ $$\begin{array}{l}{5+7(2-5 x)=2(9 x+1)} \\ {-(13 x-57)}\end{array}$$ $$x=-1$$ Exercise $$\PageIndex{71}$$ $$\begin{array}{l}{(9 n+5)-(3 n-7)} \\ {=20-(4 n-2)}\end{array}$$ Exercise $$\PageIndex{72}$$ $$\begin{array}{l}{2[-16+5(8 k-6)]} \\ {=8(3-4 k)-32}\end{array}$$ $$k=\frac{3}{4}$$ Classify Equations In the following exercises, classify each equation as a conditional equation, an identity, or a contradiction and then state the solution. Exercise $$\PageIndex{73}$$ $$\begin{array}{l}{17 y-3(4-2 y)=11(y-1)} \\ {+12 y-1}\end{array}$$ Exercise $$\PageIndex{74}$$ $$\begin{array}{l}{9 u+32=15(u-4)} \\ {-3(2 u+21)}\end{array}$$ Exercise $$\PageIndex{75}$$ $$-8(7 m+4)=-6(8 m+9)$$ Exercise $$\PageIndex{76}$$ $$\begin{array}{l}{21(c-1)-19(c+1)} \\ {=2(c-20)}\end{array}$$ identity; all real numbers ### Solve Equations with Fractions and Decimals Solve Equations with Fraction Coefficients In the following exercises, solve each equation with fraction coefficients. Exercise $$\PageIndex{77}$$ $$\frac{2}{5} n-\frac{1}{10}=\frac{7}{10}$$ Exercise $$\PageIndex{78}$$ $$\frac{1}{3} x+\frac{1}{5} x=8$$ $$x=15$$ Exercise $$\PageIndex{79}$$ $$\frac{3}{4} a-\frac{1}{3}=\frac{1}{2} a-\frac{5}{6}$$ Exercise $$\PageIndex{80}$$ $$\frac{1}{2}(k-3)=\frac{1}{3}(k+16)$$ $$k=41$$ Exercise $$\PageIndex{81}$$ $$\frac{3 x-2}{5}=\frac{3 x+4}{8}$$ Exercise $$\PageIndex{82}$$ $$\frac{5 y-1}{3}+4=\frac{-8 y+4}{6}$$ $$y=-1$$ Solve Equations with Decimal Coefficients In the following exercises, solve each equation with decimal coefficients. Exercise $$\PageIndex{83}$$ $$0.8 x-0.3=0.7 x+0.2$$ Exercise $$\PageIndex{84}$$ $$0.36 u+2.55=0.41 u+6.8$$ $$u=-85$$ Exercise $$\PageIndex{85}$$ $$0.6 p-1.9=0.78 p+1.7$$ Exercise $$\PageIndex{86}$$ $$0.6 p-1.9=0.78 p+1.7$$ $$d=-20$$ ### Solve a Formula for a Specific Variable Use the Distance, Rate, and Time Formula In the following exercises, solve. Exercise $$\PageIndex{87}$$ Natalie drove for 7$$\frac{1}{2}$$ hours at 60 miles per hour. How much distance did she travel? Exercise $$\PageIndex{88}$$ Mallory is taking the bus from St. Louis to Chicago. The distance is 300 miles and the bus travels at a steady rate of 60 miles per hour. How long will the bus ride be? 5 hours Exercise $$\PageIndex{89}$$ Aaron’s friend drove him from Buffalo to Cleveland. The distance is 187 miles and the trip took 2.75 hours. How fast was Aaron’s friend driving? Exercise $$\PageIndex{90}$$ Link rode his bike at a steady rate of 15 miles per hour for 2$$\frac{1}{2}$$ hours. How much distance did he travel? 37.5 miles Solve a Formula for a Specific Variable In the following exercises, solve. Exercise $$\PageIndex{91}$$ Use the formula. d=rt to solve for t 1. when d=510 and r=60 2. in general Exercise $$\PageIndex{92}$$ Use the formula. d=rt to solve for r 1. when when d=451 and t=5.5 2. in general 1. r=82mph 2. $$r=\frac{D}{t}$$ Exercise $$\PageIndex{93}$$ Use the formula $$A=\frac{1}{2} b h$$ to solve for b 1. when A=390 and h=26 2. in general Exercise $$\PageIndex{94}$$ Use the formula $$A=\frac{1}{2} b h$$ to solve for b 1. 
when A=153 and b=18 2. in general 1. $$h=17$$ 2. $$h=\frac{2 A}{b}$$ Exercise $$\PageIndex{95}$$ Use the formula I=Prt to solve for the principal, P for 1. I=$2,501,r=4.1%, t=5 years 2. in general Exercise $$\PageIndex{96}$$ Solve the formula 4x+3y=6 for y 1. when x=−2 2. in general Answer ⓐ $$y=\frac{14}{3}$$ ⓑ $$y=\frac{6-4 x}{3}$$ Exercise $$\PageIndex{97}$$ Solve $$180=a+b+c$$ for $$c$$ Exercise $$\PageIndex{98}$$ Solve the formula $$V=L W H$$ for $$H$$ Answer $$H=\frac{V}{L W}$$ ### Solve Linear Inequalities Graph Inequalities on the Number Line In the following exercises, graph each inequality on the number line. Exercise $$\PageIndex{99}$$ 1. $$x\leq 4$$ 2. x>−2 3. x<1 Exercise $$\PageIndex{100}$$ 1. x>0 2. x<−3 3. $$x\geq −1$$ Answer In the following exercises, graph each inequality on the number line and write in interval notation. Exercise $$\PageIndex{101}$$ 1. $$x<-1$$ 2. $$x \geq-2.5$$ 3. $$x \leq \frac{5}{4}$$ Exercise $$\PageIndex{102}$$ 1. $$x>2$$ 2. $$x \leq-1.5$$ 3. $$x \geq \frac{5}{3}$$ Answer Solve Inequalities using the Subtraction and Addition Properties of Inequality In the following exercises, solve each inequality, graph the solution on the number line, and write the solution in interval notation. Exercise $$\PageIndex{103}$$ $$n-12 \leq 23$$ Exercise $$\PageIndex{104}$$ $$m+14 \leq 56$$ Answer Exercise $$\PageIndex{105}$$ $$a+\frac{2}{3} \geq \frac{7}{12}$$ Exercise $$\PageIndex{106}$$ $$b-\frac{7}{8} \geq-\frac{1}{2}$$ Answer Solve Inequalities using the Division and Multiplication Properties of Inequality In the following exercises, solve each inequality, graph the solution on the number line, and write the solution in interval notation. Exercise $$\PageIndex{107}$$ $$9 x>54$$ Exercise $$\PageIndex{108}$$ $$-12 d \leq 108$$ Answer Exercise $$\PageIndex{109}$$ $$\frac{5}{2} j<-60$$ Exercise $$\PageIndex{110}$$ $$\frac{q}{-2} \geq-24$$ Answer Solve Inequalities That Require Simplification In the following exercises, solve each inequality, graph the solution on the number line, and write the solution in interval notation. Exercise $$\PageIndex{111}$$ $$6 p>15 p-30$$ Exercise $$\PageIndex{112}$$ $$9 h-7(h-1) \leq 4 h-23$$ Answer Exercise $$\PageIndex{113}$$ $$5 n-15(4-n)<10(n-6)+10 n$$ Exercise $$\PageIndex{114}$$ $$\frac{3}{8} a-\frac{1}{12} a>\frac{5}{12} a+\frac{3}{4}$$ Answer Translate to an Inequality and Solve In the following exercises, translate and solve. Then write the solution in interval notation and graph on the number line. Exercise $$\PageIndex{115}$$ Five more than z is at most 19. Exercise $$\PageIndex{116}$$ Three less than c is at least 360. Answer Exercise $$\PageIndex{117}$$ Nine times n exceeds 42. Exercise $$\PageIndex{118}$$ Negative two times a is no more than 8. Answer ### Everyday Math Exercise $$\PageIndex{119}$$ Describe how you have used two topics from this chapter in your life outside of your math class during the past month. ## Chapter 2 Practice Test Exercise $$\PageIndex{1}$$ Determine whether each number is a solution to the equation $$6 x-3=x+20$$ 1. 5 2. $$\frac{23}{5}$$ Answer 1. no 2. yes In the following exercises, solve each equation. 
Exercise $$\PageIndex{2}$$ $$n-\frac{2}{3}=\frac{1}{4}$$ Exercise $$\PageIndex{3}$$ $$\frac{9}{2} c=144$$ Answer c=32 Exercise $$\PageIndex{4}$$ $$4 y-8=16$$ Exercise $$\PageIndex{5}$$ $$-8 x-15+9 x-1=-21$$ Answer $$x=-5$$ Exercise $$\PageIndex{6}$$ $$-15 a=120$$ Exercise $$\PageIndex{7}$$ $$\frac{2}{3} x=6$$ Answer $$x=9$$ Exercise $$\PageIndex{8}$$ $$x-3.8=8.2$$ Exercise $$\PageIndex{9}$$ $$10 y=-5 y-60$$ Answer $$y=-4$$ Exercise $$\PageIndex{10}$$ $$8 n-2=6 n-12$$ Exercise $$\PageIndex{11}$$ $$9 m-2-4 m-m=42-8$$ Answer $$m=9$$ Exercise $$\PageIndex{12}$$ $$-5(2 x-1)=45$$ Exercise $$\PageIndex{13}$$ $$-(d-9)=23$$ Answer $$d=-14$$ Exercise $$\PageIndex{14}$$ $$\frac{1}{4}(12 m-28)=6-2(3 m-1)$$ Exercise $$\PageIndex{15}$$ $$2(6 x-5)-8=-22$$ Answer $$x=-\frac{1}{3}$$ Exercise $$\PageIndex{16}$$ $$8(3 a-5)-7(4 a-3)=20-3 a$$ Exercise $$\PageIndex{17}$$ $$\frac{1}{4} p-\frac{1}{3}=\frac{1}{2}$$ Answer $$p=\frac{10}{3}$$ Exercise $$\PageIndex{18}$$ $$0.1 d+0.25(d+8)=4.1$$ Exercise $$\PageIndex{19}$$ $$14 n-3(4 n+5)=-9+2(n-8)$$ Answer contradiction; no solution Exercise $$\PageIndex{20}$$ $$9(3 u-2)-4[6-8(u-1)]=3(u-2)$$ Exercise $$\PageIndex{21}$$ Solve the formula x−2y=5 for y 1. when x=−3 2. in general Answer 1. y=4 2. $$y=\frac{5-x}{2}$$ In the following exercises, graph on the number line and write in interval notation. Exercise $$\PageIndex{22}$$ $$x \geq-3.5$$ Exercise $$\PageIndex{23}$$ $$x<\frac{11}{4}$$ Answer In the following exercises,, solve each inequality, graph the solution on the number line, and write the solution in interval notation. Exercise $$\PageIndex{24}$$ $$8 k \geq 5 k-120$$ Exercise $$\PageIndex{25}$$ $$3 c-10(c-2)<5 c+16$$ Answer In the following exercises, translate to an equation or inequality and solve. Exercise $$\PageIndex{26}$$ 4 less than twice x is 16. Exercise $$\PageIndex{27}$$ Fifteen more than n is at least 48. Answer $$n+15 \geq 48 ; n \geq 33$$ Exercise $$\PageIndex{28}$$ Samuel paid$25.82 for gas this week, which was \$3.47 less than he paid last week. How much had he paid last week? Exercise $$\PageIndex{29}$$ Jenna bought a coat on sale for $$\ 120,$$ which was $$\frac{2}{3}$$ of the original price. What was the original price of the coat? $$120=\frac{2}{3} p ;$$ The original price was $$\ 180$$ Exercise $$\PageIndex{30}$$ Sean took the bus from Seattle to Boise, a distance of 506 miles. If the trip took 7$$\frac{2}{3}$$ hours, what was the speed of the bus? ## Review for 2.7 Ratio and Proportions and Similar Triangles Solve Proportions In the following exercises, solve. Exercise $$\PageIndex{74}$$ $$\dfrac{x}{4}=\dfrac{3}{5}$$ $$\dfrac{12}{5}$$ Exercise $$\PageIndex{75}$$ $$\dfrac{3}{y}=\dfrac{9}{5}$$ Exercise $$\PageIndex{76}$$ $$\dfrac{s}{s+20}=\dfrac{3}{7}$$ $$15$$ Exercise $$\PageIndex{77}$$ $$\dfrac{t−3}{5}=\dfrac{t+2}{9}$$ ​​​​​​​In the following exercises, solve using proportions. Exercise $$\PageIndex{78}$$ Rachael had a $$21$$ ounce strawberry shake that has $$739$$ calories. How many calories are there in a $$32$$ ounce shake? $$1161$$ calories Exercise $$\PageIndex{79}$$ Leo went to Mexico over Christmas break and changed $$525$$ dollars into Mexican pesos. At that time, the exchange rate had $$1$$ US is equal to $$16.25$$ Mexican pesos. How many Mexican pesos did he get for his trip? ​​​​​​​Solve Similar Figure Applications In the following exercises, solve. Exercise $$\PageIndex{80}$$ $$∆ABC$$ is similar to $$∆XYZ$$. The lengths of two sides of each triangle are given in the figure. Find the lengths of the third sides. 
$$b=9$$; $$x=2\dfrac{1}{3}$$ Exercise $$\PageIndex{81}$$ On a map of Europe, Paris, Rome, and Vienna form a triangle whose sides are shown in the figure below. If the actual distance from Rome to Vienna is $$700$$ miles, find the distance from 1. a. Paris to Rome 2. b. Paris to Vienna Exercise $$\PageIndex{82}$$ Tony is $$5.75$$ feet tall. Late one afternoon, his shadow was $$8$$ feet long. At the same time, the shadow of a nearby tree was $$32$$ feet long. Find the height of the tree. $$23$$ feet Exercise $$\PageIndex{83}$$ The height of a lighthouse in Pensacola, Florida is $$150$$ feet. Standing next to the statue, $$5.5$$ foot tall Natalie cast a $$1.1$$ foot shadow How long would the shadow of the lighthouse be? ## Review for 2.9 Compound Inequalities Solve Compound Inequalities with “and” In each of the following exercises, solve each inequality, graph the solution, and write the solution in interval notation. 98. $$x\leq 5$$ and $$x>−3$$ 99. $$4x−2\leq 4$$ and $$7x−1>−8$$ 100. $$5(3x−2)\leq 5$$ and $$4(x+2)<3$$ 101. $$34(x−8)\leq 3$$ and $$15(x−5)\leq 3$$ 102. $$34x−5\geq −2$$ and $$−3(x+1)\geq 6$$ 103. $$−5\leq 4x−1<7$$ Solve Compound Inequalities with “or” In the following exercises, solve each inequality, graph the solution on the number line, and write the solution in interval notation. 104. $$5−2x\leq −1$$ or $$6+3x\leq 4$$ 105. $$3(2x−3)<−5$$ or $$4x−1>3$$ 106. $$34x−2>4$$ or $$4(2−x)>0$$ 107. $$2(x+3)\geq 0$$ or $$3(x+4)\leq 6$$ 108. $$12x−3\leq 4$$ or $$13(x−6)\geq −2$$ Solve Applications with Compound Inequalities In the following exercises, solve. 109. Liam is playing a number game with his sister Audry. Liam is thinking of a number and wants Audry to guess it. Five more than three times her number is between 2 and 32. Write a compound inequality that shows the range of numbers that Liam might be thinking of. 110. Elouise is creating a rectangular garden in her back yard. The length of the garden is 12 feet. The perimeter of the garden must be at least 36 feet and no more than 48 feet. Use a compound inequality to find the range of values for the width of the garden. $$6\leq w\leq 12$$
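As a worked check of the last answer above, the garden problem reduces to a compound inequality in the width $$w$$: with a length of 12 feet and a perimeter between 36 and 48 feet,

$$36 \leq 2(12)+2w \leq 48 \quad\Longrightarrow\quad 12 \leq 2w \leq 24 \quad\Longrightarrow\quad 6 \leq w \leq 12.$$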
# Where is XAMPP's Shell?

I am using the latest version of XAMPP with XAMPP control panel v2.5 (9 May 2007). I want to access the command line to run php -q htdocs\path\to\file.php.

Problem: On my XAMPP control panel, I do not see the Shell button that brings up the command line interface. How can I access the shell, or is there another way to run the PHP file?

(Screenshots of my XAMPP Control Panel and of what I am finding omitted.)

- Why don't you open the console, change the directory to C:\xampp\php and execute php.exe? – ComFreek Nov 18 '11 at 17:46

If you have php.exe's directory in your PATH environment variable, you can simply open a command line window (Start > Run > cmd), go to your PHP script directory and launch the script with php yourscript.php.
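For example, assuming the default XAMPP install location of C:\xampp (adjust the paths if your installation differs), the script from the question can be run from a cmd window like this:

REM Run the script with the full path to php.exe
C:\xampp\php\php.exe -q C:\xampp\htdocs\path\to\file.php

REM Or, after adding C:\xampp\php to your PATH, from the C:\xampp folder:
php -q htdocs\path\to\file.php

The -q switch (quiet mode, suppress HTTP header output) is mainly relevant for the CGI binary; the CLI binary does not print headers in the first place.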
Volume 291 - 9th International Workshop on the CKM Unitarity Triangle (CKM2016) - Parallel WG5

Current challenges and future prospects for $\gamma$ from $B\to D hh^\prime$ decays

T. Gershon* on behalf of the LHCb collaboration (*corresponding author)

Pre-published on: June 05, 2017. Published on: June 13, 2017.

Abstract: Decays of the type $B\to D hh^\prime$, where a $b$ hadron decays to a neutral charm meson that can be an admixture of $D^0$ and $\overline{D}{}^0$ states together with two light particles that are typically a kaon and a pion, have demonstrated potential to enable precise determinations of the angle $\gamma$ of the CKM Unitarity Triangle. The current status and future prospects of these measurements are reviewed.

DOI: https://doi.org/10.22323/1.291.0115
# What are the differences between a slot and a slat? If you look at image 5-18 in the PHAK Chapter 5 about leading edge lift devices you see this: However the paragraph preceding it states High-lift devices also can be applied to the leading edge of the airfoil. The most common types are fixed slots, movable slats, leading edge flaps, and cuffs. So is this a typo? Should the image say 'movable slat' instead of 'movable slot'? This leads into my main question which is, what are the differences between a slat and a slot? If I were to get asked this by an examiner what would a good response be? I guess it should be movable slat. A leading edge slot is basically a spanwise opening in the wing. Slats are aerodynamic surfaces in the leading edge, which when deployed, allows the wing to operate at higher angle of attack. When deployed, the slat opens up a slot between itself and the wing. Image from simhq.com In some aircraft, the slats are fixed, which opens up a slot between the wing and the slat. In this case, the terms slot and slat are used interchangeably. "Leading edge slot" by Sanchom - Own work. Licensed under CC BY 3.0 via Commons. A number of airliners use movable slats, in which case, the system is called slat, rather than slot. "Voilure A319" by Nicourse - Own work. Licensed under CC BY-SA 3.0 via Commons. In short, the system is pretty much the same, but is (usually) called slat in case of movable one and slot in case of fixed one. • Would it be fair to say that a slat is a movable device that creates or closes off a slot in the leading edge of the wing? It seems to me that fixed slot is simply a permanent, spanwise hole in the leading edge of the wing. Dec 29, 2015 at 16:38 • @FreeMan Yes. It would be fair. This was the meaning in which the term slot was originally used. Dec 29, 2015 at 16:48 • Are a leading edge cuff and a droop just two names for the same thing? Dec 29, 2015 at 20:15 • @TomMcW The relationship is kinda similar. The droop is usually a deployable LE device. LE cuff is a fixed LE droop and is used as a general term, which probably got its name from the shape of the sharp cut off point. NASA did a lot of research into this for spin resistance and called it drooped LE, not cuff. Dec 30, 2015 at 1:11
# From G. H. Darwin   [6 or 7 August 1874]

The transcript of this letter is not yet available online.

## Summary

Urges CD not to break with Murray even if he does not force the editor [of Q. Rev.] to insert GHD's letter [in response to Mivart's attack]. Murray may have a rule not to meddle with editor.

## Letter details

Letter no. DCP-LETT-9592
From: George Howard Darwin
To: Charles Robert Darwin
Sent from: unstated
Source of text: DAR 210.2: 40
Physical description: 3pp
# 1 Why mgcv is awesome Don’t think about it too hard… 😉 This post looks at why building Generalized Additive Model, using the mgcv package is a great option. To do this, we need to look, first, at linear regression and see why it might not be the best option in some cases. I’m using simulated data for all plots, but I’ve hidden the first few code chunks to aid the flow of the post. If you want to see them, you can get the code from the page on my GitHub repository here. # 2 Regression models Let’s say we have some data with two attributes, $$Y$$ and $$X$$. If they are linearly related, they might look something like this: library(ggplot2) a<-ggplot(my_data, aes(x=X,y=Y))+ geom_point()+ scale_x_continuous(limits=c(0,5))+ scale_y_continuous(breaks=seq(2,10,2))+ theme(axis.title.y = element_text(vjust = 0.5,angle=0)) a To examine this relationship we could use a regression model. Linear regression is a method for predicting a variable, $$Y$$, using another, $$X$$. Applying this to our data would predict a set of values that form the red line: library(ggforce) a+geom_smooth(col="red", method="lm")+ geom_polygon(aes(x=X, y=Y), col="goldenrod", fill=NA, linetype="dashed", size=1.2, data=dt2)+ geom_label(aes(x=X, y = Y, label=label), data=triangle)+ geom_mark_circle(aes(x=0, y=2), col="goldenrod", fill=NA, linetype="dashed", size=1.2)+ theme(axis.title.y = element_text(vjust = 0.5,angle=0)) This might bring back school maths memories for some, but it is the ‘equation of a straight line’. According to this equation, we can describe a straight line from the position it starts on the $$y$$ axis (the ‘intercept’, or $$\alpha$$), and how much $$y$$ increases for each unit of $$x$$ (the ‘slope’, which we will also call the coefficient of $$x$$, or $$\beta$$). There is also a some natural fluctuation, as all points would be perfectly in-line if not. We refer to this as the ‘residual error’ ($$\epsilon$$). Phrased mathematically, that is: $y= \alpha + \beta x + \epsilon$ Or if we substitute the actual figures in, we get the following: $y= 2 + 1.5 x + \epsilon$ This post won’t go into the mechanics of estimating these models in any depth, but we estimate the models by taking the difference between each data point and the line (the ‘residual’ error), then minimising this difference. We have both positive and negative errors, above and below the line, so to make them all positive for estimation by squaring them and minimising the ‘sum of the squares.’ You may hear this referred to as ‘ordinary least squares’, or OLS. # 3 What about nonlinear relationships? So what do we do if our data look more like this: One of the key assumptions of the model we’ve just seen is that $$y$$ and $$x$$ are linearly related. If our $$y$$ is not normally distributed, we use a Generalized Linear Model (Nelder & Wedderburn, 1972), where $$y$$ is transformed through a link-function, but again we are assuming $$f(y)$$ and $$x$$ are linearly related. If this is not the case, and the relationship varies across the range of $$x$$, it may not be the best fit. We have a few options here: • We can use a linear fit, but we will under or over score sections of the data if we do that. • We can divide into categories. I’ve used three in the plot below, and that is a reasonable option, but it’s a bit or a ‘broad brush stroke.’ Again we may be under or over score sections, plus is seems inequitable around the boundaries between categories. e.g. is $$y$$ that much different if $$x=49$$ when compared to $$x=50$$? 
• We can use transformation such as polynomials. Below I’ve used a cubic polynomial, so the model fits: $$y = x + x^2 + x^3$$. The combination of these allow the functions to smoothly approximate changes. This is a good option, but can oscillate at the extremes and may induce correlations in the data that degrade the fit. # 4 Splines A further refinement of polynomials is to fit ‘piece-wise’ polynomials, where we chain polynomials together across the range of the data to describe the shape. ‘Splines’ are piece-wise polynomials, named after the tools draftsmen used to use to draw curves. Physical splines were flexible strips that could bent to shape and were held in place by weights. When constructing mathematical splines, we have polynomial functions (the flexible strip), continuous up to and including second derivative (for those of you who know what that means), fixed together at ‘knot’ points. Below is a ggplot2 object with a geom_smooth line with a formula containing a ‘natural cubic spline’ from the ns function. This type of spline is ‘cubic’ ($$x+x^2+x^3$$) and linear past the outer knot points (‘natural’), and it is using 10 knots # 5 How smooth is smooth? The splines can be a smooth or ‘wiggly’ as you like, and this can be controlled either by altering the number of knots $$(k)$$, or by using a smoothing penalty $$\gamma$$. If we increase the number of knots, it will be more ‘wiggly’. This is likely to be closer to the data, and the error will be less, but we are starting to ‘over-fit’ the relationship and fit the noise in our data. When we combine the smoothing penalty, we penalize the complexity in the model and this helps reduce the over-fitting. # 6 Generalized Additive Models (GAMs) A Generalized Additive Model (GAM) (Hastie, 1984) uses smooth functions (like splines) for the predictors in a regression model. These models are strictly additive, meaning we can’t use interaction terms like a normal regression, but we could achieve the same thing by reparametrising as a smoother. This is not quite the case, but essentially we are moving to a model like: $y= \alpha + f(x) + \epsilon$ A more formal example of a GAM, taken from Wood (2017), is: $g(\mu i) = A_i \theta + f_1(x_1) + f_2(x_{2i}) + f3(x_{3i}, x_{4i}) + ...$ Where: • $$\mu i \equiv E(Y _i)$$, the expectation of Y • $$Yi \sim EF(\mu _i, \phi _i)$$, $$Yi$$ is a response variable, distributed according to exponential family distribution with mean $$\mu _i$$ and shape parameter $$\phi$$. • $$A_i$$ is a row of the model matrix for any strictly parametric model components with $$\theta$$ the corresponding parameter vector. • $$f_i$$ are smooth functions of the covariates, $$xk$$, where $$k$$ is each function basis. ## 6.1 So what does that mean for me? In cases where you would build a regression model, but you suspect a smooth fit would do a better job, GAM are a great option. They are suited to non-linear, or noisy data, and you can use ‘random effects’ as they can be viewed as a type of smoother, or all smooths can be reparameterised as random effects. In a similar fashion, they can be viewed as either Frequentist random effects models, or as Bayesian models where the smoother is essentially your prior (don’t quote me on that, I know very little about Bayesian methods at this point). There are two major implementations in R: • Trevor Hastie’s gam package, that uses loess smoothers or smoothing splines at every point. • Simon Wood’s mgcv package, that uses reduced-rank smoothers, and is the subject of this post. 
There are many other further advances on these approaches, such as GAMLSS, but they are beyond the scope of this post. # 7 mgcv: mixed gam computation vehicle Prof. Simon Wood’s package (Wood,2017) is pretty much the standard. It is included in the standard distribution of R from the R project, and included in other packages such as in geom_smooth from ggplot2. I described it above as using ‘reduced-rank’ smoothers. By this, I mean that we don’t fit a smoother to every point. If our data has 100 points, but could be adequately described by a smoother with 10 knots, it could be described as wasteful to require more. This also hits our degrees-of-freedom and can affect our tendency to over fit. When we combine enough knots to adequately describe our data’s relationship, we can use the penalty to hone this to the desired shape. This is a nice safety net, as the number of knots is not critical past the minimum number. mgcv not only offers the mechanism to build the models and smoothers, but it will also automatically estimate the penalty values for you and optimize the smoothers. When you are more familiar with it, you can change these settings, but it does a very good job out of the box. In my own PhD work, I was building models based on overdispersed data, where there was more error/noise in the data than expected. mgcv was great at cutting through this noise, but I had to increase the penalty values to compensate for this noise. So how do you specify an mgcv GAM for the sigmoidal data above? Here I will use a cubic regression spline in mgcv: library(mgcv) my_gam <- gam(Y ~ s(X, bs="cr"), data=dt) The settings above mean: • s() specifies a smoother. There are other options, but s is a good default • bs="cr" is telling it to use a cubic regression spline (‘basis’). Again there are other options, and the default is a ‘thin-plate regression spline’, which is a little more complicated than the one above, so I stuck with cr. • The s function works out a default number of knots to use, but you can alter this with k=10 for 10 knots for example. # 8 Model Output: So if we look at our model summary, as we would do with a regression model, we’ll see a few things: summary(my_gam) ## ## Family: gaussian ## Link function: identity ## ## Formula: ## Y ~ s(X, bs = "cr") ## ## Parametric coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 43.9659 0.8305 52.94 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Approximate significance of smooth terms: ## edf Ref.df F p-value ## s(X) 6.087 7.143 296.3 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## R-sq.(adj) = 0.876 Deviance explained = 87.9% ## GCV = 211.94 Scale est. = 206.93 n = 300 • Model coefficients for our intercept is shown, and any non-smoothed parameters will show here • An overall significance of each smooth term is below. • Degrees of freedom look unusual, as we have decimal. This is based on ‘effective degrees of freedom’ (edf), because we are using spline functions that expand into many parameters, but we are also penalising them and reducing their effect. We have to estimate our degrees of freedom rather than count predictors (Hastie et al., 2009) . # 9 Check your model: The gam.check() function can be used to look at the residual plots, but it also test the smoothers to see if there are enough knots to describe the data. Read the details below, or the help file, but if the p-value is very low, you need more knots. 
gam.check(my_gam) ## ## Method: GCV Optimizer: magic ## Smoothing parameter selection converged after 4 iterations. ## The RMS GCV score gradient at convergence was 1.107369e-05 . ## The Hessian was positive definite. ## Model rank = 10 / 10 ## ## Basis dimension (k) checking results. Low p-value (k-index<1) may ## indicate that k is too low, especially if edf is close to k'. ## ## k' edf k-index p-value ## s(X) 9.00 6.09 1.1 0.97 # 10 Is it any better than linear model? Lets test our model against a regular linear regression withe the same data: my_lm <- lm(Y ~ X, data=dt) anova(my_lm, my_gam) ## Analysis of Variance Table ## ## Model 1: Y ~ X ## Model 2: Y ~ s(X, bs = "cr") ## Res.Df RSS Df Sum of Sq F Pr(>F) ## 1 298.00 88154 ## 2 292.91 60613 5.0873 27540 26.161 < 2.2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Yes, yes it is! Our anova function has performed an f-test here, and our GAM model is significantly better that linear regression. # 11 Summary So, we looked at what a regression model is and how we are explaining one variable: $$y$$, with another: $$x$$. One of the underlying assumptions is a linear relationship, but this isn’t always the case. Where the relationship varies across the range of $$x$$, we can use functions to alter this shape. A nice way to do it is by chaining smooth curves together at ‘knot’-points, and we refer to this as a ‘spline.’ We can use these splines in a regular regression, but if we use them in the context of a GAM, we estimate both the regression model and how smooth to make our smoothers. The mgcv package is great for estimating GAMs and choosing the smoothing parameters. The example above shows a spline-based GAM giving a much better fit than a linear regression model. If your data are not linear, or noisy, a smoother might be appropriate I highly recommend Noam Ross’ https://twitter.com/noamross?s=20 free online GAM course: https://noamross.github.io/gams-in-r-course/ For more depth, Simon Wood’s Generalized Additive Models is one of the best books on, not just GAMS, but regression in general: https://www.crcpress.com/Generalized-Additive-Models-An-Introduction-with-R-Second-Edition/Wood/p/book/9781498728331 # 12 References: • NELDER, J. A. & WEDDERBURN, R. W. M. 1972. Generalized Linear Models. Journal of the Royal Statistical Society. Series A (General), 135, 370-384. • HARRELL, F. E., JR. 2001. Regression Modeling Strategies, New York, Springer-Verlag New York. • HASTIE, T. & TIBSHIRANI, R. 1986. Generalized Additive Models. Statistical Science, 1, 297-310. 291 • HASTIE, T., TIBSHIRANI, R. & FRIEDMAN, J. 2009. The Elements of Statistical Learning : Data Mining, Inference, and Prediction, New York, NETHERLANDS, Springer. • WOOD, S. N. 2017. Generalized Additive Models: An Introduction with R, Second Edition, Florida, USA, CRC Press. ##### Chris Mainey ###### Senior Data Scientist NHS Senior Data Scientist with interests in statistical modelling and machine learning in healthcare data.
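As a short addendum to the comparison above, here is a minimal sketch of how the fitted GAM can be inspected and used for prediction; it assumes the my_gam model and dt data frame created earlier in the post.

# Plot the estimated smooth with a confidence band and partial residuals
plot(my_gam, shade = TRUE, residuals = TRUE)

# Predict on a grid of new X values, with standard errors
new_dat <- data.frame(X = seq(min(dt$X), max(dt$X), length.out = 50))
pred <- predict(my_gam, newdata = new_dat, se.fit = TRUE)
head(data.frame(new_dat, fit = pred$fit, se = pred$se.fit))

Because the model above uses a Gaussian family with an identity link, the default predictions are already on the response scale; for other families you can request type = "response" in predict.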
Poster: A new metric on the manifold of kernel matrices with application to matrix geometric means

Suvrit Sra

Wed Dec 05 07:00 PM -- 12:00 AM (PST) @ Harrah's Special Events Center 2nd Floor

Symmetric positive definite (spd) matrices are remarkably pervasive in a multitude of scientific disciplines, including machine learning and optimization. We consider the fundamental task of measuring distances between two spd matrices; a task that is often nontrivial whenever an application demands the distance function to respect the non-Euclidean geometry of spd matrices. Unfortunately, typical non-Euclidean distance measures such as the Riemannian metric $\delta_R(X,Y)=\lVert\log(X^{-1}Y)\rVert_{\mathrm{F}}$ are computationally demanding and also complicated to use. To allay some of these difficulties, we introduce a new metric on spd matrices: this metric not only respects non-Euclidean geometry, it also offers faster computation than $\delta_R$ while being less complicated to use. We support our claims theoretically via a series of theorems that relate our metric to $\delta_R(X,Y)$, and experimentally by studying the nonconvex problem of computing matrix geometric means based on squared distances.

#### Author Information

##### Suvrit Sra (MIT)

Suvrit Sra is a faculty member within the EECS department at MIT, where he is also a core faculty member of IDSS, LIDS, the MIT-ML Group, as well as the Statistics and Data Science Center. His research spans topics in optimization, matrix theory, differential geometry, and probability theory, which he connects with machine learning; a key focus of his research is on the theme "Optimization for Machine Learning" (http://opt-ml.org).
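The abstract does not write out the proposed metric explicitly. For orientation only, and recalled from the corresponding NeurIPS 2012 paper rather than taken from this page (so treat the exact form as an assumption), the distance studied is the square root of the S-divergence,

$$\delta_S^2(X,Y) = \log\det\left(\frac{X+Y}{2}\right) - \frac{1}{2}\log\det(XY),$$

which can be evaluated with determinant or Cholesky computations rather than matrix logarithms, consistent with the faster computation claimed above.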
## Online Course Discussion Forum ### MC III Algebra help MC III Algebra help 3.31: My equations are: $$a^4-4a \geq0$$ $$x= (-a^2+ \sqrt{a^4-4a})/2$$ But I don't know how to get the maximum value, I tried completing the square but it didn't work. 3.34: I started with $$20x=x^2+y^2$$, then I got $$x^3-y^3-18xy = (x+y)(x^2-xy+y^2)-18y=(x+y)(20x-xy)-18xy=x((x+y)(20-y)-18y)$$ I got x=0 or y = 0, and (x+y)(20-y)-18xy=0. however, I can't figure out how to solve the equation. 3.35: I substituted $$a= \sqrt[5]{x-6}$$ and $$b=\sqrt[5]{39-x}$$, and I got ab(a+b)(a^2+b^2)=30. I tried c = ab and d=a+b, but It didn't really work out.
# $l$-adic representations from Shimura curves (MathOverflow question 92155)

Asked by e1, 2012-03-25.

This question may be kind of vague. We use the same notations as in Carayol's papers:

H. Carayol, Sur les représentations l-adiques associées aux formes modulaires de Hilbert;

H. Carayol, Sur la mauvaise réduction des courbes de Shimura.

We know Carayol constructed the l-adic representation $\sigma$ of $Gal(\bar{F}/F)$ "in" the étale cohomology group of "quaternionic" Shimura curves, roughly speaking, by taking some Hecke "eigenspace" of $H_{ét}^1(M_K\times_F \bar{F}, \mathcal{F}_{\lambda})$.

My question is: could we get this Galois representation from some unitary Shimura curves? More precisely, does $\sigma|_{Gal(\bar{E}/E)}$ appear in $H_{ét}^1(M_{K'}'\times_E \bar{E},\mathcal{F}_{\lambda}')$ for some $K'$ and some locally constant sheaf $\mathcal{F}_{\lambda}'$?

Thanks!

Answer by David Loeffler, 2012-03-25:

Yes, this can be done. In recent years, Clozel, Harris, Taylor and others have shown how to attach Galois representations to sufficiently nice automorphic representations of $GL_n$ (for arbitrary $n$) over totally real and CM fields. Very roughly, when the base is a CM field the necessary Galois representations are constructed using unitary Shimura varieties; and when the base is totally real, the Galois representations are constructed by patching representations of "enough" imaginary CM extensions of the given totally real field.

See for instance Taylor's review article http://www.math.ias.edu/~rtaylor/longicm02.pdf, where it is sketched in the proof of Theorem 3.6 how to get representations of $Gal(\overline{\mathbb{Q}} / \mathbb{Q})$ by gluing together representations of $Gal(\overline{L} / L)$ for all imaginary quadratic fields $L$ in which some auxiliary prime $p$ is split.
# Evaluate limit as x approaches 0 of (sin(x^2))/x

Evaluate the limit of the numerator and the limit of the denominator.

Take the limit of the numerator. Move the limit inside the trig function because sine is continuous, and move the exponent outside the limit using the Limits Power Rule:

$$\lim_{x\to 0}\sin(x^{2})=\sin\Big(\big(\lim_{x\to 0}x\big)^{2}\Big)=\sin(0^{2})=\sin(0)=0.$$

Raising $0$ to any positive power yields $0$, and the exact value of $\sin(0)$ is $0$.

Take the limit of the denominator by plugging in $0$ for $x$:

$$\lim_{x\to 0}x=0.$$

Direct substitution therefore leads to a division by $0$, which is undefined. Since $\frac{0}{0}$ is of indeterminate form, apply L'Hospital's Rule: the limit of a quotient of functions is equal to the limit of the quotient of their derivatives.

Find the derivative of the numerator. Differentiate using the chain rule, which states that $\frac{d}{dx}f(g(x))=f'(g(x))\,g'(x)$; here set $u=x^{2}$, so the outer function is $\sin(u)$. The derivative of $\sin(u)$ with respect to $u$ is $\cos(u)$, and differentiating $x^{2}$ with the Power Rule gives $2x$, so

$$\frac{d}{dx}\sin(x^{2})=\cos(x^{2})\cdot 2x=2x\cos(x^{2}).$$

Find the derivative of the denominator using the Power Rule:

$$\frac{d}{dx}x=1.$$

Take the limit of the new quotient. Split the limit using the Limits Quotient and Product Rules, move the constant $2$ outside the limit, and move the limit inside the cosine because cosine is continuous:

$$\lim_{x\to 0}\frac{\sin(x^{2})}{x}=\lim_{x\to 0}\frac{2x\cos(x^{2})}{1}=2\Big(\lim_{x\to 0}x\Big)\cos\Big(\big(\lim_{x\to 0}x\big)^{2}\Big)=2\cdot 0\cdot\cos(0)=0.$$

The limit is $0$.
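A quicker route that avoids L'Hospital's Rule altogether is to rewrite the quotient so that the standard limit $\lim_{u\to 0}\frac{\sin u}{u}=1$ applies:

$$\lim_{x\to 0}\frac{\sin(x^{2})}{x}=\lim_{x\to 0}\, x\cdot\frac{\sin(x^{2})}{x^{2}}=\Big(\lim_{x\to 0}x\Big)\cdot\Big(\lim_{u\to 0}\frac{\sin u}{u}\Big)=0\cdot 1=0.$$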
## Maxima and minima — using calculus — for IITJEE Advanced Mathematics

Problem: The length of the edge of the cube $ABCDA_{1}B_{1}C_{1}D_{1}$ is 1. Two points M and N move along the line segments AB and $A_{1}D_{1}$, respectively, in such a way that at any time t $(0 \leq t < \infty)$ we have $BM = |\sin{t}|$ and $D_{1}N=|\sin(\sqrt{2}t)|$. Show that MN has no minimum.

Proof: Clearly, $MN \geq MA_{1} \geq AA_{1} =1$. If $MN=1$ for some t, then $M=A$ and $N=A_{1}$, which is equivalent to $|\sin{t}|=1$ and $|\sin{\sqrt{2}t}|=1$. Consequently, $t=\frac{\pi}{2}+k\pi$ and $\sqrt{2}t=\frac{\pi}{2}+n\pi$ for some integers k and n, which implies $\sqrt{2}=\frac{2n+1}{2k+1}$, a contradiction since $\sqrt{2}$ is irrational. This is why $MN >1$ for any t.

We will now show that MN can be made arbitrarily close to 1. For any integer k, set $t_{k}=\frac{\pi}{2}+k\pi$. Then $|\sin{t_{k}}|=1$, so at any time $t_{k}$ the point M is at A. To show that N can be arbitrarily close to $A_{1}$ at times $t_{k}$, it is enough to show that $|\sin(\sqrt{2}t_{k})|$ can be arbitrarily close to 1 for appropriate choices of k.

We are now going to use Kronecker's theorem: if $\alpha$ is an irrational number, then the set of numbers of the form $m\alpha+n$, where m is a positive integer while n is an arbitrary integer, is dense in the set of all real numbers. The latter means that every non-empty open interval (regardless of how small it is) contains a number of the form $m\alpha+n$.

Since $\sqrt{2}$ is irrational, we can use Kronecker's theorem with $\alpha=\sqrt{2}$. Then, for $x=\frac{1-\sqrt{2}}{2}$ and any $\delta>0$, there exist integers $k \geq 1$ and $n_{k}$ such that $k\sqrt{2}-n_{k} \in (x-\delta, x+\delta)$. That is, for $\epsilon_{k}=\sqrt{2}k+\frac{\sqrt{2}}{2}-\frac{1}{2}-n_{k}$ we have $|\epsilon_{k}|<\delta$. Since $\sqrt{2}(k+\frac{1}{2})=\frac{1}{2}+n_{k}+\epsilon_{k}$, we have $|\sin(\sqrt{2}t_{k})|=|\sin \pi \sqrt{2}(k+\frac{1}{2})|=|\sin(\frac{\pi}{2}+n_{k}\pi+\epsilon_{k}\pi)|=|\cos{(\pi \epsilon_{k})}|$. It remains to note that $|\cos{(\delta\pi)}|$ tends to 1 as $\delta$ tends to 0. Hence, MN can be made arbitrarily close to 1 while never being equal to 1, so the minimum of MN is not attained.

Ref: Geometric Problems on Maxima and Minima by Titu Andreescu, Oleg Mushkarov, and Luchezar Stoyanov.

Thanks to Prof Andreescu, et al!

Nalin Pithwa
Tamilnadu State Board New Syllabus Samacheer Kalvi 11th Maths Guide Pdf Chapter 8 Vector Algebra – I Ex 8.4 Text Book Back Questions and Answers, Notes. ## Tamilnadu Samacheer Kalvi 11th Maths Solutions Chapter 8 Vector Algebra – I Ex 8.4 Question 1. Find the magnitude of $$\vec{a}$$ × $$\vec{b}$$ if $$\vec{a}$$ = 2î + ĵ + 3k̂ and $$\vec{b}$$ = 3î + 5ĵ – 2k̂ The given vectors are $$\vec{a}$$ = 2î + ĵ + 3k̂ $$\vec{b}$$ = 3î + 5ĵ – 2k̂ Question 2. Show that Question 3. Find the vectors of magnitude 10√3 that are perpendicular to the plane which contains î + 2ĵ + k̂ and î + 3ĵ + 4k̂ Let the given vectors be $$\vec{a}$$ = î + 2ĵ + k̂ $$\vec{b}$$ = î + 3ĵ + 4k̂ Question 4. Find the unit vectors perpendicular to each of the vectors $$\vec{a}$$ + $$\vec{b}$$ and $$\vec{a}$$ – $$\vec{b}$$, where $$\vec{a}$$ = î + ĵ + k̂ and $$\vec{b}$$ = î + 2ĵ + 3k̂ Question 5. Find the area of the parallelogram whose two adjacent sides are determined by the vectors î + 2ĵ + 3k̂ and 3î – 2ĵ + k̂ Question 6. Find the area of the triangle whose vertices are A(3, -1, 2), B(1, -1, -3), and C(4, -3, 1) The given vertices of the triangle ABC are A(3, -1, 2), B(1, -1, -3) and C(4, -3, 1) Question 7. Question 8. For any vector $$\vec{a}$$ prove that Let the given vector be $$\vec{a}$$ = 2î + ĵ – k̂ and $$\vec{b}$$ = 2î + ĵ – k̂
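The worked solutions on the original page are given as images and are not reproduced above. As an illustration of the method, the data given in Question 1 lead to the following computation (worked out here as a sketch, not copied from the page):

$$\vec{a}\times\vec{b} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k}\\ 2 & 1 & 3\\ 3 & 5 & -2 \end{vmatrix} = (1\cdot(-2)-3\cdot 5)\hat{i} - (2\cdot(-2)-3\cdot 3)\hat{j} + (2\cdot 5-1\cdot 3)\hat{k} = -17\hat{i}+13\hat{j}+7\hat{k}$$

$$|\vec{a}\times\vec{b}| = \sqrt{(-17)^2+13^2+7^2} = \sqrt{289+169+49} = \sqrt{507} = 13\sqrt{3}$$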
# Applied Mathematics and Numerical Analysis Seminar

## Past sessions

### Oscillatory behavior of a mixed type difference equation with variable coefficients

In this talk, we present a study of the oscillatory behaviour of the mixed type difference equation with variable coefficients

$$\Delta x(n) = \sum_{i=1}^l p_i(n)x(\tau_i(n)) + \sum_{j=1}^m q_j(n)x(\sigma_j(n)),\, n \geq n_0.$$

### An alternative stabilization in numerical simulations of Oldroyd-B type fluids

The numerical simulation of non-Newtonian viscoelastic fluid flows is a challenging problem. One of the approaches often adopted to stabilize the numerical simulations is based on the addition of a stress diffusion term to the transport equations for the viscoelastic stress tensor. The additional term affects the solution of the problem, and special care should be taken to keep the modified model consistent with the original one. In this work the influence of numerical stabilization using artificial stress diffusion is analyzed in detail and a new alternative is presented. Instead of the classical addition of an artificial stress diffusion term, a modified additional term is used which is only present during the transient phase and should vanish when approaching the stationary case. The steady solution is not affected by such a vanishing artificial term, while the stability of the numerical method is improved. This is joint work with Tomás Bodnár (Institute of Mathematics, Czech Academy of Sciences and Faculty of Mechanical Engineering, Czech Technical University in Prague, Czech Republic).

### Exact solution for a Benney-Lin type equation (Gain in Regularity)

In this seminar we will show an exact solution for a Benney-Lin type equation, obtained mainly by using the Ince transformation. We establish an exact traveling wave solution to the nonlinear evolution equation of Benney-Lin type.

### Very high-order finite volume schemes with curved domains

Finite volume methods of third or higher order require a specific treatment of the boundary conditions when dealing with a non-polygonal domain that does not exactly fit the mesh. We face a similar situation with internal smooth interfaces shared by two subdomains. To address this issue, several technologies have been developed since the 90's, such as isoparametric elements, the (ghost-cell) immersed boundary and the inverse Lax-Wendroff boundary treatment, among others. We propose a quick overview of the traditional methods and introduce the new Reconstruction of Off-site Data (ROD) method. Basically, the idea consists, first, in clearly distinguishing the computational domain (cells or nodes where the solution is computed) from the physical one and, secondly, in "transporting" the boundary conditions prescribed on the real boundary to the computational domain. To this end, specific local polynomial reconstructions that contain a fingerprint of the boundary conditions are proposed and used in schemes that achieve up to sixth-order accuracy. Several applications will be proposed in the context of finite volume (flux reconstruction) and finite difference (ghost cells) discretizations of the convection-diffusion equation and the Euler system.

### Renormalized transport of inertial particles

We study how an imposed fluid flow (laminar or turbulent) modifies the transport properties of inertial particles (e.g. aerosols, droplets or bubbles), namely their terminal velocity, effective diffusivity, and concentration following a point-source emission.
Such quantities are investigated by means of analytical and numerical computations, as functions of the control parameters of both flow and particle; i.e., density ratio, inertia, Brownian diffusivity, gravity (or other external forces), turbulence intensity, compressibility degree, space dimension, and geometric/temporal properties. The complex interplay between these parameters leads to the following conclusion of interest in the realm of applications: any attempt to model dispersion and sedimentation processes — or equivalently the wind-driven surface transport of floaters — cannot avoid taking into account the full details of the flow field and of the inertial particle. ### On the existence of a solution of a class of non-stationary free boundary problems We consider a class of parabolic free boundary problems with heterogeneous coefficients. We establish existence of a solution for this problem. We use a regularized problem for which we prove existence of a solution by applying the Tychonoff fixed point theorem. Then we pass to the limit to get a solution of our problem. ### On the time fractional differential equation with integral conditions We study the existence and uniqueness of a solution for time order partial fractional differential equations with integral conditions. By using the method of energy inequalities, we find a priori estimates and the density of the range of the operator generated by given the problem. ### Mathematical modeling, analysis and simulation of biological, bio-inspired and engineering systems Over the last decade, there have been dramatic advances in mathematical modeling, analysis and simulation techniques to understand fundamental mechanisms underlying multidisciplinary applications that involve multi-physics interactions. This work will present the results from projects that evolved from multidisciplinary applications of differential equations for multi-physics problems in biological, bio-inspired and engineering systems. Specifically, mathematical modeling and numerical methods for efficient computation of nonlinear interaction for coupled differential equation models that arise from applications such as flow-structure interactions to understand rupture of aneurysms to dynamics of micro-air vehicles as well as modeling dynamics of infectious disease to modeling social dynamics will be presented. Some theoretical results that validate the reliability and robustness of the computational methodology employed will also be presented. We will also discuss how such projects can provide opportunities for students and faculty at all levels to employ transformative research in multidisciplinary areas. Upcoming opportunities for undergraduate and graduate fellowships as well as research opportunities for faculty for collaborative proposals will also be discussed. ### A new numerical method based on the modified hat functions for solving fractional optimal control problems In the present work, a numerical method based on the modified hat functions is introduced for solving a class of fractional optimal control problems. In this scheme, the control and the fractional derivative of state function are considered as linear combinations of the modified hat functions. The Riemann–Liouville integral operator is used to give approximations for the state function and some of its derivatives. Using the properties of the considered basis functions, the fractional optimal control problem is easily reduced to solving a system of nonlinear algebraic equations. 
An error bound is presented for the approximate optimal value of the performance index obtained by the proposed method. Then the method is developed for solving a class of optimal control problems with inequality constraints. Finally, some illustrative examples are considered to demonstrate the effectiveness and accuracy of the proposed technique. ### Solving integral and integro-differential equations using Collocation and Wavelets methods The main objective of this work is to study some classes of integral and integro-differential equations with regular and singular kernels. We introduce a wavelets method to solve a new class of Fredholm integral equations of the second kind with non symmetric kernel; we also apply a collocation method based on the airfoil polynomial to numerically solve an integro-differential equation of second order with Cauchy kernel. ### Numerical simulations of Stokes flow in 3D using the MFS In this work we present the method of fundamental solutions (MFS) in several contexts. We start with a more intuitive example and then we extend it to vectorial PDE's. We present the density results that justify our approach and then we use the MFS to solve the Stokes system in 2D and 3D. We deal with non-trivial domains and with different types of boundary conditions, namely the mixed Dirichlet-Neumann conditions. We will present several simulations that show the strengths of the method and some of the numerical difficulties found. (Joint work with C. J. S. Alves and A. L. Silvestre) ### Modeling microbial interactions, dynamics and interventions: from data to processes Controlling the evolution and spread of antimicrobial resistance is a major global health priority. While the discovery of new antibiotics does not follow the rate at which new resistances develop, a more rational use of available drugs remains critical. In my work, I explore the role of host immunity in infection dynamics and control, hence as an important piece in the puzzle of antibiotic resistance management. I will present two studies from my research on microbial dynamics, addressing processes that occur at the within- and between-host level. The first study examines antibiotic resistance and treatment optimization for bacterial infections, quantifying the crucial role of host immune defenses at the single host level. The second study presents a multi-strain epidemiological model applied to pneumococcus data before and after vaccination. This framework allows for retrospective inference of strain interactions in co-colonization and vaccine efficacy parameters, and can be useful for comparative analyses across different immunized populations. ### Suitable Far-field Boundary Conditions for Wall-bounded Stratified Flows This talk presents an alternative boundary conditions setup for the numerical simulations of stable stratified flow. The focus of the tested computational setup is on the pressure boundary conditions on the artificial boundaries of the computational domain. The simple three dimensional test case deals with the steady flow of an incompressible, variable density fluid over a low smooth model hill. The Boussinesq approximation model is solved by an in-house developed high-resolution numerical code, based on compact finite-difference discretization in space and Strong Stability Preserving Runge-Kutta method for (pseudo-) time stepping. This is a joint work with Philippe Fraunie, University of Toulon, France. 
### Necessary and sufficient conditions for existence of minimizers for vector problems in the Calculus of Variations

We report our work on the main necessary and sufficient conditions for weak lower semi-continuity of integral functionals in the vector Calculus of Variations. In particular, we provide tools to investigate rank-one convexity of functions defined on $2\times 2$-matrices. Furthermore, we explore some consequences and examples. We also explore the quasiconvexity condition in the case where the integrand of an integral functional is a fourth-degree homogeneous polynomial.

### Reconstruction of PDE coefficients with overprescription of Cauchy data at the boundary

Frequently, incomplete information about coefficients in partial differential equations is compensated by overprescribed Cauchy data on the boundary. We analyse this kind of boundary value problem for an elliptic system in Lipschitz domains. The main techniques are variational formulation, boundary integral equations and the Calderon projector. To estimate those coefficients we propose a variational formulation based on the internal discrepancy observed in the mixed boundary value problem obtained by splitting the overprescribed Cauchy data. Some numerical experiments are presented.

### Models for sustainable biodiesel production

Several countries have already begun to invest in alternative energies due to dwindling fossil fuel resources. In particular, for biodiesel production Jatropha curcas appears to be a possible resource, in that it thrives even in harsh and very dry conditions. From its seeds a relevant quantity of oil can be extracted for the production of high-grade biodiesel fuel. But this plant is subject to parasitism from a mosaic virus, the Begomovirus, which is carried by the whitefly Bemisia tabaci. The talk is centered on the investigation of two models for fighting this plant infection. In the case of large plantations we investigate the optimal insecticide spraying policy. Here the most relevant parameters of the ecosystem appear to be the infection transmission rate from vectors to plants and the vector mortality. The results indicate that spraying should be administered only after 10 days from the onset of the epidemic, relentlessly continued for about three months, after which disease eradication is obtained [2]. At the small scale instead, we consider possible production by individuals who cultivate this plant in small plots that would otherwise be left wild and unproductive [1]. We consider the effects of media campaigns that keep people aware of this plant disease and indicate means for fighting it. The model shows that awareness campaigns should be implemented rather intensively in order to effectively reduce or completely eradicate the infection.

References
[1] Priti Kumar Roy, Fahad Al Basir, Ezio Venturino (2017) Effects of Awareness Program for Controlling Mosaic Disease in Jatropha curcas Plantations, to appear in MMAS.
[2] Ezio Venturino, Priti Kumar Roy, Fahad Al Basir, Abhirup Datta (2016) A model for the control of the Mosaic Virus disease in Jatropha Curcas plantations, to appear in Energy, Ecology and Environment, doi:10.1007/s40974-016-0033-8.

(work in collaboration with Fahad Al Basir and Priti K.
Roy, Javadpur University, India)

### A hierarchy of models for the flow of fluids through porous solids

The celebrated equations due to Fick and Darcy are approximations that can be obtained systematically, on the basis of numerous assumptions, within the context of Mixture Theory; these equations were not, however, developed in such a manner by Fick or Darcy. Relaxing the assumptions made in deriving these equations via mixture theory, selectively, leads to a hierarchy of mathematical models, and it can be shown that popular models due to Forchheimer, Brinkman, Biot and many others can be obtained via appropriate approximations to the equations governing the flow of interacting continua. It is shown that a variety of other generalizations are possible in addition to those that are currently in favor, and these might be appropriate for describing numerous interesting technological applications.

### Perturbative methods in thermal imaging

I will briefly discuss three different inverse problems in the field of thermal nondestructive testing. In all cases we have a thin metallic plate $\Omega_0$ whose top boundary $S_{top}$ is not accessible, while we are able to operate on the opposite surface $S_{bot}$. We heat the specimen by applying a heat flux of density $\phi$ on $S_{bot}$ and measure a sequence of temperature maps.

Problem 1. Recover a perturbation of the heat transfer coefficient on $S_{top}$. [Inglese, Olmi - 2017]
Problem 2. Recover a surface damage on the inaccessible side $S_{top}$. [Inglese, Olmi - in progress]
Problem 3. Recover a nonlinear heat transfer coefficient on $S_{top}$. [Clarelli, Inglese - 2016]
# Lower Bounds on Stabilizer Rank Shir Peleg1, Amir Shpilka1, and Ben Lee Volk2 1Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, Israel 2Efi Arazi School of Computer Science, Reichman University, Herzliya, Israel ### Abstract The $\textit{stabilizer rank}$ of a quantum state $\psi$ is the minimal $r$ such that $\left| \psi \right \rangle = \sum_{j=1}^r c_j \left|\varphi_j \right\rangle$ for $c_j \in \mathbb{C}$ and stabilizer states $\varphi_j$. The running time of several classical simulation methods for quantum circuits is determined by the stabilizer rank of the $n$-th tensor power of single-qubit magic states. We prove a lower bound of $\Omega(n)$ on the stabilizer rank of such states, improving a previous lower bound of $\Omega(\sqrt{n})$ of Bravyi, Smith and Smolin [7]. Further, we prove that for a sufficiently small constant $\delta$, the stabilizer rank of any state which is $\delta$-close to those states is $\Omega(\sqrt{n}/\log n)$. This is the first non-trivial lower bound for approximate stabilizer rank. Our techniques rely on the representation of stabilizer states as quadratic functions over affine subspaces of $\mathbb{F}_2^n$, and we use tools from analysis of boolean functions and complexity theory. The proof of the first result involves a careful analysis of directional derivatives of quadratic polynomials, whereas the proof of the second result uses Razborov-Smolensky low degree polynomial approximations and correlation bounds against the majority function. ### ► References [1] Scott Aaronson and Alex Arkhipov. The Computational Complexity of Linear Optics. Theory Comput., 9:143–252, 2013. doi:10.4086/​toc.2013.v009a004. https:/​/​doi.org/​10.4086/​toc.2013.v009a004 [2] Scott Aaronson and Daniel Gottesman. Improved simulation of stabilizer circuits. Phys. Rev. A, 70:052328, Nov 2004. doi:10.1103/​PhysRevA.70.052328. https:/​/​doi.org/​10.1103/​PhysRevA.70.052328 [3] Ethan Bernstein and Umesh V. Vazirani. Quantum Complexity Theory. SIAM J. Comput., 26(5):1411–1473, 1997. doi:10.1137/​S0097539796300921. https:/​/​doi.org/​10.1137/​S0097539796300921 [4] Sergey Bravyi, Dan Browne, Padraic Calpin, Earl Campbell, David Gosset, and Mark Howard. Simulation of quantum circuits by low-rank stabilizer decompositions. Quantum, 3:181, September 2019. doi:10.22331/​q-2019-09-02-181. https:/​/​doi.org/​10.22331/​q-2019-09-02-181 [5] Sergey Bravyi and David Gosset. Improved Classical Simulation of Quantum Circuits Dominated by Clifford Gates. Phys. Rev. Lett., 116:250501, Jun 2016. doi:10.1103/​PhysRevLett.116.250501. https:/​/​doi.org/​10.1103/​PhysRevLett.116.250501 [6] Sergey Bravyi and Alexei Kitaev. Universal quantum computation with ideal Clifford gates and noisy ancillas. Phys. Rev. A, 71:022316, Feb 2005. doi:10.1103/​PhysRevA.71.022316. https:/​/​doi.org/​10.1103/​PhysRevA.71.022316 [7] Sergey Bravyi, Graeme Smith, and John A. Smolin. Trading Classical and Quantum Computational Resources. Phys. Rev. X, 6:021043, Jun 2016. doi:10.1103/​PhysRevX.6.021043. https:/​/​doi.org/​10.1103/​PhysRevX.6.021043 [8] Jeroen Dehaene and Bart De Moor. Clifford group, stabilizer states, and linear and quadratic operations over GF(2). Phys. Rev. A, 68:042318, Oct 2003. doi:10.1103/​PhysRevA.68.042318. https:/​/​doi.org/​10.1103/​PhysRevA.68.042318 [9] Yuval Filmus, Hamed Hatami, Steven Heilman, Elchanan Mossel, Ryan O'Donnell, Sushant Sachdeva, Andrew Wan, and Karl Wimmer. Real Analysis in Computer Science: A collection of Open Problems. Simons Institute, 2014. 
URL: https:/​/​simons.berkeley.edu/​sites/​default/​files/​openprobsmerged.pdf. https:/​/​simons.berkeley.edu/​sites/​default/​files/​openprobsmerged.pdf [10] Daniel Gottesman. Stabilizer Codes and Quantum Error Correction. Dissertation (Ph.D.), California Institute of Technology, 1997. doi:10.7907/​rzr7-dt72. https:/​/​doi.org/​10.7907/​rzr7-dt72 [11] Lov K. Grover. A Fast Quantum Mechanical Algorithm for Database Search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing, pages 212–219. ACM, 1996. doi:10.1145/​237814.237866. https:/​/​doi.org/​10.1145/​237814.237866 [12] Cupjin Huang, Michael Newman, and Mario Szegedy. Explicit Lower Bounds on Strong Quantum Simulation. IEEE Trans. Inf. Theory, 66(9):5585–5600, 2020. doi:10.1109/​TIT.2020.3004427. https:/​/​doi.org/​10.1109/​TIT.2020.3004427 [13] Daniel J. Kleitman. On a combinatorial conjecture of Erdös. Journal of Combinatorial Theory, 1(2):209–214, 1966. doi:10.1016/​S0021-9800(66)80027-3. https:/​/​doi.org/​10.1016/​S0021-9800(66)80027-3 [14] Lucas Kocia. Improved Strong Simulation of Universal Quantum Circuits. arXiv preprint arXiv:2012.11739, 2020. URL: https:/​/​arxiv.org/​abs/​2012.11739. arXiv:2012.11739 [15] Farrokh Labib. Stabilizer rank and higher-order Fourier analysis. arXiv preprint arXiv:2107.10551, 2021. URL: https:/​/​arxiv.org/​abs/​2107.10551. https:/​/​doi.org/​10.22331/​q-2022-02-09-645 arXiv:2107.10551 [16] Benjamin Lovitz and Vincent Steffan. New techniques for bounding stabilizer rank. arXiv preprint arXiv:2110.07781, 2021. URL: https:/​/​arxiv.org/​abs/​2110.07781. arXiv:2110.07781 [17] Tomoyuki Morimae and Suguru Tamaki. Fine-grained quantum computational supremacy. Quant. Inf. Comput., 19:1089, 2019. doi:10.26421/​QIC19.13-14-2. https:/​/​doi.org/​10.26421/​QIC19.13-14-2 [18] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information (10th Anniversary edition). Cambridge University Press, 2016. doi:10.1017/​CBO9780511976667. https:/​/​doi.org/​10.1017/​CBO9780511976667 [19] Hammam Qassim, Hakop Pashayan, and David Gosset. Improved upper bounds on the stabilizer rank of magic states. Quantum, 5:606, December 2021. doi:10.22331/​q-2021-12-20-606. https:/​/​doi.org/​10.22331/​q-2021-12-20-606 [20] Ran Raz and Avishay Tal. Oracle separation of BQP and PH. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, pages 13–23. ACM, 2019. doi:10.1145/​3313276.3316315. https:/​/​doi.org/​10.1145/​3313276.3316315 [21] Alexander A. Razborov. Lower bounds on the size of bounded depth circuits over a complete basis with logical addition. Mat. Zametki, 41:598–607, 1987. English translation in Mathematical Notes of the Academy of Sci. of the USSR, 41(4):333-338, 1987. doi:10.1007/​BF01137685. https:/​/​doi.org/​10.1007/​BF01137685 [22] Peter W. Shor. Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer. SIAM J. Comput., 26(5):1484–1509, 1997. doi:10.1137/​S0097539795293172. https:/​/​doi.org/​10.1137/​S0097539795293172 [23] Daniel R. Simon. On the Power of Quantum Computation. SIAM J. Comput., 26(5):1474–1483, 1997. doi:10.1137/​S0097539796298637. https:/​/​doi.org/​10.1137/​S0097539796298637 [24] Roman Smolensky. Algebraic Methods in the Theory of Lower Bounds for Boolean Circuit Complexity. In Proceedings of the 19th Annual ACM Symposium on Theory of Computing, pages 77–82. ACM, 1987. doi:10.1145/​28395.28404. https:/​/​doi.org/​10.1145/​28395.28404 [25] Roman Smolensky. 
On Representations by Low-Degree Polynomials. In 34th Annual Symposium on Foundations of Computer Science, pages 130–138. IEEE Computer Society, 1993. doi:10.1109/​SFCS.1993.366874. https:/​/​doi.org/​10.1109/​SFCS.1993.366874 [26] Maarten Van den Nest. Classical simulation of quantum computation, the Gottesman-Knill theorem, and slightly beyond. Quant. Inf. Comput., 10:258, 2010. doi:10.26421/​QIC10.3-4-6. https:/​/​doi.org/​10.26421/​QIC10.3-4-6 [27] R. Ryan Williams. Limits on Representing Boolean Functions by Linear Combinations of Simple Functions: Thresholds, ReLUs, and Low-Degree Polynomials. In 33rd Computational Complexity Conference, CCC 2018, volume 102 of LIPIcs, pages 6:1–6:24. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018. doi:10.4230/​LIPIcs.CCC.2018.6. https:/​/​doi.org/​10.4230/​LIPIcs.CCC.2018.6 ### Cited by [1] Benjamin Lovitz and Vincent Steffan, "New techniques for bounding stabilizer rank", Quantum 6, 692 (2022). The above citations are from Crossref's cited-by service (last updated successfully 2022-05-28 21:17:06). The list may be incomplete as not all publishers provide suitable and complete citation data. On SAO/NASA ADS no data on citing works was found (last attempt 2022-05-28 21:17:06).
• Solid Mechanics • ### TOPOLOGY OPTIMIZATION OF PIEZOELECTRIC ACTUATOR CONSIDERING CONTROLLABILITY 1) Hu Jun,Kang Zhan() 1. State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian 116024, China • Received:2019-01-07 Accepted:2019-02-25 Online:2019-07-18 Published:2019-07-30 • Contact: Kang Zhan Abstract: Piezoelectric actuators can convert electrical energy into mechanical energy, and has application potential in active vibration control of structures. Since the layout of the piezoelectric actuators has a great influence on the vibration control effect, the optimization of the actuators has always been one of the key factors to structural control. In order to improve the efficiency of control energy in the piezoelectric structure, this paper proposes a topology optimization method for the layout design of piezoelectric actuators with the goal of improving structural controllability. The finite element modeling of the piezoelectric structure is carried out based on the classical laminate theory. The modal superposition method is used to map the dynamic governing equation to the modal space. The controllability index based on the singular value of the control matrix is derived. In the optimization model, the exponential form of the controllability index is chosen as the objective function, and the design variables are the relative densities of the actuator elements. Based on the Solid Isotropic Material Penalization method, an artificial piezoelectric coefficient penalty model is constructed. Sensitivity analysis for the controllability index is proposed based on the singular value of the control matrix. The optimization problem is solved by a gradient-based mathematical programming method. Numerical examples verify the effectiveness of the sensitivity analysis method and the optimization model and show the significance of the layout design of piezoelectric actuators. The influence of some key factors on the optimization results are discussed. It shows that the more piezoelectric materials, the better the controllability; the modes of interest in the objective function has a great influence on the layout of the piezoelectric actuators. CLC Number:
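As a purely illustrative, hedged sketch of the kind of quantity the abstract describes (not the authors' code): a SIMP-style penalization of per-element actuator contributions, with the controllability index taken here as the smallest singular value of the resulting modal control matrix. All matrices and parameter values below are random placeholders standing in for quantities that would come from the finite element model of the laminated plate.

import numpy as np

rng = np.random.default_rng(0)
n_modes, n_elems, p = 4, 10, 3.0          # retained modes, candidate actuator elements, penalty exponent

# placeholder for the per-element modal control contributions B0[mode, element]
B0 = rng.normal(size=(n_modes, n_elems))

def control_matrix(rho):
    # SIMP-like penalization: each element's column is scaled by rho_e**p
    return B0 * rho**p

def controllability_index(rho):
    # smallest singular value of B(rho); larger values mean the retained
    # modes are easier to excite with the available actuators
    return np.linalg.svd(control_matrix(rho), compute_uv=False)[-1]

rho = np.full(n_elems, 0.5)               # a uniform initial design
print(controllability_index(rho))

A gradient-based optimizer, as in the paper, would then update rho subject to a material volume constraint; that part is not sketched here.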
## LaTeX forum ⇒ Graphics, Figures & Tables ⇒ Figure between 2 paragraphs

Information and discussion about graphics, figures & tables in LaTeX documents.

maxime54
Posts: 3
Joined: Sat Jun 07, 2014 5:34 pm

### Figure between 2 paragraphs

Hi everyone,

I have just started to use LaTeX and there is one thing I cannot manage to do: I would like to add a diagram (.png) between two paragraphs. However, when I try to do so, the diagram and the text are placed on top of each other (superposition). I looked for a solution on the Internet but none of them works. I hope you can help me. (I apologize for my English mistakes, I'm not a native speaker.)

By the way, here is what I did:

\documentclass[10pt]{report}
\usepackage[francais]{babel}
\usepackage[T1]{fontenc}
\usepackage[top=2cm, bottom=2cm, left=2cm, right=2cm]{geometry}
\usepackage{soul}
\usepackage{color}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{float}
\usepackage{here}
\usepackage{picins}
\usepackage[section]{placeins}
\makeatletter
\renewcommand{\thesection}{\@arabic\c@section}
\makeatother
\begin{document}
Condition de réplication.
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.25]{replication.png}
\caption{Schéma de la réplication chez les Procaryotes}
\end{center}
\end{figure}
Avant Meselson et Stahl

Thanks in advance

Last edited by Stefan Kottwitz on Sat Jun 07, 2014 10:06 pm, edited 1 time in total. Reason: inline code changed to code block

Johannes_B
Site Moderator
Posts: 4044
Joined: Thu Nov 01, 2012 4:08 pm

Please read some introductory material. Using the floating environment figure means that LaTeX can put it where it fits best. This can be at the top of a page, at the bottom, or sometimes on its own page. If you really don't want the picture to float, use \captionof{figure}{<your caption>} provided by the caption package.

BTW: the center environment adds some extra vertical space, which is unnecessary inside a floating environment.

The smart way: Calm down and take a deep breath, read posts and provided links attentively, try to understand and ask if necessary.

maxime54
Posts: 3
Joined: Sat Jun 07, 2014 5:34 pm

Thanks for your answer, but whatever I do, I obtain the text and the diagram superposed, and there is an error I do not understand: "Cannot determine the size of graphic in danreplicationfork.bmp". How can I let LaTeX determine the size?

Johannes_B
Site Moderator
Posts: 4044
Joined: Thu Nov 01, 2012 4:08 pm

Oh, the pic is on the text, not above it on the paper? LaTeX doesn't support bmp graphics. Please convert it to a jpg file (png would be possible as well) and try again.

The smart way: Calm down and take a deep breath, read posts and provided links attentively, try to understand and ask if necessary.

maxime54
Posts: 3
Joined: Sat Jun 07, 2014 5:34 pm

Yes, the pic is on the text. Even with a png pic it does not work. I do not understand at all why LaTeX does not want to produce "text, then pic, then text".

Johannes_B
Site Moderator
Posts: 4044
Joined: Thu Nov 01, 2012 4:08 pm

You should try to prepare a minimal working example. This is the best and safest way to find the cause of errors. All I can do now is guess, and there is nothing coming to mind right now.

The smart way: Calm down and take a deep breath, read posts and provided links attentively, try to understand and ask if necessary.
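For readers landing on this thread: below is a minimal example along the lines Johannes_B suggests, which compiles and places the picture exactly between the two paragraphs. It uses the demo file example-image that ships with the mwe package; swap in your own .png or .jpg file name.

\documentclass[10pt]{report}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\usepackage{caption}  % provides \captionof for captions outside a float

\begin{document}
First paragraph of text before the diagram.

\begin{center}
  \includegraphics[width=0.4\linewidth]{example-image}% demo image from the mwe package
  \captionof{figure}{A diagram placed exactly between the two paragraphs}
\end{center}

Second paragraph of text after the diagram.
\end{document}

Because no figure environment is used, nothing floats, so the graphic cannot end up on top of the surrounding text.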
# Is this geometric argument enough to show that special relativity assumes flat spacetime? I am preparing myself to teach a class about special relativity a few weeks from now. To make sure they'll understand that spacetime must be flat for special relativity to work, I came up with the following argument, which is based on the famous derivation of time dilation where a moving reference frame ($$S'$$) sees light take a longer path because of the motion between the reference frames. Basically, this derivation. To make some context and define notation, here is the picture I am using: So there is a reference frame $$S$$, stationary relative to a light source. Another frame $$S'$$ moves away from the source with velocity $$v$$. Light is emitted and takes a time $$t$$ in the frame $$S$$ to reach the 'ceiling'. If we enter the reference frame $$S'$$ we see light take a different path due to the motion of frame $$S$$ (this causes the frame $$S'$$ to measure a time $$t'$$ until light reaches the 'ceiling'). The famous picture for this effect is: Now this is where my argument begins: When we draw the motion of light in both frames, we can create a right triangle as shown. But to derive the time dilation formula we end up having to assume that the Pythagorean Theorem holds! Since the Lorentz transformations for special relativity depend on this step (and so all of Special Relativity does), it appears that all of Special Relativity depends on assuming that the Pythagorean Theorem will work. But the Pythagorean theorem only holds if the geometry is euclidean! Therefore, Special Relativity will only work if space has euclidean geometry. From here I would present the idea that to generalize Relativity to any geometry we'd need to use General Relativity (GR). But GR itself says that for small enough regions, spacetime can always be approximately flat, which means Special Relativity does hold on laboratory experiments, particle interactions etc. on Earth due to space being approximately euclidean for those cases. And this is the argument I want to present. I am afraid, however, that this argument might not be correct for some reason (perhaps there is some mistake on the relationship between flat spacetime and euclidean space, but I'm not sure). I have an acceptable Special Relativity background so complex explanations and other (correct) arguments for this are welcome. My problem is that I am not so familiar with the math of GR, and this is could become a big problem for any arguments I come up here. So I ask: is there a mistake in this argument? Or does it hold? Is it sufficient? If not, please correct me and/or link me to an actually good argument which I could present. My students have also never seen a tensor in their lives, so if possible avoid them. But anyway, all help is welcome! • Consider that the equation you presented here might be confusing to your students, since you're mixing t and t' from the different reference frames on the right hand side. The drawing might also make it seem like the light originates from two different points in space time. – Halbeard Nov 30 '18 at 20:37 • A very rigorous and tightly reasoned treatment that does a good job on this sort of thing is Bertel Laurent, Introduction to spacetime: a first course on relativity. Another approach that may be of interest is in my SR book, lightandmatter.com/sr (sections 2.2 and 2.5). 
It's relatively easy to take a typical exposition of SR and pick out one or more places where flatness has been implicitly assumed but never explicitly invoked. It's more difficult to build up all the scaffolding without noticing that you've made such implicit assumptions. – Ben Crowell Nov 30 '18 at 20:50 No, flat space doesn't guarantee flat spacetime. For example, consider a spacetime with a weak gravitational field, corresponding to gravitational potential $$\phi$$. It can be described by the metric $$g_{00} = - (1 + 2 \phi), \quad g_{ij} = \delta_{ij}$$ which has perfectly flat space but non-flat spacetime.
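One way to make this explicit for students (an editorial aside, stated only to first order in $\phi$ and worth checking against a GR text): write out the line element of the answer's metric together with the corresponding curvature,

$$ds^{2} = -(1+2\phi)\,dt^{2} + dx^{2} + dy^{2} + dz^{2}, \qquad R_{i0j0} \approx \partial_{i}\partial_{j}\phi .$$

The $t = \text{const}$ slices are exactly Euclidean, so the Pythagorean theorem holds on them, yet the spacetime is curved wherever the Hessian of $\phi$ is nonzero. That is precisely the distinction between "flat space" and "flat spacetime" the answer is drawing.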
# non-singular bilinear form Show that if $f$ is a non-singular bilinear form, and $U$ is a subspace of $V$ , then $(U^\perp)^\perp = U$ and $\dim(U) + \dim(U^\perp) = \dim(V)$. It is clear to me that $U$ is contained in $(U^\perp)^\perp$. But I am struggling to use non-singularity to show the converse. - Are you assuming finite dimensionality? – rschwieb Jan 16 '13 at 20:11 Oh yes - sorry. Finite dimensionally is assumed. – user58514 Jan 16 '13 at 20:15 For $u \in U$, if $f(u,v) = 0$ for all $v \in V$ then $u = 0$ by non-singularity of $f$. And for $v \in V$, if $f(u,v) = 0$ for all $u \in U$ then $v \in U^\perp$ by definition. Therefore if we restrict the bilinear form $f \colon V \times V \rightarrow F$ to a pairing $U \times V \rightarrow F$, it further descends to a pairing $U \times (V/U^\perp) \rightarrow F$ that is non-singular (i.e., if $\langle u,\overline{v}\rangle = 0$ for all $u$ then $\overline{v} = 0$ while if $\langle u,\overline{v}\rangle = 0$ for all $\overline{v}$ then $u = 0$). Thus the linear maps $U \rightarrow (V/U^\perp)^*$ and $V/U^\perp \rightarrow U^*$ that we get to the dual spaces by fixing the first coordinate or the second coordinate of the pairing are both embeddings. So by counting dimensions we have $\dim U \leq \dim(V/U^\perp)$ and $\dim(V/U^\perp) \leq \dim U$ (we're in finite dimensions, so dual spaces don't change dimensions). Putting these together gives us $\dim U = \dim V - \dim(U^\perp)$, so $\dim U + \dim(U^\perp) = \dim V$. Replacing $U$ with $U^\perp$ in that last equation then shows us $\dim((U^\perp)^\perp) = \dim V - \dim(U^\perp) = \dim(U)$, so a containment of $U$ inside $(U^\perp)^\perp$ has to be an equality. By the way, you left out a hypothesis to assure us that $U^\perp$ is well-defined. In general there is a left and right orthogonal subspace: $U^{\perp_L} = \{v \in V : f(v,u) = \text{ for all } u \in U\}$ and $U^{\perp_R} = \{v \in V : f(u,v) = 0 \text{ for all } u \in U\}$, and these need not be the same. If they are not then the notation $U^\perp$ is ambiguous (all the more for $(U^\perp)^\perp$). You need to know that the perpendicularity relation $f(v,w) = 0$ is symmetric in $v$ and $w$ to be sure that $U^\perp$ is well-defined. - Fix a basis such that $B$ is the matrix for the bilinear form is given by: $xBy^T=\langle x,y\rangle$. Since the form is nondegenerate, $B$ has full rank. Now form a matrix $M$ whose rows are a basis for $U$, and consider $MB$. This is a matrix with the same rank as $M$, which is just $\dim(U)$. (If $B$ did not have full rank, this could potentially be lower.) View $MB$ as a transformation from $R^n\rightarrow R^{\dim(U)}$. Notice that the kernel of this transformation is precisely $U^\perp$. By the rank-nullity theorem, the dimension of $\dim(U^\perp)=n-\dim(U)$. By the same result, $\dim(U^{\perp\perp})=n-\dim(U^\perp)=n-(n-\dim(U))=\dim(U)$. Since you already established $U\subseteq U^{\perp\perp}$, and the dimensions are equal, equality of $U$ and $U^{\perp\perp}$ must hold. -
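A concrete low-dimensional example may help check the dimension count (an editorial illustration, not part of the original question). Take $V = \mathbb{R}^3$ with the non-singular symmetric form $f(x,y) = x^{T}y$ and $U = \operatorname{span}\{(1,0,0)\}$. Then

$$U^{\perp} = \{(0,a,b) : a,b \in \mathbb{R}\}, \qquad \dim U + \dim U^{\perp} = 1 + 2 = 3 = \dim V, \qquad (U^{\perp})^{\perp} = \operatorname{span}\{(1,0,0)\} = U,$$

in agreement with both answers above.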
Is the following problem NP-hard? (or have you seen it before?) I genuinely don't know if the following problem is NP-hard. I have never seen it mentioned online, but it's hard to even search for exact problems like this. I have been trying to find an efficient algorithm for a while and intuitively it feels very NP-complete to me, but I haven't been able to prove it. A solution would be directly applicable to my work, if it could be found. Problem: Assume you have $$N$$ different tasks which each have a unique integer length $$l_n$$. These tasks are arranged in a dependency tree so that each task can only be started after zero or more other tasks are completed. You also have $$M$$ different resources. Each task requires a given resource and each resource can only be used by one task at a time. What ordering of tasks allows for the fastest completion of the overall program (based on maximizing the amount of tasks that can be parallelized). Example: In less abstract terms, imagine that each task is a step in a recipe. The resources are elements of a kitchen like an oven or mixer. If cooking one part of a meal requires a short use of the oven followed by a lot of manual work, and cooking the other part of the meal requires a long oven time, you should schedule the shorter cook first so that you can do the rest of the work for that component while the other half is in the oven. Another example: Consider the dependency tree below. $$a \rightarrow b$$ indicates "$$a$$ depends on $$b$$". The length of a block indicates the time it takes to complete (1 or 3 in this case). There are two resources/channels: red and blue. An example solution might look like this: This shows that a greedy approach (like scheduling (1) and (2) simultaneously) is not sufficient -- you have to look ahead to minimize waiting for resources. • Does each task require a specific resource, or can you have tasks for which one of several resources can be used? If the latter, this is NP-hard even for 2 resources, by reduction from Partition. – j_random_hacker Apr 27 at 4:53 • See also : Operations Research or.stackexchange.com – jmullee Apr 28 at 19:40 4 Answers This problem (more formally its decision version) is NP-complete. NP-hardness can be shown via a reduction from the Job-Shop Scheduling Problem (JSP) with makespan objective, which is well-known to be NP hard. In the JSP, we have $$n$$ jobs $$J_1, J_2, ..., J_n$$. Within each job there is a set of operations $$O_1, O_2, ..., O_n$$ which need to be processed in a specific order. Each operation $$o$$ has a specific machine $$m_o \in M$$ that it needs to be processed on and only one operation in a job can be processed at a given time. (adapted from Wikipedia) Now it is easy to see that JSP is just a special case of your proposed problem (one task per operation of JSP, dependencies only between successive operations in a job), which makes it at least NP-hard too. Membership in NP can be shown by guessing an optimal schedule and verifying in polynomial time that it fulfills all constraints. What you are describing is a planning and scheduling problem. Kautz and Selman pioneered the use of Boolean satisfiability and SAT solvers to attack such problems in the early 1990's. SATPLAN, STRIPS, and PDDL are good search terms for further research. 
There seem to be several planner implementations that take world descriptions written in STRIPS and generate a plan that minimizes plan length, ordering tasks to take advantage of time savings produced by tasks that can be executed in parallel. Given how effective SAT solvers are at attacking planning instances, I'd guess that the average-case hardness of planning problems is low, but I wouldn't be surprised if the general problem turned out to be NP-hard. • Thanks for the answer. I have some terms to google now – QuinnFreedman Apr 27 at 3:45 • Having worked on such software (using a different method), in real life nearly always there are a few tasks on the critical path and once critical path sorted the rest of the problem is easy to find a good enough answer. – Ian Ringrose Apr 27 at 17:45 Thanks to "user2357112 supports Monica" for pointing out issues! Now fixed. Let's formulate the decision problem form of this problem, which I'll call Tree Scheduling (TS): Given a number $$k$$, and a rooted tree with • tasks $$t_1, \dots, t_n$$ for vertices, each having some integer duration $$l_i$$ and requiring some resource $$r_i$$ from a set $$S$$ of resources, and • arcs representing dependencies between tasks, is there a schedule that satisfies all dependencies, avoids scheduling two jobs that use the same resource at overlapping times, and completes within $$k$$ time units? This problem is NP-complete even with just two resources, by reduction from Partition (PART). In PART, we are given a multiset of $$m$$ numbers $$a_1, \dots, a_m$$ having $$\sum_i a_i = T$$, and the task is to determine whether we can partition them into two multisets, each having sum $$T/2$$. Given an instance of PART, we construct an instance of TS with: 1. A root task $$t_r$$ using resource 1, of duration 1; 2. For each $$1 \le i \le m$$, a task $$t_i$$ using resource 1, having duration $$l_i = a_i$$ and on which $$t_r$$ depends; 3. A task $$t_{after}$$ using resource 2, of duration $$T/2$$ and on which $$t_r$$ depends; 4. A task $$t_{sep}$$ using resource 1, of duration 1 and on which $$t_{after}$$ depends; 5. A task $$t_{before}$$ using resource 2, of duration $$T/2$$ and on which $$t_{sep}$$ depends; 6. $$k=T+2$$. The idea is that an optimal solution to an instance consisting of just $$t_r$$, $$t_{before}$$, $$t_{sep}$$ and $$t_{after}$$ takes time $$T+2$$, since these tasks are on a critical path of this length. This solution has 2 "gaps" of length $$T/2$$ in the usage of resource 1. After adding the tasks $$t_1, \dots, t_m$$, we can still complete all tasks in $$T+2$$ time iff we are able to fill up each gap with exactly $$T/2$$ time units' worth of tasks. • This reduction doesn't seem to make sense. No matter what the multiset is, you can finish all the tasks in $T+2$ time by running $t_{sep}$ first, then running $t_1$ through $t_m$ in any order while $t_{gap}$ runs, then running $t_r$. Are you assuming that you can't start a job while another job is in progress on a different resource? – user2357112 supports Monica Apr 27 at 9:37 • @user2357112supportsMonica: Thanks for noticing that, you're absolutely right. I have now fixed this by adding a second gap of length $T/2$, which creates a critical path of length $T+2$ that will be disrupted if $t_1, \dots, t_m$ cannot be partitioned perfectly in half -- please take another look. – j_random_hacker Apr 27 at 10:47 • You've still got a reference to $t_{gap}$ that I think was supposed to be changed to $t_{after}$, but other than that, the fix seems to work. 
– user2357112 supports Monica Apr 27 at 16:42 • @user2357112supportsMonica: Thanks! Now fixed. – j_random_hacker Apr 27 at 23:45

It is NP-hard, as others have said, but... sometimes there is an easier way to solve it. You can do a Lagrangian relaxation to essentially remove the NP-hardness, solve that relaxed problem, and then use the result as an initial guess for the original problem. Pekny and Miller essentially did something like this for the Asymmetric Traveling Salesman Problem. They reduced it to an assignment problem (n^3), and then used that to stitch together a solution to the original problem. Where they could stitch together the cycles without adding cost, they had a guaranteed optimum in polynomial time - FOR FORTUITOUS DATA SETS. But having a fortuitous data set might end up being most of the time.
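Since the decision version is NP-hard, practical approaches are heuristics or exact solvers (SAT/CP/MILP, as mentioned above). Below is a small, hedged Python sketch of a greedy list-scheduling heuristic for the task/resource model in the question; the task names, durations and resources are made up for illustration. As the question itself points out, this kind of greedy rule is not optimal: on the toy instance below it yields makespan 6, while scheduling the short oven job first achieves 4.

from collections import defaultdict

def schedule(tasks):
    # tasks: {name: (duration, resource, [dependencies])}
    indeg = {t: len(deps) for t, (_, _, deps) in tasks.items()}
    children = defaultdict(list)
    for t, (_, _, deps) in tasks.items():
        for d in deps:
            children[d].append(t)

    resource_free = defaultdict(float)   # time at which each resource becomes free
    finish = {}                          # finish time of every scheduled task
    start = {}
    ready = [t for t, d in indeg.items() if d == 0]

    while ready:
        # greedy rule: among ready tasks, take the longest one first
        ready.sort(key=lambda t: -tasks[t][0])
        t = ready.pop(0)
        dur, res, deps = tasks[t]
        earliest = max((finish[d] for d in deps), default=0.0)
        start[t] = max(earliest, resource_free[res])
        finish[t] = start[t] + dur
        resource_free[res] = finish[t]
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)

    return start, (max(finish.values()) if finish else 0.0)

# hypothetical toy instance echoing the kitchen example in the question
tasks = {
    "short_bake":  (1, "oven", []),               # short oven job
    "long_bake":   (3, "oven", []),               # long oven job
    "manual_prep": (2, "cook", ["short_bake"]),   # manual work that needs the short bake done first
}
print(schedule(tasks))   # longest-first greedy gives makespan 6; the optimum here is 4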
## Abstract and Applied Analysis

### Modified Hybrid Block Iterative Algorithm for Convex Feasibility Problems and Generalized Equilibrium Problems for Uniformly Quasi-$\phi$-Asymptotically Nonexpansive Mappings

#### Abstract

We introduce a modified block hybrid projection algorithm for solving the convex feasibility problems for an infinite family of closed and uniformly quasi-$\phi$-asymptotically nonexpansive mappings and the set of solutions of the generalized equilibrium problems. We obtain a strong convergence theorem for the sequences generated by this process in a uniformly smooth and strictly convex Banach space with Kadec-Klee property. The results presented in this paper improve and extend some recent results.

#### Article information

Source
Abstr. Appl. Anal., Volume 2010 (2010), Article ID 357120, 22 pages.

Dates
First available in Project Euclid: 1 November 2010

https://projecteuclid.org/euclid.aaa/1288620743

Digital Object Identifier
doi:10.1155/2010/357120

Mathematical Reviews number (MathSciNet)
MR2669086

Zentralblatt MATH identifier
1206.47084

#### Citation

Saewan, Siwaporn; Kumam, Poom. Modified Hybrid Block Iterative Algorithm for Convex Feasibility Problems and Generalized Equilibrium Problems for Uniformly Quasi-$\phi$-Asymptotically Nonexpansive Mappings. Abstr. Appl. Anal. 2010 (2010), Article ID 357120, 22 pages. doi:10.1155/2010/357120. https://projecteuclid.org/euclid.aaa/1288620743
I have to prepare a beamer presentation, but depending on the audience, during the presentation I might decide to skip some material. Is there a way to do this without having to skip quickly through the slides? I seem to remember that there was a command, but I forgot what it was and anyway I never succeeded in having it work. Thanks for your help! • Never worked with beamer presentation, but have you tried the hyperref package and \label and \ref? – Shade Apr 3 '18 at 16:39 • Ok, thanks. It was not what I was looking for but perhaps it works. Thank you to you and to Patrick. – Tanda Apr 3 '18 at 20:52 • if you give us an MWE, we can have a more sorrow look into it. With the question you've asked, hyperref is a completely satisfying solution. – Shade Apr 3 '18 at 21:11 Beamer offers a variety of navigation buttons for such purposes. Together with the \hyperlink command they can be used to conveniently jump to other slides or frames. \documentclass{beamer} \begin{document} \begin{frame} 1 \end{frame} \begin{frame} 2 \end{frame} \begin{frame}[label=foo] 3 \end{frame} \end{document} For more examples, please see section 11.1 Adding Hyperlinks and Buttons from the beamer user guide. You could use the hyperref package and create references to the slides you want to view.
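Building on the answer above, here is a hedged sketch of the usual pattern from section 11.1 of the beamer user guide: label the optional frames and jump over them with a goto button. The frame contents and label names (details, summary) are made up for illustration; the user guide has the authoritative syntax.

\documentclass{beamer}
\begin{document}

\begin{frame}
  Main talk \ldots
  % audience-dependent shortcut:
  \hyperlink{summary}{\beamergotobutton{Skip the technical details}}
\end{frame}

\begin{frame}[label=details]
  Optional technical details (can be skipped live).
\end{frame}

\begin{frame}[label=summary]
  Summary.
\end{frame}

\end{document}

Clicking the button during the talk jumps straight to the labelled frame, so the skipped slides are never shown but remain in the PDF.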
# Electric field in a non-uniformly charged sheet [closed] So if we have a large sheet that is not uniformly charged and is NOT a conductor, how can I find an expression for the electric field everywhere? Things we know about the sheet: • the width is 2b • it is lying on the yz plane (centered at the origin) • the charge density varies as ρ = ρ(0)(x/b)^2 • ρ(0) is constant I'm not sure if I can actually use Gauss' law because there's not the same symmetry as if it was uniformly charged. Is there some other law of electricity or formula that exists? How do I go about tackling this problem? ## closed as off-topic by Kyle Kanos, ACuriousMind♦, HDE 226868, Neuneck, MartinAug 3 '15 at 12:16 This question appears to be off-topic. The users who voted to close gave this specific reason: • "Homework-like questions should ask about a specific physics concept and show some effort to work through the problem. We want our questions to be useful to the broader community, and to future users. See our meta site for more guidance on how to edit your question to make it better" – Kyle Kanos, ACuriousMind, HDE 226868, Neuneck, Martin If this question can be reworded to fit the rules in the help center, please edit the question. • Welcome to Physics! Please note that Physics.StackExchange is not a homework help site. Please read this Meta post on asking homework-like questions and this Meta post for "check my work" problems. – Kyle Kanos Aug 1 '15 at 20:44 • By 'width' I assume you mean the thickness of the sheet? In that case the situation is still very symmetric and uniform. For points outside the sheet you can just treat the sheet as infinitely thin and use Gauss's law. And for points that are inside you treat the sheet as two separate infinitely thin sheets placed on both sides of that point. – SpiderPig Aug 2 '15 at 6:40 If you have some (static) charge distribution $\rho(\mathbf{x})$ the the electric field is given by: $$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}$$ or it might be easier to calculate the potential using Poisson's equation: $$\nabla^2 V = -\frac{\rho}{\varepsilon_0}$$ and then calculate the field using: $$\mathbf{E} = -\nabla V$$ In your case you are given $\rho$ so the equations are easy to set up. Solving them might be harder ... • I'm a little surprised to be downvoted. This seemed to me a reasonable response to a question that borders on homework. Have I made a factual error, or is the downvote a philosophical objection? – John Rennie Aug 2 '15 at 5:54 Here's the easy way to do it. Do Gauss's law for a single uniformly charged $yz$-plane with surface charge $\sigma$. You'll find that the electric field is distance-independent except for its sign; $$\vec E = \frac{\sigma}{2\epsilon_0} ~ \operatorname{sgn}(x) ~ \hat x.$$Here $\hat x$ is the unit vector in the $x$-direction and $\operatorname{sgn}(x) = x / |x|$ is the "sign function" which is $+1$ for positive $x$, $-1$ for negative $x$, and $0$ when $x = 0$. Now use the principle of superposition: replace $\sigma$ with an infinitesimal charge $\rho(x')~dx'$, and shift the center of the $\operatorname{sgn}(x)$ term to $x - x'$ to reflect its new center. You get from their superposition:$$\vec E(x) = \frac{\rho_0}{2\epsilon} ~ \hat x ~ \int_{-b}^{b}dx'~\left(\frac{x'}{b}\right)^2~\operatorname{sgn}(x - x')$$ You should be able to handle this integral the same way you handle any integral with an absolute value: break it up into two parts, $x' > x$ and $x' < x$, and "fill in" the appropriate $\text{sgn}$ for each. 
You don't have to do this if $x > b$ or $x < -b$ of course, so you should get one answer for $|x| > b$ corresponding to all of the charge being "shrunk down" into the equivalent surface charge $\sigma$, and another answer where you're inside the material and the charges on either side of you are opposing each other.
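For completeness, carrying that last integral through (an editorial addition; worth re-deriving independently) gives the piecewise field

$$\vec E(x) = \frac{\rho_0}{2\epsilon_0}\,\hat x \int_{-b}^{b} \left(\frac{x'}{b}\right)^{2} \operatorname{sgn}(x-x')\,dx' = \begin{cases} -\dfrac{\rho_0 b}{3\epsilon_0}\,\hat x, & x < -b,\\[1ex] \dfrac{\rho_0 x^{3}}{3\epsilon_0 b^{2}}\,\hat x, & -b \le x \le b,\\[1ex] \dfrac{\rho_0 b}{3\epsilon_0}\,\hat x, & x > b, \end{cases}$$

and a quick consistency check: inside the slab $\mathrm{d}E_x/\mathrm{d}x = \rho_0 x^{2}/(\epsilon_0 b^{2}) = \rho(x)/\epsilon_0$, exactly as Gauss's law requires.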
# Beamer: Changing the font size of the references using the shrink argument causes an error

I'm trying to change the font size of the references using the shrink frame argument. However, if I run this code:

\documentclass{beamer}
\usepackage[american]{babel}
\usepackage[utf8]{inputenc}
\usepackage{csquotes}
\usepackage[style=apa, backend=biber,doi,url]{biblatex}
\DeclareLanguageMapping{american}{american-apa}
\begin{filecontents}{literature.bib}
@article{foobar,
  Author = {An Author},
  Journal = {A Journal},
  Pages = {1--2},
  Title = {Some title},
  Volume = {1},
  Year = {1900}
}
\end{filecontents}
\begin{document}
\frame{
  \textcite{foobar}
}
\frame[shrink=50]{
  \frametitle{References}
  \printbibliography
}
\end{document}

I get the following error:

! Arithmetic overflow.
\beamer@shrinkframebox ...\@tempdimc by\@tempcnta \relax \ifdim \@tempdimc >...
l.32 }
?
! Emergency stop.
\beamer@shrinkframebox ...\@tempdimc by\@tempcnta \relax \ifdim \@tempdimc >...
l.32 }

If I remove the shrink argument it works. I'm using TeX Live 2011 on Debian Squeeze and I updated all my packages ;).

- Test with \frame[shrink=50,fragile], this often helps if you are doing complex stuff on your frame ;-) – schlamar May 24 '12 at 11:51
Thanks for your help! With \frame[shrink=50,fragile] I get "Runaway argument? ! File ended while scanning use of \next." – deboerk May 24 '12 at 12:53
What happens if you change the font size directly, for example with \tiny? – schlamar May 24 '12 at 13:04
The font size of the references stays the same. – deboerk May 24 '12 at 13:22
Maybe \renewcommand*{\bibfont}{\scriptsize} will help. – André May 24 '12 at 17:18

I think \renewcommand*{\bibfont}{\scriptsize} will help! ;)

-

I tried André's answer with no success. I found a work-around, if you need a quick solution:

1. Compile your beamer document including the bibliography (correct settings of citing/formatting style etc.).
2. Open the *.bbl file and copy all its contents (\begin{thebibliography} ... \end{thebibliography}).
3. Paste it over the \bibliography{yourbibfile} command.
4. Put a \small tag after \begin{thebibliography} for a smaller font size for your references!

As I said, it's a quick way if you don't want to fuss around. The downside is its complete absence of reactiveness to changes (\...cite commands) in your document.

- Welcome to TeX.SX! I've taken the liberty of editing your answer, to keep it in line with the site's style. – egreg Jul 9 '13 at 17:03
Welcome to TeX.SX! You can have a look on our starter guide to familiarize yourself further with our format. – Claudio Fiandrino Jul 9 '13 at 17:24
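To make the accepted suggestion concrete, here is a minimal sketch (assuming biblatex, whose \bibfont hook controls the bibliography font): put the redefinition in the preamble and drop shrink entirely.

% in the preamble, after loading biblatex:
\renewcommand*{\bibfont}{\scriptsize}

% then an ordinary frame is enough; allowframebreaks splits long reference lists:
\begin{frame}[allowframebreaks]{References}
  \printbibliography
\end{frame}

Avoiding shrink sidesteps the arithmetic overflow in \beamer@shrinkframebox altogether, since the frame content is never rescaled.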
# Greek Anthology Book XIV: 7. - Problem

## Problem

I am a brazen lion; my spouts are my two eyes, my mouth, and the flat of my right foot. My right eye fills a jar in two days, my left eye in three, and my foot in four. My mouth is capable of filling it in six hours; tell me how long all four together will take to fill it.

## Solution

Let $t$ be the number of hours it takes to fill a jar.

Let $r, l, m, f$ be the flow rates, in jars per hour, of (respectively) the right eye, the left eye, the mouth and the foot.

In $t$ hours, the contributions of the spouts are:

Right eye: $r t$
Left eye: $l t$
Right foot: $f t$
Mouth: $m t$

So for the total contribution to be $1$ jar, we have:

$$(r + l + f + m)\, t = 1 \implies t = \dfrac 1 {r + l + f + m}$$

There are $24$ hours in a day. Hence we have:

$$r = \frac 1 {2 \times 24} \quad \text{(that is, $1$ jar in $2$ days)}$$
$$l = \frac 1 {3 \times 24} \quad \text{(that is, $1$ jar in $3$ days)}$$
$$f = \frac 1 {4 \times 24} \quad \text{(that is, $1$ jar in $4$ days)}$$
$$m = \frac 1 6 \quad \text{($1$ jar in $6$ hours)}$$

and so:

$$t = \dfrac 1 {r + l + f + m} = \dfrac 1 {\dfrac 1 {2 \times 24} + \dfrac 1 {3 \times 24} + \dfrac 1 {4 \times 24} + \dfrac 1 6} = \dfrac {12 \times 24} {6 + 4 + 3 + 48} = \dfrac {288} {61} = 4 \tfrac {44} {61}$$

where the third step follows on multiplying top and bottom by $12 \times 24 = 288$.

So the jar will be filled in $4 \frac {44} {61}$ hours, or approximately $4$ hours, $43$ minutes and $17$ seconds.

$\blacksquare$

## Variant

This is the prototype of a whole class of problems of this type which schoolchildren have been set over the centuries, for example:

$A$ and $B$ together can do a piece of work in $6$ days. $B$ and $C$ together can do it in $20$ days. $C$ and $A$ together can do it in $7 \frac 1 2$ days. How many days will each require to do the job separately? (A worked solution is sketched after the Historical Note below.)

## Historical Note

In W.R. Paton's $1918$ translation of The Greek Anthology Book XIV, he gives:

The scholia propose several, two of which, by not counting fractions, reach the result of four hours; but the strict sum is $3 \frac {33} {37}$ hours.

The above is correct if it is assumed there are $12$ hours in a day. It would also need to be assumed that, in order to fulfil the conditions of the statement of the problem, the spouts are turned off at night.

This problem was apparently first presented by Heron of Alexandria in his Metrika. It was still being taught in classrooms up until the middle of the $20$th Century, and was considered the epitome of "useless" mathematics. This, as David Wells points out in his Curious and Interesting Puzzles of $1992$, is a shame, because the idea behind it is far from useless.
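As promised above, here is a sketch of the variant problem (an editorial addition using the same rate idea; the arithmetic is easy to re-check). Let $a$, $b$, $c$ be the work rates of $A$, $B$, $C$ in jobs per day. Then:

$$a + b = \tfrac 1 6, \qquad b + c = \tfrac 1 {20}, \qquad c + a = \tfrac 2 {15}$$

Adding all three equations and halving:

$$a + b + c = \tfrac 1 2 \left( \tfrac 1 6 + \tfrac 1 {20} + \tfrac 2 {15} \right) = \tfrac 1 2 \cdot \tfrac {10 + 3 + 8} {60} = \tfrac 7 {40}$$

so:

$$a = \tfrac 7 {40} - \tfrac 1 {20} = \tfrac 1 8, \qquad b = \tfrac 7 {40} - \tfrac 2 {15} = \tfrac 1 {24}, \qquad c = \tfrac 7 {40} - \tfrac 1 6 = \tfrac 1 {120}$$

Hence $A$ alone needs $8$ days, $B$ alone $24$ days, and $C$ alone $120$ days.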
# First order logic

First order logic is a type of logic which is used in certain branches of mathematics and philosophy. First order logic enables the definition of a syntax which is independent of the mathematical or logical terms. In first order logic, reasoning can be done from two points of view: either using syntax alone, or including semantic terms.

First order logic is different from propositional logic: in first order logic there are quantifiers, called for all (written as ${\displaystyle \forall }$) and there is at least one (written as ${\displaystyle \exists }$).[1] Negation, conjunction, inclusive disjunction, exclusive disjunction and implication are all defined the same way as in propositional logic. Because of that, first order logic can be thought of as an extension of propositional logic.[2]

The completeness of first order logic, the result which asserts the equivalence between valid formulas and formal theorems, was established by Gödel.[3] Together with Zermelo–Fraenkel set theory, first order logic is the foundation of many branches of modern mathematics.

## References

1. "Comprehensive List of Logic Symbols". Math Vault. 2020-04-06. Retrieved 2020-09-02.
2. "3.1: First Order Logic Syntax and Semantics". Engineering LibreTexts. 2018-12-18. Retrieved 2020-09-02.
3. Weisstein, Eric W. "First-Order Logic". mathworld.wolfram.com. Retrieved 2020-09-02.
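To illustrate the quantifiers discussed above (an editorial example, stated over the natural numbers): the sentence "every natural number has a larger natural number" can be written as

$$\forall x\, \exists y\, (y > x),$$

which is true over the natural numbers, while swapping the quantifiers to $\exists y\, \forall x\, (y > x)$ ("some natural number is larger than every natural number") is false. The order of quantifiers matters, which is one feature that distinguishes first order logic from propositional logic.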